Firewall Configuration for the Access Grid

Also available at the Access Grid website.

Purpose of the Document

The Access Grid is a software suite that provides video, audio and file collaboration, built on the well-known streaming tools RAT (Robust Audio Tool) and Vic (Video Conferencing tool). This document aims to help network administrators implement the Access Grid in accordance with local security policies and firewalls.

Please send any questions or concerns about this document to, or file a bug against it under Documentation in the Access Grid Bugzilla (1).

This document was written by Lev Lafayette and derives heavily from a version written by Tom Uram, with additional material from Jason Bell.

Multicast Access Grid Client Requirements

The Access Grid Venue Client operates best with multicast, the preferred solution for one-to-many content distribution. The network address range for multicast, as defined by IANA, is 224.0.0.0 to 239.255.255.255 (i.e. 224.0.0.0/4) (2).

Multicast does not present a threat to your network. Consider an analogy with firewalls: a firewall will commonly allow incoming UDP traffic on a port if a client behind the firewall has first sent data out on that port (a "reflexive" firewall rule). Multicast operates in a similar fashion. If a client wishes to receive data from a particular multicast group, it must first subscribe to that group; data then flows to the client until its membership is terminated. An unwanted influx of data is therefore not possible.
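The reflexive behaviour described above can be sketched with iptables. This is only an illustrative sketch, not a complete policy: chain names, default policies and interfaces are assumptions that will vary by site.

```shell
# Sketch only: adapt to your local security policy before use.
# Allow IGMP so clients can join and leave multicast groups.
iptables -A INPUT  -p igmp -j ACCEPT
iptables -A OUTPUT -p igmp -j ACCEPT

# Reflexive behaviour: accept incoming traffic only for flows the
# client initiated; connection tracking plays the role of the
# "sent data out on that port first" rule described above.
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```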

Outgoing multicast data can be read by parties unknown to the sender. If this is an important concern, AG Virtual Venues can be secured using X.509 certificates to control who has access to the Venue. Only individuals authorized to join the Venue have access to the multicast addresses and media encryption keys, so even if an outsider were to discover the multicast addresses, they would still not be able to read the data. If a Venue is configured with encrypted media, AES encryption is used (3).

It should be noted that traffic on UDP port 5353 is required for Multicast DNS (e.g. Zeroconf, Apple Bonjour or Avahi).

Additionally, the multicast beacon (DAST) is a useful tool to help you debug multicast problems (a Linux install guide can be found at ). The beacon provides useful information regarding a particular multicast group's connection activity, so it is recommended to have the beacon software installed and running to assist with identifying any multicast problems.

To ensure the multicast beacon works, traffic needs to be accepted on the following ports:

  • Client-to-client (RTP) multicast traffic on UDP port 10002

  • RTCP traffic on UDP port 10003

  • TCP unicast reports going back to the central server on port 10004
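The three beacon ports above can be opened with rules along the following lines. This is a hedged sketch: it assumes default iptables chains and should be adapted to local policy.

```shell
# Sketch: accept multicast beacon traffic (adapt chains/policies locally).
iptables -A INPUT  -p udp --dport 10002 -j ACCEPT  # client-to-client RTP
iptables -A INPUT  -p udp --dport 10003 -j ACCEPT  # RTCP
iptables -A OUTPUT -p tcp --dport 10004 -j ACCEPT  # unicast reports to the central server
```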

Unicast Access Grid Client Requirements

As an alternative to multicast, an Access Grid can use unicast bridges to exchange traffic. Unicast bridges join a ‘bridge network’. Venue Clients then query this bridge network for available bridges. Unicast bridges forward data using UDP; any firewalls between the bridge and the Venue Client must be configured to allow this connectionless protocol.

The information required, including the bridge hostname and port range, can be determined by looking at the Venue Client Preferences dialogue under Bridging (see figure 1). The list of bridges should be in order of network proximity; starting from the top of the list, select a number of bridges and accept traffic from them in your firewall.

The default unicast port range is 50000 to 52000; this can be modified, so one should always check the port range in the Preferences dialogue of the Venue Client.
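A sketch of the corresponding firewall rules for the default range follows. The range boundaries here are assumptions taken from the default described above; always confirm the actual range in the Venue Client Preferences dialogue, since it is configurable and may differ between bridges.

```shell
# Sketch: allow the default unicast bridge port range (50000-52000).
# Check the Venue Client Preferences dialogue before applying, as the
# range is configurable.
iptables -A INPUT  -p udp --dport 50000:52000 -j ACCEPT
iptables -A OUTPUT -p udp --dport 50000:52000 -j ACCEPT
```

Where site policy allows, these rules can be tightened further by restricting the source to the specific bridge hosts selected from the Preferences dialogue.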

Note that it is recommended to allow connections to your regional list of unicast Access Grid bridges, as can be found at

For routers that perform Network Address Translation (NAT), the port range will need to be forwarded for each bridge to the target computer.
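On a NAT router, that forwarding can be sketched as follows. The internal address 192.168.1.10 and the external interface eth0 are placeholders, not values from this document; substitute the target computer's actual address and your router's external interface.

```shell
# Sketch: forward the unicast bridge port range through a NAT router
# to the Access Grid machine. 192.168.1.10 and eth0 are placeholders.
iptables -t nat -A PREROUTING -i eth0 -p udp --dport 50000:52000 \
         -j DNAT --to-destination 192.168.1.10
iptables -A FORWARD -p udp -d 192.168.1.10 --dport 50000:52000 -j ACCEPT
```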

Other Recommended Requirements

Accept everything from localhost. This might sound obvious, but it is also an AG requirement for software such as RAT.

Accept incoming traffic on port 11000, which is required for the NodeService Manager.

Accept port 21 traffic, as Access Grid uses FTPS; the initial connection is made on port 21 and may thereafter negotiate a secure TLS session (and therefore port 443 for HTTPS, TLS, SSL).
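As a hedged sketch, the FTPS control channel and TLS traffic can be admitted as follows (adapt to local policy):

```shell
# Sketch: allow FTPS control and TLS traffic.
iptables -A INPUT -p tcp --dport 21  -j ACCEPT   # FTPS control channel
iptables -A INPUT -p tcp --dport 443 -j ACCEPT   # HTTPS/TLS/SSL
```

One caveat worth noting: because FTPS encrypts the control channel, firewall FTP connection-tracking helpers cannot observe the negotiated data ports, so the data connection may require an explicitly opened port range.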

Venue Servers

Accept incoming traffic on ports 8000, 8002, 8004 and 8006; this is absolutely required for Venue Servers.

Specifically (Port / Protocol / Purpose):

  • 8000 TCP Virtual Venue Server port (machine hosting the Venue Server)

  • 8002 TCP Event port

  • 8004 TCP Text port

  • 8006 TCP Data port

Port, Host and Traffic Overview

    Port/Host     Protocol  Incoming/Outgoing                                   Purpose
    21            TCP       Both                                                FTPS data transfer
    443           TCP       Both                                                HTTPS, TLS, SSL
    5353          UDP       Both                                                Multicast DNS
    5909          TCP       In (out only if running your own VenueVNC server)   VenueVNC
    8000          TCP       In (out only if running your own Venue Server)      Virtual Venue Server port (machine hosting the Venue Server)
    8002          TCP       In (out only if running your own Venue Server)      Virtual Venue Server event port
    8004          TCP       In (out only if running your own Venue Server)      Virtual Venue Server text port
    8006          TCP       In (out only if running your own Venue Server)      Virtual Venue Server data port
    10002         UDP       Both                                                Multicast beacon (RTP)
    10003         UDP       Both                                                Multicast beacon (RTCP)
    10004         TCP       Both                                                Multicast beacon (unicast reports)
    11000         TCP       In (out only if running your own Node Manager)      NodeService Manager
    50000-52000   UDP       Both                                                Unicast bridges (note that the port range might change for different bridges)

Other Firewall Considerations

    22            TCP       Both      SSH
    80            TCP       In        HTTP
    ICMP          -         Both      Ping

Local Host Firewall Considerations

    localhost     TCP/UDP   Both      RAT and other applications


(1) Access Grid Bugzilla,

(2) Internet Multicast Addresses,

(3) AES Algorithm (Rijndael) Information,

Special Note

This document has been made possible by the support of the Australian Research Collaboration Service (ARCS -


Revision History

    Date           Revised by       Changes
    5 May 2009     Lev Lafayette    Created document
    6 May 2009     Jason Bell       Editorial changes