Bandwidth management

Bandwidth management is a relatively new topic for many of us. Routing a VTR to a monitor does not involve a calculation to determine if the router has sufficient bandwidth to make the connection. This is because, in an existing television facility infrastructure, the router guarantees a full-bandwidth point-to-point connection between any two points in the system. However, computers and their associated networking architectures are finding their way into the broadcast production chain. (See Figure 1.) Many computer networks do not guarantee a full-bandwidth connection from one place to another. This can be a problem if these computers are in mission-critical applications. Managing bandwidth in critical computer networks is something that will become more familiar as computers become more entrenched in broadcast facilities.

This month, we will explore issues surrounding bandwidth management inside a single broadcast facility.

What's the problem?

Most broadcast content moves around facilities using analog or perhaps Serial Digital Interface (SDI) routers. However, some video and audio content in your facility probably travels on computer networks. As network speeds increase, it becomes more and more feasible to send content across these networks. One might wonder why someone would choose to move video across a network rather than using a conventional broadcast router. One answer is that when using a Non-Linear Editor (NLE) connected to central storage, the network connection becomes the obvious choice for moving content. Another common network application is moving content between servers in a large server-based play-to-air system. In these environments, it is easier and quicker to move content using the computer network. As these systems become more common, the consequences of a bandwidth-starved network become more serious.

One way to head off this problem is to implement bandwidth management. Bandwidth management systems allocate network capacity deliberately, avoiding network slowdowns and blocking. Bandwidth is granted to applications as needed, if it is available; if it is not, the application must wait. Priority schemes allow a high-priority transfer to get the bandwidth it needs ahead of less urgent traffic.
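
The core idea can be sketched in a few lines of Python. This is a toy model with hypothetical names (BandwidthManager, request, release); a real system would enforce allocations inside the switches and network drivers, not in application code:

```python
import heapq

class BandwidthManager:
    """Toy bandwidth allocator: grants requests while capacity remains
    and queues the rest by priority. Illustrative only."""

    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps  # total usable network bandwidth
        self.allocated = 0             # bandwidth currently committed
        self.waiting = []              # priority queue of deferred requests

    def request(self, app, mbps, priority=0):
        # Grant immediately if the network can absorb the flow.
        if self.allocated + mbps <= self.capacity:
            self.allocated += mbps
            return True
        # Otherwise the application must wait (lower number = higher priority).
        heapq.heappush(self.waiting, (priority, app, mbps))
        return False

    def release(self, mbps):
        # A transfer finished: free its bandwidth, then admit waiting
        # requests in priority order while they still fit.
        self.allocated -= mbps
        while self.waiting and self.allocated + self.waiting[0][2] <= self.capacity:
            _, app, need = heapq.heappop(self.waiting)
            self.allocated += need

mgr = BandwidthManager(capacity_mbps=1000)
mgr.request("NLE-1", 270, priority=1)    # granted
mgr.request("playout", 540, priority=0)  # granted
print(mgr.request("archive", 400))       # False: must wait for capacity
```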

This sounds like a great solution. There is only one problem: while bandwidth management systems and protocols have been developed, they have not been widely implemented. Proprietary systems are available, but they do not work well (or at all) in mixed-vendor environments.

Bandwidth management today

The most popular high-capacity networking architectures in use today for moving rich media content are ATM, Gigabit Ethernet, Fibre Channel and 1394 Firewire.

Ethernet

The most likely solution to bandwidth management in Ethernet systems is the Resource Reservation Protocol (RSVP). RSVP is a network-control protocol that enables applications to obtain a specified quality of service (QoS) for their data flows. RSVP allows an application to specify three different traffic types: delay sensitive, best effort and rate sensitive. A device requesting a streaming transfer across Ethernet would specify both rate-sensitive and delay-sensitive, since streaming video is disturbed by changes in rate and changes in delay along the transmission path. Once a rate-sensitive session has been established, RSVP will not grant a subsequent request that would cause the network to provide less than the required rate to existing rate-guaranteed sessions. While RSVP would appear to be a solution, implementation has been slow and availability of equipment implementing RSVP is limited.
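
That admission rule, refuse any new reservation that would eat into bandwidth already promised to others, is simple to state. The sketch below models it for a single link (hypothetical names; real RSVP signals reservations hop by hop with PATH and RESV messages):

```python
class RsvpStyleLink:
    """Sketch of RSVP-style admission control on a single link: a new
    rate-sensitive reservation is granted only if it does not reduce
    the bandwidth promised to existing reservations."""

    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = {}  # flow id -> guaranteed rate in Mb/s

    def reserve(self, flow_id, rate_mbps):
        committed = sum(self.reserved.values())
        if committed + rate_mbps > self.capacity:
            return False  # would starve existing rate-guaranteed sessions
        self.reserved[flow_id] = rate_mbps
        return True

link = RsvpStyleLink(capacity_mbps=1000)
link.reserve("stream-A", 600)         # granted
print(link.reserve("stream-B", 600))  # False: only 400 Mb/s uncommitted
```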

Fibre Channel

Fibre Channel networks can be set up as either point-to-point connections or as a switched fabric. Bandwidth is not an issue in point-to-point applications, as there are only two devices connected to the network. However, bandwidth management is an issue in switched fabric networks.

Fibre Channel has several classes, or operating modes. Class 1 is a connection-oriented service with acknowledgement, much like a router, offering a direct connection via a switch from one device to another. Unfortunately, Class 1 was not widely implemented. Class 2 is a connectionless service with acknowledgement, used primarily for tape devices; it is not generally used to move content between servers. Class 3 is connectionless without acknowledgement and is the most widely implemented version of Fibre Channel. In Class 3, a central switch provides connections in a star topology, much like Ethernet. Once two devices establish a channel, they have a virtual direct connection from point to point via the switch, so bandwidth across the connection is guaranteed. The problem is that nothing in the Class 3 protocol requires the switch to check whether its total backbone bandwidth will be exceeded before a connection is granted. If a nearly saturated switch approves an additional connection that exceeds the switch's total bandwidth, the network crashes.

Fibre Channel designers foresaw this weakness. Class 4 contains bandwidth management protocols. In Class 4 operation, the switch checks for available bandwidth before making a connection. If bandwidth is not available, the connection is not made. Unfortunately, as far as I know, Class 4 Fibre Channel is not yet available.
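
The difference between the two classes comes down to one admission check at the switch. The toy model below makes the contrast explicit (hypothetical names and numbers; a real fabric switch does this in firmware, not Python):

```python
class FabricSwitch:
    """Toy Fibre Channel fabric switch. With check_backbone=False it
    behaves like Class 3 (connections granted blindly); with
    check_backbone=True it behaves like Class 4 (admission control)."""

    def __init__(self, backbone_mbps, check_backbone):
        self.backbone = backbone_mbps
        self.check = check_backbone
        self.in_use = 0

    def connect(self, rate_mbps):
        if self.check and self.in_use + rate_mbps > self.backbone:
            return "refused"  # Class 4: the connection is not made
        self.in_use += rate_mbps
        if self.in_use > self.backbone:
            return "granted, fabric oversubscribed!"  # Class 3 failure mode
        return "granted"

class3 = FabricSwitch(backbone_mbps=800, check_backbone=False)
class4 = FabricSwitch(backbone_mbps=800, check_backbone=True)
for switch in (class3, class4):
    print(switch.connect(500), "/", switch.connect(500))
```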

Firewire

Firewire is a slightly different story. The 1394 Firewire protocols allocate bandwidth on a slot basis. All devices on the bus (up to the 1394 maximum of 63) are allocated a guaranteed bandwidth slot, so every device has its required bandwidth available at all times. With Firewire technology, separate bandwidth reservation systems are not necessary.
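
A sketch of the per-cycle slot idea follows. The unit counts are illustrative, not the actual 1394 register values; only the 63-device limit comes from the standard:

```python
class IsochronousBus:
    """Toy model of 1394-style allocation: each bus cycle is divided
    into bandwidth units, and a device keeps its slot in every cycle
    until it detaches. Unit counts are illustrative."""

    MAX_DEVICES = 63  # the 1394 per-bus device limit

    def __init__(self, units_per_cycle=4000):
        self.free_units = units_per_cycle  # illustrative, not the real register value
        self.slots = {}                    # device id -> units reserved per cycle

    def attach(self, device_id, units):
        if len(self.slots) >= self.MAX_DEVICES or units > self.free_units:
            return False  # no slot available: the device cannot join
        self.slots[device_id] = units
        self.free_units -= units
        return True       # slot guaranteed in every cycle from now on

bus = IsochronousBus()
print(bus.attach("camera", 1500))  # True: slot guaranteed every cycle
print(bus.attach("deck", 3000))    # False: only 2500 units remain
```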

Firewire is a relatively new technology, and although the bandwidth issue was resolved from the start, it is important to note that Firewire switches are not widely available. Finding switches that work in a multi-vendor environment may be a challenge.

ATM

Another technology that takes bandwidth management into account in its core protocols is ATM. For now, it is sufficient to say that as an ATM connection is established, if the customer requests a Constant Bit-rate (CBR) connection and the rate is successfully negotiated through the ATM switch or switches, then a guaranteed amount of bandwidth will be available.
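
Conceptually, a CBR call setup is an admission check at every switch along the path: the call succeeds only if each hop can commit the requested rate. A simplified sketch (hypothetical structures; real ATM signalling also negotiates traffic descriptors and tears calls down):

```python
def setup_cbr(path, rate_mbps):
    """Try to establish a constant bit-rate connection across a list of
    switches; every hop must commit the rate or the whole call fails."""
    committed = []
    for switch in path:
        if switch["free_mbps"] < rate_mbps:
            # Roll back hops that already committed, then refuse the call.
            for hop in committed:
                hop["free_mbps"] += rate_mbps
            return False
        switch["free_mbps"] -= rate_mbps
        committed.append(switch)
    return True  # guaranteed bandwidth reserved end to end

path = [{"free_mbps": 600}, {"free_mbps": 300}, {"free_mbps": 600}]
print(setup_cbr(path, 270))  # True: every hop can commit 270 Mb/s
print(setup_cbr(path, 270))  # False: the middle switch has only 30 Mb/s left
```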

Potential solutions

Given the issues with the network technologies discussed above, users are left with several choices:

1) roll their own bandwidth management system;

2) buy all of their network-connected devices from a single vendor who has implemented a proprietary solution;

3) overdesign the network such that bandwidth management is not a problem; or

4) implement a network topology now, hoping that vendors will provide bandwidth management support in the future.

Rolling your own bandwidth management system is likely to be impractical. Bandwidth management is quite complex and requires access to the lower levels of the networking stack, which puts it beyond the reach of most users. Even skilled users may not be able to implement a solution, because source code for network driver stacks is seldom available from the vendors. For good reasons, they do not want users modifying this code.

Buying a proprietary solution from a single vendor may be a good choice for some users, depending on requirements. Automation vendors, for example, may be able to provide network traffic management in play-to-air systems, but these systems may not be available in all application areas, such as post production or graphics rendering.

Many of the proprietary systems have knowledge of the maximum number of transfers that a particular network topology will support. In most cases, these systems are responsible for initiating the transfers in the first place, and they simply refuse to initiate a transfer that would break the network. They carry an abstraction of the network topology in their software and model the network loading at any given time. Although these systems work well, they may not provide a solution in all situations.
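
Because the automation system initiates every transfer itself, the admission check can live entirely inside its own model of the network. A hypothetical sketch (the names and the limit are invented for illustration):

```python
class TransferManager:
    """Toy model of a proprietary transfer manager: it knows how many
    simultaneous transfers its network topology supports and refuses
    to start one more."""

    def __init__(self, max_transfers):
        self.max_transfers = max_transfers  # from the vendor's topology model
        self.active = set()

    def start_transfer(self, transfer_id):
        if len(self.active) >= self.max_transfers:
            return False  # would break the network: refuse to initiate
        self.active.add(transfer_id)
        return True

    def finish_transfer(self, transfer_id):
        self.active.discard(transfer_id)

mgr = TransferManager(max_transfers=4)
for clip in ["promo-1", "news-2", "spot-3", "movie-4", "id-5"]:
    print(clip, mgr.start_transfer(clip))  # the fifth transfer is refused
```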

One note about proprietary systems: While proprietary solutions to the bandwidth management problem may not allow for interoperability in a multi-vendor environment, many of these systems have been developed to meet specific user needs. Frankly, standards organizations have been somewhat slow to address this area, and proprietary systems are a response to user demand before appropriate standards are in place.

A third way for the user to deal with bandwidth is to build a lot of overhead into the system and move on. This argument has its merits. First, high-speed hardware is becoming plentiful and cheap. Because Web-based rich media applications are driving the consumer market to faster networking technologies, the power of this market is brought to bear on the whole network speed and capacity problem. The result is that R&D dollars are being spent on faster hardware. The price for a given number of connections to a network seems to hold steady, but the speed of the network for a given price seems to double about every 14 months. The reason you do not see much hardware implementing bandwidth management protocols is that the emphasis is on faster hardware, not bandwidth control. If the trend continues, so the argument goes, bandwidth will never be an issue. If a network looks like it is starting to get saturated, just replace the switch and interface cards with the newest technology. Your speeds will likely double or more, and your capacity problems will go away.
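
Taken at face value, that doubling rate compounds quickly; a quick back-of-the-envelope calculation shows how fast:

```python
# If network speed for a given price doubles every 14 months, the
# multiplier after t months is 2 ** (t / 14).
for months in (14, 28, 60):
    print(f"after {months} months: {2 ** (months / 14):.1f}x the speed")
# Roughly 2x at 14 months, 4x at 28 months, and about 19.5x at five years.
```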

Interfacility bandwidth management

In the January column of Computers and Networks, we will look at bandwidth management in situations where content is being moved between facilities. In these situations, a discussion of bandwidth management takes on a different tone. Excess capacity costs real dollars. Demand for bandwidth in excess of capacity could have disastrous consequences, not just for one client, but for a whole network.
