Facing media traffic challenges
As media companies transition to file-based production environments, many have experienced problems translating media applications to an IP environment. They find that IP networks do not behave as expected. Throughput decreases and becomes unpredictable, transfers are lost, and conventional network monitoring tools cannot explain why these issues occur.
The reason for this mysterious behavior is that media traffic is intrinsically different from IT traffic, especially at small time scales. Traditional tools used to measure and characterize IT traffic no longer apply. A new mode of thinking is required.
Our laboratory sought to describe this unique behavior and identify the specific requirements for an IP network to support media file transfers. Ultimately, we created an IP network infrastructure that was more than capable of handling media traffic flows.
Comparing IT and media traffic
IT traffic generally consists of short messages or small files, such as those generated by SAP or e-mail, which use the IP network for only relatively brief periods. Transfer speed is not critical, and packet loss and the resulting retransmissions are acceptable. Media traffic, however, generally consists of large files (several gigabytes), which typically occupy the link for much longer periods and almost constantly try to use 100 percent of the available bandwidth. The longer this period, the more “bursty” the traffic becomes.
When two traffic streams share the same link, the unique nature of media traffic exacerbates the problem of bandwidth competition between concurrent transfers, leading to packet loss. Any required data retransmissions decrease the overall efficiency of the transfers drastically and, if sustained, can lead to complete transfer interruption. This occurs even when the network has been designed (at least on the macroscopic scale) with sufficient bandwidth to accommodate both streams.
To understand why this happens, consider an analogy of traffic traveling along a two-lane highway. (See Figure 1.) Imagine two clients running standard IT services, each sending traffic at 400Mb/s to a common file server. Because each car (packet) in the stream is relatively small, the two lanes of IT traffic can merge efficiently into a single lane when they reach the network switch. The server receives the traffic at 800Mb/s without much delay. In most cases, bandwidth or throughput adds up linearly.
Now examine the right portion of Figure 1; it illustrates what happens when large media files are being transported over the same network. Keeping with the transportation analogy, we see that the large media files actually behave more like trains approaching a junction than cars (i.e., long, continuous bursts of traffic arriving back to back). Two clients sending large files to the same server at 400Mb/s no longer manage to get all the traffic through to the server without interference. If both trains (large media files) arrive at the (switch) junction simultaneously, they crash into each other; all traffic is stopped. Consequently, the receiving media server does not attain an aggregated throughput of 800Mb/s, but much less. The bottom line from this example is that data/throughput can no longer be added linearly.
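This effect can be reproduced with a very rough simulation. The sketch below (a simplified model, not the tooling used in the VRT tests) feeds two sources of equal average load into a single 1Gb/s egress port with a small buffer: short, well-spaced packets pass through without loss, while long back-to-back bursts overflow the buffer even though the average load is identical.

```python
# A minimal sketch, assuming a slot-based model in which the egress port
# serves exactly one 1518-byte packet per time slot (i.e., 1Gb/s line rate).
# Both scenarios offer the same 2 x 400Mb/s average load.

def simulate(burst_packets, gap_packets, buffer_bytes=64_000,
             packet_bytes=1518, slots=100_000):
    """Each source sends `burst_packets` back to back, then stays idle for
    `gap_packets` slots; the second source is offset by half a burst."""
    queue = 0      # bytes waiting in the shared egress buffer
    lost = 0
    for t in range(slots):
        for phase in (0, burst_packets // 2):             # two offset sources
            pos = (t + phase) % (burst_packets + gap_packets)
            if pos < burst_packets:                       # this source is sending
                if queue + packet_bytes <= buffer_bytes:
                    queue += packet_bytes
                else:
                    lost += 1                             # buffer overflow
        queue = max(0, queue - packet_bytes)              # egress drains one packet per slot
    return lost

# Same 400Mb/s average per source (sending 2 of every 5 slots), very different outcome:
print("short bursts:", simulate(burst_packets=2, gap_packets=3), "packets lost")
print("long bursts: ", simulate(burst_packets=2000, gap_packets=3000), "packets lost")
```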
Unfortunately, these effects are not typically visible when applying classical network monitoring tools. Such tools are geared toward IT traffic and typically measure throughput by averaging over relatively long time intervals. Building networks for media traffic requires fundamentally different tools that can look at the network on a time scale several orders of magnitude smaller. At this scale, concepts such as average network throughput become meaningless; a network link is either loaded with a packet or it is idle. There is no such thing as a bandwidth percentage anymore.
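The difference in perspective is easy to see with even a few lines of analysis code. The sketch below (a hypothetical helper, not a tool named in the article) bins a packet capture into averaging windows; the same trace that looks like a steady stream when averaged per second resolves into full line-rate bursts separated by idle periods when averaged per millisecond or finer.

```python
# A minimal sketch: compute average throughput per window from a packet trace.
# `packets` is assumed to be an iterable of (timestamp_seconds, size_bytes)
# pairs taken from a capture; the function and trace names are illustrative.

def throughput_mbps(packets, window_s):
    """Return a list of average throughputs (Mb/s), one per averaging window."""
    buckets = {}
    for ts, size in packets:
        idx = int(ts / window_s)
        buckets[idx] = buckets.get(idx, 0) + size
    return [buckets.get(i, 0) * 8 / window_s / 1e6 for i in range(max(buckets) + 1)]

# per_second      = throughput_mbps(trace, 1.0)    # looks like a smooth, modest average
# per_millisecond = throughput_mbps(trace, 0.001)  # reveals line-rate bursts and idle gaps
```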
Testing HD over IP networks
To demonstrate these effects, the VRT-medialab analyzed the IP network traffic generated by an editing client connected to a media storage cluster on different time scales. As a challenging test case, we used a set of Avid Media Composer editing clients using multiple streams of HD video, all connected to a generic media storage cluster architecture like that shown in Figure 2. The network used a Workhorse Application Raw Power (WARP) media storage cluster from VRT-medialab as a generic storage platform. Cisco Nexus 5000 switches with lossless 10GE and Data Center Bridging functionality were used in the cluster network, providing a generic media storage solution with ample throughput for HD editing.
Several Avid Media Composer editing clients were connected via a Cisco Nexus 7000 switch to the storage cluster using both 1Gb and 10Gb links. IP media traffic was transported via the SMB/CIFS (Server Message Block/Common Internet File System) protocol between the server and the client.
Analyzing media traffic
The analysis consisted of playing a single DNxHD 145Mb/s HD video stream with a frame rate of 29.97 frames per second over a single 1Gb connection. Figure 3 illustrates the traffic moving between the storage and client.
The left side of the figure displays the macroscopic time scale of seconds and measures an average bandwidth of around 150Mb/s. This corresponds nicely with the compression bandwidth of the DNxHD 145Mb/s codec; the measured throughput is slightly higher than the codec spec due to the header and protocol overhead of the TCP/IP packets. During the first 1.6 seconds, the client prefetches a number of video frames into its buffers to overcome any jitter from the storage and network. During this period, the measured macroscopic bandwidth is around 580Mb/s, 4X the codec spec, indicating the client is reading four times faster than the playout speed.
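A quick back-of-the-envelope check shows how these macroscopic numbers fit together. (The 606,208-byte frame size is quoted later in the article; the 1518/1460 framing ratio is an assumption about Ethernet/TCP overhead.)

```python
# Sanity check of the macroscopic figures for DNxHD 145 at 29.97 frames/s.
frame_bytes   = 606_208                       # one DNxHD 145 video frame
fps           = 29.97
codec_mbps    = frame_bytes * 8 * fps / 1e6   # ~145.3 Mb/s of video payload
wire_mbps     = codec_mbps * 1518 / 1460      # ~151 Mb/s with packet headers (assumed 1460-byte MSS)
prefetch_mbps = 4 * codec_mbps                # ~581 Mb/s, matching the observed prefetch rate
print(round(wire_mbps), round(prefetch_mbps))
```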
These results correspond to the specifications typically used by video engineers and media solution suppliers. Throughput is expressed on this macroscopic time scale, and network architectural designs are based solely on these values. However, zooming in on increasingly smaller time scales reveals a completely different story.
Now, examine the right side of Figure 3; the seemingly continuous throughput displayed on the left actually appears to consist of discrete blocks of five consecutive video frames. During the prefetch period, 37 groups of five video frames are sent over the network, almost back to back. In steady state, blocks of five video frames are interleaved with long periods of no traffic on the link. At the 10ms time scale, the network reaches an average bandwidth of around 600Mb/s during transmissions. This is slightly higher than the average throughput during the prefetch period, but more than four times higher than the average throughput during the steady-state playout. For this codec and frame rate, each video frame corresponds with 606,208 bytes of video data, so the five-frame burst corresponds to around 3MB, with a duration of 42.4ms.
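The same kind of arithmetic reproduces the five-frame burst figures (again assuming a 1518/1460 framing overhead on top of the video payload):

```python
# One block of five DNxHD 145 frames, sent at the ~600Mb/s measured on the 10ms scale.
frame_bytes = 606_208
block_bytes = 5 * frame_bytes                # ~3.03MB of video data
wire_bytes  = block_bytes * 1518 / 1460      # plus packet headers (assumed overhead ratio)
duration_ms = wire_bytes * 8 / 600e6 * 1e3   # ~42ms, close to the measured 42.4ms
print(f"{block_bytes / 1e6:.2f} MB, ~{duration_ms:.0f} ms")
```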
Looking at an even smaller time scale, at the level of individual packets, each five-frame block is actually split into 47 smaller bursts. Each of these smaller bursts consists of 45 1518-byte packets (14 for the smaller final burst), transmitted back to back within each burst. Hence, on this µs time scale, a continuous burst of packets is measured at a throughput of 980Mb/s, reaching the full line rate of the link for 555µs. This is 6.75 times higher than the average steady-state macroscopic bandwidth specified by the codec.
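At the packet level the numbers can be checked the same way:

```python
# One 45-packet SMB burst sent back to back at the measured 980Mb/s line rate.
packets_per_burst = 45
packet_bytes      = 1518
burst_bytes       = packets_per_burst * packet_bytes   # 68,310 bytes on the wire
burst_us          = burst_bytes * 8 / 980e6 * 1e6      # ~558µs, close to the measured 555µs
ratio             = 980 / 145                          # ~6.76x the codec's average rate
print(f"{burst_bytes} bytes, ~{burst_us:.0f} µs, {ratio:.2f}x")
```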
Implications for the IP infrastructure
Clearly, measuring data rates at a macroscopic average throughput is too limited to fully characterize media traffic. It is only by analyzing the traffic at smaller time scales that we can determine how media traffic will be processed by the IP switch and understand the requirements of the network.
As we have seen, media flows that share a common link can interfere with each other on a small time scale, generating a local oversubscription of the switch buffers and ultimately introducing packet loss. Similarly, a bandwidth mismatch in the network will put the internal switch buffers under pressure and can result in packet loss. The solution is to provide sufficient buffering.
The next examination used a Cisco Nexus 7000 IP switch to assess its buffer performance and overall functionality in a media environment. (See Figure 4.) Note that the conclusions drawn here are only valid for this particular setup, and have to be reconsidered for other protocols and applications.
The test analyzed the following links:
- 1Gb (server) to 1Gb (client)
- 10Gb (server) to 10Gb (client)
- 10Gb (server) to 1Gb (client)
In the first two cases, traffic passes unhindered through the switch, with no oversubscription or bandwidth mismatch in the network path. Hence, the detailed structural description of the bursty media traffic given above is valid for both cases. The traffic is bursty, and when multiple video streams are active, the microscopic SMB bursts are concatenated. This leads to high throughputs compared with the average macroscopic video specifications. In these tests, the Cisco Nexus 7000 switch was perfectly capable of transporting these high loads.
In the third case, the 10Gb-to-1Gb bandwidth mismatch creates an internal oversubscription in the switch. At the 1Gb egress ports, packets arrive at a much higher rate than they can be forwarded to the client, stressing the egress buffer. The maximum burst that the SMB traffic will produce before it requires a response from the client is 68,144 bytes (TCP/IP overhead included). Because the egress port sends out packets at a rate 10X slower than the incoming rate, the egress buffer must be able to store 90 percent of this burst to avoid packet loss. This leads to a buffer requirement of around 60KB per video stream, or around 300KB for a test using five streams. This is well within the specifications of the 48 × 1Gb port blades of the Cisco Nexus 7000 switch (max 6.15MB per port).
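The buffer estimate follows directly from these numbers:

```python
# Egress buffer needed for the 10Gb-to-1Gb case, using the figures above.
max_smb_burst  = 68_144                                      # bytes per SMB burst, TCP/IP overhead included
drain_fraction = 1 / 10                                      # egress forwards at 1/10th of the ingress rate
buffer_per_stream   = max_smb_burst * (1 - drain_fraction)   # ~61KB must be buffered per stream
buffer_five_streams = 5 * buffer_per_stream                  # ~307KB for the five-stream test
print(round(buffer_per_stream / 1e3), "KB per stream,",
      round(buffer_five_streams / 1e3), "KB for five streams")
```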
Change must come
The way the IT world characterizes and defines IP traffic must change to accommodate the demands of today's media. Macroscopic quantities such as average bandwidth, oversubscription and available capacity are no longer the only relevant parameters and must be interpreted differently. Additional specifications on much smaller time scales are required, along with a deeper understanding of the detailed traffic characteristics and of the switch and buffer mechanisms in the network.
It is critical to understand that the IP switches needed for media networks are not a commodity, as they often are in IT networking. Most classical IT switches are designed for environments where oversubscription is less likely. Such switches may lack the buffer capacity or QoS capabilities needed to avoid interference between large concurrent media transfers.
Be sure to carefully consider these factors when building a media network.
Luc Andries is ICT-architect for VRT-medialab.
VRT-medialab is the technological research department of the VRT, the public service broadcaster of Flanders, Belgium.