HTTP ABR streaming
The Cisco Systems Visual Networking Index (VNI) predicts that more than 50 percent of all global Internet traffic will be attributed to video by the end of 2012. It also projects that, in addition to television screens, video delivery to cell phone and computer screens will be increasingly common. Globally, Internet video traffic is projected to be 58 percent of all consumer Internet traffic in 2015, up from 40 percent in 2010, with TV video accounting for 16 percent of consumer Internet video traffic. At that time, three trillion minutes of video content are projected to cross the Internet each month, up from 664 billion in 2010. There is no doubt that if you are in the business of transmitting video, you will likely be using IP in the near future.
Delivering acceptable video quality over IP to TV viewers and other devices has led to a still-evolving delivery infrastructure. Networks at the required scale have higher packet loss and error rates than smaller managed networks. Adaptive Bit Rate (ABR) delivery protocols such as Apple's HLS and Microsoft's Smooth Streaming, among others, help address these issues. These protocols use HTTP over TCP to avoid data loss and dynamically adapt bit rates to networks that can provide only unpredictable instantaneous bandwidth.
Using a CDN to distribute the content to a range of servers located close to the viewers is another key feature of successful deployments, avoiding the congestion and bottlenecks of centralized servers. Yet, despite more complex protocols to handle a range of transport issues, high-quality performance is not guaranteed. Cost-effective operations and a good viewer experience depend on good monitoring visibility and targeted performance metrics for rapid problem identification, location and resolution.
ABR protocols
ABR video delivery mechanisms over IP that enable this rapidly growing Internet video market are effective, but complex. Not only do they require the usual video compression encoders to achieve practical bit rates, but they also require a host of other devices and infrastructures, including segmenting servers, origin servers, a CDN and a last-mile delivery network.
ABR protocols help deliver a quality video experience to viewers by overcoming common IP data network performance issues such as packet arrival jitter, high loss rates, unpredictable bandwidth and security firewall issues. HTTP delivery solves most firewall issues as it is almost universally unblocked since it is also used for web browsing. HTTP, which uses TCP, assures loss-free payload delivery as well. While predictable instantaneous bandwidth levels are a challenge in unmanaged networks, by using variable encoding rates and these protocols, the viewer's client device can dynamically select the best stream bit rate for the instantaneously available bandwidth.
Apple's HTTP Live Streaming (HLS) is an example of a protocol that successfully navigates the challenges of unmanaged networks to transfer multimedia streams using HTTP. To play a stream, an HLS client first obtains the playlist file, which contains an ordered URI list of media files to be played. It then successively obtains each of the media files in the playlist. Each media file is, typically, a 10-second segment of the desired multimedia stream. A playlist file is simply a plain text file containing the locations of one or more media files that together make up the desired program.
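To make the sequence concrete, the following minimal Python sketch (using a hypothetical playlist URL) fetches a media playlist, treats every non-tag line as a segment URI, and downloads each segment in play order. A real player would decode and buffer the segments rather than simply reporting their size.

```python
import urllib.request
from urllib.parse import urljoin

PLAYLIST_URL = "http://example.com/stream/playlist.m3u8"  # hypothetical URL


def fetch(url):
    """Download a resource over HTTP and return its raw bytes."""
    with urllib.request.urlopen(url) as resp:
        return resp.read()


def parse_media_playlist(text):
    """Return the media segment URIs from an M3U8 media playlist.

    Lines beginning with '#' are tags or comments; every other non-empty
    line is the URI of one media segment, listed in play order.
    """
    return [line.strip() for line in text.splitlines()
            if line.strip() and not line.startswith("#")]


def play_once(playlist_url=PLAYLIST_URL):
    playlist = fetch(playlist_url).decode("utf-8")
    for uri in parse_media_playlist(playlist):
        segment = fetch(urljoin(playlist_url, uri))  # each roughly a 10-second chunk
        # A real client would hand the segment to a decoder and buffer ahead;
        # here we only report its size to keep the sketch self-contained.
        print(f"fetched {uri}: {len(segment)} bytes")


if __name__ == "__main__":
    play_once()
```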
The media file is a segment, or “chunk,” of the overall presentation. For HLS, it is always formatted as an ISO 13818 MPEG-2 TS or an MPEG-2 audio elementary stream. The content server divides the media stream into media files of approximately equal durations at packet and key frame boundaries to support effective decoding of individual media files. (See Figure 1.) The server creates a URI for each media file that allows clients to obtain the file and creates the playlist file that lists the URIs in play order.
Multiple playlist files are used to provide different encodings of the same presentation. A variant playlist file that lists each variant stream allows clients to dynamically switch between encodings. Each variant stream presents the same content, and each variant playlist file has the same target duration. If the playlist file obtained by the client is a variant playlist, the client can choose the media files from the variants as needed based on its own criteria, such as how much network bandwidth is currently available. The client will attempt to load media files in advance of when they will be required for uninterrupted playback to compensate for temporary variations in latency. The client must periodically reload the playlist file to get the newest available media file list, unless it receives a tag marking the end of the available media.
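The variant-switching logic can be illustrated with a short sketch. Assuming the variant playlist advertises each stream's bit rate with the standard #EXT-X-STREAM-INF BANDWIDTH attribute, the client can pick the highest-rate variant that fits its currently measured throughput; the sample playlist and the selection rule below are illustrative only.

```python
import re


def parse_variant_playlist(text):
    """Map each variant's advertised BANDWIDTH (bits/s) to its playlist URI.

    In a variant playlist, each #EXT-X-STREAM-INF tag describes the variant
    whose URI appears on the following line.
    """
    variants = {}
    pending_bw = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXT-X-STREAM-INF"):
            match = re.search(r"BANDWIDTH=(\d+)", line)
            pending_bw = int(match.group(1)) if match else None
        elif line and not line.startswith("#") and pending_bw is not None:
            variants[pending_bw] = line
            pending_bw = None
    return variants


def choose_variant(variants, measured_bps):
    """Pick the highest-bandwidth variant that fits the measured throughput,
    falling back to the lowest variant if none fits."""
    fitting = [bw for bw in variants if bw <= measured_bps]
    best = max(fitting) if fitting else min(variants)
    return variants[best]


# Example: with 1.2 Mb/s of measured throughput, the client drops to the low variant.
sample = """#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000
low/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000
high/playlist.m3u8
"""
print(choose_variant(parse_variant_playlist(sample), 1_200_000))  # low/playlist.m3u8
```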
CDN operation
Using HTTP client-driven streaming protocols like HLS effectively supports adaptive bit rates, handles high network error rates and firewall issues, and supports both on-demand and live streaming. However, with millions of clients establishing individual protocol sessions to receive video, scalability must be considered. Further challenging the system design are sudden spikes in requests from "flash crowds" or the "Slashdot effect," where sudden, unexpected demand driven by current events overwhelms servers and content becomes temporarily unavailable.
The CDN is a collection of network elements that replicates content to multiple servers to transparently deliver content to users. The elements are designed to maximize the use of bandwidth and network resources to provide scalable accessibility and maintain acceptable QoS. Particular content can be replicated as users request it or can be copied before requests are made by pushing the content to distributed servers closer to where it is anticipated users will be requesting it.
In either case, the viewer receives the content from a local server, relieving congestion on the origin server and minimizing the transmission bandwidth required across wide areas. Caching and/or replica servers located close to the viewer are also known as edge servers or surrogates. To realize the desired efficiencies, client requests must be transparently redirected to the optimal nearby edge server.
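A minimal sketch of the pull-through behavior of such an edge server is shown below; the origin URL is hypothetical, and the in-memory dictionary stands in for whatever disk or SSD store a production cache would use.

```python
import urllib.request
from urllib.parse import urljoin

ORIGIN_BASE = "http://origin.example.com/"  # hypothetical origin server


class EdgeCache:
    """Illustrative pull-through cache: serve a segment from local storage
    if present, otherwise fetch it once from the origin and keep a copy."""

    def __init__(self, origin_base=ORIGIN_BASE):
        self.origin_base = origin_base
        self.store = {}  # path -> cached bytes (a real server uses disk/SSD)

    def get(self, path):
        if path not in self.store:
            # Cache miss: pull from the origin once, then serve locally afterwards.
            with urllib.request.urlopen(urljoin(self.origin_base, path)) as resp:
                self.store[path] = resp.read()
        return self.store[path]
```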
Content distribution and management strategies are critical in a CDN for efficient delivery and high-quality performance. The type and frequency of viewer requests and their location must dynamically drive the directory services that transparently steer the viewer to the optimum edge server, as well as the replication service, to assure that the requested content is available at that edge server for a timely response to the viewer. A simplistic approach is to replicate all content from the origin server to all surrogate servers, but this solution is not efficient or reasonable given the increase in the size of available content. Even though the cost of storage is decreasing, available edge server storage space is not assured. Updating this scale of copies is also unmanageable.
In practice, replication is driven by a combination of predicted and measured content popularity and on-demand algorithms. Organizing and locating edge server clusters to maintain optimum content availability relies on policies and protocols that load balance the servers. Random, round-robin and various weighted server selection policies are used, along with selections based on the number of current connections, the number of packets served, and server CPU utilization, health and capacity; the mix is varied according to load and persistence considerations.
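As one illustration of these policies, the sketch below performs a weighted random selection among healthy edge servers, weighting each by its remaining connection headroom; the server names and capacity figures are assumed for the example.

```python
import random


def pick_edge_server(servers):
    """Weighted random selection among healthy edge servers.

    `servers` is a list of dicts such as:
        {"host": "edge1.example.com", "capacity": 10_000,
         "connections": 4_200, "healthy": True}
    The weight is each server's remaining headroom, so lightly loaded servers
    are chosen more often without pinning all traffic to a single server.
    """
    candidates = [s for s in servers if s["healthy"]]
    weights = [max(s["capacity"] - s["connections"], 0) for s in candidates]
    if not candidates or sum(weights) == 0:
        raise RuntimeError("no edge server has available capacity")
    return random.choices(candidates, weights=weights, k=1)[0]


servers = [
    {"host": "edge1.example.com", "capacity": 10_000, "connections": 9_500, "healthy": True},
    {"host": "edge2.example.com", "capacity": 10_000, "connections": 2_000, "healthy": True},
    {"host": "edge3.example.com", "capacity": 10_000, "connections": 0, "healthy": False},
]
print(pick_edge_server(servers)["host"])  # usually edge2.example.com
```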
Quality assurance
Cost-effective operations rely on good monitoring visibility and performance metrics for rapid problem identification and resolution. QoS performance monitoring metrics provide needed information about stream delivery quality, key information about the types of impairments and their causes, and warnings of impending impairments for ABR streaming networks. Combined with end-to-end monitoring, QoS monitoring in the production network tracks the delivery quality of the flows and also supports other applications such as system commissioning and tuning. (See Figure 2.)
1. Severe underrun: The interval between segments and the file transfer time are both slower than the drain rate.
2. Underrun: The segment interval is slower than the drain rate, but the file transfer time is faster than the drain rate.
3. Warning: The interval between segments and the file transfer time are marginal.
4. Growing buffer: The interval between segments and the file transfer time are both faster than the drain rate.
5. Balanced system: The interval between segments is balanced, and the file transfer is faster than the drain rate.
Figure 2. In adaptive streaming environments, QoS should be monitored post caching server and at the client. This chart shows how the VeriStream metric characterizes instantaneous network delivery quality on a 1-5 scale.
Such metrics are intended to analyze streams susceptible to IP network device and client/server impairments. For adaptive streaming environments, it is also important to monitor QoS at the client end point, which can be used to assess the dynamic performance of network and system delivery. QoS metrics for ABR must continuously analyze the dynamic delivery of stream segments. An example of this summary is shown in Figure 3.
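The sketch below shows one way such a per-segment check could be expressed, mapping the segment request interval and file transfer time against the buffer drain rate onto the 1-5 scale described in Figure 2. The thresholds are illustrative assumptions, not the actual VeriStream calculation.

```python
def classify_delivery(segment_interval_s, transfer_time_s, segment_duration_s,
                      marginal=0.95, balanced=0.80):
    """Map one segment delivery onto the 1-5 scale described in Figure 2.

    segment_interval_s:  time between the starts of successive segment requests
    transfer_time_s:     time taken to download the segment
    segment_duration_s:  playback length of the segment, i.e. the buffer drain rate
    The `marginal` and `balanced` thresholds are illustrative assumptions.
    """
    drain = segment_duration_s
    interval_slow = segment_interval_s > drain
    transfer_slow = transfer_time_s > drain

    if interval_slow and transfer_slow:
        return 1, "Severe underrun"    # buffer empties faster than it can fill
    if interval_slow:
        return 2, "Underrun"           # segments arrive too far apart
    if segment_interval_s > marginal * drain or transfer_time_s > marginal * drain:
        return 3, "Warning"            # barely keeping up with the drain rate
    if segment_interval_s < balanced * drain:
        return 4, "Growing buffer"     # segments arrive well ahead of playback
    return 5, "Balanced system"        # arrival matches playback, transfer is quick


print(classify_delivery(segment_interval_s=8.5,
                        transfer_time_s=3.0,
                        segment_duration_s=10.0))  # (5, 'Balanced system')
```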
Summary
Leaving the well-managed network domain of provider IP networks requires new adaptive bit-rate protocols that are rapidly proving their effectiveness. A comprehensive, end-to-end monitoring strategy gives content and service providers the stream visibility and fault-isolating capabilities needed for timely and efficient adaptive bit-rate network delivery deployments.
James Welch is senior consulting engineer for IneoQuest Technologies.