Vast distribution
Distributing content to multiple viewing platforms is commonplace for almost all content rights owners in today's competitive and increasingly fragmented media environment. The expanding breadth of viewing devices is accompanied by divergence in the encoding formats and parameters required for optimal device compatibility. While traditional production and distribution platforms used a limited range of formats, little such consistency exists today. IPTV, Internet TV, broadcast, mobile phones and personal media players all have their own unique characteristics, and no one encoding format comprehensively serves them all. For any multiplatform distribution strategy, encoding to multiple formats, resolutions and bit rates is an unavoidable requirement. Factor in nondistribution formats for acquisition, production and archive, and the types of deliverables number in the dozens.
Just looking at one distribution platform, playback on a personal computer from the Web, leads to an array of format considerations. Compression formats common on the Web currently include VC-1 (the SMPTE standard related to Microsoft Windows Media), H.264 (also known as AVC, or MPEG-4 Part 10), On2 VP6 and Apple QuickTime. Technologies for building Web-viewing environments vary in which compression formats they support. Adobe Flash technologies support H.264 and On2 VP6, while Microsoft Silverlight supports VC-1 and H.264.
To reach the broadest audience possible across varying connection speeds and operating systems — and especially if syndicating the content — you may choose to offer Web-based content in multiple resolutions, bit rates and compression formats. Even if only a single format is used, recent enhancements in Web video delivery to dynamically adapt to variations in consumer bandwidth are dependent on the creation of streams in multiple bit rates and resolutions concurrently. You will likely also want to create a full-resolution archive copy of the content for future transcoding into additional new deliverable formats as required.
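As an illustration, an adaptive-delivery package can be described as a simple ladder of renditions. A minimal sketch in Python follows; the rung names, resolutions, bit rates and the libx264 codec choice are all assumptions for the example, not recommendations for any particular platform.

```python
# Illustrative adaptive-streaming ladder: each rung pairs a resolution with a
# bit rate so players can switch renditions as consumer bandwidth varies.
# All values here are assumptions for the sketch, not platform requirements.
LADDER = [
    {"name": "hd",     "width": 1280, "height": 720, "video_kbps": 3000},
    {"name": "sd",     "width": 854,  "height": 480, "video_kbps": 1200},
    {"name": "mobile", "width": 640,  "height": 360, "video_kbps": 600},
]

def rung_args(rung, codec="libx264"):
    """Build the per-rendition ffmpeg output arguments for one ladder rung."""
    return [
        "-map", "0:v", "-map", "0:a?",
        "-c:v", codec, "-b:v", f"{rung['video_kbps']}k",
        "-s", f"{rung['width']}x{rung['height']}",
        "-c:a", "aac", "-b:a", "128k",
        f"{rung['name']}.mp4",
    ]
```

In practice, the rungs would also share a keyframe interval so that players can switch renditions cleanly at segment boundaries.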
Efficiently repurposing content for these varying platforms and devices presents new challenges, and the encoding systems used to create these deliverables are critical points in the process. The quality, performance and efficiency of the encoders and surrounding workflow have a significant impact on productivity, costs, the viewer experience and the timely availability of content. It is no longer enough for an encoding system to output just a full-resolution version and a proxy. Efficient multiplatform output requires encoding solutions that optimally support multiple formats and target devices. Even better are solutions that can create these deliverables simultaneously in real time, so that the total encoding time equals the duration of the content. Faster-than-real-time encoding can also be achieved when encoding from a nonlive source, such as existing mezzanine media files.
In this article, encoding primarily refers to creating file-based deliverables. The inputs to the encoder could be live sources (such as a video router), decks or existing media files, while the deliverables could be used for subsequent on-demand viewing or download on multiple devices, Blu-ray or DVD authoring, archive, or a plethora of other purposes. Most of the same concepts also apply to encoding for real-time distribution (such as broadcast or live streaming), although the breadth of compression and container formats used in file-based production and distribution far exceeds that used for live delivery.
Real-time, multiformat efficiency
Most file-to-file transcoding solutions can create multiple output formats from a single source. However, when capturing and encoding from a live source or tape in real time, solutions differ considerably. (See Figure 1.) Some output only a single compression format, resolution and bit rate at a time. Others can output multiple resolutions and bit rates concurrently but only in a single format, while the most flexible solutions can capture and encode to multiple formats and output parameters simultaneously in real time.
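To make that distinction concrete, here is a minimal sketch of a single-pass, multiformat encode using the open-source ffmpeg tool as a stand-in for whatever encoder is actually deployed. One pass over the source yields an H.264 MP4 and an MPEG-2 transport stream concurrently; the file names and bit rates are illustrative assumptions.

```python
import subprocess

# One read of the source feeds two concurrent outputs: ffmpeg applies the
# options preceding each output file name to that output only.
subprocess.run([
    "ffmpeg", "-i", "source.mxf",
    # Output 1: H.264 in an MP4 container for Web/on-demand use
    "-c:v", "libx264", "-b:v", "4M", "-c:a", "aac", "web.mp4",
    # Output 2: MPEG-2 in a transport stream for broadcast-style use
    "-c:v", "mpeg2video", "-b:v", "15M", "-c:a", "mp2", "broadcast.ts",
], check=True)
```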
This most flexible class of encoding solution offers numerous benefits:
- Less equipment: By eliminating the need for separate encoders for each format, multiformat systems reduce space and power requirements while lowering equipment acquisition costs.
- Reduced operational complexity: While separate encoders for each format could be deployed in parallel to achieve simultaneous deliverables, doing so adds to the complexity of the workflow. Automation requirements increase to ensure that all encoding is perfectly synchronized, while any change to the desired combination of deliverables may involve rerouting signals between the encoders and reconfiguring settings on multiple systems.
- Faster turnaround: Systems that support multiple formats, but not concurrently, can still reduce overall equipment requirements, but they require multiple passes from a repeatable source (such as tape) to achieve the desired deliverables. Performing multiformat encodes in a single real-time pass saves significant time.
- Less wear on supporting equipment: If the source content is tape-based, each additional ingest and encoding pass incurs another complete playout of the tape content, causing additional wear on the deck.
- Flexible redeployment: A broadcaster's encoding needs today may be very different in the not-so-distant future, and multiformat systems can easily be redeployed and repurposed. Significant changes in preferred deliverable formats (for example, from On2 VP6 to H.264 on a Web site) can also be eased by offering both formats in parallel on an interim basis to reduce viewer alienation.
While all of these benefits are advantageous operationally to the content provider performing the encoding, the speed of turnaround is particularly noteworthy, as timely availability of content is critical to audience acquisition and retention. Timeliness is particularly significant with news and sports content, but it extends even to longer-form content such as episodic series.
Hardware and software
For an encoder to ingest content from a live or tape-based source, it must have at least a basic hardware component to interface with analog or SDI sources. But precompression image processing and the actual compression itself may be performed in either hardware or software.
Systems relying solely on hardware compression tend to be limited in the breadth of supported encoding formats. Even where the hardware supports multiple formats, it may only be able to do one at any given time. Furthermore, while compression formats such as MPEG-2 are relatively mature, newer formats such as H.264 are still evolving, and new formats continue to emerge. While most hardware-centric encoders are firmware-upgradeable with certain extensions of existing formats, more dramatic extensions or completely new formats may require new hardware. As such, these hardware-centric encoders are not well-suited to multiplatform, multiformat applications, but may be appropriate for usages targeting a single platform (such as a live satellite channel).
Systems that combine hardware and software in a common computing platform offer greater flexibility in the breadth and upgradability of compression and container formats, as enhancements and extensions can be applied through software updates. Beyond the formats themselves, software-centric encoders may also provide more flexibility and robustness in the features surrounding those formats. Examples include automated distribution of the resulting deliverables (such as publishing to a Web site, or file transfer to distribution partners), branding, and content protection or usage tracking through watermarking and DRM.
Optimal effectiveness in multiformat encoding can be achieved through a combination of hardware and software processing. (See Figure 2.) A key step in high-quality encoding is preprocessing: essentially grooming the source signal prior to compression. Deinterlacing, video noise reduction and filtering are all examples of preprocessing functions that can significantly improve the quality and bandwidth efficiency of the compressed output. Performing preprocessing in hardware enables the use of more sophisticated algorithms than could be achieved in real time in software alone. Furthermore, even basic software preprocessing algorithms consume processing time on the host system's CPUs. By performing the preprocessing in hardware, more CPU processing power is left available for the actual compression, increasing the number of outputs that can be created simultaneously.
Further contributing to efficiency in a hardware/software system, preprocessing settings common to all target outputs need only be applied once, with the preprocessing shared as input to all output compression algorithms.
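As a sketch of that shared-preprocessing principle, the ffmpeg filter graph below (again a stand-in, with illustrative settings and file names) deinterlaces and denoises the source once, then splits the groomed frames to two differently scaled, separately encoded outputs.

```python
import subprocess

# yadif (deinterlace) and hqdn3d (noise reduction) run once on the source;
# split then feeds the same groomed frames to both scaled outputs.
subprocess.run([
    "ffmpeg", "-i", "source.mxf",
    "-filter_complex",
    "[0:v]yadif,hqdn3d,split=2[hd][sd];"
    "[hd]scale=1280:720[hdout];[sd]scale=640:360[sdout]",
    "-map", "[hdout]", "-map", "0:a?", "-c:v", "libx264", "-b:v", "3M",
    "-c:a", "aac", "hd.mp4",
    "-map", "[sdout]", "-map", "0:a?", "-c:v", "libx264", "-b:v", "800k",
    "-c:a", "aac", "sd.mp4",
], check=True)
```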
Given the virtually limitless combinations of compression format, container, encoding parameters and quality settings for each deliverable, and an effectively unlimited number of deliverables, it's possible to exceed the real-time capabilities of even the most robust encoder. While systems that integrate hardware and software may not have predetermined limits on the number of concurrent encodes, the CPU horsepower of the system imposes a practical one. The number of simultaneous outputs the system can handle varies with the combination of deliverables and the specifications of each. Advanced H.264 encoding, for example, is more computationally intensive than basic MPEG-2 encoding, so more outputs can be created alongside a full-resolution MPEG-2 encode than alongside an H.264 encode.
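A back-of-the-envelope capacity check illustrates the point. The relative cost weights and the budget below are invented for the sketch; real figures depend entirely on the encoder implementation, presets and hardware.

```python
# Hypothetical relative CPU cost per real-time output, normalized so that a
# full-resolution MPEG-2 encode costs 1.0. All of these weights are assumptions.
COST = {"mpeg2_full": 1.0, "h264_full": 2.5, "h264_sd": 0.8, "proxy": 0.3}

def fits_in_real_time(outputs, budget=3.5):
    """Return True if the requested outputs fit within the CPU budget."""
    return sum(COST[o] for o in outputs) <= budget

# A full-res MPEG-2 leaves more headroom for extra outputs than H.264 does:
print(fits_in_real_time(["mpeg2_full", "h264_sd", "proxy", "proxy"]))  # True
print(fits_in_real_time(["h264_full", "h264_sd", "proxy", "proxy"]))   # False
```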
When the desired combination can't be achieved in real time, workflow features of the encoder can still reduce the operational complexity and manual effort required. Automated features that ingest to an uncompressed or lossless intermediate and subsequently transcode it into the desired deliverables produce the required results without manual intervention, while maintaining the benefits of reduced equipment overhead, a single playout pass and minimal operational effort.
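A minimal sketch of that fallback workflow, again assuming illustrative ffmpeg tooling and an FFV1 lossless intermediate: the single real-time pass only captures, and the deliverables are then transcoded from the intermediate file without any further playout of the source.

```python
import subprocess

def ingest_then_transcode(source, deliverables):
    """Stage 1: one real-time pass to a lossless intermediate.
    Stage 2: transcode each deliverable from that file, faster than real
    time where CPU allows, with no further playout of the source."""
    intermediate = "intermediate.mkv"
    subprocess.run(["ffmpeg", "-i", source, "-c:v", "ffv1",
                    "-c:a", "flac", intermediate], check=True)
    for args, outfile in deliverables:
        subprocess.run(["ffmpeg", "-i", intermediate, *args, outfile],
                       check=True)

# Illustrative deliverable list; settings are assumptions, not recommendations.
ingest_then_transcode("source.mxf", [
    (["-c:v", "libx264", "-b:v", "4M", "-c:a", "aac"], "web.mp4"),
    (["-c:v", "mpeg2video", "-b:v", "15M", "-c:a", "mp2"], "broadcast.ts"),
])
```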
Brian Stevenson is the director of product management for Digital Rapids.