Integrating HD and SD video and audio


Figure 1. Aspect ratio control with AFD in a hybrid facility.

A critical issue in the successful deployment of HD services is managing content with different aspect ratios and processing audio effectively. Naturally, the complexity of these format issues depends on the scale of a facility's HD operations, but all HD facilities face these signal-processing challenges to some degree.





Aspect ratio conversion

Broadcasters working with both HD and SD need to maintain separate HD and SD output paths while broadcasting the same content on both channels. While most new material will be acquired natively in HD, broadcasters will continue to receive some SD material in both 16:9 and 4:3 formats, and they still need to work with their 4:3 archived material. Efficiently handling all these different aspect ratios has become a major issue for many broadcasters.


Figure 2. AFD for sequential up- and downconversion.

To address this challenge, broadcasters use automation and traffic systems, which allow operators to specify the aspect ratio conversion (ARC) format for each piece of content. This setting can then be recalled by the output conversion card for the HD and SD signal paths.
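As a rough illustration only, the sketch below shows how a per-item ARC setting entered in traffic might be recalled at playout. The names (PlaylistItem, ArcFormat, arc_for_playout) are hypothetical and do not represent any particular automation product.

```python
from dataclasses import dataclass
from enum import Enum

class ArcFormat(Enum):
    """Hypothetical ARC presets an operator might assign per clip."""
    FULL_FRAME_16_9 = "16:9 full screen"
    PILLARBOX_4_3 = "4:3 source pillarboxed in the 16:9 frame"
    LETTERBOX_16_9 = "16:9 source letterboxed in the 4:3 frame"
    CENTER_CUT_4_3 = "4:3 center cut from the 16:9 frame"

@dataclass
class PlaylistItem:
    house_id: str
    arc_format: ArcFormat  # entered by an operator in the traffic/automation system

def arc_for_playout(item: PlaylistItem) -> ArcFormat:
    """The setting an output conversion card would recall for this event."""
    return item.arc_format

# Example: a 4:3 archive clip flagged for pillarboxing on the HD channel.
clip = PlaylistItem(house_id="PROMO-0421", arc_format=ArcFormat.PILLARBOX_4_3)
print(arc_for_playout(clip).value)
```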

While the automation system can set the ARC for the station output, this approach doesn't extend to other parts of the facility, such as the studio, online editing suite and ingest area. What's more, adding all of this information to the automation system makes the playout process more complex. And because it relies on multiple operators to enter the correct ARC information, it opens the door to errors and to playout in the wrong ARC format.

Dealing with a mix of aspect ratios on a daily basis involves multiple challenges if the broadcast output is to meet the quality today's viewers expect. In the past, broadcasters would select one ARC setting and use it at all times. This practice often led to the postage-stamp look or other undesirable effects. To overcome this, broadcasters must apply the appropriate ARC to the HD content in order to maintain the shape of the image and avoid cutting out key elements of the picture.

In HD and SD signals, the Active Format Description (AFD) information is inserted into the vertical ancillary data, and it identifies not only the raster (4:3 or 16:9), but also the video signal's ARC type, whether that is 16:9 full screen, 16:9 with black pillarbox, 4:3 full screen, 4:3 with letterbox or another framing.
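As a rough sketch of how a device might interpret this flag, the snippet below maps a handful of the commonly published 4-bit AFD codes for a 16:9 coded frame to their framing descriptions. Treat the table as illustrative, not as a normative excerpt of the standard.

```python
# Commonly published AFD code assignments for a 16:9 coded frame (illustrative).
AFD_CODES_16_9_FRAME = {
    0b1000: "Full-frame 16:9 image",
    0b1001: "4:3 image, pillarboxed in the 16:9 frame",
    0b1010: "16:9 image with all image areas protected",
    0b1011: "14:9 image, pillarboxed in the 16:9 frame",
    0b1101: "4:3 image with alternative 14:9 center",
    0b1111: "16:9 image with alternative 4:3 center",
}

def describe_afd(code: int, frame_is_16_9: bool) -> str:
    """Return a human-readable description of the active image geometry."""
    if frame_is_16_9:
        return AFD_CODES_16_9_FRAME.get(code, f"Unknown/reserved AFD code {code:#06b}")
    # A parallel table would exist for a 4:3 coded frame (SD material).
    return f"AFD code {code:#06b} in a 4:3 coded frame"

print(describe_afd(0b1001, frame_is_16_9=True))
```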

In SD, the user can also apply the modified SMPTE RP-186 standard to identify the ARC type of the material. Because this information becomes part of the video signal, it is maintained as material moves throughout the broadcast facility.


Figure 3. AFD for downconversion.

AFD information can be inserted by different devices, including frame syncs, incoming-feed cards, up-, down- and crossconverters, or a dedicated AFD flag-inserter card. Every signal that comes into a station can be flagged, either automatically or manually. With up-, down- and crossconverters, the flag is inserted automatically during the conversion.

Once this identifying information is included in the video content, it can be used by the conversion card to perform the appropriate ARC automatically. The broadcaster need only predefine the desired look for an up- or downconverted signal, and all converter cards using the AFD information can apply the appropriate conversion without station automation or manual intervention. Another key advantage of this solution is that the up-, down- or crossconverter card makes the ARC change automatically on a frame-by-frame basis, thus ensuring frame-accurate performance.
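The snippet below sketches the kind of per-frame decision an HD-to-SD downconverter could derive from the incoming AFD flag. The mapping shown is an assumed house policy, and choose_downconversion is a hypothetical name, not a vendor API.

```python
def choose_downconversion(afd_code: int) -> str:
    """
    Sketch of the per-frame decision an HD-to-SD downconverter might make
    from the incoming AFD flag (16:9 coded frame assumed). The mapping is
    one plausible house policy, not a normative rule.
    """
    if afd_code == 0b1001:            # 4:3 image pillarboxed in the 16:9 frame
        return "discard pillars, output full-frame 4:3"
    if afd_code == 0b1111:            # 16:9 image shot to protect a 4:3 center
        return "center-cut to 4:3"
    if afd_code in (0b1000, 0b1010):  # full-frame 16:9 image
        return "letterbox into the 4:3 frame"
    return "apply the station's default ARC"

# Because the flag travels in the ancillary data, the card can re-evaluate
# this decision on every frame and follow AFD changes frame-accurately.
for code in (0b1000, 0b1001, 0b1111):
    print(f"AFD {code:04b}: {choose_downconversion(code)}")
```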

To identify the different source ARC types and automate all of these conversions, Miranda Technologies developed a new standard, “Image Formatting Information — Active Format Description (AFD), Bar Data, and Pan-Scan.” The standard has been proposed to SMPTE and is currently undergoing evaluation. (See Figures 1, 2 and 3.)

Preserving audio metadata

The audio-processing challenge for multiformat, HD/SD installations is significant. Broadcasters need to provide a stereo (LoRo) output or a Pro Logic (LtRt) output for the SD signal, along with a 5.1 or 2.0 signal fed to the AC-3 encoder for HD broadcast, whether carried as Dolby E, AC-3 or simply as discrete audio channels within the native HD content. The key challenge here is to maintain and apply audio metadata across the broadcast plant.

Working with 5.1, Dolby E or AC-3 audio always involves handling the associated metadata, which includes such information as the program configuration (e.g., 5.1, 2.0, 2+2+2+2), dialnorm values, dynamic range settings and other details defining the signal. Metadata must accompany the video signal because this information allows the home viewer's AC-3 decoder to react properly and reproduce the right sound at the right moment.
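A minimal sketch of that metadata, and of how a consumer AC-3 decoder uses the dialnorm value, might look like the following. The AudioMetadata structure and its fields are illustrative; the 31-minus-dialnorm attenuation simply reflects the standard -31 dBFS dialog reference.

```python
from dataclasses import dataclass

@dataclass
class AudioMetadata:
    """Illustrative subset of the metadata that rides with Dolby E/AC-3 audio."""
    program_config: str          # e.g. "5.1", "2.0", "2+2+2+2"
    dialnorm: int                # dialog level in dB below full scale (1..31)
    dynamic_range_profile: str   # e.g. "Film Standard", "Music Light"

def decoder_attenuation_db(meta: AudioMetadata) -> int:
    """
    How a consumer AC-3 decoder uses dialnorm: output is scaled so dialog
    lands at the -31 dBFS reference, i.e. 31 - dialnorm dB of attenuation.
    """
    return 31 - meta.dialnorm

meta = AudioMetadata(program_config="5.1", dialnorm=24,
                     dynamic_range_profile="Film Standard")
print(f"Decoder applies {decoder_attenuation_db(meta)} dB of attenuation")  # 7 dB
```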

Until recently, there was no effective means of maintaining metadata inside a plant without a Dolby E or RS-422 interconnect. Miranda Technologies has worked with customers and other manufacturers to develop a standard that will enable metadata to be carried in the ancillary space of the HD-SDI signal. It was logical to embed the audio into the HD-SDI signal, so why not embed the audio metadata as well? By allowing broadcasters to carry this metadata throughout their facilities, this solution expands the opportunities for producing material with 5.1 audio.
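To make the idea concrete, here is a minimal sketch of wrapping a few metadata bytes into a SMPTE 291-style ancillary data packet. The DID/SDID values and payload shown are placeholders, not the identifiers assigned to the audio-metadata proposal.

```python
def add_parity(word: int) -> int:
    """Extend an 8-bit value to 10 bits: b8 = even parity of b0-b7, b9 = NOT b8."""
    parity = bin(word & 0xFF).count("1") & 1
    return (word & 0xFF) | (parity << 8) | ((parity ^ 1) << 9)

def build_vanc_packet(did: int, sdid: int, payload: bytes) -> list[int]:
    """
    Sketch of a SMPTE 291-style ancillary data packet carrying metadata bytes.
    DID/SDID here are placeholders, not the values defined by the proposal
    discussed in the article.
    """
    header = [0x000, 0x3FF, 0x3FF]                   # ancillary data flag
    body = [add_parity(did), add_parity(sdid), add_parity(len(payload))]
    body += [add_parity(b) for b in payload]
    checksum = sum(w & 0x1FF for w in body) & 0x1FF  # 9-bit sum of DID..last UDW
    checksum |= (((checksum >> 8) & 1) ^ 1) << 9     # b9 = NOT b8
    return header + body + [checksum]

packet = build_vanc_packet(did=0x50, sdid=0x01, payload=b"\x06\x18")  # placeholder IDs
print([hex(w) for w in packet])
```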

Downmixing and upmixing audio

The preservation of audio metadata is critical to up- or downmixing audio signals. Although more material is produced with 5.1 audio, broadcasters still need to provide the SD output with a stereo or stereo-coded signal (Dolby Pro Logic or Pro Logic II). Some broadcasters can carry both a 5.1 and a stereo signal, but others simply don't have enough channels available, particularly given the need to provide 5.1, SAP and descriptive video. In an eight-channel server system, for example, there is no room for a stereo pair.

To overcome this obstacle, broadcasters can implement a stereo-coded downmix from the 5.1 audio. (See Figure 4.) The stereo-coded output allows home users with a Dolby surround decoder to enjoy sound separation for multiple audio channels. Downmixing is also used throughout a broadcast facility for dubbing purposes and to provide an audio sample similar to the sound experienced by home viewers watching the broadcast in SD.


Figure 4. Creating an audio downmix.
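The arithmetic behind such a downmix is straightforward. The sketch below shows a metadata-driven LoRo fold-down of one 5.1 sample frame, with -3 dB center and surround mix levels standing in for values that would normally come from the Dolby metadata.

```python
def loro_downmix(L, R, C, LFE, Ls, Rs, cmix_db=-3.0, smix_db=-3.0):
    """
    Sketch of a metadata-driven LoRo stereo downmix of one 5.1 sample frame.
    cmix_db/smix_db stand in for the center/surround downmix levels carried
    in the Dolby metadata; -3 dB is a common default. The LFE channel is
    conventionally dropped from a two-channel downmix.
    """
    cmix = 10 ** (cmix_db / 20.0)
    smix = 10 ** (smix_db / 20.0)
    Lo = L + cmix * C + smix * Ls
    Ro = R + cmix * C + smix * Rs
    return Lo, Ro

# Example: a dialog-only frame (center channel active) folds equally into both legs.
print(loro_downmix(L=0.0, R=0.0, C=1.0, LFE=0.2, Ls=0.0, Rs=0.0))
```

An LtRt (Pro Logic-compatible) fold-down additionally sums the surround information out of phase between the left and right legs so that a Dolby surround decoder in the home can steer it back apart.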

While a basic stereo downmix is relatively easy, producing a stereo-coded output is more complicated. Carrying only the 5.1 audio together with its metadata simplifies the audio scheme. When the incoming audio is still 2.0, the device performing the downmix must auto-detect that the signal is 2.0 and deliver a stereo-coded signal with the appropriate metadata. When no metadata is available, the device must be able to generate default 2.0 metadata to send with the audio.
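A sketch of that fallback logic, with hypothetical names and an assumed house default for dialnorm, could look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AudioMetadata:   # same illustrative structure as in the earlier sketch
    program_config: str
    dialnorm: int

def metadata_for_downmix(channels_present: int,
                         incoming: Optional[AudioMetadata]) -> AudioMetadata:
    """
    Pass incoming metadata through when it exists; otherwise synthesize a
    2.0 default. The default dialnorm of 24 is an assumed house value.
    """
    if incoming is not None:
        return incoming
    if channels_present <= 2:
        return AudioMetadata(program_config="2.0", dialnorm=24)
    return AudioMetadata(program_config="5.1", dialnorm=24)

print(metadata_for_downmix(channels_present=2, incoming=None))
```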

With the move toward 5.1 audio, some broadcasters would rather not carry 2.0 at all, but they still want a 5.1 signal created from their 2.0 (stereo) sources. Transforming a stereo 2.0 signal into a 5.1 signal is called an upmix, and this function helps create the desired surround sound field. It simplifies the audio paths and functions within a facility and delivers a higher-quality experience to the home viewer. Hence, the ability to create and manage high-quality up- and downmixes is also fundamental to successful HD plant operations.

Jean-Claude Krelic is interfaces project manager for Miranda Technologies.
