HD editing in an SD world
Since the launch of the Discovery HD Theater network in June 2002, Discovery Communications has been editing, mixing and creating high-definition programming at its technical center in Bethesda, MD. One of the first steps in preparing to process HD shows was the design of the company's first HD nonlinear edit suite. The chosen platform was the Avid|DS HD.
This crew was on location shooting “James Cameron’s Expedition: Bismarck” in high definition for the Discovery Channel.
Aside from its own particular interface and operational quirks, and the massive amount of storage required, the DS is quite similar to other Avid products, so this article will focus on how it was integrated into an edit system that can accommodate both high-definition and standard-definition editing, and an assortment of complex audio requirements.
General suite applications
It was decided during the design phase that this edit suite should be able to function in three distinct modes (summarized in the sketch after this list):
- Edit programs in standard definition with normal stereo soundtrack; output to Digital Betacam.
- Edit new programs in high definition, with audio elements in place but not a final mix (mix to be done in audio post); output video to HDCAM master, audio tracks transferred via OMF or other means.
- Modify existing HD programs, including 5.1 surround soundtracks (audio not changed); output to HDCAM master with 5.1 soundtrack encoded in Dolby E on two channels, plus stereo soundtrack on two channels.
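For orientation only, the three modes can be boiled down to a small data structure. Everything below paraphrases the list above; the names and fields are invented for illustration and do not come from Discovery's documentation.

```python
# Hypothetical summary of the three suite modes (names and fields are illustrative).
SUITE_MODES = {
    "sd_program": {
        "video": "edit in standard definition",
        "audio": "normal stereo soundtrack, mixed in the suite",
        "output": "Digital Betacam",
    },
    "hd_new_program": {
        "video": "edit in high definition",
        "audio": "elements in place; final mix in audio post (tracks via OMF or other means)",
        "output": "HDCAM master",
    },
    "hd_revision": {
        "video": "modify an existing HD program",
        "audio": "5.1 surround untouched; Dolby E pair plus stereo pair on the master",
        "output": "HDCAM master",
    },
}
```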
Incoming material would be delivered on HDCAM (Discovery house format), D5 HD (some older programs), Digital Betacam (SD programs), DA-88 (audio stems for SD programs or 5.1 surround mixes), DAT and other miscellaneous formats.
Audio handling, in particular, would be tricky due to the requirement of bringing as many as eight tracks into the Avid|DS, from all the above formats, in a plant with a primarily analog audio infrastructure. In this case, it was decided to handle audio for SD production as embedded AES (which was already in use for 10 other nonlinear rooms) and to handle audio from DA-88 or DAT as analog into the suite mixer. Audio tracks from HDCAM or D5 tapes could be brought into the mixer as discrete AES.
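That ingest decision can be sketched as a simple lookup, assuming the format-to-path mapping just described (the dictionary and function are illustrative only, not part of any real routing control):

```python
# Illustrative mapping of source format to audio path into the suite.
AUDIO_INGEST_PATH = {
    "Digital Betacam": "embedded AES on the SDI feed",
    "HDCAM": "discrete AES into the suite mixer",
    "D5 HD": "discrete AES into the suite mixer",
    "DA-88": "analog into the suite mixer",
    "DAT": "analog into the suite mixer",
}

def ingest_path(source_format: str) -> str:
    """Return the assumed audio path for a given source format (sketch only)."""
    return AUDIO_INGEST_PATH.get(source_format, "handle case by case")
```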
Video strategy and signal path
The video portion of this system is surprisingly simple, and comparable to most nonlinear systems using serial component digital (SDI) signals. The Avid|DS has separate inputs and outputs for SD (SMPTE 259) and HD (SMPTE 292) signals, and a single input for reference. Since the DS was being integrated into a small HD “island,” which includes VTRs and patching, signal paths were provided for SD and HD in and out of the Avid and to the monitors in the suite.
AES audio passes between the patchbays and the Avid via a Yamaha 01V mixer. A Martinsound MultiMAX was added to the suite to give editors a wide range of monitoring options. Photo by John Spaulding, Discovery Communications.
A video distribution amp supplies reference to all system devices. The input to this DA is patchable, which allows the system to be locked to various sync rates. The standard reference signal is NTSC blackburst, which is accepted by the Avid and other devices for editing in most 59.94 frame rate formats. (Discovery's house format is 1080/59.94i.) Also patchable is tri-level sync in 1080/23.98p and 24p flavors for editing in these formats.
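That reference strategy amounts to a small lookup per editing format. The sketch below follows the formats named in the article, but the structure itself is hypothetical:

```python
# Illustrative reference-signal choices for the patchable sync DA input.
REFERENCE_FOR_FORMAT = {
    "1080/59.94i": "NTSC blackburst (standard house reference)",
    "1080/23.98p": "tri-level sync, 23.98p",
    "1080/24p": "tri-level sync, 24p",
}

def reference_for(edit_format: str) -> str:
    """Pick the reference to patch into the sync DA for a given editing format."""
    return REFERENCE_FOR_FORMAT.get(edit_format, "consult the project spec before patching")
```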
Video monitoring in the edit suite is via a Videotek VTM-420SD/HD “rasterizing scope” (which displays waveform and metering on a standard SVGA computer display) and a Sony BVM monitor. The VTM unit has two SDI inputs, which auto-detect both SD and HD signals and change the display accordingly. The design intent was to give the editor a monitor input (and corresponding patch) to see an incoming VTR, and another to see the Avid.

Although the Sony monitor has separate input cards for SD and HD signals, the Videotek has only two discrete, non-looping inputs, so there could be only two patches to feed both devices. The simple approach appeared to be taking the feed from a patch point, passing (active loop) through the BVM HD card, then the SD card, then terminating at the VTM. Unfortunately, this proved not to be feasible because the Sony HD card would not pass SD signals, and vice versa. The problem was solved with some HD DAs from AJA Video, which will pass anything from 270Mb/s on up. So, each of the two monitoring patches was split with a DA and fed to the VTM and both types of BVM inputs. The BVM is programmed with two operator “channels” for SD and two for HD, corresponding to whatever is plugged into the two patch points.
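The resulting monitor fan-out can be written out as a simple connection list (device names are from the text; the data structure is only a sketch):

```python
# Each monitoring patch point is split by an AJA DA and fed to three loads:
# the VTM input plus both the HD and SD input cards of the BVM.
MONITOR_FANOUT = {
    "monitor patch A (editor patches the desired VTR here)": [
        "VTM-420SD/HD input A", "BVM HD input card", "BVM SD input card",
    ],
    "monitor patch B (normalled from the Avid output)": [
        "VTM-420SD/HD input B", "BVM HD input card", "BVM SD input card",
    ],
}
```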
Operationally, the editor must patch reference to the suite (or leave the normalled black) and patch the desired VTR into the Avid and the “A” monitor patch (the Avid output is normalled to the “B” monitor patch). The VTM is programmed with several presets that associate a video input (A or B) with the necessary audio to be metered (see below).
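Those presets might be represented roughly as follows; the preset names and metering assignments are invented, since the article says only that each preset pairs a video input with the audio to be metered:

```python
# Hypothetical VTM preset table pairing a video input with a metered audio source.
VTM_PRESETS = {
    "vtr_check": {"video_input": "A", "metered_audio": "incoming VTR channels"},
    "avid_output": {"video_input": "B", "metered_audio": "Avid AES returns"},
    # ...further presets for the surround and Dolby E cases discussed below
}
```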
Audio strategy and signal path
The audio portion of this system is where things really get fun! Even though the room would not be creating “finished” surround audio, it was felt that the editor should always be able to listen to a mix in context — meaning stereo or surround — while editing the picture. It was also necessary to give the editor some control over audio going into (digitizing) and out of the Avid for building tracks or working on an SD project. This combination proved to be exceedingly difficult to accommodate with a modest-sized digital mixer and eventually required the addition of a “surround monitoring processor” to help manage all the options.
First, a very quick primer on surround audio. For the purposes of Discovery's HD Theater, the concern is only with surround in the Dolby Digital 5.1 format (also known as AC3 when encoded for consumer delivery). Dolby Digital specifies channels for left, right, center, low-frequency effects (LFE) and stereo surrounds. It also specifies a library of metadata information that can be carried with the audio stream and used to control functions in the viewer's home decoder. Typically, the audio mix is created in a conventional audio post room with the metadata added during this process, and then the final result is dubbed onto a pair of VTR channels using Dolby-E encoding (Dolby's format for “transport” of up to eight AES channels in a single AES pair). At the transmission end, the Dolby-E tracks are decoded back to discrete 5.1 and then re-encoded into AC3 for the consumer. The metadata is passed along in the AC3 stream and is used by the decoder at home. (Extensive information on all these topics is available on the Dolby Web site, www.dolby.com.)
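The delivery chain described above can be condensed into a short list of stages (a paraphrase of the article, not any actual Dolby tooling):

```python
# Surround delivery chain for a 5.1 program, stage by stage.
DOLBY_CHAIN = [
    "mix 5.1 (L, R, C, LFE, Ls, Rs) in audio post and author the metadata",
    "Dolby E encode: up to eight AES channels carried on a single AES pair",
    "lay the Dolby E pair onto two channels of the HDCAM master",
    "at transmission, decode Dolby E back to discrete 5.1",
    "re-encode to AC3 (Dolby Digital) with the metadata passed through",
    "the viewer's decoder applies the metadata in the home",
]
```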
The Avid|DS HD suite is used to edit, mix and create high-definition programming at the Discovery Communications’ technical center in Bethesda, MD. Photo by John Spaulding, Discovery Communications.
In the Avid|DS suite, AES audio passes between the patchbays and the Avid via a Yamaha 01V mixer (many of which were already in use in the facility). The 01V, with an optional card, can handle eight channels of AES input and output, plus 16 analog inputs, and has 10 internal buses for routing. This arrangement was sufficient to handle four channels of AES from patch to Avid (digitize) and four channels from Avid back to patch (output), leaving a pair of buses for monitoring in stereo — which was fine for editing in SD. However, it would be impossible to also monitor six channels of surround audio since only two buses were available. In addition, there would be situations where it was desired to pass eight channels of audio from the Avid into the Dolby-E encoder and, again, this would preclude the ability to monitor in surround.
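The bus arithmetic behind that limitation is easy to check with the numbers quoted above:

```python
# Bus budget on the 01V as configured: 10 internal buses in total.
TOTAL_BUSES = 10
digitize_buses = 4   # AES from patch into the Avid
output_buses = 4     # AES from the Avid back to patch
monitor_buses = TOTAL_BUSES - digitize_buses - output_buses  # 2 remain

SURROUND_CHANNELS = 6  # L, R, C, LFE, Ls, Rs
assert monitor_buses == 2                      # enough for a stereo monitor mix
print(monitor_buses >= SURROUND_CHANNELS)      # False: 5.1 monitoring won't fit
```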
Compromise was needed. It was decided to limit some mixer functionality in certain cases and to add a Martinsound MultiMAX monitor processor. The MultiMAX provides a variety of “wide” (eight-channel) and stereo inputs that can be selected and routed to various multichannel speaker systems. Although it does not handle Dolby Digital metadata, it does provide the ability to audition a 5.1 mix “downmixed” to stereo or mono. It also provides speaker mute and solos, volume control and dim, thus becoming the clearinghouse for all listening audio in the room.
It was also important to keep an already complicated system as operator-friendly as possible. One way to help was to avoid changing the function of mixer faders whenever possible. Therefore, the first four mixer channels were designated as “From the VTR” (either AES or analog), the next four became “From the Avid” (AES), and the last eight were fixed as analog returns from the Avid used solely to feed the MultiMAX. By carefully arranging which buses fed where (and writing seven mixer scene presets), it was possible to handle all required functions with a minimum of variation or need for the editor to fuss with the mixer.
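A sketch of that fixed layout follows; the channel assignments paraphrase the article, while the scene names are invented placeholders (the article notes only that seven scene presets were written):

```python
# Fixed fader assignments on the 01V, so channel functions never move.
CHANNEL_LAYOUT = {
    "ch 1-4":  "from the VTR (AES or analog)",
    "ch 5-8":  "from the Avid (AES)",
    "ch 9-16": "analog returns from the Avid, feeding the MultiMAX only",
}

# Seven scene presets cover the required routing combinations; names are placeholders.
SCENE_PRESETS = [
    "sd_digitize", "sd_output", "hd_digitize", "hd_output",
    "surround_monitor", "dolby_e_layback", "stereo_downmix_check",
]
assert len(SCENE_PRESETS) == 7
```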
Once the room was put into operation, it took several days to try various types of projects and debug the workflow. Feedback from the editors suggested a few improvements to the original design, but it basically works the way it was intended. No question, though, it's complicated.
In another scenario it might be valuable to have a larger digital mixer in the system. However, this would probably not eliminate the need for a device like the MultiMAX for monitoring, the beauty of which is that it clearly delineates the available listening sources and makes it easy to verify that tracks have the correct content (such as by auditioning individual speakers). Unfortunately, there is no way to make working in HD much easier overall. And it is only going to get more complicated as HD becomes more mainstream, which will increasingly require engineers to find novel ways of solving peculiar problems!
Eric Wenocur is the owner/principal of Lab Tech Systems, a technical consulting and systems design company in the Washington, DC, area.