Content Life Cycle III: The Hidden Agenda

Video servers steadily add feature sets that improve the workflow of a broadcast facility. Software and hardware components covering the gamut from integrated conversion between SD and HD, to cross conversion between 1080i and 720p, to SDTI-CP transport and FireWire (IEEE 1394, i.LINK) interfaces for DV25 are now routine; but that was not always the case.

At the dawn of video servers, conversions were supported both inside and outside the box, so to speak. Early NTSC and component analog video, as well as PCM AES digital audio, were handled internally; but as facilities went digital, analog I/O conversion moved to outside devices. Initially, embedded audio in an SDI stream wasn’t available, nor were HD inputs or MPEG file systems. Video was first encoded as motion-JPEG, and audio was interleaved on a separate track or stored internally in some other form, e.g., MPEG-1.

Fast forward a decade: we now see SD crossing over to HD, and with this transition we still find variations in input and output parameters, along with corresponding adjustments in workflow, signal system design and complexity. HD inputs were first offered as ASI inputs that required external MPEG-2 long-GOP encoders; HD outputs were also ASI and required external HD decoders.

Not long after the introduction of HD, server outputs moved to native SMPTE 292M HD-SDI interfaces, with some retaining ASI outputs as an option.

Today, software codecs for HD baseband I/O are available on some servers, yet some vendors still emphasize a best-of-breed external HD encoder as the way to tailor image capture to application requirements. Depending on the application in which the server is placed, both approaches have their place.

SANS CODECS

With the baseband I/O circle just about complete, emphasis has moved to direct file transfers into storage systems without a codec. These technologies have not been without complications and frustrations for the facility operator. Managing video at the file level is not a proven science with a universal solution. Flipping content from one file wrapper, format or compression scheme to another native format, wrapped or not, is no easy task, and it carries its own set of issues.

A matrix of conversion requirements would show a surprising number of operations needed to transform the video, audio and metadata into something that looks and acts reasonably the same. Today’s software transcoding systems do an adequate job with the routine compression protocols.
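
To make the idea concrete, here is a minimal sketch of such a conversion matrix in Python; the wrapper/codec pairs and the operation lists are hypothetical illustrations, not drawn from any particular product.

```python
# Illustrative sketch of a conversion-requirements matrix (hypothetical
# entries). Each (source, target) pair maps to the operations a
# transcoding system would have to perform to move content between them.

CONVERSION_MATRIX = {
    # (source wrapper/codec, target wrapper/codec): required operations
    ("MXF/DV25", "GXF/MPEG-2"): ["rewrap", "transcode", "chroma-resample",
                                 "caption-translate", "metadata-map"],
    ("GXF/MPEG-2", "MXF/MPEG-2"): ["rewrap", "metadata-map"],  # same essence
    ("MXF/DV25", "MXF/DV25"): ["copy"],                        # no change
}

def required_operations(source: str, target: str) -> list[str]:
    """Look up the work needed to move content between two formats."""
    try:
        return CONVERSION_MATRIX[(source, target)]
    except KeyError:
        raise ValueError(f"No conversion path defined: {source} -> {target}")

if __name__ == "__main__":
    for (src, dst), ops in CONVERSION_MATRIX.items():
        print(f"{src:>12} -> {dst:<12} requires: {', '.join(ops)}")
```

Even in this toy form, the point is visible: only one of the three paths is a simple copy, and the hardest path touches essence, wrapper and metadata all at once.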

Yet repeated transcoding may cause visual degradation or the loss of important data. Issues such as 608/708 closed-caption translation, the passing of AMOL (i.e., Nielsen data), watermarking, and transcoding from MPEG to DV and back to MPEG present constraints that require adjustments.

For example, when rewrapping and transcoding at the file level from an MXF-wrapped 25 Mbps DV image with 4:1:1 sampling to a low bit-rate, GXF-wrapped MPEG-2 long-GOP format with 4:2:0 sampling, the type of material (sports versus talking heads, for instance) can produce noticeable changes in image quality once it is transcoded.
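
As one illustration, a transcode of this kind could be driven through a command-line tool such as ffmpeg. The Python sketch below assumes ffmpeg is installed and on the path; the file names are hypothetical, and a real GXF workflow may impose further muxer constraints beyond the options shown.

```python
# A minimal sketch of the MXF/DV25-to-GXF/MPEG-2 transcode described
# above, driven through ffmpeg as an external process.
import subprocess

def transcode_dv_to_mpeg2(source: str, target: str, bitrate: str = "4000k"):
    """Rewrap and transcode DV25-in-MXF to long-GOP MPEG-2-in-GXF."""
    cmd = [
        "ffmpeg",
        "-i", source,            # MXF-wrapped DV25 input
        "-c:v", "mpeg2video",    # re-encode video as MPEG-2
        "-b:v", bitrate,         # low bit-rate target
        "-g", "15",              # long-GOP structure (15-frame GOP)
        "-pix_fmt", "yuv420p",   # resample chroma to 4:2:0
        "-c:a", "pcm_s16le",     # GXF carries uncompressed PCM audio
        "-ar", "48000",
        target,                  # .gxf extension selects the GXF muxer
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    transcode_dv_to_mpeg2("spot_dv25.mxf", "spot_mpeg2.gxf")
```

Note that nothing in this command knows whether the source is sports or talking heads; the encoder’s rate control simply works harder on high-motion material, which is exactly where the quality differences appear.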

Knowing the intended uses of your content for the final product is critical. If the content is to be file-transferred and then transcoded again for craft editing on an NLE, you should know as early as possible how the final piece will get back to the server’s native format for play-out.

Care must be given to selecting the bit rate, sampling parameters and actual encoding formats. Considerations such as image motion, contrast ratios and depth of color in the image all become components of the toolset selection.

Looking back through the development of videotape formats, this is not unlike what happened when 3/4-inch U-matic, 1-inch C-format and Betacam or MII were mixed on the operational side of television.

How these analog signals were processed when moving between color-under recording (for U-matic) and component mapping (for Betacam and MII) made a significant difference in image quality at play-out.

Similar sets of conditions are evident in the digital domain. In compression, much of the image data is discarded and becomes unrecoverable. “Generational loss” takes on a new dimension.

MOVING TARGET

Television imaging continues to be a moving target, with the repurposing and reusing of content high on broadcasters’ agendas. Today’s server platforms offer many advantages to fulfill that objective, as evidenced by the transformation when servers moved from motion-JPEG to MPEG or DV compression.

Encoding and storage considerations for content are escalating. While a baseband capture at 8 Mbps was once adequate for a record-and-air model, that may no longer suffice for content recorded as SD and then internally upconverted to HD at play-out.
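
A quick back-of-the-envelope calculation shows why the choice of capture rate matters to storage planning. The sketch below uses rates chosen for illustration only, and ignores audio, overhead and file-system slack for simplicity.

```python
# Back-of-the-envelope storage arithmetic for candidate capture bit rates.

def gigabytes_per_hour(bitrate_mbps: float) -> float:
    """Convert a video bit rate in Mb/s to storage consumed in GB per hour."""
    bits_per_hour = bitrate_mbps * 1_000_000 * 3600
    return bits_per_hour / 8 / 1_000_000_000   # bits -> bytes -> GB

if __name__ == "__main__":
    for rate in (8, 25, 50):   # e.g., SD long-GOP, DV25, HD long-GOP
        print(f"{rate:>3} Mbps = {gigabytes_per_hour(rate):5.1f} GB/hour")
```

At 8 Mbps an hour of material consumes roughly 3.6 GB; stepping up to an HD-friendly 50 Mbps multiplies that by more than six, which compounds quickly across a library.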

Users should now experiment with different encoding formats and bit rates, especially when building an HD facility that will rely more on HD play-out than on SD.

The latest variable in the new content life cycle equation is metadata, those bits about the bits that seem to mysteriously crop up all over broadcast systems.

Metadata is akin to a less familiar set of information called “user bits” in a SMPTE 12M timecode track. Here, additional information that tracked the circumstances of the recording, e.g., running clock time, version and dates, cut numbers, etc., could be carried. Recognizing the physical limitations of user bits, the designers of baseband digital systems, as well as of compression systems such as MPEG and DV, set aside areas of the digital data space to carry information beyond the visual and audio content.
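
For a sense of how little room user bits offered, the sketch below packs eight BCD digits into the 32 available bits, organized as eight 4-bit binary groups; the date-plus-cut-number layout shown is purely illustrative, not a layout defined by SMPTE 12M.

```python
# A minimal sketch of packing information into SMPTE 12M user bits:
# eight 4-bit "binary groups" give 32 bits in total, commonly used to
# carry eight BCD digits.

def pack_user_bits(digits: str) -> int:
    """Pack exactly eight decimal digits into a 32-bit user-bits word."""
    if len(digits) != 8 or not digits.isdigit():
        raise ValueError("user bits hold exactly eight BCD digits")
    value = 0
    for d in digits:               # one 4-bit binary group per digit
        value = (value << 4) | int(d)
    return value

def unpack_user_bits(value: int) -> str:
    """Recover the eight BCD digits from a 32-bit user-bits word."""
    return "".join(str((value >> shift) & 0xF) for shift in range(28, -4, -4))

if __name__ == "__main__":
    word = pack_user_bits("24071503")   # e.g., date YYMMDD plus cut number 03
    print(f"user bits = 0x{word:08X} -> {unpack_user_bits(word)}")
```

Thirty-two bits per frame is enough for a date and a cut number, and not much more; modern metadata outgrew that budget almost immediately.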

Metadata carries the users’ own information, but it also allows detailed control information, such as audio encoding parameters (i.e., dialnorm and surround mode) or the format in which the image was originally intended to be displayed (i.e., its proper aspect ratio for the active frame), to be linked to the material content itself.

Metadata must not only be inserted into the signal system, it must also be transported and preserved without degradation or distortion. Video server systems, just like terminal equipment, must recognize, parse, append, store and preserve all forms of metadata for reconstruction. The challenge is further amplified when the content is transcoded and wrapped or rewrapped, whether within a closed system or in an open system at another facility.
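
One practical safeguard is to audit the metadata on either side of a transcode or rewrap. The sketch below compares fields before and after the process; the field names (dialnorm, surround_mode, aspect_ratio, captions) are illustrative, not a standardized schema.

```python
# A minimal sketch of verifying that metadata survives a transcode or
# rewrap: compare the fields present before and after the process.

REQUIRED_FIELDS = {"dialnorm", "surround_mode", "aspect_ratio", "captions"}

def metadata_losses(before: dict, after: dict) -> dict:
    """Report required fields that were dropped or altered in transit."""
    issues = {}
    for field in REQUIRED_FIELDS:
        if field not in after:
            issues[field] = "dropped"
        elif before.get(field) != after.get(field):
            issues[field] = f"changed: {before.get(field)!r} -> {after[field]!r}"
    return issues

if __name__ == "__main__":
    src = {"dialnorm": -24, "surround_mode": "5.1",
           "aspect_ratio": "16:9", "captions": "CEA-708"}
    out = {"dialnorm": -24, "aspect_ratio": "4:3"}   # transcoder dropped fields
    for field, problem in metadata_losses(src, out).items():
        print(f"{field}: {problem}")
```

A check like this catches silent losses, such as a dropped caption track or an aspect-ratio flag rewritten by a careless transcoder, before the material reaches air.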

Our next installment will delve into how metadata is handled in the media server environment.

Karl Paulsen
Contributor

Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. For over 25 years he has written on featured topics in TV Tech magazine—penning the magazine’s Storage and Media Technologies and Cloudspotter’s Journal columns.