Content Life Cycle II: Time To Address a Real Need
The path of electronically generated media files throughout a broadcast plant touches many elements, passing through various server platforms and on to viewers through a growing number of distribution channels.
The content life cycle begins at the point of capture, moves through ingest and storage, continues into the editorial process, returns to storage for release preparation, and ends at transmission and emission. This flow places media server technologies in many sectors of the moving-media industry and, in turn, drives new developments in transport and compression technologies, authoring and distribution, digital media asset management and more.
With each new enabling technology comes an ongoing need for the harmonization of media file types, their interchange, translation and transcoding. More than ever, manufacturers are recognizing the importance of establishing file infrastructure harmonization. No longer will fixed proprietary systems be acceptable.
The industry now expects excellence in compressed image quality, regardless of capture or release format. For broadcasters, the future of the digital media business will depend on how well they transform and repurpose assets for a wider set of displays, uses and audiences.
From the high resolutions of digital cinema and HD down to the comparative microresolutions of mobile handheld, cellular and portable media devices, never has there been more demand for a unified approach to the interchanges required to repurpose and redistribute content, an approach the modern digital media server makes possible.
MORE THAN STORAGE
Servers are no longer just the storage depot; they are the conduit through which all media files traverse. Servers are used as buffers, as transcoders, as gateways to other systems, and as editing and play-out devices. For much programming, broadcasters now depend on servers in the same way they used to depend on videotape.
With broadcast content delivered mostly via satellite services, whether over full analog transponders, real-time MPEG or IP-encapsulated trickle streams, the first stop in the broadcast content life cycle is a catch server. Also known as a “media distribution platform,” this cache reassembles all the bits, temporarily storing the files until signaled for migration to the next segment of their life cycle.
Catch servers are remotely controlled, with content placed and managed principally under control of the service provider. The native formats for these files are governed by an encoding scheme and transport method that the broadcaster has little control over—and they all vary from provider to provider.
Files or program streams may be MPEG-2 or MPEG-4, with each service provider’s scheme tailored to make its system perform best for its business purpose. Yet each variation in a provider’s scheme comes with a price. Files cached to these catch servers must nearly always be transcoded to a file structure compatible with the ingest server or the local play-out server’s native file format. This can be as simple as rewrapping the file or far more complex, as in an MPEG-to-DV conversion.
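To make that distinction concrete, here is a minimal sketch using the open-source ffmpeg command-line tool rather than any particular vendor’s transcoder; the file names, wrappers and codec settings are hypothetical examples only.

```python
import subprocess

def rewrap(src: str, dst: str) -> None:
    """Change only the wrapper; the compressed essence is copied untouched,
    so the operation is fast and lossless (assuming the target wrapper can
    legally carry the copied codecs)."""
    subprocess.run(["ffmpeg", "-i", src, "-c", "copy", dst], check=True)

def transcode_to_dv(src: str, dst: str) -> None:
    """Full decode and re-encode to DV25 for a play-out server whose native
    essence is DV; far slower than a rewrap and incurs a generation loss."""
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c:v", "dvvideo", "-s", "720x480", "-pix_fmt", "yuv411p",  # NTSC DV25 raster
         "-r", "30000/1001",
         "-c:a", "pcm_s16le", "-ar", "48000", "-ac", "2",            # audio as DV expects
         dst],
        check=True,
    )

# Hypothetical usage for a file sitting on a catch server:
rewrap("provider_feed.ts", "ingest_copy.mov")            # wrapper change only
transcode_to_dv("provider_feed.ts", "playout_copy.mov")  # essence conversion
```

The rewrap finishes in a fraction of real time because no pixels are touched; the DV conversion is the kind of full translation that ties up compute for roughly the program’s running length.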
Migration to another serving platform is a less-than-straightforward process, generally necessitating yet another set of third-party devices, each with its own server hardware, interchange constraints, and licensing or software updates that, again, are outside the end user’s control.
When a decision is made to receive program X via media delivery service Y, the requirements for that hardware platform and its software interface become fairly straightforward. Systems can be installed and placed into service much like a traditional satellite IRD, except that these servers now offer both baseband and file-transfer capabilities.
The issues arise when you need to move a wrapped file from catch server to ingest or play-out server, or when you wish to edit that content without decoding to baseband video on an NLE.
This routinely requires another file translation, and possibly another server or another license, to rewrap the file into the next platform’s native format, plus another metadata interchange, before editing can commence. When the edit is completed, it is not unusual to require yet another transcode as the file is transferred back to a play-out server, along with another metadata interchange into a database that automation, archive and/or traffic manager subsystems must now understand.
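What that metadata interchange has to carry can be sketched, in highly simplified and entirely hypothetical form, as a record that follows the file from system to system; real automation, archive and traffic databases track far more fields than this.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TranslationRecord:
    """Hypothetical per-file record; real systems also carry rights,
    segmentation, caption, audio-layout and timing data."""
    house_id: str            # identifier shared by automation, archive and traffic
    source_wrapper: str      # e.g., "MPEG-2 transport stream"
    source_essence: str      # e.g., "MPEG-2 MP@ML"
    target_wrapper: str
    target_essence: str
    history: List[str] = field(default_factory=list)  # every rewrap/transcode applied

def log_translation(rec: TranslationRecord, step: str) -> None:
    # Each translation appends to the history so downstream QC and archive
    # systems can see how many generations the material has been through.
    rec.history.append(step)

rec = TranslationRecord("PGM123456", "MPEG-2 transport stream", "MPEG-2 MP@ML",
                        "QuickTime", "DV25")
log_translation(rec, "catch server -> ingest rewrap")
log_translation(rec, "ingest -> NLE transcode")
log_translation(rec, "NLE -> play-out transcode")
```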
TRACKING TRANSLATIONS
Fortunately, file transcoding and rewrapping platforms are well established for the more common applications and file formats. Still, complications occur when a file must serve double duty, when a frame-accurate proxy must be created for preview and timing purposes, or when the metadata in the databases that track all these material translations and versions must work seamlessly and reliably.
Tuning up a full system to the point where human validation becomes unnecessary for each successive process is a daunting task. The computing horsepower needed to get all the bits to align properly, without noticeable aberrations or reductions in image quality, is enormous. File transcoding must respond to each translation in a specific way, yet be flexible enough to recognize the type and structure of the file without question. Users must have confidence that, once the systems are tuned up, another software revision in one element of the chain won’t cause someone else’s decoder to do the unexpected.
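One way to give a transcoder that flexibility is to probe each arriving file before committing it to a translation path. The sketch below leans on ffprobe (part of the ffmpeg suite) and a deliberately simplistic, hypothetical routing rule.

```python
import json
import subprocess

def probe(path: str) -> dict:
    """Ask ffprobe for the container and stream layout as JSON."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def choose_path(path: str) -> str:
    """Very rough routing decision: rewrap when the essence already matches a
    (hypothetical) play-out server's native codec, otherwise fully transcode."""
    info = probe(path)
    video = next((s for s in info["streams"] if s["codec_type"] == "video"), None)
    if video is None:
        return "reject: no video stream found"
    container = info["format"]["format_name"]
    if video["codec_name"] == "mpeg2video":
        return f"rewrap only ({container} -> native wrapper)"
    return f"full transcode ({video['codec_name']} -> native codec)"

print(choose_path("provider_feed.ts"))  # hypothetical incoming file
```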
Today, users generally have no idea something went wrong in a file translation until the content is played out at a QC station or directly to air. Improperly configured systems can result in dropped closed captioning, loss of video, audio-to-video offsets, or no usable output at all. Given that most translations run at roughly real time, a two-hour movie wrongly translated means four hours of lost work: two to produce the bad copy and two to redo it.
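Some of those failures can be caught programmatically before the material ever reaches a QC station. A rough sketch of such a check, again using ffprobe and with hypothetical file names and thresholds, might compare the translated file against its source.

```python
import json
import subprocess

def probe(path: str) -> dict:
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout)

def qc_check(source: str, translated: str, tolerance_s: float = 0.5) -> list:
    """Flag missing video, missing audio, apparently dropped captions and
    gross duration drift. The tolerance value is an arbitrary example."""
    problems = []
    src, dst = probe(source), probe(translated)

    kinds = {s["codec_type"] for s in dst["streams"]}
    if "video" not in kinds:
        problems.append("no video stream in translated file")
    if "audio" not in kinds:
        problems.append("no audio stream in translated file")

    # Newer ffprobe builds report embedded captions on the video stream;
    # caption carriage varies, so treat this as advisory only.
    def has_captions(info):
        return any(s.get("closed_captions") for s in info["streams"]
                   if s["codec_type"] == "video")
    if has_captions(src) and not has_captions(dst):
        problems.append("closed captions appear to have been dropped")

    drift = abs(float(src["format"]["duration"]) - float(dst["format"]["duration"]))
    if drift > tolerance_s:
        problems.append(f"duration differs by {drift:.2f} s")
    return problems

# Hypothetical usage after a catch-server-to-play-out translation:
for issue in qc_check("provider_feed.ts", "playout_copy.mov"):
    print("QC:", issue)
```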
To support the hands-off management and quality assurance segments of the content life cycle, new technologies that work at the file level are emerging, with some solutions already in the marketplace and others on the horizon. The next installment of this column will look further into how these technologies fit into the workflow and how they might be integrated into a media server platform.
Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. For over 25 years he has written on featured topics in TV Tech magazine—penning the magazine’s Storage and Media Technologies and Cloudspotter’s Journal columns.