Server Interoperability
Following the initial development of motion-JPEG disk-based recording, the migration to MPEG encoding and the move toward a tapeless operating environment, many of the most recent advances in media server implementation have been driven by the distribution of program content over satellite, the deployment of higher bit-rate imaging and newer, more efficient workflow requirements inside the facility.
As the mix of data rates, file structures and formats for higher-quality imaging in high-definition video and digital cinema continues to proliferate, the server environment must be ready to transport, exchange, manipulate and convey data at rates from 50 Mb/s to upwards of 7.5 Gb/s. Core server requirements will demand improvements in storage systems, server interfaces and interoperability in order to keep pace with this growth.
The recent adoption of SMPTE 377M, the Material Exchange Format (MXF), along with its predecessor SMPTE 360M, the General Exchange Format (GXF), has set the tone for interoperability and the interchange of meaningful data and formats between file-based, media-centric devices. These and numerous other supporting standards have cleared the path for the next generation of media server deployment.
ADVANCED EXCHANGES
The most recent buzzword in the video media server world has been MXF. MXF was designed for the transfer of finished and source material, and its commercial implementation is beginning to surface in both media servers and other production products. In addition, the Advanced Authoring Format (AAF), designed essentially for post production, has already been deployed in news production and other applications, including editorial.
The interoperability capabilities of these standardized and de facto formats should now enable any new format to integrate into the facility at the news and production level (such as nonlinear editors), at the ingest and content distribution levels (tape, satellite IRD and Internet), at central storage (nearline and archive), or at the on-air/online and contribution levels.
Each of these formats was developed with numerous handles and hooks in its syntax so that not only the essence (audio, video, graphics) but also the essential metadata related to that essence (described as the "bits about the bits") can be exchanged and can interoperate across many hardware- and software-based systems. A key element of these formats is the development of metadata standards, with their information-cataloging and descriptive characteristics. Work on such segments as metadata dictionaries, KLV (key-length-value) coding, unique material identification and a centralized data registration authority has been ongoing for well over a decade.
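As a rough illustration of how KLV coding packages a single metadata item, the sketch below is a minimal, assumption-laden example rather than an implementation of any SMPTE specification: it encodes and decodes one triplet using a 16-byte Universal Label key and a BER-coded length, and the key shown is hypothetical.

```python
def encode_klv(key: bytes, value: bytes) -> bytes:
    """Package one metadata item as a key-length-value triplet."""
    assert len(key) == 16, "SMPTE Universal Label keys are 16 bytes"
    n = len(value)
    if n < 128:
        length = bytes([n])                                      # BER short form
    else:
        size = (n.bit_length() + 7) // 8
        length = bytes([0x80 | size]) + n.to_bytes(size, "big")  # BER long form
    return key + length + value

def decode_klv(buf: bytes) -> tuple[bytes, bytes]:
    """Return (key, value) from the start of a KLV-coded buffer."""
    key, first = buf[:16], buf[16]
    if first < 0x80:
        n, offset = first, 17
    else:
        size = first & 0x7F
        n = int.from_bytes(buf[17:17 + size], "big")
        offset = 17 + size
    return key, buf[offset:offset + n]

# Hypothetical 16-byte key; real Universal Labels are assigned through the SMPTE registry.
example_key = bytes.fromhex("060e2b34") + bytes(12)
packet = encode_klv(example_key, b"Evening News, Segment 3")
print(decode_klv(packet))
```

Because every item carries its own key and length, a receiving system can skip over items whose keys it does not recognize, which is part of what lets these wrappers carry metadata between otherwise dissimilar devices.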
Prior to the onset of any industry standardization of metadata, each individual product manufacturer would decide what descriptive information, such as clip length (duration), start of message (SOM), number of audio tracks and descriptions of the content (catalog or house numbers), would be created and carried with the essence elements.
Each manufacturer's system would have its own format with varying degrees of information and its own method of capture and transport. Seldom was there any common denominator that would permit the interchange of (meta)data, even when, or if, the content essence was moved among the various system components, such as from ingest to storage, to an editor and then back to storage. If external metadata were collected, they might have been assembled into an external database or a material management system, today referred to as a Media Asset Management (MAM) system, but in most cases the major purpose of that metadata was to serve the internal operational requirements of the device for which it was collected.
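To make that interchange problem concrete, here is a purely illustrative sketch in which two imaginary vendors describe the same clip with different field names; every field name and value below is hypothetical, not drawn from any actual product.

```python
from dataclasses import dataclass

# Hypothetical vendor records describing the same 30-minute clip.
vendor_a_clip = {"dur_frames": 54000, "som_tc": "01:00:00:00",
                 "aud_trks": 4, "house_no": "NEWS-0417"}
vendor_b_clip = {"length_tc": "00:30:00:00", "start_of_message": "01:00:00:00",
                 "audio_channels": 4, "catalog_id": "NEWS-0417"}

@dataclass
class ClipMetadata:
    """A neutral record of the kind a shared metadata dictionary makes possible."""
    duration_frames: int
    som_timecode: str
    audio_tracks: int
    house_number: str

def from_vendor_a(rec: dict) -> ClipMetadata:
    """One of the many per-vendor mappings needed when no common standard exists."""
    return ClipMetadata(rec["dur_frames"], rec["som_tc"],
                        rec["aud_trks"], rec["house_no"])

print(from_vendor_a(vendor_a_clip))
```

Without a standardized dictionary, every pair of devices needs a hand-built mapping like the one above; with one, each vendor maps once to the common record rather than once to every other vendor.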
METADATA AGNOSTICISM
Today, the intent of descriptive metadata has been extended much further. Metadata can be linked into a MAM system, where it can be searched, manipulated and managed while remaining synchronized with the content essence it is associated with. Metadata can now have both internal (vendor-specific) and external value.
Not too many years ago, the process of converting between file formats ranged from limited to extremely complex to nonexistent. Now, another significant development, the concept of a true file format interchange, is well on its way to becoming a reality.
The goal of finding a common native file format ("native" meaning the file structure that describes an internal format for storage) is now much less important. The manufacturers of media servers and peripheral products have an opportunity to adopt an industry standard as the wrapper, or common denominator, for interchange, rather than adapting each of their products to match everyone else's.
Internally, the selection of the native file format is optimized for the performance of the file storage system and its internal components (i.e., drive and RAID controllers, bus architectures, interconnections within the storage system itself, etc.). Because the internal native file format of one server's storage platform could seldom, if ever, be directly interchanged with a foreign storage platform, the common denominator for interchange among earlier server products reverted to either analog NTSC video and audio or 270 Mb/s serial digital interface (SDI) video and AES audio, carried almost exclusively over coaxial cable with BNC connectors.
For any interchange format to be successful, it must be able to accommodate many different devices and ultimately allow content (essence) and metadata to be exchanged fluidly between those devices. The interchange format must allow for a timely conversion process, preferably one that can occur in real time or faster. The interchange must not alter the quality or integrity of the original material, and it should be able to be carried over a variety of physical media, including copper or fiber, and as IP over Gigabit Ethernet, Fibre Channel or ATM.
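As a back-of-the-envelope check on the "real time or faster" requirement, the short sketch below estimates how much faster than real time a file transfer might run over Gigabit Ethernet; the encoding rate and link-efficiency figures are assumptions chosen for illustration only.

```python
# Rough estimate of file-transfer speed relative to real-time playout.
content_rate_mbps = 50     # assumed encoding rate of the program material (Mb/s)
link_rate_mbps = 1000      # Gigabit Ethernet line rate (Mb/s)
link_efficiency = 0.6      # assumed fraction usable after protocol overhead

speedup = (link_rate_mbps * link_efficiency) / content_rate_mbps
print(f"Transfer runs roughly {speedup:.0f}x faster than real time")  # ~12x under these assumptions
```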
The interchange method for the future is here. The interoperability formats of MXF, GXF and AAF will now permit server systems to maximize their own performance while maintaining the ability to exchange information at the file level between varying server and editorial operating platforms. The acceptance of MXF by the manufacturers is a milestone in the future of server systems; although we've only scratched the surface of actual implementation, the buy-in by the leading providers has already paved the way to the future across a wide range of users and uses.
Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. For over 25 years he has written on featured topics in TV Tech magazine—penning the magazine’s “Storage and Media Technologies” and “Cloudspotter’s Journal” columns.