Video storage

When television technology was first beginning to emerge from the laboratory 75 years ago, the problems related to storing video were well understood. Clearly, producers could not transmit all programs from live sources. But the technology to save programs to a nonvolatile “memory” of some kind was, at that time, cutting edge at best.


Most video content today is still recorded and stored on magnetic media, but storage technology has come a long way since the days of the quadruplex two-inch recorder.

Over the intervening years, clever and cost-effective strategies have evolved to record, store and archive content at ever-higher quality and steadily declining costs. The perfection and commercialization of practical high-quality analog video recording in the 1950s pointed the way to much of today's technology. Modern “digital” disk and tape recording technology is, at the fundamental physical level, an analog process. Modulating a carrier with a pattern that represents digital content and converting the representation back to digits requires sophisticated application of analog methods at the signal's interface with the media. It is an increasingly inventive application of physics, mechanical engineering and materials science.

Linear media

In any case, the equipment we call analog recording hardware no longer dominates the market, but its installed base still represents a large segment of the industry. Over the last 20 years, manufacturers have delivered huge numbers of analog recorders for professional use. Several years ago, their numbers began to dwindle as digital recorders became ever more cost-effective, but they are far from gone. Indeed, it is only in the last few years that the last of the quadruplex 2-inch recorders, first seen in 1956, were retired. As a result, today we see in common use a mix of old and new technology using radically different approaches, presenting challenges for those librarians who manage valuable repositories of content.

It is equally important to remember that television content is not all stored electronically. Film is as important today in television production as it was generations ago. It is likely that film will persist for a long time as a primary acquisition medium. The current market push to use electronic images that emulate the look of film in every aspect demonstrates the production community's deep reliance on film as a primary medium. And although much of the film shot for episodic and dramatic television today is finished on video, long-form film for theatrical release is still an important portion of the content delivered on television. Intermediate electronic processes, including special effects, have encroached on the traditional film market, though film schools still teach traditional methods for next-generation filmmakers. First and foremost, production is about capturing compelling content. And film will be critically important in recording that content for a long time into the future of television. It is valuable to remember that film was the only practical means of recording electronic video images in the early days of television, and those archives remain stable and easy to recover.

Nonetheless, most recordings today are committed to videotape, also affectionately known as “rusty mylar.” The range of professional analog recorders in production today is quite small, though not zero. Clearly, digital recording and storage techniques are the norm today — indeed, they are the only commercially viable methods. The number of digital video formats in use today is quite staggering. These formats include Sony's Digital Betacam, DVCAM, Betacam SX, Betacam IMX and HDCAM; Panasonic's DVCPRO, DVCPRO 50, DVCPRO100, D-5 (standard definition) and D-5-HD; and JVC's Digital S. SMPTE has standardized these formats and given them designations such as D-3, D-5, D-6, D-7, D-9, etc. The DV-type formats from all manufacturers are variants of the original DV codec, which is standardized for both consumer and professional use. The rest of the formats were developed in isolation, some of them as research projects funded by NHK in Japan. Some machines can play back tapes other than those belonging to their native format, but each format is unique in some way.

This leads to a major complication for content producers. When a producer chooses a format, he also determines some of the flexibility of the production process. For instance, using the 4:1:1 DV-based formats for acquisition is less desirable for high-quality work due to the chroma subsampling. You can't ignore the resulting limitations on later post-production processes, though upon careful review it is surprising how well 4:1:1 video actually holds up. But, for high-quality graphics and post-production processes, it is certainly better to stick to 4:2:2 recordings. The 4:2:2 recording method was pioneered jointly by the SMPTE and EBU committees, which developed the original D-1 specification in the early 1980s. D-1 remains the highest-quality standard in normal usage today, though the original D-1 recorders have largely disappeared.
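
To put rough numbers on that tradeoff, consider the sample counts in a 720-sample standard-definition line. The short Python sketch below is purely illustrative; it simply counts luma and chroma samples for each sampling scheme and is not tied to any particular recorder.

# Illustrative comparison of 4:2:2 and 4:1:1 chroma subsampling
# for a 720-sample standard-definition line (ITU-R BT.601 luma raster).
# The numbers show why 4:1:1 is leaner on tape but weaker for graphics
# and post work: horizontal chroma resolution is halved again.

LUMA_SAMPLES_PER_LINE = 720  # Y' samples in a 601 active line

def chroma_samples(scheme: str) -> int:
    """Return Cb (or Cr) samples per line for a J:a:b sampling scheme."""
    j, a, _b = (int(x) for x in scheme.split(":"))
    return LUMA_SAMPLES_PER_LINE * a // j

for scheme in ("4:4:4", "4:2:2", "4:1:1"):
    cb = chroma_samples(scheme)
    total = LUMA_SAMPLES_PER_LINE + 2 * cb          # Y' + Cb + Cr per line
    print(f"{scheme}: {cb} Cb samples/line, "
          f"{total} total samples/line "
          f"({total / (3 * LUMA_SAMPLES_PER_LINE):.0%} of 4:4:4)")

On straight playback the missing chroma detail is easy to overlook, which squares with how well 4:1:1 footage holds up; keying, color correction and graphics work, however, run straight into that lost horizontal chroma resolution.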


JVC’s BR-D860U VTR utilizes the Digital S format, designated D-9 by SMPTE. D-9 is one of the many digital videotape recording and storage formats in use today.

The migration from one format to another over time presents major challenges to the producer. Once an economically significant volume of content has been stored on any particular format, the logical question is, “What is the content's shelf life?” Early video recordings lasted perhaps a dozen years before concern over their stability pushed owners to migrate to new recording media. Today, the quality of videotape is better: It is chemically more inert and mechanically more stable. With proper conditions (humidity and temperature, primarily), modern videotape can last decades. But its lifetime is certainly less than that of film which, in some cases, has lasted well over 100 years. No one knows the future, but it seems unlikely that videotape will become an archive format capable of preserving content for hundreds of years.

Nonlinear media

Manufacturers provide a dizzying array of new recording options every year. But few changes in the market have been as sweeping as the move to hard-disk recording, which is now over a decade old. It is valuable to remember that disk recording is not a new concept. Indeed, John Logie Baird used “video disks” over 75 years ago. The first nonlinear editing system was invented in the '70s at CMX, and was used commercially by CBS for offline editing in Los Angeles. The CMX 600 used computer storage disk drives to record analog monochrome video. But it was a short-lived product. And, though others tried to replicate the concept using other hardware, none were successful for many years.


Volkswagen’s Autostadt in Germany uses several synchronized QuBit video servers to store and play video and audio to a theater in the round.

High-quality recording on hard disks began more than 10 years ago. The first commercial units could store only a couple of hours of content due to two interlocking constraints on recording capacity: disk size and compression ratio. Initially, video disk recorders used only JPEG compression, and storage at below 20 Mb/s provided marginal results. Drive size now approaches 100 times that of a decade ago, and compression improvements have more than doubled that again. Today, it is not uncommon to buy a video server with over 100 hours of storage in a single chassis. Indeed, consumers now can record acceptable video on hard-disk home recorders that allow over 50 hours of storage, with the promise of hundreds of hours in the next few years.
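
The arithmetic behind those capacity figures is straightforward. In the sketch below, the drive sizes and bit rates are assumptions chosen only to show how disk capacity and compression ratio interact; they are not the specifications of any particular product.

# Rough storage-hours arithmetic for a disk-based video recorder.
# The drive capacities and bit rates are illustrative assumptions,
# used only to show how capacity and compression ratio interact.

def hours_of_storage(capacity_gb: float, video_mbps: float) -> float:
    """Hours of video that fit in capacity_gb at a total rate of video_mbps."""
    capacity_megabits = capacity_gb * 8 * 1000   # decimal GB -> megabits
    return capacity_megabits / video_mbps / 3600

# An early JPEG-based recorder: ~18 GB of array capacity at ~20 Mb/s
print(f"Early unit:   {hours_of_storage(18, 20):6.1f} hours")

# A later server: ~100x the disk and a codec good at roughly half the rate
print(f"Later server: {hours_of_storage(18 * 100, 10):6.1f} hours")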

This explosion of capacity, along with effective means of sharing both the storage and control over inexpensive networks, has changed the way programmers, distributors and broadcasters manage content. Videotape recording is far from dead, but the areas of our industry where tape predominates shrink every year. Servers and smaller-scale disk recorders have dropped in price while steadily increasing in capability at a rate somewhat lower than Moore's Law. In part, this is because of the portion of the cost that is attributable to high-quality compression engines and networking hardware and software.

Compression and interchange

Today, most server products use MPEG-2 or DV compression. A few use proprietary variants, but, for marketing and engineering reasons, most have adopted standard codecs. Compression technology continues to advance, and MPEG-4 AVC (Advanced Video Coding), also known as H.264, stands on the near-term horizon. H.264 promises significant improvement in picture quality at modest bit rates. In the short term, it is expected to yield efficiency gains of 20 percent to 30 percent, with further improvements as engineers gain experience with the new tools H.264 allows.
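
In round numbers, a 20 percent to 30 percent efficiency gain works out as shown below; the 8 Mb/s MPEG-2 baseline is an assumed figure used only for illustration.

# What a 20-30 percent coding-efficiency gain buys, in round numbers.
# The 8 Mb/s MPEG-2 baseline is an assumed figure for illustration.

mpeg2_rate_mbps = 8.0

for gain in (0.20, 0.30):
    h264_rate = mpeg2_rate_mbps * (1 - gain)
    extra_capacity = 1 / (1 - gain) - 1     # hours freed on the same disks
    print(f"{gain:.0%} gain: ~{h264_rate:.1f} Mb/s for similar quality, "
          f"or ~{extra_capacity:.0%} more hours in the same storage")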


Sony’s new optical-storage (DVD) camera and decks represent the first major professional production products to use optical storage.

Key among the desires of all broadcasters and content providers is the ability to interchange files (or streams) among servers made by all manufacturers. Several efforts to allow that interchange are under way, and several methods are in place. The simplest to understand are efforts to codify an interchange standard to which all manufacturers can adhere. The Material eXchange Format (MXF) standard, which is in the final stages of development within SMPTE, holds that promise. MXF not only provides the mechanism for interchange, it also standardizes the interchange of metadata. This will allow distributors to encode time information, version, episode and other important information with the content and deliver it all the way to the server at the station used for playback to air. Traffic and automation can then grab that metadata without rekeying information at every playback site. Information about embedded commercials could be extracted similarly.
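
The kind of information that could ride along with the content might be modeled along the lines of the sketch below. The field names are hypothetical, chosen only to suggest what a traffic or automation system could pick up without rekeying; they are not MXF's actual descriptive-metadata keys.

# Illustrative model of metadata that could travel with content in an
# interchange wrapper such as MXF. Field names here are hypothetical;
# they do not reproduce MXF's actual descriptive-metadata schemes.

from dataclasses import dataclass, field

@dataclass
class DeliveryMetadata:
    title: str
    episode: str
    version: str
    start_timecode: str                         # e.g. "01:00:00:00"
    duration_frames: int
    embedded_commercials: list[str] = field(default_factory=list)

def to_traffic_record(md: DeliveryMetadata) -> dict:
    """Hand the metadata to traffic/automation without rekeying it."""
    return {
        "slug": f"{md.title} / {md.episode} ({md.version})",
        "som": md.start_timecode,
        "frames": md.duration_frames,
        "breaks": md.embedded_commercials,
    }

show = DeliveryMetadata("Evening News", "2003-06-12", "v2",
                        "01:00:00:00", 54000,
                        ["break 1 at 00:12:30:00"])
print(to_traffic_record(show))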

The ability to interchange bit streams (with or without metadata) is key to the further growth and simplification of video servers. Either all manufacturers must adopt standardized interchange formats, or stand-alone boxes must be developed to allow that interchange. The Pro-MPEG Forum, the AAF Association, SMPTE and others are hard at work to create those standardized interchange mechanisms. But some in our industry cannot wait for standards to catch up to current market demands. This creates an opening for stand-alone, or embedded, applications and hardware facilitating interchange.

Two such products have had a major impact on current implementations. Telestream's FlipFactory and Pinnacle's Exchange convert streams between different formats. They have developed a concept that allows seamless interchange between proprietary products created by third parties. In a somewhat “disinterested” approach, they leapfrogged standards to achieve interchange in the short term, where commercial interests and standards have not moved fast enough for the marketplace. These products are having a major impact on real-world applications today. Streams distributed by networks and service providers for both news and commercials can now be fed directly into servers for air or production use without going back to baseband as an interchange medium. The resultant improvement in quality and workflow is important and represents the best that technology can apply to real-world problems.

One can envision, however, a general-purpose approach to the same problem. Assume for a moment that a news provider shoots on DV tape and wishes to edit only in an MPEG-2 I-frame environment. Along with that content, he wishes to deliver the thumbnails and other metadata that comes from the field camera. A server system that has an MXF- or AAF-compliant I/O could take the content in, parse the metadata and, using a hardware-embedded approach like FlipFactory, convert the stream to its native format. The user would not care if the next scene came from an MPEG-based camera because the internal system would seamlessly adapt to the content and smoothly deliver the stream to the recording engine in the right format. This may not be very far into the future, and the impact such an approach would have is quite significant.
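
A sketch of that ingest path, under the assumptions above, might look like the following. Every name in it is a stand-in: unwrap() for an MXF- or AAF-compliant parser, transcode() for a FlipFactory-style conversion engine, and the final print for the server's native recording engine.

# Hypothetical sketch of the ingest path described above. All names are
# assumptions for illustration: unwrap(), transcode() and the "server"
# stand in for an MXF/AAF parser, a FlipFactory-style conversion engine
# and the server's native recording engine.

from dataclasses import dataclass

NATIVE_CODEC = "MPEG-2 I-frame"

@dataclass
class Clip:
    codec: str
    metadata: dict

def unwrap(wrapped: Clip) -> Clip:
    """Stand-in for parsing an MXF/AAF wrapper into essence plus metadata."""
    return wrapped

def transcode(clip: Clip, target: str) -> Clip:
    """Stand-in for a stream-conversion engine (e.g. DV25 -> MPEG-2 I-frame)."""
    return Clip(codec=target, metadata=clip.metadata)

def ingest(wrapped: Clip) -> Clip:
    clip = unwrap(wrapped)
    if clip.codec != NATIVE_CODEC:          # convert only when needed
        clip = transcode(clip, NATIVE_CODEC)
    # essence goes to the record engine, metadata to traffic/automation
    print("recording", clip.codec, "with metadata", clip.metadata)
    return clip

ingest(Clip("DV25", {"episode": "0412", "thumbnails": 12}))
ingest(Clip("MPEG-2 I-frame", {"episode": "0413", "thumbnails": 9}))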

A mesh of products allowing such free interchange of content might have both native-only and interchange-capable I/O in a homogeneous environment. The resulting improvement in workflow would be significant. Commercials delivered through store-and-forward would appear on air servers almost magically, no matter how they were distributed or what standard the distributor adopted. Metadata could be shared with automation and traffic from a database known to be accurate, improving the on-air look and reducing the work needed to bring distributed content to air.

We tend to focus heavily on MPEG standards today, which is appropriate given the installed base and economics. Other compression standards are moving rapidly along as well, including JPEG 2000, which is based on wavelet compression. It was conceived as a compression system for high-quality stills, in large part due to the explosion of electronic still cameras. Wavelet compression is not new to our industry; at least one commercially successful nonlinear editing system used wavelets. Motion JPEG 2000 is analogous to Motion JPEG, which was the dominant compression system in early video servers, in that each frame is uniquely coded. Every still frame is complete, allowing simple editing, much like I-frame-only MPEG.

When implemented as a moving-image compression system, JPEG 2000 allows lower bit-rate decoding of a main bitstream. This will allow a high-definition stream to be decoded for standard-definition use, or thumbnails to be extracted without creating a proxy stream. This technique might allow a video server to mix uses in a single bitstream, allowing browsing on the desktop, SD broadcasting and cinema release from a single coded bitstream at modest bit rates.
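
The idea can be illustrated with a toy wavelet pyramid. Each decomposition level splits the picture into a low-pass band and detail bands, so decoding at a lower resolution means reading only the low-pass data and ignoring the rest. The Haar-style 2x2 averaging below is only an analogy for that resolution scalability, not JPEG 2000's actual transform or codestream.

# Toy illustration of wavelet resolution scalability: each decomposition
# level splits the image into a low-pass (LL) band and detail bands.
# Decoding a lower resolution means keeping only the LL band of that
# level and ignoring the remaining data. The 2x2 averaging below is a
# Haar-style analogy, not JPEG 2000's real codestream structure.

import numpy as np

def ll_band(image: np.ndarray) -> np.ndarray:
    """Low-pass band of one Haar-style decomposition level (2x2 means)."""
    h, w = image.shape
    trimmed = image[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Pretend this is one 1920x1080 high-definition luma frame
frame = np.random.rand(1080, 1920)

proxy = frame
for level in range(1, 4):
    proxy = ll_band(proxy)               # discard detail bands, keep LL
    print(f"level {level}: {proxy.shape[1]}x{proxy.shape[0]} proxy")
# level 1: 960x540, level 2: 480x270, level 3: 240x135 (thumbnail-sized)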

This month's “Technology in Transition” column (page 179) discusses archive systems. It is important to note here that video storage is not monolithic. The characteristics of storage systems must be matched to the intended application. For instance, Sony has just introduced a new optical-storage (DVD) camera. This represents the first major professional production product to use optical storage. Until now, virtually all storage, except for robotic archives, has used magnetic techniques (disk and tape). The key advantages of this innovative approach are the improvements in workflow that an inherently nonlinear medium (at least for reading, if not writing) can provide. Avid and Ikegami attempted this with magnetic hard-disk camcorders. They achieved modest success, but failed to gain acceptance for many marketing and economic reasons. What differentiates Sony's new approach is that, unlike hard disks, DVDs might be considered a consumable medium, much like videotape. Note that, at press time, Sony had not announced a price for the disks. However, the disks are not expected to be as inexpensive as blank consumer DVDs.

Ideally, the endgame in your struggle for storage will see minimum cost, maximum capacity and maximum versatility in the total system. If you choose to use products optimized for the intended workflow, the gains can be significant. But implementing a new video-storage strategy in your organization without first developing a road map and list of requirements can lead to a short-lived solution. There are products in the market today that can facilitate nearly any strategy, from lowest possible cost, low storage density and modest quality, to highly automated workflow and hooks to future migration. The key to understanding the marketplace may be to listen carefully and think openly about your requirements and the options available to you.

John Luff is senior vice president of business development at AZCAR. To reach him, visit www.azcar.com.
