Video storage and networking for centralcasting
The agenda in corporate boardrooms is seldom published in the newspapers. But, in these times, one thing is certain: The agenda of most broadcasting CEOs is dominated by the costs of DTV conversion and operations, declining revenue, consolidation and centralized operations. The motivation to consider centralized operations is simple. Costs are rising, ad sales are slumping, and cash flow from operations is hurting. Broadcasters must consider anything that holds the promise of delivering a salve to the wound. At press time, a federal court had struck down the FCC's cap on multiple ownership and sent it back to the FCC for reconsideration. At the same time, the court struck down the rules against cable ownership of broadcast licenses. As a result, a new round of consolidation is likely, increasing the pressure on broadcasters to consider centralized operations over wide geographic areas.
Pressure has a way of escaping the confines of any vessel. Engineering and operations executives are under pressure to find centralization models that can deliver savings in the short term and facilitate growth. No one disputes the imperative, but the issues are complex and strike at the heart of the technology of television in fundamental ways. At its business core, the broadcast industry sells the assembly of media items into a continuous stream in such a way that the pieces sold (commercials) gain viewership from the programs that wrap them. This concatenation of elements into a time sequence has matured into a business with largely computerized business operations, and often with automated assembly.
Storage for centralized operations
There is a resemblance between this process and the factory floor in a rust-belt industry. The company buys raw materials (programs), brings them to a factory (master control), builds a product (broadcast stream) to the orders sent in by sales and operations (traffic and programming), and sends it out (transmission) before sending an invoice and packing slip (accounting and log reconciliation/affidavits). Each of these facets has been radically reinvented in the last generation, indeed in the last few years. The first to change was business operations, and most stations now run complex software products that manage their broadcast inventory. But the media itself, spots and long-form programming alike, is now being distributed, stored and played out using new tools with far-reaching implications for the possibilities of centralized operations.
The distribution and tracking of programming on videotape have been with us for over 40 years. The use of servers has been with us since the early 1990s. Asset-management tools are barely five years old. Satellite distribution began in the 1960s, but digital distribution over satellite is far newer. Distribution of spots on a store-and-forward basis became a real business in the mid-'90s, and now promises to expand into program distribution in the next few years. These dynamics have required a thorough review of how we store and use media in our broadcast "factories" to build the products of the new media millennium.
Storage for centralized operations is fundamentally different from storage for unitary station operations. By its very nature, it requires a greater volume of storage, the tracking of more items and effective strategies to protect the media assets related to multiple broadcast licenses. It is also clear that some parts of broadcast operations cannot be removed from the local station entirely, most specifically the ingest of locally produced spots and programming. Strategies that will work for centralized operations must allow for that reality while pulling the maximum portion of the local operation into line with the new economic reality.
Storage for centralized operations need not be entirely centralized. Asymmetric distributed storage is a powerful tool that facilitates the local storage of spots produced in each market. In any case, the management of the media must take into account the need to ingest on short notice, and then ensure that each spot is available at the right time for playback to air.
There are two methods of handling this process. One is to use appliances that compress the media, store it and transmit it over data networks, to be decompressed and restored in a broadcast server at the centralized operations center. This can be done quite inexpensively using the Internet to send media as files via FTP and proprietary techniques. The tools use secure encryption, error correction and protection, and network-aware transmission techniques to ensure the media is delivered exactly as it was sent. Tools now exist to convert the files, normally MPEG, into the correct flavor for storage on local servers at the centralized operations site, with metadata transparently transferred. In the future, GXF, developed by the Grass Valley Group, and MXF, developed by SMPTE, hold the promise of allowing servers from different manufacturers to interconnect directly, eliminating the translation step.
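As a rough illustration of the store-and-forward step, the Python sketch below pushes a compressed spot to a central server over FTP and verifies the transfer. The host name, credentials and checksum handling are hypothetical stand-ins for the proprietary error-protection techniques the appliances actually use.

```python
import hashlib
import os
from ftplib import FTP_TLS  # encrypted control and data channels

def md5_of(path):
    """Checksum a local file in 1MB chunks for later verification."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def send_spot(local_path, remote_name):
    # Hypothetical central-operations host and credentials.
    ftp = FTP_TLS("ftp.centralops.example.com")
    ftp.login("station", "secret")
    ftp.prot_p()  # encrypt the data channel as well
    with open(local_path, "rb") as f:
        ftp.storbinary(f"STOR {remote_name}", f)
    # Confirm the byte count survived the trip; a real appliance would
    # also exchange a checksum with the far end before declaring success.
    if ftp.size(remote_name) != os.path.getsize(local_path):
        raise IOError("transfer incomplete: " + remote_name)
    ftp.quit()
    return md5_of(local_path)  # logged for reconciliation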
Media servers
The second method is to deploy a network of servers, using smaller, less costly servers at the local station to ingest media and make it available over a private network to the server(s) at the centralized operations center. Though MXF technology will eventually make it possible to use servers from multiple manufacturers, for now it is more practical to build a seamless network of servers from one company. When spots are ingested at the local station, the automation system requests a transfer to the central operations center, usually via FTP on a dedicated network. After transfer, the spot can be deleted from the local station's storage or kept there as a backup.
Media can also arrive from one of several service providers (Vyvx, PathFire and others). Delivery to a local computer-industry-based server with playout capability almost gets the media into the air-chain servers, but not quite. MXF provides a path for metadata to be transported with the file and imported directly into the software management applications in the local server. Doing so will eventually allow the local playout system to interrogate the delivery server and pull relevant media into the air chain. Though the standards are essentially complete, implementation requires rigor on the part of the distributors, something they have not had to provide in the past. For instance, the start time cannot be even a frame off for effective unattended use with an automation system. Segment times must be similarly accurate, and data about the commercial content must be complete enough that the local log can be reconciled without manual entries. The technology exists; business practices must catch up to the technical art. Storing all of this in metadata is the key to success.
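To make that requirement concrete, a validation gate along the lines of the sketch below (the field names are hypothetical, not drawn from the MXF standard) is the sort of check an automation system would apply before accepting a delivered file for unattended playout:

```python
from dataclasses import dataclass

@dataclass
class SpotMetadata:
    house_number: str    # ties the file to the traffic system's log
    som_frames: int      # start of message, in frames from file start
    dur_frames: int      # total running time, in frames
    segments: list       # per-segment durations, in frames

def validate(md: SpotMetadata, expected_som: int, expected_dur: int):
    """Reject anything an automation system could not air unattended."""
    errors = []
    if not md.house_number:
        errors.append("no house number: log cannot be auto-reconciled")
    if md.som_frames != expected_som:
        errors.append("start of message off: one frame is one too many")
    if md.dur_frames != expected_dur:
        errors.append("duration disagrees with the traffic order")
    if sum(md.segments) != md.dur_frames:
        errors.append("segment times do not sum to the total duration")
    return errors  # an empty list means the file may air unattended
```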
Fault tolerance
In the general sense, media storage for centralcasting requires more rigor in fault tolerance and disaster recovery than storage in a unitary station. The cost of make-goods in any single market is no higher. But with more markets at risk, the financial consequences of a serious systemic failure would be the broadcast equivalent of a nuclear meltdown. A unitary station in a modest market might protect itself by mirroring the server (exactly duplicating the hardware and media content), or by using either a "partial" backup (sometimes called an unbalanced backup) or tertiary servers.
An unbalanced backup recognizes that a total failure of the primary server is unlikely, and even more unlikely to be irreparable. The assumption is that backing up the next six to 12 hours of content should be sufficient. This reduces the cost of storage, especially when only the primary server is equipped with RAID storage. In a centralized operations center, the volume of media can be an order of magnitude higher than in a unitary station. This might make an unbalanced backup seem even more attractive, but the economic equation must weigh two factors: the likelihood of a failure that remains unrepaired when the backup is drained of relevant content, and the cost of the air time lost in that event. In general, this model is too risky for centralized operations.
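A back-of-envelope version of that two-factor equation, using purely illustrative numbers, shows why the risk scales so badly with the number of markets served:

```python
# Illustrative figures only; substitute real station economics.
p_outlast_backup = 0.002   # chance per year a failure outlasts the backup
cost_per_market = 250_000  # make-goods and lost revenue per incident
markets = 12               # stations fed from the centralized center

per_incident = cost_per_market * markets      # $3,000,000: the "meltdown"
expected_annual = p_outlast_backup * per_incident
print(f"expected annual exposure: ${expected_annual:,.0f}")  # $6,000
# A unitary station risks one-twelfth the per-incident loss, which is
# why a backup model acceptable there is too risky when centralized.
```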
The tertiary model, sometimes called a library/air model, can be an effective method of reducing the total cost of storage while increasing reliability. This approach uses two identical servers (air and protect), plus a "library" server that is used for ingest and for connection to any archive device. The library server transfers media to the air and protect servers sufficiently far ahead of air to meet standard operating guidelines. Instead of perhaps 100 to 200 hours each of air and protect storage, this allows perhaps 50 hours of duplicated storage, with a larger quantity of storage attached to the library server. It would be possible, for instance, to have late-arriving spots ingested into the station's local server and then transferred to the library server without the potential for resource conflicts with the air and protect servers. The library server would transfer the media to the air chain after it arrives, and well before air.
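The pre-staging logic is simple in outline. In the sketch below, the playlist and inventory interfaces are assumptions, not drawn from any actual automation product:

```python
from datetime import timedelta

STAGE_AHEAD = timedelta(hours=6)  # house rule: on the air chain 6h early

def build_dub_list(playlist, air_inventory, now):
    """List library-server items that must be copied to air and protect."""
    dubs = []
    for event in playlist:  # events carry an air_time and a media_id
        if now <= event.air_time <= now + STAGE_AHEAD:
            if event.media_id not in air_inventory:
                dubs.append(event.media_id)
    return dubs

# The automation system would run this on a schedule, queueing transfers
# from the library server to both the air and the protect servers.
```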
This strategy keeps three copies of all media but confines future growth to a single chassis: as storage needs grow, only the library server is expanded, while the air chain (air and protect) remains the same size. A third chassis also means that, in the unlikely event of a failure in either air or protect, the library server could be used directly to air, permitting more time for repair or routine maintenance of the air and protect servers. With three high-reliability devices, the laws of statistics predict greatly enhanced reliability for the total operation.
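To put a rough number on that statistical claim, assume, purely for illustration, that each of the three servers is independently available 99.9 percent of the time:

```python
a = 0.999                  # assumed availability of each of the 3 servers
p_all_down = (1 - a) ** 3  # all three down at once, assuming independence
print(f"system availability: {1 - p_all_down:.9f}")  # 0.999999999
# Annual downtime falls from roughly 8.8 hours for one server to
# milliseconds for three, provided the failures really are independent.
```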
One variety of centralized operations (called Distributed Broadcast Operations) splits the playout of media between the central operations center and the local station, with the automation and media-management systems similarly distributed. This dispersion of the load can reduce the demand on WAN circuits and provides a simple form of disaster recovery. Programs that can be received and switched to air easily via automation (often network programming and local programs such as news and public affairs) are not sent to the central operations center for concatenation into the final emission stream, but are kept local, with no interconnection impact.
System I/O must also be considered. A single installation may well require analog I/O for legacy videotape ingest, SDI I/O for normal operations, and network FTP I/O. Increasingly important is DVB ASI I/O (270 Mbits/s serial links carrying MPEG transport-stream data). ASI is the preferred method of ingesting HD content (which should not be subjected to multiple compression cycles), as well as compressed files from service-provider servers via MXF. Distributing the I/O across multiple frames is also highly advisable to enhance reliability.
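The bit rates involved explain the preference for staying in the compressed domain. A quick calculation, assuming ATSC HD at its nominal 19.39 Mbits/s and an illustrative DS3 circuit between sites, contrasts moving the compressed file with handling it in real time:

```python
hd_rate = 19.39e6      # ATSC HD transport-stream rate, bits per second
seconds = 30 * 60      # a 30-minute program
size_bits = hd_rate * seconds

ds3 = 45e6             # a 45 Mbits/s DS3 circuit between sites
print(f"compressed size: {size_bits / 8 / 1e9:.1f} GB")       # ~4.4 GB
print(f"DS3 transfer time: {size_bits / ds3 / 60:.1f} min")   # ~12.9 min
# Moving the compressed stream beats real time; decoding to baseband
# for re-ingest would take the full 30 minutes and cost a compression
# generation besides.
```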
Shared storage
So far, this discussion has dealt with individual servers with locally attached storage. But shared storage, which goes by several names and comes in many technical flavors, might well make more sense. Its chief advantage is that it requires only one copy of the media, which is then available to all playout ports. An ingest station can be designed with minimal features at lower cost, and the complexity of transferring media between servers disappears, saving money. Networked storage is a well-developed concept in the mainframe-computing industry, which requires extremely high reliability. Where the reliability of a single storage network is deemed insufficient, one can mirror the entire storage network, and the playout devices as well, effectively replicating the entire system. This could eliminate any cost savings that networked storage provides, but, for a particularly risk-averse company, it might be an attractive option. Many systems using these highly redundant topologies are being installed today in cable-origination and broadcast plants, including news-only operations.
One must consider the topology of the entire centralcasting enterprise as one holistic problem. Indeed, viewed from 30,000 feet, a system that uses a storage network plus ingest servers at the local stations is a hybrid of the two approaches. It must, however, operate as seamlessly as if it were all in one room. In reviewing disaster recovery, some have suggested that geographic dispersion of the assets is an advantage. DirecTV and EchoStar both use redundant facilities capable of feeding the entire system after a catastrophic failure. Those who had data and video lines underneath the World Trade Center learned rapid-fire lessons in disaster recovery. While not appropriate for the majority of applications, it is possible that, as centralized operations grow more important and the centers grow larger, such redundant facilities will become important. The potential for cable acquisition of broadcast licenses raised in the recent federal court ruling might make for strange bedfellows in a centralized operations center serving broadcast and cable origination combined. That may well lead to geographic dispersion of backup media and facilities.
Server backup
At the end of the day, the most important transitions in media technology are those that are deceptively simple yet far-reaching in their impact. This discussion has been largely about the storage of content in servers, but the backup to those servers is equally critical. At one time, that backup was videotape. In many stations today, it still is. And though the future seems to hold the promise of service providers delivering all content electronically, the death of linear tape is overplayed. In New York, one station refuses to take even satellite delivery of programming, preferring the reliable delivery of videotapes.
In many applications, server backup may continue to be videotapes on vault shelves for a long time to come. In others, a data-tape backup of the content is more appropriate. An increasing number of companies are choosing to install archive robots with multiple data-tape transports as the final element in the storage puzzle. In particular, this permits modified storage strategies in centralized operations, since any media lost to a failure can be restored from the archive. The process would be painful, as archives are not intended to deliver media in real time, but it could be automated. Even in a "tapeless" playout center serving many stations, one could make an effective case for keeping good old reliable videotape around, just in case the shared storage network crashes and something has to be put to air quickly. While the videotape plays, the restore process could begin, gradually returning the operation to normal, with all media again residing in the network and all playback proceeding from server I/O ports.
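In outline, that recovery sequence looks like the sketch below; the VTR, archive and server interfaces are hypothetical stand-ins for real automation and robot APIs:

```python
import time

def recover(playlist, vtr, archive, server):
    """Bridge a storage failure: roll tape now, restore in the background."""
    vtr.cue_and_roll(playlist.current_item())  # get something to air fast
    for item in playlist.upcoming():           # earliest air time first
        archive.queue_restore(item.media_id, destination=server)
    # Stay on tape until the server again holds the next item to air,
    # then hand playout back to the normal air chain.
    while not server.has(playlist.next_item().media_id):
        time.sleep(10)
    server.take_over(playlist)
```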
Media management
When one builds a complex, holistic system of online, near-line and offline storage (data tapes on the shelf), the final piece of the puzzle is media management. The central media-asset-management (MAM) system must be the traffic cop that knows the whereabouts of all content, as well as all of the metadata associated with it. The servers at the stations, the servers in the centralized operations center and the archive (shelf and robot, as well as videotapes) all contain critical assets from which the broadcast factory must assemble the final product. Because the combined broadcast streams of several (or many) stations under the control of a centralized operations center comprise a very large number of individual elements, literally hundreds of entries might be made every business day. Managing such a system without MAM would be a difficult task indeed.
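Reduced to its essentials, the MAM's core job is a registry like the sketch below, mapping each asset to every place a copy lives, along with its metadata. The fields, location names and identifiers are illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    house_number: str
    title: str
    dur_frames: int
    # Every place a copy lives: station server, central server,
    # archive robot, or a barcoded videotape on a vault shelf.
    locations: dict = field(default_factory=dict)

registry: dict = {}

def register_copy(asset: Asset, location: str, identifier: str):
    """Record one copy of an asset; the MAM must know about them all."""
    registry.setdefault(asset.house_number, asset)
    registry[asset.house_number].locations[location] = identifier

spot = Asset("HN-04117", "Spring tire sale :30", 900)  # 900 frames = :30
register_copy(spot, "central_server", "clip/04117")
register_copy(spot, "archive_robot", "LTO-0042 block 187")
register_copy(spot, "vault_shelf", "barcode 000187")
```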
John Luff is senior vice president of business development at AZCAR.