Essential Elements for the Video Server Facility
Part 1
Scaling a broadcast, newsroom or production facility toward a complete video server environment involves numerous networking components beyond the video server's encoders/decoders, its storage and the associated application software. As material management moves away from videotape and toward a tapeless production environment, the architectures of these systems begin to take on the same dimensions as baseband video-only systems, but without the numerous dedicated VTRs, video switchers and physical videotape libraries we've grown accustomed to over the previous two decades of digital video implementation.
Fundamentally, the essential building blocks of the compressed digital/tapeless environment appear similar: There is a means to transfer field material into the edit environment, the editing is done on that material, and finally the finished products are sent to playout or distribution. But that is where the similarity, from a component standpoint, ends. Since the inception of nonlinear editing (NLE) in the early 1980s, which signaled the eventual departure from the serial processes of videotape production, the time necessary to complete a production element has decreased dramatically. In turn, the cost to produce those elements has decreased as well. These two factors, time and cost, continue to fuel the development of the products and the deployment of the nonlinear, entirely server-based facility.
BUILDING BLOCKS
In this installment we will look at the building blocks of a complete facility, including ingest, server playout, production/editorial, archive and off-site asset protection for disaster recovery. The example shown in Fig. 1 is not an artificial concept; this architecture is typical of systems being deployed by all the major video server manufacturers, although some take different approaches to the networking concepts, and their video I/O structures may differ depending upon the server engine's internal architecture.
As expressed in my previous column, protection of the asset and redundancy in systems are the major keys to continued successful operation. In the accompanying diagram, nearly every portion of the system has at least one alternative path for data transfers, system management and operational processes. Furthermore, all stored data can be shared among all the elements of the workflow, from ingest and playout through editorial and network interconnection for remote purposes. What differs among the various manufacturers' implementations are the software applications and the degree (and methods) of data transfer between operational work areas. For those details, I encourage you to contact specific manufacturers for their individual representations of features and functionality, as each facility will most likely require its own configuration and components, depending upon its operational model.
The "system" architecture shown in the example comprises PROGRAM EDITING, PRODUCTION EDITING, INGEST AREAS 1&2, AIR PLAYOUT with QC (quality control), STORAGE, ARCHIVE, and a NETWORK INTERCONNECT component. Each system, with the exception of STORAGE and the FABRIC SWITCHES (the common elements in the overall system), can function independently and autonomously from one another.
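As a rough illustration only, the following Python sketch (all names hypothetical) models the subsystems listed above and the two common elements, STORAGE and the fabric switches, that every other area shares.

```python
# A minimal sketch of the example facility: each functional area can run
# autonomously, but all of them depend on the shared STORAGE pool and the
# pair of fabric switches. Names are illustrative, not vendor terminology.
from dataclasses import dataclass, field

@dataclass
class Subsystem:
    name: str
    autonomous: bool                  # can keep operating if other areas go down
    shares: list = field(default_factory=list)

COMMON = ["STORAGE", "FABRIC_SWITCH_A", "FABRIC_SWITCH_B"]

FACILITY = [
    Subsystem("PROGRAM_EDITING", True, COMMON),
    Subsystem("PRODUCTION_EDITING", True, COMMON),
    Subsystem("INGEST_AREA_1", True, COMMON),
    Subsystem("INGEST_AREA_2", True, COMMON),
    Subsystem("AIR_PLAYOUT_QC", True, COMMON),
    Subsystem("ARCHIVE", True, COMMON),
    Subsystem("NETWORK_INTERCONNECT", True, COMMON),
    Subsystem("STORAGE", False, []),           # common element
    Subsystem("FABRIC_SWITCH_A", False, []),   # common element
    Subsystem("FABRIC_SWITCH_B", False, []),   # common element
]

if __name__ == "__main__":
    for s in FACILITY:
        print(f"{s.name:22} autonomous={s.autonomous} shares={s.shares}")
```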
STORAGE
To begin, we will look at the most central element of the system, the storage component. In our example, all the compressed digital essence (video, audio and metadata) is kept in a common "pool," shown as the STORAGE portion of the diagram. Each storage array has a dual-port Fibre Channel interface that moves data to and from the rest of the system. Data flows out both ports, in either direction (shown as the left or right side of the storage array in the diagram), to a pair of Fibre Channel fabric switches configured in a redundant mode such that either path from the storage arrays can sustain the full bandwidth of the system as designed. These DUAL FABRIC SWITCHES (see "A") act as the traffic control mechanism among all the system components, managing data transfer between the ingest and playout servers, the edit stations, the archive server and the two GATEWAY INTERCONNECTS ("C") that bridge this local system to external systems. The GATEWAYS allow the system to be integrated with other server systems over Gigabit Ethernet, across a wide area network, or with other video server-based systems, such as a newsroom editing and computer network.
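To make the dual-path idea concrete, here is a minimal Python sketch, with hypothetical names rather than any vendor's API, of how a transfer might fall back to the second fabric switch when the first is unavailable.

```python
# A hedged sketch of the dual-path concept: each storage array reaches the
# rest of the system through either of two fabric switches, and either path
# alone is designed to carry the full system bandwidth.

FABRICS = {"FABRIC_A": True, "FABRIC_B": True}   # True = healthy

def select_path(preferred="FABRIC_A"):
    """Return a healthy fabric switch for a storage transfer, or raise."""
    if FABRICS.get(preferred):
        return preferred
    alternates = [fabric for fabric, healthy in FABRICS.items() if healthy]
    if not alternates:
        raise RuntimeError("no healthy fabric path to storage")
    return alternates[0]   # the redundant path still sustains full bandwidth

# Example: fabric A fails, transfers continue over fabric B
FABRICS["FABRIC_A"] = False
print("routing storage traffic via", select_path())
```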
A local or NEAR LINE ARCHIVE is bridged into the system through the fabric via an ARCHIVE SERVER ("B") that manages and buffers the transfer of compressed data between the servers and data tape or DVD storage. The archive server may further employ additional third-party software or a second data server to handle the activities specifically associated with the digital tape or DVD archive.
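A simple sketch of the archive server's buffering role might look like the following; the class and clip names are invented for illustration, not drawn from any actual archive product.

```python
# A minimal sketch of the archive server as a buffer: clips are pulled from
# the central store over the fabric, staged locally, then written to data
# tape or DVD at the archive device's own pace. All names are hypothetical.
from collections import deque

class ArchiveServer:
    def __init__(self):
        self.buffer = deque()          # staging area between fabric and tape/DVD

    def stage_from_storage(self, clip_id, size_gb):
        # In a real system this would be a Fibre Channel transfer from STORAGE.
        self.buffer.append((clip_id, size_gb))

    def flush_to_tape(self):
        while self.buffer:
            clip_id, size_gb = self.buffer.popleft()
            print(f"writing {clip_id} ({size_gb} GB) to data tape")

archive = ArchiveServer()
archive.stage_from_storage("news_0412_pkg01", 4.2)
archive.stage_from_storage("promo_fall_v3", 1.1)
archive.flush_to_tape()
```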
The data storage methods and arrays are particular to the individual video server manufacturer. Most storage arrays and associated Fibre Channel switching systems are selected by the manufacturer to meet requirements specific to its own design. For example, in some servers the data may be spread (or striped) across multiple drive array chassis, allowing for both RAID protection and increased bandwidth when multiple data transfer requests are issued from playout or editing operations. Other server designs segregate the arrays into groups and control access to those areas via software and the Fibre Channel switches.
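As an illustration of the striping approach, the sketch below spreads successive blocks of a clip round-robin across several hypothetical array chassis; real implementations add RAID parity and vendor-specific placement rules on top of this.

```python
# A simplified sketch of striping: successive blocks of a clip are spread
# across several drive-array chassis so concurrent read requests can draw on
# all arrays at once. Array and clip names are illustrative only.
ARRAYS = ["array_1", "array_2", "array_3", "array_4"]

def stripe(clip_id, num_blocks):
    """Return a block -> array placement map (round-robin across chassis)."""
    return {f"{clip_id}.blk{i:03}": ARRAYS[i % len(ARRAYS)] for i in range(num_blocks)}

placement = stripe("evening_news_open", 8)
for block, array in placement.items():
    print(block, "->", array)
```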
INGEST AND PLAYOUT
From a workflow perspective, the example diagram breaks the ingest and playout areas into segments across the overall network. For example, INGEST & PLAYOUT AREA #1 is intended to record feeds such as live local production or videotape via the central house video router. Typically these sources are controlled from the central equipment room or satellite record center and continuously monitored from the same area. Hence, there are inputs to server A and server B, but a single output from server B, which functions as the "QC" station for this activity.
INGEST AREA #2 would typically be the satellite, network delay or turnaround area. Server C and server D have only input encoders and typically would never be involved with playout, either for QC or for air playback. The number of inputs or outputs is determined by the manufacturer or the type of video server product selected. Note that some server manufacturers offer bidirectional channels, which can either encode (ingest) or decode (play back), or provide internal baseband digital video routers that steer the signal into internal codecs, creating an additional level of feature sets to consider depending upon the workflow and operational needs of the system.
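One way to picture the channel assignments described for the two ingest areas is the simple table below; the server names follow the example diagram, and the "bidirectional" idea is reduced to a per-channel encode/decode mode. This is a sketch under those assumptions, not any product's channel map.

```python
# Hypothetical channel roles for the two ingest areas in the example diagram.
# "encode" = ingest, "decode" = playback; a bidirectional channel is simply
# one whose mode can be switched between the two.
CHANNELS = {
    "server_A": ["encode", "encode"],    # Ingest & Playout Area #1: record only
    "server_B": ["encode", "decode"],    # one output serves as the QC channel
    "server_C": ["encode", "encode"],    # Ingest Area #2: satellite/turnaround
    "server_D": ["encode", "encode"],
}

def decode_channels(channels):
    """Count decode-capable channels, i.e. where QC or playout can occur."""
    return sum(mode == "decode" for modes in channels.values() for mode in modes)

print("decode/QC channels available:", decode_channels(CHANNELS))
```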
Integrated tapeless editing systems (such as those depicted as the PROGRAM and PRODUCTION EDITING STATIONS) are migrating away from the baseband video I/O model associated with the dedicated or standalone NLE system. Today's systems operate entirely at the compressed digital level, meaning that no transcoding occurs from MPEG-2 to component digital (ITU-R BT.601) at the workstation. Depending upon the system deployed, the original baseband or compressed video media is cached into the central STORAGE system via a separate server engine. Timeline editing occurs at the EDIT STATIONS using data that is stored only on the STORAGE arrays.
Not having to transfer that "video data" to each workstation for editing or viewing reduces editing time and the amount of hardware, as well as storage, required on the system. Essentially, the edit workstation uses either proxies of the compressed digital video or only a set of pointers to the files that are kept on the storage array. Only the effects (dissolves, keys, etc.) are rendered at the workstation; those clip segments are then transferred back to the central store, keeping both the original material and all the final material on the central storage arrays. Once a segment or the entire cut is completed, the user can create a final, flattened (that is, rendered) clip on the central store for playback or archive, again leaving the original material untouched and unaltered.
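The pointer-and-proxy concept can be sketched as a timeline of references to clips held on the central store, with only the effects rendered locally and written back as new clips; the data structures here are illustrative, not any manufacturer's format.

```python
# A simplified sketch of pointer-based editing: the timeline holds references
# (clip ID, in-point, out-point) to material that never leaves the central
# store; only effects are rendered at the workstation and written back as
# new clips. All names and structures are hypothetical.
timeline = [
    {"clip": "intv_raw_017", "in": "00:01:10:00", "out": "00:01:24:12"},
    {"effect": "dissolve", "duration_frames": 30},       # rendered locally
    {"clip": "broll_city_04", "in": "00:00:05:00", "out": "00:00:12:00"},
]

def render_effects(events):
    """Replace effect events with pointers to newly rendered clips on the store."""
    finished = []
    for ev in events:
        if "effect" in ev:
            rendered_id = f"fx_{ev['effect']}_{ev['duration_frames']}f"
            # the rendered segment is transferred back to central STORAGE
            finished.append({"clip": rendered_id, "in": None, "out": None})
        else:
            finished.append(ev)   # original material stays untouched
    return finished

print(render_effects(timeline))
```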
These larger systems now permit a great deal of activity to happen concurrently. However, with multiple edit sessions happening simultaneously and continual caching of material to the stores, a lot of bandwidth can be consumed on the "network." High-speed networking, over both Fibre Channel and Gigabit Ethernet, is used in a variety of combinations depending upon what functions are occurring in the system. In addition, a separate 10/100Base-T Ethernet control network (not diagrammed, for simplicity) overlays the high-speed data network to carry control and other metadata between elements of the system. This control network serves as a signaling system for edit decision list management, for monitoring the health of the network, and for determining or controlling bandwidth management schemes during peak data transfer activity. These components can also be integrated with a browse or proxy system for offline desktop preview or rough-cut editing.
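A bandwidth management scheme of the kind the control network might signal could, in rough outline, admit transfer requests against a total fabric budget in priority order; the figures and the policy below are assumptions for illustration only.

```python
# A hedged sketch of peak-time bandwidth management: transfer requests are
# admitted against a total fabric budget, with air playout given priority
# over ingest, edit and caching traffic. Budget, rates and policy are
# assumptions, not measured or vendor-specified values.
FABRIC_BUDGET_MBPS = 800
PRIORITY = {"playout": 0, "ingest": 1, "edit": 2, "cache": 3}   # lower = sooner

def admit(requests):
    """Grant requests in priority order until the fabric budget is spent."""
    granted, remaining = [], FABRIC_BUDGET_MBPS
    for req in sorted(requests, key=lambda r: PRIORITY[r["type"]]):
        if req["mbps"] <= remaining:
            granted.append(req["name"])
            remaining -= req["mbps"]
    return granted, remaining

granted, headroom = admit([
    {"name": "air_channel_1",  "type": "playout", "mbps": 50},
    {"name": "edit_bay_3",     "type": "edit",    "mbps": 300},
    {"name": "sat_feed_cache", "type": "cache",   "mbps": 400},
    {"name": "ingest_A",       "type": "ingest",  "mbps": 100},
])
print("granted:", granted, "| headroom:", headroom, "Mbps")
```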
SWITCHING
The various component elements of the total system are spread across one or more pairs of fabric switches for at least two reasons (note that only one pair is depicted in this diagram). First, for workflow management, some servers may be grouped with certain edit stations to balance the data flow between elements through the switch. Second, in the event of a fabric switch failure the entire system is not crippled; although two sets of functions may be temporarily disabled, other workstations can pick up the load because all the essence material remains accessible from the central STORAGE arrays.
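The grouping-and-failover behavior can be sketched as follows, with hypothetical groupings: if one fabric switch fails, its servers and workstations are reassigned to the survivor, and the essence remains reachable on the central STORAGE arrays.

```python
# A minimal sketch of load grouping and failover across two fabric switches.
# Group membership is invented for illustration; the point is that a failed
# switch's clients can be carried by the surviving one because the central
# storage is reachable through either fabric.
groups = {
    "FABRIC_A": ["server_A", "server_B", "program_edit_1", "program_edit_2"],
    "FABRIC_B": ["server_C", "server_D", "production_edit_1", "archive_server"],
}

def fail_over(failed_switch):
    survivor = next(sw for sw in groups if sw != failed_switch)
    groups[survivor].extend(groups.pop(failed_switch))
    return survivor

survivor = fail_over("FABRIC_A")
print(f"all traffic now carried by {survivor}: {groups[survivor]}")
```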
Extremely complex or very large-scale systems may use multiple sets of fabric switches in an arrangement that does not hinder operational performance should any single network component fail. Multiple switches also allow system administrators to reconfigure the digital video network for other purposes. This level of system discussion is beyond the scope of this article, but it is not uncommon to find extensions of this diagram applied to extremely large-scale implementations at news organizations such as CNN and others.
The dedicated QC & PLAYBACK areas and the balance of this diagram are somewhat self-explanatory at the block level. In the concluding portion of this installment, we will look at how the ARCHIVE and GATEWAY components are further extended for off-site mirroring, wide area network distribution, streaming video, backup and other campus- or facility-wide implementations.