Video server technology: Methods for reliable operations
Providing a precise definition of a video server is like the blind men describing an elephant; the description depends on the observer's vantage point. This is because the video server comes in several different flavors, which obscures its persona. In general, there are three classes of video server:
- the standalone AV server with internal storage and/or external storage;
- the clustered server with multiple AV input/output nodes, each connected to one or more storage arrays through a switching network; and
- the edge server with a few AV I/O ports, internal cache storage and file-based import/export using non-real-time (NRT)1 transfers.
Each of these server types also has control ports, monitoring/diagnostics ports and the usual timecode and video reference ports.
Figure 1 illustrates the first of the three server types. The top section shows the generic video server with M input ports and N output ports. Typical input/output formats include SDI video, composite, ASI, AES/EBU audio and video/IP. Additionally, a LAN connection is customary for NRT and/or real-time (RT) file transfer. Optionally, there may be a connection (SCSI, Fibre Channel or Ethernet) for external storage access. In general, this architecture can represent either the standalone server or the edge server (as described earlier), depending on how the device is configured and used.
Figure 1. The standalone AV server has internal storage and/or external storage.
The standalone server model is the workhorse of our industry, used as a VTR replacement, a small clip server, a play-to-air server for TV news operations and in a host of other applications. The edge server model is typically loaded (or unloaded) with content via NRT file transfers from distant storage systems. Applications run the gamut from remote digital signage to low-cost AV ingest and playout nodes attached to inexpensive NRT storage systems.
The clustered server model is depicted in Figure 2. Clusters are used in situations that require large I/O counts (100+) with support for 1000s of hours of shared AV storage. Nodes and storage are added as needed to meet usage demands. Each I/O node has little or no internal storage but offers the I/O richness of the standalone server. Importantly, an AV node is not a video server but rather a “media client”; however, the combination of a node plus storage offers all the functionality of a video server. The cluster's distinguishing features are expandable external shared storage, a common file system that all nodes share and a switching network through which nodes access storage. This configuration requires a high level of reliability across the file system, switching and storage systems. Some switching systems use a hybrid combination of Ethernet, Fibre Channel or IEEE-1394 connectivity and may require non-trivial protocol translators inside the network.
Figure 2. The clustered server has multiple AV input/output nodes, each connected to one or more storage arrays through a switching network.
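To make this composition concrete, here is a minimal Python sketch of a cluster that scales by adding nodes and arrays. All class and field names are invented for illustration; no vendor's API is implied.

```python
from dataclasses import dataclass

@dataclass
class MediaClient:
    """AV I/O node: rich I/O, little or no internal storage."""
    node_id: int
    io_ports: int

@dataclass
class StorageArray:
    array_id: int
    hours: float  # hours of real-time AV the array can hold

@dataclass
class Cluster:
    """Nodes reach shared arrays through a switching network and share
    one common file system; the cluster scales by adding either."""
    nodes: list
    arrays: list

    def total_io(self) -> int:
        return sum(n.io_ports for n in self.nodes)

    def total_hours(self) -> float:
        return sum(a.hours for a in self.arrays)

# A node plus storage behaves like a video server; add nodes and
# arrays as demand grows (100+ ports, 1000s of hours).
cluster = Cluster(
    nodes=[MediaClient(i, io_ports=8) for i in range(16)],    # 128 I/O ports
    arrays=[StorageArray(i, hours=250.0) for i in range(8)],  # 2000 hours
)
assert cluster.total_io() > 100 and cluster.total_hours() >= 1000
```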
Next, let's study a pool of techniques for creating high-availability video servers.
Built to last
First, let's look at some common reliability techniques that can be applied to all three classes of video server. In the big picture, reliability is linked to maintainability, including remote servicing and diagnostic ability. A feeble ongoing maintenance policy will result in overall poor reliability, regardless of how much attention is given to hardware/software reliability and stability. So, excellent maintenance practices are a prerequisite for high-availability operations. What else is needed?
Consider the generic standalone server and media client noted in Figures 1 and 2. To keep them running at peak performance, each needs swappable dual power supplies fed from separate AC power rails, dual (or more) fans, protected storage and spare I/O ports. All servers and media clients also have an internal controller with CPU, DRAM, glue logic and often hard disc storage. It's not practical to duplicate the controller portion of the server or node, because transferring from a failed controller to a spare is extremely difficult in practice. And who among us would not implicate software (executing on the internal controller) as a potential failure component?
In the end, it's not practical to build a single server or node that duplicates every internal component. So, most servers in use today sensibly duplicate some internal components, but not all of them. Element duplication improves unit reliability but does not provide truly fault-tolerant operation under all conditions. Such a unit may be described as having a single point of failure (SPOF): the controller CPU, for example. It's less expensive to configure two mirrored SPOF servers running in parallel than to design a single unit with no-single-point-of-failure (NSPOF) performance. However, two SPOF servers running in lock-step parallel do offer an NSPOF operational mode; if one server fails, the other replaces it immediately or acceptably quickly.
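To make the lock-step idea concrete, here is a minimal Python sketch of a mirrored pair, assuming a simple health check selects which node feeds the output; all names are illustrative, not any product's design.

```python
class ServerNode:
    """One SPOF server; healthy() would poll real hardware in practice."""
    def __init__(self, name: str):
        self.name = name
        self.failed = False

    def healthy(self) -> bool:
        return not self.failed

    def play(self, clip: str) -> str:
        return f"{self.name} playing {clip}"

class MirroredPair:
    """Two SPOF servers fed identical commands in lock-step; the output
    simply follows a healthy node, so the pair behaves as NSPOF."""
    def __init__(self, primary: ServerNode, backup: ServerNode):
        self.primary, self.backup = primary, backup

    def play(self, clip: str) -> str:
        # Failover: select whichever node is still healthy.
        node = self.primary if self.primary.healthy() else self.backup
        return node.play(clip)

pair = MirroredPair(ServerNode("A"), ServerNode("B"))
print(pair.play("promo.mxf"))   # served by A
pair.primary.failed = True      # primary dies
print(pair.play("promo.mxf"))   # served by B, acceptably quickly
```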
Everlasting storage
Figure 3. Many techniques are available to assure excellent uptime with NSPOF system-wide.
Storage is an essential part of most video servers. With hard discs clocking in at 500GB per unit in 2006, it is common practice to store 1000s of hours of RT online AV. As more AV content is stored, higher storage reliability is demanded. For the small edge server or single VTR replacement server, there may not be any storage protection. However, for most mid- to large-size server systems, storage protection strategies are used. The amount of protection should be proportional to your business needs and budget.
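To put those capacities in perspective, a quick back-of-the-envelope calculation shows how drive counts translate into hours of real-time AV. The 50Mb/s bit rate below is an assumed example; actual rates depend on the codec and format.

```python
def hours_of_av(drives: int, drive_gb: float, mbps: float) -> float:
    """Hours of real-time AV an array holds (ignoring RAID overhead)."""
    total_megabits = drives * drive_gb * 8 * 1000   # GB -> megabits
    seconds = total_megabits / mbps                 # megabits / (Mb/s)
    return seconds / 3600

# One 500GB drive at an assumed 50Mb/s holds about 22 hours...
print(round(hours_of_av(1, 500, 50), 1))    # 22.2
# ...so an array of 100 such drives holds 1000s of hours.
print(round(hours_of_av(100, 500, 50)))     # 2222
```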
Storage protection methods typically use RAID. RAID is a family of methods ranging from a 100-percent mirror of all stored data to clever data-parity schemes that recreate missing or corrupt data, even after the loss of an entire drive. RAID 3 and 5 use data parity and require less storage overhead compared with a pure data mirror (RAID 1). Real-time RAID performance requires careful array design and copious testing to guarantee that any R/W data anomaly is corrected. This is one area where manufacturer experience and field-proven products are a valuable metric when selecting a server vendor.
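At its core, the parity trick behind RAID 3 and 5 is the XOR operation. Here is a minimal sketch, with short byte strings standing in for the much larger stripes a real array would use.

```python
def xor_blocks(blocks: list) -> bytes:
    """XOR equal-length byte blocks together; this is the parity
    primitive behind RAID 3 and RAID 5."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Data stripes on three drives, parity written to a fourth.
stripes = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(stripes)

# The drive holding b"BBBB" dies; XOR of the survivors plus parity
# recreates the lost stripe exactly.
recovered = xor_blocks([stripes[0], stripes[2], parity])
assert recovered == b"BBBB"
```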
Let's see what might happen when a single hard disc fails completely, as in array number two (Figure 2) with eight drives using RAID 3. Reconstruction kicks in and, using stored parity information, the system begins recreating the data from the missing drive. However, if the bad drive is not replaced quickly, the data on the remaining seven drives is at risk should another drive fail.
Also, detecting and recreating the missing data in real time is both art and science. When the bad drive is replaced, RAID methods rebuild the data image using an automatic background process. It's important that the rebuilding procedure does not steal valuable “user” bandwidth with a consequent loss of some AV I/O. It's always good to ask the vendor whether server performance is sacrificed during a RAID rebuild.
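One plausible way to keep a rebuild from stealing user bandwidth is to grant it only the bandwidth left after real-time I/O is served, up to a configured cap. The sketch below is a toy scheduler under that assumption; the function name and the numbers are invented for illustration.

```python
def rebuild_bandwidth(total_mbps: float, user_mbps: float,
                      rebuild_cap_mbps: float) -> float:
    """Grant the rebuild only the bandwidth left over after real-time
    user I/O is served, up to a configured cap (numbers illustrative)."""
    spare = max(total_mbps - user_mbps, 0.0)
    return min(spare, rebuild_cap_mbps)

# Quiet overnight: the rebuild runs at its full cap.
print(rebuild_bandwidth(total_mbps=800, user_mbps=200,
                        rebuild_cap_mbps=300))   # 300.0
# Busy playout: user I/O is protected and the rebuild slows down.
print(rebuild_bandwidth(total_mbps=800, user_mbps=700,
                        rebuild_cap_mbps=300))   # 100.0
```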
Providing RAID storage is a necessary but not sufficient condition for bulletproof storage access. Also needed are dual RAID array controllers, a passive array backplane, and dual power supplies and fans. Only then can an array be classed as NSPOF. Figure 2 shows arrays, each with dual controllers. The media clients must decide when to abandon an R/W transaction and switch to the alternate link, doing so without loss of data. As you might imagine, this requires excellent system engineering and testing.
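Here is a minimal sketch of that abandon-and-retry logic, assuming whole transactions can be safely reissued on the alternate link; the path names and simulated failure are invented for illustration.

```python
class PathDown(Exception):
    """Raised when an R/W transaction times out on one link."""

def transfer(block_id: int, path: str) -> str:
    """Stand-in for a real R/W transaction over one controller link;
    here controller-A is simulated as failed."""
    if path == "controller-A":
        raise PathDown(path)
    return f"block {block_id} via {path}"

def reliable_transfer(block_id: int,
                      paths=("controller-A", "controller-B")) -> str:
    """Abandon a failed transaction and reissue it whole on the
    alternate link, so no partial data is ever committed."""
    for path in paths:
        try:
            return transfer(block_id, path)
        except PathDown:
            continue    # fail over to the next link
    raise RuntimeError("all storage paths down")

print(reliable_transfer(42))    # block 42 via controller-B
```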
RAID is not the only tactic for storage protection. A new method called Redundant Array of Independent Nodes (RAIN) was recently introduced by Avid in the Unity ISIS storage system. RAIN uses intelligent storage blades (nodes) in a novel configuration, providing 100-percent uptime with NSPOF storage protection, and transparent background blade rebuilding.
Other techniques
The clustered server requires the most reliable hardware and software operation. Why? Because with more I/O and more storage at risk — compared with the isolated standalone server — many different techniques are needed to assure excellent uptime with NSPOF system-wide. In addition to the methods previously discussed, the following methods are often applied. (See Figure 3.)
- Dual links from a media client. Using alternate links provides redundant paths to the storage through the switching network.
- Redundant switching. The Fibre Channel or IP switching network needs redundant switches for failover as needed.
- Redundant file system controllers. When all nodes share a common file system, the file metadata controllers must be duplicated.
- N+1 sparing. A spare node (number N+1) sits in standby mode until put into service. If node #2 fails, external control logic switches I/O from node #2 to node N+1. The delay in switching nodes is a strong function of the speed at which a failure is detected. While the I/O nodes are SPOF designs, with proper failover the entire system can be virtually NSPOF. (See the sketch following this list.)
- Automation redundancy. In many cases, I/O nodes are under the control of external automation schedulers. Here it is wise to use dual controllers, so that if one system fails, the alternate can take over.
- Self-healing. This is designed-in automatic healing using fast route-around techniques.
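As promised, here is a minimal Python sketch of N+1 sparing, assuming external control logic that polls node health and repatches I/O routing; the node names and routing table are invented for illustration.

```python
class IONode:
    def __init__(self, name: str):
        self.name, self.alive = name, True

def failover_monitor(active: list, spare: IONode, routing: dict) -> None:
    """External control logic: check node health and, on a failure,
    repatch that node's I/O to the N+1 spare. Real systems poll far
    faster; the switching delay is dominated by detection speed."""
    for node in active:
        if not node.alive and routing[node.name] == node.name:
            routing[node.name] = spare.name   # switch I/O to the spare
            print(f"{node.name} failed; I/O moved to {spare.name}")

nodes = [IONode(f"node-{i}") for i in range(1, 4)]   # N = 3 active nodes
spare = IONode("node-4")                             # the N+1 spare
routing = {n.name: n.name for n in nodes}            # I/O patch table

nodes[1].alive = False            # node-2 fails
failover_monitor(nodes, spare, routing)
print(routing["node-2"])          # node-4
```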
Conclusion
The methods mentioned in this article describe high-availability video servers. No single method will guarantee NSPOF operations. However, taken together, these techniques make outstanding availability practical with today's servers.
Al Kovalick is a strategist and fellow with Avid Technology and author of “Video Systems in an IT Environment: The Essentials of Professional Networked Media” (www.theAVITbook.com).
1RT is used to describe data movement in the “real time video” sense whereas NRT implies AV data transfer rates other than real time.