Elements of the SAN

The wide availability of cost-effective disk storage continues to provide excellent incentives for the deployment of video server platforms into new environments. Editorial applications including collaborative post production, news stories, online/non-linear editing and long-form content storage are candidates for the "tapeless" production environments shaping up industrywide.

The media server architectures employed for these activities include a new set of storage systems that are just now coming into their own. The days when a new server installation in a broadcast facility consisted of a single dedicated storage array are dwindling. Previously, with the exception of a limited set of server products, if an individual video server needed to increase either its I/O capabilities or its total storage, another chassis was added with additional I/O ports, and another storage array was added to balance the storage load equally between the chassis groups. Automation and archive managers further complicated this approach, depending upon how they managed storage in protected or mirrored operations.


As server chassis and storage capacities reached some defined level and a second set of drives and chassis was added, the complete system needed to rely on Fibre Channel-based transfers between the various sets of servers and their storage arrays in order to exchange content at the "data" level. Today, media server architectures are being configured around large sets of shared storage arrays, connected in a network architecture and serving multiple access points in a "storage network" topology (Fig. 1).

The first storage systems associated with early server deployments were configured in what later became known as direct-attached storage (DAS). Systems could be configured with storage that could expand in capacity up to the limit of the RAID controller. Often the video server system was governed by the volume of continuous I/O activity, including the transfers between the encoders and decoders contained within the given server's chassis. This became the benchmark called system bandwidth.
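As a rough sketch of how such a benchmark is sized, the short example below totals the continuous channel I/O for a hypothetical DAS server; the channel counts, codec rate and overhead factor are illustrative assumptions, not figures from any particular product.

```python
# Back-of-the-envelope system bandwidth for a DAS video server.
# All figures are hypothetical; substitute your own codec rates.

MBPS_PER_CHANNEL = 50      # e.g., a 50 Mbps compressed video stream
ENCODER_CHANNELS = 4       # record (ingest) ports in the chassis
DECODER_CHANNELS = 8       # playout ports in the chassis
OVERHEAD = 1.3             # headroom for RAID parity, seeks and bursts

channel_io = (ENCODER_CHANNELS + DECODER_CHANNELS) * MBPS_PER_CHANNEL
required = channel_io * OVERHEAD

print(f"Continuous channel I/O: {channel_io} Mbps")
print(f"Storage bandwidth to provision: {required:.0f} Mbps")
```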

Although a select set of server manufacturers enjoyed the benefits of true centralized or shared storage, most depended upon the growth of Fibre Channel technology to support data transfers between sets of server chassis and storage arrays. Fibre Channel is still used for replicating data between primary and secondary servers, but as the number of channels per server system grew, a new means to address storage - and in turn system bandwidth - emerged. Enter the storage area network (Fig. 2).

When storage elements are configured such that the servers and workstations are connected in a flexible, managed, scalable, high-performance/high-capacity storage environment, they are considered to be in what is now popularly known as a storage area network, or SAN. Some mistakenly simplify this definition to nothing more than a group of disk drive arrays arranged on a network topology that provides access to and from sets of server I/O.

In reality, a storage area network is much more than just a collection of components, such as disk drives, controllers and the like. A SAN consists of a collection of control and management systems that collectively provide for the connections, data transfers and other block-based services. In this context, we refer to services as the "input and output operations for data movement between servers and storage systems." It should be noted, also, that a SAN may provide file-based services, but for media-centric applications, SANs are generally configured to meet the needs of large, continuous blocks of data that must move predictably and efficiently between I/O and storage.
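To make the block/file distinction concrete, here is a minimal sketch contrasting the two access patterns. The device path, mount point and block size are placeholders, and reading a raw device as shown requires a real LUN and appropriate privileges.

```python
# Sketch of the two access patterns: block service vs. file service.
# The paths below are placeholders, not a real configuration.
import os

BLOCK_SIZE = 512 * 1024   # a hypothetical 512 KB transfer block

# Block service (SAN): the initiator addresses raw storage purely by
# offset and length; no file name exists at this layer.
fd = os.open("/dev/sdb", os.O_RDONLY)   # a LUN presented by the SAN
block = os.pread(fd, BLOCK_SIZE, 0)     # read one block at offset 0
os.close(fd)

# File service (e.g., NAS): the client names a file and the remote
# system resolves it to blocks on its own.
with open("/mnt/nas/clip.mxf", "rb") as f:
    data = f.read(BLOCK_SIZE)
```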

The SAN is predicated upon migrating away from SCSI-based parallel transfers and replacing them with a networked storage architecture that sits "behind" the server. The SAN represents a network different from the LAN or WAN; it is an architecture removed from the messaging functions of a network and focused on the movement of data between the server and its storage elements - be that disk or tape. SANs accomplish this by enabling multiple direct connections between the servers and the storage devices.

CONVERTING TO THE NETWORK

SANs are typically built on a network topology, in a structure that does not rely on LAN protocols. Most SANs are constructed using high-bandwidth Fibre Channel technologies as the network transport - with the movement of data to and from disks using a serial-SCSI protocol (i.e., SCSI-3). The SAN provides channel-like access between many servers and storage resources, with data management over a dedicated bandwidth, separate from the LAN or other data activities.

Those working in television, whether in production, news or transmission, are responding to the concepts of a networked environment. Traditional routed, unidirectional serial digital video transports - single-point access connected in a point-to-multipoint switching configuration - remain necessary for many high-speed, high data rate digital video applications, but they are progressively being converted to a networked environment for nearly every operation except where real-time, fully deterministic switching is absolutely necessary. The new realm consists of networked components, with applications for both data transfer and data storage.

Network components can be divided into those elements that sit in front of the server and those that reside behind it. Those that sit behind the server are the components associated with the SAN. Data movement among SAN elements is managed by the SAN architecture; it needs little intervention from the servers and has little interaction with the elements that sit in front of the server (i.e., clients, workstations, etc.).

In a SAN, certain functional elements are required. The first element is the device, which can be a collection of storage elements or storage systems. Storage devices may be a single JBOD array, a series of RAID chassis, or a system consisting of a massive array of 2 Gb Fibre Channel drives arranged, for example, in a split-bus configuration that provides extremely high system bandwidth and high-availability storage. The device could also be an archive system, driven by a gateway server that buffers data flow and interruptions from the tape drive mechanics while maintaining a high-throughput, constant-performance archive (transfer to tape) or restore (transfer from tape) operation.
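The gateway's buffering role can be pictured in a short sketch: bursty tape reads fill a queue that the restore side drains at a constant rate. The queue behavior, block names and burst size below are illustrative assumptions only.

```python
# Conceptual sketch of an archive gateway's buffer: bursty tape reads
# fill a queue; the restore stream drains it at a steady rate.
from collections import deque

buffer = deque()

def tape_read_burst(n_blocks: int) -> None:
    """Tape delivers data in bursts between repositioning pauses."""
    for i in range(n_blocks):
        buffer.append(f"block-{i}")

def restore_tick():
    """The restore side takes one block per tick at a constant rate."""
    if buffer:
        return buffer.popleft()
    return None  # underrun: the gateway must pre-fetch before this happens

tape_read_burst(16)                      # one burst from the tape transport
while (blk := restore_tick()) is not None:
    print(f"stream {blk} to the SAN")    # steady, constant-rate output
```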

The second element is connectivity, consisting of components including routing, switching, media (copper or optical cabling) and the appropriate protocols for the exchange and transport of data between those components. Interfaces are at the PHY (physical) layer and are administered through specific standardized protocols that provide for compatibility between the media and the physical elements on the network.

The third element, control, is the management of the data paths, transfers and resources associated with those devices (e.g., storage arrays), and the regulation of the actual data within the SAN. Network management is the process of maintaining the stable transport of data across a network's infrastructure. In the case of the SAN, the elements must be controlled such that peak limits are not exceeded, server requests are handled according to their preset hierarchy for delivery, and backup or protection paths are enabled and ready to take over when or if needed. Additional elements of control include volume management, data resource and data backup management, file access and the reliable transport of data between storage elements and servers when called upon.
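The "preset hierarchy for delivery" can be illustrated with a simple static priority queue; the priority levels, server names and operations here are hypothetical, and a real SAN scheduler is considerably more involved.

```python
# Illustrative hierarchy-based request handling using a static priority
# queue; priorities, server names and operations are hypothetical.
import heapq

requests = []   # min-heap: the lowest number is the highest priority

def submit(priority: int, server: str, operation: str) -> None:
    heapq.heappush(requests, (priority, server, operation))

# On-air playout outranks ingest, which outranks background restores.
submit(2, "archive-gateway", "restore blocks from tape")
submit(0, "playout-server-1", "read clip blocks")
submit(1, "ingest-server-2", "write encoded blocks")

while requests:
    priority, server, operation = heapq.heappop(requests)
    print(f"dispatch p{priority}: {server} -> {operation}")
```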

STORAGE MANAGEMENT

Storage management generally has a hierarchy associated either with discrete applications or with the overall, integrated storage system. At the highest level of the hierarchy is the enterprise, ranging in size from the office or studio to a campus or an entire set of users spread geographically over great distances. Moving down from the largest level of the storage management hierarchy, the model ends at the lowest levels, consisting of such devices as the gigabit interface converter (GBIC), hubs, switches and even fabrics. This is similar to the OSI model in structure, yet more specific to operations and software.

A principal benefit of the SAN is that it provides a much higher degree of control over the storage network environment. The applications in which SANs are deployed generally expect a high degree of availability, must be completely predictable in performance, and will not tolerate wide fluctuations in overall system performance. To accomplish this level of performance, the SAN will generally be connected using Fibre Channel-based components.

One of the values of the SAN approach is that it requires less processing overhead at the interconnecting nodes between servers. As mentioned earlier, the SAN is ideal for block transfers, where data is not broken up into small segments, making it well suited to media-centric operations. SANs operating over Fibre Channel protocol are arguably most effective in the delivery of large bursts of block data, and provide little benefit for occasional desktop work such as word processing.

When looking at the applications found in media content delivery, the data almost always moves from storage to server (or vice versa) in large contiguous chunks - or blocks - of isochronous data sets that, once decoded, make up the moving image. As a clip is called from storage to the server's decoder, as in the playout function of a video server, the rate at which the data is pulled from storage is for the most part constant and steady. The SAN's performance for this application can be tuned for delivery based upon the server platform's backbone and system bandwidth requirements, as well as the need for delivery over a finite, and usually predetermined or predictable, time period.
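A back-of-the-envelope calculation shows how this tuning plays out; the bit rate, clip length and transfer window below are assumed values chosen only for illustration.

```python
# Rough delivery arithmetic for a playout clip; bit rate and durations
# are illustrative assumptions, not measured figures.

CLIP_LENGTH_S = 30 * 60        # a 30-minute program
VIDEO_RATE_MBPS = 50           # hypothetical encoded bit rate

clip_size_mb = VIDEO_RATE_MBPS * CLIP_LENGTH_S / 8   # size in megabytes

# Playout is isochronous: the sustained pull from storage matches the
# encoded rate. A faster-than-real-time copy needs proportionally more.
TRANSFER_WINDOW_S = 5 * 60     # move the clip in five minutes
transfer_rate_mbps = clip_size_mb * 8 / TRANSFER_WINDOW_S

print(f"Clip size: {clip_size_mb:.0f} MB")
print(f"Sustained playout rate: {VIDEO_RATE_MBPS} Mbps")
print(f"Rate for a 5-minute transfer: {transfer_rate_mbps:.0f} Mbps")
```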

SANs further provide high-availability applications with additional features, such as hot-standby or protected switching, multiple server connections and ease of scalability. The barriers of managing storage directly attached to each server are removed, allowing - in the case of data-centric server applications - a single standby server to support multiple primary servers.
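A minimal sketch of this N+1 arrangement, assuming a hypothetical heartbeat mechanism and server names, shows one standby covering several primaries until a second failure exceeds the protection:

```python
# Conceptual N+1 protection: one standby covers several primaries.
# The heartbeat mechanism and server names are entirely hypothetical.

primaries = {"play-1": True, "play-2": True, "play-3": True}  # up/down
standby_covering = None   # which failed primary the standby has assumed

def heartbeat(server: str, alive: bool) -> None:
    global standby_covering
    primaries[server] = alive
    if not alive:
        if standby_covering is None:
            standby_covering = server
            print(f"standby assumes channels of {server}")
        elif standby_covering != server:
            print(f"warning: {server} down, standby already committed")

heartbeat("play-2", False)   # first failure: standby takes over
heartbeat("play-3", False)   # second failure exceeds N+1 protection
```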

The design of a properly functioning SAN is no trivial task. The selection of the components necessary to make up a SAN is best left to the vendors of video server systems themselves. The qualification of switches, GBICs, hubs and storage arrays takes considerable time and effort: Most end users would find little reward for the effort it would take to select the specific elements of the storage network - and would risk a considerable amount of the owner's investment for a small return in the cost/benefit ratio.

Expect the integration of SAN architectures to grow as video servers continue their deployment and applications, expanding to serve the needs of the industry.

Karl Paulsen
Contributor

Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. For over 25 years he has written on featured topics in TV Tech magazine—penning the magazine’s “Storage and Media Technologies” and “Cloudspotter’s Journal” columns.