Connecting It All
When building any form of network, the elements for consideration include manageability, versatility, compatibility and cost-effectiveness. Data networks seldom emerge as complete replacements for existing infrastructures; rather, they are deployed as new or updated data centers that are married into existing data networks, spanning either work groups or entire enterprises.
For the broadcast or television facility deploying video server and storage systems, hardware choices were long confined to a sole source – the video server manufacturer. Typically, these vendors were charged with complete responsibility for the selection of equipment, subsystems and applications software. They would qualify all the equipment and support only those components that were "packaged" into their nearly proprietary solution.
As technology advances, we’ve seen third-party systems (such as tape archives and archive management software) attached to video servers’ engines and disk storage systems. As storage needs increased, these engines were interconnected over additional third-party components (Fibre Channel switches) using – again – another set of third-party subsystems (intermediary buffers or gateways) built from conventional network servers running either Unix (typically Sun Microsystems) or Windows (Wintel systems from Dell, HP and others).
When browse capabilities, automation and media asset management software are added, the user must turn to yet another set of components, sometimes further removed from the video server manufacturer’s core equipment.
ANOTHER DIMENSION
To move video server storage technologies to another dimension – one that follows models employed in the IT domain – techniques must be added that not only allow for expanded storage capabilities, but also allow for increases in bandwidth and therefore overall performance – all while maintaining reliability and versatility.
Recalling that IT environments are virtually asynchronous and video requires isochronous behaviors, maintaining sufficiently high throughputs – so all channels can play-out (or record) continuously – requires sophisticated designs and complex hardware configurations.
For storage systems, one technique employed by the video server manufacturers is the striping of the data across several elements of one or more drive array subsystems. While the typical practices of RAID are continued, the parallel synchronous movement of large blocks of data is needed for video – and this is what typically sets video server architectures apart from IT-based servers and storage systems.
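As a rough illustration of that striping principle, the sketch below spreads one large media block round-robin across the drives of an array so a read can pull from every spindle in parallel. The drive count and stripe size are invented for the example, and the parity blocks a RAID 3 or RAID 5 set would add are omitted for brevity:

```python
# Minimal sketch: round-robin striping of a large video block across
# an array, in the spirit of the striping described above.
# Drive count and stripe size are illustrative, not vendor figures.

STRIPE_SIZE = 256 * 1024   # 256 KB stripe unit (hypothetical)
NUM_DRIVES = 8             # drives in the array (hypothetical)

def stripe(block: bytes):
    """Yield (drive_index, stripe_unit) pairs for one media block."""
    for i in range(0, len(block), STRIPE_SIZE):
        unit = block[i:i + STRIPE_SIZE]
        yield (i // STRIPE_SIZE) % NUM_DRIVES, unit

# A 2 MB media block spreads across all eight drives, so playback can
# read from every spindle in parallel rather than queuing on one.
layout = [(d, len(u)) for d, u in stripe(bytes(2 * 1024 * 1024))]
print(layout[:4])   # [(0, 262144), (1, 262144), (2, 262144), (3, 262144)]
```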
The number of inputs and outputs for the video server system – and the data rate for playback and encoding (recording) – are key factors constrained by the overall bandwidth of the system. For example, when a system must play back five streams at 15 Mbps each, the bandwidth must be at least 75 Mbps just to get data from the drives, through the decoders and out of the system as video.
Transfers between storage systems (or server chassis) must be completed at rates much higher than the playback rate – typically 100 Mbps or more. This is bandwidth above and beyond the play-out requirements, but it taxes the overall system nonetheless.
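The arithmetic is simple, but making it explicit shows how quickly the demands stack up. A minimal sketch using the figures from the text:

```python
# Worked example of the bandwidth figures above: five playback streams
# at 15 Mbps each, plus an inter-server transfer at 100 Mbps.

streams = 5
rate_mbps = 15                  # per-stream playback rate
playout = streams * rate_mbps   # 75 Mbps just to feed the decoders

transfer = 100                  # Mbps, file movement between systems
total = playout + transfer      # 175 Mbps the storage must sustain

print(f"play-out: {playout} Mbps, with transfers: {total} Mbps")
```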
During playback, continuous bandwidth integrity is key. Without continuous, isochronous data from drives to decoders – provided in sufficient quantity to each decoder – a continuous stream is not possible.
Unlike software-based PC codecs (such as RealVideo or Microsoft Windows Media Player), video facilities simply cannot wait several seconds while a sufficient amount of data is buffered (i.e., stored locally) before playback commences. The data must "stream" off the drive array in a continuous, uninterrupted fashion.
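A toy simulation makes the isochronous constraint concrete: the array must refill each decoder's buffer at least as fast as the decoder drains it, every interval, or the stream underruns. All figures here are illustrative:

```python
# Toy simulation of the isochronous constraint: a decoder drains its
# buffer at the video rate; the drives must refill it at least as
# fast, every tick, or play-out breaks. Figures are illustrative.

DRAIN_MBPS = 15   # decoder consumption (one 15 Mbps stream)
TICK_S = 0.1      # scheduling interval (hypothetical)

def play(buffer_bits: float, fill_mbps_per_tick):
    for fill in fill_mbps_per_tick:
        buffer_bits += fill * 1e6 * TICK_S        # data in from drives
        buffer_bits -= DRAIN_MBPS * 1e6 * TICK_S  # data out to decoder
        if buffer_bits < 0:
            return "underrun: frame lost"
    return "clean play-out"

print(play(2e6, [15, 15, 15, 15]))  # steady delivery -> clean play-out
print(play(2e6, [15, 0, 0, 0]))     # a 0.3 s delivery stall -> underrun
```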
When larger scale video server systems are deployed – with dozens of playback and record channels all running different media content (data) – system architectures must be expanded in a nonlinear manner.
If additional functions, separate from record or playback, are present – such as Fibre Channel transfers between other servers or edit systems, or between tape or disk archives – the requirements and expectations of the "video server" grow tremendously.
Once peak system bandwidth is reached, performance drops and unpredictable things begin to happen. (This is a principal reason why video server manufacturers prefer to qualify and provide all the components in a system.) Building a video server’s storage architecture is therefore significantly different from building an IT network.
Still, we’ve seen that the foundations of video server storage have their roots in the IT world; this brings us to the discussion of advances in storage architectures that, once again, show the IT domain leading the charge.
MIRROR, MIRROR
Early nonlinear editing systems and broadcast servers deployed SCSI interfaces to a rather small set of disk drives. Broadcast server storage was usually configured in a RAID 3 or RAID 5 arrangement, and often replicated 100 percent as a fully mirrored system.
As storage needs increased – and Fibre Channel for IT environments became mature – some of the video server manufacturers found a need for interconnecting between ingest servers, play-out servers and online libraries. Fibre Channel Arbitrated Loop (FC-AL) was adopted as a principal storage transfer topology between these varieties of systems and subsystems.
As more Fibre Channel products – including FC-based hard drives – became available, video server manufacturers moved almost exclusively to this architecture. For storage, SCSI remains a predominant protocol and is carried on a Fibre Channel topology between devices. Now, with Gigabit Ethernet maturing, it has become the conduit between video server gateways and archive systems.
Given the reemergence of IP in video server systems, it may be time to examine the viability of other principles in networking and storage management for future, enterprise-wide expansion of the video server system. To do this, we’ll look at one of the newer entries that is allowing the IT world to meet the needs of expanded storage and access on a scalable and wider area network topology.
Fibre Channel, originally developed for the efficient connection of storage to servers, has evolved to offer a multitude of new and expanding performance capabilities. Its major strengths include low latency, high performance over the lengthy connection distances of a campus environment, and wide availability from vendors specializing in storage and interconnection products.
In addition to its ability to connect over longer distances than its SCSI counterpart, Fibre Channel further addresses the needs of scalability and availability.
Fibre Channel Arbitrated Loop is now widely understood. This stable system provides for redundancy as well as a switched fabric that allows for systems designed with both alternate path routing and zoning of the data storage architecture.
STORAGE MANAGEMENT
Newer 1 Gb Fibre Channel products provide for full-duplex throughput of 200 MBps, and the 2 Gb products now on the market double that. Already, 10 Gbps Fibre Channel – on the horizon in both product and implementation – is showing promise.
The newest classes of Fibre Channel capability are in its storage management domain. The emergence of storage area network (SAN) management has been changing the architecture, and the topology, of the storage model.
The variety of available products, choices of solutions, and alternatives in technology will do much toward bringing the SAN model more into the mainstream.
One drawback is the distance limitation inherent in an FC SAN, which holds storage applications within the confines of the data facility or – at the extreme – a campus. This situation is unlikely to change over the next several years, curbing widespread deployment of FC SANs beyond a very local area.
One solution to the confinement of SAN-type storage over Fibre Channel lies in IP storage. Using conventional IP methods and IP storage applications, the FC SAN can be extended over greater distances. In mid-2001, the Internet Engineering Task Force (IETF) worked on a standard that uses IP storage as an alternative to Fibre Channel when implementing expanded SANs. The specification, called iSCSI, is a protocol that harmonizes the SCSI and Ethernet worlds, defining how SCSI block-level transfers can be carried over TCP/IP networks.
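Conceptually, iSCSI takes a standard SCSI command descriptor block (CDB) and carries it as TCP payload. The sketch below builds a genuine SCSI READ(10) CDB but uses a deliberately toy framing; actual iSCSI (RFC 3720) wraps the CDB in a 48-byte PDU header with session, login and sequencing machinery, all omitted here:

```python
import struct

# Toy sketch of the iSCSI idea: a SCSI command descriptor block (CDB)
# carried as payload over TCP/IP. This shows only the encapsulation
# concept, not the real RFC 3720 wire format.

def read10_cdb(lba: int, blocks: int) -> bytes:
    # SCSI READ(10): opcode 0x28, flags, 4-byte logical block address,
    # group number, 2-byte transfer length in blocks, control byte.
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def frame(cdb: bytes) -> bytes:
    # Toy framing: length prefix + CDB, ready for socket.sendall()
    # toward a hypothetical iSCSI-style target on the IP network.
    return struct.pack(">H", len(cdb)) + cdb

msg = frame(read10_cdb(lba=0x1000, blocks=64))
print(msg.hex())  # a block-level read request expressed as plain bytes
```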
Block level storage transfers are now available anywhere on the IP network and – once again – the network bandwidth now sets the throttle for data movement between devices.
iSCSI gives access to a centralized, high-performance SAN that is familiar in principle and operation to many. As a complement, iSCSI for the IT environment extends the capabilities of the Fibre Channel SAN over much broader areas – whether on the public or the private network.
Fortunately, applications being deployed in the 1 Gbps domain work well over both 1 Gb Fibre Channel and 1 Gb Ethernet.
Gigabit Ethernet development continues and is now well defined through 10 gigabits per second. With 1 Gb Ethernet growing in popularity, 10 Gb Ethernet development also continues, and broad deployment is expected in the next two to three years.
For small organizations – where the high cost of building large-scale, robust Fibre Channel networks makes little economic sense – building SANs around the iSCSI specification will become attractive.
CENTRALIZED OPERATIONS
As the broadcast community continues its experiment in centralized operations, more media will need to be stored in a more distributed manner. The movement of that information between sites and organizations continues to be one of the more costly and confusing issues surrounding centralized operations.
Moving toward less complex network-based media management systems will be a driving factor in the implementation of systems that more closely follow the IT domain or environment. It will be interesting to see how standards, such as iSCSI, will change the complexion of the video server architecture as faster IP-based networks are deployed in myriad business and media applications.
Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. For over 25 years he has written on featured topics in TV Tech magazine—penning the magazine’s “Storage and Media Technologies” and “Cloudspotter’s Journal” columns.