Scaling beyond SAN

Much of the evolution of file storage in the broadcast environment has taken place on an ad-hoc basis. Each island of production — graphics, news and editing — built storage systems to meet its own special requirements. (See Figure 1.) The original storage that could meet the needs of video production, direct-attached storage (DAS), was later supplanted by a mix of network-attached storage (NAS), essentially a specialized file server, and storage-area networks (SANs). Each has its advantages and shortcomings.

Some of the rationale broadcasters adopted when buying storage systems rested on the assumption that media files were just works in progress. The real archive was videotape, and it was used to move content between the production islands. In the last few years, the broadcast business has radically changed, and many of the changes are affecting storage requirements.

Videotape technology is reaching the end of its life. Now that cameras can output video as files as well as streams, there are many alternative ways to store video. Optical disks, solid-state memory and hard drives are replacing videotape for acquisition. Recent decreases in the price of fiber bandwidth mean that, for moving content between production processes, video networks can replace a courier carrying a tape.

It's not only the technology that has changed; broadcasters must distribute more programming to more outlets and in more formats. This is driving down revenue per channel, so broadcasters are forced to look for more efficient processes. These can be summed up as file-based operations — leveraging the computing, storage and networking of the IT sector.

As broadcasters replace their videotape libraries with file storage systems, the search is on for storage that meets the needs of the media industry. Broadcasters are more cost-conscious than many traditional users of enterprise storage (government, healthcare, oil and gas), yet their requirements can stretch the technology. Media files are large; an hour of uncompressed 1080p50 video is nearly a terabyte. Real-time transfer of such a file runs at 375MB/s and requires network technologies like 10GbE.

The older storage architectures, NAS and SAN, could handle SD files, but the move to 3Gb/s video means broadcasters must look to more recent storage configurations to meet their needs. To put the requirements in perspective, a typical 15,000rpm Fibre Channel hard drive has a sustained data transfer rate of around 1.2Gb/s, which is below the real-time video transfer rate. A compositing workstation running eight layers of uncompressed 1080p video would need a 24Gb/s connection to the storage. Of course, compression is normally used, with codecs like DNxHD and ProRes 422, and drives are striped together, but this indicates how demanding HD video is of storage and networks.
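
The arithmetic behind these figures is easy to reproduce. The sketch below assumes 10-bit 4:2:2 sampling for 1080p50 and shows where the per-hour storage figure, the 375MB/s real-time rate and the eight-layer compositing load come from; the numbers are illustrative rather than a specification.

```python
# Back-of-the-envelope data rates for uncompressed 1080p50, assuming
# 10-bit 4:2:2 sampling (illustrative figures only).

PIXELS_PER_FRAME = 1920 * 1080
BITS_PER_PIXEL = 20          # 10-bit luma plus 10-bit chroma average for 4:2:2
FRAME_RATE = 50              # 1080p50

payload_bps = PIXELS_PER_FRAME * BITS_PER_PIXEL * FRAME_RATE
print(f"Active picture payload: {payload_bps / 1e9:.2f} Gb/s "
      f"({payload_bps / 8e6:.0f} MB/s)")            # ~2.07 Gb/s, ~259 MB/s

hour_bytes = payload_bps / 8 * 3600
print(f"One hour of material:   {hour_bytes / 1e12:.2f} TB")   # ~0.93 TB, nearly a terabyte

# The 3G-SDI link that carries 1080p50 runs at 3 Gb/s including blanking,
# which corresponds to the 375 MB/s real-time transfer rate quoted above.
sdi_link_bps = 3e9
print(f"3G-SDI link rate:       {sdi_link_bps / 8e6:.0f} MB/s")     # 375 MB/s

# Eight uncompressed layers on a compositing workstation:
print(f"Eight layers:           {8 * sdi_link_bps / 1e9:.0f} Gb/s") # 24 Gb/s
```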

Apart from the basic technical needs, some of the requirements broadcasters will look for in a storage system include:

  • low energy use;
  • low latency file transfers;
  • security, possibly with encryption;
  • ease of maintenance;
  • 50-100 year archive life;
  • compatibility with legacy systems;
  • automated migration to new media; and
  • DR facilities.

It should also have a low cost of ownership and high reliability. Unlike many enterprise users, broadcasters use a mix of operating systems for their client workstations — Windows, Mac and Linux — which means the storage must be flexible about the network file systems it supports.

Many buzzwords are flying around the storage world: clusters, grid, scale-out, virtualization and cloud, to name just a few. Cloud storage, for example, can present a virtualized cluster of storage to a broadcaster that wants to scale out its capacity. Many of these newer architectures are well suited to the needs of broadcasters and can prove more cost-effective than traditional NAS and SAN.

Network attached storage

NAS has been a popular storage appliance, partly because it is a simple concept. It can be considered a specialized file server, a concept familiar to any IT engineer. The problem with the NAS filer is that it doesn't scale well. The network interface is a bottleneck and limits the rate at which files can be served. If more NAS systems are added to a network to increase capacity, space and asset management become problems. A user will see a number of mounted drives, but where should the files be stored? If one appliance fills up, there is no automatic mechanism to use available space on another appliance. The consequence is inefficient use of the available disk storage.

What is needed is an abstraction of the physical storage from the user. That way the user sees a pool of all the available storage. The complexity of drive letters and worrying about tape backup can be hidden from users; they see a virtual data store.
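
As a toy illustration of that abstraction, the hypothetical sketch below presents several NAS appliances as a single pool and places each new file on whichever appliance still has room; a real virtualization layer would of course also handle metadata, locking and redundancy.

```python
# Toy illustration of presenting several NAS appliances as one virtual pool.
# Appliance names and capacities are hypothetical.

class Appliance:
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb
        self.files = {}

class VirtualPool:
    """Users see one namespace; placement on appliances is hidden from them."""
    def __init__(self, appliances):
        self.appliances = appliances

    def store(self, filename, size_gb):
        # Pick the appliance with the most free space.
        target = max(self.appliances, key=lambda a: a.free_gb)
        if target.free_gb < size_gb:
            raise IOError("Pool is full")
        target.free_gb -= size_gb
        target.files[filename] = size_gb
        return target.name        # placement detail, invisible to the user

pool = VirtualPool([Appliance("nas-01", 500), Appliance("nas-02", 500)])
print(pool.store("promo_edit.mxf", 120))   # lands wherever space allows
```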

Scale-out NAS

The various drawbacks of NAS have led storage vendors to look for an architecture that allows NAS to scale out into large storage systems and provides the virtualization that hides the complexities of running enterprise storage.

Scale-out NAS is essentially a cluster of file stores. As files are written to one of the cluster nodes, a management network synchronously replicates them across the cluster. Multiple nodes add capacity to the system and provide higher throughput for read/write operations, while the replication provides resilience. The file management network also allows the cluster to be presented as a single namespace (one drive letter), simplifying the storage mount for users.
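
A minimal sketch of that write path, with hypothetical node names: a write lands on the cluster and is synchronously copied to every node before being acknowledged, so any node can serve the file and the loss of one node does not lose data.

```python
# Minimal sketch of a scale-out NAS write path: the file is replicated to
# every node before the write is acknowledged. Node names are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.files = {}

    def write(self, filename, data):
        self.files[filename] = data

class Cluster:
    """Clients mount one namespace; the cluster fans writes out to all nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, filename, data):
        for node in self.nodes:        # synchronous replication
            node.write(filename, data)
        return "ack"                   # acknowledged only once all copies exist

    def read(self, filename):
        # Any node could serve the read; extra nodes add read throughput.
        return self.nodes[0].files[filename]

cluster = Cluster([Node("node-a"), Node("node-b"), Node("node-c")])
cluster.write("news_pkg_0612.mxf", b"...essence...")
```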

Storage hierarchy

Storage comes in a hierarchy of performance and cost, from Fibre Channel disks and solid-state drives at the high end down to offline data tape. One size does not fit all, so to cover the performance demands of broadcast while optimizing cost, hierarchical storage management (HSM) mixes storage technologies. A typical broadcast system may use SAS drives for real-time video serving, SATA drives for low-cost nearline storage, and a data tape library or virtual tape technology for the archive.
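
The sketch below shows the kind of policy an HSM system applies; the tier names and age thresholds are made up for illustration. Files that have not been touched recently are pushed down the hierarchy, and anything older still goes to the tape archive.

```python
# Sketch of a hierarchical storage management (HSM) policy. Tier names and
# age thresholds are illustrative, not taken from any particular product.
import time

def choose_tier(last_access_epoch, now=None):
    """Map a file's last-access time to a storage tier."""
    now = now or time.time()
    idle_days = (now - last_access_epoch) / 86400
    if idle_days < 30:
        return "online-sas"       # in active production: fast SAS storage
    if idle_days < 365:
        return "nearline-sata"    # recent material: cheap spinning disk
    return "tape-archive"         # long-term archive: data tape or VTL

# Example: a file last opened 400 days ago belongs on tape.
print(choose_tier(time.time() - 400 * 86400))
```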

Virtualization

Virtualization promises advantages to broadcasters. It simplifies storage access for users like editing departments. They don't have to worry about which tier of storage they are using, or how it will be archived and backed up. For the IT department, it can simplify the management of storage, because the virtualization application can manage backups and maintenance and control migration to new media.

Obsolescence and migration

In the quest for faster, cheaper, smaller products, and more recently lower energy use, drive technology is constantly changing. Whether it is hard drives, videotape recorders or data tape drives, equipment rapidly becomes obsolete. Although magnetic tape has a typical life of 25 years, will you still have a drive to play the tape in 25 years' time?

This means that a content archive must be constantly migrated from old to new storage formats. With videotape, this was a costly process, usually only undertaken when the tape was judged to be on the verge of decay. Migration is much simpler in a file-based archive and can be performed as an automated background task.

Energy saving

No storage architect can afford to ignore energy efficiency. Data tape systems have low energy use, but there are also disk-based alternatives in which disks that are not in active use are spun down until needed. Massive array of idle disks (MAID) and virtual tape library (VTL) technologies offer lower-latency reads and writes than data tape, with lower power consumption than arrays of permanently spinning disks.
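
The saving is easy to estimate. The sketch below uses round, assumed figures (roughly 8W for an active 3.5in drive and about 1W in standby, with 10 percent of drives active at any moment) to show why spinning down the bulk of a large array cuts power substantially; real figures vary by drive model and workload.

```python
# Rough estimate of the power saved by a MAID-style array that keeps most
# drives spun down. The wattages are assumed round figures, not measurements.

DRIVES = 200
ACTIVE_W = 8.0           # assumed power of a spinning 3.5in drive
STANDBY_W = 1.0          # assumed power of a spun-down drive
ACTIVE_FRACTION = 0.10   # assume only 10% of drives are serving I/O at a time

always_on = DRIVES * ACTIVE_W
maid = DRIVES * (ACTIVE_FRACTION * ACTIVE_W + (1 - ACTIVE_FRACTION) * STANDBY_W)
print(f"Always spinning: {always_on:.0f} W")   # 1600 W
print(f"MAID-style:      {maid:.0f} W")        # 340 W
```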

Grid storage

Any owner of valuable program assets will want to store files in two or more geographically separated locations to protect against a disaster at the primary site. There are many ways to implement this. One is to replicate files via a data mover using business rules set up in the DAM system. (See Figure 2.) Another approach, which has found favor in the healthcare sector, is to use a grid that automatically and asynchronously replicates files across dispersed storage nodes. (See Figure 3.)
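
As a hedged sketch of the first approach, the hypothetical data mover below applies a simple business rule (here, only finished masters are copied, renders are skipped) and replicates qualifying files to a second site; a real DAM-driven mover would also track job state, retries and checksums. All paths are illustrative.

```python
# Hypothetical sketch of a rule-driven data mover that replicates selected
# files to a disaster-recovery site. Paths and rules are illustrative only.
import shutil
from pathlib import Path

PRIMARY = Path("/storage/primary")
DR_SITE = Path("/storage/dr-site")   # e.g. a mount of the remote grid node

def should_replicate(path: Path) -> bool:
    """Business rule from the DAM: replicate finished masters, skip renders."""
    return path.suffix == ".mxf" and "render" not in path.parts

def replicate():
    for src in PRIMARY.rglob("*"):
        if src.is_file() and should_replicate(src):
            dst = DR_SITE / src.relative_to(PRIMARY)
            if not dst.exists():                 # copy only new material
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)

if __name__ == "__main__":
    replicate()
```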

Cloud storage

Cloud storage offers virtualized storage as a service. Users pay for the storage capacity they need at a given time, rather than purchasing a storage platform. It offers a low cost per byte, but for large media files with high throughput demands, the cost of the interconnections to the storage could be high and must be factored into any business case. Many broadcasters are wary of trusting their content to third parties, but a cloud service can be private as well as public. It can be thought of as outsourcing the ownership and management of the storage. Issues like security can be handled with well-established technologies.

Summary

Choosing an enterprise storage system to replace a videotape library is no small task. Many storage components have a life of less than five years, yet an archive may have to store valuable program assets for decades or centuries into the future. So it's not like choosing a RAID array for an edit bay.

Videotape archives are resilient to failure. One tape may become unplayable, but that doesn't affect the rest of the collection. A central data storage system is more at risk of total failure, which means the selection of a system must be given much more careful thought, with a thorough risk analysis of the alternative solutions. This is even more important for an archive that is to store content for many decades. The rapid pace of technological change makes it difficult to look more than five years ahead.

There are many approaches; you can own the storage or outsource to a service provider. Resiliency can be provided by clusters, a grid approach or by a service provider. So the answer to the question about which technology to use — grid, clusters or virtualization — is yes to all of them.
