Intelligent Storage for Media Content Systems

Video servers first focused on broadcast television operations for commercial playout and only recently migrated to long-form program content management. As file formats and media structures have stabilized, and as the interface capabilities between devices and platforms have matured, the "video server" has truly moved from video-centric storage to media content storage and processing.

As the evolution of the video server continues, one of the leading topics remains extending the storage capacity of server systems. Individual hard drive capacities have increased into the hundreds of gigabytes, and system bandwidth requirements seem to be growing proportionately. So the new question becomes "how will these storage components be assembled, from a disk-array perspective, to meet the challenges of the future?"
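By way of illustration, consider how that question plays out arithmetically. The short sketch below, using assumed per-drive figures rather than any vendor's specifications, sizes a RAID-5 set against a capacity target and a bandwidth target at the same time:

```python
# Illustrative only: assumed drive figures, not vendor specifications.
DRIVE_CAPACITY_GB = 300     # a "hundreds of gigabytes" class drive
DRIVE_MB_PER_S = 60         # assumed sustained throughput per drive, MB/s

def raid5_array(n_drives):
    """Usable capacity and aggregate read bandwidth of an n-drive RAID-5 set."""
    usable_gb = (n_drives - 1) * DRIVE_CAPACITY_GB  # one drive's worth goes to parity
    read_mb_s = n_drives * DRIVE_MB_PER_S           # reads stripe across all drives
    return usable_gb, read_mb_s

# How many drives to hold 2 TB of media and feed four 21 MB/s uncompressed-SD streams?
for n in range(3, 12):
    cap, bw = raid5_array(n)
    if cap >= 2000 and bw >= 4 * 21:
        print(f"{n} drives: {cap} GB usable, {bw} MB/s aggregate read")
        break
```

In this example capacity, not bandwidth, is the binding constraint; with different compression choices the reverse can easily be true, which is why the two must be sized together.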

Fig 1: Example of network attached storage (NAS) in a multiplatform production environment where an "intelligent storage" approach is used to manage overall bandwidth across the network.
Depending upon a facility's activities, whether looking to increase storage capacity or making choices for a media-server system in general, a routine question becomes "how can my storage capacity grow?" When that facility is a content-creation house, a more prevalent question becomes "what types of data can be stored on the new capacity?"

With the principal forms of disk storage systems, NAS (network attached storage), DAS (direct attached storage) and the venerable SAN (storage area network), users must continually analyze both cost and performance issues. For production services, which routinely mix multiple file or compression formats, users must understand how their server platform will perform with formats other than the more conventional MPEG or DV.

To maximize interoperability, the server and storage system should ideally store and interchange many forms of media files. Files may be found in any mixture of individual essence (video, audio or graphics) and its associated metadata; may be stored in raw or compressed form; and may be individual clips or completed segments. The ideal server system would be a large, transparent bit-bucket capable of storing everything from GIFs and TIFFs to MPEG-2 and MPEG-4. The file structure could be a mix of MXF, GXF and AAF, or could include associated and disassociated metadata in HTML, XML or some proprietary format.
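What would such a bit-bucket look like at the interface level? A minimal sketch, with every name below hypothetical, is a store that pairs opaque essence with its wrapper type and metadata, and never inspects the payload it holds:

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class MediaAsset:
    """Hypothetical record pairing opaque essence with its metadata."""
    essence: bytes                 # raw or compressed payload, stored as-is
    wrapper: str                   # e.g. "MXF", "GXF", "AAF", "TIFF", "MPEG-2"
    metadata: dict = field(default_factory=dict)  # XML, HTML or proprietary

class BitBucket:
    """Format-agnostic store: it never inspects the essence it holds."""
    def __init__(self):
        self._store = {}

    def put(self, asset: MediaAsset) -> str:
        key = hashlib.sha1(asset.essence).hexdigest()  # content-addressed handle
        self._store[key] = asset
        return key

    def get(self, key: str) -> MediaAsset:
        return self._store[key]

bucket = BitBucket()
clip_id = bucket.put(MediaAsset(b"...essence bytes...", "MXF", {"title": "promo"}))
```

Because the store keys on the bytes themselves, the same interface serves a graphic, a clip or a finished segment without caring which it has been handed.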

CHANGE IN STRATEGY

Historically, server vendors have remained application-specific, offering only a modest variance in file formats (two or three at most) while minimizing the potential for failures during a mission-critical commercial or program playout. Driven by the acceptance of storing and managing all forms of media, the storage vendors who supply drives and systems to both server vendors and end users are changing their strategy for storage solutions. Recently, server vendors have taken the approach of disassociating the storage platform from the server engine, believing that this allows their products to serve many different moving-image categories. Some have taken an approach whereby a large NAS system is managed in a "file agnostic" scheme, tailoring the system to applications ranging from graphics composition to video editing to real-time/on-air playback.

Regardless of the hardware solution's approach, the proper selection of all the components in a media platform requires a thorough definition of uses and workflow. Having a clear understanding, before purchase, of which components will make up the online and nearline storage, offline and archive, proxy or browse elements, network backbone and media, file-format transcode engines, metadata generators and editors, and particularly the workflow or operations-management applications, will help "bulletproof" the system from a workflow perspective.
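One way to force that definition exercise is to write the inventory down in machine-readable form before a single purchase order is cut. The sketch below is purely illustrative; every tier name and figure in it is an assumption standing in for a real facility's numbers:

```python
# Hypothetical facility definition: all names and numbers are assumptions,
# illustrating the kind of inventory worth settling before purchase.
facility = {
    "online_storage":   {"type": "intelligent NAS", "capacity_tb": 4,  "guaranteed_mb_s": 400},
    "nearline_storage": {"type": "SATA array",      "capacity_tb": 12, "guaranteed_mb_s": 80},
    "archive":          {"type": "data tape",       "capacity_tb": 40},
    "proxy_browse":     {"format": "MPEG-4", "bitrate_mbps": 1.5},
    "network_backbone": {"media": "Gigabit Ethernet"},
    "transcode":        {"formats": ["MXF", "GXF", "AAF"]},
    "workflow_mgmt":    {"app": "asset manager (to be selected)"},
}

def undefined_elements(defn):
    """Flag workflow elements left empty before the purchase decision."""
    return [name for name, spec in defn.items() if not spec]

assert undefined_elements(facility) == []  # every element has at least a placeholder
```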

Consider a production example: a facility focused on graphics and editorial content, with no mission-critical "on-air" services. Its workflow includes capturing or generating images or graphics, creating 2D and 3D effects, rendering and compositing those image sequences, and nonlinear editing for release of finished content. Workstations range from serious machines running dedicated high-end applications to common desktop computers loaded with conventional shrink-wrap products. Production tasks run from routine functions such as file-format conversion and paint or touch-up, up through high-definition (and beyond) image manipulation, effects work and image cleanup or wire removal.

Exchanging elements between workstations and centralized storage was previously confined to network file transfers using FTP or telnet sessions. Files were transferred to the local drive or array; or, if the workstations had sufficient computing bandwidth, they were connected via Ethernet through a switch to a server that permitted simultaneous work across the network. This approach is not unlike that still found in both small graphics boutiques and larger full-service post-production facilities. However, as files grow larger and workstations become more powerful, network congestion occurs as dozens of requests for transfers of large contiguous files pile up against only modest file-server management. The result? The production process is brought to its knees.
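To make the legacy pattern concrete, the fragment below shows the per-file FTP pull just described, using Python's standard ftplib with placeholder host, credentials and paths. The point is the pattern, not the details: every workstation drags its own full copy of the file across the network before work can begin, which is exactly what congests the switch as files grow:

```python
from ftplib import FTP

# Hypothetical host and path; each workstation repeats this pull for every
# file it needs, with no coordination or bandwidth management between peers.
with FTP("fileserver.example.com") as ftp:
    ftp.login(user="artist", passwd="secret")
    with open("shot_042.dpx", "wb") as local:
        ftp.retrbinary("RETR /projects/spot/shot_042.dpx", local.write)
```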

'AS IS' BASIS

Today it is no longer necessary to suffer the headaches of an overtaxed network or an unmanaged central storage array. The same storage vendors that once supplied only OEM drive arrays to the video server vendors have begun to take a new stance. In growing proportions, dedicated storage solutions comprising a hard-drive array chassis and storage-management software provide avenues that allow end users to tailor storage and bandwidth to individual facility needs on a workflow basis (see Table 1). These new "intelligent storage systems" can be adjusted to throttle input and output bandwidth, to set priorities for specific workstations or applications, and to allow multiple operating systems to coexist and share common storage volumes. Critical requirements are given sufficient delivery flow of the datastream to maintain continuous operation. At the same time, and on the same storage, less critical or moderate-performance software tasks are given just enough bandwidth to operate without degrading the other systems.
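How a given vendor implements that throttling is proprietary, but the underlying idea can be sketched with a classic token bucket: each workstation or application draws its I/O from a budget that refills at its configured rate, so a mission-critical stream keeps its guaranteed flow while background tasks get "just enough." The rates and class names below are assumptions for illustration:

```python
import time

class TokenBucket:
    """Illustrative per-client rate limiter: grant I/O only as budget accrues."""
    def __init__(self, rate_mb_s, burst_mb):
        self.rate = rate_mb_s        # steady-state budget, MB per second
        self.capacity = burst_mb     # largest burst the client may take at once
        self.tokens = burst_mb
        self.last = time.monotonic()

    def acquire(self, mb):
        """Block until `mb` megabytes of budget are available, then spend them."""
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= mb:
                self.tokens -= mb
                return
            time.sleep((mb - self.tokens) / self.rate)

# Assumed policy: playout is mission-critical, rendering is best-effort.
playout = TokenBucket(rate_mb_s=21, burst_mb=4)  # one uncompressed-SD stream
render  = TokenBucket(rate_mb_s=5,  burst_mb=1)  # throttled background task
playout.acquire(2)  # each 2 MB read waits for budget under its own ceiling
```

Because each client blocks only against its own bucket, a render job that saturates its 5 MB/s ceiling never eats into the playout stream's guaranteed 21 MB/s.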

Table 1: Bandwidth requirements
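The figures behind a table like Table 1 come from straightforward arithmetic: raster size times frame rate times bits per pixel. The worked sketch below, assuming 8-bit 4:2:2 sampling (16 bits per pixel on average), estimates the sustained rate one uncompressed stream demands of the store:

```python
def video_mb_per_s(width, height, fps, bits_per_pixel):
    """Sustained data rate of one uncompressed video stream, in MB/s."""
    return width * height * fps * bits_per_pixel / 8 / 1_000_000

# 8-bit 4:2:2 sampling averages 16 bits per pixel.
sd = video_mb_per_s(720, 486, 29.97, 16)    # ~21 MB/s per SD stream
hd = video_mb_per_s(1920, 1080, 29.97, 16)  # ~124 MB/s per HD stream
print(f"SD: {sd:.0f} MB/s  HD: {hd:.0f} MB/s")
```

Multiply by the number of simultaneous streams, and add headroom for file transfers and rendering traffic, and the aggregate quickly explains why bandwidth must be managed rather than merely provisioned.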
Intelligent storage applications are constructed without a dedicated server and without reliance on workstation CPU support, which traditionally devoted its own horsepower to managing bandwidth and thereby inhibited performance through internal bus-bandwidth restrictions or other complications.

This new approach to storage subsystems allows the user to control the environment and tune it for maximum performance. Users can add (or subtract) not only the space allocated for storage on an array, but can also add storage either as segmented volumes or as a gross increase in the total storage volume. Some vendors allow storage expansion and configuration on the fly, during work sessions and without a system shutdown.

Extending the idea of the modular, intelligent storage system to multiple operating systems that can address the same volume builds on the recent acceptance of Linux and the demand that operating systems such as Mac OS X and Windows XP share data across platforms. These latest concepts are opening the door to a single storage system as the only store for an entire facility, in turn providing flexibility, scalability and extensibility. With drive densities increasing, these same stores can also be transported to other facilities, in whole or in part, for remote rendering without necessarily needing first to offload or transfer to another medium.

With solutions that can control delivery-bandwidth parameters, the potential to add greater amounts of online storage for the enterprise also increases. Even though a broadcaster or content-delivery service may still prefer to segregate mission-critical program or commercial playback on conventional video server platforms, the recent developments in both NAS and SAN technologies, coupled with intelligent storage management, can extend the desktop, workstation and content-production workflow to meet next-generation needs.

Karl Paulsen
Contributor

Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. For over 25 years he has written on featured topics in TV Tech magazine, penning the magazine’s Storage and Media Technologies and Cloudspotter’s Journal columns.