Virtualized media data centers
Media companies have embraced IP-based architectures as the standard solution for file-based media production. This evolution enables today's media environments to treat video as ordinary files, independent of the video format. On a broader scale, it has launched a paradigm shift away from closed, proprietary media workflow solutions and toward architectural solutions constructed with generic IT technology.
The innovation of Data Center Bridging (DCB), including the creation of a lossless, high-quality storage networking environment, puts the Ethernet-based network at the center of both storage and client infrastructures. (See the article “Employing Data Center Bridging in media networks” in the January 2010 issue of Broadcast Engineering.) However, potential problems remain when porting media applications to a file-based environment.
The high throughput and quality demands of file-based media production require powerful, scalable storage systems with lossless characteristics. At the same time, the peculiar “bursty” characteristics of media file transport to client applications, such as high-resolution post-production editing, pose similar requirements for the IP client network — requirements that only data center networks have historically addressed.
Media production workflows are typically complex and dynamic, integrating many different media services (e.g., ingest, storage and transcoding). Given the heavy transport demands of media, most media services benefit from being physically close to the clustered storage. Hence, the optimal media workflow architecture should integrate both storage and media services within a storage cluster environment, based on a scalable virtual platform.
Our laboratory tested the viability of a virtualized media data center architecture and found that this approach simplifies data flows, increases workflow efficiency and radically reduces overall workflow execution times.
Media-oriented architecture
The broadcaster back office has evolved to accommodate file-based workflows in a largely unstructured, chaotic way. Often, vendors have created solutions for a particular media service without taking the complete technology picture into account. As a result, most architectures today simply link multiple self-contained media service products — each with its own local storage, servers and network — in a best-effort mode via the central IP network.
This approach creates a great deal of duplication and complexity, to the point that the system is nearly unmanageable. Even more problematic, the architecture becomes heavily dependent upon the central IP network — a network composed of classical IP switches designed for the IT world, which are no match for the bursty nature of media traffic. (See Figure 1.)
Because most media services (and their local storage and servers) reside in the client IP network in a loosely coupled way, the overall infrastructure consists of file-based islands. Most data traffic is launched by the media asset management (MAM) application or the media applications themselves, independent of each other and unaware of the underlying architecture. Effectively, traditional sequential tape-based workflows have been replaced by almost identical sequential file-based workflows. This leads to inefficient data flows, as files are exchanged back and forth between islands in an any-to-any traffic pattern, with many duplicated copies. Because of the bursty nature of the traffic, packet loss results in unpredictable transfer delays or even transfer loss. Fortunately, we can envision a solution.
The reason many media services use local (and often proprietary) storage is that they require processing power located close to the storage itself. The straightforward answer, then, is to unify the local server and storage platforms of all services into a centralized platform — a virtualized media data center. Assuming such a platform meets several core requirements (guaranteed throughput, linear scalability, high reliability, support for multiple operating systems, and flexibility and efficiency through service virtualization), it can create a much simpler, more efficient architecture. (See Figure 2.)
In this scenario, almost all media services run on the processing nodes of the virtualized media data center cluster and use the cluster's uniform central storage. This scalable clustered system now replaces the IP network as the basic platform for interconnecting media services.
Virtual media data center
To test the viability of this approach, we created a media data center architecture using a DCB-based Workhorse Application Raw Power (WARP) media storage cluster employing IBM's General Parallel File System (GPFS), the Cisco Nexus 5000 DCB switch and Cisco's Unified Computing System (UCS-C) servers. (See Figure 3.) As demonstrated in our previous tests, a DCB cluster with Priority Flow Control (PFC) implemented can sustain 100-percent efficiency and ideal scalability in file-based media environments. (See the article “Building IP-centric media data centers” in the March 2010 issue of Broadcast Engineering.)
The DCB cluster enables the physical network-attached node (NAN) machines to run different operating systems (Windows, Linux, etc.), so multiple media services on different operating systems can run within the same cluster. Resource utilization of these processing nodes can be optimized further by defining multiple virtual machines on each physical NAN node. Each virtual machine acts as a GPFS cluster node, meaning that the same physical machine can now run multiple instances of different operating systems, creating a virtualized architecture. (See Figure 4.)
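To picture how a virtual machine becomes an ordinary GPFS cluster node, the minimal sketch below builds a node-descriptor list and hands it to the standard GPFS mmaddnode command. The host names, node designations and the use of a script are assumptions made purely for illustration; in practice the GPFS mm* commands are typically run directly by the cluster administrator.

```python
# Hypothetical sketch: register virtual machines as additional GPFS cluster nodes.
# Node names and designations are illustrative, not taken from the test setup.
import subprocess
import tempfile

# Each virtual machine appears to GPFS as a normal cluster node, described as
# NodeName:NodeDesignations (processing-only nodes are plain "client" nodes).
virtual_nodes = [
    "rewrap-vm01:client",      # Windows VM running MXF rewrapping
    "rewrap-vm02:client",
    "transcode-vm01:client",   # Linux VM running transcoding tasks
    "transcode-vm02:client",
    "browse-vm01:client",      # Linux VM exposing the proxy video
]

def add_virtual_nodes(nodes):
    """Write a GPFS node file and add the listed nodes to the existing cluster."""
    with tempfile.NamedTemporaryFile("w", suffix=".nodes", delete=False) as f:
        f.write("\n".join(nodes) + "\n")
        node_file = f.name
    # mmaddnode accepts a node file via -N; this requires GPFS admin privileges.
    subprocess.run(["mmaddnode", "-N", node_file], check=True)

if __name__ == "__main__":
    add_virtual_nodes(virtual_nodes)
```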
Optimizing workflows
Implementing media services on the processing nodes of a virtualized media data center mounted on clustered, lossless central storage can shorten the transport paths and simplify data flows considerably, increasing workflow efficiency and optimizing the client IP network. To demonstrate this, our lab tested a relatively simple workflow example: the ingest of a video clip from a file-based camera into the central storage, and the selection of the material and transport to a high-resolution editing station.
To understand the advantages of the virtualized media data center approach, let's first explore this workflow in a typical file-based production environment. It consists of essentially three steps:
- Material is transferred from the memory card of the camera into central storage.
- A low-resolution proxy is created so that any journalist can view the material and select the relevant clips. The journalist creates an editing decision list (EDL) to mark the selections.
- The system uses the EDL to transport selected pieces of material to the nonlinear file-based editing suite (in this example, an Avid Media Composer connected to an Avid ISIS platform). There, the journalist works with the editing technician to perform the editing and create the result as a media file or multiple media files.
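To illustrate how the EDL in the steps above drives the transfer of only the selected material, the sketch below parses CMX3600-style event lines and reports the source clips and timecode ranges that would have to be moved to the editing storage. The EDL layout and file name are assumptions for illustration; a real installation would use the MAM system's own EDL and partial-restore interfaces.

```python
# Illustrative sketch: read a CMX3600-style EDL and report which source
# ranges a traditional workflow would need to transfer to the edit suite.
# The file name and EDL contents are hypothetical.
import re

EVENT = re.compile(
    r"^(\d+)\s+(\S+)\s+(\S+)\s+(\S+)\s+"                          # event, reel, track, transition
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})\s+"   # source in / source out
    r"(\d{2}:\d{2}:\d{2}:\d{2})\s+(\d{2}:\d{2}:\d{2}:\d{2})"      # record in / record out
)

def selected_ranges(edl_path):
    """Yield (reel, source_in, source_out) for every event line in the EDL."""
    with open(edl_path) as f:
        for line in f:
            m = EVENT.match(line.strip())
            if m:
                _event, reel, _track, _trans, s_in, s_out, _r_in, _r_out = m.groups()
                yield reel, s_in, s_out

if __name__ == "__main__":
    for reel, s_in, s_out in selected_ranges("story_0042.edl"):
        print(f"transfer {reel}: {s_in} -> {s_out}")
```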
When we view the data transfers required to execute this workflow in a conventional file-based production architecture, we see that the actual data flow is much more complex than the simple workflow would suggest.
There are several reasons for this. Because the individual work centers in this model operate as islands — and have not been optimized to integrate efficiently into an overall file-based workflow — many extra file transfers are required. (For example, format conversion to proxy video may require fetching the video from the central storage, transporting it over the central IP network to the transcoding work center and transporting the result back again.) Work centers from different vendors may also require multiple copies of the same media with different Material Exchange Format (MXF) file wrappers. Today's file-based production environments also make many duplicate copies of media for redundancy and data protection purposes.
This seemingly simple workflow results in a data flow that requires 36 file transfers over the storage and IP network. This includes video and audio file transfers between local and central storage for ingest, rewrapping, conforming, transcoding for both high- and low-resolution versions, backup copies of all versions, etc. Even though this network uses 10Gb/s backbone links, packet loss induced by the bursty nature of the media traffic degrades throughput efficiency to as low as 10 percent of the theoretically available link bandwidth. This, together with the large number of consecutive transfers, leads to a long overall execution time.
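As a rough illustration of why this matters, the back-of-envelope sketch below compares the time the 36 sequential transfers take at the observed roughly 10 percent link efficiency against the same payload moved at full efficiency over a lossless link. The payload size per transfer is a hypothetical figure chosen purely for illustration.

```python
# Back-of-envelope comparison; the 50GB-per-transfer payload is hypothetical,
# while the 36 transfers and ~10% efficiency figure come from the measured workflow.
LINK_GBPS = 10      # 10Gb/s backbone link
TRANSFERS = 36      # file transfers in the traditional data flow
PAYLOAD_GB = 50     # illustrative amount of HD material per transfer (gigabytes)

def total_hours(efficiency):
    """Total sequential transfer time in hours at a given link efficiency."""
    seconds_per_transfer = (PAYLOAD_GB * 8) / (LINK_GBPS * efficiency)
    return TRANSFERS * seconds_per_transfer / 3600

print(f"lossy client IP network (10% efficient): {total_hours(0.10):.1f} h")
print(f"lossless DCB cluster (100% efficient):   {total_hours(1.00):.1f} h")
```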
Now, let's examine this same workflow as implemented on a virtualized media data center platform, using Ardrome MAM system media services.
The setup for this test employed the virtualized media data center described earlier, configured as follows:
- A Windows GPFS NAN node served to ingest video files from the camera.
- The main media services used in this workflow (rewrapping between MXF file formats, transcoding and browse viewing) were implemented on a single physical NAN node configured for up to eight virtual nodes: two Windows virtual machines to perform the rewrapping, five Linux virtual nodes to perform parallel transcoding tasks (see the dispatch sketch after this list) and one Linux virtual node to expose the proxy video to the browse viewing client. (The UCS-C server provided more than ample RAM capacity to support all of these virtual nodes.)
- A third NAN node was used to run the respective MXF Linux- and Windows-based server gateways for both Apple Final Cut Pro (FCP) and Avid editing clients accessing the high-resolution media. (Virtual Linux and Windows nodes were implemented side by side on the same physical machine.)
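One way to picture how work might be spread across the five transcoding virtual machines is the simple dispatch sketch below. The node names, directory paths and the remote transcode_proxy command are hypothetical; the actual test used the MAM system's own job control. The key point is that every virtual node mounts the same clustered storage, so only the job is dispatched, never the media itself.

```python
# Hypothetical sketch: fan proxy-transcoding jobs out over the Linux transcode VMs.
# Host names, paths and the remote command are illustrative only.
import subprocess
from concurrent.futures import ThreadPoolExecutor

TRANSCODE_NODES = [f"transcode-vm{i:02d}" for i in range(1, 6)]
HIRES_DIR = "/gpfs/media/hires"     # shared cluster storage (illustrative paths)
PROXY_DIR = "/gpfs/media/proxy"

def transcode_on(node, clip):
    """Run a proxy transcode of one clip on a given virtual node via ssh."""
    src = f"{HIRES_DIR}/{clip}"
    dst = f"{PROXY_DIR}/{clip}"
    subprocess.run(["ssh", node, "transcode_proxy", src, dst], check=True)

def dispatch(clips):
    # One worker per virtual node; clips are assigned round-robin.
    with ThreadPoolExecutor(max_workers=len(TRANSCODE_NODES)) as pool:
        for i, clip in enumerate(clips):
            pool.submit(transcode_on, TRANSCODE_NODES[i % len(TRANSCODE_NODES)], clip)

if __name__ == "__main__":
    dispatch(["story_0042_a.mxf", "story_0042_b.mxf"])
```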
The workflow was implemented as follows, for both Apple FCP and Avid Media Composer material:
- Files were ingested into the clustered central storage. Immediately after arrival, a hard link was created to place the high-resolution media files in the correct directory of the respective FCP or Avid project structure (a sketch of this step follows the list). This gave the high-resolution editing clients access to the media without the need to move or copy the files to a different directory.
- The files were read by the rewrapping process running on a virtual Windows node.
- Files were written back to the central cluster directly to the correct final location — the high-resolution directory.
- The files were then read by the transcoding engine on the Linux virtual machine of the same node. (In the future, this could be further optimized by transferring the file directly between the memory of the rewrapping virtual machine and the transcoding node, since these virtual nodes reside on the same physical machine.) The transcoder generated the low-resolution version and placed it into the low-resolution directory of the central storage. Transcoding ran at 1.2X real time (using DNxHD 120Mb/s HD video).
- The media item was then checked into the MAM system itself.
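The hard-link step referred to in the first item above can be illustrated with a few lines of Python. The directory layout and project names are hypothetical; the point is that os.link() makes the same physical file appear in the editing project tree without any copy or move, which is only possible because everything resides on the same clustered file system.

```python
# Minimal sketch of the hard-link step; directory and file names are hypothetical.
# A hard link gives the editing client a second directory entry for the ingested
# file, so no bytes are copied and no second copy is stored.
import os

INGEST_DIR = "/gpfs/media/ingest"
AVID_PROJECT_DIR = "/gpfs/projects/avid/story_0042/media"
FCP_PROJECT_DIR = "/gpfs/projects/fcp/story_0042/media"

def expose_to_editors(filename):
    """Link a freshly ingested clip into the Avid and FCP project trees."""
    src = os.path.join(INGEST_DIR, filename)
    for project_dir in (AVID_PROJECT_DIR, FCP_PROJECT_DIR):
        os.makedirs(project_dir, exist_ok=True)
        dst = os.path.join(project_dir, filename)
        if not os.path.exists(dst):
            os.link(src, dst)   # hard link: same inode, new name, no data moved

if __name__ == "__main__":
    expose_to_editors("story_0042_a.mxf")
```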
The virtualized media data center implementation effectively ran the workflow and clearly simplified the resulting data flow.
Ultimately, the test demonstrated the following advantages of this approach:
- Overall workflow execution time was reduced by 80 percent compared with the traditional architecture (with transcoding speed, not the network, now setting the maximum speed).
- Total file transfers were reduced by an order of magnitude.
- The virtualized media data center performed all processing steps, offloading all media traffic from the IP network and making optimal use of available CPU and memory resources.
- No excess, duplicate, or intermediate copies of media files were stored.
- The 10Gb/s lossless cluster network was not a bottleneck in any of the workflow steps.
- High-resolution material was made available to editing clients immediately after ingest, with no double wrapping/unwrapping process required.
- No time-consuming transfers of high-resolution material to external storage systems were necessary.
Conclusion
We have demonstrated that implementing media services on a virtualized media data center mounted on clustered central storage shortens transport paths and dramatically simplifies data flows compared with today's common file-based media architectures. This approach can reduce the number of file transfers, reduce IP network traffic by 90 percent or more and improve workflow execution time by 80 percent. For media network architects, the ability to radically reduce dependencies on the client IP network will also make it much easier to design a media-aware client network capable of handling bursty media traffic, and avoid bottlenecks and performance issues.
Luc Andries is a senior infrastructure architect and storage and network expert with VRT-medialab/IBBT/CandIT-media.