Bandwidth Basics: Looking at Storage From a 3D Perspective


(Figure: Transfer Time as a Function of Throughput)

The introduction of Gigabit Ethernet connectivity and networked storage into video server technologies opened the door even wider to a newer generation of multiformat video delivery and content storage capabilities. With 300 GB-plus hard drive capacities, technologies such as perpendicular recording, and a market-driven push toward high-definition content delivery, the challenges of mixing real-time HD and SD playback are now addressed by higher-performance, wider-bandwidth networked video servers.

Current-generation GbE (Gigabit Ethernet) connected systems offer improvements over the heretofore expensive and cumbersome Fibre Channel systems, providing better data-moving capabilities for handling local and distributed storage. These IP-based technologies provide a path to high-availability systems—a crucial element in getting a file to play on demand, synchronously and isochronously. Important factors in obtaining high availability include redundancy, fault tolerance and resiliency.

Storage resiliency for online, nearline and disaster recovery storage is a key aspect of a system's design. For true storage resiliency, a system must prevent errors and system failures, and be able to recover quickly and unobtrusively from those that do occur. Such systems typically employ early detection methods and self-healing processes, core elements of what's considered "under the hood" in server/storage architectures.

Mitigating business interruptions necessitates having media stored in more than one location. That location may be adjacent to the main store, as in mirrored servers or archives, or off site in a deep archive or disaster recovery bunker. With modern GbE networked storage configurations, populating those redundant stores becomes easier to configure and manage.

However, IT systems engineers must understand that managing the isochronous delivery of media files requires different approaches from those employed in traditional office, database or transaction-processing environments.

3D CONFIGURATIONS

Media operations require systems that support faster access times, higher data rates and the transport of larger contiguous file sets between storage subsystems or site locations. Next-generation storage systems employed in video media servers address these requirements by effectively managing file movement (transfers) between storage elements while at the same time improving bandwidth, access time and content recall for live-to-air operations.

Understanding storage system configurations for file-based, media-centric operations, and why manufacturers sometimes take the approaches they do, requires a fundamental understanding of bandwidth and access. Serving up storage for content delivery can be thought of from a three-axis perspective.

In this model, the X axis defines the amount of physical data needed in a storage subsystem. Storage is typically specified as some number of hours at a given data rate, with a definition of the video and audio essence capacities. An example of an appropriate specification might be stated as "120 hours at 56 Mbps of high-definition, MPEG-2 4:2:0 video encoding with four AES channels (i.e., up to 8 tracks)." The specification should also call for sufficient VANC data for closed captioning (CEA-708), AMOL watermarking or program content tracking data, a timecode track, and metadata such as the Dolby mode or other private user data parameters.

This specification is provided to the server vendor, who then inquires as to the required number of simultaneous play-out and record channels at the defined 56 Mbps high-definition data rate. Storage calculators, specifically configured for the particular model of server and the methodology of storage protection (e.g., the RAID level), are then used to estimate the storage requirements.
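As a rough illustration of what such a calculator does, the sketch below converts the "120 hours at 56 Mbps" example into an approximate raw capacity. The audio/VANC overhead percentage and the RAID-5 group size are illustrative assumptions, not vendor figures.

# Hypothetical sizing sketch (assumed overheads, not a vendor calculator).
def estimate_raw_storage_tb(hours, video_mbps, audio_vanc_overhead=0.10,
                            raid_group_size=8):
    seconds = hours * 3600
    video_bytes = video_mbps * 1_000_000 / 8 * seconds        # video essence only
    essence_bytes = video_bytes * (1 + audio_vanc_overhead)    # add audio, VANC, metadata
    usable_fraction = (raid_group_size - 1) / raid_group_size  # RAID-5: one drive of parity
    return essence_bytes / usable_fraction / 1e12              # decimal terabytes

print(round(estimate_raw_storage_tb(120, 56), 2))  # roughly 3.8 TB raw for this example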

If the system is SCSI-based, the number of LUNs (logical unit numbers) may be stated. A LUN is a unique identifier used on a SCSI bus to distinguish between devices that share the same bus; the term is used in the SCSI protocol as a means to differentiate individual disk drives within a common SCSI target device such as a disk array. A base number of LUNs is provided by the vendor, given that there are no additional service requirements in the base system.

Additional system service requirements would include such needs as FTP file transfers between the main storage system and another server; file transfers into the system from a transcoding server; activities such as archiving or restoring to and from the server; or transfers to or from a remote site.

If the system requires mirrored servers, that number also enters the equation and is applied to operations including load balancing between main and backup, an ongoing task that becomes more complex when both record and play-out functions occur in concert with one another.

The next dimension, our Y axis in this 3D model, addresses bandwidth and is often referred to as the system throughput. Sufficient bandwidth is important when other service activities—such as media movement from server to archive or server to transcoder—must occur simultaneously with storage calls for playback and/or ingest. Think of bandwidth like the volume of a pipe: the bigger the pipe, the more data that can be moved. Bandwidth can be increased by increasing the number of LUNs and by adding storage arrays to the overall system.
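A back-of-the-envelope budget is one way to check whether the pipe is big enough: add up the real-time channels and the background transfers, then leave some headroom. The channel counts, transfer rates and the 1.5x headroom factor in the hypothetical sketch below are illustrative assumptions, not specifications from any vendor.

# Hypothetical throughput budget (illustrative numbers, not a vendor model).
def required_throughput_mbps(play_channels, record_channels, stream_mbps,
                             transfer_streams=0, transfer_mbps=0, headroom=1.5):
    realtime = (play_channels + record_channels) * stream_mbps   # play-out and ingest
    background = transfer_streams * transfer_mbps                # FTP, archive, mirroring
    return (realtime + background) * headroom                    # leave working headroom

# Example: 4 play-out and 2 record channels at 56 Mbps, plus two 200 Mbps
# background transfers (say, an archive restore and a mirror update).
print(required_throughput_mbps(4, 2, 56, transfer_streams=2, transfer_mbps=200))  # 1104.0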

The final dimension, the Z axis, addresses the depth of protection and system resiliency. The simplest protection is also the most common: the RAID level employed at the individual drive or drive-array level. RAID 1 (mirroring) provides duplication of data, but little bandwidth improvement. RAID 3 and RAID 5 both provide parity protection of data and an element of increased bandwidth. Combining RAID levels (e.g., RAID 50) increases system bandwidth and, at the same time, the degree of protection, providing failover for data redundancy with a modest decrease in performance during the failure period.
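The trade-off each level makes between raw and usable capacity can be sketched quickly. The comparison below assumes eight 300 GB drives and a two-group layout for RAID 50; actual figures depend on the controller and group sizing.

# Hypothetical usable-capacity comparison (8 x 300 GB drives assumed).
def usable_capacity_gb(level, drives=8, drive_gb=300):
    if level == "RAID 1":                   # mirrored pairs: half the raw capacity
        return drives * drive_gb // 2
    if level in ("RAID 3", "RAID 5"):       # one drive's worth of parity per group
        return (drives - 1) * drive_gb
    if level == "RAID 50":                  # two RAID-5 groups striped together
        groups, per_group = 2, drives // 2
        return groups * (per_group - 1) * drive_gb
    raise ValueError(level)

for level in ("RAID 1", "RAID 3", "RAID 5", "RAID 50"):
    print(level, usable_capacity_gb(level), "GB usable")   # 1200, 2100, 2100, 1800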

Because data movement and storage are of paramount importance, the time needed to access that data is always at a premium. The faster the network and the quicker a server can deliver the media content, the higher the overall performance.
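The figure referenced at the top of this article plots transfer time as a function of throughput; the hypothetical sketch below makes the same point numerically for a one-hour, 56 Mbps file moved over a GbE link. The 70 percent link-efficiency figure is an illustrative assumption.

# Hypothetical transfer-time estimate (assumed 70% usable link efficiency).
def transfer_time_minutes(file_gb, link_mbps, efficiency=0.7):
    usable_bps = link_mbps * 1_000_000 * efficiency
    return file_gb * 1e9 * 8 / usable_bps / 60

one_hour_file_gb = 56 * 1_000_000 / 8 * 3600 / 1e9   # about 25.2 GB at 56 Mbps
print(round(transfer_time_minutes(one_hour_file_gb, 1000), 1))  # roughly 4.8 minutes over GbE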

From a scalability perspective, any time a system needs to grow, the balance of all three of these dimensions must be considered. It just isn't simple anymore, given the myriad of file types now used in the industry, the need for transcode and proxy operations, more hybrid codecs supporting HD and SD, and the ever-growing need to store media on hard disk drives. With all these elements, improvements in the design of storage subsystems and their networking components are helping make it just a little easier to move forward to an all-file-based operation.

Karl Paulsen
Contributor

Karl Paulsen recently retired as a CTO and has regularly contributed to TV Tech on topics related to media, networking, workflow, cloud and systemization for the media and entertainment industry. He is a SMPTE Fellow with more than 50 years of engineering and managerial experience in commercial TV and radio broadcasting. For over 25 years he has written on featured topics in TV Tech magazine—penning the magazine’s “Storage and Media Technologies” and “Cloudspotter’s Journal” columns.