Video servers
It is no surprise that IT technology is increasingly penetrating broadcast TV operations. There are many reasons for this today, but when early broadcast video servers were introduced, IT was scarcely a consideration. At that time, there was little about a video server that was considered IT, except that it was run by a computer platform (actually more than one). Early video servers were really digital video recorders with motion-JPEG compression technology inside.
Servers were initially deployed to fix the inability of analog cart machines, which were built around analog video recorders, to play back-to-back 10-second spots. The video server could cache content and play it out on command, using the cart machine as the storage engine and the server as the playout device. It is no accident that early servers offered bidirectional channels that could either record or play. The cart machine loaded content into two record ports, and two playback channels then played the recorded content in a continuous stream during the break. Clearly, long-form content was not the intended use, and a few hours of storage were a luxury.
Holistic systems rule today
Today, the mission of modern server systems is quite different. Full-length, long-form content as well as indeterminately short interstitials are mixed on a single timeline at will. Multiple channels, with seemingly no limits, can be added to storage systems that combine high-performance online storage with nearline spinning disk and offline robotic high-capacity archives. The holistic system is scalable from small edge servers built into network distribution systems to large systems providing content storage and management for major network facilities serving many channels.
Ultimately, this has driven development from digital islands into large IT-centric networks, with storage models adapted from mainstream IT approaches. There are, however, some aspects of server technology that are distinctly different from other IT applications. First is the isochronous nature of video. A delay in the delivery of your bank balance over a network is hardly noticeable, but jitter in data delivery to video playback will crash decoders in a heartbeat. This puts unforgiving requirements on a video server's throughput. Data is usually striped across multiple drives, often in complex arrangements, to assure sufficient bandwidth is always available to deliver data to outputs.
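The striping idea can be illustrated with a minimal sketch: consecutive blocks of a clip are distributed round-robin across drives so that sustained reads draw on all spindles at once rather than bottlenecking on one. The drive count, block size, and function names here are illustrative assumptions, not any vendor's actual scheme.

```python
# Toy model of round-robin striping across drives. Real server RAID
# schemes add parity, scheduling and redundancy; this shows only the
# bandwidth-spreading idea.

NUM_DRIVES = 8
BLOCK_SIZE = 256 * 1024  # 256 KB stripe unit (illustrative)

def stripe(data: bytes, num_drives: int = NUM_DRIVES,
           block_size: int = BLOCK_SIZE) -> list[list[bytes]]:
    """Distribute consecutive blocks round-robin across drives."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), block_size):
        block_index = i // block_size
        drives[block_index % num_drives].append(data[i:i + block_size])
    return drives

def read_back(drives: list[list[bytes]]) -> bytes:
    """Reassemble the stream by visiting drives in round-robin order."""
    out = []
    total_blocks = sum(len(d) for d in drives)
    for n in range(total_blocks):
        out.append(drives[n % len(drives)][n // len(drives)])
    return b"".join(out)
```

Because block n lives on drive n mod 8, a sequential read touches every drive in turn, which is why aggregate throughput, not single-drive speed, sets the playout bandwidth ceiling.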
Another significant change is the evolution of file-based workflow. When content is produced using nonlinear editing, it is natural to want to deliver the files to playout channels without resorting to decoding to baseband video and reingesting in the playout system. It has been a long technological struggle to reach the point where interchange of file-based content between production and playout servers is practical, but with MXF, standardized by SMPTE, and the work being done by the Advanced Media Workflow Association, we have finally “arrived.” Now it is common to directly mount editing platforms on the same storage system that serves playout ports, with edit-in-place capability, saving even the need to move files between storage systems.
We are approaching the time when spinning disk may be replaced by enormous pools of nonvolatile memory. This can provide superior access times, lower power consumption, lower maintenance and, thus, lower cost of ownership, though with high initial capital cost today. Flash memory used as high-performance online storage may be best paired with nearline pools of spinning disk holding the bulk of the content, making the overall system practical. This might sound similar to the cache function that early servers performed for analog cart machines. It's funny how that concept will not die.
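The flash-plus-nearline hierarchy described above is, at heart, a cache: a small fast tier holding the working set in front of a large slow tier holding everything. A minimal sketch of that idea, with hypothetical class and clip names and a least-recently-used eviction policy chosen purely for illustration:

```python
from collections import OrderedDict

class TieredStore:
    """Toy two-tier store: a small 'flash' cache in front of a large
    'nearline' pool. Capacities, names and LRU policy are illustrative."""

    def __init__(self, flash_capacity: int):
        self.flash = OrderedDict()   # clip_id -> content (fast tier)
        self.nearline = {}           # clip_id -> content (slow tier)
        self.flash_capacity = flash_capacity

    def ingest(self, clip_id: str, content: bytes) -> None:
        # New material lands in the deep pool first.
        self.nearline[clip_id] = content

    def play(self, clip_id: str) -> bytes:
        """Serve from flash if cached; otherwise promote from nearline."""
        if clip_id in self.flash:
            self.flash.move_to_end(clip_id)   # mark as recently used
            return self.flash[clip_id]
        content = self.nearline[clip_id]
        self.flash[clip_id] = content
        if len(self.flash) > self.flash_capacity:
            self.flash.popitem(last=False)    # evict least recently used
        return content
```

The same shape describes the earliest servers caching spots ahead of a cart machine, which is exactly why the concept refuses to die.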
Future capabilities
Much can still be done to improve video servers in the future. Most, though not all, systems still rely on hardware codecs. At one time, encoding and decoding required too much processing power for software codecs to be practical. However, as the power of processors increases, with dual quad-core processors readily available in desktop machines, it is quite practical to execute complex encode and decode processes without dedicated hardware. These improvements are arriving just in time to enable advanced codecs, like full-featured H.264 implementations, in software. This should allow performance to continue to increase without churning hardware in ways accountants find repugnant (engineers often see this as less of an issue).
As the nature of servers has switched to a distinctly IT-based approach, it is tempting to look for other processes that can be combined in logical ways in playout servers. For almost a decade, a friend and client, Del Parks, has preached to me that we need to stop looking at servers as different from computer platforms. Though I have always agreed with Del in principle, we have reached the cusp of a time when it is eminently possible to add more functionality to servers, rendering them the complete playout channel rather than just a source feeding a playout channel. There is no technological reason servers cannot add graphics, perform image manipulations and integrate live inputs. Several companies deliver systems often thought of as integrated content management boxes, but it is just as practical to look at them as highly optioned video servers. At NAB this year, there were several developments along that continuum.
Approaching the Holy Grail
In the end, servers should be considered both content management and playout systems, because both functions are central to the systems into which servers are built. As we approach the time when moving recording mechanisms, including both tape and optical disk, disappear from common use, it may be beneficial to use networked storage for all recording. Instead of video recorders for each ISO in a studio shoot, at a cost of tens of thousands of dollars per device, we can now use ports on a large server implementation at a fraction of the cost.
On balance, managing the life cycle of the content becomes easier. The total workflow, from production to post and on to air, can be done with different appliances attached to a central library recording and media asset management system. As we approach that Holy Grail, we are on the right path.
John Luff is a broadcast technology consultant.
Send questions and comments to: john.luff@penton.com