Software replacing hardware
Just as editing systems migrated away from hardware-based, linear affairs built around tape decks and custom control surfaces, the modern video server is evolving quickly away from the dedicated hardware and dedicated chipsets that have marked the breed since its inception in the mid-1990s. Chief among the reasons for the evolution is the rapid rise in computer processing power.
The modern processor is made up of two or more cores, each acting as an independent processor on a single piece of silicon, and two processors per computer are popular for server applications. Paired with software that divides tasks correctly across the cores, these processors can encode and decode SD and HD material in real time, enabling the software approach to eclipse the typical performance profile of custom hardware.
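As a simple illustration of that division of labor, the sketch below (Python, purely for readability) spreads per-frame encoding work across all available cores with a process pool. The encode_frame() function is a hypothetical stand-in; a real server would call into an optimized codec library instead.

```python
# Minimal sketch: spreading frame-encoding work across CPU cores with a
# process pool. encode_frame() is a hypothetical stand-in for real codec work.
from multiprocessing import Pool, cpu_count

def encode_frame(frame_number: int) -> bytes:
    # Stand-in for real transform/entropy-coding work on one frame.
    raw = bytes((frame_number + i) % 256 for i in range(1024))
    return raw[::2]  # pretend "compression"

if __name__ == "__main__":
    frames = range(100)
    # One worker per core; the pool schedules frames across all of them.
    with Pool(processes=cpu_count()) as pool:
        encoded = pool.map(encode_frame, frames)
    print(f"encoded {len(encoded)} frames on {cpu_count()} cores")
```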
Most commonly, software replaces hardware in video servers for the basic task of compression and decompression of baseband video. While a hardware interface is required to receive and output the baseband video, it is significantly simpler and less restrictive than an interface that also features an onboard codec or set of codecs. Software also scores in three other key areas in this respect:
- If time is taken to optimize the software codecs, they can support multiple ingest and playout streams at one time in the same computer. (See Figure 1 and Figure 2.)
- As codec quality improves (as in the case of MPEG-2 compression, which has advanced considerably over the years), video server quality advances are only a software upgrade away.
- A correctly managed platform allows any and all supported codecs to be played back-to-back in any order.
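The last point amounts to little more than a lookup: each clip carries its codec identity, and the playout channel dispatches to the matching software decoder at play time. The sketch below illustrates the idea with hypothetical decoder stubs.

```python
# Illustrative sketch of "any codec, back-to-back": a playout channel looks
# up a software decoder per clip, so a playlist can mix codecs freely.
# Decoder names and clip fields are hypothetical.
from typing import Callable, Dict

def decode_mpeg2(data: bytes) -> str: return f"MPEG-2 frame ({len(data)} bytes)"
def decode_h264(data: bytes) -> str: return f"H.264 frame ({len(data)} bytes)"

DECODERS: Dict[str, Callable[[bytes], str]] = {
    "mpeg2": decode_mpeg2,
    "h264": decode_h264,
}

playlist = [("promo", "mpeg2", b"\x00" * 64), ("news", "h264", b"\x01" * 64)]

for name, codec, essence in playlist:
    frame = DECODERS[codec](essence)  # decoder chosen at play time, per clip
    print(name, "->", frame)
```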
By contrast, a hardware platform is locked in, requiring a board change in order to improve image quality by allowing the use of a new chipset, or changing board sets to support a new codec. In an era of limited resources and lights-out operations, the traditional approach of a board swap is undesirable and sometimes impractical. The life of a hardware-based server becomes more difficult in an environment that requires multiple codecs, if all the required codecs are not supported by a single I/O board. This could quite easily be the case in a world of file-based workflows, where it is not always possible to guarantee the flavor of a codec in an imported file. The resulting complexity of managing which server channel can deliver which hardware codec can be difficult or impossible for an automation system, or may require essence transcoding.
Why stop at compression and decompression of baseband video into high-resolution codecs? In software, it is possible to generate both high-resolution and low-resolution versions in parallel, enabling many people, locally or remotely, to view or edit content, and with low-resolution H.264, to do so at relatively high quality. Software render engines can then take edit decision lists (EDLs) and create high-resolution versions from the parent content contained in the server.
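A minimal sketch of that parallel ingest, assuming hypothetical encoder classes: one pass over the incoming frames feeds both a high-resolution encoder and a low-resolution proxy encoder.

```python
# Sketch of dual-resolution encoding at ingest: each incoming frame is
# handed to both a high-res encoder and a low-res proxy encoder. Both
# encoder classes are illustrative stand-ins, not real codecs.
class HighResEncoder:
    def encode(self, frame: bytes) -> bytes:
        return frame                      # full-quality essence

class ProxyEncoder:
    def encode(self, frame: bytes) -> bytes:
        return frame[::8]                 # heavily reduced proxy

def ingest(frames, encoders):
    outputs = {name: [] for name in encoders}
    for frame in frames:                  # one pass over the input
        for name, enc in encoders.items():
            outputs[name].append(enc.encode(frame))
    return outputs

result = ingest([bytes(64)] * 5,
                {"hires": HighResEncoder(), "proxy": ProxyEncoder()})
print({name: len(frames) for name, frames in result.items()})
```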
More flexible conversion
Use of software is not limited to the act of encoding and decoding. That would be too restrictive for even basic use, as the need to mix SD and HD is now commonplace. To up-, down- and crossconvert video, a real-time scaler is needed for each playout channel. This allows a playlist to contain SD, 720p and 1080i material and play seamlessly back-to-back in any order. Combine a well-executed server whose software supports an array of codecs, playable back-to-back in any order, with a software-based up-, down- and crossconversion scheme, and we gain an extra layer of flexibility. In some cases, we can even bypass the need for external conversion equipment up- or downstream of the server.
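Conceptually, the per-channel scaler is a lookup from (clip format, port format) to a conversion mode, applied clip by clip. The sketch below uses illustrative format names and conversion labels.

```python
# Sketch of per-channel format normalization: every clip is converted to
# the port's output format before playout, so SD, 720p and 1080i can sit
# back-to-back in one playlist. Formats and mode names are illustrative.
CONVERSIONS = {
    ("480i", "1080i"): "upconvert",
    ("720p", "1080i"): "crossconvert",
    ("1080i", "1080i"): "pass-through",
}

def play(playlist, port_format="1080i"):
    for clip, fmt in playlist:
        mode = CONVERSIONS[(fmt, port_format)]
        print(f"{clip}: {fmt} -> {port_format} ({mode})")

play([("movie", "480i"), ("sports", "720p"), ("news", "1080i")])
```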
The use cases for up-, down- and crossconversion become complex quickly. Poor metadata, such as SD content that is flagged as 4:3 when it is really 16:9, can trip up the conversion, leading to undesirable results on-air. So it helps to have a set of rules that allow users to compensate for potential errors. Sensible defaults applied to the playout port cover most cases, and a user override enables potential on-air errors to be avoided, either by changing the clip metadata or by applying a different aspect ratio than the selected default to a given asset. In addition, the interface should be exposed via an API so that external automation systems can control aspect ratios through their own user interfaces. Adding AFD support (with a toggle to strip AFD if it is not wanted) and the ability to support ATSC or SMPTE variants through software settings enhances usability.
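The rule chain itself is simple to express: a user override beats the clip metadata, which beats the port default. A minimal sketch, with hypothetical field names; a real server would expose the same logic through its automation API.

```python
# Sketch of the aspect-ratio rule chain: override > clip metadata > port
# default. Field names are hypothetical.
from typing import Optional

def resolve_aspect(port_default: str,
                   clip_metadata: Optional[str],
                   user_override: Optional[str]) -> str:
    if user_override:        # operator correction always wins
        return user_override
    if clip_metadata:        # trust flagged metadata next
        return clip_metadata
    return port_default      # sensible port default covers the rest

# SD clip wrongly flagged 4:3; the operator overrides to 16:9 for air.
print(resolve_aspect("4:3", clip_metadata="4:3", user_override="16:9"))
```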
Improved workflow efficiency
Handling of time code can have a direct and critical impact on station workflows. Being able to control, via a software interface, how time code is affected on output can streamline these processes. A simple example is where the original input time code must be maintained to be synchronous with the tape archive record and carried into the digital archive, but operators require a standard start time code for all clips to aid timing during playback — such as during live productions. One solution is a software service that can act on a user-defined metadata change, which causes restriping of time code on output without changing the time code in the clip's essence.
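A sketch of that restriping, assuming a simple 30 fps count (real 29.97 drop-frame handling is omitted): the stored time code is never touched; only the output stamp is re-based onto a standard start.

```python
# Sketch of output-side time code restriping: stored frames keep their
# original time code, and a per-clip metadata flag tells the play port to
# re-stamp output from a standard start. Assumes a plain 30 fps count.
FPS = 30

def tc_to_frames(tc: str) -> int:
    h, m, s, f = map(int, tc.split(":"))
    return ((h * 60 + m) * 60 + s) * FPS + f

def frames_to_tc(n: int) -> str:
    f = n % FPS
    s = n // FPS
    return f"{s // 3600:02d}:{s % 3600 // 60:02d}:{s % 60:02d}:{f:02d}"

def output_tc(original_tc: str, clip_start_tc: str, restripe_from: str) -> str:
    # Offset of this frame within the clip, re-based onto the standard start.
    offset = tc_to_frames(original_tc) - tc_to_frames(clip_start_tc)
    return frames_to_tc(tc_to_frames(restripe_from) + offset)

# Archive time code stays 10:15:30:00 on disk; air output reads 01:00:00:00.
print(output_tc("10:15:30:00", "10:15:30:00", "01:00:00:00"))
```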
Adding closed captions to a clip that has already been ingested is often a time-consuming process. Operationally, this usually means one of two approaches: reingest the whole clip with the new captions; or export the clip to a captioning system, have the ancillary data striped with the captions, then copy it back and overwrite the clip on the server when the process is complete. It is possible to use a software service that simply imports the caption file into the server and stripes it into the desired ancillary slot, with no reingest, export or import required. The net result is the same as with the two traditional approaches: the correct captions output to air, with the added benefit of a significant time saving for the operator.
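In outline, such a service keys caption packets by frame and writes them into the clip's ancillary-data track, leaving the video essence alone. The structures below are illustrative, not a real VANC/CEA-708 implementation.

```python
# Sketch of in-place caption striping: caption packets from an imported
# file are written into the clip's per-frame ancillary slot, while the
# video essence is left untouched.
clip = {
    "video_essence": ["frame0", "frame1", "frame2"],   # never rewritten
    "ancillary": {},                                   # per-frame ANC slot
}

caption_file = [(0, b"HELLO"), (2, b"WORLD")]          # (frame, packet)

for frame_index, packet in caption_file:
    clip["ancillary"][frame_index] = packet            # stripe caption data

print(clip["ancillary"])   # captions now play out with the existing video
```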
Better file compatibility
The broadcaster's dependence on file-based workflows continues to grow rapidly. File compatibility remains an area that can cause frustration and confusion when it comes to interchanging media between systems. This is not restricted to simple server-to-server transfers, but greatly impacts interaction with archives, migration from one vendor product to another and the growing use of file-based content distribution.
In many cases, content intended for a particular brand of server will need transcoding or transwrapping, a process where the essence quality itself remains intact, but the structure of the media and the metadata for the content have to be reformatted to suit the receiving system. Both processes require an additional hardware layer. On output from the server system, the process may need to be repeated so that the next system in line can handle the file. At best, this adds cost and operational complexity. At worst, it adds cost and degrades the quality of the material with each transcode.
A better approach is a file system that stores essence plus metadata. In this type of system, the transwrap operation is built into the software of the file interchange mechanism, i.e., it adds no extra hardware, extra cost or operational complexity. The result is a virtual file system: a software-generated front end that presents the essence in the server in a wide range of wrapper types, easily accessible to automation or ad hoc file interchange operations. (See Figure 3.) This virtual file system can receive a wide variety of wrapper and essence types, store the essence unmodified and allow you to extract the essence in any supported wrapper, not just the wrapper that was used when the content was transferred to the server. This becomes particularly valuable when dealing with servers from a variety of vendors.
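The essence-once, wrap-on-demand idea can be sketched in a few lines; the wrapper formats here are simplistic stand-ins for real containers such as MXF or QuickTime.

```python
# Sketch of a virtual file system: essence is stored once, unmodified,
# and wrappers are generated on demand at export time.
STORE = {"clip42": b"\x47" * 32}     # essence as transferred, untouched

def export(clip_id: str, wrapper: str) -> bytes:
    essence = STORE[clip_id]
    if wrapper == "mxf":
        return b"MXF-HEADER" + essence + b"MXF-FOOTER"
    if wrapper == "mov":
        return b"moov" + essence
    raise ValueError(f"unsupported wrapper: {wrapper}")

# Same essence, two wrappers, no transcode and no extra hardware.
print(len(export("clip42", "mxf")), len(export("clip42", "mov")))
```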
Files are not the only content that can be imported into a server. Satellite feeds, which have traditionally been decoded to baseband and then re-encoded at a server record port, can now be recorded directly onto the video server via ASI or IP. Multiprogram transport streams can be picked apart in software, and the resulting program streams can be stored on the server storage for edit-in-place and subsequent baseband playout. This software process not only significantly reduces hardware costs, but also improves picture quality by eliminating the generation loss of a decode and re-encode pass.
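At its core, the software demultiplexer filters fixed-size 188-byte transport stream packets by PID. The sketch below hard-codes the wanted PIDs; a real demux would learn them from the PAT and PMT tables.

```python
# Sketch of software demultiplexing: 188-byte transport stream packets are
# filtered by PID so one program can be stored from a multiprogram feed.
TS_PACKET = 188
WANTED_PIDS = {0x100, 0x101}          # assumed video + audio of one program

def demux(stream: bytes) -> bytes:
    kept = []
    for i in range(0, len(stream), TS_PACKET):
        pkt = stream[i:i + TS_PACKET]
        if len(pkt) < TS_PACKET or pkt[0] != 0x47:
            continue                  # skip short or unsynced packets
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid in WANTED_PIDS:
            kept.append(pkt)
    return b"".join(kept)

# Build a toy stream: one packet from the wanted program, one from another.
def packet(pid: int) -> bytes:
    return bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF]) + bytes(185)

stream = packet(0x100) + packet(0x200)
print(len(demux(stream)) // TS_PACKET, "packet(s) kept")
```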
Simplified on-air graphics
A growing trend is the desire to be able to overlay graphics on top of the video during playback. There is a range of solutions on the market that address this particular workflow. Typically, they are single-box islands, designed to allow low-cost channels to go to air. These solutions are split between servers with third-party graphics hardware built-in and systems that use software to generate the graphics in sync with the video. This single-chassis-per-channel approach means each channel's assets need to be independently managed.
In larger operations, improved efficiency for graphics playout is possible if a shared storage approach is used. Software-generated graphics with shared video assets simplify the job of the centrally managed automation system, resulting in improved operational efficiency. And because the video and graphics exist as data in software, it is also possible to display the multiple record and play ports via a multiviewer DVI output.
Improved asset protection
Protecting assets so they are always available for playout is key to any server system. Because spinning disks remain central to storing assets, RAID protection is used to ensure the assets continue to be available should one or more drives fail. Typically, RAID protection is applied in hardware. This is robust, but can be restrictive when drives need to be rebuilt after a failure. A software approach can help: executed correctly, it rebuilds only the portion of the disk that is missing data. Similarly, protection against a single-drive or even a two-drive failure in a volume may not be enough. In that case, completely mirroring all the storage adds another layer of protection.
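The partial-rebuild idea reduces to bookkeeping: track which stripes changed while a drive was down, and reconstruct only those. A purely illustrative sketch:

```python
# Sketch of "rebuild only what is missing": a dirty set marks the stripes
# written during an outage, so recovery touches only those stripes
# instead of the whole disk.
STRIPES = 1000
dirty = set()                 # stripes written while a drive was down

def write_stripe(n: int, drive_down: bool) -> None:
    if drive_down:
        dirty.add(n)          # remember what the failed drive missed

def rebuild() -> int:
    for n in sorted(dirty):
        pass                  # reconstruct just this stripe from parity
    return len(dirty)

for n in (3, 7, 7, 42):
    write_stripe(n, drive_down=True)

print(f"rebuilt {rebuild()} of {STRIPES} stripes")   # 3, not 1000
```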
One approach to such mirroring is known as intrinsic mirroring, where software in each I/O host manages writing two copies of the content simultaneously to two independent storage systems, each with its own RAID protection. This approach results in two frame allocation tables that uniquely track each storage system's assets. Problems on one storage system have zero impact on the other, resulting in improved system resilience. Issues are resolved quickly in software, and fixing any mismatches in content between the two storage systems requires only copying the missing data from the good storage to its mirror.
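A minimal sketch of that dual-write scheme, with each store keeping its own allocation table and repair copying only the blocks one side is missing (structures are illustrative):

```python
# Sketch of intrinsic mirroring: the I/O host writes every block to two
# independent stores, each with its own allocation table; repair copies
# only the blocks one mirror lacks.
class Store:
    def __init__(self):
        self.blocks = {}      # block id -> data
        self.table = set()    # this store's own allocation table

    def write(self, block_id: int, data: bytes) -> None:
        self.blocks[block_id] = data
        self.table.add(block_id)

a, b = Store(), Store()

for block_id, data in [(1, b"x"), (2, b"y"), (3, b"z")]:
    a.write(block_id, data)
    if block_id != 2:         # simulate a fault on mirror B for block 2
        b.write(block_id, data)

# Repair: copy only the missing blocks from the good store to its mirror.
for block_id in a.table - b.table:
    b.write(block_id, a.blocks[block_id])

print(sorted(b.table))        # [1, 2, 3] -- mirrors back in step
```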
Conclusion
As we have demonstrated, the impact of the software approach on the modern broadcast video server can be profound. The use of software allows new technologies to be applied to existing hardware, which translates to tangible cost savings over the life of the product. In many cases, upgrades no longer involve card, chassis or entire system replacements. A simple software upgrade applies the latest technology to the existing investment.
Andrew Warman is senior product marketing manager, servers, at Harris Broadcast Communications.