Harnessing a New Opportunity: The Transition to IP in the Contribution Space
Viewer demand for better quality is far from a new phenomenon. Even if we consider just the last 20 years, the industry has witnessed the transition from analog to digital, from standard definition (SD) to high definition (HD) TV, and now to early, limited deployments of 4K ultra high definition (UHD). Of course, this evolution did not come about just through updates to TV sets and to cable, satellite or IPTV distribution systems, but also through updates to the delivery of content to and between studios and to the distribution of channels after production.
In fact, the updates to these contribution and primary distribution parts of the chain needed to be in place before commercial TV services could be launched to the consumer. With higher resolutions such as UHDTV, and with the more capable video enabled by high dynamic range (HDR) and wide color gamut (WCG), we can now provide a significantly better viewing experience to the consumer. Once again, these new formats need to be delivered to and between studios, often with low latency, and then distributed once produced.
It is clear from the demand for UHD HDR TVs, the desire from studios and consumers’ reactions to improved viewing experiences that we can expect UHD and HDR to be widely successful. This in turn will create a requirement to upgrade the contribution and primary distribution stages of the chain.
THE ROLE OF THE CLOUD
In the wider IT industry, cloud technology has been gaining adoption, and it is now fairly common to use cloud providers as the location for internet-delivered content, particularly for organizations that do not own their own networks. It is reasonable, therefore, to ask the question: What role, if any, does cloud technology have in the upgrade of the contribution and primary distribution parts of the chain?
It could be argued that the cloud does not seem an obvious fit for contribution networks, which connect multiple locations and require functionality at each of those locations. However, this reaction is often based on considering the role of third-party cloud providers, rather than cloud technology itself. To understand whether cloud technology is applicable, we need to look at the problems cloud technology resolves and how it resolves them, and then ask whether those problems and solutions match the needs of contribution networks.
The technology behind large-scale data centers was introduced to address one need in particular: how to scale the number of applications without making operations unmanageable, with the aim of maximizing the amount of automation in the process. Applications are written by a huge number of different organizations, each making its own decisions about hardware infrastructure, operating system (OS) choice, configuration, installation, upgrades, backups and high availability, to name a few factors. As a result, it is very difficult to create a large-scale data center that supports this diversity.
Cloud providers that offer virtualized infrastructure (for example, allowing the user to choose a virtual machine of a particular size and OS) do provide automation, but they only really remove the complication of diverse physical infrastructure and connectivity. The inconsistency of application requirements and behaviors remains in place.
APPLICATIONS SPECIFICALLY FOR THE AUTOMATED ENVIRONMENT
For any organization seeking to use multiple applications, this diversity is an operational cost that brings no benefits, but instead increases the risk of errors and makes fault diagnosis more difficult. To resolve the diversity, there is a need to manage applications in the same manner, no matter who originally wrote them. This is where cloud-native, containerized applications have a role to play. These are applications written specifically for the automated environment, packaged in containers (such as Docker containers) and managed by a common application orchestrator, usually an open-source one such as Kubernetes.
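As a rough illustration of what that consistency buys, the sketch below uses the Kubernetes Python client to deploy a containerized application in exactly the same way regardless of who wrote it. The image name, namespace and replica count are hypothetical placeholders, not references to any particular product.

```python
# Minimal sketch: deploying a containerized application through Kubernetes,
# assuming the official Python client and a hypothetical encoder image.
from kubernetes import client, config

config.load_kube_config()  # use the cluster credentials from the local kubeconfig

container = client.V1Container(
    name="uhd-encoder",                              # hypothetical media-processing container
    image="registry.example.com/uhd-encoder:1.0",    # hypothetical image
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="uhd-encoder"),
    spec=client.V1DeploymentSpec(
        replicas=2,  # the orchestrator keeps two copies running and restarts failures
        selector=client.V1LabelSelector(match_labels={"app": "uhd-encoder"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "uhd-encoder"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

# Every application, whoever wrote it, is created, upgraded and monitored
# through this same API rather than a per-vendor installation procedure.
client.AppsV1Api().create_namespaced_deployment(namespace="media", body=deployment)
```

The point is not the specific resource definition, but that installation, upgrade and recovery become uniform, automatable operations across every application in the estate.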
Superficially, it would appear that a wide-area network with processing capabilities distributed across multiple locations is very different from the typical transaction-based web application that is often deployed with cloud providers. Indeed, there are many differences in the types of application and in the flow of data, which must be taken into consideration. The automation, self-service and consistency attributes of cloud technology, though, can help address some of the operational needs of a contribution network.
Clearly, if processing is required at the edges of the network, then the topology appears very different from that of a data center. However, this difference need not cause an issue, as long as network connectivity allows the processing nodes to work together as a cluster; the nodes can optionally host acceleration, such as FPGAs or Intel Quick Sync Video (QSV), where relevant to the processing required. The network itself is perhaps the key asset of an organization offering contribution networks, so it is reasonable to expect that on-premises deployment, in fact deployment across multiple premises, will be the most common model.
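One way an orchestrator can take account of such nodes is by recording each node's location and hardware capabilities as labels, which the scheduler can then match against. The sketch below, again assuming the Kubernetes Python client, tags a node with hypothetical site and accelerator labels; the node name and label values are illustrative only.

```python
# Minimal sketch: labeling a processing node with its site and acceleration hardware,
# assuming the Kubernetes Python client; node name and labels are hypothetical.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Record where the node sits and what acceleration it hosts, so the
# orchestrator can place workloads that need those capabilities onto it.
labels = {"metadata": {"labels": {"site": "london-studio", "accelerator": "fpga"}}}
core.patch_node("edge-node-01", labels)
```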
Putting this together, we can see that it is actually possible to build a distributed cloud, comprising processing nodes located at many different sites and connected by the network that is the key asset of the contribution network service provider. It is then possible to use, within that distributed cloud, the same open cloud technology that runs behind some of the world’s biggest cloud providers, thus creating much higher levels of automation and reducing operational complexity.
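Given nodes labeled in that way, a processing workload can be pinned to a particular site and to accelerated hardware within the distributed cluster. The following is a sketch under the same assumptions as above, with hypothetical names throughout.

```python
# Minimal sketch: placing a processing pod on an FPGA-equipped node at a chosen site,
# assuming the Kubernetes Python client and the hypothetical labels applied earlier.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="contribution-encoder-london"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="encoder",
                image="registry.example.com/uhd-encoder:1.0",  # hypothetical image
            )
        ],
        # The scheduler only considers nodes carrying these labels, i.e. the
        # FPGA-equipped node at the hypothetical London contribution site.
        node_selector={"site": "london-studio", "accelerator": "fpga"},
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="media", body=pod)
```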
Of course, differences remain between web transaction applications and low-latency, high-availability content flow applications. Addressing these needs is the key enabler for making cloud technology fit the needs of the contribution network business.
Tony Jones is a Principal Technologist with MediaKind and has advanced the cutting edge of digital video technology for the past 30 years. He started in R&D at Questech, designing post-production digital video effects technology, before moving to digital transmission systems for satellite, cable, IPTV and OTT, beginning with DMV in 1996. Through the evolution of DMV into NDS, and of Tandberg Television into Ericsson Media Solutions and now MediaKind, he has designed and developed in R&D a diverse part of the portfolio, from set-top box software through professional receivers and contribution network equipment to video compression encoders.