Media Processing in the Cloud? The Big Problem On Everyone’s Mind

Over the last few months, the conversations around the needs, concerns and technology plans of media companies have started to converge on common themes. One very clear message is that media companies are struggling with the constant growth in the number of outlets they are required to deliver content to.

While, on the face of it, this growth is a good thing, a very real issue is that each new outlet requires some portion, often a very large portion, of the back catalog to be repurposed in some way to make it suitable for that outlet. More and more of an organization’s technology investment is directed at creating a media supply chain that offers greater flexibility in throughput and greater efficiency in processing. That efficiency does not always show up as a lower price; it may present itself as a reduced time to market when provisioning a new outlet. For many companies, though not all, the logical solution is to move the media supply chain up into “the cloud” and make use of the well-understood advantages that such architectures promise.

This migration has already taken place at many organizations, and it is underway at others. This is particularly true for companies that obtain most, if not all, of their program masters from external production companies. In these scenarios the original masters already arrive “in the cloud,” so ingest and master storage are effectively cloud-based from the start. Now the rest of the content supply chain needs to follow the media.

After spending years looking into the relative pros and cons of several supply chain scenarios, we find that there is one simple truth: for any viable solution, you want to put the processing where the media is. If a company’s source material and delivery destination are in the cloud, it makes no sense to process the material anywhere but in the cloud. The scalability of cloud processing, along with consumption pricing for the infrastructure and the processing itself, makes this a “no-brainer” decision, especially in the mind of the CFO, for whom consumption pricing and the ability to move the expense over to the op-ex budget are like catnip.

But there is a subtlety to be considered here: if you need to perform some of your media processing “on the ground,” the cost of downloading the media back to an on-prem system, or over to some other media company (the so-called “egress charges”), can be substantial. Remember that in many cases we are talking about high-bandwidth mezzanine files, formats that by design rule out significant compression. The cost of transferring material back to the facility has to be factored into the overall solution’s cost/benefit equation. Again, to be clear: put the processing where the media is.
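To make that trade-off concrete, here is a back-of-the-envelope sketch in Python. Every figure in it (file size, catalog size, per-GB egress rate, per-asset processing costs) is a hypothetical placeholder rather than a quoted price; the only point is that egress scales with the size of the mezzanine files and can dominate the comparison.

```python
# Rough cost comparison: process in the cloud vs. pull mezzanine masters back on-prem.
# All figures are hypothetical placeholders, not quoted provider rates.

ASSET_SIZE_GB = 400             # one mezzanine master (lightly compressed, high bandwidth)
ASSETS_TO_REPURPOSE = 500       # portion of the back catalog needed for a new outlet

EGRESS_PER_GB = 0.08            # assumed egress charge, $/GB
CLOUD_PROCESS_PER_ASSET = 6.0   # assumed consumption-priced processing, $/asset
ONPREM_PROCESS_PER_ASSET = 4.0  # assumed amortized on-prem processing, $/asset

def cloud_only_cost() -> float:
    """Media already in the cloud: pay only for processing, no egress."""
    return ASSETS_TO_REPURPOSE * CLOUD_PROCESS_PER_ASSET

def pull_back_on_prem_cost() -> float:
    """Download each mezzanine master to the facility, then process it there."""
    egress = ASSETS_TO_REPURPOSE * ASSET_SIZE_GB * EGRESS_PER_GB
    processing = ASSETS_TO_REPURPOSE * ONPREM_PROCESS_PER_ASSET
    return egress + processing

print(f"Process in the cloud:  ${cloud_only_cost():,.0f}")
print(f"Pull back and process: ${pull_back_on_prem_cost():,.0f}")
```

With these placeholder numbers, the egress line alone dwarfs the processing cost, which is exactly why the processing should sit with the media.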

It’s NOT just about transcoding

While it is true that transcoding/packaging of the media is a substantial part of prepping for a new outlet, it is far from the only task. In any on-premises workflow, graphics are added/removed/altered, dialog is replaced, legislative advisories are added/removed/replaced, promos are inserted, branding snipes (animated graphic elements) are added, and much more. In many cases, these additions and alterations are performed by an automated “bag and tag” edit function running largely autonomously.

What media companies really need is to have all of the tools they use on-premises to create a property available to them in the cloud, including the workflow automation engine that ties all of these processes together into an efficient, cohesive whole. Transcoding alone simply does not suffice. For example, more and more outlets require IMF packages as the delivery mechanism. These are not simply transcoded copies of the original master (which may have been made many years ago); significant processing is required to create the multiple components that make up an IMF deliverable. It’s not just transcoding!
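As a rough illustration of what an automation engine has to orchestrate beyond transcoding, the sketch below models a repurposing workflow as an ordered list of steps, with IMF packaging as only the final one. The step names, parameters and the Step/run helpers are hypothetical, not any particular vendor’s workflow vocabulary.

```python
# Hypothetical workflow definition for preparing a back-catalog title for a new outlet.
# Step names and parameters are illustrative only; a real automation engine has its own
# vocabulary, and an IMF delivery comprises several components (video and audio track
# files, subtitles, composition playlist metadata).

from dataclasses import dataclass, field

@dataclass
class Step:
    name: str
    params: dict = field(default_factory=dict)

new_outlet_workflow = [
    Step("restore_master",      {"source": "cloud-archive"}),
    Step("replace_dialog",      {"language": "es-419"}),
    Step("insert_advisories",   {"territory": "MX"}),
    Step("add_branding_snipes", {"package": "outlet-brand-v2"}),
    Step("insert_promos",       {"break_plan": "auto"}),
    Step("package_imf",         {"components": ["video", "audio", "subtitles", "cpl"]}),
]

def run(workflow: list[Step]) -> None:
    """Stub executor: a cloud-aware engine would dispatch each step to a scalable
    service rather than run it inline."""
    for step in workflow:
        print(f"dispatching {step.name} with {step.params}")

run(new_outlet_workflow)
```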

It’s also not just an application running on a virtual machine

For many customers, the simplest approach to “cloudifying” a processing solution is to spin up a number of virtual machines on a cloud platform and install on them instances of the monolithic application they have been using on-prem. While this is indeed a simple way to get started with a cloud-based solution, it forfeits the most favorable aspects of cloud-based compute: on-the-fly scalability and pay-as-you-go pricing.

Such an approach offers only the same scalability model as the on-prem solution: purchasing enough permanent licenses to cover your peak throughput needs. That is simply not tenable in any real-world scenario. An intelligent processing platform should ideally be built on a microservices architecture, so that the individual actions in any workflow can be scaled through standard cloud management means (and, of course, the automation engine must itself be “cloud aware” for this to be achievable).
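To illustrate the difference under those assumptions, the sketch below treats each workflow action as its own service and sizes each worker pool from its own backlog, scaling to zero when idle. The sizing rule, numbers and names are illustrative, not any cloud provider’s autoscaler API.

```python
# Per-service scaling in a microservices-based supply chain (illustrative only).
# Each workflow action (transcode, IMF packaging, graphics insertion, ...) scales
# independently, unlike a monolithic application where one licence count must cover
# the peak of every function at once.

import math

def desired_workers(queued_jobs: int, jobs_per_worker_per_hour: float,
                    target_hours: float, max_workers: int) -> int:
    """Number of workers needed to drain the backlog within the target window."""
    if queued_jobs == 0:
        return 0  # pay-as-you-go: scale to zero when there is nothing to do
    needed = math.ceil(queued_jobs / (jobs_per_worker_per_hour * target_hours))
    return min(needed, max_workers)

backlog = {"transcode": 120, "imf_package": 30, "graphics_insert": 0}
throughput = {"transcode": 4.0, "imf_package": 1.5, "graphics_insert": 6.0}  # jobs/worker/hour

for action, queued in backlog.items():
    n = desired_workers(queued, throughput[action], target_hours=2.0, max_workers=200)
    print(f"{action}: scale to {n} workers")
```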

Do you still need on-premises media processing?

Many discussions of cloud-based media processing seem to make the point that the only solution moving forward is a 100 percent cloud-based architecture. This is simply not the case. There are a number of scenarios where cloud-based processing, and particularly processing hosted by a public cloud provider, is not preferable or even feasible. Data-ownership provisos in many source agreements prevent material from being housed on a public cloud platform. There are also scenarios in which an on-prem platform can actually be more cost effective than a cloud approach, mainly those where the “run rate” business is well known and there is less need for “bursts” of processing.

For many media organizations, the solution is a hybrid approach: cover the run-rate business, or a significant portion of it, with on-prem processing, but keep a “safety blanket” capability to process in the cloud where it makes the most sense (process where the source material is), or when the company has a burst of work that cannot be fulfilled with on-prem processing within some pre-determined time constraint.
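A minimal sketch of that routing rule, assuming hypothetical inputs (where the source sits, the current on-prem backlog, and the job’s deadline), might look like this:

```python
# Hybrid routing rule (illustrative assumptions, not a product feature):
# 1. process where the media already is;
# 2. burst to the cloud when on-prem cannot meet the deadline;
# 3. otherwise cover the run-rate business on-prem.

def route_job(source_location: str, onprem_queue_hours: float, deadline_hours: float) -> str:
    """Decide where a media processing job should run."""
    if source_location == "cloud":
        return "cloud"                      # the media is already there
    if onprem_queue_hours > deadline_hours:
        return "cloud"                      # burst: on-prem backlog blows the deadline
    return "on-prem"

print(route_job("cloud", onprem_queue_hours=1.0, deadline_hours=24.0))    # -> cloud
print(route_job("on-prem", onprem_queue_hours=30.0, deadline_hours=24.0)) # -> cloud (burst)
print(route_job("on-prem", onprem_queue_hours=4.0, deadline_hours=24.0))  # -> on-prem
```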

Many CFOs find this approach attractive too, as it makes strong financial sense. Indeed, it is the approach other industries have adopted as they have made the transition to cloud-based processing. The key, though, is to ensure that the on-prem and cloud-based workflows offer all of the same capabilities, with no exceptions and ideally with the same interface. Once again, it’s not only about transcoding. ALL of the processing steps and options need to be available in both scenarios for this approach to succeed.

Solutions where you want them, when you need them

The choice of where the media supply chain should be located is, as previously stated, largely predicated on the location of the media to be processed. This will naturally vary from company to company, and may even vary within an individual company depending on the details of the source masters and the company’s strategic and tactical goals for present and future operations. I believe the hybrid approach is the one that will make sense for many organizations, and I would encourage companies to seek solutions with this level of flexibility as they consider the challenges that lie ahead.
