Recap: Impact of IP & Cloud on Media Workflow

NEW YORK—IP-based media workflow technologies are paving the way for a fully virtualized operation in which functions can be invoked on demand and files can be handled as their constituent objects, like threads in a scarf, unraveled and rewoven to suit the season.

“We also have a solution that’s well-proven over the years, and that’s SDI. In that context—what’s the problem I have to solve? I don’t know, but there’s sort of a natural evolution in all things, and certainly in technology, IP is the next step in that evolution,” said Jim DeFilippis to media professionals who gathered in New York last week for a one-day event, “The Impact of IP & Cloud on Media Workflow.”

DeFilippis opened the proceedings with a retrospective on that evolution, from tape-centric, linear operations to abstractions of those operations executed in the cloud; from following a tape cart down the hall to conjuring the digital equivalent on a screen.

DeFilippis, a Fox alum who now provides expertise to multiple clients, sat on the first of three panel discussions exploring “The Impact of IP & Cloud on Media Workflow.” He was joined by Gary Olson, principal of GHO Group, and Geoff Stedman, senior vice president of marketing and scale-out storage solutions for Quantum, which sponsored the event alongside Elemental, IPV, Telestream and CatDV.

MIGRATION TO IP
The discussions covered the status of IP and cloud technologies in terms of development and deployment. On the development side, there are standards and there are polite suggestions, proffered by a variety of groups, Olson said. The combination makes for an alphabet soup of acronyms: AIMS, VSF, ASPEN, NMI, NDI, TICO, TR-03 and -04, RFC 4175, ISO, SMPTE, IEEE, IETF, AMWA, JT-NM, EBU, etc. Most of the groups are inter-related and much of the work is ongoing.

“What are we trying to solve with all these standards? The maturity is going from your Best Buy router to a more sophisticated network,” he said.

Meanwhile, the ongoing development of multiple standards has folks waiting to purchase equipment. Panelists noted that the SDI-to-IP migration is a work in progress. Whereas the cost of SDI development once was borne by the entire industry, research and development funds are now increasingly going into IP, so a shift is expected to follow.

Stedman noted that internet technology is in a continual state of development, and advised attendees to “use IP technologies when you have a problem that it solves.”

What you can do in a server-based environment versus a proprietary environment is different, Olson said.

“You can have a traffic system that lives in a virtual machine, say, instead of having seven servers, you can have one server with seven slices. Right now, a managed switch has all the brains in the switch, and it has to be replicated across the fabric. With a software-defined network, you define it once and it replicates across the environment,” he said.
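
Olson’s define-once point lends itself to a minimal sketch. The following is illustrative only (the switch names, VLANs and QoS policy are invented), but it captures the contrast he drew: a software-defined network keeps one central definition and replicates it across the fabric, rather than configuring the brains into every managed switch.

```python
# Minimal sketch of the "define once, replicate" idea behind a
# software-defined network. Switch names, VLANs and QoS policy are
# invented for illustration.

# A single, central network definition -- the "brains" live here.
network_definition = {
    "vlans": {10: "playout", 20: "ingest", 30: "traffic"},
    "qos": {"playout": "high", "ingest": "medium", "traffic": "low"},
}

# The fabric. With traditional managed switches, each of these would
# carry (and have to replicate) its own configuration.
fabric = ["leaf-01", "leaf-02", "leaf-03", "spine-01"]

def push_config(switch: str, definition: dict) -> None:
    """Stand-in for a controller's southbound call to one switch."""
    print(f"{switch}: applied {len(definition['vlans'])} VLANs, "
          f"QoS {definition['qos']}")

# Define once, replicate across the environment.
for switch in fabric:
    push_config(switch, network_definition)
```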

The discussion touched on the single-switch architecture reflective of the SDI environment, and how that concept may need to evolve in the IP world, where the last troubleshooting step is the reboot. One attendee noted that his operation has bifurcated the network because of the number of functions that have to be performed. Stedman noted that in cases where a single switch is used, it has to be designed for peak load.
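
Stedman’s peak-load point can be quantified with back-of-the-envelope arithmetic. The stream rates below are rough assumptions (about 1.5 Gbps for uncompressed HD over SMPTE 2022-6 and about 12 Gbps for uncompressed UHD), but they show how quickly a single switch fabric fills up at peak:

```python
# Back-of-the-envelope sizing for uncompressed video over IP.
# Stream rates are rough assumptions, not vendor specifications.

HD_STREAM_GBPS = 1.5    # ~uncompressed 1080i over SMPTE 2022-6
UHD_STREAM_GBPS = 12.0  # ~uncompressed UHD (cf. 12G-SDI payloads)

def streams_per_port(port_gbps: float, stream_gbps: float) -> int:
    """How many whole streams fit on one port at peak load."""
    return int(port_gbps // stream_gbps)

for port_gbps in (10, 40, 100):
    print(f"{port_gbps:>3} GbE port: "
          f"{streams_per_port(port_gbps, HD_STREAM_GBPS):>2} HD streams, "
          f"{streams_per_port(port_gbps, UHD_STREAM_GBPS)} UHD streams")

# A 100 GbE port carries 66 HD streams but only 8 UHD streams, so a
# fabric sized for an average HD day can saturate when UHD peaks hit.
```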

METADATA & HYBRID WORKFLOWS
TV Technology Executive Editor Deborah McAdams moderated the second panel of the day, “Changes in Media Management in a Hybrid Workflow Environment,” with James Snyder, senior systems administrator at the Library of Congress; Ryan Servant, vice president of North American business development for CatDV; and James Varndell, technical product manager for IPV. The group discussed media asset management in the hybrid IP-SDI workflow environment.

This group concluded two things: no two operations are the same, and metadata management is increasingly crucial. This is especially true at the Library of Congress, where material has to be archived in a way that ensures searchability for the next 150 years. Consequently, search terms must be considered from a popular-culture perspective, because descriptive terms change over time.

Both Varndell and Servant said MAM providers are under pressure to provide customized systems. Servant said MAM platforms have to be designed to export in multiple delivery formats, and to play well with technology from multiple vendors.

This includes cloud technologies. Servant said that according to research carried out by CatDV, about 80 percent of end-users say they want to do “something” in the cloud over the next two years; they’re just not sure what. Security remains a concern with cloud technologies, as do the costs associated with storage and retrieval, according to the research. (Ed note: This comment was originally and erroneously attributed to Mr. Varndell, and corrected Nov. 23, 2016.)

Panel No. 3 dug further into the cloud with input from Shawn Carnahan, chief technology officer of Telestream; Manuel De Peña, eastern region senior field operations director for Amazon’s Elemental; and Eric Pohl, chief technology officer for National Teleconsultants.

SEND IN THE CLOUDS
De Peña said Elemental is performing proofs of concept for cloud-type deployments, “because people want to feel as comfortable going into the cloud as they do on prem.” He also mentioned the opportunity to leverage peripheral services a cloud provider may offer.

Carnahan said that depending on where a client’s data is located, they’re probably going to want to be near it geographically to make bandwidth more manageable. He also noted how object storage differs from block storage, in that an object is a self-contained, encapsulated unit of data rather than a raw constituent block.

With object storage, Pohl said, the idea is that systems interact with it through APIs. He also described the flexibility of spinning up an infrastructure in the cloud.

“You can define your own virtual machines, and create your own LAN within the cloud. Most of these data centers are built on non-blocking architectures. You can sit down with a tool and define that within 10 minutes. You can go to Amazon and order the components you want and have them loaded onto a virtual machine.”
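
Pohl’s 10-minute scenario corresponds to a handful of API calls. Here is a minimal sketch using AWS through the boto3 SDK; the AMI ID, CIDR blocks and instance type are placeholders, and a real deployment would also need credentials, security groups and error handling:

```python
# Minimal sketch: define a LAN in the cloud and spin up a VM on it,
# along the lines Pohl described. AMI ID, CIDR blocks and instance
# type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# "Create your own LAN within the cloud": a VPC plus one subnet.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
subnet = ec2.create_subnet(
    VpcId=vpc["Vpc"]["VpcId"],
    CidrBlock="10.0.1.0/24",
)

# "Order the components you want and have them loaded onto a
# virtual machine": launch an instance into that subnet.
instances = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t2.medium",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
print(instances["Instances"][0]["InstanceId"])
```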

The downside, Pohl said, is that if you leverage a lot of the tools on AWS, for example, it has an impact on portability to, say, Azure.
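
Both points, the API-based interaction Pohl described and the portability cost of provider-specific tools, are visible in even a trivial sketch. The bucket and key names below are placeholders; note that this is S3’s API, so the same code would need rework, or a compatibility layer, to target Azure Blob Storage:

```python
# Minimal sketch of API-based object storage access (AWS S3 via boto3).
# Bucket and key names are placeholders. Note the lock-in Pohl warned
# about: this is S3's API, not Azure Blob Storage's.
import boto3

s3 = boto3.client("s3")

# Objects are written and read through the API -- no file system,
# no block device, just a key and a body.
s3.put_object(
    Bucket="example-media-bucket",    # placeholder
    Key="proxies/episode-101.mp4",    # placeholder
    Body=b"...media payload...",
)

response = s3.get_object(
    Bucket="example-media-bucket",
    Key="proxies/episode-101.mp4",
)
payload = response["Body"].read()
print(f"retrieved {len(payload)} bytes")
```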

CLOUD 101
Stedman, who delivered the closing remarks, parsed cloud types and potential uses. So-called “private” clouds are dedicated, whether owned by a user or some other party. A public cloud can be viewed as a shared space, he said, and a hybrid environment blends both.

“The cloud fundamentally, completely abstracts hardware from software. When you write to Amazon, do you know what storage you write on? You don’t. You’ve completely lost sight of that hardware, and you don’t care as long as you have the SLA that you want. It applies equally to private and public clouds,” he said.

Another advantage is having access from anywhere, and the ability to scale up to handle workflow peaks as needed. Stedman also noted that while the size of media files has grown exponentially, the pricing of cloud storage has come down nearly as much, and connectivity costs remain flat. He predicted a lot of proxy interaction until connectivity costs come down.
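
The proxy prediction follows from simple arithmetic. A rough sketch with assumed numbers, a 50 GB hour of mezzanine-quality material versus a 2 GB proxy over a 100 Mbps connection, makes the point:

```python
# Why proxy workflows persist while connectivity is the bottleneck.
# File sizes and link speed are illustrative assumptions.

LINK_MBPS = 100            # assumed site-to-cloud connectivity
MASTER_GB = 50.0           # ~1 hour of mezzanine-quality material
PROXY_GB = 2.0             # low-bitrate editing proxy of the same hour

def transfer_hours(size_gb: float, link_mbps: float) -> float:
    """Idealized transfer time, ignoring protocol overhead."""
    bits = size_gb * 8 * 1000**3
    return bits / (link_mbps * 1000**2) / 3600

print(f"master: {transfer_hours(MASTER_GB, LINK_MBPS):.1f} h")  # ~1.1 h
print(f"proxy:  {transfer_hours(PROXY_GB, LINK_MBPS):.2f} h")   # ~0.04 h
```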

He also noted that all public cloud is based on object storage, which provides “eleven nines” of durability, in that objects are replicated across the cloud.

“You can build an object store such that if a rack fails, a server fails, even an entire data center fails, the object can still be retrieved,” he said.
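
Those eleven nines translate into concrete expectations. A quick sketch of what 99.999999999 percent annual durability implies for archives of various sizes (the object counts are illustrative):

```python
# What "eleven nines" of durability means in expected annual losses.
# Object counts are illustrative.

DURABILITY = 0.99999999999          # eleven nines
LOSS_RATE = 1 - DURABILITY          # ~1e-11 per object per year

for n_objects in (10**6, 10**9, 10**11):
    expected_losses = n_objects * LOSS_RATE
    print(f"{n_objects:>15,} objects -> "
          f"{expected_losses:.5f} expected losses/year")

# Even at 100 billion stored objects, the expectation is on the order
# of one lost object per year -- the payoff of replicating objects
# across racks, servers and data centers.
```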

Object storage also allows for capacity expansion without doing a major rebuild. Stedman said Quantum and its partners are exploring how content can be worked on in the object realm—without having to turn it back into files. Quantum also is working on multi-site, multi-cloud models, “so clients can adopt cloud technology from any source.”

In terms of applicability, cloud storage may be more appropriate for non-real-time operations like transcoding, rendering, QC, file delivery and archiving, while real-time activities such as editing may be better served by local, high-performance storage.

With regard to archiving in the cloud, a main consideration is frequency of retrieval. The higher the frequency, the more expensive retrieval becomes in the public-cloud model; for frequently accessed material, a private cloud architecture would be more fiscally sensible.

“Make sure fee structures are understood. There are a lot of potential ‘gotchas.’ Read the fine print,” he said. “Your point-seven cents may have gone up to 2.5 cents.”
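
The gotchas are easy to model. In the sketch below, the storage rate echoes the “point-seven cents” in Stedman’s quote, while the retrieval fee and volumes are pure assumptions; the point is how retrieval frequency inflates the effective rate:

```python
# How retrieval frequency inflates the effective cost of cloud archive.
# All rates and volumes are illustrative assumptions.

STORED_GB = 100_000        # 100 TB archive
STORAGE_RATE = 0.007       # $/GB-month (the "point-seven cents")
RETRIEVAL_RATE = 0.02      # $/GB retrieved (assumed "gotcha" fee)

for retrieved_gb in (0, 10_000, 100_000):
    monthly = STORED_GB * STORAGE_RATE + retrieved_gb * RETRIEVAL_RATE
    effective_cents = monthly / STORED_GB * 100
    print(f"retrieve {retrieved_gb:>7,} GB/mo: ${monthly:>6,.0f} "
          f"(effective {effective_cents:.1f} cents/GB-month)")

# Retrieve nothing and storage really is 0.7 cents/GB-month; retrieve
# the full archive monthly and the effective rate climbs to 2.7 cents.
```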



