Artificial Intelligence Is Leaving Its Mark on Pro Video

(Image credit: Imagen)

According to the Gartner Hype Cycle, several critical components of what we collectively call “artificial intelligence” (AI)—namely neural networks, machine learning, computer vision and natural language processing—have passed the “peak of inflated expectations.” The next stop on Gartner’s cycle? A looming “trough of disillusionment,” followed by a “slope of enlightenment.”

Yet “disillusionment” is about the last word you’ll hear vendors use to describe AI’s capabilities in the video market today. Even if expectations for the technology are drifting back down to Earth, there appears to be plenty of optimism left that AI will continue to unlock a host of practical innovations for video professionals. Here are just a few.

AI-POWERED TRANSCRIPTION

Jim Tierney, CEO of software developer Digital Anarchy, says the company has incorporated machine learning and natural language processing in its Transcriptive AI plug-in to deliver video transcriptions automatically with an accuracy rate of 97 percent. The software can be used to create video captions and subtitles and can be integrated with Adobe Premiere to give users a library of video clips that are searchable via text keywords.
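To make the idea concrete, here is a minimal Python sketch of the last mile of such a workflow. It is not Digital Anarchy’s code: it assumes a speech-to-text engine has already produced timestamped segments (the hypothetical `segments` list below), then writes them out as an SRT caption file and searches them by keyword.

```python
def to_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, e.g. 00:01:02,500."""
    total_ms = round(seconds * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02}:{minutes:02}:{secs:02},{ms:03}"

def write_srt(segments, path):
    """segments: list of (start_sec, end_sec, text) tuples from any speech-to-text engine."""
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(segments, start=1):
            f.write(f"{i}\n{to_timestamp(start)} --> {to_timestamp(end)}\n{text}\n\n")

def search(segments, keyword):
    """Return (start_time, text) for every segment whose text contains the keyword."""
    return [(start, text) for start, _end, text in segments
            if keyword.lower() in text.lower()]

# Hypothetical output of a transcription engine; real segments would come from the model.
segments = [
    (0.0, 2.5, "Welcome back to the studio tour."),
    (2.5, 6.0, "Today we look at the new color grading suite."),
]
write_srt(segments, "captions.srt")
print(search(segments, "color"))  # [(2.5, 'Today we look at the new color grading suite.')]
```

The same timestamped segments that drive the captions double as the search index, which is what makes a clip library queryable by keyword inside an editor.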

“AI is not going to magically make everything a ‘one-click’ solution in post,” Tierney said. “Users are still going to have to help it along and, as developers, we need to build the tools that enable users to do that.” To that end, Digital Anarchy is researching areas where AI “might be helpful… both to complement Transcriptive with features like translation or to add features to our visual effects plug-ins that are pushing pixels around. For example, creating better skin tone masks for Beauty Box.”

YOU SAY YOU WANT RESOLUTION

One emerging use case for machine learning is image upscaling. Companies such as Denmark’s Pixop are adapting techniques inspired by Google’s work on upscaling low-resolution still images to generate higher-resolution video from low-res source material.

The benefit, according to Chief Technology Officer and co-founder Jon Frydensbjerg, isn’t simply the ability to modernize — and monetize — archival video content for the seemingly unquenchable demands of streaming video providers, but to do so at a lower cost than existing restoration techniques.

Pixop’s current state of the art applies a scaling factor of four to any content, sufficient to upscale a high-definition source to 4K resolution. According to Frydensbjerg, going from a standard-definition file to a 4K file would require a scaling factor of five. “While doable, this is admittedly not the most practical solution,” Frydensbjerg noted. “We will be adding support for higher upscaling factors in the future, though.”
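One way to read that factor of four is as pixel count: doubling both dimensions of a 1920x1080 frame yields 3840x2160, or four times the pixels. The sketch below is only a frame-by-frame illustration, not Pixop’s pipeline; it runs a pretrained FSRCNN super-resolution model through OpenCV’s contrib `dnn_superres` module, and the model file name is an assumption (the weights must be downloaded separately).

```python
import cv2  # requires opencv-contrib-python for the dnn_superres module

# Load a pretrained FSRCNN super-resolution model (the .pb file is assumed
# to have been downloaded separately).
sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("FSRCNN_x2.pb")
sr.setModel("fsrcnn", 2)                   # 2x per dimension = 4x the pixel count

frame = cv2.imread("frame_1080p.png")      # 1920x1080 source frame (placeholder name)
upscaled = sr.upsample(frame)              # 3840x2160 result
cv2.imwrite("frame_2160p.png", upscaled)
print(frame.shape, "->", upscaled.shape)
```

Production restoration tools go well beyond this single-frame treatment, but the example shows the basic mechanics of ML-based upscaling.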

THE AI ASSET MANAGER

One of the more mature use cases for AI in video is the automatic extraction of important metadata from video files. Imagen does just that, using machine learning for facial tagging, auto-transcription, descriptive tagging and automatic video translation.

Charlie Horrell, Imagen’s CEO, said that AI-derived tools could soon be used for other aspects of content management, including content onboarding, search improvements, workflow optimization and preparing video formats for delivery.
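As a rough illustration of one ingredient of such a system, descriptive tagging, the sketch below (not Imagen’s implementation) samples a frame every few seconds with OpenCV and runs an off-the-shelf ImageNet classifier from torchvision over it to collect candidate tags; the file name and sampling interval are arbitrary.

```python
import cv2
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# An off-the-shelf ImageNet classifier stands in for a production tagging model.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

def tag_video(path, every_n_seconds=10, top_k=3):
    """Sample one frame every N seconds and collect the classifier's top labels as tags."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25
    step = max(1, int(fps * every_n_seconds))
    tags, frame_index = set(), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % step == 0:
            image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            with torch.no_grad():
                scores = model(preprocess(image).unsqueeze(0)).softmax(dim=1)[0]
            for i in scores.topk(top_k).indices:
                tags.add(labels[int(i)])
        frame_index += 1
    cap.release()
    return sorted(tags)

# "archive_clip.mp4" is a placeholder file name.
print(tag_video("archive_clip.mp4"))
```

Real asset-management systems pair this kind of visual tagging with transcription and face recognition so that the resulting metadata can be indexed and searched together.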

Adobe has been steadily integrating machine learning-powered features into a range of its Creative Cloud products under the collective brand “Sensei.” Sensei currently powers Auto Reframe and Color Match in Premiere Pro, as well as Auto Ducking in Premiere Pro and Audition, said Patrick Palmer, principal product manager for video editing at Adobe. “Content Aware Fill for Video in After Effects, unveiled last year at the NAB Show, is also powered by intelligent algorithms.”

For Adobe, AI’s early virtue is in peeling away the grunt work and inefficiencies in a user’s workflow. “We are creating these features to automate tedious, repetitive tasks, but they aren’t black boxes,” Palmer said. “To achieve this, we test and tweak functionality and the user interface, discovering where we need to allow for more transparency and visibility, and where it makes sense to minimize.”

So don’t worry, editors: the robots aren’t coming for your jobs, only the boring parts.

INDUSTRY EXPERTS SAY ...

Jim Tierney, Digital Anarchy

“I don’t think we’ll be completely replacing all our algorithms with ML-based algorithms any time soon.”

Jon Frydensbjerg, Pixop

“There is such an enormous potential to be creative and innovate in this space with the building blocks and technology already available. New research is coming out all the time, which redefines what is possible on even very degraded footage.”

Charlie Horrell, Imagen

“Our goal is to use AI to bring order to unstructured data, help bring content to market more quickly and substantially reduce cost of ownership. Meanwhile we see enormous scope for AI under the hood, which is sometimes more difficult to get excited about — but can deliver enormous business benefits — such as predicting demand or personalizing the user experience.”

Patrick Palmer, Adobe

“We’ve seen artificial intelligence and machine learning integrate into more and more editing workflows in the last year, and these technologies will continue to be a game changer in video production and post production at [next year’s] NAB Show and beyond.”
