IP Drives Audio to Virtualization, Immersive at 2019 NAB Show
For manufacturers, trend-spotting is key to anticipating their customers’ needs and creating roadmaps for future products, some of which will make an appearance at this year’s NAB Show. Several interrelated audio trends currently dominate: the adoption of IP networking, which in turn is beginning to drive interest in virtualized audio systems, and increasing momentum behind immersive audio formats.
As Phil Owens, senior sales engineer at Wheatstone Corp., noted previously in TV Technology, AoIP adoption, particularly around mixing console systems, initially created silos of proprietary networking. But since the publication of the AES67 standard and its adoption within SMPTE ST 2110, customers now expect any system that they purchase to support AES67 for interoperability with third-party equipment.
Most importantly, said Owens, customers need to be able to transport console feeds into the primary house router, whether it’s an Evertz, Grass Valley or other product. “This is where the AES67 audio streams are married and time-synched to the corresponding video streams.”
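To make that time alignment concrete, here is a minimal, purely illustrative sketch in Python (the SDP text and grandmaster ID are invented, not taken from any vendor) of how a tool might read the PTP reference clock an AES67 sender advertises, the same clock a house router relies on when marrying audio to video:

# Illustrative only: an invented AES67-style SDP and a helper that extracts
# the PTP grandmaster ID used to time-align audio with video.
AUDIO_SDP = """\
v=0
o=- 1423986 1423994 IN IP4 192.168.1.10
s=Console Mix 1
c=IN IP4 239.69.1.10/32
t=0 0
m=audio 5004 RTP/AVP 96
a=rtpmap:96 L24/48000/8
a=ptime:1
a=ts-refclk:ptp=IEEE1588-2008:00-1D-C1-FF-FE-12-34-56:0
a=mediaclk:direct=0
"""

def ptp_grandmaster(sdp):
    """Return the PTP grandmaster ID declared in an SDP, or None."""
    for line in sdp.splitlines():
        if line.startswith("a=ts-refclk:ptp="):
            # Format: a=ts-refclk:ptp=<profile>:<grandmaster-id>:<domain>
            return line.split(":")[2]
    return None

# An audio stream and a video stream that report the same grandmaster can be
# time-synched by the router; mismatched grandmasters would be a red flag.
print(ptp_grandmaster(AUDIO_SDP))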
Prior to wider adoption of AoIP, Owens continued, MADI was largely preferred when transporting console feeds into the intercom system. “Now that we are moving to a unified protocol for these connections, most intercom manufacturers are implementing some form of AES67 communication.”
Finally, said Owens, a standard protocol has become necessary for interoperating with third-party devices such as custom mic preamps, automixers and dedicated audio processors.
WINNOWING A CROWDED FIELD
Control protocols were not included in the original AES67 development brief, and while AMWA’s NMOS has been gaining traction, it could be a while before it’s more generally adopted. AMWA describes NMOS as “a growing family of specifications, which are available to both suppliers and end users at no cost, to support the development of products and services which work within an open industry framework.”
“I expect there will continue to be a number of control protocols in use for quite some time, even as AES70 becomes better defined,” said John Schur, president, TV Solutions Group, The Telos Alliance. “Broadcasters may make a significant investment to design and integrate a control system, making it less attractive to quickly adopt the new control protocols.”
“NMOS is a great protocol to support stream management but you need a great tool to do that with,” added David Letson, vice president of sales for Calrec. The U.K.-based manufacturer is not alone in developing an easy-to-use stream manager, he said. “We need tools for audio people that make complicated networks simple to use.”
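As a rough illustration of what such a stream-management tool does under the hood, the sketch below simply lists the audio senders registered with an NMOS IS-04 query service; the registry address is a placeholder assumption, not a real deployment:

# Illustrative sketch: list every sender an NMOS IS-04 registry knows about.
import requests

REGISTRY = "http://nmos-registry.example.com"  # placeholder address

def list_senders():
    """Return (label, id) pairs for the senders in the IS-04 query service."""
    resp = requests.get(REGISTRY + "/x-nmos/query/v1.2/senders", timeout=5)
    resp.raise_for_status()
    return [(s["label"], s["id"]) for s in resp.json()]

for label, sender_id in list_senders():
    print(label, sender_id)

Actually patching a sender to a receiver is handled by the companion IS-05 connection management specification, which is the layer an easy-to-use stream manager wraps in a friendlier interface.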
Stephen Brownsill, audio product manager with TSL Products, observed that AoIP has been at least a consideration at every new facility, studio or OB truck build he has visited of late. For those transitioning into IP infrastructures, he said, “All the sensible customers are setting up labs for proof of concept. That allows them to look at and validate the products available” and provides valuable training.
Manufacturers, too, have been taking part in plugfests, demonstrating AES67 stream sharing between products from diverse brands, according to Owens. “Wheatstone continues to participate in these interim opportunities,” he said.
NO DEDICATED HARDWARE
Successful implementation of AoIP networks and at-home or REMI production systems is leading some broadcasters to consider virtualized workflows.
“More of our customers are working on roadmaps that do not include any dedicated audio processing hardware,” Schur said. “A couple years ago, these customers were asking questions about what we could do in virtual machine and cloud environments. Now they are coming to us with specific projects where all processing is done on COTS [commercial off-the-shelf] hardware, with the flexibility that virtual machines offer to spin up broadcast channels and assign resources as needed.”
IP and software apps go hand in hand, said Owens, noting that software is very well suited for custom interfaces. “Wheatstone and other manufacturers have software that allows the user to create various combinations of faders, meters, switches and tallies on a touchscreen. The key to any system like this is the elegance with which software and hardware are married. The combined system brings some new possibilities to the table. Mixing a show from a distant location is one such possibility.”
Calrec, one of the first to make a dedicated REMI product commercially available, is getting out in front of the virtualization trend. To that end, the company will roll out the VP2, which Letson describes as a “headless console.” “It doesn’t have a surface, just some software that you can pick up and control anywhere,” he said. “It’s feasible now for an automation system in Dallas to control a VP2 in, say, Los Angeles.”
Letson also reported that one major U.S. broadcaster is relocating much of its audio production gear out of its studios. “They can throw the hardware into service centers and keep talent and control surfaces locally,” he said. “Operationally, nothing changes.”
Virtualization offers significant potential for maximizing resources and scaling equipment capabilities according to demand. During a major election, for example, a TV news studio may need to handle several times the usual workload, Letson said. “Having flexibility in how you define the system is something they want,” he said, flexibility that Calrec is offering with its new ImPulse core.
“We’ll also be showing new consoles, which integrate well into virtual environments” at the NAB Show, said Schur. “Our matrix-free IP-based Infinity intercom system debuted at NAB last year, and this year we’ll be highlighting new, innovative ways in which multiple Infinity sites can be networked together.”
TSL’s Brownsill noted that there is typically a one-to-one ratio between operators and audio monitoring units. “When they go home, the AMUs sit idle and burn energy,” a situation that can also be addressed through virtualization. “In our next-generation platform we’ll have a central engine into which all the audio will be received and processed,” he said. “We’ll distribute panels to customers; they’ll be able to log in to that engine to access and listen to the audio.”
TSL’s new SAM-Q is a first step, enabling operators to use the platform based on the task at hand and their skill set. “We’re allowing people to choose the way they work and the complexity of what the audio monitor shows them,” said Brownsill.
“Some audio manufacturers seem to be moving into providing video transport through their products,” said Jay Yeary, a former TV Technology columnist who worked in engineering at Turner Studios for over 15 years. For example, Audinate recently announced its Dante AV module, an integrated solution for manufacturers, and QSC introduced the Q-SYS NV-32-H network video endpoint at ISE 2019. “It gives them the ability to play in a world they wouldn’t necessarily play in. If you’re building a system that is primarily audio, but it needs some video, you could use equipment from that one manufacturer.”
AUDIO FOR ATSC 3.0
Telos will also announce several new products to support audio workflows and processing for ATSC 3.0 facilities, including authoring and monitoring for Dolby Atmos and Fraunhofer MPEG-H, as well as immersive upmixing and watermarking hardware and software solutions, according to Schur.
“The rollout of ATSC 3.0 is another important driver in the industry,” he said. “The initial phase of the rollout will not support all of the features that Next Generation Audio has to offer, but subsequent phases will require new workflows and infrastructure for distributing audio and metadata in a facility. ATSC 3.0 provides an opportunity for manufacturers to create flexible solutions that support the initial phases of adoption, while also giving broadcasters a path to deploy, in the future, the consumer-facing features the standard enables.”
Yeary thinks the most compelling aspect of ATSC 3.0 audio is the ability to replace the primary language with the language of the viewer’s choice. “That’s compelling for a broadcaster, to be able to target the Lithuanian community, or Korean community, and deliver product in their native language,” said Yeary. For viewers, he said, the ability to turn down the announcers, if the content creator allows it, is also compelling.
“From a U.S. perspective, immersive is something that might happen in the future,” said Letson. “But in the U.K. market, we’re doing live Premier League [soccer] games. So we’re having to provide more tools now.”
While mixers need wider bus formats and new immersive monitoring and metering capabilities, one challenge for manufacturers is that each market could choose a different format, Letson said.
“In Korea, for example, they are looking into 7.1.4, whereas in the U.S. it’s 5.1.4 or 5.1.2. And broadcasters could choose to do things in a different way, so the tools have to be able to cope with all of them.”
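For a sense of what those labels mean for bus width, here is a quick, purely illustrative tally in Python (main bed plus LFE plus height channels):

# Channel counts implied by the immersive layouts mentioned above.
layouts = {"5.1.2": (5, 1, 2), "5.1.4": (5, 1, 4), "7.1.4": (7, 1, 4)}
for name, (bed, lfe, height) in layouts.items():
    print(name, "=", bed + lfe + height, "channels")
# 5.1.2 = 8, 5.1.4 = 10 and 7.1.4 = 12 channels, all wider than a 6-channel 5.1 bus.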
Steve Harvey began writing for Pro Sound News and Surround Professional in 2000 and is currently senior content producer for Mix and a contributor to TV Tech. He has worked in the pro audio industry—as a touring musician, in live production, installed sound, and equipment sales and marketing—since November 1980.