ENCO to Showcase enCaption 3R4 with Multi-Speaker Identification at the 2017 NAB Show
Southfield, MI, March 22, 2017 – ENCO continues to advance the possibilities of closed captioning with its enCaption system, the company’s software-defined engine for cost-efficient speech-to-text voice recognition. Now in its fourth generation, enCaption 3R4 takes a major step forward with the ability to distinguish between multiple speakers, further reducing the labor of live captioning in the broadcast workflow. ENCO will unveil enCaption 3R4 at the 2017 NAB Show, taking place April 24-27, 2017 at the Las Vegas Convention Center, in Booth N2024.
Like previous generations, the enCaption 3R4 system needs no respeaking, voice training, supervision, or real-time captioners, thereby eliminating human error. However, enCaption 3R4 integrates a special algorithm with the intelligence to manage complex captioning situations where multiple subjects are speaking at once. enCaption 3R4 achieves this by isolating each speaker’s microphone throughout the live program.
The system supports up to six independent microphone feeds, and the speakers’ names can be preconfigured based on their assigned microphone positions. Multilingual support is also built into the algorithm and includes personalized and/or localized spelling capabilities to ensure greater accuracy.
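ENCO has not published configuration details, but the idea of tying each of up to six microphone feeds to a preconfigured on-air name can be illustrated with a short sketch. The class, function, and speaker names below are hypothetical assumptions for illustration, not the enCaption 3R4 software itself.

```python
# Illustrative sketch only: mapping microphone positions to preconfigured
# speaker names for multi-speaker caption labeling. All names and structures
# here are assumptions, not ENCO's API.

from dataclasses import dataclass

@dataclass
class SpeakerConfig:
    mic_position: int    # 1-6: one of the independent microphone feeds
    display_name: str    # caption label shown to viewers

# A station might preconfigure up to six feeds like this (hypothetical names).
SPEAKERS = [
    SpeakerConfig(mic_position=1, display_name="Anchor"),
    SpeakerConfig(mic_position=2, display_name="Co-Anchor"),
    SpeakerConfig(mic_position=3, display_name="Weather"),
    SpeakerConfig(mic_position=4, display_name="Panel Guest"),
]

def label_for_mic(position: int) -> str:
    """Return the caption label configured for a given microphone feed."""
    for speaker in SPEAKERS:
        if speaker.mic_position == position:
            return speaker.display_name
    return "Unknown speaker"
```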
“With our new multi-speaker identification feature, hearing-impaired viewers will not only know what is being said, but also who is saying it. This will add to their understanding and enjoyment of the show,” said Ken Frommert, general manager of ENCO. “For the station, bringing this useful feature into the automated workflow of enCaption removes the pressure of live captioning from production staff, and ensures a better viewer experience.”
While one of the audio inputs could be a feed from a production truck, the system treats that audio stream as a single speaker, even if multiple people are speaking. And if a pre-recorded video clip is rolled during a live show, the captioning of that audio automatically takes precedence over anyone speaking on set.
“The sophisticated algorithms that we’ve built into this latest release, enCaption 3R4, know how to manage the captioning of a spirited exchange,” Frommert said. “The algorithm does its best to determine who ‘owns’ the conversation—such as the person who started it or who dominates the discussion—and ignores distractions like low voices and brief interruptions. As soon as the conversation shifts to the next speaker, the algorithm immediately and seamlessly transitions to focus on that speaker. Without this selective management process, it becomes very difficult to caption live events, such as roundtable or panel discussions, where people often compete to be heard and disrupt the flow of conversation.”
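The release describes the behavior only in broad strokes: favor the speaker who starts or dominates an exchange, ignore low voices and brief interruptions, and hand off when another speaker clearly takes over. The sketch below is a rough illustration of that general kind of selection heuristic under assumed thresholds; it is not ENCO's algorithm.

```python
# Rough illustration of a conversation-"ownership" heuristic of the kind the
# release describes. Thresholds, inputs, and logic are assumptions.

LEVEL_FLOOR = 0.2        # ignore voices below this relative audio level
TAKEOVER_SECONDS = 1.5   # a shorter burst is treated as an interruption

def pick_captioned_speaker(frames, current_owner=None):
    """frames: list of (mic_position, level_0_to_1, seconds_speaking)."""
    # Drop low-level audio that should not steal the captions.
    active = [f for f in frames if f[1] >= LEVEL_FLOOR]
    if not active:
        return current_owner

    # The mic that has held the floor longest (loudest as a tiebreaker)
    # is the candidate "owner" of the conversation.
    candidate = max(active, key=lambda f: (f[2], f[1]))

    # Keep captioning the current owner unless the candidate has spoken
    # long enough to count as a real handoff rather than an interruption.
    if current_owner is not None and candidate[0] != current_owner:
        if candidate[2] < TAKEOVER_SECONDS:
            return current_owner
    return candidate[0]
```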
About ENCO
Founded in 1983, ENCO Systems is a world leader in broadcast solutions for demanding radio and television organizations. ENCO is headquartered in Southfield, Michigan, USA, and maintains a worldwide distribution network. For more information, please visit: www.enco.com.