Data Networks For Crucial Video
For years the broadcast industry has been slowly pulled, pushed and sometimes prodded toward integrating data networking capabilities into the facility. Data networks first crept into the broadcast operations center with email and offline communications such as tape logs and edit lists. These were non-intrusive and had no impact on the air chain. Later, automation and traffic systems began to require data networks for communicating with the front office, making them a business necessity. Web surfing, more email, scheduling and a host of other non-broadcast functions followed, forcing facilities to spend increasing amounts of capital on data networking infrastructure, often at the expense of the broadcast plant.
Now, with the ever-increasing focus on the cost of operations, there is talk of using these data networks to carry parts of the air chain, leaving many chief engineers weighing the benefits and risks of true convergence of video, voice and data.
So what’s driving this talk, anyway?
Partly it’s due to the success of the transition to near-tapeless facilities. As facilities go tapeless (or nearly so), all of the content sits on servers, and data networks let you move content between those servers, even when they are significant distances apart. News groups, for example, could create their own promos on their own servers and then send them over to the air server “just in time.” Likewise, news bureaus in remote studios could produce locally and push finished content to the air server. Imagine if, during the newscast, live feeds could travel the same links that connect the servers 24 hours a day.
Another common theme for the value of data networking between sites is disaster recovery. Facilities can shadow work-in-progress between servers in remote offices. Not only can talent collaborate on projects, but engineering gains a backup copy (or backup facility) at each end. When the air servers are backed up in this manner, an entire station can be run from another studio in a planned disaster-recovery mode.
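As a purely illustrative sketch of this kind of shadowing, the short Python script below mirrors a work-in-progress directory to a remote backup server using rsync. The hostnames and paths are hypothetical, and a real deployment would schedule the job and monitor it for failures.

import subprocess

# Hypothetical hosts and paths; substitute your own servers.
SOURCE_DIR = "/media/wip/"                             # work-in-progress at station A
BACKUP_TARGET = "backup@station-b:/media/wip-shadow/"  # shadow copy at station B

def shadow_content():
    """Mirror work-in-progress media to the remote backup server.

    rsync transfers only what has changed, so repeated runs over a
    wide-area data link stay cheap once the initial copy is done.
    """
    subprocess.run(
        ["rsync", "-avz", "--partial", "--delete", SOURCE_DIR, BACKUP_TARGET],
        check=True,  # raise an error if the transfer fails
    )

if __name__ == "__main__":
    shadow_content()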
Technically, it’s hard to find a valid reason why this can’t work, and there are more and more examples of station groups where components of this approach are already in practice. Many major long-distance providers have engineered their network infrastructure for greater speed and capacity, making the handling of significant amounts of real-time traffic a reality. This has already enabled broadcasters to carry substantial local traffic between buildings and across towns.
As an example of just how far some of these networks can reach, Turner Broadcasting System recently deployed fully redundant IP video routing connecting its Buenos Aires, Argentina, studio to its central studios in Atlanta, GA. This two-way international link lets TBS transport programming content originating in Latin America to Atlanta for uplink, along with internal data and telephony traffic between the two cities.
The real benefit of using data infrastructure for video transport comes when facilities are interconnected. It could be a news bureau in the city and a broadcast center in the suburbs, or stations in several related cities connected over a great distance. Instead of committing large operating budgets to single-purpose video circuits between those cities for live feeds, plus additional costs for phone lines and data network capacity, facilities can optimize cost by ordering data bandwidth alone to carry all three. Data networking bandwidth is available from every competitive carrier, even the ones that can’t spell video. And more and more often, the price is right.
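To see how the consolidation math works out, here is a back-of-the-envelope link-sizing sketch in Python. Every rate and count below is an illustrative assumption, not a measured requirement.

# Back-of-the-envelope sizing for a converged video/voice/data circuit.
# All rates below are illustrative assumptions, not measured requirements.

MBPS = 1.0  # work in megabits per second throughout

video_feeds = 2            # simultaneous contribution feeds
video_rate = 50 * MBPS     # e.g., mezzanine-quality compression per feed

phone_lines = 24           # voice channels carried as VoIP
voice_rate = 0.1 * MBPS    # roughly 100 kb/s per call including overhead

office_data = 100 * MBPS   # email, file transfer, browsing

headroom = 1.25            # 25 percent margin for bursts and protocol overhead

required = (video_feeds * video_rate
            + phone_lines * voice_rate
            + office_data) * headroom

print(f"Order at least {required:.0f} Mb/s of data bandwidth")
# prints: Order at least 253 Mb/s of data bandwidth

One circuit sized this way replaces the separate video, phone and data orders, which is where the cost optimization comes from.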
FARFETCHED? HARDLY.
There have been various technology demonstrations and trials, and a few early-adopter stations have successfully deployed IP-based transport infrastructures full-time. Many large groups are now looking at these ideas seriously for workflow consolidation across their stations. Large or small, the advantages of disaster recovery, facility sharing and workflow sharing all point toward workforce savings and network reliability improvement, enabled by high-speed data networking between facilities.
So what prevents broadcast plants everywhere from leveraging the 10:1 difference in cable and connector costs and riding the wave of inexpensive multi-gigabit switches instead of stringing more coax? What prevents planners from coordinating operations across several stations in a group, spreading the ingest workload, maybe even consolidating master control operations?
At this point, nothing but habit. There are no show-stopping technical obstacles to coordinating operations across far-flung facilities, backing up one station with operations and equipment from another, and ingesting content at one station for play-out across the group. The obstacles are non-technical ones: fear, uncertainty and doubt.
CHIEF ENGINEERS OF THE WORLD, TAKE CHARGE!
What truly stands in the way of using these modern data networking tools reliably is good old-fashioned engineering. Instead of viewing your “IT guys” with dread each time they show up with new hardware and yet another release of software, bone up on the technology and collaborate with them to get it right. The same engineering disciplines that keep the station on the air 24/7 can bring these new technologies into the mainstream with high reliability. Remember, the Internet and all of its data protocols were designed by engineers, for engineers.
What good will come of it? Plenty. Data networking equipment has a huge volume curve behind it and is getting cheaper faster than traditional broadcast gear. Gigabit Ethernet runs 100 meters on inexpensive copper and many kilometers on modestly priced fiber, while you are lucky to get 300 meters of SDI (even less with HD-SDI) on very expensive coaxial cable. Data networking for video will save time and money in both the short term and the long term.
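A rough capacity comparison drives the point home: one coax run carries exactly one uncompressed SDI signal, while a single Gigabit Ethernet link can multiplex many compressed streams. The SDI rates below come from the SMPTE standards; the usable-throughput and compression figures are illustrative assumptions.

# Rough capacity comparison: coax vs. a single Gigabit Ethernet link.

SD_SDI_RATE = 270      # Mb/s, uncompressed SD-SDI (SMPTE 259M)
HD_SDI_RATE = 1485     # Mb/s, uncompressed HD-SDI (SMPTE 292M)
GIGE_USABLE = 950      # Mb/s, approx. GigE payload after framing overhead

COMPRESSED_RATE = 50   # Mb/s per stream, assumed mezzanine compression

print(f"Uncompressed SD streams per GigE link: {GIGE_USABLE // SD_SDI_RATE}")    # 3
print(f"Compressed streams per GigE link: {GIGE_USABLE // COMPRESSED_RATE}")     # 19
print(f"Uncompressed HD fits on GigE: {HD_SDI_RATE <= GIGE_USABLE}")             # False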
Even more, this data networking approach lets you mount more complicated productions and be more flexible throughout the day. With adequate bandwidth in place between stations, you can share operations and news stories, and back up one station live from another (or even run it live). Traffic and automation systems are constantly developing new tricks to support operations across multiple studios and multiple channels.
Technology isn’t the roadblock to unlocking the long-promised benefits of data networks as a broadcast infrastructure. What’s missing is the application of proper broadcast engineering standards and practices to the data infrastructure, in place of the “cowboy” mentality prevalent in today’s typical IT department. If we hold our data networks to the same engineering standards we have always required of the broadcast air chain, they will deliver first-class results every time.
The long-term advantages of video/voice/data convergence are clear: a single integrated infrastructure that is flexible, scalable, multi-service and technologically poised to provision next-generation services.
John Mailhot is General Manager of Aastra Digital Video (www.aastra.com/digitalvideo/) and can be reached at jmailhot@aastra.com.