The centralcast model

Broadcasting has enjoyed a glorious past. Today, however, pressures from many directions tear at the very fabric of the industry. Competition, rising costs, declining revenues, advancing technology, and the federal DTV mandate have conspired to put a new spin on the broadcast model.


Channel M’s master control room in Vancouver, British Columbia, had to be as high-tech and automated as possible to reduce ongoing operational costs.

Market share for broadcast television is still considerable, in fact dominant, but the percentage (and the dollars that support broadcast operations) decreases every year. Twenty years ago broadcasters predominantly got programming from a single network source and were handsomely compensated for network carriage. Today cable program services, DBS sports packages, and pay-per-view and on-demand services from cable and satellite offer the consumer compelling alternatives to broadcast. The effect on revenue is direct and predictable: advertisers spread their dollars across more options, and each option gets a little less.

Adding to the factors in this equation is the FCC mandate for DTV, which saps the capital investment funding available to local stations. In many small markets, DTV conversion may not be possible at all without creative financing or consolidation.

About five years ago some broadcast group owners began to see that the trend would never reverse and sought ways to cut costs to stall the decline in broadcast cash flow. Ackerley (now part of Clear Channel), the New York Times Group, and others began to toy with the idea of moving air operations to centralized sites, on the theory that the savings in labor would be more than the cost of interconnection lines. Ackerley plunged in headfirst, building regional centers in the Northeast, on the Pacific Coast, and elsewhere, and gutting the technical plants in the remote cities, with the intention of moving video over fiber-optic data circuits and consolidating labor at one site. The idea was to automate the stations and reduce the total head count by more than the number of new technicians at the central site.

The New York Times took a different tack, putting automation in the stations and then moving control and monitoring, along with the labor to run the stations, to a remote site. With fewer people at the local station and a minimum crew at a central site operating multiple stations, this approach used less capital hardware and needed only low interconnection bandwidth, carrying just control and status over the remote circuits.

These initial implementations were not experimental; they are still on the air. They achieved, at least in part, the goal of reducing cost, and the groups certainly learned a lot about the realities of centralized broadcast operations. Kelly Alford, then Ackerley's director of engineering, told me that the most important challenge was human. Getting the personnel in a station to understand and agree with a decision to centralize operations is a human relations problem; the staff may well see no alternative, yet still not be pleased with the choice to consolidate operations and reduce staff costs. Ackerley sent locally produced news content back “up the pipe” to the hub site, where it was turned around on the continuous pipe from the hub to the local transmitter.

What Ackerley and others have discovered is that, before the decision is made to uproot broadcast air operations, other “low-hanging fruit” exists that is less disruptive and potentially more profitable. The central theme of centralized operations is to reduce duplication and thus head count. Promotions, traffic and back-office operations are areas where the technology cost of the decision is lower and the potential return is every bit as great. For instance, moving the traffic department off site works well in most groups: if terminals are provided at the local station for entering and retrieving data, while broadcast inventory is managed remotely, the impact on most of the station is minimal.

The staff in any business wants the company to succeed, to ensure the continued employment of the maximum number of their co-workers. Working toward involvement and agreement, instead of forcing the decision without discussion, will help make centralized operations succeed.

Promotions are another area where centralization can be quite effective. In the days when promotions were produced locally in linear editing bays, there might have been little room to save money by consolidating. Today many group owners buy much of the same syndicated programming. Rather than have 10 stations all cutting promos for “Oprah,” one master promo with local tags can cut production time and cost significantly. The need for extensive editing operations at the local station is reduced, freeing capital for the newsroom or other parts of the station where dollars are equally hard to come by. Nonlinear editing has the additional benefit of being inherently digital, making interconnection to remote playout a potentially transparent process. Many groups are using “video e-mail” appliances to transport the completed stories from the hub production center to the local market at low cost and with “more urgency.” The impact on operations is dramatic when the tedious work of tagging promos is moved to a central site, where that activity is repeated for multiple markets using the same input material. This approach also can lead to a consistent look across a group's stations and to promotion practices that maximize results in multiple markets.

Though promotions, traffic and other lower-tech cost-saving measures are easy to understand and achieve, most professionals probably think of master control centralization as the real “meat and potatoes” of centralized operations. Even a cursory look at the issues in centralizing (or consolidating) air operations shows they are anything but simple, and success is anything but assured. The equation looks something like this:

Labor savings − interconnection cost − capital depreciation = actual savings

This is deceptively simple and easy to understand, but the allure of centralized operations is compelling. USA Networks tried the granddaddy of all centralizations, building a complex facility in Los Angeles in 1999 to feed over a dozen stations across the USA. To protect the signals from “backhoe fade,” the company used redundant circuits. The facility was complex and sophisticated, and by some accounts a technical success and an economic failure. What went wrong? One needs only the equation above to come up with the back-of-the-envelope answer: the labor saved could not overcome the cost of interconnection and depreciation.
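As a rough illustration, the back-of-the-envelope math for a hypothetical ten-station group might look like the sketch below. Every figure is an assumption made up for the example, not data from USA Networks or any other group.

```python
# Hypothetical numbers only: a sketch of the centralization equation.
stations = 10
operators_cut_per_station = 3            # assumed head count eliminated at each station
cost_per_local_operator = 45_000         # assumed annual salary plus benefits, USD
hub_staff_added = 8                      # assumed new technicians at the central site
cost_per_hub_person = 60_000

labor_savings = (stations * operators_cut_per_station * cost_per_local_operator
                 - hub_staff_added * cost_per_hub_person)

interconnection_cost = stations * 12 * 15_000    # assumed monthly circuit lease per station
capital_depreciation = 5_000_000 / 7             # assumed hub build-out, straight-line over 7 years

actual_savings = labor_savings - interconnection_cost - capital_depreciation
print(f"Actual annual savings: ${actual_savings:,.0f}")   # negative with these assumptions
```

With assumptions like these the result is deeply negative, which is precisely the trap: guaranteed high-bandwidth circuits into every market overwhelm the payroll savings.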

Many in our industry have the impression that interconnection is cheap and ubiquitously available. Neither is true in every case, though with thoughtful implementation it is closer to reality now than ever before. Consultants and designers are constantly looking for ways to untie this Gordian knot, and a way may well be available; in fact it is in use at several groups today. Like Alexander, who simply cut the knot with his sword instead of untying it, we need to look at the problem with no preconceived notions about the methods needed for success. The key is analysis before action.

Centralized operations fail to be economically viable for two main reasons: either the labor saved is not sufficient, or the interconnection cost is too high. It is hard to get additional savings from labor, for once everyone is released there is nowhere else to turn. With many stations staffed by low-paid master control operators, the head count reduction cannot achieve dramatic savings. In large markets, where staff costs are high, this is not the case, or at least not to the same degree. However, if one assumes high-bandwidth, guaranteed-QoS circuits for interconnection, those marginal labor savings are quickly consumed by high-dollar interconnection.

It would seem difficult to operate a broadcast station without high bandwidth and high reliability in the interconnection medium. Unless we cut the knot and say that maybe neither is necessary! Enter Distributed Broadcast Operations, a hybrid of centralized operations in which the final stream is assembled by automation at the local station from sources that already exist there, where that is practical and cost-effective. If network receivers already exist at the station, why move them to a new site, only to reconnect them over an IXC circuit at high cost and with the risk of “backhoe fade” taking the station off the air? What if the programming were assembled from a combination of remote and local playlists, interleaved and controlled from a central location? This allows store-and-forward techniques, which can work on circuits with a guaranteed average bit rate but no guarantee of instantaneous QoS.

Here's how it works. Programming that is live from a network is received at the local station without additional interconnection cost. Automation takes the well-defined playlist and inserts interstitials as scheduled. Programming that is delayed is received at the central site, prepped for air, and then sent on a store-and-forward system to the local station to be played out at its scheduled time. By moving the ingest labor to a central site, it is possible to ingest “Oprah” once for many stations and simply send the media and the automation playlist events to local storage at all the stations in the same format. This reduces ingest labor dramatically and ensures that all stations air precisely the same program. By distributing the playlist and media over the combined resources of the central site and the local station, you also gain a degree of protection from loss of the interconnection circuit. If it is down for a substantial amount of time, the problem is the same as in any other interconnection topology, but losing two minutes of interconnection when it is not live to air is no longer a critical problem. What is important is the guarantee of delivery in time for air.
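A minimal sketch of the idea, using made-up event names and a hypothetical playlist format, might look like this. The point is only that live network events need nothing beyond the existing receiver, while delayed material must already sit on the local server before its slot.

```python
from dataclasses import dataclass

@dataclass
class Event:
    start: str       # scheduled air time, HH:MM:SS
    source: str      # "network" = live local receiver, "server" = local store-and-forward cache
    media_id: str    # hypothetical media identifier

# Hypothetical afternoon playlist distributed from the hub to one station.
playlist = [
    Event("16:00:00", "server",  "OPRAH_0412"),        # ingested once at the hub, forwarded earlier
    Event("16:58:30", "server",  "PROMO_NEWS_LOCAL"),
    Event("17:00:00", "network", "NET_FEED_A"),         # live network pass-through, no extra circuit cost
    Event("17:28:40", "server",  "SPOT_BLOCK_1732"),
]

def ready_for_air(event, local_store):
    """Live network events need only the receiver; delayed events must already be local."""
    return event.source == "network" or event.media_id in local_store

local_store = {"OPRAH_0412", "PROMO_NEWS_LOCAL", "SPOT_BLOCK_1732"}
for event in playlist:
    status = "ready" if ready_for_air(event, local_store) else "MISSING"
    print(event.start, event.media_id, status)
```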

Data circuits for this operational mode can have lower QoS and lower interconnection bandwidth, and so are cheaper to install and operate. The flexibility to multiplex variable-bit-rate streams together allows the connection to carry communications circuits as well, along with remote databases and remote monitoring feeds. You might ask what happens when the circuit goes down, which it certainly will eventually. With less bandwidth to re-establish, it is possible to use a backup strategy that is less costly because it serves restoration purposes only. Even if restoration is at lower bandwidth, say over ISDN dialup service, the effect is merely to limit the number of services that can be maintained, dropping voice services for instance, and to slow the FTP transfers from a flood to a trickle. That does not mean the video will be delivered in time for air, but if the plan is to keep hours ahead of need, the outage would have to be very long to create a problem that actually affects air.
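A rough calculation, again with assumed figures, shows how the lead time does the heavy lifting: a one-hour program moves quickly over a modest primary circuit, while the low-cost restoration path is only a trickle that protects control and communications, not bulk video.

```python
# Assumed figures for illustration only.
program_gb = 13.0        # roughly one hour of programming at about 30 Mb/s
primary_mbps = 45.0      # assumed DS3-class primary data circuit
backup_mbps = 1.5        # assumed low-cost restoration path (ISDN/T1 class)
lead_hours = 6.0         # assumed store-and-forward lead time ahead of air

def transfer_hours(size_gb, rate_mbps):
    """Gigabytes to megabits, divided by the line rate, expressed in hours."""
    return size_gb * 8_000 / rate_mbps / 3_600

normal = transfer_hours(program_gb, primary_mbps)
trickle = transfer_hours(program_gb, backup_mbps)
print(f"Primary circuit: {normal:.1f} h per program; outage tolerated: {lead_hours - normal:.1f} h")
print(f"Backup circuit:  {trickle:.1f} h per program (a trickle, not a path for bulk video)")
```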

By distributing control and media, the level of protection goes up and potentially the cost goes down, raising the probability that the equation will produce positive results instead of proving that centralization cannot work. To the extent that the topology relies on high-bandwidth circuits, the cost certainly goes up, and the likelihood that the equation produces a result that saves broadcasters money goes down.

Ultimately, what matters is not the technology, but the success of the business. Centralized operations are but one arrow in the quiver used to shoot at a moving target. Creative use of the tools is the only way to enhance the bottom line and produce value. It is our job in technology to keep our eyes open to new approaches and creatively analyze the effects on the business of broadcasting.

John Luff is senior vice president of business development for AZCAR. To reach him, visit www.azcar.com.
