Facility remote monitoring

Parsing words carefully can sometimes lead to interesting results. This topic could cover monitoring over a large distance, or it could cover gathering information about systems within one facility. Then again, it could mean monitoring many facilities from one location. It seems to me that in the context of our changing industry, it needs to cover all three.

Increasingly, broadcasters are involved in one of many variants of centralized operations. That is often taken to mean centralized master control, though it could just as easily mean traffic, promotions or even a common transmitter facility for a region (as in Sutro Tower in San Francisco or Mount Wilson in Southern California). When viewed from the vantage point of the central facility, remote monitoring often means gathering information on the status of the remote facility and, most importantly, displaying it in a way that lets operators quickly get a sense of the health of the remote location. Today, that means several kinds of information.

SNMP

Video switching provider The Switch uses Skyline DataMiner, an SNMP fault reporting system with SLA correction, for remote monitoring. The Switch monitors each of its locations from its network operations center in New York City.

For a centralized master control, it is important to access data about hardware as well as confidence video to understand the end result of any potential failures noted by remote sensing. For instance, an SNMP monitoring system might report a failure in a video server, but it is more valuable to know what the video and audio outputs actually look and sound like. By combining IT-based remote sensing with return confidence feeds of compressed audio and video, we gain a much better view of the facility's status.

SNMP is a great tool and is integrated into the monitoring systems delivered by a number of manufacturers in our “space.” SNMP has three elements. First, each device must have a resident agent capable of responding to requests from an SNMP monitoring system. Second, the status variables a device exposes are defined in its management information base (MIB), which tells the manager what can be queried. The final element is the manager itself: a system that generates requests and aggregates responses into a practical user interface. There are a number of SNMP management software packages that can be used for both IT- and broadcast-centric hardware.
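
To make those three elements concrete, here is a minimal sketch of a manager-side poll, assuming the open-source pysnmp library (the v4.x hlapi API); the agent address and community string are hypothetical:

```python
# Minimal SNMP GET from a manager to a device agent (pysnmp 4.x hlapi).
# The agent address and community string are assumptions for illustration.
from pysnmp.hlapi import (
    SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, getCmd,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),        # SNMP v2c
        UdpTransportTarget(('192.0.2.10', 161)),   # hypothetical device
        ContextData(),
        # These OIDs come from the device's MIB, the "second element."
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysUpTime', 0)),
    )
)

if error_indication:
    print(f'Poll failed: {error_indication}')
else:
    for name, value in var_binds:
        print(f'{name.prettyPrint()} = {value.prettyPrint()}')
```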

In our industry, it is often more convenient to buy the monitoring software along with modular equipment (DAs, conversion hardware, etc.). This can have two major benefits. First, fault reporting can be tightly coupled to fault repair through control of a router, perhaps switching to a backup in the event of a failure of a particular circuit. Second, by merging signal transport hardware and the monitoring system into a closely coupled system, it is easy to get reports from each piece of modular conversion and distribution hardware about the health of the signal.
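
As a rough illustration of that first benefit, the sketch below couples a fault report to a router take. Everything here (the device names, the backup table and the RouterControl class) is a hypothetical stand-in for a vendor's control protocol, not a real API:

```python
# Hedged sketch: tight coupling between fault reporting and fault repair.
# When a monitored circuit fails, take the designated backup on the router.
# Names and the RouterControl class are illustrative assumptions.

BACKUP_SOURCE = {              # primary source -> designated backup
    'SAT-RX-1': 'FIBER-RX-1',
    'STUDIO-A': 'STUDIO-B',
}

class RouterControl:
    """Stand-in for a vendor router-control driver (serial or IP)."""
    def take(self, destination: str, source: str) -> None:
        print(f'ROUTE {source} -> {destination}')

def on_circuit_fault(router: RouterControl, failed: str, destination: str) -> None:
    backup = BACKUP_SOURCE.get(failed)
    if backup is None:
        print(f'No backup defined for {failed}; alarm only')
        return
    router.take(destination, backup)

on_circuit_fault(RouterControl(), 'SAT-RX-1', 'MC-PGM')
```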

Consolidation

Of course, there are other benefits to merging facility monitoring into a small number of components. It becomes easy to have a single interface used to both monitor and control devices. This can permit operators to make adjustments when things are not quite perfect, like swapping audio tracks to remain on the air with a usable signal. Carrying it one step further, it is easy to see how a graphical user interface could, for instance, turn a defective device from green to red when a fault occurs and, with a simple mouse click, let the operator bring up details about the fault. This could, of course, include routing pictures and sound to a larger monitor to permit quality evaluation.
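
A toy model of that interface logic might look like the following; the device names and fault records are invented for illustration:

```python
# Toy model of a monitor GUI's state logic: device tiles turn red on a
# fault, and "clicking" one surfaces the detail an operator would need.
# Device names and fault records are illustrative assumptions.

DEVICES = {
    'DA-07':  {'ok': True,  'detail': ''},
    'SRV-02': {'ok': False, 'detail': 'Video server: drive array degraded'},
}

def tile_color(name: str) -> str:
    return 'green' if DEVICES[name]['ok'] else 'red'

def on_click(name: str) -> str:
    # In a real system this would also route the device's output
    # (pictures and sound) to a larger monitor for evaluation.
    device = DEVICES[name]
    return device['detail'] if not device['ok'] else f'{name}: healthy'

for name in DEVICES:
    print(name, tile_color(name), '-', on_click(name))
```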

The need to do that is easy to understand. An MPEG quality analysis system might sense excess macroblocking and raise an alarm. But that may be the best picture available due to rain fade in a satellite circuit, or even a faulty recording, and switching to a backup without seeing the defect might not be a good idea. Another example is audio silence, which might simply be a dramatic pause the producer intended. Monitoring systems that are more media-aware can thus allow better fault finding and smoother operation than multipurpose IT-centric systems. But, of course, the opposite is also true: an IT-centric system may catch infrastructure faults that a narrowly media-focused tool would miss.
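
One way to express that kind of media-aware judgment in code is sketched below, with a hold-off on silence alarms and a comparison of main and backup impairment before any automatic switch; the thresholds are illustrative assumptions, not standard values:

```python
# Sketch of media-aware alarm handling. All thresholds are illustrative.

SILENCE_HOLDOFF_S = 15.0       # intentional dramatic pauses are short

class SilenceAlarm:
    """Escalate silence only after it persists past the hold-off."""
    def __init__(self) -> None:
        self.silent_since = None

    def update(self, is_silent: bool, now: float) -> bool:
        if not is_silent:
            self.silent_since = None
            return False
        if self.silent_since is None:
            self.silent_since = now
        return (now - self.silent_since) >= SILENCE_HOLDOFF_S

def should_switch(main_impairment: float, backup_impairment: float,
                  threshold: float = 0.3, margin: float = 0.1) -> bool:
    """Auto-switch only if the main feed is impaired AND the backup is
    clearly better; rain fade often degrades both paths at once."""
    return (main_impairment > threshold and
            backup_impairment < main_impairment - margin)
```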

IT hardware

Today, we live in an increasingly hybridized world where purpose-built video and audio hardware has to interface well with IT-centric hardware. For example, we now find station-in-a-box systems for master control that are entirely IT-based, except for interface cards for live feeds. Monitoring such systems, along with video servers, transmitter remote controls and many other devices, can be done with video-industry packages that have SNMP capability. But in a large installation, there may well be more IT than video in the future, so finding a happy medium is critical if we consider growth of a system over the long term.

We should think carefully about the topology of a monitoring system. If it has to operate over a WAN connection, consider what happens if the interconnection goes down. If the monitoring system is critical, as in the case of a centralized master control facility remaining in contact with its remote locations, then a backup interconnection method needs to be determined. Both mean time between failures (MTBF) and mean time to repair (MTTR) will affect decisions about provisioning backup interconnect bandwidth. It would not do to leave the receiving end of a central master control blind for a long period of time. A VPN over the Internet might allow at least thumbnails and SNMP traffic to continue even when higher-quality video would be impractical.
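
A simple way to act on that is a reachability check that degrades gracefully from the primary WAN to the VPN path. The hostnames and probe port below are assumptions; a real system would poll with SNMP or ICMP rather than a bare TCP connect:

```python
# Sketch: fail over the monitoring link from a dedicated WAN to a VPN
# path that carries only SNMP and thumbnails. Hosts/ports are assumed.
import socket

PRIMARY = ('remote-site.example.net', 22)       # dedicated WAN path
FALLBACK = ('remote-site-vpn.example.net', 22)  # VPN over the Internet

def reachable(addr: tuple, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

if reachable(PRIMARY):
    print('Full monitoring: confidence video + SNMP')
elif reachable(FALLBACK):
    print('Degraded monitoring: thumbnails + SNMP only')
else:
    print('LINK DOWN: dispatch per the MTTR plan')
```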

The other side

I always suggest to clients that the remotely monitored facility have at least some routing that can be controlled from the other end, at the NOC. This allows the operator at the NOC to switch a single monitoring circuit among all the points in the system over which they have monitoring and control, which is a lot more cost-effective than increasing the number of “probes” and return video/audio circuits. We could learn a lot from NASA, which has been remotely monitoring and controlling rovers on Mars for years with system latency measured in minutes, not milliseconds.
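
The economics are easy to sketch: one return circuit scanned across many probe points. The RemoteRouter class and point names below are hypothetical:

```python
# Sketch: share one return confidence circuit across many probe points by
# cycling a remotely controlled router. Names are illustrative assumptions.
import time

PROBE_POINTS = ['SAT-RX-1', 'SERVER-OUT', 'MC-PGM', 'TX-OFF-AIR']
RETURN_FEED = 'NOC-RETURN'

class RemoteRouter:
    """Stand-in for a router at the remote site, controlled from the NOC."""
    def take(self, destination: str, source: str) -> None:
        print(f'ROUTE {source} -> {destination}')

router = RemoteRouter()
for scan in range(3):              # a few scan passes for the demo
    for source in PROBE_POINTS:
        router.take(RETURN_FEED, source)
        time.sleep(10)             # dwell long enough to evaluate each point
```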

John Luff is a digital television consultant.

Send questions and comments to: john.luff@penton.com
