Avoiding the digital cliff


The DTV transition brings the capability for higher-quality signals, along with new test and measurement needs.

The DTV transition created a number of challenges for TV broadcasters. There is mounting competitive pressure from alternatives to terrestrial broadcast TV stations, and today's station is a collection of diverse and complicated technologies.

Multilayer and multiformat facilities are the norm. Baseband signal measurements, MPEG monitoring and RF analysis each contribute to painting a picture of the health of the broadcast signal for the station engineer. The engineer's job is to make sure content is delivered, and there are several critical parameters that require close scrutiny in each layer, such as jitter, timing, ancillary data and audio.

The transition from analog to digital, and now HDTV, introduced new challenges to ensure the transmission of the signal from the camera through to RF modulation, be it cable, satellite or terrestrial TV. Broadcast systems are now multi-standard and multiformat, with a large number of interfaces between islands of analog, digital SD or HD-SDI and MPEG-compressed systems.

This calls for a new set of testing and monitoring methodologies. Digital picture encoding, DVB and ATSC program specific information (PSI) and system information (SI), and complex digital RF modulation schemes all add to the need to monitor key system health indicators at all layers of the transmission chain. If a system fails, the pathology of failure is different from that of an analog system. Failure at the digital cliff is sudden, and its precursors are less well understood.

The layered model

There is, however, a series of key system parameters that can be monitored at all layers of the broadcast system to help maintain a safety margin for error-free, reliable transmission. Moving forward, it's possible for systems to become predictive, proactive and preventive by setting multiple triggered monitoring points that provide trend analysis. The result can be an early warning that your system is approaching the digital cliff.

Today's digital transmission systems can be thought of as five basic layers: analog, SD and HD uncompressed serial digital, compressed digital (MPEG-2, VC-1, H.264), RF transmission, and command and control. Any one of these layers can generate failures in the transmission chain, which will propagate down the chain, causing picture and sound degradation or loss. Clearly, a cumulative error budget exists for the total transmission system. Therefore, it's necessary to look at the key health indicators and error budget within each layer, because a problem there affects both that layer and the links that follow it.
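To picture how this layered view rolls up into a single verdict, here is a minimal sketch in Python, assuming each layer can be scored with a simple remaining-margin figure between 0 and 1. The layer names follow the list above; the scores are invented.

    # Toy model of the five-layer view described above: the overall system is
    # only as healthy as its weakest layer. The margin scores are invented.
    LAYERS = ["analog", "uncompressed SD/HD-SDI", "compressed (MPEG-2/VC-1/H.264)",
              "RF transmission", "command and control"]

    def overall_status(margins):
        """margins: {layer_name: remaining margin, 0.0 (failed) to 1.0 (clean)}"""
        worst = min(margins, key=margins.get)
        if margins[worst] == 0.0:
            return f"FAILED at the {worst} layer"
        if margins[worst] < 0.25:
            return f"approaching the cliff: low margin in the {worst} layer"
        return "healthy"

    margins = {layer: 1.0 for layer in LAYERS}
    margins["RF transmission"] = 0.2          # e.g. MER trending downward
    print(overall_status(margins))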

Key performance indicators

Ultimately, the goal is to move from reactive monitoring, where station staff fixes a problem once it occurs, to predictive monitoring, which provides warning of incipient failure and time to fix the issue before it becomes a visible problem. In the digital domain, this is possible by continually monitoring trends in parameters such as program clock reference (PCR) timing and modulation error ratio (MER) as precursors of failure.
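As an illustration of what trend-based, predictive monitoring can look like, the following sketch fits a straight line to periodic MER readings and projects how long the current trend would take to reach an alarm threshold. The sample values and the 24dB threshold are hypothetical; real thresholds depend on the modulation scheme in use.

    # Minimal sketch of trend-based predictive monitoring (hypothetical values).
    # Fits a straight line to recent MER readings and projects the time at which
    # the trend would cross an alarm threshold.
    def hours_until_threshold(samples, threshold_db):
        """samples: list of (hour, mer_db) pairs, oldest first."""
        n = len(samples)
        xs = [h for h, _ in samples]
        ys = [m for _, m in samples]
        x_mean, y_mean = sum(xs) / n, sum(ys) / n
        slope = (sum((x - x_mean) * (y - y_mean) for x, y in samples)
                 / sum((x - x_mean) ** 2 for x in xs))
        if slope >= 0:
            return None                          # MER steady or improving
        intercept = y_mean - slope * x_mean
        crossing_hour = (threshold_db - intercept) / slope
        return crossing_hour - xs[-1]            # hours from the latest sample

    # Example: MER sampled every six hours, drifting slowly downward.
    readings = [(0, 31.8), (6, 31.5), (12, 31.1), (18, 30.6), (24, 30.2)]
    eta = hours_until_threshold(readings, threshold_db=24.0)
    print(f"Projected hours until the MER alarm threshold: {eta:.0f}")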

The keys to transmission in a facility, whether analog or digital, are signal level and timing. Effective monitoring of these parameters in both the analog and digital domains ensures sufficient headroom at all stages of the transmission chain, whether that means SDI timing, eye diagrams or PCR measurements in the MPEG domain.

Synchronization and timing

For any system, long-term reliability starts with the installation of the HD-SDI infrastructure. This requires correct cable selection, termination and routing to prevent kinks or undue stress on the cable. These steps allow the high-speed 1.5Gb/s signal to be properly transmitted without disruption from unwanted reflections caused by impedance changes. Getting this right from day one pays long-term dividends in reliability.

Synchronization is a fundamental and critical procedure in video facilities. Every device in a system must be synchronized to successfully create, transmit and recover video and audio. The complexities of operating an analog and digital multi-standard, multiformat environment require flexibility to achieve and maintain synchronization.

Television timing (using black burst as the reference) has always been critical, but the addition of tri-level sync for HD can complicate system timing in a hybrid facility. As a result, facilities with multiple formats require a variety of generator solutions. If that's not enough to worry about, digital audio places even more demands on a facility because these signals need to be referenced to video sync to maintain defined relationships between audio and video. All this requires careful system design to ensure synchronization between all processing equipment.

The focus, therefore, is careful calculation of the propagation delay between various devices caused by cable lengths. In addition, you must account for any processing delay of the equipment through which a signal might pass.
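As a rough illustration of that arithmetic, the sketch below estimates cable propagation delay from run length and velocity factor, then adds fixed device processing delays. The velocity factor and device delays shown are placeholder figures; actual values come from the cable and equipment data sheets.

    # Rough sketch of path-delay budgeting. The velocity factor and device
    # delays are placeholder figures, not data-sheet values.
    SPEED_OF_LIGHT_M_PER_NS = 0.2998      # meters per nanosecond in free space
    VELOCITY_FACTOR = 0.84                # assumed figure; check the cable data sheet

    def cable_delay_ns(length_m, velocity_factor=VELOCITY_FACTOR):
        """Propagation delay of a coax run, in nanoseconds."""
        return length_m / (SPEED_OF_LIGHT_M_PER_NS * velocity_factor)

    def path_delay_ns(cable_runs_m, device_delays_ns):
        """Total delay: every cable run plus every device's processing delay."""
        return sum(cable_delay_ns(run) for run in cable_runs_m) + sum(device_delays_ns)

    # Example: two cable runs and two devices in the path (illustrative numbers).
    total = path_delay_ns(cable_runs_m=[35, 60], device_delays_ns=[250, 1200])
    print(f"Total path delay: {total:.0f} ns")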

Eye diagrams and jitter

After you've achieved error-free transmission throughout the digital paths within your facility, examine the quality of source material as it's delivered to the facility. Modern digital waveform monitors can check serial digital source material on ingest, and some models generate a compliance log against time code listing any errors. These errors should be rectified before further processing. Figure 1 shows an error log of video and audio errors present within an incoming signal. Note that the errors are tied to the time code of the event, making further examination easier. Logged errors include input signal or external reference signal missing; color gamut error; EAV/SAV missing or mismatch with line number; SAV placement error; CRC and EDH errors, code word violations or field and line length errors; and ancillary data and closed caption present/absent errors, parity errors or checksum errors.
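A compliance log of this kind can be thought of as a list of error records keyed by timecode. The sketch below is purely illustrative; the entries and field names are invented, not the output of any particular waveform monitor.

    # Minimal sketch of a timecode-keyed compliance log like the one described
    # above. Error names follow the article's list; the entries are invented.
    from dataclasses import dataclass

    @dataclass
    class LogEntry:
        timecode: str       # HH:MM:SS:FF of the event
        severity: str       # "error" or "warning"
        description: str

    log = [
        LogEntry("00:01:12:04", "error", "CRC/EDH error"),
        LogEntry("00:01:12:05", "error", "EAV/SAV mismatch with line number"),
        LogEntry("00:07:45:18", "warning", "color gamut error"),
    ]

    # List every error so it can be rectified before further processing.
    for entry in (e for e in log if e.severity == "error"):
        print(f"{entry.timecode}  {entry.description}")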

Eye diagrams displayed on a waveform monitor are the key tool for monitoring both amplitude and jitter on SD and HD-SDI signals. The height and openness of the eye give a clear indication of the health of the measured signal.


Figure 1. Alarm status display showing error log, alarm status, and video and audio sessions.

Among the relevant SMPTE documents (SMPTE 259M, SMPTE 292M, RP 184 and EG 33), Recommended Practice RP 184 provides definitions and measurement procedures for jitter, while the standards define specifications for the electrical characteristics of the signal. Figure 2 shows an eye diagram of an HD signal with automated measurements of the eye parameters.

Signal amplitude is important for two reasons: its relation to noise, and the fact that the receiver estimates the required high-frequency compensation (equalization) from the remaining half-clock frequency energy as the signal arrives. If a signal of incorrect amplitude were applied at the sending end, incorrect equalization would be applied at the receiver, causing signal distortion. While poor transmissions with eye closure are potentially recoverable by modern equipment with good front-end equalizers, they are likely to lead to sparkle artifacts, line dropouts and eventually freeze frames and black pictures.
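To make the idea of automated eye checks concrete, here is a minimal sketch that tests measured eye parameters against the commonly cited HD-SDI limits of 800mV ±10 percent amplitude and 0.2UI alignment jitter; confirm the exact limits against SMPTE 292M and RP 184 before relying on them.

    # Minimal sketch of automated eye-parameter checks against commonly cited
    # HD-SDI limits (800mV +/-10 percent amplitude, 0.2UI alignment jitter).
    AMPLITUDE_NOMINAL_MV = 800
    AMPLITUDE_TOLERANCE = 0.10
    ALIGNMENT_JITTER_LIMIT_UI = 0.2

    def check_eye(amplitude_mv, alignment_jitter_ui):
        problems = []
        low = AMPLITUDE_NOMINAL_MV * (1 - AMPLITUDE_TOLERANCE)
        high = AMPLITUDE_NOMINAL_MV * (1 + AMPLITUDE_TOLERANCE)
        if not low <= amplitude_mv <= high:
            problems.append(f"amplitude {amplitude_mv}mV outside {low:.0f}-{high:.0f}mV")
        if alignment_jitter_ui > ALIGNMENT_JITTER_LIMIT_UI:
            problems.append(f"alignment jitter {alignment_jitter_ui}UI exceeds "
                            f"{ALIGNMENT_JITTER_LIMIT_UI}UI")
        return problems or ["eye parameters within limits"]

    print(check_eye(amplitude_mv=760, alignment_jitter_ui=0.27))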

MPEG monitoring

Digital processing can add cyclic redundancy check (CRC) codes to the digital data stream to provide a simple means of error checking the video signal. By monitoring the CRC values, a measurement device can report the number of errors. If your system notes errors every minute or every second, that is a clear warning the system is close to the digital cliff. An eye display should then be used to isolate the problems in the transmission path.
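One simple way to turn raw CRC error counts into an early warning is to watch the error rate over a sliding time window, as in this sketch. The window length and the warning and alarm counts are illustrative and would be tuned to the facility.

    # Sketch of a sliding-window CRC error-rate alarm. The window length and
    # warning/alarm counts are illustrative, not recommended values.
    import time
    from collections import deque

    class CrcErrorMonitor:
        def __init__(self, window_s=60, warn_count=1, alarm_count=10):
            self.window_s = window_s
            self.warn_count = warn_count
            self.alarm_count = alarm_count
            self.events = deque()            # timestamps of CRC errors

        def record_error(self, timestamp=None):
            self.events.append(time.time() if timestamp is None else timestamp)

        def status(self, now=None):
            now = time.time() if now is None else now
            while self.events and now - self.events[0] > self.window_s:
                self.events.popleft()        # drop errors outside the window
            count = len(self.events)
            if count >= self.alarm_count:
                return f"ALARM: {count} CRC errors in the last {self.window_s}s"
            if count >= self.warn_count:
                return f"WARNING: {count} CRC errors in the last {self.window_s}s"
            return "OK"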


Figure 2. HD eye measurement with automated eye measurement parameters.

Among the many elements of the MPEG transmission layer, there are three that need constant monitoring: PCR timing and drift, continuity count errors, and PSI/SI tables for correctness and repetition rates. PCR clock recovery is fundamental to MPEG transmission because it allows a set-top box to recover the reference 27MHz (±30ppm) clock used to derive system timings. Jitter and long-term drift in this clock will ultimately lead to set-top boxes failing to display the transmitted video.

The DVB standard TR 101 290 measurement guidelines specify in detail the proper PCR measurement methods for both jitter and drift. Measurements include accuracy (PCR_AC), overall jitter (PCR_OJ), frequency offset (PCR_FO) and drift (PCR_DR). (See Figure 3.) The standard also defines guidelines for testing PSI/SI table presence and repetition rates.
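As a simplified illustration of a frequency-offset check in the spirit of PCR_FO, the sketch below compares elapsed time as counted by the received PCRs with elapsed time on the measurement clock. It ignores PCR wrap-around and the finer points of TR 101 290; the 810Hz limit simply restates the ±30ppm tolerance on the 27MHz clock.

    # Simplified PCR frequency-offset estimate in the spirit of PCR_FO.
    # Each sample is (arrival_time_s, pcr_ticks), with pcr_ticks reconstructed
    # as pcr_base * 300 + pcr_extension. PCR wrap-around is ignored for brevity.
    PCR_CLOCK_HZ = 27_000_000
    FO_LIMIT_HZ = 810                        # 30ppm of 27MHz

    def pcr_frequency_offset_hz(pcr_samples):
        (t0, p0), (t1, p1) = pcr_samples[0], pcr_samples[-1]
        elapsed_wall = t1 - t0                      # seconds by the measurement clock
        elapsed_pcr = (p1 - p0) / PCR_CLOCK_HZ      # seconds by the encoder's clock
        return (elapsed_pcr / elapsed_wall - 1.0) * PCR_CLOCK_HZ

    offset = pcr_frequency_offset_hz([(0.0, 0), (10.0, 270_000_900)])
    verdict = "within" if abs(offset) <= FO_LIMIT_HZ else "outside"
    print(f"PCR frequency offset: {offset:+.0f} Hz ({verdict} the 810Hz limit)")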

Continuity count errors provide an indication of dropped packets, which are a common problem in video distribution. Dropped packets mean loss of information from the transport stream, which can lead to decoder problems. Monitoring continuity count errors gives an indication of potential problems in the video distribution network and enables remedial action to be taken before decoders experience visible failure.
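Continuity counting itself is straightforward: each PID carries a 4-bit counter that should advance by one, modulo 16, from packet to packet. The sketch below flags gaps only and ignores the standard's allowances for duplicate and payload-less packets.

    # Sketch of per-PID continuity-count checking. The full rules (duplicate
    # packets, packets without payload) are more involved; this flags gaps only.
    def count_cc_errors(packets):
        """packets: iterable of (pid, continuity_counter) with the counter in 0-15."""
        last_cc = {}
        errors = 0
        for pid, cc in packets:
            if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
                errors += 1          # a gap here usually means dropped packets
            last_cc[pid] = cc
        return errors

    stream = [(0x100, 3), (0x100, 4), (0x100, 6), (0x101, 0), (0x101, 1)]
    print(count_cc_errors(stream))   # 1: PID 0x100 jumped from 4 to 6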


Figure 3. System health can be monitored by checking key PCR parameters of timing accuracy and drift.

These measurements can be performed in real time by an MPEG transport stream monitor. When equipped with multiple threshold alarms, the device provides trend-based, predictive monitoring, resulting in a good early indication of failure. Warnings are available prior to the onset of the digital cliff. Most real-time monitors also provide transport stream recording on user-defined triggers, such as PCR timing. This allows detailed offline analysis of both timing and PSI/SI issues when they occur.

A key tool for PSI/SI monitoring is the use of time-defined reference templates, which can be compared against the actual transmission. PCR timing, continuity count errors and PSI/SI content and repetition rates should be monitored 24/7 by transport stream monitors located at the stream entry point and after each MPEG manipulation point in the system.
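One way to picture the template approach is as a table of expected tables, services and maximum repetition intervals that the monitor compares with what is actually received. The table set, service names and intervals below are invented for illustration.

    # Sketch of comparing observed PSI/SI behavior against a reference template.
    # Table set, service names and intervals are invented for illustration.
    TEMPLATE = {
        "PAT": {"max_interval_ms": 500, "services": {"News-1", "Sports-2"}},
        "SDT": {"max_interval_ms": 2000, "services": {"News-1", "Sports-2"}},
    }

    def check_against_template(observed):
        """observed: {table_name: {"interval_ms": float, "services": set}}"""
        findings = []
        for table, expected in TEMPLATE.items():
            seen = observed.get(table)
            if seen is None:
                findings.append(f"{table} missing from transmission")
                continue
            if seen["interval_ms"] > expected["max_interval_ms"]:
                findings.append(f"{table} repetition interval {seen['interval_ms']}ms "
                                f"exceeds {expected['max_interval_ms']}ms")
            if seen["services"] != expected["services"]:
                findings.append(f"{table} service list differs from the template")
        return findings or ["transmission matches the template"]

    print(check_against_template({
        "PAT": {"interval_ms": 120, "services": {"News-1", "Sports-2"}},
        "SDT": {"interval_ms": 2600, "services": {"News-1", "Sports-2"}},
    }))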

Modern digital RF cable, satellite and terrestrial broadcast systems behave differently from traditional analog TV. Once reception is lost, the path to recovery isn't always obvious. The problem could be caused by MPEG table errors or simply by the RF power dropping below the operational threshold, the cliff point. RF problems include satellite dish or low-noise block converter issues, terrestrial RF signal reflections, poor noise performance, channel interference, and cable amplifier or modulator faults.

RF monitoring

The following RF parameters are important for transmission health:

  • RF signal strength: How much signal is being received?
  • Constellation diagram: This diagram characterizes link and modulator performance.
  • Modulation error ratio (MER): An early indicator of signal degradation, MER is the ratio of the power of the signal to the power of the error vectors, expressed in dB.
  • Error vector magnitude (EVM): A measurement similar to MER but expressed differently, EVM is the ratio of the amplitude of the RMS error vector to the amplitude of the largest symbol, expressed as a percentage. (Both calculations are illustrated in the sketch after this list.)
  • Transport error flag (TEF): The TEF indicates that the FEC is failing to correct all transmission errors. TEF is also referred to as the Reed-Solomon uncorrected block count.
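For reference, here is a small sketch of how MER and EVM follow from the definitions in the list above, treating the received and ideal constellation points as complex numbers. The sample symbols are invented.

    # Sketch of MER and EVM computed from received vs. ideal constellation points,
    # following the definitions in the list above. The sample symbols are invented.
    import math

    def mer_db(received, ideal):
        signal_power = sum(abs(s) ** 2 for s in ideal)
        error_power = sum(abs(r - s) ** 2 for r, s in zip(received, ideal))
        return 10 * math.log10(signal_power / error_power)

    def evm_percent(received, ideal):
        rms_error = math.sqrt(sum(abs(r - s) ** 2 for r, s in zip(received, ideal))
                              / len(ideal))
        return 100 * rms_error / max(abs(s) for s in ideal)

    ideal = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]                  # simple QPSK corners
    received = [1.02 + 0.97j, -0.95 + 1.05j, -1.01 - 0.99j, 0.98 - 1.04j]
    print(f"MER: {mer_db(received, ideal):.1f} dB, EVM: {evm_percent(received, ideal):.1f}%")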


Figure 4. Constellation diagram with RF figures of merit, such as MER, EVM and BER.

MER is designed to provide a single figure of merit for the received signal. It gives an early indication of the ability of the receiver to correctly decode the transmitted signal. In effect, MER compares the actual location of a received symbol (as representing a digital value in the modulation scheme) with its ideal location. As the signal degrades, the received symbols are located further from their ideal locations and the measured MER value will decrease. Ultimately, the symbols will be incorrectly interpreted, and the bit error rate will rise. At some point, the signal reaches the threshold or the dreaded digital cliff.

Improving on bit error ratio

The key to maintaining a reliable digital broadcast system is careful monitoring at all levels and formats. Continuously check the key indicators of the health and long-term stability of your network. Then, use predictive trend analysis of jitter, amplitude and timing to catch system deterioration before your audience knows there could have been a problem.

Mike Waidson is a video applications engineer for Tektronix.
