MPEG quality assurance
How do you monitor the quality of an MPEG-2 stream? Most broadcast engineers use the concept of Quality of Experience (QoE). Some engineers call it Perceived Quality of Service (PQoS), in the sense that it is the Quality of Service (QoS) as it is actually perceived (or experienced) by the end user. For our purposes, we’ll stick with calling it QoE.
Evaluating the QoE of digital television content gives broadcasters a wide range of choices, spanning low, moderate and high quality levels. QoE evaluation lets broadcast engineers and network operators minimize storage and network resources by allocating only what is necessary to maintain a pre-defined level of viewer expectations.
The most basic approach to measuring video content QoE is “no-reference” analysis. In this scenario, QoE is not measured by comparing the original video to what is delivered, but by trying to visually detect artifacts such as blockiness, blur or jerkiness in the video. The “no-reference” approach is based on the concept that viewers don't know the quality of the original content. In these days of big, bright, undistorted plasma, LCD and LED screens, artifacts are hard for viewers to ignore. The good and bad news is that modern non-CRT displays don’t “lie like a Trinitron.”
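To make the idea concrete, here is a minimal sketch of one possible no-reference indicator: it compares luma discontinuities at 8x8 block boundaries with those inside blocks, on the assumption that visible MPEG blocking inflates the boundary differences. The function name and the interpretation of its output are illustrative only and are not drawn from any standardized metric.

import numpy as np

def blockiness_score(luma: np.ndarray, block: int = 8) -> float:
    """Crude no-reference blockiness indicator for one decoded frame.

    Compares the average luma jump across 8x8 block boundaries with the
    average jump inside blocks; values well above 1.0 hint at visible
    MPEG blocking artifacts. (Illustrative metric, not PEVQ.)
    """
    # Horizontal differences between neighbouring pixel columns
    diffs = np.abs(np.diff(luma.astype(np.float64), axis=1))
    cols = np.arange(diffs.shape[1])
    on_boundary = (cols % block) == (block - 1)   # column pairs straddling a block edge
    boundary_mean = diffs[:, on_boundary].mean()
    interior_mean = diffs[:, ~on_boundary].mean()
    return boundary_mean / (interior_mean + 1e-9)

# Example: a frame of random noise should score close to 1.0 (no blocking)
frame = np.random.randint(0, 256, size=(480, 720))
print(f"blockiness ratio: {blockiness_score(frame):.2f}")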
QoE evaluation is a combination of objective and subjective evaluation procedures, each taking place after the encoding process. Subjective quality evaluation can be a time-consuming process requiring a large amount of human resources. Objective evaluation methods can provide QoE results faster, but they require large amounts of machine resources and sophisticated test gear. To that end, objective evaluation methods are built on multiple metrics.
QoS
QoS is the ability to provide different priorities to different applications, users or data flows, or to guarantee a certain level of performance for a specific stream. For example, a required bit rate, delay, jitter, packet dropping probability and/or bit error rate may be guaranteed. QoS guarantees are important if the network capacity is insufficient, especially since real-time streaming IPTV often requires a fixed bit rate and is delay-sensitive.
A network or protocol that supports QoS may agree on a traffic contract with the application software and reserve capacity in the network nodes, for example, during a session establishment phase. During the session, it may monitor the achieved level of performance, such as the data rate and delay, and dynamically control scheduling priorities in the network nodes. It may also release the reserved capacity during a tear down phase.
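As an illustration of how an application can participate in such an arrangement, the sketch below marks outgoing UDP packets with a DiffServ code point so that QoS-aware routers can prioritize them. The specific DSCP class (AF41), the multicast address and the payload are assumptions for the example; whether the marking is honored depends entirely on the network's configured policy.

import socket

# DSCP class often used for streaming video: Assured Forwarding AF41 (decimal 34).
# Whether routers honor this marking depends entirely on the network's QoS policy.
DSCP_AF41 = 34
TOS_VALUE = DSCP_AF41 << 2          # the DSCP sits in the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Send one placeholder video packet; the multicast group and port are assumptions
sock.sendto(b"placeholder video payload", ("239.1.1.1", 5004))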
QoS is sometimes used as a quality measure, with many alternative definitions, rather than referring to the ability to reserve resources. QoS sometimes refers to the level of service. High QoS, for example, is often confused with a high level of performance or achieved service quality, such as high bit rates, low latency and low bit error probability. A high level of performance is, in fact, a QoE factor, not QoS.
Best Effort non-QoS
Best Effort is an alternative to QoS control mechanisms. A Best-Effort network or service does not fully support QoS, and such networks are not unusual in broadcast studios because they reflect how broadcast engineers typically think.
The goal is to provide high-quality communication over a Best-Effort network by over-provisioning the capacity so that it is more than sufficient for the expected peak traffic load. The resulting absence of network congestion eliminates the need for QoS mechanisms.
While many new routers and switches support QoS, many older devices do not. As devices are replaced within a station’s IT ecosystem, the network will ultimately become capable of QoS monitoring and measurement.
PEVQ
Perceptual Evaluation of Video Quality (PEVQ) is a standardized end-to-end (E2E) measurement algorithm, based on modeling human visual behaviors, to score the picture quality of a video presentation by means of a five-point mean opinion score (MOS), with five being perfect.
The measurement algorithm can be applied to analyze visible artifacts caused by MPEG compression in the digital video encoding/decoding (or transcoding) process, in RF- or IP-based transmission networks, or in viewer devices such as set-top boxes. Application scenarios address next-generation networking and mobile services including IPTV (SD television and HDTV), streaming video, Mobile TV, video telephony, video conferencing and video messaging.
The development of today’s picture quality analysis algorithms began with still image models. These models were later enhanced to also cover motion pictures. The measurement paradigm is to assess degradations of a decoded video sequence output from the network (for example, as received by a TV set-top box) compared to the original reference picture as it leaves the broadcast studio control room. Consequently, the setup is referred to as end-to-end (E2E) quality testing.
This setup closely reflects how average viewers would evaluate the video quality through subjective comparison; it is, specifically, a QoE test. Besides assigning an overall quality MOS as a figure of merit from 1-5, abnormalities in the video signal are quantified by a variety of Key Performance Indicators (KPIs), including peak signal-to-noise ratio (PSNR), distortion indicators and lip-sync.
Depending on the information that is made available to the algorithm, video quality test algorithms can be divided into three categories. A “Full Reference” (FR) algorithm has access to and makes use of the original reference sequence for a comparison. It can compare each pixel of the reference sequence to each corresponding pixel of the degraded sequence. FR measurements deliver the highest accuracy and repeatability but tend to be processing-intensive.
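As a simple illustration of a full-reference comparison, the sketch below computes PSNR (one of the KPIs mentioned above) between a reference frame and its degraded counterpart, assuming both are already decoded, spatially and temporally aligned 8-bit luma arrays; in practice, that alignment is the hard part.

import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Full-reference PSNR between two 8-bit luma frames of the same size."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: reference frame vs. a noisy "decoded" copy
ref = np.random.randint(0, 256, size=(1080, 1920))
deg = np.clip(ref + np.random.normal(0, 5, ref.shape), 0, 255)
print(f"PSNR: {psnr(ref, deg):.1f} dB")   # roughly 34 dB for sigma = 5 noise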
A “Reduced Reference” (RR) algorithm uses a reduced bandwidth side channel between the sender and the receiver which is not capable of transmitting the full reference signal. Instead, parameters are measured at the sending side, which help predict the quality at the receiving end. RR measurements offer reduced accuracy and represent a working compromise if bandwidth for the reference signal is limited.
A “No Reference” (NR) algorithm uses only the degraded signal for the quality estimation and has no information about the original reference sequence. NR algorithms provide only low-accuracy estimates, since the quality of the source reference is completely unknown. A common variant of NR algorithms doesn't analyze the decoded video at the pixel level at all, but instead analyzes the digital bit stream at the IP packet level. The measurement is consequently limited to a transport stream analysis.
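A minimal sketch of such a packet-level, no-reference check appears below: it scans a captured MPEG-2 transport stream for continuity-counter gaps, which indicate lost packets. The capture file name is hypothetical, and the logic is deliberately simplified (it ignores duplicate packets, adaptation-field-only packets and discontinuity flags).

def count_cc_errors(ts_bytes: bytes) -> int:
    """Scan a captured MPEG-2 transport stream for continuity-counter gaps.

    Each 188-byte TS packet carries a 4-bit continuity counter per PID that
    increments with every payload-carrying packet; a gap suggests packet loss.
    """
    last_cc = {}            # PID -> last continuity counter seen
    errors = 0
    for i in range(0, len(ts_bytes) - 187, 188):
        pkt = ts_bytes[i:i + 188]
        if pkt[0] != 0x47:                       # sync byte missing: stream not aligned
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        cc = pkt[3] & 0x0F
        if pid == 0x1FFF:                        # null packets carry no meaningful counter
            continue
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            errors += 1
        last_cc[pid] = cc
    return errors

with open("capture.ts", "rb") as f:              # hypothetical capture file
    print("continuity errors:", count_cc_errors(f.read()))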
QoE monitoring and analysis
In a traditional environment, video and network performance are verified with independent tools across the entire network. Juggling multiple, separate performance metrics across the network is typically not an efficient way to pinpoint a problem. This time-consuming process requires many hours of manual interaction and often leads to inaccurate results.
This approach is not particularly adequate for assuring video quality because voice and data network metrics cannot ensure video delivery. Additional metrics specific to video delivery are necessary to present a full view of video transport health.
Transport health alone does not assure a high-quality experience for the viewer. QoE verification at the source and at subsequent network events, such as aggregation, encoding, rateshaping and grooming, is also necessary to ensure that the processed content meets the quality requirements for both video and audio.
As the industry realized that video content can’t be monitored like voice or data signals, a new monitoring trend began to emerge. MPEG video monitoring vendors promoted QoE parameters, such as perceptual quality, as the best approach to monitoring a video network. Supporters of this idea argued that QoE and QoS were not closely related, and that QoE verification was more important because it better represents the viewer’s perception.
Ultimately, this is an incomplete verification of video performance because a monitoring system based solely on QoE will only detect problems when they become visible — that is, when the viewer sees technically troubled program content. By then, it is too late. Automated Q/C systems are appearing in the industry and will be the topic of a future “Transition to Digital” tutorial newsletter.
A thorough analysis of video must include verification of metadata, and it is best done before a file or files play back to viewers. The metadata contained in special MPEG PIDs contributes to the video user experience by providing PSIP, control information (such as dialnorm for Dolby Digital AC-3), offline information (such as closed captioning systems) and interactive capabilities. However, the verification of metadata, other than missing, wrong or out-of-sync audio channels, is absent from QoE-only methods of video monitoring.
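A basic form of such metadata verification can be automated. The sketch below inventories the PIDs present in a transport stream capture and flags any expected PID that never appears; apart from the PAT (0x0000) and the ATSC PSIP base PID (0x1FFB), the PID values and labels are hypothetical and would normally be read from the station's PAT/PMT.

from collections import Counter

# Hypothetical PID plan for one program; real values come from the PAT/PMT
EXPECTED_PIDS = {
    0x0000: "PAT",
    0x1FFB: "ATSC PSIP base PID",
    0x0031: "video (carries caption data in user data)",
    0x0034: "AC-3 audio (dialnorm metadata)",
}

def pid_inventory(ts_bytes: bytes) -> Counter:
    """Count how many 188-byte transport packets were seen on each PID."""
    seen = Counter()
    for i in range(0, len(ts_bytes) - 187, 188):
        pkt = ts_bytes[i:i + 188]
        if pkt[0] == 0x47:
            seen[((pkt[1] & 0x1F) << 8) | pkt[2]] += 1
    return seen

with open("capture.ts", "rb") as f:              # hypothetical capture file
    seen = pid_inventory(f.read())
for pid, name in EXPECTED_PIDS.items():
    status = "OK" if seen[pid] else "MISSING"
    print(f"PID 0x{pid:04X} ({name}): {status}")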
Balancing QoS and QoE
Short of fully automated Q/C testing of all metadata, video and audio levels, and hundreds of other tiny but crucial digital file tests, the next best approach is a balanced application of QoS and QoE monitoring methods throughout the network. This method verifies that video processing nodes are delivering a satisfactory level of video quality, that video quality is maintained throughout the network, and that degradation trends occurring at the transport plane are addressed before their cumulative effect has visible impact on content quality.
Content from diverse sources is aggregated in a control room. Typically, that is where aspect ratios and presentation formats may be modified, and where video and audio are encoded using diverse codec types and compression rates and are often groomed and rate-shaped. At these points, it is necessary to verify that the produced content meets the QoE goals of the service provider.
Farther along the network, the video is transported through the core, to the edge, and ultimately to the viewer location. Throughout these nodes, MPEG packets can suffer loss, misplacement or delay for a wide variety of reasons. If ignored, these issues will cause a perceivable negative impact on the content. At these points, continuous QoS monitoring is a proactive way to deliver service and quality assurance.
Once video streams have left the studio with their structure verified and intact, the main IP issues are not transport stream structure problems, but packet loss and jitter events. Video is inherently sensitive to delivery-time distortions and packet loss. It is demanding on packet-switched networks, especially those provisioned with little quality-of-service headroom.
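Assuming the video rides in RTP over UDP, both of these transport-plane impairments can be tracked at the receiving end. The sketch below shows the RFC 3550 inter-arrival jitter estimator and a simple loss count from RTP sequence numbers; the helper names are illustrative.

def update_jitter(jitter: float, transit_prev: float, transit_now: float) -> float:
    """One step of the RFC 3550 inter-arrival jitter estimator.

    `transit` is (arrival time - RTP timestamp) for a packet, in consistent
    time units; the estimate is smoothed with a gain of 1/16.
    """
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0

def count_rtp_loss(sequence_numbers) -> int:
    """Estimate lost packets from 16-bit RTP sequence numbers (ignores reordering)."""
    lost = 0
    prev = None
    for seq in sequence_numbers:
        if prev is not None:
            gap = (seq - prev) % 65536
            lost += max(gap - 1, 0)
        prev = seq
    return lost

# Example: transit times (arrival minus RTP timestamp) for successive packets, in ms
transits = [120.0, 121.5, 119.8, 130.2]
jitter = 0.0
for prev, now in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, now)
print(f"inter-arrival jitter estimate: {jitter:.2f} ms")
print(count_rtp_loss([10, 11, 12, 15, 16]))   # -> 2 packets missing (13 and 14)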