Quality of Service for Video Communication: Impact of Transmission Errors and Mitigation Methods. Πολυχρόνης Κουτσάκης
Video Quality of Service (QoS) Requirements Ideally: • Quality of video information should not be limited by the coding system or the transmission medium • Constant quality level from frame to frame, independent of the visual content and of the current state of the transmission system • No degradation due to errors or losses in the communication system • No delay
Visible Errors • The Human Visual System is sensitive to errors and degradation in video information. • An error that occurs within a single decoded frame will only be apparent for 1/25th or 1/30th of a second and will not have a great impact on the viewer unless it affects a very large spatial area of the frame. • An error that persists for several frames is likely to be much more obvious to the viewer. A video communication system should aim to minimize or eliminate errors that last for several frames of decoded video.
QoS requirements for a Video Transport System • Video transport system: responsible for transporting video from one end user to another. QoS: the level of service provided to a video application (bandwidth, error rate, delay). • The transport system may be conceptually divided into an underlying network and an end-to-end protocol (transport protocol). • Within the Internet, TCP is used as an end-to-end transport protocol to enhance the QoS provided by the network in order to meet the requirements of the application. • The complexity of the transport protocol depends on the degree of mismatch between the QoS provided by the network and the QoS required by the application.
QoS for Coded Video (1/4) Data transmission rate • CBR transmission: Many existing video communication systems make use of a fixed bit rate channel (e.g., ISDN-based videoconferencing, broadcast digital television). • Problem: Encoded video data has an inherently highly variable bit rate. Therefore, data is buffered before transmission, smoothing out the short-term variations in data rate. Longer-term variations (in spatial and temporal content) cannot be smoothed this way. Usual technique: feed back some measure of the output bit rate to the encoder, in order to adjust the compression factor (quantization step size modification).
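The feedback technique described above can be sketched as a toy control loop. All thresholds, the step-size adjustments, and the "frame size proportional to 1/qp" rate model are illustrative assumptions, not taken from any real codec:

```python
def control_quantizer(buffer_bits, buffer_capacity, qp, qp_min=1, qp_max=51):
    """Raise the quantization step when the buffer fills, lower it when
    the buffer drains, so the output rate approaches the channel rate.
    Thresholds (0.8 / 0.2) are arbitrary illustrative choices."""
    fullness = buffer_bits / buffer_capacity
    if fullness > 0.8:            # buffer nearly full -> coarser quantization
        qp = min(qp + 2, qp_max)
    elif fullness < 0.2:          # buffer nearly empty -> finer quantization
        qp = max(qp - 1, qp_min)
    return qp

def simulate(frame_bits, channel_rate_per_frame, buffer_capacity):
    """Toy simulation: assume a frame's coded size shrinks as qp grows."""
    qp, buf, trace = 10, 0, []
    for bits in frame_bits:
        produced = bits * 10 / qp                     # assumed rate model
        buf = max(0, buf + produced - channel_rate_per_frame)
        qp = control_quantizer(buf, buffer_capacity, qp)
        trace.append(qp)
    return trace
```

Running `simulate` on a sequence of large frames shows the step size climbing until the output fits the channel, which is exactly the quality variation that the slide notes CBR transmission introduces.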
QoS for Coded Video (2/4) • VBR transmission: Packet-switched networks (e.g., ATM networks) are capable of supporting real-time VBR traffic. VBR transmission is advantageous because DCT-encoded video can approach constant quality if the quantization step size is kept at a fixed level (no feedback from the encoder output to the encoding parameters, i.e., open-loop encoding). • Problem: when a connection is set up through an ATM network, the characteristics of the traffic source must be specified, including mean and peak data rates. With open-loop encoding, the video source may violate the traffic parameters agreed to at the start of the connection.
QoS for Coded Video (3/4) Errors and Losses • Even a relatively low level of bit errors or packet/cell losses can have a severe effect on the quality of decoded video data, because compression algorithms remove much of the redundancy from a moving video sequence and the remaining data is very important to the correct reconstruction of the video sequence at the decoder. • For MPEG-1, MPEG-2 and H.261 video, a bit error rate higher than about 10^-6 can produce a significant loss of visual quality, especially when temporal prediction is used.
QoS for Coded Video (4/4) Delay • Three factors are responsible for delay: • Encoding (buffering the data prior to transmission in a CBR encoder, plus bidirectional prediction (I,B,B,B,P)) • Communication medium (congestion) • Decoder (to solve the problem, each set of B pictures is transmitted after the relevant pair of I/P pictures).
Sources of errors • Bit errors • Packet loss (due to delays through the network: the route taken, the processing speed and capacity of each node along the route, the amount of other data traffic in transit at the time)
Error Effects and Error Propagation (1/2) • Position of error in coded data • Spatial error propagation: a packet loss may cause a larger area to be corrupted. If the data within the lost packet is all contained within a single coded slice, then only this slice will be corrupted. But if the packet size is large or the slice size is small, then each packet may contain several slices, so several slices will be corrupted in the decoded frame.
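The packet-size/slice-size relationship above can be made concrete with a small calculation. The model (slices laid out back to back, a packet corrupting every slice it overlaps) is an illustrative assumption:

```python
def corrupted_slices(packet_size, slice_size, packet_offset=0):
    """Number of coded slices touched by one lost packet, assuming slices
    are laid out back to back and the packet starts packet_offset bytes
    into a slice. Purely an illustrative model of spatial propagation."""
    first_part = min(packet_size, slice_size - packet_offset)
    remaining = packet_size - first_part
    # one slice for the initial overlap, plus any full/partial slices after it
    extra = -(-remaining // slice_size) if remaining > 0 else 0
    return 1 + extra
```

A 100-byte packet inside a 500-byte slice corrupts one slice, while a 1500-byte packet spanning 500-byte slices corrupts three or more, matching the slide's point that large packets (or small slices) widen the damage.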
Error Effects and Error Propagation (2/2) • Temporal Error Propagation The corrupted area in the decoded frame can propagate to other decoded frames if they are temporally predicted from the corrupted frame. I and P pictures in an MPEG sequence are used as reference frames for predicted P and B pictures. The area within each predicted frame that is predicted from the corrupted area in the reference frame will also be corrupted.
Glitches • Loss or excessive delay of video data in networks causes glitching in the display of video. • Glitch: The effect seen by the viewer in the display of video due to the unavailability of video data at the decoder when needed. It begins when a portion of a frame is not displayed due to the unavailability of data, while its preceding frame is fully displayed. The glitch continues as long as each consecutive frame after the beginning of the glitch contains a portion that is not displayed. It ends when a subsequent frame is fully displayed.
Glitches as a Network Performance Measure (1/4) • Three quantities of interest: • Glitch duration • Spatial extent (percentage of the undisplayed portions in the frame). It may vary within a glitch from frame to frame. • Glitch rate (number of glitches per unit time that a video stream experiences). • Not all glitches have the same quality degradation effect on the user.
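The three glitch measures can be extracted from a per-frame record of how much of each frame was displayed. The representation (a list of displayed fractions, 1.0 = fully displayed) is an assumption made here for illustration:

```python
def glitch_stats(displayed_fraction, fps=30):
    """Return per-glitch (duration_frames, mean undisplayed extent),
    glitch durations in seconds, and glitch rate per second, following
    the definition that a glitch is a maximal run of partially
    displayed frames."""
    glitches, run = [], []
    for f in displayed_fraction:
        if f < 1.0:
            run.append(1.0 - f)                  # undisplayed portion
        elif run:
            glitches.append((len(run), sum(run) / len(run)))
            run = []
    if run:                                      # glitch still open at the end
        glitches.append((len(run), sum(run) / len(run)))
    durations_s = [n / fps for n, _ in glitches]
    rate = len(glitches) * fps / len(displayed_fraction)
    return glitches, durations_s, rate
```

Note the spatial extent is averaged per glitch here, reflecting the slide's remark that it may vary from frame to frame within one glitch.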
Glitches as a Network Performance Measure (2/4) • The duration, spatial extent and rate of the glitches depend on: • The network • The traffic scenario • The video encoding scheme • The video encoder control scheme
Glitches as a Network Performance Measure (3/4) • The network type affects the statistics of packet loss and delay; the packetization process and the maximum packet size also depend on it. This has an important effect on all three glitch quantities of interest. • The traffic scenario affects the statistics of packet loss, since it determines the load on the network (an increase in network load leads to an increase in glitch rate).
Glitches as a Network Performance Measure (4/4) • The video encoding scheme affects the glitch statistics because for different schemes (H.261, MPEG) the dependencies among frames are different. • Video encoder control scheme: used in order to achieve certain data rate and quality objectives. • CBR • Open-Loop VBR (OL-VBR).
Average Interval Between Errors • The time interval between successive errors in a video sequence depends on the BER and on the bit rate of the encoded sequence (Table 7.1). If these errors propagate spatially and temporally, the effect on the subjective quality of the sequence may be significant.
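For random bit errors, the relationship behind such a table is simple: the expected number of bits between errors is 1/BER, so the expected time between errors is 1/(BER × bit rate). The numbers below are only an example:

```python
def mean_error_interval_s(ber, bit_rate_bps):
    """Expected seconds between successive bit errors, assuming
    independent random bit errors at the given BER."""
    return 1.0 / (ber * bit_rate_bps)

# e.g. a BER of 1e-6 on a 2 Mbit/s stream gives, on average,
# one bit error every half second of video.
```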
Image Distortion Measures • PSNR • A combined measure of “spatial” and “temporal” information, calculated for the original video sequence and for the decoded sequence • MPQM (determines which distortions are visually obvious)
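PSNR, the first measure listed, is straightforward to compute; a minimal version over flat sample sequences (8-bit peak value assumed):

```python
import math

def psnr(original, decoded, peak=255.0):
    """PSNR in dB between two equal-length sample sequences.
    Returns infinity for identical inputs (zero MSE)."""
    n = len(original)
    mse = sum((o - d) ** 2 for o, d in zip(original, decoded)) / n
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(peak ** 2 / mse)
```

For example, frames differing by exactly one quantization level everywhere (MSE = 1) give 10·log10(255²) ≈ 48.13 dB.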
Coding techniques to reduce the effect of errors • Error correction: Feedback error control schemes (e.g., ARQ) are not generally suitable for real-time video transmission; the extra delay introduced (ACK and retransmission of errored packets) is unacceptable. FEC is successfully used in some applications to control the error rate. • Error concealment: • Temporal • Spatial • Motion-compensated
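The simplest of the three concealment flavours, temporal concealment, replaces a corrupted block with the co-located block of the previous decoded frame. The data layout below (frames as dicts keyed by block position) is purely an illustrative assumption:

```python
def conceal_temporal(current, previous, corrupted_positions):
    """Temporal error concealment sketch: copy the co-located block
    from the previous decoded frame over each corrupted block."""
    repaired = dict(current)
    for pos in corrupted_positions:
        repaired[pos] = previous[pos]      # co-located block, no motion comp.
    return repaired
```

Motion-compensated concealment refines this by copying from a motion-shifted position instead of the co-located one, which works better in moving regions.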
Packet Loss Protection and Recovery • Error Correction for Packet Headers (in ATM Networks, cyclic code that corrects 1-bit errors and detects 2-bit errors). • Protection by Packet Priority (scheduling based on delay time, loss rate). • FEC and Interleaving
FEC and Interleaving • Interleaving protects from burst errors that destroy multiple bits at one time. • Combining bit interleaving with FEC localizes the impact of errors; the technique is independent of the video encoding method, since it is applied before the video data are assembled into packets (Figure 6.3). • Example: Figure 6.4, where the output of the source encoder is partitioned into blocks of k bits. An L-bit error-correcting code is added, creating blocks of k+L=m bits. Blocks can be restored even if one cell per block has been lost. The lost cell can be identified by maintaining a sequence number (CI).
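The "one recoverable lost cell per block" idea can be illustrated with the simplest possible erasure code, XOR parity across cells. This is an assumed toy scheme, not the code from Figure 6.4; real systems typically use stronger codes such as Reed-Solomon:

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def add_parity(cells):
    """Append one parity cell: the bytewise XOR of all data cells."""
    parity = cells[0]
    for c in cells[1:]:
        parity = xor_bytes(parity, c)
    return cells + [parity]

def recover(cells_with_parity, lost_index):
    """Rebuild the single lost cell (identified by its sequence number,
    playing the role of the CI above) as the XOR of all remaining cells."""
    rebuilt = None
    for i, c in enumerate(cells_with_parity):
        if i == lost_index:
            continue
        rebuilt = c if rebuilt is None else xor_bytes(rebuilt, c)
    return rebuilt
```

Because the XOR of all cells including parity is zero, XOR-ing every cell except the lost one reproduces it exactly, whichever cell was dropped.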
Recovery by the receiver (1/2) • Concealment: techniques for replacing erroneous sections with other data in order to minimize degradation when an uncorrectable data error or packet loss has been detected. • Concealment by interleaving image block data. • Coordinated operation of coder and decoder.
Recovery by the receiver (2/2) • Concealment by interleaving image block data: The high degree of correlation between adjacent video data can be utilized to conceal errors. This is only effective if adjacent coded data are interleaved in such a way that encoded data for neighboring regions are placed in widely separated packets, thus minimizing the effect of bursty packet loss. • Coordinated operation of coder and decoder: the decoder informs the coder of the video frames and blocks that have been affected by packet loss. The coder stores the local decoding data used for coding, so that when the decoder sends a notice of data loss, the coder can correct the local decoding data for the affected frames and blocks. Thus, the difference between coder and decoder prediction values is eliminated.
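The interleaving-based concealment above can be sketched with a checkerboard split: blocks go to two packets so that losing either one still leaves every lost block surrounded by received neighbours. The grid-of-scalars representation is an illustrative assumption:

```python
def interleave(frame):
    """Split a dict {(row, col): value} into two checkerboard packets,
    so neighbouring blocks never share a packet."""
    even = {(r, c): v for (r, c), v in frame.items() if (r + c) % 2 == 0}
    odd = {k: v for k, v in frame.items() if k not in even}
    return even, odd

def conceal(received, lost_keys):
    """Conceal each lost block by averaging its available neighbours,
    exploiting the correlation between adjacent blocks."""
    out = dict(received)
    for (r, c) in lost_keys:
        neigh = [received[k]
                 for k in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                 if k in received]
        out[(r, c)] = sum(neigh) / len(neigh)
    return out
```

If the two packets had instead carried contiguous halves of the frame, a lost packet would take all of a block's neighbours with it and leave nothing to average.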
Layered video coding with prioritized packet handling • Layered coding: two senses • One refers to a system of video services at multiple quality levels, which gives users more freedom to select the quality required for their needs (figure 6.11) • The other (more relevant for packet video) refers to a useful means of ensuring quality in network transmission. A single video source is partitioned into layers, and differing transmission qualities are established for each level. In figure 6.12, low-significance signals are sent over a low-priority channel, while high-significance signals are sent over a high-priority channel. In this way, degradation due to channel errors is minimized.
Fundamental Approaches to Layer Partitioning • Bit planes: Quantized data for each pixel is partitioned into bit planes (most to least significant bit). For color video, each pixel is expressed as a combination of several color signals; data for a single pixel can be partitioned into bit planes expressing brightness and color. (This is the most general approach.) • Feature planes: Attempt to extract “features” with meaning to the viewer and assign them to high-priority layers (e.g., for teleconferencing video, human images can be extracted and separated from the background to form a meaningful layer). (This is the most specific approach.) • Frequency planes: Layering is performed based on the characteristics of human vision; the most visually important frequencies are given high priority.
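Bit-plane partitioning, the most general approach above, is mechanical enough to show directly. A sketch for 8-bit samples (the flat list-of-pixels representation is an assumption for illustration):

```python
def bit_planes(pixels, bits=8):
    """Split samples into bit planes; plane 0 holds the most significant
    bit of every pixel, plane bits-1 the least significant."""
    return [[(p >> (bits - 1 - i)) & 1 for p in pixels] for i in range(bits)]

def reassemble(planes, bits=8):
    """Inverse operation: rebuild each pixel from its column of bits."""
    return [sum(b << (bits - 1 - i) for i, b in enumerate(col))
            for col in zip(*planes)]
```

In a prioritized scheme, the first few planes (most significant bits) would travel on the high-priority channel; dropping low planes merely coarsens the quantization instead of destroying the picture.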
Criteria for evaluating Layering • The decision to use layering depends upon a comparison of the overhead required for layering (increased complexity in the codec, increased transmission bit rate, increased complexity for handling the layers within the network) against the extent to which layering facilitates recovery of image quality when packets are lost (determined by SNR or by subjective evaluation).
Principal Factors Affecting Image Degradation in a Packet Loss Environment • The nature of the packet loss (random or bursty) • The relation between packet loss and video coding data (the amount and type of video data lost with a packet) • Factors relating to the video images being transmitted (losses affecting the background are not very apparent; losses affecting a person’s face are very noticeable). These factors depend greatly upon the content of the video. • Factors related to coding schemes (the effect of packet loss depends on the prediction method, and thus is strongly dependent on the actual coding algorithm and coding bit rates).
Video Quality with Layered DCT • The basic concept of DCT layering is to scan the DCT coefficients and assign the DC and low-frequency components to the upper, priority layer, and the other components to the lower, nonpriority layer (figure 6.13). • Results of subjective evaluation: 15 viewers, 4 decoded video sequences, NTSC video signals (table 6.5). For nonprioritized coding, degradation starts to become noticeable at a packet loss rate of 0.01%. With two-layer prioritized coding, degradation is not noticeable until the packet loss rate reaches 10% (figure 6.27).
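The "scan and split" step can be sketched concretely: coefficients of a block are taken in zigzag order (DC first, then increasingly high frequencies), and the first few go to the priority layer. The cut-off `n_priority` is an assumed parameter, not a value from figure 6.13:

```python
def zigzag_order(n=8):
    """Index pairs of an n x n block in zigzag scan order: anti-diagonals
    in turn, alternating traversal direction as in JPEG/MPEG scans."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def split_layers(block, n_priority=10):
    """Return (priority_layer, nonpriority_layer): the first n_priority
    zigzag-scanned coefficients (DC and low frequencies) vs. the rest."""
    order = zigzag_order(len(block))
    coeffs = [block[r][c] for r, c in order]
    return coeffs[:n_priority], coeffs[n_priority:]
```

Losing the nonpriority layer then removes only high-frequency detail, which is consistent with the subjective results above: prioritized coding tolerates far higher loss rates before degradation becomes visible.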