Networking Requirements of Multimedia Applications
• Network features and performances: throughput; delay; delay variation; isochronism; packet errors; multicasting
• Requirements of audio-visual communications
• Issues and approaches at the application level
• The layered end-to-end QoS architecture
• RTP/RTCP and its applications
Throughput
• Bit rate is usually used to measure throughput; what matters is the end-to-end throughput, not the total transfer capacity of the network.
• Definition: the number of binary digits that the network is capable of accepting and delivering per unit of time between two communicating end systems.
• End-to-end throughput can be perceived from two points of view:
  • Network bit rate: the capacity of the network to achieve a certain rate; determined by link rates, the processing power of the intermediate equipment, usable resources, resource management, protocols and their efficiency, etc.
  • Host bit rate: the capacity of the end systems, which may impose practical limitations on the achievable bit rate; determined by the processing power of the CPU and supporting hardware, the scheduling of tasks, buffer management, the implementation and configuration of the protocols, etc.
Throughput (cont’d)
• Access speed: the frequency at which bits can be sent or received during transmission periods at the interface between the end system and the network.
• Sustained speed: the average frequency at which bits can be sent or received during conversation periods at the interface between the end system and the network.
• For a circuit-switched (CS) network, sustained speed ≈ access speed; for a packet-switched (PS) network, sustained speed < access speed.
Throughput (cont’d)
• Issues to be considered:
  • Can the sustained speed fulfill the requirements of a given type of multimedia application (at least its bandwidth requirements)?
  • The side effects caused by bit-rate variation (delay jitter);
  • Others.
• How to deal with these issues: profound study of
  • the statistical features of the bit rate (e.g., bounds, distribution, variance, correlation, etc.);
  • the statistical features of the application data;
  • how to couple the two sides (e.g., the transmission scheduling scheme): applications to network (network operator); network to applications (service provider).
• To consider the issue in an integrated fashion!
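As a rough illustration of the access-speed vs. sustained-speed distinction, here is a minimal Python sketch; all function names and numbers are illustrative assumptions, not values from the slides. It computes a sustained rate from a measured transfer and checks it against a stream's bandwidth need.

```python
# Hypothetical sketch: distinguishing access speed from sustained speed and
# checking whether a link can carry a given multimedia stream.
# All names and numbers are illustrative assumptions.

def sustained_speed(bits_delivered: int, conversation_seconds: float) -> float:
    """Average bit rate over a whole conversation period (bits/s)."""
    return bits_delivered / conversation_seconds

def meets_bandwidth_requirement(sustained_bps: float, stream_bps: float,
                                headroom: float = 1.1) -> bool:
    """A stream fits if the sustained speed exceeds its rate with some headroom."""
    return sustained_bps >= stream_bps * headroom

if __name__ == "__main__":
    access_bps = 100e6                        # interface (access) speed: 100 Mbit/s
    delivered = 2.4e9                         # bits actually delivered ...
    duration = 60.0                           # ... over a 60 s conversation
    s = sustained_speed(delivered, duration)  # 40 Mbit/s here
    print(f"access {access_bps/1e6:.0f} Mbit/s, sustained {s/1e6:.0f} Mbit/s")
    print("fits 25 Mbit/s video stream:", meets_bandwidth_requirement(s, 25e6))
```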
Delay
• End-to-end delay = access delay + transit delay + transmission delay.
• Access delay: the time span during which the source must wait for the medium to be available or for the network to be ready to accept the block of information; influenced by the OS's task scheduling, medium access control, flow control, congestion control, admission control, etc.
• Transit delay: the time elapsing between the emission of the first bit of a data block by the transmitting end system and its reception by the receiving end system; influenced by the physical medium, buffering (shaping, congestion control, etc.), routing computation, forwarding, etc.
• Transmission delay: the time elapsing between the reception of the first bit of a data block and the reception of the last bit of the same block at the receiving end system; for a given block size, this delay depends only on the access speed.
Delay (cont’d)
• End-to-end delay = access delay + transit delay + transmission delay.
• Round-trip delay: the time elapsing between the emission of the first bit of a data block and its reception by the same end system after the block has been echoed by the destination end system.
Delay (cont’d): the round-trip delay
• Applications of the round-trip delay:
  • Usually a metric of network latency, often more meaningful than the strict transit delay, especially for interactive applications;
  • Estimation of the variation of the network load at the end systems (used for flow control and congestion control).
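The delay decomposition above can be illustrated with a small sketch; the function names and the sample figures are assumptions made for the example only.

```python
# Hypothetical sketch of the delay decomposition used above.
# The numbers are illustrative assumptions.

def transmission_delay(block_bits: int, access_bps: float) -> float:
    """Time between reception of the first and the last bit of a block."""
    return block_bits / access_bps

def end_to_end_delay(access_s: float, transit_s: float, transmission_s: float) -> float:
    return access_s + transit_s + transmission_s

if __name__ == "__main__":
    block = 1500 * 8                           # one 1500-byte packet
    tx = transmission_delay(block, 10e6)       # 10 Mbit/s access speed -> 1.2 ms
    e2e = end_to_end_delay(access_s=0.5e-3, transit_s=20e-3, transmission_s=tx)
    print(f"transmission delay {tx*1e3:.2f} ms, end-to-end {e2e*1e3:.2f} ms")
    # A round-trip measurement would time the echo of the block by the peer,
    # e.g. rtt = t_received_echo - t_sent_first_bit.
```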
Delay variation
• Definition: the differences among the end-to-end delays experienced by different packets (also called delay jitter).
• Physical jitter: the variation of delay generated by the transmission equipment; it is only one among several components of the overall delay variation.
• Factors: repeaters that reshape signals may behave slightly faultily; crosstalk between cables may create interference; electronic oscillators have phase noise; the propagation delay in metallic conductors changes with temperature.
• Generally speaking, delay jitter cannot be avoided in any type of network.
• In addition to physical delay jitter there are other types of jitter, which may play more important roles in the overall delay variation (discussed later).
• What we want to do with the delay variation:
  • characterize its statistical features (distribution, expected value, variance, etc.);
  • make it more predictable (QoS);
  • minimize its side effects on multimedia communication, especially on real-time or continuous audio-visual communications.
Delay variation (cont’d)
Major components of the end-to-end delay variation
• Circuit-switched networks: not free of jitter, but the values are small:
  • for a circuit formed by a single local fiber connecting two transmission systems, on the order of a few nanoseconds;
  • for long-haul circuits traversing a number of transmission systems, on the order of microseconds.
Delay variation (cont’d) Major components of the end-to-end delay variation (cont’d): local-area networks
Delay variation (cont’d) Major components of the end-to-end delay variation (cont’d) WAN packet networks
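One common way to quantify delay variation at a receiver is the interarrival jitter estimator defined for RTP/RTCP in RFC 3550, J = J + (|D| - J)/16, where D is the difference of the transit times of consecutive packets. The sketch below applies it to a few assumed timestamps.

```python
# A minimal sketch of the interarrival jitter estimator from RTP/RTCP (RFC 3550).
# transit_i = arrival_i - timestamp_i (clock offsets cancel out in the difference).

def update_jitter(jitter: float, prev_transit: float, transit: float) -> float:
    """RFC 3550: J = J + (|D(i-1, i)| - J) / 16, with D the transit-time difference."""
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

if __name__ == "__main__":
    # Illustrative (assumed) send timestamps and arrival times in seconds.
    send   = [0.00, 0.02, 0.04, 0.06, 0.08]
    arrive = [0.10, 0.121, 0.139, 0.165, 0.178]
    jitter, prev = 0.0, arrive[0] - send[0]
    for s, a in zip(send[1:], arrive[1:]):
        transit = a - s
        jitter = update_jitter(jitter, prev, transit)
        prev = transit
    print(f"estimated interarrival jitter: {jitter*1e3:.2f} ms")
```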
Isochronism
• Definition: an end-to-end connection is said to be isochronous if the bit rate over the connection is guaranteed and the value of the delay jitter is also guaranteed and small.
• A combination of the two basic characteristics discussed previously; 'guaranteed' means at least predictable.
• Isochronism does not mean real time, but it is indeed necessary for the satisfactory transport of continuous media streams: the bit rate must be sustained, and the jitter must be small and have a reasonable upper bound.
• CS networks, CBR service in ATM, G service in ISPN: strict isochronism;
• VBR service in ATM, C service in ISPN: coarse isochronism;
• ABR service in ATM, B service in ISPN: no isochronism.
Error Rates
• The degree to which the network respects the integrity of the data it transports; measures of the network's resilience to errors and of its behavior with respect to alteration, loss, duplication, or out-of-order delivery of data.
• Types of transmission errors:
  • <1> Data alteration, or bit error: inversion of bits, loss of trailing or heading parts of data blocks or packets; the least frequent form of error in modern networks; measured by the BER (bit error rate).
  • <2> Packet or cell loss: bit errors are a rare cause; the most frequent cause of data loss in modern packet networks is internal network congestion affecting nodes or transmission lines (the congestion itself and the congestion-control mechanism), not transmission errors leading to packet discarding; measured by the PLR or CLR (packet or cell loss rate).
Error Rates (cont’d)
• Types of transmission errors (cont’d):
  • <3> Packet or cell duplication: in fact not rare, e.g., due to the TCP error control mechanism;
  • <4> Out-of-order delivery of packets or cells: quite frequent, mainly caused by alternate routes (dynamic routing);
  • PER or CER (packet or cell error rate) with respect to <2>, <3>, <4>.
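A receiver can classify these error types from sequence numbers alone; the following sketch (hypothetical names, illustrative input) counts losses, duplicates and out-of-order arrivals so that PLR/PER-style figures can be derived.

```python
# Hypothetical sketch: classifying received packets by sequence number to
# estimate loss, duplication and out-of-order delivery (<2>, <3>, <4> above).

from typing import Iterable

def classify(seqnos: Iterable[int], first_expected: int = 0):
    seen = set()
    highest = first_expected - 1
    dup = reordered = 0
    for s in seqnos:
        if s in seen:
            dup += 1                 # duplicate delivery
            continue
        seen.add(s)
        if s < highest:
            reordered += 1           # arrived after a later packet
        highest = max(highest, s)
    sent = highest - first_expected + 1
    lost = sent - len(seen)          # gaps never filled
    return {"sent": sent, "lost": lost, "dup": dup, "reordered": reordered}

if __name__ == "__main__":
    print(classify([0, 1, 3, 2, 5, 5, 7]))   # {'sent': 8, 'lost': 2, 'dup': 1, 'reordered': 1}
```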
Error Rates (cont’d)
Error control:
• <1> Perspectives: mechanisms used by the network infrastructure; mechanisms adopted by networked applications.
• <2> Components of a mechanism: error detection / error notification / error recovery.
• Network perspective:
  • CRC (detection of bit errors), FEC (detection and correction of bit errors);
  • notification by the network itself (emitted by intermediate equipment);
  • sequence numbers, timeouts, acknowledgements and retransmission (the TCP error control mechanism).
• With the advent of very high-speed and very reliable transmission systems, a number of heavy mechanisms such as internal retransmission become superfluous and detrimental to other service parameters (towards the fastest, cheapest, most efficient network; cf. ILP).
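To make the FEC idea concrete, here is a minimal sketch of XOR-parity forward error correction, under the simplifying assumption of equal-length packets and at most one loss per group; the names are illustrative and not a specific standard scheme.

```python
# A minimal sketch of forward error correction by XOR parity, assuming
# equal-length packets and at most one lost packet per group.

from functools import reduce
from typing import List, Optional

def make_parity(group: List[bytes]) -> bytes:
    """Parity packet: byte-wise XOR of all packets in the group."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), group))

def recover(group: List[Optional[bytes]], parity: bytes) -> List[bytes]:
    """Rebuild a single missing packet (None) from the others and the parity."""
    missing = [i for i, p in enumerate(group) if p is None]
    if len(missing) != 1:
        return [p for p in group if p is not None]   # nothing to do / unrecoverable
    present = [p for p in group if p is not None] + [parity]
    group[missing[0]] = make_parity(present)
    return group  # type: ignore[return-value]

if __name__ == "__main__":
    pkts = [b"AAAA", b"BBBB", b"CCCC"]
    par = make_parity(pkts)
    damaged = [b"AAAA", None, b"CCCC"]               # packet 1 lost in transit
    print(recover(damaged, par))                     # [b'AAAA', b'BBBB', b'CCCC']
```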
Error Rates (cont’d)
• Application perspective: media repair, either sender-based or receiver-based.
Multicasting
• Definition: multicasting is the capability of the network to replicate, at certain internal points, the data emitted by a source. Replicated data is forwarded to the recipient end systems that are part of the multicast group, so as to avoid, or at least minimize, network segments being traversed by multiple copies of the same data.
• The goal is to minimize the network load intelligently (of course, the network operation overhead increases correspondingly).
Multicasting (cont’d)
• Three forms of replication:
  • Bit stream replication: when the transmitted information is not structured into blocks such as packets or cells;
  • Block stream replication: dealing with packets or cells; a multicast routing scheme decides whether the packets or cells have to be replicated and towards which destinations; the next destination may be another replicator or the target end system;
  • Application data replication: dealing with messages, files, or even documents.
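A block-stream replicator can be sketched as a node that forwards each packet once per downstream branch of the multicast tree; the routing table and node names below are purely illustrative assumptions.

```python
# Hypothetical sketch of block-stream replication at an internal network node:
# one incoming packet is forwarded once per downstream branch of the multicast
# tree, so shared segments carry a single copy. Names are illustrative.

MULTICAST_ROUTES = {
    # group address -> next hops (other replicators or member end systems)
    "239.1.2.3": ["nodeB", "nodeC"],
}

def replicate(packet: bytes, group: str, send):
    """Forward one copy of `packet` to every next hop registered for `group`."""
    for next_hop in MULTICAST_ROUTES.get(group, []):
        send(next_hop, packet)

if __name__ == "__main__":
    replicate(b"frame-0", "239.1.2.3",
              send=lambda hop, pkt: print(f"forward {len(pkt)} bytes to {hop}"))
```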
Requirements of AV Communications
Bandwidth
• Generally speaking, the bandwidth requirement is high.
• The actual required bandwidth is highly dependent on factors such as the type of application, the required data quality, the compression scheme, etc.; the required bandwidth may therefore vary to a great extent.
Delay
• Highly dependent on the type of application:
  • downloading applications are not sensitive to delay;
  • streaming (near real-time) applications care about delay, but only to some extent;
  • real-time applications have stringent requirements on delay.
• In general, a real-time motion video stream is transmitted simultaneously with an audio stream for synchronous presentation. In such cases, the requirements on the end-to-end delay and the delay jitter are usually dictated by the audio stream.
Requirements of AV Communications (cont’d)
Delay (cont’d)
• For multimedia applications, network latency requirements are in general less stringent than for compute-intensive applications; latency is measured on the time scale of human perception.
• Latency is of little relevance for one-way transmissions (up to a certain limit) but is a potential nuisance for teleconferencing and shared applications, for which latency should be kept below 0.5 s.
• The latency issue is almost entirely under the control (or lack thereof) of the networking gear vendors: almost every operation on a packet contributes to the total latency.
• Technologies such as cut-through forwarding (instead of store-and-forward) attempt to decrease latency.
• More heterogeneous networks usually induce higher latency due to packet encapsulation/header changes or segmentation and reassembly.
Requirements of AV Communications (cont’d)
• Jitter: the audio quality killer
• Due to statistical factors, packets do not arrive at evenly spaced intervals; instead, the arrival time shows a Gaussian-like variation (an open issue).
• Jitter is a problem for both audio and video streams:
  • for audio, jitter may cause unacceptable degradation of playback quality;
  • for video, packets that arrive too late require complex logic in the decoder; dropping a late packet may cause header loss and decoder confusion, while processing it compounds the synchronization problem.
• Remedies: network buffers, bandwidth reservation, packet priority handling, ...
Requirements of AV Communications (cont’d)
Human perception of sound and images
• The psycho-acoustic behavior of our ear may be modeled as a differentiator, whereas the mechanism of human vision acts as an integrator; humans are therefore much more sensitive to alterations of the audio signal than of the visual signal.
• Our tolerance of transmission errors affecting audio streams is much lower than our tolerance of errors affecting motion video streams.
• When audio and video streams compete for the same network resources, the audio streams should have the higher priority.
• Network error rates must be lower when audio or video compression schemes are used: the higher the compression rate, the higher the probability that an erroneous bit entails a visible artifact (a visual error which appears unnatural).
• The persistence of the artifact, that is, how long it remains displayed, is another important parameter; it is a function of the compression scheme and must be taken into consideration by end-to-end, application-level QoS control.
Requirements of AV Communications (cont’d)
Skew requirements for synchronized playback
• Skew: the time difference between related audio and video streams.
• Skew is unavoidable; inter-media synchronization control aims to keep the skew within some acceptable range.
• The skew requirements (some typical values):
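A receiver's inter-media synchronization check can be sketched as follows; the ±80 ms lip-sync bound used here is an illustrative assumption, not a value taken from the skew table.

```python
# Hypothetical sketch: checking audio/video skew against an acceptable bound.
# The 80 ms bound is an illustrative assumption for lip synchronization.

def skew_ms(audio_pts_ms: float, video_pts_ms: float) -> float:
    """Positive skew: audio ahead of video; negative: audio behind."""
    return audio_pts_ms - video_pts_ms

def in_sync(audio_pts_ms: float, video_pts_ms: float, bound_ms: float = 80.0) -> bool:
    return abs(skew_ms(audio_pts_ms, video_pts_ms)) <= bound_ms

if __name__ == "__main__":
    print(in_sync(audio_pts_ms=1000.0, video_pts_ms=1060.0))   # True  (60 ms skew)
    print(in_sync(audio_pts_ms=1000.0, video_pts_ms=1150.0))   # False (150 ms skew)
```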
Requirements of Audio Streams Some typical values Bandwidth requirements:
Requirements of Audio Streams (cont’d)
Some typical values
• Delay requirements:
  • The delay should lie between 100 ms and 500 ms to give the impression of real time (a subjective bound);
  • In virtual reality, the delay should be on the order of 40 ms.
• Delay variation requirements:
  • The jitter should in general not exceed 100 ms for CD-quality compressed sound and 400 ms for telephone-quality speech;
  • For multimedia applications with stringent bounds on the transit delay, like virtual reality, the jitter should not exceed 20 ms to 30 ms.
• Bit error requirements:
  • In the case of presentation to human users without recording for further processing, the residual bit error rate of a telephone-quality audio stream should be lower than 10⁻².
  • The residual bit error rate of a CD-quality audio stream should be lower than 10⁻³ for an uncompressed format and lower than 10⁻⁴ for a compressed format.
Requirements of Video Streams Some typical values Bandwidth requirements
Requirements of Video Streams (cont’d)
Some typical values (cont’d)
• Delay requirements (same as for audio):
  • The delay should lie between 100 ms and 500 ms to give the impression of real time (a subjective bound);
  • In virtual reality, the delay should be on the order of 40 ms.
• Delay variation requirements:
  • The jitter should not exceed 50 ms for HDTV quality, 100 ms for broadcast quality, and 400 ms for videoconferencing quality.
• Bit error requirements:
  • The end-to-end bit error rate before possible error recovery between end systems should not exceed 10⁻⁶ for HDTV quality, 10⁻⁵ for broadcast quality, and 10⁻⁴ for videoconferencing quality; these figures are for compressed streams.
  • If FEC techniques are not used, the bit error rates given above have to be divided by a factor of 10,000.
Issues and Approaches at Application Level
End-to-end flow or congestion control
• Bursty loss and excessive delay have a devastating effect on video presentation quality, and they are usually caused by network congestion. Congestion-control mechanisms at the end systems are therefore necessary to help reduce packet loss and delay.
• Typically, for streaming video, congestion control takes the form of rate control. Rate control attempts to minimize the possibility of network congestion by matching the rate of the video stream to the available network bandwidth.
• Components: 1) congestion detection; 2) measurement and estimation of the available network bandwidth; 3) rate adaptation.
Issues and Approaches at Application Level (cont’d)
End-to-end flow or congestion control (cont’d)
• Existing rate-control schemes can be classified into three categories: 1) source-based rate control; 2) receiver-based rate control; 3) hybrid rate control.
• Source-based rate control:
  • the sender is responsible for adapting the video transmission rate;
  • typically, feedback is employed; based on the feedback information about the network, the sender regulates the rate of the video or audio stream;
  • source-based rate control can be applied to both unicast and multicast.
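A minimal sketch of source-based rate control with receiver loss feedback, using an AIMD-style adaptation; the thresholds and step sizes are illustrative assumptions rather than any particular standardized scheme.

```python
# A minimal sketch of source-based rate control driven by loss feedback,
# using AIMD-style adaptation. Thresholds and step sizes are illustrative.

class SourceRateController:
    def __init__(self, rate_bps: float, min_bps: float, max_bps: float):
        self.rate = rate_bps
        self.min, self.max = min_bps, max_bps

    def on_receiver_report(self, loss_fraction: float) -> float:
        """Adapt the video send rate from the loss fraction fed back by receivers."""
        if loss_fraction > 0.05:            # congestion detected
            self.rate = max(self.min, self.rate * 0.8)     # multiplicative decrease
        elif loss_fraction < 0.01:          # spare capacity assumed
            self.rate = min(self.max, self.rate + 50_000)  # additive increase (50 kbit/s)
        return self.rate

if __name__ == "__main__":
    ctl = SourceRateController(rate_bps=1_000_000, min_bps=100_000, max_bps=4_000_000)
    for loss in [0.0, 0.0, 0.12, 0.08, 0.0]:
        print(f"loss {loss:.2f} -> send at {ctl.on_receiver_report(loss)/1e3:.0f} kbit/s")
```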
Issues and Approaches at Application Level (cont’d)
End-to-end flow or congestion control (cont’d)
• Receiver-based rate control:
  • the receivers regulate the receiving rate of video streams by adding/dropping channels, while the sender does not participate in rate control;
  • typically used when multicasting scalable video or audio, where the scalable stream has several layers and each layer corresponds to one channel in the multicast tree.
• Hybrid rate control:
  • the receivers regulate the receiving rate by adding/dropping channels, while the sender also adjusts the transmission rate of each channel based on feedback from the receivers.
Issues and Approaches at Application Level (cont’d)
Delay jitter and real-time transmission of continuous media
Delay equalization: the intra-media synchronization control
• ti: the sending time of the ith frame;
• ai: the arrival time of the ith frame;
• pi: the playback time of the ith frame;
• di: the end-to-end delay of the ith frame (di = ai − ti).
Issues and Approaches at Application Level (cont’d)
Delay equalization: the intra-media synchronization control (cont’d)
• The necessary and sufficient condition for the playback of the ith frame is that it arrives no later than its scheduled playback time: ai ≤ pi, i.e., di = ai − ti ≤ pi − ti.
Issues and Approaches at Application Level (cont’d)
Delay equalization: the intra-media synchronization control (cont’d)
• Offset delay: the interval between a packet's arrival time and its playback time.
• Delay equalization: a mechanism that overcomes delay variation and supports smooth playback of a continuous media stream by adding an additional offset delay.
Issues and Approaches at Application Level (cont’d)
Delay equalization: the intra-media synchronization control (cont’d)
• Static delay offset:
  • The value is set up statically, prior to the session, possibly based on some estimation of the delay distribution.
  • The tendency is to take a rather high value, which minimizes the probability of blocks arriving after the expiry of the delay offset.
  • This technique works satisfactorily with networks whose performance is rather constant in time and, in particular, with networks whose transit delay is not highly dependent on the offered load.
  • In contrast, using a static offset over packet networks such as shared LANs or IP networks naturally leads to adopting very long delays which are unnecessary during unloaded periods.
• Adaptive delay offset:
  • The receiving system measures the end-to-end delay experienced during the session and adapts the delay offset accordingly.
  • This technique performs better than a static setting for packet networks whose delay distribution may vary considerably between busy and quiet hours. The difficulty lies in switching between periods over which the value of the delay offset should differ; the change should occur during a silent period, so as to remain unnoticed by the listener.
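An adaptive delay offset can be sketched with exponentially weighted estimates of the end-to-end delay and its variation, in the spirit of classic audio playout-delay algorithms; the smoothing coefficients and safety factor below are illustrative assumptions.

```python
# A minimal sketch of an adaptive delay offset (playout delay) for intra-media
# synchronization, with exponentially weighted delay/variation estimates.
# Coefficients are illustrative assumptions.

class AdaptivePlayout:
    def __init__(self, alpha: float = 0.998, k: float = 4.0):
        self.alpha, self.k = alpha, k
        self.d_hat = None     # smoothed end-to-end delay estimate
        self.v_hat = 0.0      # smoothed delay-variation estimate

    def update(self, send_time: float, arrival_time: float) -> float:
        """Return the playback time p_i for a frame sent at t_i and arriving at a_i."""
        d = arrival_time - send_time
        if self.d_hat is None:
            self.d_hat = d
        else:
            self.d_hat = self.alpha * self.d_hat + (1 - self.alpha) * d
            self.v_hat = self.alpha * self.v_hat + (1 - self.alpha) * abs(d - self.d_hat)
        # Offset = estimated delay plus a safety margin; a real implementation
        # would only switch the offset at talkspurt boundaries (silent periods).
        return send_time + self.d_hat + self.k * self.v_hat

if __name__ == "__main__":
    p = AdaptivePlayout()
    for t, a in [(0.00, 0.080), (0.02, 0.105), (0.04, 0.118), (0.06, 0.151)]:
        print(f"frame sent {t:.2f}s -> playback at {p.update(t, a):.3f}s")
```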
Issues and Approaches at Application Level (cont’d)
Error control mechanisms (examples)
Sender-based repair (passive): interleaving
Issues and Approaches at Application Level (cont’d)
Error control mechanisms (examples)
Sender-based repair (passive): interleaving (cont’d)
• Advantages:
  • Most audio compression schemes can do interleaving without additional complexity;
  • No extra bandwidth is added;
  • The effects of a packet loss are dispersed into several short gaps (many audio tools send one phoneme, about 40 ms of sound, per packet).
• Disadvantages:
  • A delay of the interleaving factor (in packets) is added, even when no repair is needed.
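A minimal sketch of the interleaving idea (unit sizes and the interleaving depth are illustrative assumptions): consecutive audio units are spread across several packets, so losing one packet produces several short gaps rather than one long gap.

```python
# Hypothetical sketch of sender-side interleaving: consecutive audio units are
# spread over several packets so one lost packet leaves several short gaps.

from typing import List, Sequence

def interleave(units: Sequence[bytes], depth: int) -> List[List[bytes]]:
    """Packet k carries units k, k+depth, k+2*depth, ... (depth = interleaving factor)."""
    return [list(units[k::depth]) for k in range(depth)]

def deinterleave(packets: List[List[bytes]], depth: int) -> List[bytes]:
    out = []
    for i in range(max(len(p) for p in packets)):
        for p in packets:
            if i < len(p):
                out.append(p[i])
    return out

if __name__ == "__main__":
    units = [bytes([i]) * 4 for i in range(8)]        # 8 audio units
    pkts = interleave(units, depth=4)                 # 4 packets of 2 units each
    pkts[1] = [b"\x00" * 4] * 2                       # simulate loss of packet 1
    print(deinterleave(pkts, depth=4))                # gaps are spread, not contiguous
```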
Issues and Approaches at Application Level (cont’d)
Error control mechanisms (examples)
Receiver-based repair: interpolation
• When a packet is lost, reproduce a packet based on the surrounding packets.
• Waveform substitution: use waveform repetition from both sides of the loss; works better than repetition (which uses one side only).
• Pitch waveform replication: use repetition during unvoiced speech and replicate an additional pitch period during voiced speech; performs marginally better than waveform substitution.
• Time-scale modification: "stretch" the audio signal across the gap, generating a new waveform that smoothly blends across the loss; computationally heavier, but performs marginally better than the others.
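A crude stand-in for the receiver-based repair schemes above: conceal a lost frame by sample-wise interpolation between its neighbours, falling back to repetition when only one side is available (frame contents are illustrative).

```python
# Hypothetical sketch of simple receiver-based repair: a lost audio frame is
# concealed by interpolating between the samples on either side of the gap.

from typing import List, Optional

def conceal(frames: List[Optional[List[int]]]) -> List[List[int]]:
    """Replace None (lost) frames by a sample-wise average of their neighbours."""
    out: List[List[int]] = []
    for i, f in enumerate(frames):
        if f is not None:
            out.append(f)
            continue
        prev = frames[i - 1] if i > 0 and frames[i - 1] is not None else None
        nxt = frames[i + 1] if i + 1 < len(frames) and frames[i + 1] is not None else None
        if prev and nxt:
            out.append([(a + b) // 2 for a, b in zip(prev, nxt)])
        else:
            out.append(list(prev or nxt or []))       # fall back to repetition/silence
    return out

if __name__ == "__main__":
    print(conceal([[10, 12, 14], None, [18, 20, 22]]))
    # -> [[10, 12, 14], [14, 16, 18], [18, 20, 22]]
```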
Issues Related to Audio-visual Multicasting
• Heterogeneity:
  • heterogeneity of the end systems;
  • heterogeneity of the network areas;
  • since continuous media imposes heavy demands on both networks and hosts, it is likely that not everyone will be able to receive all of a sender's streams, due to link or host limitations;
  • heterogeneity of the requirements.
• In general, it is difficult to deal with the heterogeneity problem in terms of flow control, resource reservation and other QoS-related issues.
Issues Related to Audio-visual Multicasting (cont’d)
Hierarchical Coding for Continuous Media
• Hierarchical coding techniques, also referred to as layered or sub-band coding, split a continuous media signal into components of varying importance. The original signal may be reconstructed by aggregating all these components, but proper subsets of these components can still approximate it well.
• Video example: three broad parameters affect the overall perceived resolution of motion video: 1) the spatial resolution (the number of pixels per frame); 2) the color or chroma resolution (the amplitude or pixel depth); 3) the temporal resolution (the number of frames per second).
• With hierarchically encoded streams, the receivers can allocate resources based on their own specifications and priorities. For long-term allocations, this may be done in advance so that the sender can avoid sending the extraneous streams (the sender being an end system, a relaying server, or a replicating point internal to the network, ...).
• Temporary resource shortages, whether of memory or processing, can be dealt with by ignoring some streams, without any explicit negotiation with the sender, and dynamically degrading the quality of the presented signal. The receiver may even manipulate these streams before presentation in ways not anticipated by the sender.
Issues Related to Audio-visual Multicasting (cont’d)
Hierarchical Coding for Continuous Media (cont’d)
• Hierarchical encoding can be exploited to the benefit of the network infrastructure itself: with hierarchically coded continuous media, the less important signal components, as determined by the applications, can be dropped to relieve congestion without causing retransmissions, leading to degradation in quality of service but not to service interruption. Many proposed congestion-control techniques rely on this feature as a last resort.
• The benefits derived from the independent streams provided by hierarchical encoding should be weighed against two factors: 1) separate compression of parts of the signal can be less efficient than compression of the complete signal; 2) more importantly, there are costs involved in splitting the signal into components and later reconstructing it, since hardware and software support is still inadequate for this purpose and there may also be performance penalties for not conforming to standardized encoding formats.
• Another relevant aspect of hierarchical encoding is that in some schemes the basic, low-resolution layers that are essential for signal continuity are highly compressible, which suggests a strategy of transmitting these streams with stricter guarantees than those for the remaining streams.
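The layered-subscription idea can be sketched as a receiver (or a congested node) keeping only as many layers as its available bandwidth allows; the layer names and rates below are illustrative assumptions.

```python
# Hypothetical sketch: keep only as many layers of a hierarchically coded
# stream as the available bandwidth allows. Layer names/rates are illustrative.

LAYERS = [                     # ordered from most to least important
    ("base",          64_000), # essential, low-resolution layer
    ("enhancement-1", 192_000),
    ("enhancement-2", 512_000),
]

def select_layers(available_bps: float):
    chosen, used = [], 0
    for name, rate in LAYERS:
        if used + rate > available_bps:
            break              # drop this and all less important layers
        chosen.append(name)
        used += rate
    return chosen, used

if __name__ == "__main__":
    print(select_layers(300_000))   # (['base', 'enhancement-1'], 256000)
    print(select_layers(50_000))    # ([], 0) -- not even the base layer fits
```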
Issues Related to Audio-visual Multicasting (cont’d)
Transcoding
• The capability of a network node or application-level gateway to convert among different encoding or compression schemes.
• Customers can request differently encoded versions of the voice stream, with different data rates, according to their uplink capabilities. A single version of the stream is sent through the network, and the requested versions are derived from the original stream by transcoding in intermediate network nodes (e.g., down to a low-rate codec such as G.729 at 8 kbit/s).
• Transcoding between different versions is only allowed from a higher-bitrate codec to a lower-bitrate codec; it is of no use to transcode a signal to a higher-bitrate codec, as this degrades quality while increasing the bit rate.
• Transcoding can also be combined with rate control in multicast.
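The "only transcode downwards" rule can be sketched as a simple check on codec bit rates; only G.729 (8 kbit/s) is mentioned in the text, and the other codec and its rate are assumed here for illustration.

```python
# Hypothetical sketch of the rule stated above: transcoding is only allowed
# from a higher-bitrate codec to a lower-bitrate one. G.711's rate is an
# assumed example; only G.729 (8 kbit/s) appears in the text.

CODEC_RATES_BPS = {
    "G.711": 64_000,   # assumed example of a higher-rate codec
    "G.729": 8_000,
}

def can_transcode(src: str, dst: str) -> bool:
    """Allow only transitions that reduce (or keep) the bit rate."""
    return CODEC_RATES_BPS[dst] <= CODEC_RATES_BPS[src]

if __name__ == "__main__":
    print(can_transcode("G.711", "G.729"))   # True: down-rating is allowed
    print(can_transcode("G.729", "G.711"))   # False: up-rating only hurts quality
```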
The Layered End-to-End QoS Architecture
Why a layered end-to-end QoS architecture?
• Definition of QoS from the ITU: the collective effect of service performances which determine the degree of satisfaction of a user of the service.
• 'User' means not only a functional entity internal to the network but also applications or human beings; applications and human beings reside at the end systems, hence an end-to-end QoS architecture; data is processed layer by layer, hence a layered QoS architecture.
• To simplify the application-level controls and improve their performance (e.g., accuracy, efficiency), a network QoS mechanism is needed.
• Even when a network QoS mechanism is present, the application-level controls are still necessary.
• This implies that the end-to-end QoS control mechanism should be a layered, sophisticated system.
The Layered End-to-End QoS Architecture (cont’d)
The layered end-to-end QoS architecture for networked multimedia applications (figure: peer layers at both end systems: user, application, transport, network)
• The user specifies QoS parameters at the application layer.
• QoS parameter mapping translates the user parameters into resource requirements for all layers below.
• Example: frame rate and resolution are mapped to CPU bandwidth, memory, network throughput, etc.
• Translation algorithms:
  • disparate translation mechanisms;
  • no clear formula or standard mapping;
  • some translated parameters are conflicting.
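A toy illustration of QoS parameter translation from user-level video parameters to a network throughput requirement; the bits-per-pixel and compression-ratio figures are assumptions, and, as noted above, real translation mechanisms are disparate with no standard formula.

```python
# Hypothetical sketch of QoS parameter translation: user-level video parameters
# (frame rate, resolution) mapped to a network throughput requirement.
# The bits-per-pixel and compression-ratio values are illustrative assumptions.

def required_throughput_bps(width: int, height: int, fps: float,
                            bits_per_pixel: int = 12,      # e.g. 4:2:0 sampling
                            compression_ratio: float = 50.0) -> float:
    """Rough network bandwidth needed for a compressed video stream."""
    raw_bps = width * height * bits_per_pixel * fps
    return raw_bps / compression_ratio

if __name__ == "__main__":
    bps = required_throughput_bps(1280, 720, fps=25)
    print(f"~{bps/1e6:.1f} Mbit/s of network throughput")   # about 5.5 Mbit/s
```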