Video Streaming
Ali Saman Tosun, Computer Science Department
Broadcast to True Media-on-Demand
• Broadcast (No-VoD)
  • Traditional, no control
• Pay-per-view (PPV)
  • Paid specialized service
• Near Video on Demand (N-VoD)
  • Same media distributed in regular time intervals
  • Simulated forward / backward
• True Video on Demand (T-VoD)
  • Full control over the presentation, VCR capabilities
  • Bi-directional connection
Streaming Stored Video
• Media stored at the source
• Transmitted to the client
• Streaming: client playout begins before all data has arrived
• Timing constraint for still-to-be-transmitted data: it must arrive in time for playout
Streaming Video
• Client-side buffering and playout delay compensate for network-added delay and delay jitter
[Figure: cumulative data vs. time; constant-bit-rate video transmission, variable network delay, client video reception, buffered video, constant-bit-rate video playout at the client, client playout delay]
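To make the buffering idea concrete, here is a minimal Python sketch (not from the slides; the frame rate, the network-delay range, and the 200 ms playout delay are made-up numbers). It shows that playback started after a fixed playout delay never misses a frame as long as the delay exceeds the worst-case jitter.

```python
import random

FRAME_INTERVAL = 1.0 / 30       # constant-bit-rate sender at 30 fps (assumed)
PLAYOUT_DELAY = 0.200           # client waits 200 ms before starting playback (assumed)

random.seed(1)
departures = [i * FRAME_INTERVAL for i in range(300)]               # constant-rate transmission
arrivals = [t + random.uniform(0.05, 0.15) for t in departures]     # variable network delay

# frame i is due PLAYOUT_DELAY after the first arrival, at its nominal 1/30 s slot
playout_start = arrivals[0] + PLAYOUT_DELAY
late = sum(1 for i, a in enumerate(arrivals) if a > playout_start + i * FRAME_INTERVAL)
print(f"frames missing their playout deadline: {late}")             # 0 while jitter < 200 ms
```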
Smoothing Stored Video
For prerecorded video streams:
• All video frames stored in advance at the server
• Prior knowledge of all frame sizes (f_i, i = 1, 2, ..., n)
• Prior knowledge of the client buffer size (b)
• Workahead transmission into the client buffer
[Figure: server transmits frames 1..n into a client buffer of b bytes]
Smoothing Constraints
Given frame sizes {f_i} and buffer size b:
• Buffer underflow constraint: L_k = f_1 + f_2 + ... + f_k
• Buffer overflow constraint: U_k = min(L_k + b, L_n)
• Find a transmission schedule S_k between the constraints
• The algorithm minimizes the peak rate and the rate variability
[Figure: number of bytes vs. time (in frames); a schedule S with few rate changes runs between the underflow curve L and the overflow curve U]
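As an illustration of the constraint curves, a small Python sketch is given below. The frame sizes and buffer size are invented, and the smoothing algorithm the slide refers to (which minimizes peak rate and variability) is more elaborate than the simple constant-rate schedule checked here.

```python
from itertools import accumulate

def smoothing_bounds(frame_sizes, b):
    """Cumulative underflow curve L_k and overflow curve U_k for client buffer b."""
    L = list(accumulate(frame_sizes))            # L_k = f_1 + ... + f_k
    U = [min(lk + b, L[-1]) for lk in L]         # U_k = min(L_k + b, L_n)
    return L, U

def feasible(S, L, U):
    """A cumulative transmission schedule S_k is feasible if L_k <= S_k <= U_k."""
    return all(lk <= sk <= uk for lk, sk, uk in zip(L, S, U))

# toy example: hypothetical frame sizes in bytes, 100 kB client buffer
frames = [30_000, 5_000, 8_000, 40_000, 6_000, 7_000]
L, U = smoothing_bounds(frames, b=100_000)

# smallest constant rate that never underflows, capped by the overflow curve
rate = max(lk / (k + 1) for k, lk in enumerate(L))
S = [min(round(rate * (k + 1)), U[k]) for k in range(len(frames))]
print("L =", L)
print("S =", S)
print("U =", U)
print("feasible:", feasible(S, L, U))
```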
Proxy-based Video Distribution
• Proxy caches video
• Proxy adapts video
[Figure: server delivers the stream through a proxy to the clients]
Proxy Operations
• Drop frames
  • Drop B and P frames if there is not enough bandwidth
• Quality adaptation
• Transcoding
  • Change the quantization value
  • Most current systems don't support it
• Video staging, caching, patching
  • Staging: store partial frames in the proxy
  • Prefix caching: store the first few minutes of a movie
  • Patching: multiple users share the same video stream
Online Smoothing
The source or proxy can delay the stream by w time units. A larger window w reduces burstiness, but it means:
• A larger buffer at the source/proxy
• A larger processing load to compute the schedule
• A larger playback delay at the client
[Figure: the source/proxy delays the stream by w before streaming it into a client buffer of b bytes]
Online Smoothing Model
• Arrival of A_i bits at the proxy by time i (in frames)
• Smoothing buffer of B bits at the proxy
• Smoothing window (playout delay) of w frames
• Transmission of S_i bits by the proxy by time i
• Playout of D_{i-w} bits by the client by time i
• Playout buffer of b bits at the client
[Figure: arrivals A_i into smoothing buffer B at the proxy, transmission S_i into playout buffer b at the client, playout D_{i-w}]
Online Smoothing
• Must send enough to avoid underflow at the client
  • S_i must be at least D_{i-w}
• Cannot send more than the client can store
  • S_i must be at most D_{i-w} + b
• Cannot send more than the data that has arrived
  • S_i must be at most A_i
• Must send enough to avoid overflow at the proxy
  • S_i must be at least A_i - B
Combined: max{D_{i-w}, A_i - B} <= S_i <= min{D_{i-w} + b, A_i}
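A minimal sketch of this combined feasibility condition, with purely illustrative numbers; the function name and parameters are hypothetical.

```python
def feasible_range(A_i, D_i_minus_w, B, b):
    """Feasible cumulative transmission S_i at the proxy at time i.

    A_i         : bits that have arrived at the proxy by time i
    D_i_minus_w : bits the client must have played out by time i (shifted by window w)
    B           : proxy smoothing buffer size (bits)
    b           : client playout buffer size (bits)
    """
    lo = max(D_i_minus_w, A_i - B)        # avoid client underflow and proxy overflow
    hi = min(D_i_minus_w + b, A_i)        # avoid client overflow; can't send unreceived data
    if lo > hi:
        raise ValueError("no feasible schedule: buffers or window too small")
    return lo, hi

# toy numbers (purely illustrative)
print(feasible_range(A_i=900_000, D_i_minus_w=700_000, B=400_000, b=300_000))
# -> (700000, 900000)
```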
Online Smoothing Constraints
• The source/proxy only has w frames ahead of the current time t: the future number of bytes is not known
• The smoothing constraints are modified as more frames arrive...
[Figure: underflow curve L and overflow curve U vs. time (in frames), known only up to t+w-1; beyond that the constraints are unknown]
Smoothing Star Wars
• MPEG-1 Star Wars, 12-frame group of pictures
• Max frame 23,160 bytes, mean frame 1,950 bytes
• Client buffer b = 512 kbytes
[Figure: GOP averages of the trace smoothed with a 2-second window vs. a 30-second window]
Prefix Caching to Avoid Start-Up Delay
• Avoid start-up delay for prerecorded streams
  • Proxy caches the initial part of popular video streams
  • Proxy starts satisfying the client request more quickly
  • Proxy requests the remainder of the stream from the server
  • Smooth over a large window without a large delay
• Use prefix caching to hide other Internet delays
  • TCP connection from browser to server
  • TCP connection from player to server
  • Dejitter buffer at the client to tolerate jitter
  • Retransmission of lost packets
• Applies to "point-and-click" Web video streams
Changes to Smoothing Model
• Separate parameter s for the client start-up delay
• Prefix cache stores the first w - s frames
• Arrival vector A_i includes the cached frames
• Prefix buffer does not empty after transmission
• Send the entire prefix before overflow of b_s
• Frame sizes may be known in advance (cached)
[Figure: arrivals A_i into proxy buffers b_p and b_s, transmission S_i into client buffer b_c, playout D_{i-s}]
Scalable Coding
• Typically used as layered coding
• A base layer
  • Provides basic quality
  • Must always be transferred
• One or more enhancement layers
  • Improve quality
  • Transferred if possible
[Figure: quality vs. sending rate; the base layer plus enhancement layers approach the best possible quality at the available sending rate]
Temporal Scalability
• Frames can be dropped
  • In a controlled manner
  • Frame dropping does not violate dependencies
• Low-gain example: B-frame dropping in MPEG-1
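A sketch of the idea, assuming an MPEG-1-style frame-type sequence; the function and its keep_every_nth_b knob are illustrative, not part of any real codec API.

```python
def drop_b_frames(frame_types, keep_every_nth_b=0):
    """Temporal scalability by discarding B-frames.

    In MPEG-1 no other frame is predicted from a B-frame, so dropping them
    thins the frame rate without breaking I/P decode dependencies.
    keep_every_nth_b = 0 drops all B-frames; 2 keeps every second one, etc.
    """
    kept, b_count = [], 0
    for t in frame_types:
        if t != "B":
            kept.append(t)                       # I- and P-frames are always forwarded
            continue
        b_count += 1
        if keep_every_nth_b and b_count % keep_every_nth_b == 0:
            kept.append(t)                       # keep only a subset of B-frames
    return kept

gop = list("IBBPBBPBBPBB")                       # 12-frame GOP, as on the Star Wars slide
print(drop_b_frames(gop))                        # ['I', 'P', 'P', 'P']
print(drop_b_frames(gop, keep_every_nth_b=2))    # ['I', 'B', 'P', 'B', 'P', 'B', 'P', 'B']
```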
Spatial Scalability
• Base layer
  • Downsample the original image
  • Send it like a lower-resolution version
  • Less data to code
• Enhancement layer
  • Subtract the base-layer pixels from the original pixels
  • Send the result like a normal-resolution version
  • Better compression due to the low residual values
• If the enhancement layer arrives at the client
  • Decode both layers
  • Add the layers
[Figure: example pixel values 73 72 61 75 83 with small enhancement-layer residuals -1 2 -12 10]
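A pixel-level sketch of the base/enhancement split using NumPy. Real codecs downsample with filtering and entropy-code both layers; here the base layer is plain 2x subsampling and the enhancement layer is the raw residual, only to show that adding the two layers restores the original.

```python
import numpy as np

def spatial_layers(image):
    """Split an image into a base layer (2x downsampled) and an enhancement layer."""
    base = image[::2, ::2]                                   # base layer: lower resolution
    upsampled = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    upsampled = upsampled[:image.shape[0], :image.shape[1]]  # crop to the original size
    enhancement = image.astype(np.int16) - upsampled         # small residual values
    return base, enhancement

def reconstruct(base, enhancement):
    """Client side: upsample the base layer and add the enhancement layer."""
    up = np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)
    up = up[:enhancement.shape[0], :enhancement.shape[1]]
    return (up + enhancement).astype(np.uint8)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)    # hypothetical 8-bit frame
base, enh = spatial_layers(img)
assert np.array_equal(reconstruct(base, enh), img)           # lossless with both layers
```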
SNR Scalability
• SNR: signal-to-noise ratio
• Base layer
  • Regularly DCT encoded
  • A lot of data is removed by quantization
• Enhancement layer (also regularly DCT encoded)
  • Run the inverse DCT on the quantized base layer
  • Subtract the result from the original
  • DCT encode the difference
• If the enhancement layer arrives at the client
  • Add base and enhancement layers before running the inverse DCT
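A sketch of these encode/decode steps on a single 8x8 block, using SciPy's DCT; the quantization steps q_base and q_enh are made-up values.

```python
import numpy as np
from scipy.fft import dctn, idctn   # 2-D DCT / inverse DCT

def snr_layers(block, q_base=32, q_enh=4):
    """SNR scalability on one 8x8 block: coarse base layer plus finer enhancement layer."""
    coeffs = dctn(block.astype(float), norm="ortho")
    base_q = np.round(coeffs / q_base)                        # heavily quantized base layer
    base_rec = idctn(base_q * q_base, norm="ortho")           # what the base layer decodes to
    residual = block - base_rec                               # difference from the original
    enh_q = np.round(dctn(residual, norm="ortho") / q_enh)    # finely quantized enhancement
    return base_q, enh_q

def decode(base_q, enh_q, q_base=32, q_enh=4):
    """Client: add base and enhancement coefficients, then run the inverse DCT once."""
    coeffs = base_q * q_base + enh_q * q_enh
    return np.clip(np.round(idctn(coeffs, norm="ortho")), 0, 255)

block = np.random.randint(0, 256, (8, 8))
b, e = snr_layers(block)
base_only = np.clip(np.round(idctn(b * 32, norm="ortho")), 0, 255)
# base-only reconstruction error is larger than the two-layer error
print(np.abs(base_only - block).max(), np.abs(decode(b, e) - block).max())
```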
Multiple Description Coding
• Idea
  • Encode the data in two streams
  • Each stream alone has acceptable quality
  • Both streams combined have good quality
  • The redundancy between the streams is low
• Problem
  • The same relevant information must exist in both streams
• An old problem: it started with audio coding in telephony
• Currently a hot topic
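One textbook two-description example is odd/even sample separation, as used in early audio MDC work; this is not necessarily the scheme the systems alluded to here use, just a short sketch of the principle.

```python
import numpy as np

def mdc_split(samples):
    """Two descriptions by odd/even sample separation: each alone is usable at half rate."""
    return samples[0::2], samples[1::2]

def mdc_decode(d_even, d_odd=None):
    if d_odd is None:                          # only one description arrived:
        return np.repeat(d_even, 2)            # interpolate (here: crude sample-and-hold)
    out = np.empty(len(d_even) + len(d_odd), dtype=d_even.dtype)
    out[0::2], out[1::2] = d_even, d_odd       # both arrived: perfect reconstruction
    return out

x = np.arange(16)
even, odd = mdc_split(x)
assert np.array_equal(mdc_decode(even, odd), x)
print(mdc_decode(even))                        # degraded but acceptable with one description
```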
Delivery Systems Developments
• Saving network resources: stream scheduling
[Figure: several programs or timelines delivered across the network]
Patching
• Server resource optimization is possible
[Figure: the 1st client receives a multicast stream from the central server; a 2nd client joins the ongoing multicast ("Join!"), buffers it in a cyclic buffer, and receives the missed beginning as a unicast patch stream]
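A toy admission routine capturing the patching decision; the function name and the patch_threshold value are hypothetical. A request arriving shortly after an ongoing multicast gets a short unicast patch, while a much later request triggers a fresh multicast.

```python
def schedule_request(arrival, multicast_start, patch_threshold=600):
    """Decide how to serve a request under patching (times in seconds)."""
    if multicast_start is None:
        return {"action": "start new multicast", "patch_seconds": 0}
    offset = arrival - multicast_start
    if offset > patch_threshold:
        # the patch would be too long; cheaper to restart a full multicast
        return {"action": "start new multicast", "patch_seconds": 0}
    # join the ongoing multicast, buffer it, and fetch the missed prefix by unicast
    return {"action": "join multicast + unicast patch", "patch_seconds": offset}

print(schedule_request(arrival=130, multicast_start=100))
# -> {'action': 'join multicast + unicast patch', 'patch_seconds': 30}
```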
Proxy Prefix Caching
• Split the movie into a prefix and a suffix
• Operation
  • Store the prefix in the prefix cache (coordination necessary!)
  • On demand: deliver the prefix immediately and prefetch the suffix from the central server
• Goals
  • Reduce startup latency
  • Hide bandwidth limitations, delay and/or jitter in the backbone
  • Reduce load in the backbone
[Figure: central server, unicast to the prefix cache at the proxy, unicast to the client]
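A sketch of the proxy's serving path under stated assumptions (chunked video, a callable that fetches the suffix from the central server): the cached prefix is delivered immediately while the suffix is prefetched concurrently.

```python
import queue
import threading
import time

def serve_with_prefix_cache(prefix_chunks, fetch_suffix, out):
    """Serve the cached prefix at once while prefetching the suffix in the background.

    prefix_chunks : chunks held in the proxy's prefix cache
    fetch_suffix  : callable returning an iterator over the remaining chunks
    out           : queue delivering chunks to the client
    """
    suffix_q = queue.Queue()

    def prefetch():                      # runs concurrently with prefix delivery
        for chunk in fetch_suffix():
            suffix_q.put(chunk)
        suffix_q.put(None)               # end-of-stream marker

    threading.Thread(target=prefetch, daemon=True).start()
    for chunk in prefix_chunks:          # client starts playback without a server round-trip
        out.put(chunk)
    while (chunk := suffix_q.get()) is not None:
        out.put(chunk)                   # then continue seamlessly with the suffix
    out.put(None)

# toy usage: 3-chunk prefix in the cache, 5-chunk suffix "fetched" with some delay
out = queue.Queue()
serve_with_prefix_cache(
    ["p0", "p1", "p2"],
    lambda: (time.sleep(0.01) or f"s{i}" for i in range(5)),
    out,
)
print([c for c in iter(out.get, None)])  # ['p0', 'p1', 'p2', 's0', 's1', 's2', 's3', 's4']
```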
Interval Caching (IC)
• Caches the data between two consecutive requests for the same clip
• Following requests are thus served from the cache
• Sort intervals by length
[Figure: streams S11..S34 of video clips 1-3 form intervals I11, I12, I21, I31, I32, I33, sorted by length]
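A sketch of the interval-selection step, assuming cache capacity and interval lengths are both measured in seconds of video and that the shortest intervals are cached first (the usual IC policy); the data structures here are made up.

```python
def interval_caching(requests, cache_capacity):
    """Pick request intervals to cache, shortest first, under a capacity limit.

    requests       : {clip_id: [arrival times of the requests for that clip]}
    cache_capacity : cache size expressed in seconds of video (simplification)
    Returns the intervals (clip, earlier request, later request) to cache.
    """
    intervals = []
    for clip, times in requests.items():
        times = sorted(times)
        for earlier, later in zip(times, times[1:]):
            intervals.append((later - earlier, clip, earlier, later))
    intervals.sort()                         # shortest intervals are the cheapest to cache
    cached, used = [], 0
    for length, clip, earlier, later in intervals:
        if used + length <= cache_capacity:
            cached.append((clip, earlier, later))
            used += length
    return cached

reqs = {"clip1": [0, 40, 55], "clip2": [10, 100], "clip3": [20]}
print(interval_caching(reqs, cache_capacity=60))
# caches the (40, 55) and (0, 40) intervals of clip1; clip2's 90 s interval doesn't fit
```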
Receiver-driven Layered Multicast (RLM)
• Requires
  • IP multicast
  • A layered video codec (preferably with exponential layer thickness)
• Operation
  • Each video layer is one IP multicast group
  • Receivers join the base layer and extension layers
  • If they experience loss, they drop layers (leave IP multicast groups)
  • To add layers, they perform "join experiments"
• Advantages
  • Receiver-only decision
  • Congestion affects only sub-tree quality
  • Multicast trees are pruned, so sub-trees carry only the necessary traffic
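The receiver's control logic can be sketched as follows; the loss threshold, the backoff values, and the loss_probe callback are invented, and real RLM also shares join-experiment results among receivers and uses timers rather than fixed rounds.

```python
def rlm_receiver(total_layers, rounds, loss_probe):
    """Receiver-driven layer adaptation (control logic only, no real IP multicast).

    loss_probe(layers) -> observed loss rate while subscribed to `layers` groups.
    """
    layers = 1                                   # always keep the base layer
    backoff = 2                                  # rounds to wait before the next join experiment
    since_experiment = 0
    for _ in range(rounds):
        loss = loss_probe(layers)
        if loss > 0.05 and layers > 1:
            layers -= 1                          # leave the top multicast group
            backoff = min(backoff * 2, 64)       # be more cautious after a failed experiment
        else:
            since_experiment += 1
            if since_experiment >= backoff and layers < total_layers:
                layers += 1                      # join experiment: subscribe to one more group
                since_experiment = 0
    return layers

# toy network: the path can sustain 3 of 5 layers
print(rlm_receiver(5, rounds=50, loss_probe=lambda l: 0.0 if l <= 3 else 0.2))  # -> 3
```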