8.4 Wide-Scale Internet Streaming Study CMPT 820 – November 2nd, 2010. Presented by: Mathieu Spénard
Goal • Measure the performance of the Internet while streaming multimedia content, from a user's point of view
Previous Studies – TCP Perspective • Study the performance of the Internet • At backbone routers and campus networks • Some studies (Paxson, Bolliger et al.) mimic an FTP transfer, which is adequate for now, but doesn't represent how entertainment-oriented services will evolve (few backbone video servers, lots of users) • Others use ping, traceroute, UDP echo packets, or multicast-backbone audio packets
Problem? • Not realistic! These measurements do not represent what people experience at home when using real-time video streaming
Study Real-Time Streaming • Use 3 different dial-up Internet Service Providers in the U.S.A. • Mimic users' behaviour in the late 1990s–early 2000s • Real-time streaming differs from TCP because: • TCP's rate is driven by congestion control • TCP uses ACKs for retransmission; real-time applications send NACKs, which is different • TCP relies on window-based flow control; real-time applications use rate-based flow control
Setup • Unix video server connected to the UUNET backbone with a T1 • ISPs: AT&T WorldNet, Earthlink, IBM Global Network • 56 kbps, V.90 modems • All clients were in NY state, but dialed long-distance numbers in all 50 states, connecting from various major cities in the U.S.A. to the ISP via PPP • Each client issued a parallel traceroute to the server and then requested a 10-min-long video stream
Setup (cont'd) • Phone database of all numbers to dial • Dialer • Parallel traceroute • Implemented using ICMP (instead of UDP) • Sends all probes in parallel (see the sketch below) • Records the IP Time-to-Live (TTL) of each returned message
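A minimal sketch of the parallel-traceroute idea, assuming the scapy library and root privileges (the study's own implementation is not shown here; target name and timeout are illustrative). Unlike classic traceroute, every TTL-limited probe is sent at once:

```python
# Parallel traceroute sketch: one ICMP echo probe per TTL, all sent together.
from scapy.all import IP, ICMP, sr

def parallel_traceroute(target, max_ttl=30, timeout=3):
    # ttl=(1, max_ttl) makes scapy generate one probe per TTL value.
    probes = IP(dst=target, ttl=(1, max_ttl)) / ICMP()
    answered, _ = sr(probes, timeout=timeout, verbose=0)
    hops = {}
    for sent, reply in answered:
        # reply.src is a router (TTL expired) or the target (echo reply).
        hops[sent.ttl] = reply.src
    return [hops.get(ttl, "*") for ttl in range(1, max_ttl + 1)]

if __name__ == "__main__":
    for i, hop in enumerate(parallel_traceroute("example.com"), start=1):
        print(f"{i:2d}  {hop}")
```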
What is a success? • Sustain the transmission of the 10-minute video sequence at the stream's target IP rate • Aggregate packet loss below a specific threshold • Aggregate incoming bit rate above a specific bit rate • These thresholds were experimentally found to filter out modem-related issues (see the sketch below)
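A hedged sketch of the success filter just described; the slide does not give the exact thresholds, so the values below are placeholders, not the study's numbers:

```python
# Success filter sketch: both conditions must hold over the whole session.
def session_succeeded(aggregate_loss, incoming_bitrate,
                      max_loss=0.15, min_bitrate=14_000):
    # aggregate_loss: fraction of packets lost over the 10-minute session
    # incoming_bitrate: average received bits/second over the session
    return aggregate_loss < max_loss and incoming_bitrate > min_bitrate

print(session_succeeded(aggregate_loss=0.005, incoming_bitrate=16_000))  # True
```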
When does the experiment end? • 50 states (including AK and HI) • Each day separated into 8 chunks of 3 hours each • One week • 50 × 8 × 7 = 2800 successful sessions per ISP
Streaming Sequences • 5 frames per second, encoded using MPEG-4 • 576-byte IP packets that always start at the beginning of a frame • Startup delay: network-independent part 1300 ms + delay-jitter allowance 2700 ms = 4000 ms total (Multimedia over IP and Wireless Networks, Table 8.1, page 246)
Client-Server Architecture • Multi-threaded server, well suited to handling NACK requests • Bursts between 340 and 500 ms for low server overhead • Client uses NACKs to request lost packets • Client collects stats about received packets and decoded frames
Client-Server Architecture (cont'd) • Example: RTT. The client sends a NACK; the server responds with a retransmission carrying that sequence number; the client can measure the time difference (sketched below) • If not enough NACKs are needed, the client can request simulated retransmissions, so it still gathers RTT data. This happens every 30 seconds if packet loss < 1%
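A minimal sketch of the NACK-based RTT measurement described on this slide, assuming a UDP transport; the server address and message format are illustrative, not the study's code:

```python
# RTT via NACK: record when the NACK leaves, subtract from when the
# matching retransmission arrives.
import socket
import time

server = ("192.0.2.10", 5000)   # hypothetical server address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
nack_sent_at = {}               # seq -> timestamp of the NACK

def send_nack(seq):
    nack_sent_at[seq] = time.monotonic()
    sock.sendto(f"NACK {seq}".encode(), server)

def on_retransmission(seq):
    # Called when the retransmitted packet with this sequence number arrives;
    # returns one RTT sample, as on this slide.
    return time.monotonic() - nack_sent_at.pop(seq)
```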
Notation • DXn: dataset collected by ISP X (X = a, b, c) with stream Sn (n = 1, 2) • Dn: the combined set Dan ∪ Dbn ∪ Dcn
Experimental Results • D1: • 3 clients performed 16,783 long-distance connections • 8,429 successes • 37.7 million packets arrived at the clients • 9.4 GB of data • D2: • 17,465 connections • 8,423 successes • 47.3 million packets arrived at the clients • 17.7 GB of data
Experimental Results (cont'd) • Failure reasons: • PPP-layer connection problems • Can't reach the server (failed traceroute) • High bit-error rates • Low modem connection rate
Experimental Results (cont'd) • Average time to trace an end-to-end path: 1731 ms • D1 encountered 3,822 distinct Internet routers; D2, 4,449; together, 5,266 • D1 paths averaged 11.3 hops (from 6 to 17); D2 averaged 11.9 (from 6 to 22)
Experimental Results (cont'd) • [Figure: Multimedia over IP and Wireless Networks, Fig. 8.9 (top), page 250]
Purged Datasets • D1p and D2p are made up of the successful sessions only • 16,852 successful sessions • Account for 90% of the bytes and packets • 73% of the routers
Packet Loss • D1p average packet loss was 0.53%; D2p, 0.58% • Much higher than what ISPs advertise (0.01–0.1%) • Therefore, the loss is suspected to happen at the edges • 38% of all sessions had no packet loss; 75% had loss rates < 0.3%; 91% had loss rates < 2% • 2% of all sessions had packet loss > 6%
Packet Loss – Time Factor • [Figure: Multimedia over IP and Wireless Networks, Fig. 8.10 (top), page 252]
Loss Burst Lengths • 207,384 loss bursts and 431,501 lost packets (Multimedia over IP and Wireless Networks, Fig. 8.11 (top), page 253)
Loss Burst Lengths (cont'd) • Router queues overflowed for periods shorter than the time to transmit a single IP packet over a T1 • Random Early Detection (RED): disabled by the ISPs • When a burst length is >= 2, were the packets dropped at the same router, or at different ones?
Loss Burst Lengths (cont'd) • In each of D1p and D2p: • Single-packet bursts contained 36% of all lost packets • Bursts of length <= 2 contained 49% • Bursts <= 10 contained 68% • Bursts <= 30 contained 82% • Bursts >= 50 contained 13%
Loss Burst Durations • If a router's queue is full, and packets within the burst are close to one another, they might all be dropped • Loss-burst duration = time between the last packet received before the burst and the first one received after it (see the sketch below) • 98% of loss-burst durations were < 1 second, which could be caused by data-link retransmission
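A minimal sketch of the loss-burst duration computation as defined above; the sequence numbers and timestamps are illustrative:

```python
# Loss-burst duration: gap in sequence numbers marks a burst; duration is the
# time between the packets received on either side of the gap.
def burst_durations(received):
    # received: list of (sequence_number, receive_time) for arrived packets,
    # in arrival order.
    durations = []
    for (seq0, t0), (seq1, t1) in zip(received, received[1:]):
        if seq1 - seq0 > 1:              # missing packets between seq0 and seq1
            durations.append(t1 - t0)
    return durations

received = [(1, 0.00), (2, 0.21), (5, 1.05), (6, 1.26)]  # packets 3-4 lost
print(burst_durations(received))          # [0.84] seconds for the one burst
```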
Heavy Tails • Packet losses are dependent on one another; this can create a cascading effect • Future real-time protocols should account for bursty packet loss and heavy-tailed distributions • How to estimate it?
Heavy Tails (cont'd) • Use a Pareto function • CDF: F(x) = 1 − (β/x)^α • PDF: f(x) = α β^α x^(−α−1) • In this case, α = 1.34 and β = 0.65 (Multimedia over IP and Wireless Networks, Fig. 8.12 (top), page 256)
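A small sketch using the slide's fitted parameters (α = 1.34, β = 0.65) to evaluate the Pareto tail probability P(X > x) = (β/x)^α, which follows directly from the CDF above; the function name is ours:

```python
# Pareto tail: heavy-tailed, so long loss bursts keep non-negligible mass.
def pareto_tail(x, alpha=1.34, beta=0.65):
    # P(X > x) = 1 - F(x) = (beta/x)**alpha, valid for x >= beta
    return (beta / x) ** alpha

for x in (1, 2, 10, 30, 50):
    print(f"P(X > {x}) = {pareto_tail(x):.4f}")
```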
Underflow Events • Packets lost: 431,501 • 159,713 (37%) were discovered missing when it was too late => no NACK sent • 431,501 − 159,713 = 271,788 remained • 257,065 (94.6%) were recovered before their deadline; 9,013 (3.3%) arrived late; 5,710 (2.1%) were never recovered
Underflow Events (cont'd) • 2 types of late retransmissions: • Packets that arrive after the last frame of their GoP is decoded => completely useless • Packets that are late, but can still be used for predicting frames within their GoP => partially late • Of the 9,013 late retransmissions, 4,042 (49%) were partially late
Underflow Events (cont'd) • Total underflows caused by packet loss: 174,436 (= 159,713 + 9,013 + 5,710 from the previous slides) • Plus 1,167,979 underflows in data packets that were never retransmitted • 1.7% of all packets caused underflows • Frame freezes lasted 10.5 s on average in D1p, and 8.6 s in D2p
Round-Trip Delay • 660,439 RTT samples across D1p and D2p • 75% < 600 ms; 90% < 1 s; 99.5% < 10 s; 20 samples > 75 s (Multimedia over IP and Wireless Networks, Fig. 8.13 (top), page 259)
Round-Trip Delay (cont'd) • Varies according to the period of the day • Correlated with the length of the end-to-end path (measured in hops with traceroute) • Very little correlation with geographical location
Delay Jitter • One-way delay jitter = difference between the one-way delays of 2 consecutive packets (sketched below) • Considering positive values of one-way delay jitter, the highest was 45 s; 97.5% were < 140 ms, and 99.9% < 1 s • Cascading effect: one delayed packet can delay many that follow, causing many underflows
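A minimal sketch of this jitter definition; the delay values are made up:

```python
# One-way delay jitter: difference between the one-way delays of
# consecutive packets, as defined on this slide.
def one_way_jitter(delays):
    return [b - a for a, b in zip(delays, delays[1:])]

delays = [0.210, 0.215, 0.350, 0.212]   # one-way delay per packet, seconds
print(one_way_jitter(delays))           # positive spikes = growing delay
```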
Packet Reordering • In Da1p, 1/3 of missing packets were actually reordered • Frequency of reordering = number of reordered packets / total number of missing packets (worked below) • Across the experiment, this was 6.5% of missing packets, or 0.04% of all sent packets • 9.5% of sessions experienced at least one reordering • Independent of time of day and state
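The reordering-frequency formula worked with hypothetical counts, chosen so the results match the slide's percentages:

```python
# Reordering frequency: reordered packets as a fraction of missing packets,
# and as a fraction of all sent packets. Counts here are illustrative.
reordered, missing, sent = 130, 2_000, 325_000
print(f"{reordered / missing:.1%} of missing packets were reordered")  # 6.5%
print(f"{reordered / sent:.2%} of all sent packets")                   # 0.04%
```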
Packet Reordering (cont'd) • Largest reordering delay was 20 s (interestingly, the reordering distance was only one packet) • [Figure: Multimedia over IP and Wireless Networks, Fig. 8.16, page 265]
Asymmetric Paths • Using traceroute and TTL-expired packets, one can establish the number of hops between sender and receiver in each direction (see the sketch below) • If the hop counts differ, the path is definitely asymmetric • If they are the same, we don't know, and call the path potentially symmetric
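A sketch of one common way to estimate the reverse-path hop count from the TTL remaining on arriving packets, assuming the usual initial-TTL defaults (64, 128, 255); the study's exact method is not spelled out on this slide:

```python
# Reverse-path hop estimate: most stacks start the TTL at 64, 128, or 255,
# so (initial TTL - received TTL) approximates the number of reverse hops.
COMMON_INITIAL_TTLS = (64, 128, 255)

def reverse_hops(received_ttl):
    # Pick the smallest common initial TTL that the packet could have started at.
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= received_ttl)
    return initial - received_ttl

forward_hops = 12                  # from the parallel traceroute
reverse = reverse_hops(115)        # packet arrived with TTL 115 -> 128 - 115
print(reverse)                     # 13 != 12 => definitely asymmetric path
```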
Asymmetric Paths (cont'd) • 72% of sessions were definitely asymmetric • Could happen because paths cross Autonomous System (AS) boundaries, where a "hot-potato" routing policy is enforced • 95% of all sessions that had at least one reordering had asymmetric paths • Of 12,057 asymmetric-path sessions, 1,522 had a reordering; of 4,795 possibly symmetric ones, only 77 did
Conclusion • Internet study of real-time streaming • Used tools such as traceroute to identify the routers along a path • Analysed the percentage of requests that fail • Packet loss and loss-burst durations • Underflow events • Round-trip delay • Delay jitter • Reordering and asymmetric paths
Questions? Thank you!