
Every Microsecond Counts: Tracking Fine-Grain Latencies with a Lossy Difference Aggregator






Presentation Transcript


  1. Every Microsecond Counts: Tracking Fine-Grain Latencies with a Lossy Difference Aggregator Author: Ramana Rao Kompella, Kirill Levchenko, Alex C. Snoeren, and George Varghese Publisher: SIGCOMM ’09 Presenter: Yun-Yan Chang Date: 2012/02/22

  2. Introduction • Motivation • Many network applications have stringent end-to-end latency requirements; even microsecond variations may be intolerable. • VoIP, automated trading, high-performance computing, etc. • Propose instrumenting routers with a hash-based primitive called the Lossy Difference Aggregator (LDA) to measure latencies down to tens of microseconds and losses as infrequent as one in a million.

  3. Introduction • LDA (Lossy Difference Aggregator) • A measurement data structure that supports measuring the average delay and the standard deviation of delay. • Both sender and receiver maintain an LDA. • At the end of a measurement period, the sender sends its LDA to the receiver, and the receiver computes the statistics. • Requirements: • Tight time synchronization • Consistent packet ordering

  4. LDA (Lossy Difference Aggregator) • No loss • Avg. delay = the difference of the timestamp sums between sender and receiver, divided by the number of packets. • Low loss • Maintain an array of several timestamp accumulators and packet counters. • Each packet hashes to one of the accumulator-counter pairs and updates the corresponding one. • By using the same hash function on sender and receiver, we can determine the number of packets hashed to each pair and the number of lost packets. • Assuming the number of losses is L and the traffic is split into m separate streams, the expected sample size is at least a (1 − L/m) fraction of the received packets.

  5. LDA (Lossy Difference Aggregator) • Example Figure 2: Computing LDA average delay with one bank of four timestamp accumulator-counter pairs. Three pairs are usable (with 5, 2, and 1 packets), while the second is not, due to a packet loss. Thus, the average delay is (60 + 22 + 8)/(5 + 2 + 1) = 90/8 = 11.25.
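The figure's arithmetic can be reproduced with a small sketch; the function name and the raw accumulator contents below are illustrative assumptions chosen to match the figure's usable-row sums, not data from the paper:

```python
def lda_average_delay(sender, receiver):
    """Average delay from a pair of matching single-bank LDAs.

    sender, receiver: lists of (timestamp_sum, packet_count) pairs,
    one per row.  A row is usable only when both counts agree,
    i.e. no packet that hashed to that row was lost.
    """
    delay_sum = 0
    packet_sum = 0
    for (t_snd, c_snd), (t_rcv, c_rcv) in zip(sender, receiver):
        if c_snd == c_rcv and c_snd > 0:    # usable row
            delay_sum += t_rcv - t_snd      # sum of per-packet delays
            packet_sum += c_rcv
    return delay_sum / packet_sum

# Hypothetical accumulator contents matching Figure 2: rows with
# 5, 2, and 1 packets contribute delay sums 60, 22, and 8; the
# second row (3 sent vs. 2 received) is discarded.
sender   = [(100, 5), (40, 3), (30, 2), (15, 1)]
receiver = [(160, 5), (50, 2), (52, 2), (23, 1)]
print(lda_average_delay(sender, receiver))  # (60 + 22 + 8) / 8 = 11.25
```

Discarding the whole mismatched row is what makes the estimator robust to loss: a lost packet corrupts only one accumulator-counter pair, not the entire measurement.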

  6. LDA (Lossy Difference Aggregator) • Known loss rate • Sample incoming packets to reduce the number of unusable rows. • Use hashing to compute the sampling decision. • At sample rate p, the expected number of sampled lost packets is pL, so the number of usable rows is at least m − pL. • Arbitrary loss rate • Use multiple LDA banks with different sampling rates. • Look at the high-order bits of the hash to determine which bank to update. • Example: • Consider three banks with sampling probabilities p1 = 1/2^3, p2 = 1/2^5, and p3 = 1/2^7. Each packet hashes to an integer. • If the first seven bits are zero, update bank 3; • else if the first five bits are zero, update bank 2; • else if the first three bits are zero, update bank 1; • otherwise, the packet is not sampled.
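The leading-zero-bits bank selection above can be sketched as follows; the function name and the 32-bit hash width are assumptions for illustration:

```python
def select_bank(h, hash_width=32):
    """Map a packet hash to the bank it updates, for sampling
    probabilities p1 = 1/2**3, p2 = 1/2**5, p3 = 1/2**7.

    The most selective prefix is checked first, so a hash whose
    first seven bits are zero updates bank 3, not bank 2 or 1.
    """
    lead = hash_width - h.bit_length()  # number of leading zero bits
    if lead >= 7:
        return 3
    if lead >= 5:
        return 2
    if lead >= 3:
        return 1
    return 0                            # not sampled

print(select_bank(1 << 24))  # top 7 bits zero -> 3
print(select_bank(1 << 26))  # top 5 bits zero -> 2
print(select_bank(1 << 28))  # top 3 bits zero -> 1
print(select_bank(1 << 31))  # no zero prefix  -> 0
```

Since a uniform hash has k leading zero bits with probability 1/2^k, this single comparison chain realizes all three sampling probabilities from one hash value.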

  7. LDA (Lossy Difference Aggregator) • Update procedure
1. i ← h(x) // row
2. j ← g(x) // bank, sampled with probability pj
3. if j > 0 then
4.   T[i, j] ← T[i, j] + τ
5.   S[i, j] ← S[i, j] + 1
6. end if
• Data structure Figure 3: The Lossy Difference Aggregator (LDA) with n banks of m rows each.
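As an end-to-end illustration, here is a minimal Python sketch of the update procedure and the receiver-side average-delay estimate. The class layout, the SHA-256-based hash, and all names are my own assumptions, not the paper's implementation:

```python
import hashlib

class LDA:
    """m rows of (timestamp accumulator, packet counter) per bank;
    bank j requires sample_bits[j] leading zero bits of the hash."""

    def __init__(self, rows, sample_bits, seed=0):
        self.rows = rows
        self.sample_bits = sample_bits              # one entry per bank
        self.seed = seed
        self.T = [[0] * len(sample_bits) for _ in range(rows)]
        self.S = [[0] * len(sample_bits) for _ in range(rows)]

    def _hash(self, pkt):                           # assumed 32-bit packet hash
        d = hashlib.sha256(f"{self.seed}:{pkt}".encode()).digest()
        return int.from_bytes(d[:4], "big")

    def update(self, pkt, timestamp):
        h = self._hash(pkt)
        i = h % self.rows                           # i <- h(x): row index
        lead = 32 - h.bit_length()                  # leading zero bits
        # j <- g(x): most selective bank whose prefix requirement is met
        for j in sorted(range(len(self.sample_bits)),
                        key=lambda j: -self.sample_bits[j]):
            if lead >= self.sample_bits[j]:
                self.T[i][j] += timestamp           # T[i, j] <- T[i, j] + tau
                self.S[i][j] += 1                   # S[i, j] <- S[i, j] + 1
                return                              # at most one bank per packet

def average_delay(sender, receiver):
    """Estimate over rows whose sender and receiver counts match."""
    d = n = 0
    for i in range(sender.rows):
        for j in range(len(sender.sample_bits)):
            if sender.S[i][j] == receiver.S[i][j] > 0:
                d += receiver.T[i][j] - sender.T[i][j]
                n += receiver.S[i][j]
    return d / n

# With no loss, every row is usable and the estimate is exact:
snd, rcv = LDA(8, [0]), LDA(8, [0])
for pkt in range(100):
    snd.update(pkt, timestamp=pkt)      # sent at time pkt
    rcv.update(pkt, timestamp=pkt + 5)  # received 5 time units later
print(average_delay(snd, rcv))          # 5.0
```

Both sides must use the same seed so that a packet lands in the same row and bank on sender and receiver; that shared-hash property is what lets the receiver pair up the accumulators at the end of the period.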

  8. LDA (Lossy Difference Aggregator) • Hardware implementation Figure 10: Potential LDA chip schematic

  9. Evaluation • Validation • Use n = 1 bank of m = 1024 counters. • Simulate a 10-Gbps OC-192 link. • S: expected sample size; m: number of rows; L: number of lost packets; R: number of received packets. Figure 4: The sample size obtained by a single-bank LDA as a function of loss rate.
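The sample-size bound from slide 4 can be checked numerically; the function names and the simulation setup below are illustrative assumptions, not the paper's evaluation code:

```python
import random

def sample_size_lower_bound(R, L, m):
    # Slide-4 bound: with L losses spread over m rows, at least a
    # (1 - L/m) fraction of the R received packets stays usable.
    return (1 - L / m) * R

def simulated_usable(R, L, m, trials=200, seed=1):
    """Average usable sample size when received packets and losses
    hash uniformly to m rows; any row touched by a loss is discarded."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        lossy_rows = {rng.randrange(m) for _ in range(L)}
        total += sum(1 for _ in range(R)
                     if rng.randrange(m) not in lossy_rows)
    return total / trials

print(sample_size_lower_bound(1000, 10, 64))  # 843.75
```

The simulated average sits above the bound because the bound uses the union estimate L/m; the exact per-packet survival probability is (1 − 1/m)^L, which is slightly larger than 1 − L/m.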

  10. Evaluation • Validation • Delays generated by a Weibull distribution, P(X ≤ x) = 1 − e^(−(x/α)^β), and by a Pareto distribution, P(X ≤ x) = 1 − (x/α)^(−β), with scale parameter α and shape parameter β. Figure 5: Average relative error and 98% confidence bounds of the delay estimates computed by LDA as a function of loss rate. Actual mean delay is 0.2 μs in all cases. In (b), each curve represents an LDA with a different random seed on the same trace.

  11. Evaluation • Validation • Delays generated by a Weibull distribution, P(X ≤ x) = 1 − e^(−(x/α)^β), and by a Pareto distribution, P(X ≤ x) = 1 − (x/α)^(−β), with scale parameter α and shape parameter β. Figure 6: Average relative error of LDA’s standard-deviation estimator as a function of loss rate.
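Assuming the standard Weibull and Pareto CDFs shown above, delay samples for such a validation can be drawn by inverse-transform sampling; the function names here are mine, not the paper's:

```python
import math

def weibull_sample(alpha, beta, u):
    # Invert P(X <= x) = 1 - exp(-(x/alpha)**beta) at u in (0, 1)
    return alpha * (-math.log(1.0 - u)) ** (1.0 / beta)

def pareto_sample(alpha, beta, u):
    # Invert P(X <= x) = 1 - (x/alpha)**(-beta) at u in (0, 1)
    return alpha * (1.0 - u) ** (-1.0 / beta)

# Round trip: plugging a sample back into the Pareto CDF recovers u.
u = 0.5
x = pareto_sample(1.0, 2.0, u)
print(1.0 - x ** -2.0)  # approximately 0.5
```

Feeding uniform random u values into these inverses yields delay traces with the heavy upper tails (Pareto) or tunable shape (Weibull) that stress the LDA estimators at different loss rates.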

  12. Evaluation Figure 7: The performance of various multi-bank LDA configurations.

  13. Evaluation • Compare with active probes Figure 8: Sample size, delay and standard deviation estimates obtained using a two-bank LDA in comparison with active probing at various frequencies. Log-scale axes.
