IP Performance Metrics: Metrics, Tools, and Infrastructure
Guy Almes
January 30, 1997
Outline • Background • IETF IPPM effort • IPPM Activities outside IETF Scope • Technical Approaches • One Example of Measurement Infrastructure
Background • Internet topology is increasingly complex • Load grows (even) faster than capacity • The relationship among networks is increasingly competitive • Result: users don’t understand the Internet’s performance and reliability
IP Performance Metrics Objective • Enable users and service providers (at all levels) to have an accurate common understanding of the performance and reliability of paths through the Internet.
IETF IPPM Effort • BOF: Apr-95 at Danvers • Within Operational Requirements / Benchmarking Methodology WG • Initial Meeting: Jul-95 at Stockholm • Framework Document: Jun-96 • Definitions of specific metrics: Dec-96
Jun-96 Framework Document • Importance of careful definition • Good properties related to measurement methodology • Avoidance of bias and of artificial performance goals • Relationship to dynamics between users and providers
Terminology for Paths and Clouds • host, link, and router • path: < h0, l1, h1, …, ln, hn > • subpath • cloud: graph of routers connected by links • exchange: links that connect clouds • cloud subpath: < hi, li+1, …, lj, hj > • path digest: < h0, e1, C1, …, Cn-1, en, hn >
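The path notation above can be sketched in code. This is an illustrative model only (the function names `make_path` and `subpath` are hypothetical, not from the IPPM drafts): a path is an alternating sequence of hosts and links, and a cloud subpath is a contiguous slice of it.

```python
def make_path(hosts, links):
    """Interleave n+1 hosts with n links into <h0, l1, h1, ..., ln, hn>."""
    assert len(hosts) == len(links) + 1
    path = [hosts[0]]
    for link, host in zip(links, hosts[1:]):
        path += [link, host]
    return path

def subpath(path, i, j):
    """Subpath <hi, l(i+1), ..., lj, hj>: hosts i..j plus the links between them."""
    return path[2 * i : 2 * j + 1]

path = make_path(["h0", "h1", "h2", "h3"], ["l1", "l2", "l3"])
# e.g. subpath(path, 1, 2) picks out <h1, l2, h2>
```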
Three Fundamental Concepts • Metric: a property of an Internet component, carefully defined and quantified using standard units. • Measurement Methodology: a systematized way to estimate a metric. There can be multiple methodologies for a metric. • Measurements: the result of applying a methodology. In general, a measurement has uncertainties or errors.
Metrics and the Analytical Framework • The Internet has a rich analytical framework (A-frame) • There are advantages to any notions described using the framework: • can leverage A-frame results • have some hope of generality, scaling • We’ll specify metrics in A-frame terms when possible
Such metrics are called analytically-specified metrics • Examples: • Propagation time of a link • Bandwidth of a link • Minimum bandwidth along a path • The introduction of an analytical metric will often require the refinement of A-frame concepts • These refinements form a hierarchy
Empirically-specified Metrics • Some key notions do not fit into the A-frame • Example: flow capacity along a path while observing RFC-1122 congestion control • The only realistic way to specify such a metric is by specifying a measurement methodology (cf. treno)
Measurement Strategies • Active vs Passive measurements • Hard vs Soft degree of Cooperation • Single Metric with multiple methodologies
Two Forms of Composition • Spatial Composition • e.g., Delay metric applied to router vs path vs subpath • Temporal Composition • e.g., Delay metric at T compared to delay at times near T
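Spatial composition can be sketched as follows. This is an assumed illustration, not an IPPM-defined formula: per-subpath one-way delays are summed into a path delay estimate, with any 'undefined' (lost-packet) subpath value propagating as infinite, matching the convention used for delay singletons later in the talk.

```python
def compose_delays(subpath_delays):
    """Compose per-subpath one-way delays (seconds) into a path delay estimate.

    None models an 'undefined' singleton, which is taken as infinite.
    """
    total = 0.0
    for d in subpath_delays:
        if d is None:
            return float("inf")
        total += d
    return total

# e.g. three subpaths of 10 ms, 25 ms, and 5 ms compose to roughly 40 ms
estimate = compose_delays([0.010, 0.025, 0.005])
```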
Progress at the San Jose IETF • Dec-96 • One-way Delay • Flow capacity • Availability • Revisions to the Framework Document
Framework Revisions • Clock Issues • Synchronization, Accuracy, and Resolution • Singletons, Samples, and Statistics • Generic ‘Type P’ Packets
Motivation of One-way Delay • Minimum of delay: transmission/propagation delay • Variation of delay: queueing delay • Large delay makes sustaining high-bandwidth flows harder • Erratic variation in delay makes real-time apps harder
Singleton:Type-P-One-way-Delay • (src, dst, T, path) • either ‘undefined’ or a duration • ‘undefined’ taken as infinite • duration in seconds
Sample: Type-P-One-way-Delay-Stream • (src, dst, first-hop, T0, Tf, lambda) • sequence of <T, delay> pairs • Poisson process (lambda) used to generate T values
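The Poisson process that generates the T values can be sketched with exponential inter-arrival gaps of rate lambda, so that test packets are sent at unpredictable times and cannot synchronize with periodic network behavior. The function name and seeding are assumptions for illustration.

```python
import random

def poisson_send_times(t0, tf, lam, seed=42):
    """Return send times in [t0, tf) drawn from a Poisson process of rate lam.

    Gaps between successive times are exponentially distributed with
    mean 1/lam, which is what makes the arrival process Poisson.
    """
    rng = random.Random(seed)
    times = []
    t = t0 + rng.expovariate(lam)
    while t < tf:
        times.append(t)
        t += rng.expovariate(lam)
    return times

# e.g. roughly two test packets per second over a one-minute window
times = poisson_send_times(0.0, 60.0, lam=2.0)
```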
Statistics • Minimum of a Sample • Percentile • Median
Measurement Technologies • Active Tests • Passive Tests • Advantages/disadvantages of each • Policy implications of each
Active Tests • Both at edge and at/near exchange points • Extra traffic • No ‘eavesdropping’ • Delay, packet loss, throughput across specific clouds
Passive Tests • Only at the edge (privacy caution) • No extra traffic • Throughput • Also, non-IPPM measurements on nature of use
The Surveyor Infrastructure • Collaborating organizations: • Advanced Network & Services • (23) Common Solutions Group universities • Active tests • ongoing tests of delay, packet loss • occasional tests of flow capacity • Passive tests • some tests to characterize Internet use
The Surveyor Infrastructure • Key ideas: • database / web server to receive results • use of GPS to synchronize clocks • need for measurement machines both at campuses and at/near exchange points
Database/Web Server • Measurement machines ‘upload’ their results to the database server • These results are stored so that queries can be made at a later time • Users interrogate the server using ordinary web browsers • These interrogations are analyzed using the database and the results returned to the user via the Web
Policy Implications for Asia / Pacific • Better understanding of cost vs performance tradeoffs • Cooperation among users / providers • Sharing of test results • Value of cooperating in sharing results even of imperfect tests • Value of supporting very accurate NTP at exchanges and campuses