Internet Performance Presented by Deepa Srinivasan CSE581, Winter 2002, OGI
Papers on this topic • End-End Effects of Internet Path Selection (‘99) • Constancy of Internet Path Properties (‘01) • Trends in Wide Area IP traffic (‘00) • Resilient Overlay Networks (‘01)
For each paper... • Introduction, Goal, Benefit • Concepts • Methodology & tools for measurement • Results • Evaluation & Conclusion
Goals of the paper • Assess how “good” Internet routing is from a user’s perspective • The path taken by a packet depends on a large number of factors • Constructed “better” synthetic paths in 30-80% of the cases • Does not propose a mechanism for selecting these paths
Definitions • Path: complete set of hops between two hosts • Route: data structures exchanged between routers to describe connectivity • Path selection: combined set of route selection decisions • Path quality: measured in terms of round-trip time, loss rate and bandwidth
Routing Overview • Two-level routing hierarchy using Autonomous Systems (ASes) • Intra- and inter-AS routing • Small ASes use hop count for intra-AS routing; large ones use internal metrics • BGP: uses custom routing policy, or defaults to the minimum number of ASes traversed
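BGP's default tie-break of preferring the fewest ASes can be sketched as a toy route chooser. The AS numbers below are hypothetical, and real BGP applies local policy before path length:

```python
# Toy sketch of BGP's default tie-break: among candidate routes to a
# prefix, prefer the one traversing the fewest ASes.
# (Illustrative only; real BGP evaluates policy attributes first.)

def best_route(routes):
    """routes: list of AS paths, each a list of AS numbers."""
    return min(routes, key=len)

routes_to_prefix = [
    [7018, 3356, 64512],        # 3 AS hops
    [1299, 64512],              # 2 AS hops
    [6939, 3257, 1299, 64512],  # 4 AS hops
]
```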
Why is routing non-optimal? • WAN routing protocols consider only connectivity, ignore path quality • Per-network policies are difficult to manage; no economic motivation • Contractual agreements exist
For a “good” path... • The administrator of every AS on that path must have an incentive to carry the traffic • No contractual or operational obligations may prevent it • Must be expressible as a policy • Must not be hindered by any other AS
Methodology • Alternate paths: construct weighted graphs • hosts as vertices • measurements as edge weights • Datasets collected using traceroute and tcpanaly • Conservative estimate of potential inefficiency • uses long-term averages
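The graph construction above can be sketched as follows; the host names, RTT values, and `shortest_rtt` helper are illustrative, not the paper's actual tooling. A synthetic path beats the direct one when the shortest weighted path between two hosts is cheaper than the direct edge:

```python
import heapq

def shortest_rtt(graph, src, dst):
    """Dijkstra over measured mean RTTs (edge weights, in ms)."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Hypothetical mean RTTs (ms) between measurement hosts.
rtt = {
    "A": {"B": 120.0, "C": 30.0},
    "B": {"A": 120.0, "C": 40.0},
    "C": {"A": 30.0, "B": 40.0},
}
direct = rtt["A"]["B"]                   # direct path: 120 ms
synthetic = shortest_rtt(rtt, "A", "B")  # detour via C: 70 ms
```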
Results • Improvements in • Mean RTT • Mean Loss Rate • Mean Bandwidth
Interpreting the graphs • Probability Distribution: a description of the probabilities associated with the possible values of X • Cumulative Distribution Function: F(x) = P(X ≤ x) = ∫ f(u) du over (−∞, x], i.e. the probability that X takes a value less than or equal to x is given by the value of the CDF at x
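For measured data such as the paper's RTT and loss samples, the CDF above is approximated by an empirical CDF. A minimal sketch with hypothetical RTT samples:

```python
import bisect

def ecdf(samples):
    """Return F with F(x) = fraction of samples <= x (empirical CDF)."""
    xs = sorted(samples)
    n = len(xs)
    # bisect_right counts how many sorted samples are <= x
    return lambda x: bisect.bisect_right(xs, x) / n

rtts = [20, 35, 35, 50, 80]  # hypothetical RTT samples (ms)
F = ecdf(rtts)
```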
Observations • RTT: • 30-55% of paths have better RTT; a smaller fraction improves by more than 20 ms • Loss Rate: • 75-85% have better loss rates • 5-50% show significant improvement • Bandwidth: • 70-80% have improved bandwidth • 10-20% of the paths improve by a factor of 3
Evaluation • Better alternate paths are chosen by avoiding poor-quality routes, not because of a select few well-connected hosts • Better paths are obtained by avoiding congestion, rather than by minimizing propagation delay (propagation delay: inclusive of physical transmission, store & forward, and processing overhead)
Conclusion • Internet routing is non-optimal in terms of user perception • Path quality measured by RTT, loss rate, bandwidth • 30 - 80% of the time, there is a better path
Discussion Questions?
Introduction, Goal, Benefit • Concepts • Methodology & tools for measurement • Results • Evaluation & Conclusion
Introduction • Goal is to study the “constancy” of Internet path properties, i.e. how “steady” they are • Measurements can then be modelled and used for prediction
Concepts • Mathematical Constancy • data can be described with a single time invariant mathematical model • e.g. session arrivals are modelled as a Poisson process (process: behavior of a system observed over time) • important to find appropriate model
Concepts • Operational constancy • quantities of interest remain within bounds considered operationally equivalent • Predictive constancy • past measurements allow reasonable prediction of future characteristics • Constancy is a more useful concept on coarser time scales than on finer ones
Methodology • NIMI - measurement platforms across the Internet • Uses ‘zing’ to generate Poisson packet streams
Statistics Refresher • IID • Independent & identically distributed variables • Poisson process • ‘n’ intervals; probability ‘p’ of an event in each interval • n·p is held constant = λ (the Poisson rate) • Change-free Region (CFR) • steady regions delineated by change points
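The λ = n·p relationship can be checked numerically: with many intervals and a small per-interval probability, the binomial count of events converges to a Poisson distribution. A small sketch (the n and p values are arbitrary):

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Probability of exactly k events in n Bernoulli(p) intervals."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson probability of exactly k events with rate lam."""
    return exp(-lam) * lam**k / factorial(k)

# Many intervals, small per-interval probability: lambda = n * p = 2.
n, p = 10_000, 0.0002
```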
Loss Constancy • Loss episode process • time series indicating when a series of packets was lost • Loss rate vs. loss episodes • the loss process is not IID • the loss episode process is IID - well modelled as a Poisson process
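The distinction above can be sketched by converting a per-packet loss trace into a loss episode process, where each maximal run of consecutive losses counts once. The trace below is hypothetical:

```python
def loss_episodes(loss):
    """loss: per-packet 0/1 loss indicators.
    Returns a time series marking the start of each loss episode
    (a maximal run of consecutive losses counts as one episode)."""
    episodes = []
    prev = 0
    for bit in loss:
        # 1 only at the first loss of a run, so back-to-back losses
        # collapse into a single episode onset
        episodes.append(1 if bit and not prev else 0)
        prev = bit
    return episodes

trace = [0, 1, 1, 1, 0, 0, 1, 0, 1, 1]  # 6 lost packets, 3 episodes
```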
Conclusions • Many processes are modeled as IID • once change-points are identified • Almost all predictors produce similar error levels • “How steady is the Internet?” • depends on particular aspect of constancy and dataset under consideration • constancy on at least a scale of minutes
Discussion Questions?
Introduction • Methodology • Results • Conclusion
Introduction • Trends are important for • optimization of future networking equipment • modeling effects of new protocols • Internet Exchange • a junction between multiple points of Internet presence • peers directly connect to each other to exchange local Internet traffic • NASA Ames IX (AIX)
Methodology • Uses NLANR/MOAT’s NAI project • The first ATM cell of each packet is captured • it contains the TCP header • Data collected over 10 months
Results • Packet Lengths • majority are 40, 552 & 576, or 1500 bytes • no long-term trend • Protocol Mix • TCP, UDP, GRE and ICMP • growth of IPsec and RealAudio is slower • decline in FTP
Results • Fragmentation is on the rise • due to UDP • e-mail traffic increased just before holidays • Napster showed 50% increase in last 2 months
Results • Online game traffic is on the rise • doubles on weekends as compared to weekdays
Conclusion • Analysis of various trends • Need more work in protocol classification • Questions?
Introduction • Design goals • Design • Implementation • Evaluations • Conclusion
Goals • Enable a group of nodes to communicate when underlying paths have problems • Integrate routing into distributed applications • Framework for implementation of expressive routing policies
Design Goals • Fast Failure Detection & Recovery • Link, Path Failures • Perceived as outages/performance failures by apps • Tighter integration with applications • routing based on application-specific metric • Expressive Policy Routing
Routing & Path Selection • Three metrics for each virtual link • latency, packet loss rate, throughput • Uses a link-state routing protocol to distribute topology information
RON Router • Detects outages using active probing • Implements 3 different metrics • latency minimizer, loss-minimizer, TCP throughput-minimizer • Policy Routing • Classification • Routing information from one RON node to all others • Multi-Level Routing Tables
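The three per-metric optimizers can be sketched as below. The throughput score follows the common simplified TCP throughput model (roughly 1/(rtt·√p)), similar in spirit to RON's, and the path summaries are hypothetical:

```python
from math import sqrt

def tcp_score(latency_s, loss):
    """Simplified TCP throughput model (higher is better),
    ~ sqrt(1.5) / (rtt * sqrt(p)); illustrative, not RON's exact code."""
    loss = max(loss, 1e-4)  # clamp to avoid division by zero
    return sqrt(1.5) / (latency_s * sqrt(loss))

def pick_path(paths, metric):
    """paths: {name: (latency_s, loss_rate)}; metric picks the optimizer."""
    if metric == "latency":
        return min(paths, key=lambda p: paths[p][0])
    if metric == "loss":
        return min(paths, key=lambda p: paths[p][1])
    if metric == "throughput":
        return max(paths, key=lambda p: tcp_score(*paths[p]))
    raise ValueError(metric)

# Hypothetical summaries: the direct path is faster but very lossy.
paths = {"direct": (0.080, 0.05), "via_C": (0.120, 0.002)}
```

Different applications would query different tables: an interactive session might ask for the latency minimizer, a bulk transfer for the throughput optimizer.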
Resilient IP Forwarder • No modifications to the transport layer • Uses FreeBSD divert sockets to send IP traffic over the RON • Adds about 220 µs to packet delivery
Evaluation • Improvements for commercial sites came from commercial links • Outage(t, p): the loss rate over interval t exceeds p on a path • 60-100% of path outages overcome
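A literal reading of the Outage(t, p) indicator can be sketched as follows; the loss samples are hypothetical:

```python
def outage(loss_samples, p):
    """Outage(t, p): 1 if the loss rate over the sampling interval t
    exceeds threshold p on this path, else 0 (illustrative reading)."""
    rate = sum(loss_samples) / len(loss_samples)
    return 1 if rate > p else 0

window = [1, 1, 0, 1, 0]  # hypothetical per-probe loss bits, rate 0.6
```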
Improvements in RON • Loss Rate • improved by at least 0.05 for a little more than 5% of samples • worse in some cases • Latency • improved by tens to hundreds of ms • Throughput • only 1% of samples saw throughput cut below 50% • 5% saw throughput double
Discussion • Possible misuse of BGP transit policies • Requires cryptographic authentication and access control • Design scales only to 50 nodes • sufficient for many distributed apps • Problems with NAT devices
Conclusion • Current Internet routing is non-optimal • Constancy of metrics • Trends in Wide Area IP Traffic • RONs - improving end-end performance