Looking Over the Fence at Networking Jennifer Rexford
Internet Success Leads to Ossification • Intellectual ossification • Pressure for backwards compatibility with Internet • Risks stifling innovative intellectual thinking • Infrastructure ossification • Limits on the ability to influence deployment • E.g., multicast, IPv6, QoS, and secure routing • System ossification • Shoe-horn solutions that increase system fragility • E.g., NATs and firewalls
A Need to Invigorate Networking Research • Measurement • Understanding the Internet artifact • Better built-in measurement for the future • Modeling • Performance models faithful to Internet realities • X-ities like manageability, evolvability, security, … • Prototyping • Importance of creating disruptive technology • Emphasis on enabling new applications
Challenges of Measurement • Extreme scale • Large number of routers, links, ASes, packets, … • Difficulty of identifying flows • End-to-end design • Statelessness of the IP datagram • Routing asymmetry • Multipath routing • Limitations on collection and sharing of data • User privacy • Confidentiality of business data
Measurement Research: Line-Card Support • Efficient measurement to place in line cards • Online data collection at high speed • Ideally useful for many kinds of analysis • E.g., trajectory sampling • Sample based on a hash of packet contents • The same packets are sampled at each hop (see the sketch below) • E.g., psamp activity at the IETF • Parallel banks of filter, sample, and record • E.g., deep packet inspection • Algorithms for identifying patterns in packets • Useful for detecting worms, viruses, etc.
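A minimal sketch of the hash-based sampling decision behind trajectory sampling; the field names, hash function, and 1% sampling rate are illustrative assumptions, not details from the talk. Because every router applies the same hash to fields that do not change hop by hop, the same packets are selected everywhere, revealing their trajectories.

```python
# Hypothetical sketch of hash-based trajectory sampling.
# Each router hashes invariant packet fields; if the hash falls below a
# threshold, the packet is reported. The same hash at every hop means the
# same subset of packets is sampled network-wide.
import hashlib

SAMPLING_RATE = 0.01                      # assumed fraction of packets to sample
THRESHOLD = int(SAMPLING_RATE * 2**32)

def invariant_fields(pkt):
    """Fields that do not change hop by hop (addresses, ports, IP ID)."""
    return f"{pkt['src']}|{pkt['dst']}|{pkt['sport']}|{pkt['dport']}|{pkt['ip_id']}".encode()

def sample_decision(pkt):
    """Return True if this packet should be recorded at this (and every) hop."""
    h = int.from_bytes(hashlib.sha1(invariant_fields(pkt)).digest()[:4], "big")
    return h < THRESHOLD

# Example: the same packet yields the same decision at every router.
pkt = {"src": "10.0.0.1", "dst": "10.0.0.2", "sport": 1234, "dport": 80, "ip_id": 42}
print(sample_decision(pkt))
```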
Measurement Research: Tomography • Inference based on limited measurements • Inverse problems that are often underconstrained • E.g., AS relationships (e.g., Gao paper) • Given collection of AS paths • Infer business relationship between AS pairs • E.g., traffic matrix • Given link load statistics and routing configuration • Infer offered load between ingress-egress pairs • E.g., link performance statistics • Given path-level measurements (e.g., loss, delay) • Infer the performance of the individual links
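A toy illustration of the traffic-matrix inference problem: given per-link load measurements and the routing configuration, estimate the offered load between ingress-egress pairs. The three-link, three-flow topology below is an assumption chosen for brevity; real instances are much larger and usually underconstrained.

```python
# Hypothetical sketch of traffic-matrix tomography: given per-link loads y and a
# routing matrix A (A[l][f] = 1 if flow f crosses link l), estimate the per-flow
# offered load x by solving y ~ A x.
import numpy as np

A = np.array([[1, 0, 1],          # link 0 carries flows 0 and 2
              [0, 1, 1],          # link 1 carries flows 1 and 2
              [1, 1, 0]])         # link 2 carries flows 0 and 1
y = np.array([30.0, 50.0, 40.0])  # measured link loads (e.g., Mbps from SNMP)

# Least-squares estimate; real networks have far more flows than links, so the
# system is underconstrained and needs priors (e.g., a gravity model).
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("Estimated per-flow load:", np.round(x_hat, 1))   # [10. 30. 20.]
```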
Measurement Research: Anomaly Detection • Mining large, heterogeneous, distributed data • To detect and diagnose anomalies, in real time • Flash crowd, DDoS attack, worm, failure, … • Applying a variety of analysis techniques • Statistics (e.g., Fourier, Wavelets, PCA) • AI (e.g., Machine Learning) • Algorithms (e.g., sketches, streaming algorithms) • To a variety of kinds of data • Per link: packet or flow traces • Per path: delay, loss, or throughput • Network-wide: link matrix or traffic matrix
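One simple way to make the streaming-analysis idea concrete is a baseline-plus-deviation detector on a per-link byte-count series. The sketch below is an illustrative assumption (an EWMA detector with a fixed threshold), not a technique prescribed by the talk.

```python
# Illustrative sketch of a streaming anomaly detector on a per-link traffic
# series: an exponentially weighted moving average (EWMA) tracks the baseline,
# and samples far outside it are flagged (e.g., a flash crowd or DDoS spike).
def detect_anomalies(samples, alpha=0.1, k=4.0):
    mean, var = samples[0], 0.0
    anomalies = []
    for i, x in enumerate(samples[1:], start=1):
        dev = x - mean
        if var > 0 and abs(dev) > k * var ** 0.5:
            anomalies.append(i)                           # point far from baseline
        mean += alpha * dev                               # EWMA of the mean
        var = (1 - alpha) * (var + alpha * dev * dev)     # EWMA of the variance
    return anomalies

# Example: steady load around 100 with one spike.
trace = [100, 102, 98, 101, 99, 100, 103, 400, 101, 100]
print(detect_anomalies(trace))   # the spike at index 7 is flagged
```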
Measurement Research: Privacy & Confidentiality • Preserving privacy and confidentiality • Respect user privacy and business confidentiality • While still producing useful analysis results • E.g., anonymization of the data • Anonymization of multi-dimensional data • While still preserving associations across data • E.g., privacy-preserving data analysis • Distributed computation that hides information • Computing a sum without revealing the parts
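A toy version of "computing a sum without revealing the parts": each party splits its private value into random additive shares, so the total can be reconstructed while no party learns another's input. This is a minimal sketch of additive secret sharing under assumed parameters, not the specific scheme discussed in the talk.

```python
# Illustrative privacy-preserving sum via additive secret sharing modulo a prime.
import random

PRIME = 2**61 - 1   # assumed modulus, large enough to avoid wraparound

def make_shares(value, n_parties):
    """Split value into n additive shares that sum to value mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def secure_sum(private_values):
    n = len(private_values)
    all_shares = [make_shares(v, n) for v in private_values]
    # Party j only ever sees the j-th share from each peer, never raw values.
    partial_sums = [sum(all_shares[i][j] for i in range(n)) % PRIME for j in range(n)]
    return sum(partial_sums) % PRIME

# Three networks compute their combined traffic volume without revealing each part.
print(secure_sum([120, 340, 275]))   # 735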
Measurement Research: Protocol Design • Protocol design • Incorporating self-measurement, analysis, and diagnosis in future systems and protocols • E.g., Explicit Congestion Notification (ECN) • Marking TCP packets that encounter congestion • To trigger the sender to decrease its sending rate • E.g., BGP cause tags • Tagging BGP update messages with the root cause • To reduce path exploration during convergence
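To make the ECN example concrete, here is a toy additive-increase, multiplicative-decrease loop in which a congestion mark, rather than a packet loss, triggers the sender to back off. The window sizes and constants are illustrative assumptions; this is not a faithful TCP implementation.

```python
# Minimal AIMD sketch of a sender reacting to ECN feedback: additive increase
# each RTT, multiplicative decrease when the receiver echoes a congestion mark.
def ecn_aimd(cwnd, marks, alpha=1.0, beta=0.5):
    """Evolve the congestion window over a sequence of per-RTT ECN indications."""
    history = []
    for marked in marks:
        if marked:
            cwnd = max(1.0, cwnd * beta)   # router marked a packet: back off
        else:
            cwnd += alpha                  # no congestion: probe for more bandwidth
        history.append(cwnd)
    return history

# One congestion mark in the fifth RTT halves the window without forcing a loss.
print(ecn_aimd(cwnd=10.0, marks=[False, False, False, False, True, False, False]))
```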
Performance Models • Traditional models: single queue, exponential distributions, open loop, steady-state analysis, well-behaved parties, packet models, protocol analysis, … • Advanced models: network of queues, heavy-tail distributions, closed loop, transients & dynamics, selfish/malicious parties, multi-timescale models, protocol design, …
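A small simulation can show why the shift from exponential to heavy-tailed assumptions matters. The sketch below (my own illustrative example, with assumed parameters) compares the analytic M/M/1 waiting time with a simulated single-server queue whose service times are Pareto-distributed with the same mean and utilization; the heavy tail inflates delay well beyond the classical prediction.

```python
# Toy comparison of exponential vs. heavy-tailed (Pareto) service times in a
# FIFO single-server queue with Poisson arrivals, using Lindley's recursion.
import random

def mean_wait(service_times, arrival_rate):
    wait, total = 0.0, 0.0
    for s in service_times:
        interarrival = random.expovariate(arrival_rate)
        wait = max(0.0, wait + s - interarrival)   # Lindley recursion
        total += wait
    return total / len(service_times)

random.seed(1)
N, rho, mean_service = 200_000, 0.8, 1.0
lam = rho / mean_service

exp_service = [random.expovariate(1.0 / mean_service) for _ in range(N)]
# Pareto with shape a=1.5 and the same mean: mean = a*xm/(a-1) => xm = mean*(a-1)/a.
a, xm = 1.5, mean_service * (1.5 - 1) / 1.5
pareto_service = [xm * random.paretovariate(a) for _ in range(N)]

print("M/M/1 analytic wait:", rho * mean_service / (1 - rho))   # 4.0
print("Exponential service, simulated:", round(mean_wait(exp_service, lam), 2))
print("Heavy-tailed service, simulated:", round(mean_wait(pareto_service, lam), 2))
```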
Modeling: The X-ities (or Ilities) • Beyond higher speed to consider X-ities • Reliability • Scalability • Manageability • Configurability • Predictability • Non-fragility • Security • Evolvability • Challenging to model, or even to quantify
A Need for Interdisciplinary Work • Statistical analysis • Artificial intelligence • Maximum likelihood estimation • Streaming algorithms • Cryptography • Optimization • Information theory • Game theory and mechanism design • …
Discussion • Where should the intelligence reside? • Traditional Internet says “the edge” • What about middleboxes (e.g., NAT)? • Need to assemble applications from components located in different parts of the network? • Better isolation and diagnosis of faults? • Decentralized Internet makes this difficult • Need for detection, diagnosis, and accountability • Challenges the end-to-end argument
Discussion • Data as a first-class object? • Traditional Internet simply moves the bytes • Naming, search, location, management in the ‘net • Modifying the data as it traverses the network • Does the Internet have a control plane? • Traditional Internet stresses data transport • What about network management and control? • Today we place more emphasis on designing new protocols and mechanisms than on controlling them
Discussion • Abstractions on topology and performance • Traditional Internet hides details from end hosts • Network properties are, at best, inferred • Guidelines for placement of middleboxes? • Feedback info about topology and performance? • Beyond cooperative congestion control • Traditional Internet places congestion control in the end hosts, and trusts them to behave • Is this trust misguided? • New alternatives to congestion control?
Discussion • Incorporating economic factors in design • Traditional Internet ignores competitive forces • Many constraints are economic, not technical • Better to construct/align economic incentives • Ways to deploy disruptive technology • Traditional core is not open to disruptive tech • Overlay network as a deployment strategy • Other approaches? Virtualization? Middleboxes? Speaking the legacy protocols with new logic? • Experimental facilities? A “do over”?
The Innovator’s Dilemma • Leading companies often miss the “next big thing” • E.g., disk-drive industry and excavation equipment • Problem • Listening to customers leads to incremental improvement on the existing technology curve • Disruptive technologies are often less effective for existing customers, so they tend to be ignored • New companies exploit the new technology for a new market (e.g., desktops, laptops) • Eventually, the new technology curve overtakes the old one and displaces it • Will this happen with the Internet?