How to Make the Internet more Resilient? Lixia Zhang, UCLA, April 24, 2003
"On Being the Right Size" • “Let us take the most obvious of possible cases, and consider a giant man sixty feet high ... These monsters were not only ten times as high as Christian, but ten times as wide and ten times as thick, so that their total weight was a thousand times his ... Unfortunately the cross sections of their bones were only a hundred times those of Christian, so that every square inch of giant bone had to support ten times the weight borne by a square inch of human bone.” “For every type of animal there is a most convenient size, and a large change in size inevitably carries with it a change of form.”
The Internet • Wider range of heterogeneity • Larger traffic volume • Bigger routing tables • Higher failure frequency • But most importantly: • ever-increasing new threats that come with large scale • ever-increasing complexity that comes with large scale The Internet continues to grow both in size and in importance
Growing Large, Changing Environment • From a small, friendly research community to the large, brutal real world • User population change: increasingly diverse interests • Operator community growth
Brutal reality: Malicious Attacks • Widespread software viruses • CODE-RED, NIMDA, and the recent SLAMMER • DDoS attacks • The not-so-recent DoS attacks against websites • The recent attacks against the DNS root servers • Less known: attacks directly against the routing infrastructure
Brutal reality: operational challenges • Operational errors: the cause of all the major outages so far • Just one small example: filtering problems with newly allocated address space: "Atlantic.Net has just joined the 69/8 club of ARIN members with assignments in this IP block that's apparently in numerous outdated bogon filters." > My own opinion is that sophisticated routing attacks > are the single biggest threat to the Internet. My opinion is that lazy operational practices are the single biggest threat to the Internet. “If a problem has no solution, it may not be a problem, but a fact, not to be solved, but to be coped with over time” — Shimon Peres (“Peres’s Law”)
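The 69/8 anecdote above can be sketched in a few lines. This is a hypothetical illustration, not Atlantic.Net's actual filter: the prefix list and function name are invented, and only the idea that a once-valid bogon entry silently blackholes newly allocated space comes from the slide.

```python
import ipaddress

# Hypothetical stale bogon list: 69/8 was unallocated when this filter
# was written, but ARIN later began assigning addresses from it.
STALE_BOGONS = [ipaddress.ip_network(p) for p in (
    "0.0.0.0/8",     # "this" network
    "10.0.0.0/8",    # RFC 1918 private space
    "69.0.0.0/8",    # unallocated at filter-writing time; now in use
)]

def is_filtered(addr: str) -> bool:
    """Return True if the outdated filter would drop traffic from addr."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in STALE_BOGONS)

# A legitimate host inside ARIN's new 69/8 assignments is unreachable
# through any router still carrying this filter.
print(is_filtered("69.1.2.3"))   # True: legitimate traffic dropped
print(is_filtered("8.8.8.8"))    # False
```

The filter is not wrong code; it is correct code whose assumptions expired, which is exactly why "lazy operational practices" outlast any single bug.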
Size, Weight, Strength • Recall our earlier story “Let us take the most obvious of possible cases, and consider a giant man sixty feet high ... These monsters were not only ten times as high as Christian, but ten times as wide and ten times as thick, so that their total weight was a thousand times his ... Unfortunately the cross sections of their bones were only a hundred times those of Christian, so that every square inch of giant bone had to support ten times the weight borne by a square inch of human bone. As the human thigh-bone breaks under about ten times the human weight, Pope and Pagan would have broken their thighs every time they took a step.” Is the Internet "bone" strong enough to carry its newly gained weight?
Up until now: Protocol Design for Simple Functionality • Protocol design: contain the minimal set of bits necessary for data delivery • Explicitly enumerates all possible physical failures • Node failure: fail-stop • Link failure: disconnect • Data delivery failure: bit errors, out-of-order delivery, loss, duplicates • Implicitly assumes that • every component follows the rules • no faults occur other than the physical failures listed above • But experience taught us that the list was rather incomplete
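The enumerated delivery failures above (bit errors, reordering, loss, duplicates) are exactly what a classic sequence-number-plus-checksum receiver is built to handle, and nothing more. A minimal sketch, with invented names; note that it has no answer for a sender that lies, the kind of fault the slide says was left off the list:

```python
import zlib
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    payload: bytes
    checksum: int

def make_packet(seq: int, payload: bytes) -> Packet:
    return Packet(seq, payload, zlib.crc32(payload))

class Receiver:
    """Handles only the anticipated physical delivery failures."""
    def __init__(self):
        self.expected = 0
        self.delivered = []

    def receive(self, pkt: Packet) -> bool:
        if zlib.crc32(pkt.payload) != pkt.checksum:
            return False          # bit error: discard
        if pkt.seq < self.expected:
            return False          # duplicate: discard
        if pkt.seq > self.expected:
            return False          # gap (loss or reordering): await retransmit
        self.delivered.append(pkt.payload)   # in-order, intact: accept
        self.expected += 1
        return True
```

Every branch corresponds to one entry on the slide's failure list; a misbehaving-but-well-formed participant matches none of them and sails straight through.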
When unexpected faults occur • Unexpected faults → system failure • Reality has already given us a good dose of lessons • ARPANET's old distance-vector routing protocol: black hole • ARPANET's new link-state routing protocol: the sequence-number fault • In the good old days • such unexpected faults were few and far between • the damage was limited • In today's harsh reality • they are the norm rather than the exception • Damage: $$$$$$$
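The ARPANET sequence-number fault can be sketched as follows. This is a simplified model, not the actual ARPANET code: the sequence space, comparison rule, and the values 8, 24, 44 are chosen here so the pathology is unambiguous (the 1980 incident involved three corrupted updates whose circular comparison likewise had no single "newest" one).

```python
SEQ_BITS = 6
SEQ_SPACE = 1 << SEQ_BITS          # 64 sequence numbers, wrapping around

def newer(a: int, b: int) -> bool:
    """Circular comparison: a is 'newer' than b if it lies less than
    half the sequence space ahead of b, modulo wraparound."""
    return 0 < (a - b) % SEQ_SPACE < SEQ_SPACE // 2

# Three updates for the same node, each 'newer' than the last in a cycle:
a, b, c = 8, 24, 44
print(newer(b, a))   # True: 24 supersedes 8
print(newer(c, b))   # True: 44 supersedes 24
print(newer(a, c))   # True: 8 supersedes 44 -- the comparison cycles
```

Each update displaces the previous one forever, so the routers flood the three of them endlessly: a fault outside the enumerated physical-failure list, and the network cannot recover on its own.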
One Grand Challenge • How to add resiliency to the Internet so that it can withstand unexpected faults? • Requires an overall, systematic approach to the problem • It is unclear how effective separate, piecemeal efforts can be in preparing the Internet to defend itself in the long run: we do not know what kinds of new faults will occur in the future, but we are certain that new faults will occur.