Internet Evolution
Dr. Lawrence Roberts, CEO and Founder, Anagran
The Beginning of the Internet: ARPANET became the Internet
• 1965 – MIT – 1st Packet Experiment – Roberts
• 1967 – Roberts to ARPA – Designs ARPANET
• 1969 – ARPANET Starts – 1st Packet Network
• 1971 – ARPANET Grows to 18 nodes
• 1983 – TCP/IP installed on ARPANET – Kahn/Cerf
• 1986 – NSF takes over network – NSFNET
• 1991 – Internet opened to commercial use
(Photo: Roberts at the MIT computer)
Internet Early History
(Timeline: Roberts, Kahn, and Cerf terms at ARPA; NCP; FTP; EMAIL; ICCC Demo; Ethernet; Aloha packet radio; PacketRadioNET spans US; SATNET – satellite link to UK; X.25 – virtual circuit standard; TCP/IP design, then TCP/IP; “Internet” name first used in RFC 675; DNS)
Original Internet Design: It was designed for Data
• File Transfer and Email main activities
• Constrained by high cost of memory
• Only Packet Destination Examined
• No Source Checks
• No QoS
• No Security
• Best Effort Only
• Voice Considered
• Video not feasible
(Map: ARPANET, July 1977 – not much change since then)
Changing Use of Internet: Major changes in Network Use
• Voice – Totally moving to packets – Low loss, low delay required
• Video – Totally moving to packets – Low loss, low delay jitter required
• Emergency Services – No Preference Priority
• Security – Cyberwar is now a real threat
• TCP unfairness – multiple flows (P2P, Clouds, …)
• Congests network – 5% of users take 80% of capacity
Internet Traffic Grown 10^12 since 1970
(Chart: traffic doubling each year through the ARPANET, NSFNET, and commercial eras, with TCP marked; electronics doubles only every 18 months)
In 1999 P2P applications discovered that using multiple flows could give them more capacity, and their traffic moved up to 80% of the network capacity
Where will the Internet be in the next decade
                                        2008       2018
% World Population On-Line               22%        99%
Total Traffic (PB/month)               3,200    191,000
Traffic per User (GB/month)              2.2         26
GB/mo/user – Developed areas             2.7        156
GB/mo/user – Less Developed areas        0.5          3
• People in less developed areas will have more capacity than is available in developed areas today!
• Users in developed areas could see 3-10 hours of video per day (HD or SD)
• Requires a 60 times increase in capacity (a Moore's Law increase)
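A quick sanity check of the projection, using only the figures from the table above (a minimal sketch in Python; nothing beyond those numbers is assumed):

```python
# 2008 -> 2018 growth factors from the table above.
traffic_2008_pb = 3_200        # total traffic, PB/month, 2008
traffic_2018_pb = 191_000      # total traffic, PB/month, projected 2018
print(f"Total traffic growth: {traffic_2018_pb / traffic_2008_pb:.0f}x")   # ~60x

per_user_2008_gb = 2.7         # GB/mo/user, developed areas, 2008
per_user_2018_gb = 156         # GB/mo/user, developed areas, projected 2018
print(f"Per-user growth (developed): {per_user_2018_gb / per_user_2008_gb:.0f}x")  # ~58x
```

Both ratios land near the 60× capacity increase the slide calls for.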
Network Change Required
• Fairness
  – Multi-flow applications (P2P) overload access networks
• Network Security
  – Need User Authentication and Source Checking
• Emergency Services
  – Need Secure Preference Priorities
• Cost & Power
  – Growth constrained to Moore's Law & developed areas
• Quality & Speed
  – Video & Voice require lower jitter and loss, consistent speed
  – TCP stalls slow interactive applications like the web
Technology Improvement – Flow Management
• Historically, congestion is managed by queues and discards
  – Creates delay, jitter, and random losses
  – TCP flow rates vary widely and often stall
  – UDP can overload the link; when it does, all flows are hurt
• Alternatively, flows can be rate controlled to fill the link
  – Keep a table of all flows, measure output, assign a rate to each flow
  – Rate control TCP flows to avoid congestion but maintain utilization
  – Limit total fixed-rate flow utilization by rejecting excessive requests
  – Assign rate priorities to flows to ensure fairness and quality
• Flow Management requires less power, size, & cost
  – There are 14 times as many packets as flows
  – Flows have predictable rate and user significance
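A rough illustration of the "keep a table of all flows and measure each one" idea (a sketch only, not Anagran's implementation; the field names and smoothing constant are assumptions):

```python
import time
from dataclasses import dataclass, field

@dataclass
class FlowState:
    """Per-flow state kept by a flow manager (illustrative fields only)."""
    bytes_seen: int = 0
    last_update: float = field(default_factory=time.time)
    measured_rate: float = 0.0   # bits/s, smoothed estimate of the flow's rate
    assigned_rate: float = 0.0   # bits/s, rate the flow is currently allowed

flow_table: dict[tuple, FlowState] = {}   # keyed by the 5-tuple

def on_packet(five_tuple: tuple, size_bytes: int) -> None:
    """Update the measured rate of one flow as its packets arrive at the input."""
    flow = flow_table.setdefault(five_tuple, FlowState())
    now = time.time()
    dt = max(now - flow.last_update, 1e-6)
    inst_rate = size_bytes * 8 / dt
    # Exponentially weighted moving average keeps the per-flow estimate stable.
    flow.measured_rate = 0.9 * flow.measured_rate + 0.1 * inst_rate
    flow.bytes_seen += size_bytes
    flow.last_update = now
```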
Flow Management Architecture
• Flows measured and policed at input
• Unique TCP rate control – Fair and precise rate per flow
• Rates controlled based on utilization of both output port and class
• All traffic controlled to fill output at 90%+
• No output queue – Minimal delay
• Voice and video protected to ensure quality
(Diagram: input processors with flow state memory assign rate, QoS, output port, and class; the discard rate of each flow is controlled at the input; load measurements feed back from the output switch; traffic is measured on both the output port and in up to 4000 classes)
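Continuing the sketch, rate assignment for one output port might look like this. The 90% fill target is from the slide; the even split across flows is a simplifying assumption, since the architecture described here also weights by class (up to 4000 classes) and per-flow QoS:

```python
TARGET_UTILIZATION = 0.90   # fill the output link to ~90% of capacity

def assign_rates(measured: dict, port_capacity_bps: float) -> dict:
    """Split one output port's utilization target across its active flows.

    `measured` maps a flow key to its measured rate in bits/s; the return
    value is the assigned rate per flow.  Illustrative only: unused share
    would normally be redistributed, and classes/QoS would weight the split.
    """
    if not measured:
        return {}
    budget = TARGET_UTILIZATION * port_capacity_bps
    fair_share = budget / len(measured)
    # Flows below the fair share keep their current rate; heavier flows
    # (typically elastic TCP flows) are capped at the fair share.
    return {key: min(rate, fair_share) for key, rate in measured.items()}
```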
Flow Rate Control with Intelligent Flow Delivery (IFD)
Instead of random discards in an output queue:
• Anagran controls each flow's rate at the input
• IFD never discards if the flow stays below the Fair Rate
• If the flow rate exceeds a threshold, one packet is discarded
• The rate is then watched until the next cycle, and the process repeats
• This assures the flow averages the Fair Rate
• The flow then has low rate variance (σ = 0.33) and does not stall
(Diagram: one packet discarded each time the flow's rate crosses the Fair Rate)
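The per-flow decision described above might look roughly like the following. This is a sketch of the behavior stated on the slide, not Anagran's published algorithm; the cycle length and state layout are assumptions:

```python
import time
from dataclasses import dataclass

@dataclass
class Flow:
    measured_rate: float = 0.0   # bits/s, smoothed estimate of the flow's rate
    last_discard: float = 0.0    # time of the most recent IFD discard

def ifd_should_discard(flow: Flow, fair_rate_bps: float,
                       cycle_s: float = 0.1) -> bool:
    """IFD as described on the slide: never discard while the flow stays below
    its Fair Rate; once it exceeds the threshold, drop a single packet, then
    watch the flow until the next cycle before acting again."""
    now = time.time()
    if flow.measured_rate <= fair_rate_bps:
        return False                 # below Fair Rate: never discard
    if now - flow.last_discard < cycle_s:
        return False                 # already discarded this cycle: keep watching
    flow.last_discard = now          # over threshold: discard exactly one packet
    return True
```

Because at most one packet is dropped per cycle, the flow's congestion window oscillates gently around the Fair Rate instead of collapsing, which is how the low rate variance and absence of stalls are achieved.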
IFD Eliminates TCP Stalls, Equalizes Rates
• Normal Network
  – Rates often stall
  – Peak utilization is high
  – Response time is slow
  – Rate jumble hurts Video & Voice
• With Flow Management
  – No stalled flows
  – Less peak utilization
  – 3 times faster response times
  – Video and Voice protected
(The graphs on this slide are actual data captures)
Impact of Flow Management at Network Edge
• Web access three times faster
• TCP stalls eliminated – all requests complete
• Voice quality protected – no packet loss, low delay
• Video quality protected – no freeze frames, no artifacts
• Critical apps can be assigned rate priority
• When traffic exceeds peak trunk capacity:
  – Eliminates the many impacts of congestion
  – Smooth slowdown of less critical traffic
  – Voice and video quality maintained
Fairness – In the beginning
• A flow was a file transfer, or a voice call
• The voice network had 1 flow per user
• All flows were equal (except for 911)
• Early networking was mainly terminal to computer
  – Again we had 1 flow (each way) per user
• No long-term analysis was done on fairness
• It was obvious that under congestion, users are equal
• Thus Equal Capacity per Flow was the default design
Fairness – Where is the Internet now?
• The Internet still gives equal capacity per flow under congestion
• Computers, not users, generate flows today
  – Any process can use any number of flows
  – P2P takes advantage of this, using 10-1000 flows
• Congestion typically occurs at the Internet edge
  – Here, many users share a common capacity pool
• TCP generally expands until congestion occurs
  – This forces equal capacity per flow
  – The number of flows then determines each user's capacity
• The result is therefore unfair to users who paid the same
(Diagram: P2P and FTP flows sharing the edge link)
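The arithmetic behind this is simple. With equal capacity per flow, the per-user split is driven entirely by flow counts (the link size and flow counts below are illustrative):

```python
# Equal capacity per flow: per-user capacity is set by how many flows each opens.
link_mbps = 100
flows_per_user = {"normal_user": 1, "p2p_user": 100}     # illustrative counts

total_flows = sum(flows_per_user.values())                # 101 flows share the link
per_flow_mbps = link_mbps / total_flows                   # ~0.99 Mbps per flow

for user, n_flows in flows_per_user.items():
    print(f"{user}: {n_flows * per_flow_mbps:.1f} Mbps")  # ~1.0 Mbps vs ~99.0 Mbps
```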
Typical Home Network Access
• Internet Service Providers provision for average use
• Average use today is about 100 Kbps per subscriber
• Without P2P, all users would usually get the peak TCP rate
• With >0.5% P2P users, average users see much lower rates
(Diagram: 1,000 users, each with a 10 Mbps peak rate, share a 100 Mbps trunk to the Internet – 100 Kbps average per user)
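The provisioning numbers on this slide work out as follows (simple arithmetic on the slide's own figures; the "always busy" count is an illustration):

```python
# Provisioning arithmetic for the access network above.
subscribers = 1_000
avg_kbps_per_subscriber = 100                       # average use today
trunk_mbps = subscribers * avg_kbps_per_subscriber / 1_000
print(f"Shared trunk sized for the average: {trunk_mbps:.0f} Mbps")   # 100 Mbps

# Each access line peaks at 10 Mbps, so about 10 always-busy subscribers
# (1% of the base) are enough to keep the whole 100 Mbps pool congested.
always_busy_to_fill = trunk_mbps / 10
print(f"Subscribers at peak rate needed to fill the trunk: {always_busy_to_fill:.0f}")
```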
Internet Traffic Recently
• Since 2004, total traffic has increased 90% per year, about the long-term average
• P2P has increased 91% per year – consuming most of the capacity growth
• Normal traffic has increased only 22% per year – a significant slowdown from the past
• Since P2P slows other traffic 5:1, users can only do 1/5 as much
• This may account for normal traffic growth being about 1/3 of what it would be with normal growth
Deep Packet Inspection (DPI) Fails to Stop P2P
• DPI is currently the main defense – but it now has problems with encrypted P2P
• Studies show it detects <75% of P2P – reducing P2P users from 5% to 1.3%
• As P2P adds encryption, DPI already misses 25%, and encryption is growing
• The remaining P2P simply adds more flows, again filling capacity to congestion
• Result – Even ½% P2P users still overload the upstream channel
  – This slows the average users' acknowledgements, which limits their downstream usage
• User equalization based on flow rate management solves the problem
A New Fairness Rule
• Inequity in TCP/IP – Currently equal capacity per flow
  – P2P has taken advantage of this, using 10-1000 flows
  – This gives the 5% P2P users 80-95% of the capacity
  – P2P does not know when to stop until it sees congestion
• Instead we should give equal capacity for equal pay
  – This is simply a revised equality rule – similar users get equal capacity
  – This tracks with what we pay
  – If the network assures all similar users equal service, file sharing will find the best equitable method – perhaps slack time and local hosts
• This is a major worldwide problem
  – P2P is not bad; it can be quite effective
  – But without revised fairness, multi-flow applications can take capacity away from other users, dramatically slowing their network use
  – It then becomes an arms race – who can use the most flows
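A sketch of the revised rule, equal capacity for equal pay: the shared pool is first split across users (weighted by what they pay), and only then across each user's own flows. The function, weights, and flow counts are illustrative assumptions, not a specification:

```python
def per_user_allocation(pool_mbps: float, users: dict) -> dict:
    """users maps name -> (pay_weight, flow_count).
    Capacity is split by pay weight per user, then evenly across that user's
    own flows, so opening more flows no longer buys more capacity."""
    total_weight = sum(weight for weight, _ in users.values())
    shares = {}
    for name, (weight, flows) in users.items():
        user_share = pool_mbps * weight / total_weight
        shares[name] = (user_share, user_share / max(flows, 1))  # (per-user, per-flow)
    return shares

# Same 100 Mbps pool as before: both users pay the same, so each gets 50 Mbps
# regardless of how many flows they open.
print(per_user_allocation(100, {"normal_user": (1, 1), "p2p_user": (1, 100)}))
```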
P2P Control with Flow Management
• These are actual measurements showing the effect of controlling P2P traffic as a class
• In this case, all P2P was limited to a fixed capacity, then equalized for fairness
• P2P was reduced from 67% to 1.6%
• Normal traffic then increased by 4:1
Why is it Important to Change the Fairness Rule?
• P2P is attractive and growing rapidly
  – It cannot determine its fair share itself
  – The network must provide the fair boundary
  – Without fairness, normal users will slow down and stall
• Multi-flow applications will be misled on economics
  – Today most P2P users believe their peak capacity is theirs
  – They do not realize they may be slowing down other users
  – The economics of file transfer are thus badly misjudged
  – This leads to globally uneconomic product decisions
• User equality will lead to economic use of communications
Network Security
• Wireshark users know the value of watching communication
• Today the network is open and unchecked
  – All security is based on "flawless" computer systems
  – This needs to change – the network must help
• Finding Bots is best done by watching network traffic
• Knowing who is trying to connect can help stop penetration
• Allocating high-priority capacity requires authentication
  – Emergency services, critical services, paid services
• High-value services need authentication, not passwords
  – On-line banking, credit transactions, etc.
Authentication Security Program
• A new DARPA project will allow users to be authenticated
• The network can ensure the source IP address is not faked
• The network can assign user-based priorities
  – Emergency services need priority
  – Corporations have priority applications
• The recipient can know who is trying to connect
  – Filter out requests from un-authenticated sources
  – Control application access to specific users
• Today security is based on fixing all computer holes
  – Network assistance greatly reduces the threat
DARPA Secure Authentication Program
SH = Secure Hash (identifies the user when hashed with the Key); NC = Network Controller
(Diagram: Sender, Receiver, intermediate NCs, and an AAA Server)
  – User Log-in: NC identifies itself to AAA, gets the SH & Key
  – First Packet: NC checks the user via the SH with AAA, getting the Key & priority
  – Each Flow Start: the SH is sent to the NC, checked by the NC using the Key, and the user can be checked with AAA using the SH
• Network finds the user's priority & QoS info from the AAA server
• Receiver can check the user ID if allowed & reject the flow if desired
• Intermediate NCs can also check the user's priority & QoS
• Result: The user's ID securely controls network access & priority
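To make the SH idea concrete, here is one way a per-flow secure hash could be derived and checked, assuming it is an HMAC over the flow identifier keyed with the per-session Key obtained from the AAA server at log-in. The construction and field choices are illustrative assumptions; the slide does not specify the program's actual format:

```python
import hmac, hashlib

def flow_secure_hash(session_key: bytes, user_id: str, five_tuple: tuple) -> bytes:
    """Illustrative SH for one flow: an HMAC binding the user's identity to the
    flow's 5-tuple.  (Assumed construction, not the DARPA program's protocol.)"""
    msg = (user_id + "|" + "|".join(map(str, five_tuple))).encode()
    return hmac.new(session_key, msg, hashlib.sha256).digest()

# Sender side: include the SH with the first packet of the flow.
key = b"per-session key issued by AAA at log-in"          # hypothetical key
flow = ("10.0.0.5", 51514, "192.0.2.9", 443, "tcp")       # hypothetical 5-tuple
sh = flow_secure_hash(key, "alice", flow)

# Network Controller side: recompute with the Key from AAA and compare.
ok = hmac.compare_digest(sh, flow_secure_hash(key, "alice", flow))
print("flow accepted" if ok else "flow rejected")
```

Because the Key never travels with the flow, a forged source address cannot produce a valid SH, which is what lets the network reject unauthenticated flows and honor user-based priorities.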
The New Network Edge – Flow Management
• Flow Management at the ISP edge can:
  – Ensure fairness – equal capacity for equal pay
  – Eliminate overload problems (TCP stalls and video artifacts)
  – Add authentication security to the network
• All these benefits at much lower cost & power vs. DPI
(Photo: 40 Gbps of capacity in 1 RU with Anagran)
Summary
• Today's IP Networks need improvement
• Fairness is poor – 5% of users take 80% of capacity
  – The cause is the old rule of equal capacity per flow
  – This needs to change to equal capacity for equal pay
• Response time and QoS suffer from random discards
  – Web access suffers from unequal flow rates and TCP stalls
  – Video suffers from packet loss and TCP stalls
  – Voice suffers from packet loss and excessive delay
• Security could be improved if the network did authentication
  – Avoid unknown users penetrating computers
  – Permit priority for emergency workers and critical apps
• Flow Management allows these improvements at lower cost