Teletraffic Lessons for the Future Internet
Presenter: Moshe Zukerman
ARC Centre for Ultra-Broadband Information Networks
Electrical and Electronic Engineering Dept., The University of Melbourne
Outline
• My Research
• Background: evolution, services, network design optimization, cost and carbon cost, Internet growth, link utilization, Internet congestion control
• Optical Internet model and design options
• Example of an optical network performance analysis problem
• Results
• Conclusion
My Research
• Queueing theory – bursty traffic – link dimensioning
• Optical network performance and design
• Medium access control – protocol performance analysis and enhancement
• Other topics: TCP, wireless/mobile networks
New services – research directions
• Internet of things (mice)
  • make it work from a traffic point of view
  • lightweight protocols
  • traffic implications – network dimensioning
• HD-IPTV, virtual reality (elephants)
  • streaming vs. download
  • network dimensioning
  • multi-service Internet
  • traffic shaping/policing
• Others in between, e.g. wideband voice
Moore’s law and Internet equivalence
• Moore's Law: the power and speed of computers will double every 18–24 months.
• Internet backbone traffic grew from one Tbit/sec in 1990 to 3,000 Tbit/sec in 1997.
• The number of Internet hosts more than doubled every year between 1980 and 2000.
World Internet Statistics
World population: 6,676,120,288
Number of Internet users: 1,407,724,920
Penetration: 21.1%
Growth 2000–2008: 290.0%
Source: www.internetworldstats.com
Design Optimization
Aim: to provide services at minimal cost
Subject to: meeting required quality of service, and other practical constraints (including availability of power)
Google Data Center
The Dalles, Oregon
Source: The New York Times (14-6-2006), “Hiding in Plain Sight, Google Seeks More Power” by John Markoff and Saul Hansell
Competing with Microsoft for dominance, but the practical constraint is power
Power consumption ~200 MW (R. S. Tucker)
Google Data Center (cont.)
Source: www.techbanyan.com/archives/140
Network Power Distribution
• Switching and routing: 34%
• Regeneration: 27%
• Processing: 22%
• Storage: 10%
• Transport: 7%
Reference: A. Vukovic, “Data Centers: Network Power Density Challenges,” ASHRAE Journal, vol. 47, no. 4, April 2005.
Internet Power Usage
Today the Internet (excluding PCs, customer equipment, mobile terminals, etc.) uses ~1% of total world electricity.
If 2 billion people have broadband access (1 Mb/s), then ~5%.
If 2 billion people have broadband access (10 Mb/s), then ~50%.
Source: R. S. Tucker, “A Green Internet,” May 2007, CUBIN Seminar, The University of Melbourne
Design Optimization
Aim: to provide services at minimal cost (not forgetting direct energy $ plus indirect carbon $)
Subject to: meeting required quality of service, and other practical constraints (including availability of power)
The other aspect is utilization (a traditional teletraffic concept)
Link Utilization
Utilization = proportion of time the link is busy.
A measure of system efficiency and profit for telecom providers.
The traditional teletraffic aim has been to maximize utilization subject to meeting queueing delay (and loss) requirements.
It’s all about using the scraps!
[Figure: two traffic-rate time series]
Bursty traffic = low utilization and bad service
Smooth traffic = high utilization and good service
A Simple model
[Figure: traffic rate X over time on a link of capacity C]
Quality measure: P(X > C) < ε
Bursty traffic
[Figure: bit-rate histogram]
E[X] = 150 Mbit/sec, C = 1000 Mbit/sec: capacity is many standard deviations above the mean
Smooth traffic (Gaussian, many sources)
[Figure: bit-rate histogram]
E[X] = 850 Mbit/sec, C = 1000 Mbit/sec
Chebyshev’s inequality: P(|X − E[X]| > s) ≤ Var(X)/s²
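To make the bound concrete, here is a minimal sketch in Python; the standard deviation of 30 Mbit/sec is an assumed figure for illustration (the slides give no variance), and the function name is mine.

```python
# Chebyshev's inequality: P(|X - E[X]| > s) <= Var(X)/s^2.
# Since {X > C} is contained in {|X - E[X]| > C - E[X]}, the same bound
# upper-bounds the overflow probability P(X > C).

def chebyshev_overflow_bound(mean, sigma, capacity):
    """Upper bound on P(X > capacity), taking s = capacity - mean."""
    s = capacity - mean
    return (sigma / s) ** 2

# Smooth-traffic slide: E[X] = 850 Mbit/s, C = 1000 Mbit/s.
# ASSUMED sigma = 30 Mbit/s (illustrative only).
print(chebyshev_overflow_bound(mean=850.0, sigma=30.0, capacity=1000.0))  # 0.04
```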
Internet end-to-end protocols
• Transmission Control Protocol (TCP) – non-real-time traffic
• User Datagram Protocol (UDP) – real-time traffic
Towards All-Optical Internet
• “Old” electronic Internet: capacity expensive, buffering cheap.
• The introduction of DWDM makes capacity cheap.
• Electronic bottleneck: O-E-O – but maybe the bottleneck is not this E but the other one (Energy, or P = Power).
• Future all-optical Internet (?): link capacity plentiful, buffering painful (cost, power, space), and wavelength conversion (especially for OPS) is also costly.
An Internet Model
[Figure: access networks around an optical core]
Bufferless Optical Burst/Packet Switching
• Packet switching, but without buffers.
• Packets cannot be delayed along the way.
• Delay is possible at the edges.
• Some multiplexing is possible.
• Between packet switching and circuit switching.
• How efficient can it be?
Optical switch
[Figure: optical switch with input and output trunks]
Optical switch without buffers and without wavelength conversion
[Figure: optical switch; each trunk comprises multiple links]
Trunks and Links
A trunk can be composed of 10 cables.
Each cable comprises 100 wavelengths.
So a trunk will have 1000 links.
Let us focus on one output trunk.
Markov chain analysis is a common approach to evaluating loss probability.
Models – no buffers, many pipes
Kendall notation a/b/k/k: arrival process / service distribution / number of servers / number of buffer places (including those at the servers).
• M/M/k/k
• M/M/∞
A = arrival rate (λ) / service rate (µ), i.e. A = arrivals per mean service time.
M/M/k/k was developed for telephony
Old message from a local exchange: “We are sorry; all circuits are busy now; will you try your call again later.”
Erlang B formula: gives the probability that a call is blocked under the M/M/k/k model.
Recursion for the blocking probability with offered traffic A and n channels:
E0(A) = 1
En(A) = A·En−1(A) / (n + A·En−1(A)), n = 1, 2, …
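A minimal, self-contained sketch of this recursion in Python (the function name erlang_b is mine):

```python
def erlang_b(A, n):
    """Blocking probability En(A) for offered load A (erlangs) and n channels,
    via the numerically stable recursion E0 = 1,
    Ek = A*E(k-1) / (k + A*E(k-1))."""
    E = 1.0
    for k in range(1, n + 1):
        E = A * E / (k + A * E)
    return E

# Example: 10 erlangs offered to 15 channels.
print(erlang_b(10.0, 15))  # ~0.036
```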
Multiplexing Benefit
[Figure: multiplexing benefit for target blocking probability = 0.0001]
With and Without wavelength conversion
If a trunk is composed of 10 cables and each cable comprises 100 wavelengths, the trunk has 1000 links.
With wavelength conversion, the bottleneck trunk has 1000 interchangeable links (achieving 91% utilization).
Without wavelength conversion, it is divided into 100 mutually exclusive sets, one per wavelength, each with only 10 links (22% utilization).
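The 22% and 91% figures can be reproduced with the Erlang B recursion above: for each trunk size, search for the largest offered load whose blocking stays at or below 10⁻⁴ and report the carried load per link. This is a sketch; the helper names and the bisection approach are mine.

```python
def erlang_b(A, n):
    """Erlang B blocking probability via the standard recursion."""
    E = 1.0
    for k in range(1, n + 1):
        E = A * E / (k + A * E)
    return E

def max_utilization(n, target=1e-4):
    """Largest utilization A*(1-B)/n with blocking B <= target, by bisection on A
    (valid because Erlang B blocking is increasing in A)."""
    lo, hi = 0.0, float(n)
    for _ in range(100):
        mid = (lo + hi) / 2
        if erlang_b(mid, n) <= target:
            lo = mid
        else:
            hi = mid
    return lo * (1 - erlang_b(lo, n)) / n

print(f"10 links:   {max_utilization(10):.0%}")    # ~22% (no conversion, per wavelength)
print(f"1000 links: {max_utilization(1000):.0%}")  # ~91% (full wavelength conversion)
```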
Why does larger A increase utilization?
If the number of busy servers (Q) in an M/M/k/k system is almost always below the total number of output links k, the M/M/k/k system behaves (almost) like M/M/∞.
For M/M/∞, Q is Poisson distributed with parameter A; thus E[Q] = Var[Q] = A.
Poisson => Normal as A (and k) increase, so σ[Q]/E[Q] = 1/√A => 0 as A increases.
The spare capacity (k − E[Q]), e.g. 5σ[Q], becomes negligible relative to E[Q] (recall E[Q] = A).
This is similar to what we saw before.
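A small numeric check of this argument, dimensioning each system with the slide's example of 5σ[Q] spare capacity:

```python
import math

# For M/M/infinity, E[Q] = Var[Q] = A, so sigma[Q] = sqrt(A).
# Dimension k = A + 5*sigma[Q] and observe utilization E[Q]/k -> 1 as A grows.
for A in [10, 100, 1000, 10000, 100000]:
    sigma = math.sqrt(A)
    k = A + 5 * sigma
    print(f"A={A:>6}  sigma/E[Q]={sigma/A:.4f}  utilization={A/k:.1%}")
# utilization climbs from ~39% at A=10 to ~98% at A=100000
```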
As A increases we go from:
[Figure: bursty traffic – bit-rate histogram with E[X] = 150 Mbit/sec, C = 1000 Mbit/sec, large spare capacity]
To:
[Figure: smooth traffic – bit-rate histogram with E[X] = 850 Mbit/sec, C = 1000 Mbit/sec]
M/M/k/k modeling of OPS/OBS over WDM
[Figure: packet occupancy over time on wavelength 1, wavelength 2, wavelength 3]
Blocking probability is obtained by the Erlang B formula.
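As a cross-check, a short Monte Carlo simulation of a bufferless k-wavelength link agrees with the Erlang B value; this sketch (function names mine) is not the authors' simulator.

```python
import heapq, random

def simulate_mmkk_blocking(A, k, arrivals=200_000, seed=1):
    """Monte Carlo estimate of M/M/k/k blocking: Poisson arrivals of rate A,
    unit-mean exponential holding times, k servers, no buffer."""
    rng = random.Random(seed)
    t, busy, blocked = 0.0, [], 0      # busy = min-heap of departure times
    for _ in range(arrivals):
        t += rng.expovariate(A)        # next arrival epoch
        while busy and busy[0] <= t:   # release servers that have finished
            heapq.heappop(busy)
        if len(busy) < k:
            heapq.heappush(busy, t + rng.expovariate(1.0))
        else:
            blocked += 1               # bufferless: the packet is lost
    return blocked / arrivals

def erlang_b(A, n):
    E = 1.0
    for j in range(1, n + 1):
        E = A * E / (j + A * E)
    return E

print(simulate_mmkk_blocking(A=8.0, k=10))  # simulation, ~0.12
print(erlang_b(8.0, 10))                    # Erlang B,  ~0.1217
```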
Extensions and technology choices
• Limited number of input links => Engset model – still telephony (1918).
• Frozen time when a packet is dumped => generalized Engset model (Cohen 1957).
• Optical buffers: frozen time while a packet is inserted into the buffer.
• Hybrid circuit/packet switching.
• Hybrid electronic/optical switching (!)
• Optical burst switching.
• Network with multiple bottlenecks.
• TCP on top.
One optical network model
• Core switches: symmetrical.
• Edge routers: infinite buffers.
• Access links: smaller bandwidth than core links.
• TCP sources: saturated; no maximum window limit (conservative: large send and receive buffers).
Notation
M: total number of input links
K: number of output links
B: buffer size
µ: service rate of a single output link = reciprocal of the mean packet time
PD: packet loss probability
Analytical model
[Figure: M input links, each with packet arrival rate λI, feeding the output trunk; packet loss probability PD]
λI = 1/[inter-packet time per link]
Model of TCP throughput
Relationship between TCP bottleneck throughput and packet loss probability, with:
Ragg: the aggregate TCP throughput
N: the number of TCP flows
M: the number of input links
RTTH: the harmonic average round-trip time
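The formula itself was lost in extraction, so as a stand-in here is the classic square-root TCP model (Mathis et al.), which relates throughput to loss probability in the same spirit; treat it as an assumption, not the model used in the slides.

```python
import math

def aggregate_tcp_throughput(N, loss_prob, rtt_h, mss_bits=12000):
    """Aggregate throughput (bit/s) of N long-lived TCP flows under the
    classic square-root model: R = MSS/RTT * sqrt(3/(2p)) per flow.
    ASSUMPTION: a stand-in for the slide's (garbled) formula;
    mss_bits = 12000 corresponds to 1500-byte packets."""
    per_flow = (mss_bits / rtt_h) * math.sqrt(3.0 / (2.0 * loss_prob))
    return N * per_flow

# 100 flows, 0.1% loss, 100 ms harmonic-mean RTT.
print(f"{aggregate_tcp_throughput(100, 1e-3, 0.1):.3e}")  # ~4.6e8 bit/s
```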
Generalized Engset with Buffer (GEB)
λ* = 1/(1/λI + PD/µ); a fixed-point solution is needed.
Related models
• Engset with buffer (EB): use λI instead of λ* in GEB (no need for a fixed-point solution).
• M/M/K/K+B
Fixed-point equations
Binary search algorithm => fixed-point solution (open loop).
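A sketch of the fixed-point computation. The slides use a binary search; for simplicity this sketch uses successive substitution, and the M/M/K/K+B loss formula (a related model from the previous slide) stands in for the full GEB model. All parameter values and function names are illustrative.

```python
def mmkkb_loss(A, K, B):
    """Loss probability of M/M/K/K+B: Poisson offered load A erlangs,
    K servers, B waiting places; by PASTA, loss = prob. of state K+B."""
    probs = [1.0]                        # unnormalized state probabilities
    for n in range(1, K + B + 1):
        probs.append(probs[-1] * A / min(n, K))
    return probs[-1] / sum(probs)

def geb_fixed_point(lam_I, mu, M, K, B, tol=1e-12):
    """Successive substitution for lambda* = 1/(1/lambda_I + PD/mu).
    ASSUMPTION: M/M/K/K+B stands in for the GEB loss model."""
    PD = 0.0
    for _ in range(1000):
        lam_star = 1.0 / (1.0 / lam_I + PD / mu)
        PD_new = mmkkb_loss(M * lam_star / mu, K, B)
        if abs(PD_new - PD) < tol:
            break
        PD = PD_new
    return lam_star, PD

lam_star, PD = geb_fixed_point(lam_I=0.9, mu=1.0, M=20, K=16, B=4)
print(f"lambda* = {lam_star:.4f}, PD = {PD:.4e}")
```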
Model Validation
[Figure: validation results, 16 input trunks]
Zero Buffer – Scaling Effect
[Figure: scaling with the number of sources, no wavelength conversion]
Conclusion
• Teletraffic models can be used to provide insight into the economics of the optical Internet.
• Power usage and its related cost must be considered.
• In the optical Internet, buffering can be pushed to the edges efficiently as traffic, the number of sources, and capacity (number of wavelengths per cable) increase, provided cost-effective optical wavelength conversion is available.