Demystifying and Controlling the Performance of Data Center Networks
Why are Data Centers Important? • Internal users • Line-of-Business apps • Production test beds • External users • Web portals • Web services • Multimedia applications • Chat/IM
Why are Data Centers Important? • Poor performance → loss of revenue • Understanding traffic is crucial • Traffic engineering is crucial
Road Map • Understanding data center traffic • Improving network-level performance • Ongoing work
Canonical Data Center Architecture Core (L3) Aggregation (L2) Edge (L2) Top-of-Rack Application servers
Dataset: Data Centers Studied • 10 data centers • 3 classes • Universities • Private enterprise • Clouds • Internal users • Univ/priv • Small • Local to campus • External users • Clouds • Large • Globally diverse
Dataset: Collection • SNMP • Poll SNMP MIBs • Bytes-in/bytes-out/discards • > 10 Days • Averaged over 5 mins • Packet Traces • Cisco port span • 12 hours • Topology • Cisco Discovery Protocol
Canonical Data Center Architecture Core (L3) Aggregation (L2) Packet Sniffers Edge (L2) Top-of-Rack Application servers
Analyzing Packet Traces • Transmission patterns of the applications • Properties of packets are crucial for understanding the effectiveness of techniques • ON-OFF traffic at the edges • Binned at 15 and 100 ms granularity • We observe that the ON-OFF behavior persists → routing must react quickly to overcome bursts (see the sketch below)
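A minimal sketch of how such ON/OFF periods could be extracted from an edge packet trace, assuming only a sorted list of packet arrival timestamps; the 15 ms default bin width mirrors the smaller of the two bin sizes above, and all names are illustrative rather than the authors' tooling.

```python
# Sketch: derive ON/OFF period lengths from packet arrival times by binning.
# Assumes `timestamps` is a sorted list of arrival times in seconds (illustrative input).

def on_off_periods(timestamps, bin_width=0.015):
    """Bin arrivals; return lists of ON and OFF run lengths, measured in bins."""
    if not timestamps:
        return [], []
    start, end = timestamps[0], timestamps[-1]
    n_bins = int((end - start) / bin_width) + 1
    counts = [0] * n_bins
    for t in timestamps:
        counts[int((t - start) / bin_width)] += 1

    on_periods, off_periods = [], []
    run_len, run_on = 0, counts[0] > 0
    for c in counts:
        active = c > 0
        if active == run_on:
            run_len += 1
        else:
            (on_periods if run_on else off_periods).append(run_len)
            run_on, run_len = active, 1
    (on_periods if run_on else off_periods).append(run_len)
    return on_periods, off_periods
```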
Data-Center Traffic is Bursty • Understanding the arrival process • Range of acceptable models • What is the arrival process? • Heavy-tailed for all three distributions: ON times, OFF times, and inter-arrival times • Lognormal across all data centers • Different from the Pareto distributions seen in WAN traffic → need new models to generate traffic (see the fitting sketch below)
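As a rough illustration of the kind of comparison behind that claim, the sketch below fits both a lognormal and a Pareto distribution to a sample (e.g. the ON periods from the previous sketch) and compares goodness of fit with a Kolmogorov-Smirnov statistic; the procedure is an assumption, not the authors' exact methodology.

```python
# Sketch: compare lognormal and Pareto fits for a heavy-tailed sample.
import numpy as np
from scipy import stats

def compare_fits(samples):
    """Return {distribution name: (KS statistic, p-value)}; smaller KS = better fit."""
    samples = np.asarray(samples, dtype=float)
    results = {}
    for name, dist in [("lognormal", stats.lognorm), ("pareto", stats.pareto)]:
        params = dist.fit(samples)                       # maximum-likelihood fit
        ks_stat, p_value = stats.kstest(samples, dist.cdf, args=params)
        results[name] = (ks_stat, p_value)
    return results
```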
Canonical Data Center Architecture Core (L3) Aggregation (L2) Edge (L2) Top-of-Rack Application servers
Intra-Rack Versus Extra-Rack • Quantify the amount of traffic using the interconnect • Perspective for interconnect analysis • Extra-rack = sum of ToR uplink traffic • Intra-rack = sum of server-link traffic − extra-rack (see the sketch below)
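A minimal sketch of that arithmetic, assuming per-port byte counters are available at each ToR switch (variable names are illustrative):

```python
# Sketch: split a ToR's traffic into intra-rack and extra-rack volumes
# from per-port byte counters (illustrative field names).

def split_rack_traffic(uplink_bytes, server_link_bytes):
    """uplink_bytes: bytes on each ToR uplink toward the aggregation layer.
    server_link_bytes: bytes on each server-facing ToR port."""
    extra_rack = sum(uplink_bytes)                     # Extra-rack = sum of uplinks
    intra_rack = sum(server_link_bytes) - extra_rack   # Intra-rack = server links - extra-rack
    return intra_rack, extra_rack
```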
Intra-Rack Versus Extra-Rack Results • Clouds: most traffic stays within a rack (75%) • Colocation of apps and dependent components • Other DCs: > 50% leaves the rack • Un-optimized placement
Extra-Rack Traffic on DC Interconnect • Utilization: core > aggregation > edge • Aggregation of traffic from many links onto few • Tail of core utilization differs • Hot-spots: links with > 70% utilization • Prevalence of hot-spots differs across data centers
Persistence of Core Hot-Spots • Low persistence: PRV2, EDU1, EDU2, EDU3, CLD1, CLD3 • High persistence/low prevalence: PRV1, CLD2 • 2-8% of links are hot-spots more than 50% of the time • High persistence/high prevalence: CLD4, CLD5 • 15% of links are hot-spots more than 50% of the time
Prevalence of Core Hot-Spots • Low persistence: very few concurrent hot-spots • High persistence: few concurrent hot-spots • High prevalence: < 25% of links are hot-spots at any time • Smart routing can better utilize the core and avoid hot-spots (see the sketch below) • [Figure: fraction of core links that are concurrent hot-spots over a 50-hour period, per data center]
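For concreteness, here is one way prevalence and persistence could be computed from link-utilization samples; the 70% threshold comes from the hot-spot definition two slides earlier, while the data layout is an assumption.

```python
# Sketch: hot-spot prevalence and persistence from core-link utilization samples.
# Assumes util[t][l] is the utilization (0..1) of core link l in time bin t.

def hotspot_stats(util, threshold=0.7):
    n_bins, n_links = len(util), len(util[0])
    # Prevalence: fraction of core links that are hot in each time bin.
    prevalence = [sum(u > threshold for u in row) / n_links for row in util]
    # Persistence: fraction of time bins in which each link is hot.
    persistence = [sum(util[t][l] > threshold for t in range(n_bins)) / n_bins
                   for l in range(n_links)]
    return prevalence, persistence
```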
Insights Gained • 75% of traffic stays within a rack (Clouds) • Applications are not uniformly placed • Traffic is bursty at the edge • At most 25% of core links highly utilized • Effective routing algorithm to reduce utilization • Load balance across paths and migrate VMs
Road Map • Understanding data center traffic • Improving network-level performance • Ongoing work
Options for TE in Data Centers? • Currently supported techniques • Equal-Cost MultiPath (ECMP) • Spanning Tree Protocol (STP) • Proposed • Fat-Tree, VL2 • Other existing WAN techniques • COPE, …, OSPF link tuning
How do we evaluate TE? • Simulator • Input: Traffic matrix, topology, traffic engineering • Output: Link utilization • Optimal TE • Route traffic using knowledge of future TM • Data center traces • Cloud data center (CLD) • Map-reduce app • ~1500 servers • University data center (UNV) • 3-Tier Web apps • ~500 servers
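The sketch below outlines the kind of flow-level simulator this evaluation describes: it replays a sequence of traffic matrices over a topology, lets a TE scheme choose paths, and reports link utilization. The interfaces and data structures are illustrative assumptions, not the authors' simulator.

```python
# Sketch: flow-level TE simulation loop (illustrative structure only).
# topology maps (src_tor, dst_tor) -> list of candidate paths (each a list of links);
# te_scheme(tm, topology) returns one chosen path per ToR pair.
from collections import defaultdict

def simulate(traffic_matrices, topology, te_scheme, link_capacity):
    max_utils = []
    for tm in traffic_matrices:                      # one traffic matrix per time bin
        routes = te_scheme(tm, topology)             # the TE scheme picks paths for this bin
        load = defaultdict(float)
        for (src, dst), volume in tm.items():
            for link in routes[(src, dst)]:
                load[link] += volume
        max_util = max(load.values()) / link_capacity if load else 0.0
        max_utils.append(max_util)                   # compare, e.g., against optimal TE
    return max_utils
```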
Drawbacks of Existing TE • STP does not use multiple paths • 40% worse than optimal • ECMP does not adapt to burstiness • 15% worse than optimal
Design Requirements for TE • Calculate paths & reconfigure the network • Use all network paths • Use a global view • Avoid local optima • Must react quickly • React to burstiness • How predictable is traffic? …
Is Data Center Traffic Predictable? • YES! 27% or more of the traffic matrix is predictable (ranging from 27% to 99% across data centers) • Manage predictable traffic more intelligently
How Long is Traffic Predictable? • Different patterns of predictability • 1 second of historical data is able to predict the future (roughly 1.5–5.0 and 1.6–2.5 seconds ahead, depending on the data center) (see the sketch below)
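One way such predictability could be quantified is sketched below: a ToR pair counts as predictable if its volume changes by less than some tolerance between consecutive time bins. The 20% tolerance and the dictionary representation of the traffic matrix are illustrative assumptions, not the paper's exact definition.

```python
# Sketch: fraction of ToR pairs whose traffic stays roughly stable
# from one time bin to the next (tolerance is illustrative).

def predictable_fraction(tm_prev, tm_next, tolerance=0.2):
    pairs = set(tm_prev) | set(tm_next)
    if not pairs:
        return 1.0
    stable = 0
    for pair in pairs:
        a, b = tm_prev.get(pair, 0.0), tm_next.get(pair, 0.0)
        base = max(a, b)
        if base == 0 or abs(a - b) / base <= tolerance:
            stable += 1
    return stable / len(pairs)
```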
MicroTE: Architecture • Three components: a monitoring component, a routing component, and a network controller • Global view: created by the network controller • React to predictable traffic: the routing component tracks demand history • All network paths: the routing component creates routes using all paths
Architectural Questions • Efficiently gather network state? • Determine predictable traffic? • Generate and calculate new routes? • Install network state?
Monitoring Component • Efficiently gather TM • Only one server per ToR monitors traffic • Transfer changed portion of TM • Compress data • Tracking predictability • Calculate EWMA over TM (every second) • Empirically derived alpha of 0.2 • Use time-bins of 0.1 seconds
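A minimal sketch of the EWMA tracking described above, using the slide's alpha of 0.2 over the ToR-to-ToR traffic matrix; the dictionary-based traffic-matrix representation is an illustrative assumption.

```python
# Sketch: EWMA-based tracking of the ToR-to-ToR traffic matrix.
# alpha = 0.2 is the empirically derived weight mentioned on the slide.

ALPHA = 0.2

def update_prediction(predicted_tm, measured_tm, alpha=ALPHA):
    """Blend the newest measurement (e.g. the last second of traffic)
    into the running per-ToR-pair prediction."""
    updated = {}
    for pair in set(predicted_tm) | set(measured_tm):
        prev = predicted_tm.get(pair, 0.0)
        new = measured_tm.get(pair, 0.0)
        updated[pair] = alpha * new + (1 - alpha) * prev
    return updated
```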
Routing Component • On each new global view: determine predictable ToRs → calculate network routes for predictable traffic → set ECMP for unpredictable traffic → install routes
Routing Predictable Traffic • LP formulation • Constraints • Flow conservation • Capacity constraints • Use K equal-length paths • Objective • Minimize link utilization • Bin-packing heuristic (see the sketch below) • Sort flows in decreasing order • Place each on the link with the greatest available capacity
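A sketch of the greedy bin-packing heuristic as described on the slide: flows are sorted in decreasing order of volume and each is placed on the candidate path whose bottleneck link has the most remaining capacity. The data structures and the assumption of uniform link capacity are illustrative.

```python
# Sketch: greedy bin-packing of predictable flows onto K candidate paths.
# flows: dict (src_tor, dst_tor) -> volume
# paths: dict (src_tor, dst_tor) -> list of candidate paths (each a list of link ids)

def bin_pack_routes(flows, paths, link_capacity):
    residual = {}                                    # remaining capacity per link
    routes = {}
    # Sort flows in decreasing order of volume.
    for pair, volume in sorted(flows.items(), key=lambda kv: kv[1], reverse=True):
        best_path, best_slack = None, float("-inf")
        for path in paths[pair]:
            # Slack of a path = remaining capacity of its most loaded link.
            slack = min(residual.get(l, link_capacity) for l in path)
            if slack > best_slack:
                best_path, best_slack = path, slack
        routes[pair] = best_path
        for l in best_path:
            residual[l] = residual.get(l, link_capacity) - volume
    return routes
```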
Implementation • Changes to data center • Switch • Install OpenFlow firmware • End hosts • Add kernel module • New component • Network controller • C++ NOX modules
Evaluation: Motivating Questions • How does MicroTE compare to optimal? • How does MicroTE perform under varying levels of predictability? • How does MicroTE scale to large DCNs? • What overhead does MicroTE impose?
How do we evaluate TE? • Simulator • Input: Traffic matrix, topology, traffic engineering • Output: Link utilization • Optimal TE • Route traffic using knowledge of future TM • Data center traces • Cloud data center (CLD) • Map-reduce app • ~1500 servers • University data center (UNV) • 3-Tier Web apps • ~500 servers
Performing Under Realistic Traffic • Significantly outperforms ECMP • Slightly worse than optimal (1%-5%) • Bin-packing and LP of comparable performance
Performance Versus Predictability • Under low predictability, performance is similar to ECMP
Performance Versus Predictability • Under low predictability, performance is similar to ECMP • Under high predictability, performance is comparable to optimal • MicroTE adjusts according to predictability
Conclusion • Studied existing TE • Found it lacking (15-40% worse than optimal) • Studied data center traffic • Discovered traffic predictability (27% for 2 seconds) • Developed guidelines for ideal TE • Designed and implemented MicroTE • Brings state of the art within 1-5% of ideal • Efficiently scales to large DCs (16K servers)
Road Map • Understanding data center traffic • Improving network-level performance • Ongoing work
Looking Forward • Stop treating the network as a carrier of bits • Bits in the network have meaning • Applications know this meaning • Can applications control networks? • E.g., Map-reduce: the scheduler performs network-aware task placement and flow placement