INCITE: Introduction. Jiří Navrátil, SLAC
Project Partners and Researchers
INCITE: Edge-based Traffic Processing and Service Inference for High-Performance Networks
Richard Baraniuk, Rice University; Les Cottrell, SLAC; Wu-chun Feng, LANL
• Rice University: Richard Baraniuk, Edward Knightly, Robert Nowak, Rudolf Riedi, Xin Wang, Yolanda Tsang, Shriram Sarvotham, Vinay Ribeiro
• Los Alamos National Lab (LANL): Wu-chun Feng, Mark Gardner, Eric Weigle
• Stanford Linear Accelerator Center (SLAC): Les Cottrell, Warren Matthews, Jiri Navratil
Project Goals
• Objectives: scalable, edge-based tools for on-line network analysis, modeling, and measurement
• Based on: advanced mathematical theory and methods
• Designed to: support high-performance computing infrastructures such as computational grids, ESnet, Internet2, and other high-performance networking projects
Project Elements
• Advanced techniques from networking, supercomputing, statistical signal processing, and applied mathematics
• Multiscale analysis and modeling: understand the causes of burstiness in network traffic; build realistic yet analytically tractable, statistically robust, and computationally efficient models
• On-line inference algorithms: characterize and map network performance as a function of space, time, application, and protocol
• Data collection tools and validation experiments
Scheduled Accomplishments
• Multiscale traffic models and analysis techniques: based on multifractals, cascades, and wavelets; study how large flows interact and cause bursts; study adverse modulation of application-level traffic by TCP/IP
• Inference algorithms for paths, links, and routers: multiscale end-to-end path modeling and probing; network tomography (active and passive)
• Data collection tools: add multiscale path and link inference to the PingER suite; integrate into the ESnet NIMI infrastructure; MAGNeT (Monitor for Application-Generated Network Traffic); TICKET (Traffic Information-Collecting Kernel with Exact Timing)
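The wavelet-based multiscale analysis listed above can be illustrated with a minimal sketch: the energy of Haar wavelet detail coefficients at each dyadic scale, whose growth across scales is a standard burstiness diagnostic. This is a toy Python illustration, not the project's actual MWFS code; the function name `haar_energies` is made up here.

```python
import random

def haar_energies(x, max_scale=4):
    """Energy of unnormalized Haar detail coefficients per dyadic scale.

    For bursty, long-range-dependent traffic the log2 of the energies
    grows roughly linearly in scale; for white noise it stays flat."""
    energies = []
    for _ in range(max_scale):
        if len(x) < 2:
            break
        # detail = pairwise differences, approximation = pairwise means
        detail = [x[2 * i] - x[2 * i + 1] for i in range(len(x) // 2)]
        x = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
        energies.append(sum(d * d for d in detail) / len(detail))
    return energies

random.seed(0)
# White-noise "traffic": energies stay roughly constant across scales.
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
print(haar_energies(noise))
```

On real packet-count series one would compare these energies across scales (a log-scale diagram) to fit multifractal or cascade models.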
Future Research Plans
• New, high-performance traffic models: guide R&D of next-generation protocols
• Application-generated network traffic repository: enable grid and network researchers to test and evaluate new protocols against the actual traffic demands of applications rather than modulated demands
• Multiclass service inference: enable network clients to assess a system's multi-class mechanisms and parameters using only passive, external observations
• Predictable QoS via end-point control: ensure minimum QoS levels for traffic flows; exploit path and link inferences in real-time end-point admission control
There is no vacuum: Surveyor, NIMI, PingER, RIPE; Optivity, CiscoWorks, Spectrum, HP OpenView
JNFLOW (Cisco NetFlow)
FPP phase (From Papers to Practice): MWFS, TOMO, TOPO
[Timing diagram: 20 ms and ~300 ms probe intervals; 40 T for a new set of values (12 sec)]
First results: estimated bandwidth BWe = 9.875 Mbps on a 10 Mbps Ethernet link (CT graph)
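One common way such a bottleneck estimate is obtained is packet-pair dispersion: two back-to-back packets leave the bottleneck spaced by the time needed to serialize one packet, so capacity = packet size / dispersion. A minimal sketch (illustrative only; this may not be the exact method behind the figure above):

```python
def bottleneck_capacity_mbps(packet_bytes, dispersion_s):
    """Classic packet-pair estimate: capacity = packet size / spacing."""
    return packet_bytes * 8 / dispersion_s / 1e6

# 1500-byte packets arriving 1.2 ms apart imply a ~10 Mb/s bottleneck,
# close to the 10 Mb/s Ethernet result reported above.
print(round(bottleneck_capacity_mbps(1500, 0.0012), 2))  # -> 10.0
```

In practice many pairs are sent and the estimate is taken from the mode of the dispersion distribution, since cross-traffic both compresses and expands individual pairs.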
What has been done
• Phase 1 (remodeling): code separation (BW and CT); find how to call MATLAB from another program; analyze results and data; find optimal parameters for the model
• Phase 2: Web integration of the BW estimate
Measurement hosts: ccnsn07.in2p3.fr, sunstats.cern.ch, pcgiga.cern.ch, plato.cacr.caltech.edu
pcgiga.cern.ch, default window size (WS): BW ~ 70 Mbps
pcgiga.cern.ch, WS = 512 KB: BW ~ 100 Mbps
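The window-size effect above follows from the basic TCP bound: throughput cannot exceed one window per round-trip time. A minimal sketch; the 40 ms RTT below is a hypothetical wide-area value chosen for illustration, not a measured SLAC-CERN figure:

```python
def tcp_window_limit_mbps(window_bytes, rtt_s):
    """TCP throughput ceiling: at most one full window per round trip."""
    return window_bytes * 8 / rtt_s / 1e6

# Hypothetical 40 ms RTT: a small window caps throughput well below
# the path capacity, while 512 KB lifts the ceiling past 100 Mb/s.
for ws in (64 * 1024, 512 * 1024):
    print(ws // 1024, "KB ->", round(tcp_window_limit_mbps(ws, 0.040), 1), "Mb/s")
```

This is why tuning the socket buffer (window) size is the first step when a high-bandwidth, high-delay path underperforms.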
Problems: network? software? licence?
After tuning: more optimistic results
MF-CT features and benefits
• No router access needed! Current systems for monitoring traffic load are based on SNMP or flows, which require access to routers
• Low cost: allows permanent monitoring (20 pkts/sec ~ overhead of 10 Kbytes/sec)
• Can be used as a data provider for ABW prediction (ABW = BW - CT)
• Weak point for common use: the MATLAB code
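The ABW prediction on the slide above is a direct subtraction: available bandwidth is the estimated capacity minus the inferred cross-traffic load. A trivial sketch (the floor at zero is an assumption for when cross-traffic estimates overshoot capacity):

```python
def available_bandwidth(capacity_mbps, cross_traffic_mbps):
    """ABW = BW - CT, floored at zero for noisy CT estimates."""
    return max(0.0, capacity_mbps - cross_traffic_mbps)

print(available_bandwidth(100.0, 37.5))  # -> 62.5
```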
Future work on CT
• Verification model: define and set up the verification model (S+R); measurements (S); analyze results (S+R)
• On-line running on selected sites: prepare code for automation and Web integration (S); CT code modification? (R)
[Diagram: MF-CT verification model. UDP echo probes run between SLAC, CERN, and IN2P3 across the Internet; SNMP counters at each site and an MF-CT simulator provide reference measurements.]
CT RE-ENGINEERING
For practical monitoring it would be necessary to modify the code for use in different modes:
• Continuous mode for monitoring one site on a large time scale (hours)
• Accumulation mode (1 min, 5 min, ?) for running on more sites in parallel
• A solution without MATLAB?
2 NEW 2Ls coming soon
Rob Nowak (and the CAIDA people) say: "This is the Internet" (www.caida.org)
Network Topology Identification
Ratnasamy & McCanne (1999); Duffield et al. (2000, 2001, 2002); Bestavros et al. (2001); Coates et al. (2001)
Pairwise delay measurements reveal topology
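The idea that pairwise delay measurements reveal topology can be sketched as follows: receivers whose paths share more links show higher end-to-end delay covariance, so an agglomerative procedure merges the most-correlated pair first. A minimal, self-contained sketch with made-up numbers, not code from the cited papers:

```python
import random

random.seed(2)

def covariance(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

# Three receivers: A and B share an extra fluctuating link that C does not.
n = 5000
shared_all = [random.gauss(10.0, 2.0) for _ in range(n)]  # link near the source
shared_ab = [random.gauss(5.0, 2.0) for _ in range(n)]    # link shared by A, B only
dA = [s + t + random.gauss(1.0, 1.0) for s, t in zip(shared_all, shared_ab)]
dB = [s + t + random.gauss(1.0, 1.0) for s, t in zip(shared_all, shared_ab)]
dC = [s + random.gauss(1.0, 1.0) for s in shared_all]

# The receiver pair with the largest delay covariance shares the most path:
pairs = {"AB": covariance(dA, dB), "AC": covariance(dA, dC), "BC": covariance(dB, dC)}
print(max(pairs, key=pairs.get))  # -> AB
```

Repeating the merge on the grouped receivers recovers the logical tree without any router cooperation.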
Network Tomography (source, routers/nodes, links, receivers)
Measure end-to-end (from source to receiver) losses/delays
Infer link-level (at internal routers) loss rates and delay distributions
Unicast Network Tomography
Measure end-to-end losses of packets ('0' = loss, '1' = success)
End-to-end measurements alone cannot isolate where losses occur!
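A minimal sketch of why shared-fate (packet-pair or multicast-style) measurements resolve what plain end-to-end losses cannot: on a two-receiver tree, the two marginal success probabilities plus the joint success probability identify all three links. The estimator below follows the classic multicast-inference algebra; the numbers are illustrative:

```python
def infer_two_leaf_tree(p1, p2, p12):
    """Two-receiver tree with shared-link success prob a and leaf
    probs b1, b2.  If both packets of a pair share their fate on the
    common link and leaves are independent:
        p1 = a*b1,  p2 = a*b2,  p12 = a*b1*b2
    so a = p1*p2/p12, b1 = p12/p2, b2 = p12/p1."""
    return p1 * p2 / p12, p12 / p2, p12 / p1

# Ground truth: shared link 95% success, leaves 90% and 80%.
a, b1, b2 = infer_two_leaf_tree(0.95 * 0.9, 0.95 * 0.8, 0.95 * 0.9 * 0.8)
print(round(a, 3), round(b1, 3), round(b2, 3))  # -> 0.95 0.9 0.8
```

With only p1 and p2, any split between shared and leaf losses fits the data; the joint statistic p12 is what breaks the ambiguity.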
Packet Pair Measurements [diagram: a packet pair traverses links carrying cross-traffic; delays are measured end-to-end]
Delay Estimation
Measure end-to-end delays of packet pairs: both packets experience the same delay on (shared) link 1; extra delay appears on link 3
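The shared-delay cancellation above can be sketched numerically: subtracting the two packets' end-to-end delays removes the common link-1 delay, leaving only the difference of the private-link delays. The exponential delay model and all parameter values below are hypothetical, for illustration only:

```python
import random

random.seed(1)

def pair_delays(shared_mean, leaf_means):
    """One packet-pair probe: both packets see (nearly) the same queueing
    delay on the shared link, then independent delays on their own links."""
    shared = random.expovariate(1 / shared_mean)
    return [shared + random.expovariate(1 / m) for m in leaf_means]

# Receiver 2's private link carries extra cross-traffic (mean 4 ms vs 1 ms).
# The shared 5 ms link cancels out in the difference of the pair's delays.
diffs = [d2 - d1 for d1, d2 in (pair_delays(5.0, [1.0, 4.0]) for _ in range(20000))]
print(round(sum(diffs) / len(diffs), 1))  # close to 3.0 (= 4.0 - 1.0 ms)
```

Averaging over many probes therefore isolates the extra delay on the unshared link without ever observing it directly.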
Packet-pair measurements: record occurrences of losses and delays
Key assumptions:
• fixed routes
• i.i.d. pair measurements
• losses and delays on each link are mutually independent
• packet-pair losses and delays on shared links are nearly identical
ns Simulation
• 40-byte packet-pair probes every 50 ms
• competing traffic comprised of: on-off exponential sources (500-byte packets) and TCP connections (1000-byte packets)
[Figure: test network showing link bandwidths (Mb/s): 2, 10, 10, 1, 0.5, 10, 2, 2, 5, 1, 0.5; plot of cross-traffic on link 9 (Kbytes/s) vs. time (s)]
Future work on TM and TP
• Model in the frame of the Internet (~100 sites): define the verification model (S+R); deploy and install code on sites (S); first measurements (S+R); analyze results (form, speed, quantity) (S+R); code modification? (R)
• Production model? Compete with PingER, RIPE, Surveyor, NIMI?
• How to unify the VIRTUAL structure with the real one