IETF BMWG WLAN Switch Benchmarking
69th IETF, Chicago
Tarunesh Ahuja, Tom Alexander, Scott Bradner, Sanjay Hooda, Jerry Perser, Muninder Sambi
Current Status
• Two candidate drafts posted
  • draft-alexander-bmwg-wlan-switch-term-00.txt
  • draft-alexander-bmwg-wlan-switch-meth-00.txt
• Some reflector discussion
  • Areas for improvement
  • Support for the work
  • "Isn't this out of BMWG's charter?"
The Drafts
Terminology and methodology for WLAN controller benchmarking
• Terminology covers items specific to WLAN controllers
  • Basic SUT components: WLAN, Access Controller, AP (WTP), ...
  • Traffic-related terms: unicast vs. multicast, high-priority vs. best-effort (BE)
  • Mobility terms: roaming, roaming failures, roaming decision, ...
  • Data plane benchmark terms: multicast forwarding rate, etc. (see the sketch after this slide)
    • Most terms already well defined by RFC 1242 and RFC 2285
  • Control plane benchmark terms: roaming rate, reset recovery, ...
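As a rough illustration of the data plane terms listed above, here is a minimal sketch of RFC 2285-style forwarding-rate arithmetic; the function names, counters, and trial structure are illustrative only and are not taken from the drafts.

```python
# Illustrative sketch of RFC 2285-style forwarding-rate terms.
# Counter names and trial structure are hypothetical, not from the drafts.

def forwarding_rate(frames_forwarded: int, trial_duration_s: float) -> float:
    """Forwarding rate: frames per second observed to be successfully forwarded."""
    return frames_forwarded / trial_duration_s

def multicast_forwarding_rate(frames_per_receiver: list, trial_duration_s: float) -> float:
    """Aggregate multicast forwarding rate: total frames delivered across all
    receiving ports per second (one offered frame counts once per receiver)."""
    return sum(frames_per_receiver) / trial_duration_s

# Example: 1,200,000 frames forwarded over a 60 s trial -> 20,000 fps
print(forwarding_rate(1_200_000, 60.0))
# Example: 3 receivers each saw 600,000 frames in 60 s -> 30,000 fps aggregate
print(multicast_forwarding_rate([600_000, 600_000, 600_000], 60.0))
```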
The Drafts (contd.)
• Methodology provides general information on test setups and test results, then describes the benchmark tests
  • Functional model of SUT/tester, test setups, SUT configuration
  • Test conditions: frame sizes, PHY settings, trial duration, ...
• Data plane benchmark tests
  • Unicast throughput/forwarding rate; multicast forwarding rate
  • Latency & jitter; QoS differentiation; power-save throughput
• Control plane benchmark tests
  • Roaming delay & rate; endstation association rate & capacity
  • WTP (AP) capacity
  • Reset & failover recovery times
• Provides an appendix on calculating intended load (illustrative sketch below)
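For the intended-load appendix mentioned above, a back-of-the-envelope sketch of the wired-side (Ethernet) calculation only, assuming the usual RFC 2285-style definition of intended load (the rate the tester attempts to offer). The 802.11-side overheads that the draft's appendix addresses (IFS, backoff, ACKs, PHY headers) are not reproduced here.

```python
# Wired-side (Ethernet) intended-load sketch: frames/s the tester attempts
# to transmit at a given fraction of line rate. 802.11-side overheads are
# deliberately omitted; see the draft's appendix for those.

PREAMBLE_BYTES = 8   # Ethernet preamble + SFD
IFG_BYTES = 12       # minimum inter-frame gap

def intended_load_fps(line_rate_bps: float, frame_size_bytes: int, load_fraction: float) -> float:
    """Intended load in frames per second for a given frame size and load fraction."""
    bits_per_frame = (frame_size_bytes + PREAMBLE_BYTES + IFG_BYTES) * 8
    return line_rate_bps * load_fraction / bits_per_frame

# Example: 64-byte frames at 50% of gigabit line rate -> ~744,048 fps
print(round(intended_load_fps(1e9, 64, 0.5)))
```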
Reflector Discussion
• Draft improvements suggested
  • Detect IFS (and other protocol timer) cheating, as well as fudging on contention behavior
  • Detect/report WLAN coordination function type
  • Discourage or better specify open-air testing
  • Word usage issues
• "Why do we need to do this?"
  • Doesn't relate to the IP core or Internet infrastructure?
  • Aren't WLANs a desktop/home sort of thing, like 10BASE-5 hubs?
  • Why are we benchmarking an IEEE L2 standard?
Why Do We Need To Do This?
• Enterprise WLANs are highly IETF- and IP-centric
  • Protocols standardized by the CAPWAP working group in the IETF
  • WLAN switch controllers function at Layer 3, not Layer 2
    • Out of scope for the IEEE, which standardizes Layers 1/2
  • Incorporate many IETF-defined functions: ARP caching and proxying, DHCP service, firewalling, IPsec, tunneling, etc.
• Enterprise WLAN equipment is not "desktop & home"
  • Present in the core networks of large enterprises
  • Being adopted by service providers for WLAN access services
  • Enterprise equipment benchmarking is covered by BMWG's charter
• Complete lack of WLAN switch performance benchmarks
  • No common way to talk about IP performance of enterprise or service provider WLAN equipment
  • Lack of accepted benchmarks => poor equipment performance
Does BMWG Have The Expertise?
• Isn't there a lot of "scary RF" involved?
  • No! WLAN controllers are pure Ethernet/IP devices!
  • In some cases, using wireless APs to connect to the WLAN controller interfaces is simpler, BUT ...
  • The methodology omits L1/L2-specific tests and provides guidelines (see Section 3.3) for eliminating RF effects on test results, AND ...
  • At least one vendor provides test equipment that is as easy to connect to the SUT as a core router with optical ports!
• Aren't we testing wireless/RF-specific stuff?
  • No! The data plane metrics (throughput, latency, etc.) are all familiar friends from RFC 2544 and RFC 2889 (see the sketch after this slide)
  • Control plane metrics are likewise "wireless-independent": they deal with attributes of the CAPWAP architecture
  • Besides, wireless is now just as much a standard interface for an IP device as Ethernet, POS, ATM, etc.
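To show how familiar these data plane metrics are, here is a minimal sketch of the RFC 2544-style zero-loss throughput binary search that the unicast throughput test builds on; run_trial() is a hypothetical tester hook, and the search resolution is illustrative.

```python
# Sketch of an RFC 2544-style throughput search: find the highest offered
# load with zero frame loss via binary search. run_trial() is a hypothetical
# hook into the test equipment, not an API from the drafts.

def run_trial(offered_load_fps: float) -> int:
    """Hypothetical: offer traffic at the given rate for one trial and
    return the number of frames lost (0 means the SUT kept up)."""
    raise NotImplementedError

def throughput_search(max_rate_fps: float, resolution_fps: float = 100.0) -> float:
    """Binary-search for the highest zero-loss rate (the RFC 1242 'throughput')."""
    lo, hi = 0.0, max_rate_fps
    while hi - lo > resolution_fps:
        mid = (lo + hi) / 2
        if run_trial(mid) == 0:
            lo = mid   # no loss: throughput is at least this high
        else:
            hi = mid   # loss seen: back off
    return lo
```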
Next steps
• Continue to solicit comments, feedback, and support
• Update drafts based on comments and re-submit

Comments?