
WPP Study Group Tutorial



Presentation Transcript


  1. WPP Study Group Tutorial • Group created at Vancouver meeting to create test definitions and methods for 802.11 • Presentations today will cover WPP from several points of view • Large user • IC vendor • Testing community

  2. Agenda • Bob Mandeville: Iometrix • Don Berry: Microsoft • Mike Wilhoyte: Texas Instruments • Kevin Karcz: UNH-IOL • Jason A. Trachewsky: Broadcom • Fanny Mlinarsky: Azimuth Systems • Round Table Discussion and Q&A

  3. 802.11 Requirements for Testing Standards Bob Mandeville bob@iometrix.com

  4. WPP • What is the need for 802.11 metrics? • What problems will they help solve? • Who will the primary users be? • How do we go about creating new metrics for wireless?

  5. Two Approaches to Creating Testing Standards • IETF (BMWG): based on a two-step approach to definitions: a terminology document (all relevant functional characteristics are defined) and a methodology document. This method is most appropriate for performance testing. • ATM Forum, Metro Ethernet Forum: based on ratified standards documents. Each test definition is referenced to standards source text. This method is most appropriate for conformance testing.

  6. IETF BMWG Test Standards Templates • Terminology Definition Template: • Term to be defined (e.g., Latency) • Definition: The specific definition for the term. • Discussion: A brief discussion of the term, its application, and any restrictions on measurement procedures. • Measurement units: The units used to report measurements of this term, if applicable. • Issues: List of issues or conditions that affect this term. • See Also: List of other terms that are relevant to the discussion of this term. • Methodology Definition Template: • Objectives • Setup parameters • Procedures • Measurements • Reporting formats
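To make the terminology template concrete, here is a minimal sketch of how such a definition could be captured as a structured record that test tooling can consume; the class and field names are hypothetical, loosely mirroring the template fields above, and the sample entry is paraphrased rather than quoted from any BMWG RFC.

```python
# Hypothetical sketch: the BMWG terminology template as a structured record.
# Field names mirror the template above; this is not an official schema.
from dataclasses import dataclass, field

@dataclass
class TermDefinition:
    term: str                                          # e.g., "Latency"
    definition: str                                    # the specific definition
    discussion: str = ""                               # application notes, restrictions
    measurement_units: str = ""                        # units used to report results
    issues: list[str] = field(default_factory=list)    # conditions affecting the term
    see_also: list[str] = field(default_factory=list)  # related terms

# Illustrative entry (wording paraphrased, not quoted from an RFC):
latency = TermDefinition(
    term="Latency",
    definition="Interval from a frame entering the DUT to it leaving the DUT.",
    measurement_units="seconds (typically reported in milliseconds)",
    issues=["store-and-forward vs. bit-forwarding devices measure differently"],
    see_also=["Throughput"],
)
```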

  7. Conformance-Oriented Test Methods Template 1/2

  8. Conformance-Oriented Test Methods Template 2/2

  9. Sample List of 802.11 Terms to be Defined by Category:

  10. Requirements for Testing Standards • Roaming Test • A practical example which shows poor performance related to a lack of test standards definitions • To roam, an 802.11a/b/g device will: • disassociate from one AP • search for a stronger RF signal from another AP • then associate and authenticate with that AP • resume normal data transmission • Roaming can fail due to: • transient RF conditions • the time that APs and devices take to complete the four-step roaming process

  11. Test Configuration

  12. Test Procedure • Roaming Test recorded: Total Roaming Time = Decision Time + Switch Over Time • The Decision Time is the time it took the NIC to stop attempting to transmit packets to AP 1 after the attenuation of the RF signal • The Switch Over Time is the time it took the NIC to complete its association with AP 2 after it stopped attempting to transmit packets to AP 1 • During the Decision Time, cards recognized the signal attenuation and invoked proprietary algorithms to adapt their rates to slower speeds • Switch Over Time ends when the NIC receives the AP’s acknowledgement of its association request • This time should only be recorded as valid if data traffic from the NIC to the AP successfully resumes after association
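A minimal sketch of this bookkeeping, assuming the three relevant timestamps have already been extracted from a packet trace; the names and values are illustrative, not from a standard capture format:

```python
# Roaming-time arithmetic from the procedure above. Assumed inputs:
#   t_attenuation - when the RF signal toward AP 1 was attenuated
#   t_last_tx_ap1 - when the NIC stopped attempting to transmit to AP 1
#   t_assoc_ack   - when AP 2 acknowledged the NIC's association request
def roaming_times(t_attenuation: float, t_last_tx_ap1: float,
                  t_assoc_ack: float) -> dict[str, float]:
    decision_time = t_last_tx_ap1 - t_attenuation
    switch_over_time = t_assoc_ack - t_last_tx_ap1
    return {
        "decision_time": decision_time,
        "switch_over_time": switch_over_time,
        "total_roaming_time": decision_time + switch_over_time,
    }

# Example: attenuation at t = 0 s, last frame to AP 1 at t = 4.25 s,
# association ack from AP 2 at t = 5.0 s. Per the procedure, the result
# only counts if data traffic actually resumes after association.
print(roaming_times(0.0, 4.25, 5.0))
# {'decision_time': 4.25, 'switch_over_time': 0.75, 'total_roaming_time': 5.0}
```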

  13. Test Results

  14. Test Conclusions • Need to break out Decision Time and Switch Over Time • Switch Over Times are as low as 60 milliseconds and average a little over 700 milliseconds across all combinations, except for two outliers that took over 8 seconds • In the majority of cases, Decision Time is the largest contributor to the overall Roaming Time • Packet traces show that most implementations of rate adaptation algorithms maintain transmission at the lowest 1 Mbps rate for several seconds after loss of RF signal has been detected • These algorithms will need to be revisited to deliver quality roaming • Test standards for measuring roaming times can make a significant contribution by aligning all vendors and users on a common set of definitions • This applies to roaming but also to a large number of other undefined terms

  15. Challenges of Operating an Enterprise WLAN Don Berry Senior Wireless Network Engineer Microsoft Information Technology

  16. Microsoft’s Current WLAN Network • 4495 Access Points • 1 AP per 3,500 sq ft • ~15 million sq ft covered in 79 countries • 70,000 users • 500,000+ 802.1x authentications per day – EAP-TLS • Supports 5.5 and 11 Mbps only
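As a sanity check on these figures: 4,495 APs × 3,500 sq ft per AP ≈ 15.7 million sq ft, consistent with the ~15 million sq ft of coverage quoted.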

  17. Wireless Service Challenge • What is “Wireless Service”? • How is it measured? • What factors impact Wireless Service? • How do you improve Wireless Service?

  18. Wireless Service and Support • Service Goals • Make Wireless service equivalent to wired • Offer unique mobile services • Operational Goals • Reduce operational costs • Minimize support personnel

  19. How can WPP Help? • Produce criteria that reflect the client experience • Offer information that can compare different environments – Enterprise, SOHO, home

  20. Desired Outcome of WPP: A Perspective From a Si Provider Mike Wilhoyte Texas Instruments

  21. Key Issues We Face Relevant to WPP • Supporting Customers w/ Custom Test Fixtures • Are often uncontrolled, so repeatability is questionable • May introduce unintentional impairments and therefore don’t effectively isolate the feature under test • May unintentionally push beyond the boundary of the specification • May stress the system beyond what any user or evaluator will do • May overlook other critical areas of system performance • The complexity of the specification has grown since the days of 11b and, more than ever, performance is setup-dependent • Are tests really apples-to-apples?

  22. These Issues Result in: • Confusion over unexpected test results • Resource drain

  23. A Real Example: Customer ACI Test Fixture • [Diagram: DUT and test AP in a non-anechoic RF shield room with over 30 active APs, an SMIQ signal generator injecting interference, TCP/IP traffic, and people in the room observing the results] • Test: Plot TCP/IP throughput with increasing levels of interference from the SMIQ

  24. Issues With This ACI Test Fixture • Can you imagine trying to repeat any test result from this fixture in YOUR lab? • Metal walls in the shield room produce multipath, making the test results depend even more on the position of the laptop (in a fade?) • People (2.4 GHz RF absorbers) in the room • Over 30 APs active, which may couple into the RF front-end of the test AP (even though it’s cabled) • The SMIQ produces a non-realistic signal since the carrier is always present, even though it may be triggered externally • There are ways around this • The test AP is not isolated from the interference, and its behavior will affect the test result of the DUT • RX performance in the same interference • Deferral behavior in the TX (CCA) is affected • Rate adjustment behavior

  25. A “Better” ACI Test Fixture: Testing the STA • [Diagram: DUT STA and its AP on channel 6 in Anechoic Chambers 3 and 4, connected through attenuators, couplers, and 20/30 dB pads; the interfering network (AP and STA in Anechoic Chambers 1 and 2) is swept on channels 1–11 and coupled in through a PA] • The PA isolates the interfering network and is not affected by traffic in Chambers 3 and 4

  26. A “Better” ACI Test Fixture: Testing a Network (AP + STA) • [Diagram: DUT network (AP and STA) on channel 6 in Anechoic Chamber 3; the interfering network (AP and STA in Anechoic Chambers 1 and 2) is swept on channels 1–11 and coupled in through a PA, attenuators, couplers, and 20/30 dB pads] • The PA isolates the interfering network and is not affected by traffic in Chamber 3

  27. Desirable Outcomes of WPP • Develop a minimal set of test metrics that are relevant to key performance parameters such as: • Robustness • Throughput/Capacity • Range • Develop a Set of Test Best Practices that: • Produce repeatable results • Achieve the right balance between complexity and cost • The industry will use

  28. UNH-IOL Perspective on WLAN Performance Testing Kevin Karcz March 15, 2004

  29. A Quick Background • UNH-IOL Wireless Consortium has primarily focused on interoperability and conformance tests for 802.11, not performance testing • Have generated traffic loads to observe a DUT’s behavior under stress, but not specifically to measure throughput or related parameters • However, QoS is inherently about providing performance while sharing limited resources • Optimization of: Throughput, Range, Delay & Jitter • Constrained by: • User resources: CPU load, DC power • Community resources: available spectrum, aggregate users

  30. Examples of performance tests • PHY layer • Multi-path fading channel emulation using a Spirent SR5500 fader • What fading models should be used? • MAC layer • Throughput varies with the traffic generator used • CPU load differs significantly between vendors, and is much greater than the CPU load for a typical Ethernet device
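The generator dependence is easy to see in even the simplest load tool. Below is a minimal sketch of a software UDP load generator (the destination address, frame size, and rate are placeholders); its pacing depends on the host's sleep granularity and CPU load, which is exactly why two generators can report different throughput for the same DUT:

```python
# Minimal software UDP load generator. Destination, frame size, and rate
# are placeholder values; real test generators use hardware-based pacing.
import socket
import time

def send_udp_load(dst=("192.0.2.10", 5001), payload_bytes=1472,
                  frames_per_sec=1000, duration_s=5.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * payload_bytes
    interval = 1.0 / frames_per_sec
    sent = 0
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        sock.sendto(payload, dst)
        sent += 1
        time.sleep(interval)  # coarse, OS-dependent pacing: a source of variance
    return sent * payload_bytes * 8 / duration_s  # offered load, bits/s

# print(send_udp_load())  # ~11.8 Mbit/s offered if pacing were perfect
```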

  31. Clear methods of testing are needed… • As we start measuring more performance metrics • Can each layer of the network be measured independently? • Which metrics need to look at the interaction of multiple layers? • Hassle of real-world scenario testing vs. a PHY test mode? • Black-box testing requires the DUT to authenticate and associate with test equipment and interact at the MAC layer, not just the PHY layer.

  32. Some gray areas of testing • Throughput • Is throughput measured as MAC-layer payload? At the IP layer? TCP or UDP layer? • One DUT may have better PER measurements at the PHY layer than a second DUT, but may get worse throughput if its rate selection algorithm is poor • Difficult to maintain consistency in an open (uncontrolled) environment • Can throughput be measured in a cabled environment without an antenna? • What if the DUT has a phased-array antenna? • What if the device is mini-PCI and inherently has no antenna? • Range test • What if a higher TX level causes higher adjacent-channel interference and brings the aggregate throughput down for a neighboring BSS? • Power consumption • Is this just the DC power drain at the CardBus interface? • Should CPU load be included if the DUT implements much of its MAC functionality on the host PC? • Roaming • Quickest time: 1 STA, 2 APs on the same channel • More realistic: AP reboots, multiple STAs roam to a new AP on a new channel
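To illustrate just the first gray area, the sketch below counts the same nominal 1500-byte IP packet at three layers; the header sizes are standard nominal values, and 802.11 air-time overhead (preambles, ACKs, contention) is deliberately ignored:

```python
# Why "throughput" depends on where you count bytes. Nominal header sizes;
# air-time overhead (preamble, ACK, backoff) is ignored for simplicity.
IP_HDR, UDP_HDR = 20, 8              # bytes
LLC_SNAP, DOT11_MAC, FCS = 8, 24, 4  # bytes of 802.11 framing

ip_packet = 1500                                    # bytes at the IP layer
udp_payload = ip_packet - IP_HDR - UDP_HDR          # 1472 bytes at the UDP layer
mac_frame = ip_packet + LLC_SNAP + DOT11_MAC + FCS  # 1536 bytes on the air

for layer, size in [("802.11 MAC frame", mac_frame),
                    ("IP packet", ip_packet),
                    ("UDP payload", udp_payload)]:
    print(f"{layer}: {size} bytes = {size / mac_frame:.1%} of the MAC frame")
```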

  33. Why WPP’s role is important to UNH-IOL • Open standards are desired as the basis of test suite development • Defining test parameters and standardizing Test Scenarios makes ‘apples-to-apples’ comparison easier • IEEE focuses on the technical content • Our interest is the testing, not determining how results are utilized

  34. Why WPP should define the tests • UNH-IOL follows IEEE PICS for test cases • More detailed info for test results • Different cases (PDA/laptop/AP) weight test results differently

  35. Example criteria weighting

  36. Comments on Wireless LAN Performance Testing And Prediction Jason A. Trachewsky Broadcom Corporation jat@broadcom.com

  37. Topics • Test Categories for WPP • Some Test Configurations

  38. Test Categories for WPP • Deciding what parameters are to be considered is the challenge. • How do we transform user perception of performance into a set of repeatably-measurable quantities? • Throughput and Range (what environments?) • Frame latency • Visibility of APs

  39. Test Categories for WPP • How do we transform user perception of performance into a set of repeatably-measurable quantities? • Delays in association/authentication • Host CPU utilization • Ability to roam without loss of connections • Etc.

  40. Test Categories for WPP • Basic PHY/RF Measurements • Transmitter Parameter Measurements • TX EVM or Frame Error Rate (FER) with Golden/Calibrated Receiver • Carrier suppression • Carrier frequency settling

  41. Test Categories for WPP • Receiver Parameter Measurements • RX FER vs. input power • Flat channel (controlled through cabled/shielded environment) • Controlled frequency-selective channels (known multipath power-delay profile) • Antenna measurements • cable/feed losses (S11 and S21) • gain vs. azimuth and elevation angle • One can easily take a great receiver design and blow away all gains with a bad antenna or lossy feed!
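The feed-loss point is plain link-budget arithmetic: every dB of cable or feed loss (|S21|) comes straight off the received power, cancelling receiver gains one-for-one. A sketch with made-up but representative numbers:

```python
# Link-budget sketch: feed loss erodes receiver gains dB-for-dB.
# All values are illustrative, not measurements.
def received_power_dbm(tx_power_dbm, tx_ant_gain_dbi, rx_ant_gain_dbi,
                       path_loss_db, feed_loss_db):
    return (tx_power_dbm + tx_ant_gain_dbi + rx_ant_gain_dbi
            - path_loss_db - feed_loss_db)

print(received_power_dbm(15, 2, 2, 80, feed_loss_db=0))  # -61 dBm
print(received_power_dbm(15, 2, 2, 80, feed_loss_db=3))  # -64 dBm: a 3 dB
# receiver-sensitivity improvement is wiped out by 3 dB of lossy feed
```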

  42. Test Categories for WPP • MAC Layer Measurements • rate adjustment behavior • specific parameters? test conditions? • association and roaming behavior • specific parameters? test conditions? • frame latency • layer-2 throughput with encryption • host CPU cycles consumed?
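For context on the rate-adjustment item, below is a hedged sketch of an ARF-style (Auto Rate Fallback) controller, the general class of proprietary algorithm these tests would characterize; the thresholds and rate set are illustrative, not any vendor's implementation:

```python
# ARF-style rate adaptation sketch: step down after consecutive losses,
# step back up after a run of successes. Thresholds are illustrative.
RATES_MBPS = [1, 2, 5.5, 11]  # 802.11b rate set

class ArfRateController:
    def __init__(self, up_after=10, down_after=2):
        self.idx = len(RATES_MBPS) - 1  # start at the highest rate
        self.successes = 0
        self.failures = 0
        self.up_after = up_after        # consecutive ACKs before stepping up
        self.down_after = down_after    # consecutive losses before stepping down

    def on_tx_result(self, acked: bool) -> float:
        """Record one transmission outcome; return the rate for the next frame."""
        if acked:
            self.successes += 1
            self.failures = 0
            if self.successes >= self.up_after and self.idx < len(RATES_MBPS) - 1:
                self.idx += 1
                self.successes = 0
        else:
            self.failures += 1
            self.successes = 0
            if self.failures >= self.down_after and self.idx > 0:
                self.idx -= 1
                self.failures = 0
        return RATES_MBPS[self.idx]
```

A roaming test interacts with exactly this kind of logic: a controller like the one above can sit at 1 Mbps for seconds after signal loss, which is the Decision Time behavior observed in the roaming results earlier.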

  43. Test Categories for WPP • Layer-4 Measurements • UDP frame loss rate and latency vs. received power • flat or frequency-selective channels? • TCP throughput vs. received power • flat or frequency-selective channels?

  44. Test Categories for WPP • Open-air Measurements • Open-air measurements are always subject to imperfectly-known, time-varying multipath power-delay profiles • There is substantial variation at 2.4 and 5.5 GHz over tens of milliseconds

  45. Test Categories for WPP • We have established that frequency-selectivity due to multipath can result in higher-loss channels having higher capacity than lower-loss channels. • The capacity of the channel can vary rapidly. • This is a more significant problem for systems like 802.11a and 802.11g which include a large number of possible rates to better approach the practical capacity. • (The problem won’t get easier for future WLAN standards.)
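One way to make the capacity statement concrete is the textbook expression for capacity over N parallel subchannels (standard information theory, not from the slides): with subchannel width Δf, frequency response H(f_i), per-subchannel power P_i, and noise density N_0,

```latex
C \;=\; \sum_{i=1}^{N} \Delta f \,
    \log_2\!\left(1 + \frac{|H(f_i)|^2\, P_i}{N_0\, \Delta f}\right)
```

Because the log is concave, capacity rewards an even profile: a channel with lower total received power but relatively flat |H(f_i)|² can out-carry a nominally stronger channel whose energy sits next to deep notches, and the whole sum moves as the multipath profile changes.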

  46. Test Categories for WPP • Open-air Measurements • What can we do? • Try to perform as many measurements as possible over cabled networks. • Perform open-air measurements in an area in which the distance between transmitter and receiver is small compared with the distance between either transmitter or receiver and any other object, i.e., strong LOS. • Helpful but not sufficient, as even small reflections affect channel capacity.

  47. Test Categories for WPP • Open-air Measurements • What can we do? • Give up and perform a large ensemble of measurements and gather statistics.
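A sketch of the ensemble approach, reporting distribution statistics rather than a single number; measure_throughput_mbps() is a placeholder for whatever open-air measurement is being repeated, and numpy is assumed:

```python
# Summarize an ensemble of repeated open-air measurements.
import numpy as np

def summarize(samples_mbps):
    s = np.asarray(samples_mbps, dtype=float)
    return {
        "n": s.size,
        "mean": s.mean(),
        "median": float(np.median(s)),
        "p10": float(np.percentile(s, 10)),  # tails matter, not just the mean
        "p90": float(np.percentile(s, 90)),
        "stdev": s.std(ddof=1),
    }

# Hypothetical usage:
# samples = [measure_throughput_mbps() for _ in range(100)]
# print(summarize(samples))
```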

  48. Channel Measurement Block Diagram • [Diagram: four arbitrary waveform generators (ARBtx0–ARBtx3), each followed by a Filt module, on the transmit side; a receive-side scope capturing I/Q; trigger, GPIB, and DC power connections; UPS supplies; an ARBtxctl unit; and a Linux controller networked to both sides through 10/100 Ethernet hubs] • Scope provides the 10 MHz reference clock for all systems • 3 long interconnect cables connect the TX and RX sides • The Filt module includes an LNA

  49. Time- and Frequency-Selective Fading • More than a 10 dB change in received signal power in some bins over 60 ms
