TGT (Wireless Performance) Test Organization Discussion
Presented by Christopher B. Polanec, UNH InterOperability Lab
August 2004
DCN: 11-04-0883-00-0wpp
Objectives
• Provide an outline for a discussion of test organization and documentation.
• Propose some definitions to serve as the foundation of the organizational structure.
Standardizing Methodology Documentation
• Standardizing methodology documentation is necessary to make the document more readable for the end user.
• We also want to focus the end user's attention on the tests applicable to them.
• This standardization is also necessary within the group, to ensure the completeness of proposals.
• Now is a good time because:
  • We are already seeing proposals.
  • It creates the building blocks for a general methodology.
Testing Generalization
[Diagram: Input → System → Output]
• Examples:
  • Inputs: Load (application)
  • System: Environment, Topology, Configuration
  • Outputs: Throughput, Delay, Jitter, Power Consumption, CPU Load, Forwarding Rate, Roam Time
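To make the input/system/output model concrete, here is a minimal sketch in Python. The slides contain no code, so all names here are hypothetical: a measurement applies inputs to a configured system and records the output metrics.

```python
from dataclasses import dataclass, field

@dataclass
class SystemUnderTest:
    """The 'System' box from the diagram: environment, topology, configuration."""
    environment: str                  # e.g. "open air" or "shielded chamber"
    topology: str                     # e.g. "AP-1STA"
    configuration: dict = field(default_factory=dict)

@dataclass
class Measurement:
    """One input applied to the system, with the observed outputs."""
    inputs: dict                      # e.g. {"load_fps": 1000}
    outputs: dict                     # e.g. {"throughput_mbps": 22.1, "jitter_ms": 0.4}

# Usage: a single measurement against a configured system
sut = SystemUnderTest("shielded chamber", "AP-1STA", {"channel": 6})
m = Measurement({"load_fps": 1000}, {"forwarding_rate_fps": 985})
```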
Test Cases
• Definition: A group of measurements relating to a specific metric, where device and/or system configuration is kept constant (see the sketch after this list).
  • A case can have several parameter values.
• What factor(s) restrict a test case?
  • Configuration, environment, topology, etc.
• Examples:
  • Forwarding rate with fragmentation set to 256 bytes and payloads of 64, 128, 256, etc.
  • Packet error rate without noise, using a load of 500 fps, 1000 fps, 10000 fps, etc.
• Do we want to go a level lower and specify an instance of parameter values?
  • Test trials, as Tom suggested.
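A minimal sketch of the definition above (hypothetical Python, consistent with the earlier sketch): the configuration is held constant while one parameter sweeps through several values, one trial per value.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Measurements for one metric; device/system configuration held constant."""
    metric: str           # e.g. "forwarding_rate"
    fixed_config: dict    # held constant across the whole case
    parameter: str        # the swept parameter
    values: list          # one trial per value (Tom's "test trials")

# Slide example: forwarding rate with fragmentation fixed at 256 bytes
case = TestCase("forwarding_rate", {"fragmentation_bytes": 256},
                "payload_bytes", [64, 128, 256])
```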
Test Plans
• Definition: A plan that outlines a set of test cases designed to determine a specific performance metric for a device or system. It also specifies how to report results from an iteration of its procedure (see the sketch after this list).
• Examples:
  • Forwarding Rate, Packet Error Rate, Latency, Jitter, etc.
• Should test plans be even more specific?
  • Examples: AP Forwarding Rate, AP Forwarding Rate with QoS, AP Forwarding Rate with RSN
  • If test plans are left generic, many test cases will not be applicable (or of interest) to end users.
  • Should we break test plans up based on device/system types, capabilities, …?
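As a sketch (hypothetical Python, building on the TestCase sketch above): a test plan groups the cases for one metric and pins down what each iteration of the procedure must report.

```python
from dataclasses import dataclass, field

@dataclass
class TestPlan:
    """Test cases targeting one performance metric, plus reporting requirements."""
    metric: str                                        # e.g. "forwarding_rate"
    applies_to: str                                    # e.g. "AP", if plans are made specific
    cases: list = field(default_factory=list)          # TestCase instances
    report_fields: list = field(default_factory=list)  # what each iteration must report

# Usage: an AP forwarding-rate plan, one of the slide's "more specific" examples
plan = TestPlan("forwarding_rate", "AP",
                report_fields=["load_fps", "forwarding_rate_fps"])
```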
Test Suites
• Test suites are not necessary if we choose to have general test plans.
• This type of layer is necessary, whether it is called a suite or is inherent in the test plans (a sketch of the layering follows below).
• This high level of the layer does not need to be specified this early in the game.
  • Applying this level of organization is best done in one of the final phases.
• Organization does not need to be based on metric:
  • It could be based on topology (P2P, AP-#STAs, #APs-#STAs), device/system capabilities, or even input loads.
  • Metric is the best choice, because capabilities overlap and topology is too narrow.
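A minimal sketch of the layering (hypothetical Python): suites keyed by metric, as the slide prefers, collected under a document root corresponding to the outline on the next slide.

```python
from dataclasses import dataclass, field

@dataclass
class TestSuite:
    """Top organizational layer; grouped by metric per the slide's conclusion."""
    metric: str                                 # e.g. "forwarding_rate"
    plans: list = field(default_factory=list)   # TestPlan instances

@dataclass
class TGTDocument:
    """Document root: an ordered collection of test suites."""
    suites: list = field(default_factory=list)

# Usage: the next slide's outline, in miniature
doc = TGTDocument(suites=[TestSuite("forwarding_rate"),
                          TestSuite("packet_error_rate")])
```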
Mapping the Organizational Structure to the TGT Document
1 Test Suite
  1.1 Test Plan Template
  1.2 Test Plan Template
  1.3 Test Plan Template
2 Test Suite
  2.1 Test Plan Template
  2.2 Test Plan Template
3 Test Suite
  3.1 Test Plan Template
  3.2 Test Plan Template
  3.3 Test Plan Template
Test Reports
• Definition: A set of results compiled from iterations of test plans on an instance of a device (see the sketch after this list).
• Presents different types of data, such as forwarding rate, latency, etc.
• The group could suggest the contents of the test report. Content would include results as dictated by the test plans, as well as:
  • DUT/SUT description
  • Equipment lists with equipment-specific information
  • Calibration/verification results and summaries
  • More…
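A sketch of a report's contents (hypothetical Python, mirroring the bullet list above):

```python
from dataclasses import dataclass, field

@dataclass
class TestReport:
    """Results from iterations of test plans on one device instance."""
    dut_description: str                               # DUT/SUT description
    equipment: list = field(default_factory=list)      # equipment-specific details
    calibration: dict = field(default_factory=dict)    # calibration/verification summaries
    results: dict = field(default_factory=dict)        # metric -> recorded iterations

# Usage: attach forwarding-rate results to a report for one DUT
report = TestReport("Vendor X AP, firmware 1.2")
report.results["forwarding_rate"] = [{"load_fps": 1000, "forwarding_rate_fps": 985}]
```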
Structural Overview
[Diagram: TGT Document containing multiple Test Suites]
Summary
• We need a simple, effective means of organization instead of one large bucket of tests.
• The organization should help focus end users' attention on the particular sets of tests suited to their interests.
• Separating methodology from result recording is important, to distinguish between test methodology and test iterations.