Software Engineering (CSI 321) Testing Overview
Outline • What is Testing? • Testing: Why? • Testing Strategies • Verification & Validation • Testing Levels • Testing Techniques • Manual Testing vs. Automated Testing • When to stop testing? • Testing vs. Debugging • Reviews/Formal Technical Reviews • Major Testing Activities • Test Plan • Different roles in major testing activities
What is Testing? • Testing ==> Software Testing • Testing is the process of executing a system or component under specified conditions with the intent of finding defects and verifying that it satisfies its specified requirements. • Testing is a product-oriented activity. • Testing is oriented toward bug detection. • Testing is one of the most important parts (activities) of quality assurance (QA).
What is Testing? • Testing is the most commonly performed QA activity. • The basic idea of testing involves the execution of software and the observation of its behavior or outcome. • If a failure is observed, the execution record is analyzed to locate and fix the fault(s) that caused the failure. Otherwise, we gain some confidence that the software under test is more likely to fulfill its designated functions.
What is Testing? • Testing ==> Executing a program or program modules to find errors. • Good test cases are more likely to find errors than poor test cases. • The best test case is one that reveals an error. • Test cases correspond to customer requirements (see the sketch below).
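To make the last two points concrete, here is a minimal sketch in Python using the standard unittest module. The discount function and the requirements it encodes are purely illustrative assumptions; the point is that each test case traces back to a stated requirement, and the boundary-value case is the one most likely to reveal an error.

```python
import unittest

# Hypothetical function under test (illustrative only): apply a percentage discount.
def discount(price, percent):
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

class DiscountRequirementTests(unittest.TestCase):
    """Each test corresponds to a stated customer requirement."""

    def test_regular_discount(self):
        # Requirement: a 20% discount on 100.00 yields 80.00.
        self.assertAlmostEqual(discount(100.0, 20), 80.0)

    def test_invalid_percentage_is_rejected(self):
        # A good test case probes the boundary where a defect is likely to hide.
        with self.assertRaises(ValueError):
            discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```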
Testing: Why? • The purpose of software testing is to ensure that software systems work as expected when they are used by their target customers and users. • The most natural way to show fulfillment of expectations is to demonstrate operation through some “dry runs” or controlled experimentation in laboratory settings before the products are released/delivered. • Such controlled experimentation through program execution is generally called testing.
Testing: Why? • Original/primary purpose: demonstration of proper behavior, or quality demonstration • “Testing” in traditional settings: provide evidence of quality in the context of QA • New purpose: defect detection & removal • Mostly defect-free software manufacturing vs. traditional manufacturing • Flexibility of software (ease of change) • Failure observation ==> fault removal • Defect detection ==> defect fixing • This new purpose now eclipses the original one
Testing: Why? • Summary: Testing fulfills two primary purposes: • To demonstrate quality or proper behavior • To detect and fix defects/bugs
Testing Strategies • Developing a software testing strategy effectively answers questions such as: • How do you conduct the tests? • Should you develop a formal plan for your tests? • Should you test the entire program as a whole or run tests only on a small part of it? • Should you rerun tests you’ve already conducted as you add new components to a large system? • When should you involve the customer? • When should you stop testing?
Testing Strategies • Testing often accounts for more project effort than any other software engineering activity. • A systematic strategy for testing software needs to be established. • Testing begins “in the small” and progresses “to the large.” Early testing focuses on a single component or a small group of related components and applies tests to uncover errors in the data and processing logic that have been encapsulated by the components. After the components are tested, they must be integrated until the complete system is constructed.
Testing Strategies • To perform effective testing, a software team should conduct effective reviews/formal technical reviews (FTRs); doing so eliminates many errors before testing commences. • Testing begins at the component level and works “outward” toward the integration of the entire system. • Different testing techniques are appropriate at different points in time. • Testing is conducted by the developer of the software and (for large projects) an independent test group; sometimes by the customer or end users. • Testing and debugging are different activities, but debugging must be accommodated in any testing strategy.
Verification & Validation • Software testing is one element of a broader topic that is often referred to as Verification and Validation (V & V). • Verification: a process that ensures the software product is being built correctly according to its specification. • Are we building the product right? • Is the code correct with respect to its specification? Verification ==> Building the product correctly • Validation: a process that ensures the software product meets the customer requirements. • Does the specification reflect what it should? • Are we building the right product? Validation ==> Building the correct product
Testing Levels • Unit testing • Integration testing • System testing • Acceptance testing
Testing Levels • Unit Testing: • Testing of individual software components • First level of dynamic testing • Typically white-box testing • Usually done by programmers • AKA: Component testing, module testing
Testing Levels • Unit Testing: • Individual units are tested separately • Units or modules may be single functions, procedures or programs • Done incrementally, usually by the single programmer who coded the unit • Uses stubs and drivers (see the sketch below) • White-box testing is most appropriate at this stage • Tests local data structures, boundary conditions, independent paths, and error-handling paths • Informal, i.e. no formal test plan is specified and written down
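A minimal sketch of a unit test that uses a stub and a driver, written with Python's standard unittest module. The shipping_cost function, the StubRateService and the fixed rate are hypothetical names introduced only for illustration: the stub stands in for a collaborator that is not yet available, and the test class plays the role of the driver.

```python
import unittest

# Unit under test (illustrative): computes a shipping quote using a rate service.
def shipping_cost(weight_kg, rate_service):
    return round(weight_kg * rate_service.rate_per_kg(), 2)

# Stub: stands in for the real rate service, which is not integrated yet.
class StubRateService:
    def rate_per_kg(self):
        return 2.5  # fixed, predictable value chosen for the test

# The test class acts as the driver that exercises the unit in isolation.
class ShippingCostUnitTest(unittest.TestCase):
    def test_cost_uses_rate_from_service(self):
        self.assertEqual(shipping_cost(4, StubRateService()), 10.0)

if __name__ == "__main__":
    unittest.main()
```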
Testing Levels • Integration Testing: • Integration testing is a systematic technique for constructing the software architecture while at the same time conducting tests to uncover errors associated with interfacing. • Testing of two or more units/modules together • The objective is to detect interface defects between units/modules (see the sketch below)
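A minimal sketch, assuming two hypothetical modules (an order parser and a total calculator), of an integration test that exercises them together through their shared interface rather than each one in isolation.

```python
import unittest

# Two illustrative modules integrated and tested through their interface.
def parse_order(line):
    """Module A: parse 'item,quantity,unit_price' into a dict."""
    item, qty, price = line.split(",")
    return {"item": item, "quantity": int(qty), "unit_price": float(price)}

def order_total(order):
    """Module B: consumes the dict produced by module A."""
    return order["quantity"] * order["unit_price"]

class OrderIntegrationTest(unittest.TestCase):
    def test_modules_agree_on_the_interface(self):
        # A failure here would typically point to an interface mismatch
        # (wrong key names or types), not a logic error inside one unit.
        order = parse_order("widget,3,4.50")
        self.assertEqual(order_total(order), 13.5)

if __name__ == "__main__":
    unittest.main()
```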
Testing Levels • System Testing: • Conducted on complete, integrated system • Ensures entire integrated system meets requirements • Black-box in nature • Formal written test plan is necessary • Done before Performance testing
Testing Levels • Acceptance Testing: • Formal testing for product evaluation • Formal testing conducted to determine whether a system satisfies its acceptance criteria • Performed by customers/end users (preferably) • Verifies functionality and usability of the software • Prior to software being released to live operation
Testing Techniques • Two basic types of testing techniques: • Black-box testing • White-box testing
Testing Techniques • Black-box testing: • Views components as opaque • Based on requirements and functionality • Performed without any knowledge of internal design, code or language • AKA: Functional testing, behavioral testing
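A minimal black-box sketch using Python's standard calendar.isleap: the tests are derived purely from the leap-year specification, with a representative input from each equivalence partition, and never look at the implementation.

```python
import unittest

# The implementation is treated as opaque; tests come from the specification only.
from calendar import isleap as is_leap_year

class LeapYearBlackBoxTest(unittest.TestCase):
    def test_representative_inputs_from_each_partition(self):
        self.assertTrue(is_leap_year(2024))   # divisible by 4
        self.assertFalse(is_leap_year(2023))  # not divisible by 4
        self.assertFalse(is_leap_year(1900))  # divisible by 100 but not by 400
        self.assertTrue(is_leap_year(2000))   # divisible by 400

if __name__ == "__main__":
    unittest.main()
```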
Testing Techniques • White-box testing: • Views components as transparent • Based on knowledge of the internal logic • Done by programmers (usually) • AKA: Structural testing, glass-box testing, clear-box testing
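By contrast, a minimal white-box sketch: the absolute function below is illustrative, and the tests are chosen by looking at its internal structure so that every branch is executed at least once.

```python
import unittest

# Illustrative function whose internal branches the tests deliberately cover.
def absolute(x):
    if x < 0:      # branch taken for negative input
        return -x
    return x       # branch taken for zero or positive input

class AbsoluteWhiteBoxTest(unittest.TestCase):
    def test_negative_branch(self):
        self.assertEqual(absolute(-3), 3)

    def test_non_negative_branch(self):
        self.assertEqual(absolute(3), 3)
        self.assertEqual(absolute(0), 0)  # boundary between the two branches

if __name__ == "__main__":
    unittest.main()
```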
Manual Testing vs. Automated Testing • Manual Testing: • Oldest and most rigorous type of software testing • Requires a tester to perform test operations by hand • Hard to repeat • Not always reliable • Costly • Time-consuming • Labor-intensive
Manual Testing vs. Automated Testing • Automated Testing: • Testing employing software tools • Execute tests without manual intervention • Fast • Repeatable • Reliable • Reusable • Programmable • Saves time
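A minimal sketch of what “execute tests without manual intervention” can look like in practice, using Python's standard unittest discovery; the tests/ directory layout is an assumption made only for illustration. The process exit code can then drive a CI job or a nightly build without anyone watching the run.

```python
import unittest

# Discover every test module matching test_*.py under a (hypothetical) tests/
# directory and run the whole suite unattended.
if __name__ == "__main__":
    suite = unittest.defaultTestLoader.discover(start_dir="tests", pattern="test_*.py")
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # A non-zero exit code signals failure to a CI pipeline or build script.
    raise SystemExit(0 if result.wasSuccessful() else 1)
```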
Alpha Testing vs. Beta Testing Alpha testing: • Alpha testing is performed by potential users/customers or an independent test team at the developer’s site. • Conducted when the code is mostly complete or contains most of the functionality, and prior to users being involved. • Minor design changes may still be made as a result of such testing. Sometimes a select group of users is involved.
Alpha Testing vs. Beta Testing Beta testing: • Beta testing is done at the customer’s site, by the customer, in an open (real-world) environment. • Performed when development and testing are essentially complete and final bugs and problems need to be found before the final release. • Typically done by end users or others, not by programmers. Betas are often widely distributed, or even distributed to the public at large, in the hope that recipients will buy the final product when it is released. • Beta testing is a type of acceptance testing involving a software product to be marketed to many users. Sometimes, selected users receive the system first and report problems back to the developer.
When to stop testing? • When are we done testing? How do we know that we’ve tested enough? There is no definitive answer to this question! • You’re never done testing! The responsibility just shifts from the software engineers to the customer/end user. • Are you done testing when you run out of time or run out of money? What other criteria are there for stopping testing?
When to stop testing? • Criteria for stopping testing: • Resource-based criteria: • “Stop when you run out of time” • “Stop when you run out of money” • Irresponsible ==> can lead to product quality and other problems • Activity-based criteria: • “Stop when you complete the planned test activities” • “Stop when the quality goals are reached” (see the sketch below)
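A minimal sketch of an activity/quality-based stopping rule; the thresholds and field names are illustrative assumptions, not a standard. The idea is that testing stops when the planned activities are complete and the quality goals are met, not simply when the budget runs out.

```python
# Illustrative stopping rule: planned activities done AND quality goals met.
def should_stop_testing(planned_tests, executed_tests, coverage, open_defects,
                        coverage_goal=0.85, max_open_defects=0):
    planned_done = executed_tests >= planned_tests
    quality_met = coverage >= coverage_goal and open_defects <= max_open_defects
    return planned_done and quality_met

# Example: planned activities finished, but one open defect blocks the release.
print(should_stop_testing(200, 200, coverage=0.90, open_defects=1))  # False
```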
Testing vs. Debugging • Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes and removes the cause of the failure. • Test and re-test are test activities • Testing shows system failures • Re-testing proves that the defect has been corrected • Debugging and correcting defects are developer activities • Through debugging, developers can reproduce failures, investigate the state of programs and find the corresponding defect in order to correct it.
Debugging • Debugging: • Process of determining the cause of a defect and correcting it • Occurs as a consequence of a test revealing a defect
Debugging • Debugging is different from testing, but the two are often confused: • Testing aims to show that bugs exist • Debugging aims to show where bugs exist, and to remove them • Debugging starts from the observed error output • After debugging, new test cases may need to be added to the regression test suite, e.g. when a bug fix introduces new control flow into the program (see the sketch below)
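A minimal sketch of that last point; the average function and its empty-input defect are hypothetical. After the fix introduces a new branch, a regression test is added so the original failure cannot silently reappear.

```python
import unittest

# Illustrative fix: the original average() crashed on an empty list; the fix
# introduces new control flow, so a regression test is added for it.
def average(values):
    if not values:            # new branch added by the bug fix
        return 0.0
    return sum(values) / len(values)

class AverageRegressionTest(unittest.TestCase):
    def test_existing_behaviour_still_holds(self):
        self.assertEqual(average([2, 4, 6]), 4.0)

    def test_empty_input_regression(self):
        # Captures the failure found during debugging; re-run on every build.
        self.assertEqual(average([]), 0.0)

if __name__ == "__main__":
    unittest.main()
```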
Debugging • Time required to determine the nature and location of the error - 88% • Time required to correct the error - 12% • The first percentage is so high because: • The symptom and the cause may be in different parts of a program, especially when coupling is tight • The bug may be intermittent and difficult to reproduce, especially if timing plays a part • Human psychology: we see what we think is written
Static Verification • Dynamic testing is essential for: • performance analysis • system validation (what is wanted by the user) • user interface validation • Static verification involves desk checking of the: • requirements description • specification • design documents • source code listings • Static verification can’t do away with the need for dynamic testing, but it is an extremely valuable addition.
Reviews/Formal Technical Reviews • Reviews: • Inspection • Walkthrough • Technical review • Informal review • Peer review • Code review
Reviews - Walkthroughs & Inspections • The primary verification technique in the early stages of the software life cycle (requirements analysis, specification and design stages) • Reviews are meetings at which a group of people evaluate technical material by ‘walking through’ it systematically • Errors are identified, and suggestions/criticisms made. Many of these are discovered or decided before the review itself, when the documents are circulated to the team members for individual pre-review inspection • The end product is a written report stating the quality of the material and any errors detected, but usually no recommended actions
Walkthroughs and Inspections • 2 main types of reviews: • Walkthroughs (informal, usually involve peer groups, e.g. a team of programmers and no managers) • Inspections (similar in aim, but more formal, with teams of inspectors)
Program Inspections - Difficulty & Value • Primarily for unit testing. Two hours (approximately 200 program statements) is the maximum for one inspection session. After this, defect detection trails off, even though inspections are only concerned with identifying bugs rather than fixing them. • Program inspections are extremely effective, uncovering errors not detected by individual implementers in less formal conditions • Note: Inspections should not be linked with career reviews. If they are, the developer on the inspection team will detect few bugs
Testing Life Cycle • Establish test objectives. • Design test criteria (review criteria): correct, feasible, provides coverage, demonstrates functionality. • Write test cases. • Review/test the test cases themselves. • Execute test cases. • Evaluate test results.
Major Testing Activities • Major Testing Activities: Generic Testing Process • Test planning and preparation • Test Execution • Analysis & Follow-up
Major Testing Activities • Major testing activities: the generic testing process • Test planning and preparation: sets the goals for testing, selects an overall testing strategy, and prepares specific test cases and the general test procedure • Test execution: includes the related observation & measurement of product behavior • Analysis and follow-up: includes result checking and analysis to determine whether a failure has been observed; if so, follow-up activities are initiated & monitored to ensure removal of the underlying causes/faults that led to the observed failures in the first place (see the sketch below)
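A minimal sketch of this generic process in Python; the run_testing_process function, the test-case structure and the toy system under test are all illustrative assumptions. Planning supplies the cases with expected outcomes, execution records the observed behavior, and analysis turns mismatches into follow-up items.

```python
# Plan -> execute & observe -> analyze & follow up (illustrative structure).
def run_testing_process(test_cases, system_under_test):
    failures = []
    for case in test_cases:                       # execution + observation
        observed = system_under_test(case["input"])
        if observed != case["expected"]:          # analysis: compare with the plan
            failures.append({"case": case, "observed": observed})
    return failures                               # follow-up: each entry becomes a defect report

# Example: the toy "system" squares its input, but the second case expects doubling,
# so the run reports one failure to investigate.
cases = [{"input": 2, "expected": 4}, {"input": 3, "expected": 6}]
print(run_testing_process(cases, lambda x: x * x))
```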
What is a Test Plan? • A test plan is a document that describes the objectives, scope, approach, resources, schedule and focus of software testing activities. • A test plan gives detailed testing information regarding an upcoming testing effort. In other words, a test plan is a systematic approach to testing a system and typically contains a detailed understanding of what the eventual workflow will be. • Organizations may follow standard test plan guidelines (e.g. IEEE, CMM) or they can have their own customized test plan outlines.
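As a rough illustration of the sections such a document usually covers, here is a sketch expressed as a plain Python data structure; every field name and value is a made-up example, not a prescribed template (an organization following an IEEE-style outline would use its headings instead).

```python
# Illustrative outline of a test plan; field names and values are examples only.
test_plan = {
    "objectives": "Verify release 2.1 meets the functional requirements in the SRS",
    "scope": ["order entry", "billing"],           # features to be tested
    "out_of_scope": ["legacy reporting module"],   # features not to be tested
    "approach": "Unit and integration tests automated; acceptance tests manual",
    "resources": {"testers": 3, "environments": ["staging"]},
    "schedule": {"start": "2024-05-01", "end": "2024-05-21"},
    "entry_criteria": "Build passes the smoke tests",
    "exit_criteria": "No open severity-1 defects; quality goals for coverage met",
}
print(test_plan["exit_criteria"])
```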
What are the different roles (of people) in major testing activities? • Testing activities for large-scale testing are generally performed and managed with the involvement of many people who have different roles and responsibilities: • Dedicated professional testers and testing managers • Developers who are responsible for fixing problems, and who may also play the dual role of testers • Customers and users, who may also serve informally as testers for usability or beta testing • Independent professional testing organizations acting as trusted intermediaries between software vendors and customers