How does software fail?
• Wrong requirement: not what the customer wants
• Missing requirement
• Requirement impossible to implement
• Faulty design
• Faulty code
• Improperly implemented design
Terminology
• Fault identification: what fault caused the failure?
• Fault correction: change the system so the failure no longer occurs
• Fault removal: take the fault out of the system
Types of faults
• Throughput or performance fault
• Recovery fault
• Hardware and system software fault
• Documentation fault
• Standards and procedures fault
• Algorithmic fault
• Syntax fault
• Computation and precision fault
• Stress or overload fault
• Capacity or boundary fault
• Timing or coordination fault
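Two of these categories made concrete, as a minimal Python sketch (the functions are hypothetical, invented only for illustration):

```python
# Hypothetical functions, invented to illustrate two fault categories.

def last_element(items):
    # Capacity or boundary fault: the empty-list boundary case fails.
    return items[-1]  # IndexError when items == []

def average(values):
    # Computation and precision fault: // truncates, losing precision
    # whenever a fractional mean is intended.
    return sum(values) // len(values)

print(average([3, 4]))  # prints 3, though the intended mean is 3.5
try:
    last_element([])
except IndexError:
    print("boundary fault: empty input was never handled")
```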
Table 8.1. IBM orthogonal defect classification.
• Function: fault that affects capability, end-user interfaces, product interfaces, interface with hardware architecture, or global data structure
• Interface: fault in interacting with other components or drivers via calls, macros, control blocks, or parameter lists
• Checking: fault in program logic that fails to validate data and values properly before they are used
• Assignment: fault in data structure or code block initialization
• Timing/serialization: fault that involves timing of shared and real-time resources
• Build/package/merge: fault that occurs because of problems in repositories, management changes, or version control
• Documentation: fault that affects publications and maintenance notes
• Algorithm: fault involving efficiency or correctness of an algorithm or data structure, but not the design
HP fault classification (figure)
Fault distribution (HP) (figure)
Testing steps (figure): unit tests of the individual components, then integration test (against the design specifications), function test (against the system functional requirements), performance test (against other software requirements), acceptance test (against the customer requirements specification), and installation test (in the user environment), after which the system is in use.
Views of Test Objects
• Closed box or black box
   • the object is seen from the outside
   • tests are developed without knowledge of the internal structure
   • focus on inputs and outputs
• Open box or clear box
   • the object is seen from the inside
   • tests are developed so that the internal structure is exercised
   • focus on algorithms, data structures, etc.
• Mixed model
• Selection factors:
   • the number of possible logical paths
   • the nature of the input data
   • the amount of computation involved
   • the complexity of algorithms
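A sketch of the two views (hypothetical function and tests, not from the slides): the black-box test checks only the input/output contract, while the clear-box tests are chosen by reading the code so that every internal branch runs.

```python
def classify_triangle(a, b, c):
    """Return 'equilateral', 'isosceles', or 'scalene'."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box test: derived from the specification alone
# (inputs and expected outputs, no knowledge of the code).
assert classify_triangle(3, 4, 5) == "scalene"

# Clear-box tests: chosen by inspecting the code, so that each
# return statement (each internal branch) is executed.
assert classify_triangle(2, 2, 2) == "equilateral"
assert classify_triangle(2, 2, 3) == "isosceles"
assert classify_triangle(4, 5, 6) == "scalene"
```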
Examining code
• Code walkthrough
   • the developer presents the code and its documentation to a review team
   • the team comments on their correctness
• Code inspection
   • the review team checks the code and documentation against a prepared list of concerns
   • more formal than a walkthrough
• Inspections criticize the code, not the developer
Table 8.2. Typical inspection preparation and meeting times.
Development artifact       Preparation time             Meeting time
Requirements document      25 pages per hour            12 pages per hour
Functional specification   45 pages per hour            15 pages per hour
Logic specification        50 pages per hour            20 pages per hour
Source code                150 lines of code per hour   75 lines of code per hour
User documents             35 pages per hour            20 pages per hour

Table 8.3. Faults found during discovery activities.
Discovery activity     Faults found per thousand lines of code
Requirements review    2.5
Design review          5.0
Code inspection        10.0
Integration test       3.0
Acceptance test        2.0
Proving code correct
• Formal proof techniques
   + formal understanding of the program
   + mathematical correctness
   - difficult to build a model
   - usually takes more time to prove correctness than to write the code itself
• Symbolic execution
   + reduces the number of cases needed for the formal proof
   - difficult to build a model
   - may take more time to prove correctness than to write the code itself
• Automated theorem proving
   + extremely convenient
   - tools exist only for toy problems at research centers
   - an ideal prover that can infer program correctness from arbitrary code can never be built (the problem is equivalent to the halting problem for Turing machines, which is unsolvable)
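A toy illustration of why symbolic execution reduces the case count (the program fragment and representation are mine, not from the slides): instead of one argument per concrete input, correctness needs one argument per path condition.

```python
# Toy symbolic execution of:  if x > y: m = x  else: m = y
# Inputs stay symbolic; we track the path condition under which
# each path is taken and the symbolic result along that path.
paths = [
    {"path_condition": "x > y",       "m": "x"},
    {"path_condition": "not (x > y)", "m": "y"},
]
for p in paths:
    print(f"when {p['path_condition']}: m == {p['m']}")

# Proving 'm == max(x, y)' now takes one argument per path (two cases)
# rather than one per concrete input -- far fewer cases to discharge.
```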
Testing program components
• Test point or test case
   • a particular choice of input data to be used in testing a program
• Test
   • a finite collection of test cases
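In code, with a hypothetical function under test, a test is simply a finite collection of test cases, each pairing an input with its expected output:

```python
def absolute_value(x):
    return x if x >= 0 else -x

# Each (input, expected output) pair is one test case;
# the whole collection is a test.
test = [(5, 5), (-5, 5), (0, 0)]

for test_point, expected in test:
    assert absolute_value(test_point) == expected
```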
Test thoroughness
• Statement testing
• Branch testing
• Path testing
• Definition-use testing
• All-uses testing
• All-predicate-uses/some-computational-uses testing
• All-computational-uses/some-predicate-uses testing
Example: Array arrangement
(Node numbers refer to the example program's flowgraph, not shown here; paths through node 3 and the 6-1 loop-back edge distinguish the cases.)
• Statement testing:
   • 1-2-3-4-5-6-7
• Branch testing:
   • 1-2-3-4-5-6-7
   • 1-2-4-5-6-1
• Path testing:
   • 1-2-3-4-5-6-7
   • 1-2-3-4-5-6-1
   • 1-2-4-5-6-7
   • 1-2-4-5-6-1
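The flowgraph itself is not reproduced, but a hypothetical loop of the same shape (one condition inside one loop) shows why the strategies differ:

```python
def first_pass(a):
    """A single pass that swaps adjacent out-of-order elements."""
    for i in range(len(a) - 1):   # looping back corresponds to edge 6-1
        if a[i] > a[i + 1]:       # the conditional node (node 3)
            a[i], a[i + 1] = a[i + 1], a[i]
    return a                      # exit (node 7)

# Statement testing: one test case can execute every statement.
assert first_pass([2, 1]) == [1, 2]

# Branch testing also requires the if to evaluate false at least once.
assert first_pass([1, 2]) == [1, 2]

# Path testing enumerates every true/false sequence across iterations;
# with two pairs there are already four paths, e.g.:
assert first_pass([3, 1, 2]) == [1, 2, 3]   # true, then true
assert first_pass([1, 3, 2]) == [1, 2, 3]   # false, then true
```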
Relative strength of test strategies (figure): a subsumption hierarchy with all-paths testing at the top (strongest), followed by all definition-use paths and all uses; beneath these, the all-computational/some-predicate-uses and all-predicate/some-computational-uses strategies, then all predicate uses, all definitions, and all computational uses; branch testing and, weakest of all, statement testing sit at the bottom.
Comparing techniques

Table 8.5. Fault discovery percentages by fault origin.
Discovery technique   Requirements  Design  Coding  Documentation
Prototyping           40            35      35      15
Requirements review   40            15      0       5
Design review         15            55      0       15
Code inspection       20            40      65      25
Unit testing          1             5       20      0

Table 8.6. Effectiveness of fault discovery techniques (Jones 1991).
                     Requirements faults  Design faults  Code faults  Documentation faults
Reviews              Fair                 Excellent      Excellent    Good
Prototypes           Good                 Fair           Fair         Not applicable
Testing              Poor                 Poor           Good         Fair
Correctness proofs   Poor                 Poor           Fair         Fair
Integration testing
• Bottom-up
• Top-down
• Big-bang
• Sandwich testing
• Modified top-down
• Modified sandwich
Component hierarchy for the examples (figure)
Bottom-up testing (figure)
Top-down testing (figure)
Modified top-down testing (figure)
Big-bang integration (figure)
Sandwich testing (figure)
Modified sandwich testing (figure)
Comparison of integration strategies.
                                      Bottom-up  Top-down  Modified top-down  Big-bang  Sandwich  Modified sandwich
Integration                           Early      Early     Early              Late      Early     Early
Time to basic working program         Late       Early     Early              Late      Early     Early
Component drivers needed              Yes        No        Yes                Yes       Yes       Yes
Stubs needed                          No         Yes       Yes                Yes       Yes       Yes
Work parallelism at beginning         Medium     Low       Medium             High      Medium    High
Ability to test particular paths      Easy       Hard      Easy               Easy      Medium    Easy
Ability to plan and control sequence  Easy       Hard      Hard               Easy      Hard      Hard
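A minimal sketch of the scaffolding the table refers to (component names are hypothetical): top-down integration replaces not-yet-integrated lower components with stubs, while bottom-up integration exercises lower components through throwaway drivers.

```python
# A hypothetical report generator depending on a lower-level formatter
# that has not yet been integrated.

def format_entry(record):
    # Stub (for top-down testing): stands in for the real formatter,
    # returning a canned answer so the caller can be tested first.
    return "<formatted>"

def generate_report(records):
    return "\n".join(format_entry(r) for r in records)

# Driver (for bottom-up testing): throwaway code that calls a
# lower-level component directly, before its real callers exist.
def formatter_driver():
    for record in [{"id": 1}, {"id": 2}]:
        print(format_entry(record))

if __name__ == "__main__":
    print(generate_report([{"id": 1}]))
    formatter_driver()
```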
Sync-and-stabilize approach (Microsoft) (figure)
Testing OO vs. testing other systems (figure)
Where is testing OO harder? (figure)
Test planning
• Establish test objectives
• Design test cases
• Write test cases
• Test test cases
• Execute tests
• Evaluate test results
Test plan
• Purpose: to organize testing activities
• Describes the way in which we will show our customers that the software works correctly
• Developed in parallel with system development
• Contents:
   • test objectives
   • methods of test execution for each stage of testing
   • tools
   • exit criteria
   • system integration plan
   • source of test data
   • detailed list of test cases
Automated testing tools
• Test execution
   • capture and replay
   • stubs and drivers
   • automated testing environments
   • test case generators
• Code analysis
   • Static analysis
      • code analyzer
      • structure checker
      • data analyzer
      • sequence checker
   • Dynamic analysis
      • program monitor
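As a toy illustration of static analysis (not any particular commercial tool), a few lines using Python's standard ast module can act as a crude structure checker, counting branch and loop constructs in source code without running it:

```python
import ast

def structure_check(source):
    """Crude static 'structure checker': counts branch, loop, and
    function constructs in Python source without executing it."""
    tree = ast.parse(source)
    counts = {"if": 0, "loop": 0, "function": 0}
    for node in ast.walk(tree):
        if isinstance(node, ast.If):
            counts["if"] += 1
        elif isinstance(node, (ast.For, ast.While)):
            counts["loop"] += 1
        elif isinstance(node, ast.FunctionDef):
            counts["function"] += 1
    return counts

print(structure_check("def f(x):\n    if x > 0:\n        return x\n    return -x"))
# {'if': 1, 'loop': 0, 'function': 1}
```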
Sample output from static code analysis (figure)
When to stop testing
• Coverage criteria
• Fault seeding
   • estimate of remaining faults: seed the code with S artificial faults; suppose testing discovers s of the seeded faults and n actual (nonseeded) faults
   • assuming seeded and actual faults are equally easy to detect, s/S ≈ n/N, so the estimated number of actual faults is N = S·n/s
• Two-team testing
   • x = faults found by group 1, y = faults found by group 2, q = faults discovered by both, n = actual number of faults
   • effectiveness: E1 = x/n, E2 = y/n (note q ≤ x and q ≤ y)
   • group 1 found q of the y faults that group 2 found, so also E1 = q/y, giving y = q/E1
   • therefore n = y/E2 = q/(E1·E2)
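Both estimators in a minimal sketch (the function names are mine, not from the slides):

```python
def seeding_estimate(S, s, n):
    """Fault seeding: S faults seeded, s of them found, n nonseeded
    faults found; returns the estimated number of actual faults N."""
    return S * n / s

def two_team_estimate(x, y, q):
    """Two-team testing: x faults found by group 1, y by group 2,
    q by both; returns the estimated total number of faults n."""
    # From E1 = q/y and (symmetrically) E2 = q/x,
    # n = q / (E1 * E2) = x * y / q.
    return x * y / q

print(seeding_estimate(S=20, s=10, n=8))    # detection ratio 1/2 -> N = 16
print(two_team_estimate(x=25, y=20, q=10))  # n = 50
```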
When to stop testing
• Coverage criteria
• Fault seeding
• Confidence in the software
   • seed the code with S faults and claim the software contains at most N actual faults; if testing finds all S seeded faults and n actual faults, the confidence in the claim is:
   • C = 1, if n > N
   • C = S/(S - N + 1), if n ≤ N
   • e.g., claiming N = 0 with S = 10 seeded faults, all 10 found and no others: C = 10/11 ≈ 91%; with S = 49: C = 49/50 = 98%
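And a sketch of the confidence formula, under its stated assumption that every seeded fault was recovered:

```python
def confidence(S, N, n):
    """Confidence in the claim 'at most N actual faults remain',
    assuming all S seeded faults have been found and n actual
    (nonseeded) faults were discovered during testing."""
    if n > N:
        return 1.0
    return S / (S - N + 1)

print(confidence(S=10, N=0, n=0))  # 0.909... ~ 91%
print(confidence(S=49, N=0, n=0))  # 0.98
```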
Classification tree to identify fault-prone components (figure)
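The tree in the figure is built from historical component measurements; a hedged sketch of the idea using scikit-learn (the metric names and all data here are invented for illustration):

```python
# Sketch: learn a classification tree that flags fault-prone components
# from historical metrics. Requires scikit-learn; the data is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features per component: [lines of code, number of changes, design review held (0/1)]
X = [
    [120, 2, 1], [4500, 15, 0], [300, 1, 1],
    [2200, 9, 0], [800, 3, 1], [5100, 20, 1],
]
y = [0, 1, 0, 1, 0, 1]  # 1 = fault-prone in past releases

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["loc", "changes", "reviewed"]))

# Predict for a new component: 3000 LOC, 12 changes, no design review.
print(tree.predict([[3000, 12, 0]]))  # e.g., array([1]) -> flag as fault-prone
```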