Explore different testing approaches - inspection, running code, test case generation, and test stopping schemes - to enhance software quality and streamline development processes.
Different Testing Methodology • Inspection and Review (People Intensive) • Mostly applicable --- documents and non-machine-executable material • requires preparation and some training in looking for problems, recording, follow-up, etc. • Formal Proof of Correctness (Very Skill Intensive) • Mostly applicable --- very small algorithmic solutions • requires formal training in theorem proving • Running Code (Machine and People Intensive) • Applies to all executable material • requires a lot of preparation of test cases and some training in the test process, test execution, and test tools
Test Case Generation • How much do you test? • From the code point of view (what is the goal?) • 1 to 100% of executable statements • 1 to 100% of distinct paths (# of predicates + 1) • 1 to 100% of program logic (loops can be a bear!) • From the data point of view (what is the goal?) • every input variable is assigned a valid value and executed once • every input variable is assigned a valid and an invalid value and executed once • every input variable is assigned a value outside of its boundary, on the boundary, and inside of its boundary, and executed (boundary value testing; see the sketch below)
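A minimal sketch of boundary value test-case selection for a single input variable; the function name and the valid range 1..100 are made-up illustrations, not from the slides.

```python
# Boundary value testing: pick values just outside, on, and just inside
# each boundary of a variable's valid range (range 1..100 is hypothetical).

def boundary_values(low, high):
    """Return values outside, on, and inside each boundary."""
    return [
        low - 1,   # just outside the lower boundary (invalid)
        low,       # on the lower boundary
        low + 1,   # just inside the lower boundary
        high - 1,  # just inside the upper boundary
        high,      # on the upper boundary
        high + 1,  # just outside the upper boundary (invalid)
    ]

if __name__ == "__main__":
    # e.g. an input field that accepts values 1..100
    for v in boundary_values(1, 100):
        print(v)
```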
Total # of Paths vs. Total Executable Statement Coverage • Since each binary decision has 2 paths and there are 3 in sequence, there are 2^3 = 8 total "logical" paths • path1 : S1-C1-S2-C2-C3-S4 • path2 : S1-C1-S2-C2-C3-S5 • path3 : S1-C1-S2-C2-S3-C3-S4 • path4 : S1-C1-S2-C2-S3-C3-S5 • path5 : S1-C1-C2-C3-S4 • path6 : S1-C1-C2-C3-S5 • path7 : S1-C1-C2-S3-C3-S4 • path8 : S1-C1-C2-S3-C3-S5 • All-statement coverage requires: S1, C1, S2, C2, S3, C3, S4, S5 • With path1 we cover S1, C1, S2, C2, C3, S4 (S3 and S5 uncovered) • Adding path4 we cover S1, C1, S2, C2, S3, C3, S5, so all statements are covered with only 2 of the 8 paths • (Flow-graph figure: S1 leads to decision C1, which branches to S2 or straight to C2; C2 branches to S3 or straight to C3; C3 branches to S4 or S5)
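The flow graph above can be enumerated mechanically; this sketch (structure inferred from the path list on the slide, everything else assumed) confirms the 2^3 = 8 path count and that path1 plus path4 already give full statement coverage.

```python
# Enumerate the 8 logical paths of the slide's flow graph and check that
# two well-chosen paths reach 100% statement coverage.

from itertools import product

ALL_STATEMENTS = {"S1", "C1", "S2", "C2", "S3", "C3", "S4", "S5"}

def enumerate_paths():
    paths = []
    # Each binary decision doubles the path count: 2 * 2 * 2 = 8.
    for take_s2, take_s3, last in product([True, False], [True, False], ["S4", "S5"]):
        path = ["S1", "C1"]
        if take_s2:
            path.append("S2")
        path.append("C2")
        if take_s3:
            path.append("S3")
        path += ["C3", last]
        paths.append(path)
    return paths

print(len(enumerate_paths()))          # 8 logical paths

# path1 and path4 from the slide:
path1 = ["S1", "C1", "S2", "C2", "C3", "S4"]
path4 = ["S1", "C1", "S2", "C2", "S3", "C3", "S5"]
print(set(path1) | set(path4) == ALL_STATEMENTS)   # True: full statement coverage
```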
When do we Stop Testing? • Most common goal for stopping: • When we have i) executed all of our test cases and ii) fixed all the failures that came out of the testing • Less common goals for stopping: • When we have i) executed a fixed percentage of the test cases and ii) fixed all the failures that came out of the testing • When we have i) executed all the test cases and ii) fixed only a fixed percentage of the failures that came out of the testing • Least common goal for stopping: • devise a stopping scheme with a goal and stop when the goal is reached
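A small sketch of the stopping checks above; the function name, parameters, and sample numbers are illustrative assumptions, not from the slides.

```python
# Most common goal: all test cases executed AND all resulting failures fixed.
# The percentage variants simply relax one of the two thresholds.

def may_stop(executed, total_cases, fixed, failures,
             exec_goal=1.0, fix_goal=1.0):
    """Return True when the executed and fixed fractions meet their goals."""
    exec_ratio = executed / total_cases if total_cases else 1.0
    fix_ratio = fixed / failures if failures else 1.0
    return exec_ratio >= exec_goal and fix_ratio >= fix_goal

print(may_stop(executed=200, total_cases=200, fixed=12, failures=12))  # True
print(may_stop(executed=180, total_cases=200, fixed=12, failures=12,
               exec_goal=0.9))                                          # True
```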
A Devised Test Stopping Scheme • Seeding: insert faults into the product and do NOT tell the testers where they are • assumes that (detected seeded errors)/(total seeded errors) is equal to (detected real errors)/(total real errors) • look at a way to utilize this: • dse/tse = dre/tre (from above; dse = detected seeded errors, tse = total seeded errors, dre = detected real errors, tre = total real errors) • tre (total real errors) = (tse * dre)/dse • if the goal is to have x% of the estimated total real errors found prior to stopping, we can look at the ratio dre/tre • if the x% is met or exceeded, then we can STOP • if not, we can continue to run more test cases and monitor the results: • dse increases with no increase in dre (good situation: because tre = (tse*dre)/dse and tse is fixed, an increase in dse means a smaller tre estimate) • dre increases with no increase in dse (bad situation: an increase in dre drags up the tre estimate, because tre = (tse*dre)/dse) • both dse and dre increase (depends on the numbers) • Make sure all seeded faults are removed prior to release
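A minimal sketch of the scheme above, using the slide's abbreviations (tse, dse, dre, tre); the function names and the 80% default goal are illustrative assumptions, not part of the original.

```python
# Fault-seeding estimate and the stop/continue decision built on it.
#   tse = total seeded errors     dse = detected seeded errors
#   dre = detected real errors    tre = (estimated) total real errors

def estimate_total_real_errors(tse, dse, dre):
    """tre = (tse * dre) / dse, assuming dse/tse == dre/tre."""
    if dse == 0:
        raise ValueError("no seeded errors detected yet; estimate undefined")
    return (tse * dre) / dse

def may_stop_testing(tse, dse, dre, goal=0.80):
    """Stop when the detected fraction dre/tre meets or exceeds the goal.

    Note: under this estimate dre/tre works out to dse/tse, which is why
    finding more seeded faults (raising dse) improves the ratio.
    """
    tre = estimate_total_real_errors(tse, dse, dre)
    return (dre / tre) >= goal
```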
Look at an Example • tse = 20 • dse = 15 • dre = 50 • tre = (20 * 50) / 15 => approximately 67 total real errors estimated • dre/tre = 50/67 = approximately 75% • if you want a larger dre/tre ratio, such as 80% (more faults detected and fixed), then: • increase dre (this can mean that there really are more tre if there is no comparable increase in dse) • decrease the tre estimate by finding more seeded faults (increasing dse) without running into any real fault • *** another mechanism is to introduce more total seeded faults (tse) and see if dse keeps increasing without a comparable increase in dre *** • *** this method is not foolproof, because enough of an increase in dre will mean more testing is needed with more tse, and the cycle continues ***
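Plugging the slide's numbers into the estimator sketched above (the function names are the assumed ones from that sketch):

```python
tse, dse, dre = 20, 15, 50
tre = estimate_total_real_errors(tse, dse, dre)    # (20 * 50) / 15 ≈ 66.7
print(round(tre))                                  # 67
print(round(dre / tre, 2))                         # 0.75 -> roughly 75%
print(may_stop_testing(tse, dse, dre, goal=0.80))  # False: below the 80% goal, keep testing
```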
Test-Fix Cycle • The test-fix cycle involves both the testers and the fixers and therefore must be managed: • Problems found must be described and submitted for fixing • Resources must be assigned to diagnose (accept or reject) the failures found • Resources must be assigned to fix the failures • Resources must be set aside to accept the fixes and merge them into the next test "build cycle" • Resources and a schedule must be set to include re-testing of the fixes
“Possible” Problem Report • A problem report should contain some of these items: • person who found the problem • date of problem submission (problem “open” date) • problem id # • component and version id of the software that was tested • test case(s) used • brief description of the problem symptoms, possibly with screen snapshots • tester’s assessment of the problem severity • if this is a retest of another problem, the earlier problem id # • current status: open; in fix; fixed; closed • closed date, if closed
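A sketch of a problem-report record carrying the fields listed above; the class name, field names, and types are assumptions rather than any particular tool's schema.

```python
# One record per reported problem, mirroring the slide's suggested items.

from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import List, Optional

class Status(Enum):
    OPEN = "open"
    IN_FIX = "in fix"
    FIXED = "fixed"
    CLOSED = "closed"

@dataclass
class ProblemReport:
    problem_id: int
    submitter: str                      # person who found the problem
    open_date: date                     # problem "open" date
    component_id: str                   # component of the software tested
    version_id: str                     # version id of the software tested
    test_cases: List[str]               # test case(s) used
    symptoms: str                       # brief description (+ screen snapshots)
    severity: str                       # tester's assessment of severity
    earlier_problem_id: Optional[int] = None   # if this is a retest
    status: Status = Status.OPEN
    closed_date: Optional[date] = None  # set when status becomes CLOSED
```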
Test-Fix Cycle (flow diagram) • The Testing Side: test the software using material from the Software Test Library; if a problem is found, submit it to the Problem Report DB; if not, keep testing • The Fixing Side: decide whether to accept the reported problem; if accepted, fix the problem and send the fixed material to the test library; in either case, update the problem status in the Problem Report DB
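A toy walk-through of the cycle in the diagram, reusing the status values from the problem-report slide; the function name, parameters, and printed trail are illustrative assumptions.

```python
# Trace one report through the test-fix cycle: the testing side finds and
# submits a problem, the fixing side accepts or rejects it, and the status
# is updated either way.

def run_cycle(problem_found, accepted):
    trail = ["test the software (using the software test library)"]
    if not problem_found:
        return trail + ["no problem found - keep testing"]
    trail.append("submit problem report (status: open)")
    if accepted:
        trail += [
            "fix the problem (status: in fix)",
            "send fixed material to the test library (status: fixed)",
            "re-test the fix, then close the report (status: closed)",
        ]
    else:
        trail.append("reject and update the problem status (status: closed)")
    return trail

for step in run_cycle(problem_found=True, accepted=True):
    print(step)
```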