

  1. Today • Some general background • Topics of the class • Testing project • Basic definitions • Black box testing (FSM) algorithms • Why is testing difficult, in theory and practice?

  2. Before we start • What do I know about testing, anyway? • I’ve written programs and tested them • So have most of you, I would bet • Split my time at JPL between model checking & testing research • E.g., testing the file systems that will be used in the Mars Science Laboratory – JPL’s next big Mars mission

  3. They turn the file system off during EDL (Entry, Descent, and Landing), which helps me sleep at night

  4. Topics in Testing We’ll Cover • Black box (Finite State Machine) testing • Design for testability • Coverage measures • Random testing • Constraint-based testing • Debugging and test case minimization • Using model checkers for testing • Coverage revisited (“small model property”)

  5. Read All About It • No textbook for this class, only papers • Books I like that have something important to say about testing (though none of these are about testing): • The Practice of Programming, Kernighan and Pike • Programming Pearls, Bentley • Why Programs Fail: A Guide to Systematic Debugging, Zeller • Code Complete, McConnell • The Mythical Man-Month, Brooks

  6. Read All About It • Book about testing: • Introduction to Software Testing, Ammann and Offutt • I like it myself • Recommended by colleagues who’ve taught classes on testing (and are first-rate testing researchers) • Book is thorough and cleverly organized, provokes some real thought about how to test programs • I might just follow this book if doing a whole class on testing • As it is, we’ll take more of a “hit the highlights” approach • More concentrated on automated techniques: random testing, constraint-based testing, and model checking • Won’t stop me from using some of their slides for areas they cover well

  7. Testing Project • Start with source code for YAFFS (Yet Another Flash File System): an open source flash file system • Plus a set of (buggy) variations of the YAFFS code • A (minimal) automated testing framework • Project: • Write a better automated tester • Must be efficient: spend no more than 45 minutes to test a version of YAFFS • Write a test report on the YAFFS versions
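
  To make the project concrete, here is a minimal sketch of the kind of random tester you might start from. It assumes YAFFS's POSIX-style "direct" interface (yaffs_mount, yaffs_mkdir, yaffs_open, yaffs_write, yaffs_close, yaffs_unlink, yaffs_rmdir); the mount point name, path pool, flag names, and iteration count are all illustrative choices, not part of the provided framework.

      #include <stdio.h>
      #include <stdlib.h>
      #include "yaffsfs.h"   /* YAFFS direct interface (assumed available) */

      /* Small pool of paths, so random operations collide often enough
         to exercise interesting directory/file interactions. */
      static const char *paths[] = { "/M/a", "/M/b", "/M/d", "/M/d/c" };
      #define NPATHS (sizeof(paths) / sizeof(paths[0]))

      int main(void) {
          char buf[64] = { 0 };
          yaffs_mount("/M");             /* mount point name is illustrative */
          srand(42);                     /* fixed seed: reproducible runs */
          for (int i = 0; i < 100000; i++) {
              const char *p = paths[rand() % NPATHS];
              switch (rand() % 4) {
              case 0: yaffs_mkdir(p, 0777); break;
              case 1: {                  /* create/extend a file */
                  int fd = yaffs_open(p, O_CREAT | O_RDWR, 0666);
                  if (fd >= 0) { yaffs_write(fd, buf, sizeof(buf)); yaffs_close(fd); }
                  break;
              }
              case 2: yaffs_unlink(p); break;
              case 3: yaffs_rmdir(p); break;
              }
              /* The oracle goes here: compare against a reference model,
                 check invariants, etc. (see the differential-testing slide) */
          }
          yaffs_unmount("/M");
          return 0;
      }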

  8. Testing Project • You will also turn in two new buggy variations of YAFFS • Make sure your tester finds the bug • Produce a test case • Make sure the test case succeeds for original YAFFS! • You get to debug programs in lots of classes and in real life – in this class you get to bug a program intentionally, for once in your life • I will apply all of your testers to these bugs plus another (top secret) set of mutations of YAFFS that I generate • Let me know any time you think you find a bug in the original YAFFS!

  9. Testing Project • Grading criteria • Design/implementation of the tester • Effectiveness of the tester • Quality of the test report • Can I figure out how you tested YAFFS? • Can I figure out what wasn’t tested? • Can I figure out how reliable you think YAFFS is, and how buggy the various versions are? • How “interesting” and hard-to-find your new bugs are

  10. Testing Project • “he who learns to play the harp learns to play by playing it” – Aristotle, Metaphysics, Book IX • Expectations • You can program in C • You can use makefiles / build system • Hopes • Maybe we’ll find some previously unknown bugs in YAFFS – that would be cool • I hope you’ll help me make sure MSL figures out if there was life on Mars • Get started with YAFFS at http://www.yaffs.net

  11. Basic Definitions: Testing • What is software testing? • Running a program • In order to find faults • a.k.a. defects • a.k.a. errors • a.k.a. flaws • a.k.a. faults • a.k.a. BUGS

  12. Bugs • Hopper’s “bug” (moth stuck in a relay on an early machine) • “an analyzing process must equally have been performed in order to furnish the Analytical Engine with the necessary operative data; and that herein may also lie a possible source of error. Granted that the actual mechanism is unerring in its processes, the cards may give it wrong orders.” – Ada, Countess Lovelace (notes on Babbage’s Analytical Engine) • “It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise—this thing gives out and [it is] then that 'Bugs'—as such little faults and difficulties are called—show themselves and months of intense watching, study and labor are requisite. . .” – Thomas Edison

  13. Testing • What isn’t software testing? • Purely static analysis: examining a program’s source code or binary in order to find bugs, but not executing the program • Good stuff, and very important, but it’s not testing • Fuzzy borderline: if we only symbolically execute the program • For this class, we’ll stick to testing where the program actually runs (but maybe in a virtual machine)

  14. Why Testing? • Ideally: we prove code correct, using formal mathematical techniques (with a computer, not chalk) • Extremely difficult: even for some trivial programs (100 lines) and many small (5K-line) programs • Simply not practical to prove correctness in most cases – often not even for safety- or mission-critical code

  15. Why Testing? • Nearly ideally: use symbolic or abstract model checking to prove the system correct • Automatically extracts a mathematical abstraction from a system • Proves properties over all possible executions • In practice, can work well for very simple properties (“this program never crashes in this particular way”), but can’t handle complex properties (“this is a working file system”) • Doesn’t work well for programs with complex data structures (like a file system)

  16. As a last resort… • … we can actually run the program, to see if it works • This is software testing • Always necessary, even when you can prove correctness – because the proof is seldom directly tied to the actual code that runs “Beware of bugs in the above code; I have only proved it correct, not tried it” – Knuth

  17. Why Does Testing Matter? • Ariane 5: an exception-handling bug (a 64-bit to 16-bit conversion) forced self-destruct on the maiden flight: about $370 million lost • NIST report, “The Economic Impacts of Inadequate Infrastructure for Software Testing” (2002) • Inadequate software testing costs the US alone between $22 and $59 billion annually • Better approaches could cut this amount in half • Major failures: Ariane 5 explosion, Mars Polar Lander, Intel’s Pentium FDIV bug • Insufficient testing of safety-critical software can cost lives: THERAC-25 radiation machine, 3 dead • We want our programs to be reliable • Testing is how, in most cases, we find out if they are [Figures: Ariane 5; Mars Polar Lander crash site?; THERAC-25 design]

  18. Testing and Monitoring • In this first half of the class, we’ll look at which executions of a program to run • I’ll call this problem “the” testing problem • Second problem: how do we know if an execution reveals a bug? • Key question when monitoring deployed programs to handle faults or send in bug reports from the field • I’ll (mostly) take this for granted: we have a reference model or assertions to check (Klaus)

  19. Example: File System Testing • File system is a library, called by other components of the flight software • Accepts a fixed set of operations that manipulate files:

  Operation                       Result
  mkdir (“/eng”, …)               SUCCESS
  mkdir (“/data”, …)              SUCCESS
  creat (“/data/image01”, …)      SUCCESS
  creat (“/eng/fsw/code”, …)      ENOENT
  mkdir (“/data/telemetry”, …)    SUCCESS
  unlink (“/data/image01”)        SUCCESS

  [Figure: resulting tree – / contains /eng and /data; /data contains image01 and /telemetry]

  20. Example: File System Testing • Test loop (the slide’s flowchart): choose an operation F (possibly injecting a fault?) → perform F on the tested FS → perform F on the reference (if applicable) → compare return values, compare error codes, compare file systems, check invariants → repeat • Easy to detect many errors: we have access to many working file systems, and can just compare results (in this unusual case, the problem Klaus will discuss is not much of a problem)
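
  A minimal sketch of one step of that differential loop in C, assuming a host (POSIX) file system as the reference and the yaffs_* calls from the earlier sketch; it compares only success/failure, while a real tester would also compare error codes and walk both trees:

      #include <errno.h>
      #include <stdio.h>
      #include <sys/stat.h>
      #include "yaffsfs.h"   /* tested implementation (assumed) */

      /* Differential step: run the same operation on the tested FS and on
         the reference FS, then compare outcomes. */
      static void diff_mkdir(const char *tested_path, const char *ref_path) {
          int r_tested = yaffs_mkdir(tested_path, 0777);
          int r_ref    = mkdir(ref_path, 0777);
          /* Both should succeed, or both should fail. */
          if ((r_tested < 0) != (r_ref < 0))
              printf("MISMATCH: mkdir(%s): tested=%d ref=%d (ref errno=%d)\n",
                     tested_path, r_tested, r_ref, errno);
          /* A fuller oracle would also compare the tested FS's error code
             (however it reports one) against errno, and compare directory
             listings and file contents of the two trees. */
      }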

  21. Example: File System Testing • How hard would it be to just try “all” the possibilities? • Consider only the 7 core operations (mkdir, rmdir, creat, open, close, read, write) • Most of these take either a file name or a numeric argument, or both • Even for a “reasonable” (but not provably safe) limitation of the parameters, there are 266^10 executions of length 10 to try • Not a realistic possibility (unless we have 10^12 years to test)
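
  A quick sanity check on those numbers (my arithmetic, not the slide’s), assuming roughly 266 operation/argument combinations per step:

      266^{10} \approx 10^{10\,\log_{10} 266} \approx 10^{24.2} \approx 1.8 \times 10^{24} \text{ executions}

      \frac{1.8 \times 10^{24} \text{ tests}}{10^{4} \text{ tests/s} \times 3.15 \times 10^{7} \text{ s/yr}} \approx 5.7 \times 10^{12} \text{ years}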

  22. The Testing Problem • This is the topic of the first half of the class: what “questions” do we pose to the software, i.e., • How do we select a small set of executions out of a very large set of executions? • Fundamental problem of software testing research and practice • An open (and essentially unsolvable, in the general case) problem

  23. The Testing Problem / Terms • This is not a class in the management or even the basic practices of testing • Hard, important problem • But not the focus of this class • This class is going to focus on state-of-the-art automated approaches • Using tools • To catch the bugs that you don’t catch with basic practices • I will briefly cover some basic terms of testing and testing management today, then we’ll mostly dive into “How To Test It” at a more technical level

  24. Terms: Verification and Validation • These two terms appear a lot, often in vague or sloppy ways, in the literature • Verification is checking that a program matches a specification • Validation is making sure it meets the original requirements – satisfies customers, operates OK onboard the spacecraft, etc. • Verification: “you built it right” • Validation: “you built the right thing” (our focus, for the most part)

  25. Terms: Unit, Integration, System Testing • Stages of testing • Unit testing is the first phase, done by developers of modules • Integration testing combines unit-tested modules and tests how they interact • System testing tests a whole program to make sure it meets requirements • “Design testing” is testing prototypes or very abstract models before implementation – seldom mentioned, but when possible it can save your bacon • Exhaustive model checking may be possible at this stage

  26. Terms: Functional Testing • Functional testing is a related term • Tests a program from a “user’s” perspective – does it do what it should? • As opposed to unit testing, which often proceeds from the perspective of other parts of the program • Module spec/interface, not user interaction • Sort of a fuzzy line – consider a file system – how different is use by a program from a user’s use of UNIX commands at a prompt? • A building inspector does “unit testing”; you, walking through the house to see if it’s livable, perform “functional testing” • Kick the tires vs. take it for a spin?

  27. Terms: Regression Testing • Regression testing • Changes can break code, reintroduce old bugs • Things that used to work may stop working (e.g., because of another “fix”) – software regresses • Usually a set of cases that have failed (& then succeeded) in the past • Finding small regressions is an ongoing research area – analyze dependencies “. . . as a consequence of the introduction of new bugs, program maintenance requires far more system testing. . . . Theoretically, after each fix one must run the entire batch of test cases previously run against the system, to ensure that it has not been damaged in an obscure way. In practice, such regression testing must indeed approximate this theoretical idea, and it is very costly." - Brooks, The Mythical Man-Month

  28. Terms: The Oracle Problem (Klaus) (oracle: a magical source of truth, often cryptic, given by the gods) • The oracle problem • How to know if a test fails • If the oracle says every execution is good, why bother running the program? • Some obvious, easily automated approaches: • The program probably shouldn’t crash • Assertions shouldn’t be violated • Automatable, but more difficult to apply: • Differential testing (McKeeman, etc.) – when you have another program, likely correct, that does the same thing, just compare outputs over same inputs • Last resort, not automatable: • Hand inspection of executions
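
  As a tiny illustration of the cheap, automatable oracles above (hypothetical code, not from the slides), assertions turn “the program shouldn’t do X” into a failing test with no further specification needed:

      #include <assert.h>

      /* A partial oracle for a bounded stack: whatever else a correct
         implementation does, size must stay within [0, 16]. */
      typedef struct { int data[16]; int size; } Stack;

      void stack_push(Stack *s, int v) {
          assert(s->size < 16);   /* violation => the test fails, automatically */
          s->data[s->size++] = v;
      }

      int stack_pop(Stack *s) {
          assert(s->size > 0);    /* catches underflow bugs, no hand inspection */
          return s->data[--s->size];
      }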

  29. Terms: Test (Case) vs. Test Suite • Test (case): one execution of the program, that may expose a bug • Test suite: a set of executions of a program, grouped together • A test suite is made of test cases • Tester: a program that generates tests • Line gets blurry when testing functions, not programs – especially with persistent state

  30. Terms: Black Box Testing • Black box testing • Treats a program or system as a black box • That is, testing that does not look at source code or internal structure of the system • Send a program a stream of inputs, observe the outputs, decide if the system passed or failed the test • Abstracts away the internals – a useful perspective for integration and system testing • Sometimes you don’t have access to source code, and can make little use of object code • True black box? Access only over a network

  31. Terms: White Box Testing • White box testing • Opens up the box! • (also known as glass box, clear box, or structural testing) • Use source code (or other structure beyond the input/output spec.) to design test cases • Brings us to the idea of coverage

  32. Terms: Coverage • Coverage measures or metrics • Abstraction of “what a test suite tests” in a structural sense • Best explained by giving examples • Common measures: • Statement coverage • A.k.a. line coverage or basic block coverage • Which statements execute in a test suite • Decision coverage • Which boolean expressions in control structures evaluated to both true and false during suite execution • Path coverage • Which paths through a program’s control flow graph are taken in the test suite
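
  A toy C function (my example, not from the slides) showing how the three measures differ:

      /* Two independent decisions: 4 paths (TT, TF, FT, FF). */
      int classify(int x, int y) {
          int r = 0;
          if (x > 0)       /* decision 1 */
              r += 1;
          if (y > 0)       /* decision 2 */
              r += 2;
          return r;
      }

      /* classify(1, 1) alone executes every statement (100% statement
         coverage) but never takes a false branch. Adding classify(-1, -1)
         evaluates both decisions to true and false (100% decision coverage),
         yet covers only 2 of the 4 paths: path coverage is still 50%. */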

  33. Terms: Coverage Measures • In general, used to measure the quality of a test suite • Even in cases where the suite was designed for some other purpose (such as testing lots of different use scenarios) • Not always a very good measure of suite quality, but “better than nothing” • We “open the box” in white box testing partly in order to look at (and design tests to achieve) coverage • We’ll cover coverage in much more detail

  34. Terms: Mutation Testing • A mutation of a program is a version of the program with one or more random changes • Mutation testing is another way to measure the quality of a test suite • Ammann and Offutt call it syntax-based coverage • Idea: generate a large number of mutants • Run the test suite on these • If few mutants are detected, the test suite may not be very good • Difficulties • Cost of testing many versions of a program • How to generate mutants (operators) • In principle, can subsume many other forms of coverage
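
  For instance, a mutant (my example) might change a single relational operator; note that some mutants are equivalent to the original, which is one reason “few mutants detected” must be interpreted with care:

      /* Original */
      int max(int a, int b) { return (a > b) ? a : b; }

      /* Mutant: '>' changed to '>=' */
      int max_mutant(int a, int b) { return (a >= b) ? a : b; }

      /* When a == b the two branches return equal values, so this mutant
         agrees with the original on every input: it is equivalent, and no
         test can kill it. A mutant changing '>' to '<' computes min instead,
         and is killed by any test with a != b. */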

  35. Black Box (Finite State Machine) Testing

  36. Black box (FSM) testing • Let’s step back from software testing • Let’s look at a simpler model • Finite state machines • Software is a finite state machine • What? Software is a Turing machine, right? Only with an infinite tape. That is, only if your software has access to infinite memory. [Photo: Lego “Turing machine”]

  37. Black box (FSM) testing • With static memory allocation or with limited dynamic allocation, nothing is infinite • Even if you add in disk or network storage • We don’t have infinite electrons, much less memory • So software systems are finite state machines, in reality • Don’t you feel better now? • No more late nights worrying about the halting problem! [Photo: electron – there are only ~10^79 of these little guys, y’know?]

  38. Black box (FSM) testing • Theoretical issues aside, why do we care about testing finite state machines? • Abstraction: designs can often be best understood as finite-state machines • String processing/searching • Protocols – communication, cache coherence, etc. • Control component of any discrete system • Automatic abstraction: • Tools that take systems and produce (coarse) finite state abstractions

  39. Black box (FSM) testing • Useful for modeling aspects of many designs [Figure: FSM of a file-descriptor lifecycle – FD = open(“/foo”); loops on read(FD, buf, nbytes) and write(FD, buf, nbytes); close(FD) returns to the start]

  40. Very Simple FSM Model • FSM is a tuple <S, Σ, T, I> • S is a set of states • Σ is the input alphabet • T is the transition relation: T ⊆ S × Σ × S • I ∈ S is the initial state • Further assume: • Machine is deterministic • T is a (partial) function S × Σ → S • Given an input from Σ, the machine either • Outputs 0 (if no transition) • Or outputs 1 and takes the transition to s’ [Figure: example FSM with transitions labeled a, b, c, d and outputs 0/1]
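
  A direct C encoding of this model (my sketch, with illustrative sizes): a dense transition table where -1 marks “no transition”, so stepping the machine also yields the 0/1 output defined above:

      #define NSTATES 5
      #define NSYMS   4    /* alphabet {a, b, c, d} encoded as 0..3 */

      /* next[s][x] is the successor state, or -1 if no transition. */
      typedef struct {
          int next[NSTATES][NSYMS];
          int state;               /* current state; starts at initial state I */
      } FSM;

      /* Feed one input symbol; returns the output. */
      int fsm_step(FSM *m, int sym) {
          int s2 = m->next[m->state][sym];
          if (s2 < 0) return 0;    /* no transition: output 0, state unchanged */
          m->state = s2;           /* take the transition to s' */
          return 1;                /* output 1 */
      }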

  41. Conformance Testing • How do we test finite state machines? • Let’s say we have • Known FSM A • Know all states and transitions • Unknown FSM B (same alphabet) • Can only perform experiments • How do we tell if A = B? • Known as the conformance testing or equivalence testing problem • As stated, we cannot solve the problem • Why? [Figure: machine A, with transitions labeled a, b, c, d]

  42. Combination Lock Machine • How many states does B have? • If we don’t know, we can never be sure it is the same machine as A • B is a combination lock: looks like A unless we input the exact sequence “b u g” – in which case it deadlocks [Figure: Machine A vs. Machine B; B advances on “b”, then “u”, then “g” (deadlock), while the remaining letters (a, c-z / a-f, h-z / a-t, v-z) behave as in A, which accepts a-z]
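
  In code, B is trivial to build and, from the outside, indistinguishable from A until the exact combination is entered. A sketch (alphabet and outputs simplified; wrong letters just reset the lock, ignoring the longest-prefix subtlety):

      /* Machine A: a single state that accepts every letter forever. */
      int A_step(char c) { return 1; }

      /* Machine B: identical to A unless it sees the exact sequence "bug". */
      int B_step(char c) {
          static int progress = 0;        /* how much of "bug" seen so far */
          static const char *combo = "bug";
          if (progress >= 3) return 0;    /* deadlocked: reject everything */
          progress = (c == combo[progress]) ? progress + 1 : 0;
          return 1;
      }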

  43. Combination Lock Machine • Even if we know an upper limit n on B’s size, for an alphabet of size |Σ| • It takes on the order of |Σ|^n tests to check equivalence to this particular A • This pathological case imposes some limits on conformance testing in general [Figure: Machines A and B, as on the previous slide]

  44. Conformance Testing (VC Algorithm) • Algorithm due to Vasilevskii and Chow for conformance testing • Assumptions • A is minimized, has m states • B has no more than n states • A, B both have a reliable reset • We can start from the initial state at will • Worst-case complexity: O(n^2 · m · |Σ|^(n-m+1)) • I’ll cover this quickly and informally, skipping over the sub-algorithms

  45. Conformance Testing (VC Algorithm) • First, we find a path to each state of A • Typically, we compute a spanning tree • For example, by a depth first search (DFS) • Call this set P • Read the paths off of the tree: P = {<no input>, a, aa, aad, ab} [Figure: spanning tree of A, edges labeled a, b, c, d]

  46. Conformance Testing (VC Algorithm) • Next, compute a characterizing (or distinguishing) set for A • Set W of input sequences such that • ∀ s, s’ ∈ S with s ≠ s’ • there exists w ∈ W such that • the output for w from s is not equal to the output for w from s’ • i.e., we can use W to tell what state we’re in

  47. Conformance Testing (VC Algorithm) • Next, compute a characterizing (or distinguishing) set for A • For example, W for A might be: • {aa, b} • aa: outputs 11, 10, 01, 00, 10 from the five states • Distinguishes all but the two states that both output 10 • Which are distinguished by b (1 vs. 0) • Can we find another (better?) set? [Figure: machine A, as before]
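
  A brute-force check (my sketch, reusing the FSM encoding from slide 40) that a candidate W really is characterizing, i.e., distinguishes every pair of distinct states:

      #include <string.h>

      /* Output string of machine m run from state s on word w. */
      static void run_from(const FSM *m, int s, const int *w, int len, char *out) {
          FSM copy = *m;
          copy.state = s;
          for (int i = 0; i < len; i++)
              out[i] = '0' + fsm_step(&copy, w[i]);
          out[len] = '\0';
      }

      /* Returns 1 iff for every pair s != s', some w in W yields different
         output strings - exactly the definition on the previous slide. */
      int is_characterizing(const FSM *m, const int W[][8], const int *wlen, int nw) {
          char o1[9], o2[9];
          for (int s = 0; s < NSTATES; s++)
              for (int t = s + 1; t < NSTATES; t++) {
                  int distinguished = 0;
                  for (int k = 0; k < nw && !distinguished; k++) {
                      run_from(m, s, W[k], wlen[k], o1);
                      run_from(m, t, W[k], wlen[k], o2);
                      distinguished = (strcmp(o1, o2) != 0);
                  }
                  if (!distinguished) return 0;  /* s, t look identical under W */
              }
          return 1;
      }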

  48. Conformance Testing (VC Algorithm) • Now we can compute Z: • Z = W ∪ Σ·W ∪ Σ^2·W ∪ … ∪ Σ^(n-m)·W • (W, prefixed by every input string of length up to n-m) • To test B for conformance with A • Run the tests produced by taking the cross-product of P and Z (every p ∈ P followed by every z ∈ Z) on both A and B
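
  Test generation is then just string concatenation. A sketch for the slide’s example (P and W as given, alphabet {a, b, c, d}, and n - m = 1, so Z stops at Σ·W):

      #include <stdio.h>
      #include <string.h>

      int main(void) {
          const char *P[] = { "", "a", "aa", "aad", "ab" };
          const char *W[] = { "aa", "b" };
          const char *sigma = "abcd";
          char Z[64][8];
          int nz = 0;

          /* Z = W  union  Sigma.W  (stop here because n - m = 1) */
          for (int i = 0; i < 2; i++)
              strcpy(Z[nz++], W[i]);
          for (int c = 0; c < 4; c++)
              for (int i = 0; i < 2; i++)
                  snprintf(Z[nz++], sizeof Z[0], "%c%s", sigma[c], W[i]);

          /* Emit P x Z: reset the machine, then run p followed by z. */
          for (int p = 0; p < 5; p++)
              for (int z = 0; z < nz; z++)
                  printf("reset; run \"%s %s\"\n", P[p], Z[z]);
          return 0;   /* 5 * 10 = 50 tests, matching the next slide's list */
      }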

  49. Conformance Testing (VC Algorithm) • P: {<>, a, aa, aad, ab} • W: {aa, b} • Let’s say we know B has no more than 6 states • The complete testing sequence (with reset before each test on each machine) is: • {aa, b, a aa, a b, aa aa, aa b, aad aa, aad b, ab aa, ab b, a aa, b aa, c aa, d aa, a b, b b, c b, d b, a a aa, a b aa, a c aa, a d aa, a a b, a b b, a c b, a d b, aa a aa, aa b aa, aa c aa, aa d aa, aa a b, aa b b, aa c b, aa d b, aad a aa, aad b aa, aad c aa, aad d aa, aad a b, aad b b, aad c b, aad d b, ab a aa, ab b aa, ab c aa, ab d aa, ab a b, ab b b, ab c b, ab d b} [Figure: machine A]

  50. Conformance Testing (VC Algorithm) • As this small example shows, exhaustive tests can be very expensive • In general, we cannot computationally afford to perform complete testing • We will always face the risk of missing errors • Even when we reduce our problem to the simplest model • The complexity of testing full equivalence to a reference model is simply too high • Exhaustion is exhausting
