Politehnica University of Timisoara
Mobile Computing, Sensors Network and Embedded Systems Laboratory
Embedded Systems Testing
instructor: Assist. Prof. Razvan BOGDAN, razvan.bogdan@cs.upt.ro
Administrative • Laboratories • 312, Vector Lab • CANoe, Vector tools and devices • Student counseling • Wednesday 16-18.30 in B413A/B414 • Evaluation • Final mark: • Exam (30%) + lab (40%) + Online activity (30%) • Embedded Systems, Embedded Systems Testing, Ambient Intelligence, Ambient-Assisted Living, Complex Networks, Internet-of-Things, e-Learning, Big Data • razvan.bogdan@cs.upt.ro, B413A/B414
Course objectives • Learning skills for planning tests • Applying software testing skills to your projects • Selecting appropriate testing techniques and objectives • Learning a common terminology • Learning practical tools for automotive testing
Textbook • Main textbook: • Andreas Spillner, Tilo Linz, Hans Schaefer, Software Testing Foundations, O'Reilly Media, 2014 • Rex Black, Erik van Veenendaal, Dorothy Graham, Foundations of Software Testing, 3rd Edition, 2012, Cengage Learning -> check student resources • International Software Testing Qualification Board, The Certified Tester Foundation Level in Software Testing, 2011, http://www.istqb.org/downloads/syllabi/foundation-level-syllabus.html
Content • Fundamentals of testing • Why is testing necessary? • What is testing? • Testing principles • Fundamental test process • The psychology of testing • Testing Throughout the Software Life Cycle • Software Development Models • Test Levels • Testing Types • Maintenance Testing • Static Techniques • Static Techniques and the Test Process • Review Process • Static Analysis by Tools
Content • Testing Design Techniques • The test development process • Categories of Test Design Techniques • Specification-based or Black-box Techniques • Structure-based or White-box Techniques • Experience-based Techniques • Choosing Test Techniques • Testing Techniques in Automotive • Transaction Flow Modeling, All Round-Trip Paths, Loop Testing, Data Flow Testing, Equivalence Partitioning, Boundary Value Analysis, Regression Testing, Negative Testing, Error Guessing, Error Handling Testing, Recovery Testing, Stress Testing, Load Testing • Test Management • Test Organization, Test Planning and Estimation, Test Progress Monitoring and Control, Configuration Management, Risk and Testing, Incident Management • Test Tools
Politehnica University of Timisoara
Mobile Computing, Sensors Network and Embedded Systems Laboratory
Embedded Systems Testing
Fundamentals of Testing
instructor: Razvan BOGDAN
Outline • Why is testing necessary? • What is testing? • Testing principles • Fundamental test process • The psychology of testing
Why is testing necessary? • A. The economic importance of software • Today's machines and equipment rely largely on software • B. Software Quality • The quality of software has become the determining factor in the success of technical and commercial systems; such systems are found in a wide range of commercial products
Why is testing necessary? • Growing importance of Embedded Systems • As of January 2014: • 90% of American adults have a cell phone • 58% of American adults have a smartphone • 32% of American adults own an e-reader • 42% of American adults own a tablet computer • As of May 2013, 63% of adult cell owners use their phones to go online. • 34% of cell internet users go online mostly using their phones, and not using some other device such as a desktop or laptop computer
Why is testing necessary? • Testing is necessary because we all make mistakes. • Some of those mistakes are unimportant, but some of them are expensive or dangerous. • We need to check everything and anything we produce because humans make mistakes all the time
Why is testing necessary? • Some mistakes come from bad assumptions and blind spots, so we might make the same mistakes when we check our own work as we made when we did it. • We may not notice the flaws in what we have done. Ideally, we should get someone else to check our work - another person is more likely to spot the flaws. • => C. Testing for quality improvement • Testing and reviewing ensure the quality of software products, but also the quality of the software development process itself.
Why is testing necessary? • Failure example 1: Ariane 5 Launch
Why is testing necessary? • Failure example 2: Patriot Missile
Why is testing necessary? • Failure example 3: USM Application System
Software systems context Testing Principle - Testing is context dependent Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site. • Not all software systems carry the same level of risk and not all problems have the same impact when they occur. • A risk is something that has not happened yet and it may never happen; it is a potential problem • When we discuss risks, we need to consider how likely it is that the problem would occur and the impact if it happens
Software systems context • For example: • whenever we cross the road, there is some risk that we'll be injured by a car. • The likelihood depends on factors such as how much traffic is on the road, whether there is a safe crossing place, how well we can see, and how fast we can cross. • The impact depends on how fast the car is going, whether we are wearing protective things, our age and our health. • The risk for a particular person can be worked out and therefore the best road-crossing strategy can be offered.
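To make the likelihood-times-impact reasoning above concrete, here is a minimal sketch in C; the 1-5 ordinal scales and example values are illustrative assumptions, not part of the course material.

```c
#include <stdio.h>

/* Illustrative sketch only: risk exposure = likelihood x impact,
 * using assumed 1..5 ordinal scales for both factors. */
typedef struct {
    const char *name;
    int likelihood;   /* 1 = rare ... 5 = almost certain (assumed scale)     */
    int impact;       /* 1 = negligible ... 5 = catastrophic (assumed scale) */
} Risk;

static int risk_exposure(const Risk *r)
{
    return r->likelihood * r->impact;
}

int main(void)
{
    Risk risks[] = {
        { "Crossing a quiet street",  2, 4 },
        { "Crossing a busy motorway", 4, 5 },
    };
    for (int i = 0; i < 2; i++)
        printf("%-26s exposure = %d\n", risks[i].name, risk_exposure(&risks[i]));
    return 0;
}
```

The higher the exposure value, the more attention that area deserves; this is the intuition behind the risk-based testing discussed later.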
Causes of software defects • Human Error • A defect was introduced into the software code, the data or the configuration parameters • Causes of human errors: time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions • Environmental Conditions • Changes in environmental conditions • Causes of negative environmental conditions: radiation, magnetism, electronic fields, and pollution can cause faults in firmware or influence the execution of software by changing the hardware conditions
Causes of software defects • Error (IEEE 610) • A human action that produces an incorrect result • Example: a programming error • Defect (also known as Fault or Bug) • A flaw in a component or system that can cause the component or system to fail to perform its required function • If executed, a defect can cause a failure • Example: an incorrect statement or data definition • Failure • The physical or functional manifestation of a defect. A defect, if encountered during execution, may cause a failure • Deviation of the software from its expected delivery or service [after Fenton] • => Defects cause failures; the work of testers is to provoke failures so that the underlying defects can be found and corrected
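A small hypothetical C example (not from the slides) showing how the three terms relate: the programmer's error becomes a defect in the code, and the defect only produces a failure when the faulty statement is executed.

```c
#include <stdio.h>

/* Error (human mistake): the programmer wrote "<=" instead of "<".
 * Defect (fault/bug): the faulty loop bound that is now in the code. */
static int sum_array(const int *values, int count)
{
    int sum = 0;
    for (int i = 0; i <= count; i++)   /* defect: reads one element too many */
        sum += values[i];
    return sum;
}

int main(void)
{
    int data[3] = { 1, 2, 3 };
    /* Failure: only when this code is executed does the defect show up,
     * e.g. as a wrong sum or a crash caused by the out-of-bounds read. */
    printf("sum = %d (expected 6)\n", sum_array(data, 3));
    return 0;
}
```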
When do defects arise? • Unfortunately, requirements and design defects are not rare; assessments of thousands of projects have shown that defects introduced during requirements and design make up close to half of the total number of defects • Defects should be detected as early as possible in the process
What is the cost of defects? • Consider the old English proverb 'a stitch in time saves nine': if you mend a tear in your sleeve now, while it is small, it is easy to fix; if you leave it, it will get worse and need more stitches. • In the same way, the cost of fixing a defect rises steeply the later in the development process it is found.
Role of Testing in Software Development, Maintenance and Operations • Increasing software quality • Testing helps to furnish the software with the desired attributes, namely to remove defects leading to failures • Reduction of the risk of encountering errors • Appropriate test activities will reduce the risk that errors are encountered during software operation • Meeting obligations • Tests might be mandatory because of contractual or legal requirements, or to meet industry standards
Testing and quality • Definition Software: Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system. [IEEE 610] • Definition Software Quality: The totality of functionality and features of a software product that bear on its ability to satisfy stated or implied needs. [After ISO 9126] • Definition Quality: The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations. [After IEEE 610]
Testing and quality • Testing helps us to measure the quality of software in terms of the number of defects found, the tests run, and the system covered by the tests. • Projects aim to deliver software to specification. • For the project to deliver what the customer needs requires a correct specification. • Additionally, the delivered system must meet the specification. • This is known as validation ('is this the right specification?') and verification ('is the system correct to specification?').
Testing and quality • According to ISO/IEC 9126 software quality consists of: • Functional quality attributes • Functionality • Non-functional quality attributes (how well the software runs) • Reliability • Usability • Efficiency • Maintainability • Portability • Types of Quality Assurance (QA) • Constructive activities to prevent defects, for example: through appropriate methods of software engineering • Analytical activities for finding defects, for example: through testing, leading to correcting defects and preventing failures, therefore increasing the software quality
Testing and quality • Constructive quality assurance • Quality of process – Quality management • Defects that have been made once should not be repeated => prevent defects
Testing and quality • Analytical quality assurance • Quality of product – Verification and test procedure • Defects should be detected as early as possible in the process • Static testing – examination without executing the program • Dynamic testing – includes executing the program
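As a hedged illustration of the distinction (the example itself is not from the slides): static testing examines the code without running it, so a reviewer or static-analysis tool can spot the suspicious assignment below, while dynamic testing has to execute the code and compare actual against expected results.

```c
#include <assert.h>

/* Static testing: a reviewer or static-analysis tool (and most compilers
 * with warnings enabled) can flag that "=" is used where "==" was meant,
 * without ever executing the function. */
static int is_zero(int x)
{
    if (x = 0)          /* defect: assignment instead of comparison */
        return 1;
    return 0;
}

/* Dynamic testing: the defect is revealed only by running the code
 * with test inputs and checking the actual result. */
int main(void)
{
    assert(is_zero(0) == 1);   /* fails at run time, exposing the defect */
    return 0;
}
```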
Test Goals • Gain knowledge about defects in the test object • Defects contained in the test objects must be detected and be described in such a way as to facilitate their correction • Proof of functionality • System functionality should be implemented as specified • Generating information • Before handing over a software system to the users, information about possible risks has to be provided. Gaining such information might be one of the test goals • Gaining confidence • Software that has been well tested is trusted to meet the expected functionality and to have a high quality level.
How much testing is enough? • Exit criteria • Not finding any more defects is not an appropriate criterion to stop testing activities. There might be other metrics that are needed to adequately reflect the quality level reached. • The set of generic and specific conditions, agreed upon with the stakeholders for permitting a process to be officially completed. • The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. [After Gilb and Graham] • Time and budget testing • The amount of resources available (personnel, time and budget) might determine the extent of testing efforts
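As an illustration only (the metric names and thresholds below are assumptions, not taken from the course), exit criteria are typically expressed as measurable conditions that can be checked mechanically at the end of a test cycle:

```c
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical exit criteria for one test cycle; all field names and
 * thresholds are assumptions chosen for illustration. */
typedef struct {
    double statement_coverage;    /* achieved structural coverage, in percent */
    int    open_critical_defects; /* defects still open at highest severity   */
    int    planned_tests;
    int    executed_tests;
} TestCycleStatus;

static bool exit_criteria_met(const TestCycleStatus *s)
{
    return s->statement_coverage >= 80.0          /* assumed coverage target  */
        && s->open_critical_defects == 0          /* no open critical defects */
        && s->executed_tests == s->planned_tests; /* all planned tests run    */
}

int main(void)
{
    TestCycleStatus status = { 83.5, 0, 120, 120 };
    printf("stop testing: %s\n", exit_criteria_met(&status) ? "yes" : "no");
    return 0;
}
```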
How much testing is enough? • Instead we need a test approach which provides the right amount of testing for the project. • We do this by aligning the testing we do with the risks (risk-based testing) for the customers, the stakeholders, the project and the software. • Assessing and managing risk is one of the most important activities in any project, and is a key activity and reason for testing. • Deciding how much testing is enough should take account of the level of risk, including technical and business risks related to the product, and project constraints such as time and budget
Test Case, Test Basis • Test Case (according to IEEE 829) • Test case definitions include at least the following information • Pre-conditions • Set of input values • Set of expected values • Expected post-conditions • Unique identifier • Dependence on other test cases • Reference to the requirement that will be tested • How to execute the test and check the results • Priority • Test Basis • Set of documents (e.g. requirements document) defining the requirements of a component or system. Used as the basis for the development of test cases • What about exit criteria? Integration tests?
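A minimal sketch of how the test case fields listed above could be captured in code; the field set follows the slide (after IEEE 829), while the C structure, names and example values are assumptions for illustration.

```c
#include <stdio.h>

/* Sketch of a test case record holding the information listed above
 * (after IEEE 829); field names, types and contents are illustrative. */
typedef struct {
    const char *id;              /* unique identifier                    */
    const char *requirement_ref; /* reference to the tested requirement  */
    const char *preconditions;
    const char *inputs;          /* set of input values                  */
    const char *expected_result; /* expected values / post-conditions    */
    const char *depends_on;      /* dependence on other test cases       */
    const char *how_to_execute;  /* execution and result-checking steps  */
    int priority;
} TestCase;

int main(void)
{
    /* Purely hypothetical example instance from an automotive context. */
    TestCase tc_001 = {
        .id              = "TC-001",
        .requirement_ref = "REQ-042",
        .preconditions   = "ECU powered, CAN bus idle",
        .inputs          = "speed = 0 km/h, brake pedal pressed",
        .expected_result = "brake light output active within 100 ms",
        .depends_on      = "none",
        .how_to_execute  = "send CAN frame, measure output pin, log result",
        .priority        = 1,
    };
    printf("test case %s covers %s\n", tc_001.id, tc_001.requirement_ref);
    return 0;
}
```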
Software development and reviews • Code development and reviews • Code: computer instructions and data definitions expressed in a programming language or in a form output by an assembler, compiler or other translator. [IEEE 610] • Software – Development: a complex process/sequence of activities aiming at implementing a computer system. It usually follows a software development model. • Requirement: • A requirement describes a functional attribute that is desired or seen as mandatory • A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. [After IEEE 610]
Why is Testing Necessary? - Summary • Summary • Software failures may cause enormous damage • Software quality is the sum of attributes that refer to the capability of the software to meet given requirements • Constructive software quality assurance deals with preventing defects • Analytical software quality assurance deals with finding defects and correcting them • Functional and non-functional quality attributes define the total quality of the system • Each test has to have predefined exit criteria. Reaching the exit criteria will conclude testing activities • Testers look for failures in the system and report them (testing) • Developers look for defects and correct them (debugging)
Outline • Why is testing necessary? • What is testing? • Testing principles • Fundamental test process • The psychology of testing
WHAT IS TESTING? • The driving test - an analogy for software testing • In a driving test, the examiner critically assesses the candidate's driving, noting every mistake, large or small, made by the driver under test. The examiner takes the driver through a route which tests many possible driving activities • Some of the activities must be tested. For example, in the UK, an emergency stop test is always carried out • At the end of the test, the examiner makes a judgment about the driver's performance • The examiner bases the judgment on the number and severity of the failures identified, and also whether the driver has been able to meet the driving requirements.
WHAT IS TESTING? • Testing means more than running tests • Running tests is only one part of testing • Test process includes: • Planning and control • Choosing test conditions • Designing and executing test cases • Checking results • Evaluating exit criteria • Reporting on the testing process and system under test • Finalizing or completing closure activities • Reviewing documents, source codes and conducting static analysis also help to prevent defects appearing in the code
WHAT IS TESTING? • Testing Objectives • Finding defects - the emphasis differs by test level: • In development testing: to cause as many failures as possible • In acceptance testing: to confirm that the system works as expected • Gaining confidence about the level of quality • Providing information for decision-making • To assess the quality of the software and give stakeholders information about the risk of releasing the system at a given time • Preventing defects • In maintenance testing: to check that no new defects have been introduced during development of the changes
WHAT IS TESTING? • Terms: Software Development • Debugging • The process of finding, analyzing and removing the causes of failures in software. • Review • An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. [After IEEE 1028]
WHAT IS TESTING? • [Diagram: Test → Debugging → Correcting defects → Retest; debugging and correcting defects are developer tasks] • Testing and Debugging • Test and retest are test activities • Testing shows system failures • Retesting proves that the defect has been corrected • Debugging and correcting defects are developer activities • Through debugging, developers can reproduce failures, investigate the state of the program and find the corresponding defect in order to correct it
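A hypothetical C sketch of this test / debug / correct / retest cycle (the function and the defect are invented for illustration): the tester's test exposes the failure, the developer debugs and corrects the defect, and re-running the same test (retesting) confirms the correction.

```c
#include <assert.h>

/* Function under test (illustrative).
 * Defective version:  return celsius * (9 / 5) + 32;
 *   -> integer division makes 9 / 5 == 1, so 100 C gave 132 F: a failure
 *      observed by the tester. Debugging localised the defect; the
 *      corrected statement is shown below. */
static int celsius_to_fahrenheit(int celsius)
{
    return celsius * 9 / 5 + 32;
}

/* Tester's view: the first run of this test showed the failure;
 * re-running it after the correction (retesting) shows it now passes. */
int main(void)
{
    assert(celsius_to_fahrenheit(0)   == 32);
    assert(celsius_to_fahrenheit(100) == 212);
    return 0;
}
```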
WHAT IS TESTING? - Summary • The fundamental test process comprises • Test planning and control • Test analysis and design • Test implementation and execution • Evaluating exit criteria and reporting • Test closure activities • Testing objectives • May be: finding defects, gaining confidence in the level of quality, providing information for decision-making, preventing defects • The thought process and activities involved in designing tests early in the life cycle help to prevent defects • Dynamic testing • Means executing the software; static testing means reviewing documents • Testing shows failures that are caused by defects; debugging finds, analyses and removes the cause of the failure
Outline • Why is testing necessary? • What is testing? • Testing principles • Fundamental test process • The psychology of testing
SEVEN TESTING PRINCIPLES • Principle 1: Testing shows presence of defects • Principle 2: Exhaustive testing is impossible • Principle 3: Early testing • Principle 4: Defect clustering • Principle 5: Pesticide paradox • Principle 6: Testing is context dependent • Principle 7: Absence-of-errors fallacy
SEVEN TESTING PRINCIPLES Principle 1 - Testing shows presence of defects • Testing can show that defects are present, but cannot prove that there are no defects. • Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
SEVEN TESTING PRINCIPLES Principle 2 - Exhaustive testing is impossible • Testing everything (all combinations of inputs and preconditions) is not feasible except for small cases. • Test case explosion • Describes the exponential increase of effort and cost when testing exhaustively • Instead of exhaustive testing, we use risks and priorities to focus testing efforts, or we can use: • Sample Test • The test includes only a (systematically or randomly selected) subset of all possible input values • Under real-life conditions, sample tests are generally used; testing all combinations of inputs and preconditions is only economically feasible in trivial cases.
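A back-of-the-envelope illustration (numbers assumed, not from the slides) of why exhaustive testing is impossible: a function with just two independent 32-bit integer inputs already has 2^64 input combinations, and even at an optimistic billion test executions per second it would take roughly 585 years to run them all.

```c
#include <stdio.h>

/* Rough illustration of test case explosion under exhaustive testing.
 * Assumptions: two independent 32-bit inputs, 1e9 test runs per second. */
int main(void)
{
    double per_input     = 4294967296.0;               /* 2^32 values      */
    double combinations  = per_input * per_input;      /* 2^64 input pairs */
    double tests_per_sec = 1e9;
    double seconds       = combinations / tests_per_sec;
    double years         = seconds / (365.25 * 24.0 * 3600.0);

    printf("combinations : %.3e\n", combinations);     /* about 1.8e19    */
    printf("time needed  : %.0f years\n", years);      /* about 585 years */
    return 0;
}
```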
SEVEN TESTING PRINCIPLES Principle 3 - Early testing • Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives. • The earlier a defect is discovered, the less costly its correction is • Cost effectiveness is highest when errors are corrected before implementation • Concepts and specifications can already be tested • Defects discovered at the conception phase are corrected with the least effort and cost • Preparing a test also takes time • Testing involves more than just test execution • Test activities can be prepared before software development is completed • Testing activities (including reviews) should run in parallel with software specification and design