Software Configuration Management (SCM)
SCM manages change throughout the software development process. Change occurs:
• When requirements, design, and code are first created.
• When requirements change.
• When a fault (“bug”) is detected and the requirements or code must be fixed.
“There is a need for some organization to ensure that all parties know how to request a change, that change is necessary, that all parties agree with the change, that all parties are informed of the impending change, and that there is a record of all changes made, who made them, when they were made, and why they were made.” Nancy Ross (1991) SCM has benefited from the development of online systems.
SCM in larger systems
Communication and control become crucial in managing larger systems. “There must be a set of well-defined procedures for reporting problems with the product, recommending changes or enhancements to the product, ensuring that all parties with an interest in a change are consulted prior to the decision being made to incorporate it, and ensuring that all affected parties are informed of schedules associated with each change to the product.” Nancy Ross (1991)
Software Quality
• Bad software costs U.S. businesses $85 billion in lost productivity each year. (Jim Johnson, The Standish Group)
• There are typically 5 to 15 flaws in every 1,000 lines of code. Tracking down each bug takes 75 minutes and fixing them takes two to nine hours each. That’s an average of about 50 hours or around $10,000 to cleanse every 1,000 lines. (SEI, Carnegie Mellon University)
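A rough back-of-the-envelope check of those figures, using the midpoints of the quoted ranges and an assumed hourly labor cost (the rate is hypothetical, chosen only to make the arithmetic concrete):

```python
# Midpoints of the quoted ranges; the hourly rate is an assumption.
flaws_per_kloc = (5 + 15) / 2        # 5-15 flaws per 1,000 lines -> ~10
hours_to_track = 75 / 60             # 75 minutes to track down each bug
hours_to_fix = (2 + 9) / 2           # 2-9 hours to fix each bug -> ~5.5
hourly_rate = 150                    # assumed fully loaded cost per hour

hours_per_kloc = flaws_per_kloc * (hours_to_track + hours_to_fix)
cost_per_kloc = hours_per_kloc * hourly_rate
print(f"~{hours_per_kloc:.0f} hours, ~${cost_per_kloc:,.0f} per 1,000 lines")
# Prints roughly 68 hours and $10,125: somewhat above the quoted "about 50 hours"
# but in the same range, since the quoted figure sits inside these intervals.
```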
Complexity and competition as contributors
“You can’t possibly replicate the myriad of ways in which companies will start to use the software. No testing is comprehensive enough to get at this.” Kevin McKay, SAP
Intense competition forces vendors to rush products out quickly and is also responsible for the increasing size and complexity of almost all software. Robert Herbold, Microsoft
Software Testing
At one time software testing was seen as almost a separate phase of the life cycle, after integration and before maintenance. It is now realized that software testing activities must occur throughout the software life cycle.
Debugging - Making sure the program runs.
Definitions of software testing
“Making sure the program solves the problem.” Charles Baker (1957)
“Testing is any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results.” William Hetzel (1988)
“Testing is the process of executing a program/system with the intent of finding errors.” Thomas Drake (1999)
There are two types of testing: execution-based testing and non-execution-based testing. A written specification document, for example, cannot be executed to test it; it must be carefully reviewed or subjected to some form of analysis. Once there is executable code, execution-based testing with the running of test cases becomes possible. An error is a mistake made by a programmer. It may be manifested as a problem in the code or documentation called a fault (a “bug”). A failure is the observed incorrect behavior of the product as a consequence of the fault.
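A small hypothetical illustration of the error/fault/failure chain (the function and test are invented for this purpose, not taken from any particular product):

```python
# Hypothetical example of the error -> fault -> failure chain.
# The programmer's error (a wrong assumption about the upper bound) leaves a
# fault in the code; executing the code against a test case exposes a failure.

def in_range(value, low, high):
    # Fault: the programmer meant an inclusive upper bound but wrote '<'.
    return low <= value < high

expected = True                 # the specification says 10 is inside the range 1..10
actual = in_range(10, 1, 10)    # execution-based test case
if actual != expected:
    print("Failure observed: in_range(10, 1, 10) returned", actual)
```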
It is good practice to have some degree of managerial independence between the software development team and the SQA (Software Quality Assurance) or testing team. Developers and testers should not be under the same manager, and neither manager should be able to overrule the other.
Software Inspections
Software inspections are a form of non-execution-based testing that has been shown to improve product quality and reduce development time and cost. Devised originally by Michael Fagan of IBM (1976), inspections have been shown to identify 50-80% of all software defects, and to do so early in the software development process. Combined with execution-based testing, inspections can reduce defects in the final software product even further.
Inspections are detailed reviews of work in progress. Small groups of workers (usually 4 to 5) examine work products independently and then meet to share findings and do further evaluation. Work products are around 200-250 lines of code; requirements, designs, and even user manuals are inspected in similar-sized chunks. There are six steps in the inspection process:
Planning - Materials are distributed and an examination meeting is scheduled.
Overview - An overview of the project that the work sample is part of is presented.
Preparation - Inspectors prepare for the meeting by studying the work products. Checklists are used to help detect common defects.
Examination - The meeting where the examiners review the work product together. The task of the meeting is to find problems and see what’s missing, not to fix anything. Due to fatigue, meetings are not to exceed 2 hours in length.
Rework - The author corrects defects identified during the examination meeting.
Follow-up - The author’s corrections are checked by the moderator. The product is then checked in under configuration control.
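A minimal sketch of the records an inspection team might keep across these steps; the field names, severity labels, and states are illustrative assumptions, not a prescribed part of Fagan's method:

```python
# Illustrative records for an inspection: a defect log plus the step sequence.
from dataclasses import dataclass, field

INSPECTION_STEPS = ["planning", "overview", "preparation",
                    "examination", "rework", "follow-up"]

@dataclass
class Defect:
    location: str          # e.g. file and line, or document section
    description: str
    severity: str          # "major" or "minor"
    status: str = "open"   # set to "corrected" during rework

@dataclass
class InspectionRecord:
    work_product: str                     # ~200-250 lines of code, or a document chunk
    inspectors: list                      # usually 4 to 5 people
    defects: list = field(default_factory=list)

    def log_defect(self, location, description, severity):
        # Examination meetings find problems; they do not fix them.
        self.defects.append(Defect(location, description, severity))
```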
Managers do not attend inspection meetings. Results of inspections are not to be used in employee evaluations.
Benefits of inspections:
• Execution-based testing cost and time reductions - Inspection cuts the number of defects that are still in place when execution-based testing starts, so execution-based testing runs more smoothly because there are fewer defects to find.
• Defects down, quality up - Inspections are the prime technique for reducing defect levels. Between 50% and 80% of faults are exposed. At JPL, on average each 2-hour inspection exposed 4 major and 14 minor faults. In dollar terms, this meant a savings of ~$25,000 per inspection (Bush, 1990).
• Project control benefits - The number of faults in a given product can be compared with the average number of faults detected at the same stage of development in comparable products, giving management an early warning that something is amiss and allowing corrective action to be taken.
• Organizational and people benefits - Because programmers know their work will be presented and reviewed by others, they have an incentive to do their best work. The inspection process also has a training value: it provides senior software engineers with a vehicle to pass on their knowledge.
• Reduced development time - Development time is cut by ~25%, including time spent on inspections. The gain comes from having fewer faults to detect and correct downstream.
Costs of inspections:
The cost of inspections is ~10-15% of the development budget (not including start-up costs) (Deptula, 2000). Inspection takes additional up-front time that people are not accustomed to spending.
Thus, in software inspections a group of software engineers, meeting together to review a software product, improves its quality by detecting defects which otherwise would have gone unnoticed by the product’s author. What are the determinants of defect detection in software reviews? There is debate over whether a key component of inspections, the group meeting, is necessary for defect detection. Individuals prepare by focusing on defect detection. Are significant numbers of defects discovered by reviewers interacting in the meeting?
Fagan believed the group meeting would surface additional defects beyond those already found in the individual phase of the task. He wrote of the inspection leader: “His use of the strengths of the team members should produce a synergistic effect larger than their numbers…” There is evidence that the performance advantage of interacting groups does not derive from the group discovering new defects in its meeting. Individual members’ task expertise has been found in social psychological research to be the major determinant of group performance. Rifkin and Deimel (1994) found a 90% reduction in defects reported by customers after software release through training reviewers in software reading techniques.
Experiences With Inspections (Weller, 1993)
• Inspect code prior to unit testing; you will unearth more errors. Unit testing first lowers the motivation of the inspection team to find defects because it gives inspectors false confidence in the product. They “know the product works,” so why inspect it?
• One project inspected only code, not requirements and design documents. The inspections went well, but the project failed. The lesson taken was to inspect all basic design documents as well as code.
A definition of execution testing
“Testing involves operation of a system or application under controlled conditions and evaluating the results (e.g., ‘if the user is in interface A of the application while using hardware B, and does C, then D should happen’). The controlled conditions should include both normal and abnormal conditions. Testing should intentionally attempt to make things go wrong to determine if things happen when they shouldn’t or things don’t happen when they should. It is oriented to ‘detection.’” Rick Hower (2001)
Execution (and non-execution) testing cannot be used to prove that a program always does what it should or that it meets its specifications, although some definitions of testing have stated that this is its purpose. “Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence.” E. W. Dijkstra (1972) If a set of test cases is run through a product and the output is wrong, the product definitely contains a fault. But if the output is correct, all that is shown is that the product runs correctly on that particular set of test cases; it does not mean that the product does not contain a fault. A different set of test cases may expose one.
What is a test case?
“A test case is a document that describes an input, action, or event and an expected response, to determine if a feature of an application is working correctly. A test case should contain particulars such as test case identifier, test case name, objective, test conditions/setup, input data requirements, steps, and expected results.” Rick Hower (2001)
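A minimal sketch of a test case captured as a structured record carrying the particulars Hower lists; the example values (test "TC-042", the login scenario) are hypothetical:

```python
# Sketch of a test case as a data record; field names mirror the list above.
from dataclasses import dataclass

@dataclass
class TestCase:
    identifier: str
    name: str
    objective: str
    setup: str
    input_data: dict
    steps: list
    expected_result: str

tc = TestCase(
    identifier="TC-042",
    name="Login rejects bad password",
    objective="Verify the login feature refuses an invalid password",
    setup="Test account 'alice' exists with a known password",
    input_data={"user": "alice", "password": "wrong-password"},
    steps=["Open the login screen", "Enter the credentials", "Submit"],
    expected_result="An 'invalid credentials' message is shown; no session is created",
)
```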
It is good practice to prepare test cases early in the life cycle. The process of developing test cases may help you find problems in the requirements or design of an application. “The act of designing tests is one of the most effective error prevention mechanisms known. The thought processes that must take place to create useful tests can discover and eliminate problems at every stage of development.”
Software execution testing for larger projects depends upon tools such as test coverage tools and test automation tools.
What to do after a fault (bug) is found?
The fault needs to be reported to a programmer assigned to fix it. After the problem is resolved, the fix should be re-tested; fixes have been shown to contain a surprising number of faults. Regression testing should then be done to see that the fix doesn’t cause problems elsewhere.
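A small sketch of that report, fix, re-test, regression cycle; the function name and the trivial stand-in tests are illustrative assumptions:

```python
# Hypothetical sketch of the fix -> re-test -> regression cycle for one fault.

def process_fix(retest, regression_suite):
    """Return the new status of a fault after its fix is delivered."""
    if not retest():
        # Fixes have been shown to contain a surprising number of faults.
        return "reopened: fix failed re-test"
    regression_failures = [name for name, test in regression_suite.items() if not test()]
    if regression_failures:
        # The fix broke something elsewhere.
        return "reopened: regressions in " + ", ".join(regression_failures)
    return "closed"

# Usage with trivial stand-in tests:
status = process_fix(
    retest=lambda: True,
    regression_suite={"login": lambda: True, "checkout": lambda: True},
)
print(status)   # -> "closed"
```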
Four test execution stages are commonly recognized.
Unit testing - Each program module is tested in isolation, often by the programmer. This is the most “micro” scale of testing. It requires detailed knowledge of the program design and code, and it may require developing test driver modules or test harnesses.
Integration testing - Testing of combined parts of an application to determine if they function together correctly. The “parts” can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.
System testing - Testing based upon requirements and functionality that covers all combined parts of a system.
Acceptance testing - After the system is installed at the user’s site, real-world test cases demonstrate key functionality for final approval over some limited period of time.
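As a concrete illustration of unit testing, here is a minimal sketch using Python's unittest module as the test driver; the module under test (apply_discount) is a hypothetical example, not code from any product discussed here:

```python
# Minimal unit-test sketch: unittest acts as the test driver/harness
# for a single module tested in isolation.
import unittest

def apply_discount(price, percent):
    """Module under test: return the price after a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.00, 25), 150.00)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```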
To support execution testing, different strategies can be used for test case generation.
Black box tests - Not based on any knowledge of internal design or code. Tests are based on requirements and functionality.
White box tests - Require knowledge of the source code, including program structure, variables, or both. Tests are based on coverage of code statements, branches, paths, and conditions.
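A short sketch contrasting the two strategies against the same hypothetical function; the requirement and values are invented for illustration:

```python
# Hypothetical contrast of black-box and white-box test case generation.

def shipping_fee(weight_kg):
    # Requirement: orders up to 2 kg ship for 5; heavier orders ship for 9.
    if weight_kg <= 2:
        return 5
    return 9

# Black-box test cases: derived only from the stated requirement.
assert shipping_fee(1) == 5
assert shipping_fee(5) == 9

# White-box test cases: derived from the code's structure, here the branch
# condition 'weight_kg <= 2', so both branches and the boundary are covered.
assert shipping_fee(2) == 5      # boundary value, true branch
assert shipping_fee(2.01) == 9   # just over the boundary, false branch
```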
Other testing terms:
Sanity testing - An initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort.
Regression testing - Re-testing after fixes or modifications of the software or its environment. Automated testing tools can be especially useful for this type of testing.
Load or stress testing - Testing an application under heavy loads, such as testing a web site under a range of loads to determine at what point the system’s response time degrades or fails.
Usability testing - Testing for ‘user friendliness’. User interviews, surveys, recordings of user sessions, and other techniques are used. Programmers and testers are usually not appropriate as usability testers.
Comparison testing - Comparing software weaknesses and strengths with competing products.
Alpha testing - Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end users or others, not by programmers or testers.
Beta testing - Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end users or others, not by programmers or testers.
Security testing - Testing how well the system protects against unauthorized internal or external access, willful damage, etc.; may require sophisticated testing techniques.
From Rick Hower (2001)
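As one concrete illustration of load or stress testing, here is a minimal sketch that times concurrent requests against an endpoint and shows response time as the load rises; the URL and load levels are placeholders, and real projects would normally use dedicated load-testing tools:

```python
# Minimal load-test sketch: time concurrent requests at increasing load levels.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/health"   # placeholder endpoint

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

for concurrent_users in (1, 10, 50, 100):
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        times = list(pool.map(one_request, range(concurrent_users)))
    print(f"{concurrent_users:>3} users: avg {sum(times)/len(times):.3f}s, "
          f"worst {max(times):.3f}s")
```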
The attitude and role of the tester: “You know you can’t find all of the bugs. You know you can’t prove the code is correct. And you know that you will not win any popularity contests finding bugs in the first place…A good test engineer should WANT to find as many problems as possible and the more serious the problems the better. A test when executed that reveals a problem in the software is a success.” Thomas Drake (2000)
“A tester must take a destructive attitude toward the code, knowing that this activity is, in the end, constructive. Testing is a negative activity conducted with the explicit intent and purpose of creating stronger software product and is operatively focused on the ‘weak links’ in the software.” Thomas Drake (2000) As a tester you may be the bearer of bad news to management about a product.
The Association for Computing Machinery code of ethics states the following: “The honest computing professional will not make deliberately false or deceptive claims about a system or system design, but will instead provide full disclosure of all pertinent system limitations and problems.”
Requirements Phase
The objective of the requirements phase is determining what the client needs. However, many clients may not know what they need or may not be able to effectively communicate their needs to developers. Requirements analysis begins with the requirements staff meeting with members of the client organization to determine what is needed in the product. Interviews continue until the requirements team is convinced that it has elicited all relevant information from the client and future users of the product.
Communicating to Users and Developers
The requirements analyst turns vague customer ideas into clear developer specifications. Discussions with users should be focused on the tasks they need to perform with the system. Users’ expectations about system characteristics such as performance, usability, efficiency, and reliability also need to be understood. Requirements development leads to an understanding, shared by the project stakeholders, of the system that will address the client’s needs. The requirements analyst is then responsible for writing requirements documents that clearly express this shared understanding.
Software Requirements Specification Document
“The software requirements section should describe all software requirements at a sufficient level of detail for designers to design a system satisfying the requirements and testers to verify that the system satisfies requirements. Every stated requirement should be externally perceivable by users, operators or other external systems.
At a minimum, these requirements should describe every input into the software, every output from the software, and every function performed by the software in response to an input or in support of an output. All requirements should be uniquely identifiable (e.g., by number).” From a document outline based in part on the IEEE Standard 830-1993 for Software Requirements Specifications. Created by Steve Mattingly
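As an illustration of uniquely identifiable requirements covering inputs, outputs, and functions, here is a hedged sketch; the IDs, field names, and requirement statements are invented examples, not part of the IEEE 830 outline:

```python
# Sketch of uniquely identified requirement entries, each tied to an
# externally perceivable input, output, or function.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str        # unique identifier, e.g. "SRS-017"
    kind: str          # "input", "output", or "function"
    statement: str     # what the software shall do, stated verifiably

requirements = [
    Requirement("SRS-001", "input",
                "The system shall accept order quantities from 1 to 999."),
    Requirement("SRS-002", "function",
                "The system shall reject quantities outside that range with an error message."),
    Requirement("SRS-003", "output",
                "The system shall print an order confirmation containing the order number."),
]

# Unique IDs let designers and testers trace each design element and test case
# back to the requirement it satisfies or verifies.
assert len({r.req_id for r in requirements}) == len(requirements)
```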
Why Are Requirements Specifications So Important?
“One of the most reliable methods of insuring problems, or failure, in a complex software project is to have poorly documented requirements specifications.” Rick Hower (2000)
“Insufficient user involvement is well established as a leading cause of software project failure.” Karl Wiegers (2000)
New Developments in Requirements Analysis
Our received model of the requirements analysis process has been driven by organizations concerned with the procurement of large, one-of-a-kind systems. “In this context, requirements engineering is often used as a contractual exercise in which the customer and the developer organizations work to reach agreement on a precise, unambiguous statement of what the developer would build…The requirements-as-contract model is irrelevant to most software developers today. Other issues are more important today.” Siddiqi and Shekaran (1996)
Supporting market-driven inventors
Most software developed today is based on market-driven criteria. Requirements of market-driven software are typically not elicited from a customer but are created by observing problems in specific domains and inventing solutions. Here, requirements engineering is often done after a basic solution has been outlined, and it involves product planning and market analysis. The paramount considerations are issues such as the available market window, product sizing, feature sets, toolkit versus application, and product fit with the development organization’s overall product strategy. Shekaran and Siddiqi (1996)
Prioritizing requirements
Competitive forces have reduced time to market, causing development organizations to speed development by deliberately limiting the scope of each release. This forces developers to distinguish between desirable and necessary features (and indeed between levels of need) of an envisioned system.
Coping with incompleteness
The switch to more evolutionary models of software development was prompted in part by the recognition that it was virtually impossible to make all the correct requirements and implementation decisions the first time around. Yet classical thinking in requirements analysis emphasizes ensuring completeness in requirements specifications. Incompleteness in requirements specifications is a simple reality in many development contexts. What is needed are tools for deciding when you can stop gathering requirements, enabling further clarification to be postponed until later.
Rapid Prototyping
A rapid prototype is hastily built software that exhibits the key functionality of the target product. A rapid prototype reflects the functionality that the client sees, such as input screens and reports, but omits “hidden” aspects such as file updating. The client and intended users now experiment with the rapid prototype, with members of the development team watching and taking notes. Based on their hands-on experience, users tell the developers how the rapid prototype satisfies their needs and, more importantly, identify the areas that need improvement.
The developers change the rapid prototype until both sides are convinced that the needs of the client are accurately encapsulated in it. The rapid prototype is then used as the basis for drawing up the specifications… Rapid prototyping results in the construction of a working model of the product and is more likely to meet the client’s real needs than other techniques. Schach (1999)
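A tiny throwaway-prototype sketch in that spirit: it shows the input screen and report the client would see while deliberately omitting “hidden” aspects such as file updating; the order-entry scenario is a hypothetical example:

```python
# Hastily built throwaway prototype sketch: input screen and report only.
ORDERS = []   # kept in memory only; a real product would persist this

def order_entry_screen():
    item = input("Item name: ")
    quantity = int(input("Quantity: "))
    ORDERS.append((item, quantity))   # no validation, no database: prototype only

def daily_report():
    print("\nDAILY ORDER REPORT")
    for item, quantity in ORDERS:
        print(f"  {item:<20} x{quantity}")

if __name__ == "__main__":
    order_entry_screen()
    daily_report()
```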
Experiences With Rapid Prototyping (Gordon and Bieman, 1994)
There are two rapid prototyping methodologies: throwaway, in which the prototype is discarded and not used in the delivered product, and evolutionary, in which all or part of the prototype is retained. Each of these methodologies may fit particular software development situations.
Usability - Improvements in ease of use have been reported with rapid prototyping. Rapid prototyping helps ensure that the product will meet user needs. The traditional model of software development relied on the assumption that designers could stabilize and freeze the requirements. In practice, however, accurate and stable requirements cannot be pinned down until users gain some experience with the proposed software system.
Effort - A commonly cited benefit of rapid prototyping is that it can decrease development effort. One reason for this is that faster design is possible when requirements are clearer or more streamlined. Also, in evolutionary prototyping, part (or all) of the prototype can be retained, so the requirements and development efforts tend to overlap. Some worry about increases in effort as users repeatedly ask for more functionality in various areas.
End-User Involvement - Rapid prototyping leads to greater end-user participation in requirements definition. Users are more likely to be comfortable with a prototype than with a specifications document that is, say, 20 pages of single-spaced technical writing. Prototyping makes it easier for users to make well-informed suggestions. Increased user participation has a positive effect on the product by increasing the likelihood that user needs will be met.
Too much importance given to the user interface aspects of the system - With rapid prototyping there may be a tendency to design the entire system from the user interface. This can be dangerous because the user interface may not characterize the best overall system structure. It is recommended that a user-interface prototype should be considered part of a requirements specification, not a basis for system design.
Code maintainability - A prototype developed quickly, massaged into the final product, and then hurriedly documented can be difficult to maintain or enhance. Such a product may not be completely documented, and configuration management may be a problem.
Large systems - Evolutionary prototyping on large systems can yield a system filled with patches - hastily designed prototype modules that become the root of later problems.
Because a rapid prototype is unlikely to stand as a legal statement of a contract between a developer and a client, it should not be used as the sole specification. A second problem with using the rapid prototype as a specification is the potential for maintenance difficulties.