CS 501: Software Engineering Lecture 21 Reliability 3
Security and People

People are intrinsically insecure:
• Careless (e.g., leave computers logged on, use simple passwords, leave passwords where others can read them)
• Dishonest (e.g., stealing from financial systems)
• Malicious (e.g., denial of service attacks)

Many security problems come from inside the organization:
• In a large organization, there will be some disgruntled and dishonest employees.
• Security relies on trusted individuals. What if they are dishonest?
Design for Security: People

• Make it easy for responsible people to use the system
• Make it hard for dishonest or careless people (e.g., password management)
• Train people in responsible behavior
• Test the security of the system
• Do not hide violations
Suggested Reading

Trust in Cyberspace, Committee on Information Systems Trustworthiness, National Research Council (1999)
http://www.nap.edu/readingroom/books/trust/

Fred Schneider, Cornell Computer Science, was the chair of this study.
Validation and Verification

Validation: Are we building the right product?
Verification: Are we building the product right?

In practice, it is sometimes difficult to distinguish between the two.

That's not a bug. That's a feature!
The Testing Process

Unit, system, and acceptance testing are major parts of a software project:
• They require time on the schedule.
• They may require substantial investment in test data, equipment, and test software.
• Good testing requires good people!
• Documentation, including management and client reports, is an important part of testing.

What is the definition of "done"?
Test Design

Testing can never prove that a system is correct. It can only show (a) that a system is correct in a special case, or (b) that it has a fault.

• The objective of testing is to find faults.
• Testing is never comprehensive.
• Testing is expensive.
Testing Strategies

• Bottom-up testing. Each unit is tested with its own test environment.
• Top-down testing. Large components are tested with dummy stubs.
    user interfaces
    work-flow
    client and management demonstrations
• Stress testing. Tests the system at and beyond its limits.
    real-time systems
    transaction processing
Methods of Testing

Closed box testing: Testing is carried out by people who do not know the internals of what they are testing.
Open box testing: Testing is carried out by people who know the internals of what they are testing.

(a) What is the advantage of each approach?
(b) In each case, how do you set about selecting test cases?
Stages of Testing

Testing is most effective if divided into stages:

Unit testing
    unit test
System testing
    integration test
    function test
    performance test
    installation test
Acceptance testing
Testing: Unit Testing

• Tests on small sections of a system, e.g., a single class
• Emphasis is on accuracy of actual code
• Test data is chosen by the developer(s) based on their understanding of the specification and knowledge of the unit
• Can be at various levels of granularity
• Open box: by the developer(s) of the unit

If unit testing is not thorough, system testing becomes almost impossible. If you are working on a project that is behind schedule, do not rush the unit testing.
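As a concrete sketch (not from the lecture), a small unit test in Python using the standard unittest framework; the mean function and the chosen cases are hypothetical examples of developer-selected test data:

import unittest

def mean(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    if not values:
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)

class MeanUnitTest(unittest.TestCase):
    # Cases chosen by the developer from the specification:
    # typical values, a single value, and the invalid (empty) case.
    def test_typical_values(self):
        self.assertEqual(mean([2, 4, 6]), 4)

    def test_single_value(self):
        self.assertEqual(mean([5]), 5)

    def test_empty_input_is_rejected(self):
        with self.assertRaises(ValueError):
            mean([])

if __name__ == "__main__":
    unittest.main()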
Testing: System and Sub-System Testing

• Tests on components or the complete system, combining units that have already been thoroughly tested
• Emphasis is on integration and interfaces
• Uses trial data that is typical of the actual data, and/or stresses the boundaries of the system, e.g., failures, restart
• Is carried out systematically, adding components until the entire system is assembled
• Open or closed box: by the development team or by special testers

System testing is finished fastest if each component is completely debugged before the next is assembled.
Testing: Acceptance Testing

• Closed box: by the client
• The entire system is tested as a whole
• The emphasis is on whether the system meets the requirements
• Uses real data in realistic situations, with actual users, administrators, and operators

The acceptance test must be successfully completed before the new system can go live or replace a legacy system. Completion of the acceptance test may be a contractual requirement before the system is paid for.
Variants of Acceptance Testing

Alpha Testing: Clients operate the system in a realistic but non-production environment.
Beta Testing: Clients operate the system in a carefully monitored production environment.
Parallel Testing: Clients operate the new system alongside the old production system with the same data and compare results.
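A minimal sketch of what a parallel-testing harness might look like in Python; the transaction format and the legacy_process / new_process callables are hypothetical stand-ins for the old and new systems:

def run_parallel_test(transactions, legacy_process, new_process):
    """Run the same transactions through both systems and collect disagreements."""
    mismatches = []
    for txn in transactions:
        old_result = legacy_process(txn)
        new_result = new_process(txn)
        if old_result != new_result:
            mismatches.append((txn, old_result, new_result))
    return mismatches

# Example usage with trivial stand-in systems:
diffs = run_parallel_test(
    transactions=[("deposit", 100), ("withdrawal", 40)],
    legacy_process=lambda txn: txn[1],
    new_process=lambda txn: txn[1],
)
print(len(diffs), "mismatches")   # 0 means the new system agrees with the old one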
Test Cases

Test cases are specific tests that are chosen because they are likely to find faults. Test cases are chosen to balance expense against the chance of finding serious faults.

• Cases chosen by the development team are effective in testing known vulnerable areas.
• Cases chosen by experienced outsiders and clients will be effective in finding gaps left by the developers.
• Cases chosen by inexperienced users will find other faults.
Test Case Selection: Coverage of Inputs

The objective is to test all classes of input:

• Classes of data -- major categories of transaction and data inputs. Cornell example: (undergraduate, graduate, transfer, ...) by (college, school, program, ...) by (standing) by (...)
• Ranges of data -- typical values, extremes
• Invalid data, reversals, and special cases
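One way to enumerate such input classes systematically is to take the cross product of the categories and add typical, extreme, and invalid values for each numeric field. The sketch below is a hypothetical Python illustration; the categories and credit values are invented, not Cornell's actual rules:

from itertools import product

# Hypothetical input classes, loosely following the Cornell example.
student_types = ["undergraduate", "graduate", "transfer"]
colleges = ["engineering", "arts & sciences", "agriculture"]
standings = ["good standing", "probation"]

# One representative test case per combination of input classes.
class_combinations = list(product(student_types, colleges, standings))

# Ranges for a numeric input: typical values, extremes, and invalid data.
credit_values = [12, 15, 0, 1, 30, -3, None]

print(len(class_combinations), "class combinations,",
      len(credit_values), "credit values per combination")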
Test Case Selection: Program

The objective is to test all functions of each computer program.

• Paths through the computer programs
    Program flow graph
    Check that every path is executed at least once
• Dynamic program analyzers
    Count the number of times each path is executed
    Highlight or color source code
    Cannot be used with time-critical software
Test Strategies: Program

(a) Statement analysis
(b) Branch testing

If every statement and every branch is tested, is the program correct?
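No. A hypothetical Python example (not from the lecture) of why coverage alone is not enough: the function below can be exercised so that every statement and both branch outcomes run, yet a fault remains for an input the tests never supply:

def average_speed(distance, time):
    # Both the raise branch and the return statement are executed by the
    # tests below, but the function still fails when time == 0.
    if distance < 0:
        raise ValueError("distance must be non-negative")
    return distance / time

# These two cases give 100% statement and branch coverage ...
assert average_speed(100, 2) == 50
try:
    average_speed(-1, 2)
except ValueError:
    pass
# ... yet average_speed(100, 0) still raises ZeroDivisionError.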
Statistical Testing

• Determine the operational profile of the software
• Select or generate a profile of test data
• Apply test data to the system, record failure patterns
• Compute statistical values of metrics under test conditions
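As an illustration (hypothetical transaction types and frequencies, not from the lecture), an operational profile can be turned into a generated test load like this in Python:

import random

# Hypothetical operational profile: relative frequency of each transaction type.
operational_profile = {"deposit": 0.60, "withdrawal": 0.30, "transfer": 0.09, "audit": 0.01}

def generate_test_load(n, profile, seed=42):
    """Draw n transaction types with probabilities matching the operational profile."""
    rng = random.Random(seed)            # fixed seed so the test run can be repeated
    types = list(profile)
    weights = [profile[t] for t in types]
    return rng.choices(types, weights=weights, k=n)

load = generate_test_load(10_000, operational_profile)
# The real harness would apply each transaction to the system, record failures,
# and compute metrics such as the observed failure rate under this profile.
print(load[:5])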
Statistical Testing

Advantages:
• Can test with very large numbers of transactions
• Can test with extreme cases (high loads, restarts, disruptions)
• Can repeat after system modifications

Disadvantages:
• Uncertainty in the operational profile (unlikely inputs)
• Expensive
• Can never prove high reliability
Regression Testing

Regression testing is one of the key techniques of software engineering. It is applied to modified software to provide confidence that modifications behave as intended and do not adversely affect the behavior of unmodified code.

• The basic technique is to repeat the entire testing process after every change, however small.
Regression Testing: Program Testing

1. Collect a suite of test cases, each with its expected behavior.
2. Create scripts to run all test cases and compare with expected behavior. (Scripts may be automatic or have human interaction.)
3. When a change is made, however small (e.g., a bug is fixed), add a new test case that illustrates the change (e.g., a test case that revealed the bug).
4. Before releasing the changed code, rerun the entire test suite.
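A small sketch of steps 1-4 in Python with unittest; the parse_amount function and the comma bug are invented for illustration:

import unittest

def parse_amount(text):
    """Convert a currency string such as '1,250.75' to a float.
    An earlier (hypothetical) version failed on thousands separators;
    the fix strips the commas before conversion."""
    return float(text.replace(",", ""))

class ParseAmountRegressionTests(unittest.TestCase):
    # Step 1: a suite of cases, each with its expected behavior.
    def test_plain_amount(self):
        self.assertEqual(parse_amount("12.50"), 12.50)

    # Step 3: the case that revealed the bug stays in the suite forever,
    # so the same fault cannot silently return after a later change.
    def test_thousands_separator(self):
        self.assertEqual(parse_amount("1,250.75"), 1250.75)

if __name__ == "__main__":
    unittest.main()   # Steps 2 and 4: one script reruns the whole suite before release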
Documentation of Testing

Testing should be documented for thoroughness, visibility, and maintenance:

(a) Test plan
(b) Test specification and evaluation
(c) Test description
(d) Test analysis report
A Note on User Interface Testing

User interfaces need two categories of testing.

• During the design phase, user interface testing is carried out with trial users to ensure that the design is usable. This design testing is also used to develop graphical elements and to validate the requirements.
• During the implementation phase, the user interface goes through the standard steps of unit and system testing to check the reliability of the implementation. Acceptance testing is then carried out on the complete system.
A CS 501 Project: Methodology

How we're user testing:
• One-on-one, 30-45 minute user tests with staff at all levels
• Specific tasks to complete
• No prior demonstration or training
• Pre-planned questions designed to stimulate feedback
• Emphasis on testing the system, not the stakeholder!
• Standardized tasks / questions among all testers
A CS 501 Project: Methodology

How we're user testing: types of questions we asked:
• Which labels and keywords were confusing?
• What was the hardest task?
• What did you like, that should not be changed?
• If you were us, what would you change?
• How does this system compare to your paper-based system?
• How useful do you find the new report layout? (admin)
• Do you have any other comments or questions about the system? (open ended)
A CS 501 Project: Results What we’ve found: Issue #1, Search Form Confusion!
A CS 501 Project: Results What we've found: Issue #2, Inconspicuous Edit/Confirmations!
A CS 501 Project: Results What we’ve found: Issue #3, Confirmation Terms
A CS 501 Project: Results What we’ve found: Issue #4, Entry Semantics
A CS 501 Project: Results, Addressing What we've found: Issue #5, Search Results Disambiguation & Semantics
Fixing Bugs

Isolate the bug
    Intermittent --> repeatable
    Complex example --> simple example
Understand the bug
    Root cause
    Dependencies
    Structural interactions
Fix the bug
    Design changes
    Documentation changes
    Code changes
Moving the Bugs Around

Fixing bugs is an error-prone process!
• When you fix a bug, fix its environment
• Bug fixes need static and dynamic testing
• Repeat all tests that have the slightest relevance (regression testing)

Bugs have a habit of returning!
• When a bug is fixed, add the failure case to the test suite for the future.