TESTABILITY • Operability: testing works quickly and efficiently. • Observability: the test is clear, explicit, and traceable. • Controllability: the test and its results can be controlled. • Decomposability: the test can be broken into its components and evaluated separately. • Simplicity: the test itself and its results are simple and understandable. • Stability: the test is robust, valid, and repeatable. • Understandability: the test is understandable.
TESTABILITY • Kaner, Falk, and Nguyen [KAN93] suggest the following attributes of a “good” test: • A good test has a high probability of finding an error. • A good test is not redundant. • A good test should be “best of breed” [KAN93]. • A good test should be neither too simple nor too complex.
WHITE-BOX TESTING • Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed. • We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis. • Typographical errors are random.
BASIS PATH TESTING • Flow Graph Notation • Cyclomatic Complexity • Deriving Test Cases • Graph Matrices
Flow Graph Notation — the structured constructs in flow graph form: sequence, if, while, until, and case, where each circle represents one or more nonbranching PDL or source code statements. Figure 1.1. Flow graph notation
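A flow graph like the one Figure 1.1 describes can be analyzed mechanically. A minimal sketch of computing the cyclomatic complexity V(G) = E − N + 2 listed above, assuming the graph is given as a plain adjacency list (the function name and example graph are illustrative, not from the text):

```python
def cyclomatic_complexity(graph):
    """V(G) = E - N + 2, where E is the number of edges (arcs)
    and N the number of nodes in the flow graph."""
    nodes = set(graph)
    for targets in graph.values():
        nodes.update(targets)          # include nodes with no outgoing edges
    edges = sum(len(t) for t in graph.values())
    return edges - len(nodes) + 2

# Flow graph of a single if/else followed by a join node:
# node 1 branches to 2 or 3; both flow into 4.
g = {1: [2, 3], 2: [4], 3: [4]}
print(cyclomatic_complexity(g))  # 4 edges - 4 nodes + 2 = 2
```

V(G) = 2 here matches the intuition that one decision yields two independent paths, which is also the number of basis-path test cases to derive.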
CONTROL STRUCTURE TESTING • Condition Testing. • Data Flow Testing. • Loop Testing.
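For the loop testing listed above, the usual guidance for simple loops is to exercise a characteristic set of pass counts: skip the loop, one pass, two passes, a typical count m < n, and n−1, n, n+1 passes for a loop of at most n iterations. A hedged sketch (the helper name and the choice of a mid-range "typical" count are assumptions):

```python
def simple_loop_test_counts(n, typical=None):
    """Pass counts to exercise for a simple loop with at most n iterations:
    skip, one pass, two passes, a typical count m < n, then n-1, n, n+1."""
    m = typical if typical is not None else n // 2  # assumed mid-range default
    counts = [0, 1, 2, m, n - 1, n, n + 1]
    # Drop duplicates (for small n) while preserving order.
    seen, out = set(), []
    for c in counts:
        if c not in seen:
            seen.add(c)
            out.append(c)
    return out

print(simple_loop_test_counts(10))  # [0, 1, 2, 5, 9, 10, 11]
```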
BLACK-BOX TESTING • Graph-Based Testing Methods • Equivalence Partitioning • Boundary Value Analysis • Comparison Testing
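Equivalence partitioning and boundary value analysis can be sketched together for a hypothetical integer field with a valid range [lo, hi] (the "age" field and its range below are invented examples, not from the text):

```python
def boundary_values(lo, hi):
    """Boundary value analysis: pick values at, just inside,
    and just outside each boundary of the valid range."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def partition(value, lo, hi):
    """Equivalence partitioning: three classes -- below range,
    in range, above range."""
    if value < lo:
        return "invalid: too low"
    if value > hi:
        return "invalid: too high"
    return "valid"

# A hypothetical "age" field accepting 18..65:
for v in boundary_values(18, 65):
    print(v, partition(v, 18, 65))
```

One representative from each equivalence class plus the boundary values gives a small test set with a high chance of catching off-by-one errors in the range check.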
TESTING FOR SPECIALIZED ENVIRONMENTS AND APPLICATIONS • Testing GUIs • Testing of Client/Server Architectures • Testing Documentation and Help Facilities • Testing for Real-Time Systems
TESTING GUIs (Graphical User Interface) • For windows: • Will the window open properly based on related typed or menu-based commands? • Can the window be resized, moved, and scrolled? • Is all data content contained within the window properly addressable with the mouse, function keys, directional arrows, and the keyboard? • Does the window properly regenerate when it is overwritten and then recalled? • Are all functions that relate to the window available when needed? • Are all functions that relate to the window operational?
TESTING GUIs (Graphical User Interface) • Are all relevant pull-down menus, tool bars, scroll bars, dialog boxes, buttons, icons, and other controls available and properly displayed for the window? • When multiple windows are displayed, is the name of each window properly represented? • Is the active window properly highlighted? • If multitasking is used, are all windows updated at appropriate times? • Do multiple or incorrect mouse picks within the window cause unexpected side effects? • Does the window properly close?
TESTING GUIs (Graphical User Interface) • For pull-down menus and mouse operations: • Is the appropriate menu bar displayed in the appropriate context? • Does the application menu bar display system-related features (e.g., a clock display)? • Do pull-down operations work properly? • Do breakaway menus, palettes, and tool bars work properly? • Are all menu functions and pull-down subfunctions properly listed? • Are all menu functions properly addressable by the mouse? • Are the text typeface, size, and format correct? • Are menu functions highlighted (or grayed out) based on the context of current operations within a window?
TESTING GUIs (Graphical User Interface) • Does each menu function perform as advertised? • Are the names of menu functions self-explanatory? • Is help available for each menu item, and is it context sensitive? • Are mouse operations properly recognized throughout the interactive context? • If multiple clicks are required, are they properly recognized in context? • If the mouse has multiple buttons, are they properly recognized in context? • Do the cursor, processing indicator (e.g., an hourglass or clock), and pointer properly change as different operations are invoked?
TESTING GUIs (Graphical User Interface) • Data entry: • Is alphanumeric data entry properly echoed and input to the system? • Do graphical modes of data entry (e.g., a slide bar) work properly? • Is invalid data properly recognized? • Are data input messages intelligible?
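The invalid-data and message checks above can be exercised against a plain validation routine behind the data-entry field. A minimal sketch with hypothetical rules and messages (none of the names or limits come from the text):

```python
import re

def validate_entry(text, max_len=16):
    """Return (ok, message) for an alphanumeric data-entry field.
    Rules and messages here are invented for illustration."""
    if not text:
        return (False, "Field may not be empty.")
    if len(text) > max_len:
        return (False, f"Input exceeds {max_len} characters.")
    if not re.fullmatch(r"[A-Za-z0-9 ]+", text):
        return (False, "Only letters, digits, and spaces are allowed.")
    return (True, "OK")

print(validate_entry("Zone 3"))     # (True, 'OK')
print(validate_entry("alarm!")[0])  # False -- invalid data is recognized
```

A GUI test would then assert both that invalid input is rejected and that the returned message is intelligible to the user.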
A STRATEGIC APPROACH TO SOFTWARE TESTING • Verification and Validation • Organizing for Software Testing • A Software Testing Strategy • Criteria for Completion of Testing
Verification and Validation — software quality is achieved through the combination of software engineering methods, formal technical reviews, quality measurement, standards and procedures, testing, and SQA. Figure 1.2. Achieving software quality
Organizing for Software Testing • Two points of view: • Constructive • Destructive
A Software Testing Strategy — testing spirals outward through four levels, paired with the development activities: unit test (code), integration test (design), validation test (requirements), and system test (system engineering). Figure 1.3. Testing strategy
A Software Testing Strategy — the testing “direction” runs from unit tests on the code, through integration tests against the design, to high-order tests against the requirements. Figure 1.4. Software testing steps
UNIT TESTING • Unit Test Considerations • Unit Test Procedures
Unit Test Considerations — the module is exercised against its interface, local data structures, boundary conditions, independent paths, and error-handling paths, with test cases designed for each. Figure 1.5. Unit test
Unit Test Procedures — a driver invokes the module to be tested and stubs replace its subordinate modules; the same considerations (interface, local data structures, boundary conditions, independent paths, error-handling paths) drive the test cases, and results are collected. Figure 1.6. Test environment
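The driver/stub arrangement of Figure 1.6 can be sketched in code. Every name below (the sensor reader, the threshold check) is illustrative, not from the text: the stub stands in for an unintegrated subordinate module, and the driver feeds test cases to the module under test.

```python
def read_sensor_stub(sensor_id):
    """Stub: returns a canned reading instead of calling real hardware
    or the not-yet-integrated subordinate module."""
    return 42

def check_threshold(sensor_id, limit, read_sensor=read_sensor_stub):
    """Module under test: reports whether the reading exceeds the limit."""
    return read_sensor(sensor_id) > limit

def driver():
    """Driver: invokes the module with test cases and collects results."""
    results = [
        check_threshold("s1", 40),  # 42 > 40 -> True
        check_threshold("s1", 50),  # 42 > 50 -> False
    ]
    print(results)

driver()  # [True, False]
```

Passing the stub in as a default parameter keeps the module testable in isolation; in integration testing the real subordinate replaces it.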
INTEGRATION TESTING • Top-Down Integration • Bottom-Up Integration • Regression Testing
Top-Down Integration — a control hierarchy of modules, from the main control module M1 down through its subordinates M2–M8, is integrated starting at the top. Figure 1.7. Top-down integration
Top-Down Integration — stubs A through D stand in for subordinate modules; arrows show the direction of data flow. Figure 1.8. Stubs
Bottom-Up Integration — low-level modules are combined into clusters 1–3, each exercised by a driver (D1–D3), and the clusters are then combined moving upward toward modules Ma, Mb, and Mc. Figure 1.9. Bottom-up integration
Bottom-Up Integration — driver A sends parameters from a table (or external file); driver B invokes the subordinate; driver C displays parameters; driver D is a combination of drivers B and C. Arrows show the direction of information flow. Figure 1.10. Drivers
VALIDATION TESTING • Validation Test Criteria • Configuration Review • Alpha and Beta Testing
SYSTEM TESTING • Recovery Testing • Security Testing • Stress Testing • Performance Testing
THE ART OF DEBUGGING • The Debugging Process • Psychological Considerations • Debugging Approaches
The Debugging Process — test cases are executed and the results examined; debugging relates suspected causes to identified causes, corrections are made, and regression tests plus additional tests follow. Figure 1.11. Debugging
SOFTWARE QUALITY • McCall’s Quality Factors • FURPS • The Transition to a Quantitative View
McCall’s Quality Factors • Correctness • Reliability • Efficiency • Integrity • Usability • Maintainability • Flexibility • Testability
McCall’s Quality Factors — the factors group into product revision (maintainability, flexibility, testability), product transition (portability, reusability, interoperability), and product operation (correctness, reliability, usability, integrity, efficiency). Figure 1.12. McCall’s software quality factors
A FRAMEWORK FOR TECHNICAL SOFTWARE METRICS • The Challenge of Technical Metrics • Measurement Principles • The Attributes of Effective Software Metrics
Measurement Principles • Formulation • Collection • Analysis • Interpretation • Feedback
The Attributes of Effective Software Metrics • Simple and computable • Empirically and intuitively persuasive • Consistent and objective • Consistent in its use of units and dimensions • Programming language independent • An effective mechanism for quality feedback
METRICS FOR THE ANALYSIS MODEL • Function-Based Metrics • The Bang Metric • Metrics for Specification Quality
Function-Based Metrics — a data flow diagram of the SafeHome user interaction function: the user supplies a password, zone settings, sensor and zone inquiries, panic button presses, and activate/deactivate commands; the function exchanges system configuration data with sensors and the monitoring & response subsystem, and produces messages, sensor status, and alarm alerts. Figure 1.14. Part of the analysis model for SafeHome software
Function-Based Metrics • The data flow diagram is evaluated to determine the key measures required for computation of the function point metric: • number of user inputs • number of user outputs • number of user inquiries • number of files • number of external interfaces
Function-Based Metrics

measurement parameter           count       weighting factor (simple/average/complex)   total
number of user inputs             3     x          3 / 4 / 6                        =     9
number of user outputs            2     x          4 / 5 / 7                        =     8
number of user inquiries          2     x          3 / 4 / 6                        =     6
number of files                   1     x          7 / 10 / 15                      =     7
number of external interfaces     4     x          5 / 7 / 10                       =    20
count total (simple weights)                                                             50

Figure 1.15. Computing function points: SafeHome user interaction function
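The count total from Figure 1.15 feeds the standard function-point formula FP = count total × (0.65 + 0.01 × ΣFi), where ΣFi sums the fourteen complexity adjustment factors. A sketch using the simple weighting factors; the value ΣFi = 46 is assumed for illustration, not given in the text:

```python
# Standard "simple" weighting factors for the five measurement parameters.
SIMPLE_WEIGHTS = {
    "inputs": 3, "outputs": 4, "inquiries": 3, "files": 7, "interfaces": 5,
}

def count_total(counts):
    """Weighted sum of the counts (the 'count total' row of the table)."""
    return sum(counts[k] * SIMPLE_WEIGHTS[k] for k in counts)

def function_points(counts, sum_fi):
    """FP = count total * (0.65 + 0.01 * sum of the 14 F_i factors)."""
    return count_total(counts) * (0.65 + 0.01 * sum_fi)

safehome = {"inputs": 3, "outputs": 2, "inquiries": 2,
            "files": 1, "interfaces": 4}
print(count_total(safehome))                    # 50, matching the table
print(round(function_points(safehome, 46), 2))  # 50 * 1.11 = 55.5
```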
METRICS FOR THE DESIGN MODEL • High-Level Design Metrics • Component-Level Design Metrics • Interface Design Metrics
High-Level Design Metrics — a program architecture drawn as a tree of nodes (a through r) connected by arcs illustrates the morphology measures depth and width. Figure 1.16. Morphology metrics
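The morphology measures named in Figure 1.16 can be computed for an architecture given as a tree. A hedged sketch assuming size = nodes + arcs, depth = the longest root-to-leaf path, and width = the maximum number of nodes at any level (the example tree is invented):

```python
from collections import deque

def morphology(tree, root):
    """Morphology metrics of an architecture tree given as an
    adjacency dict: size = nodes + arcs, depth = deepest level,
    width = maximum nodes at any one level."""
    nodes, arcs, depth, width = 0, 0, 0, 0
    level, d = deque([root]), 0
    while level:
        width = max(width, len(level))
        depth = d                       # last non-empty level reached
        nxt = deque()
        for n in level:
            nodes += 1
            children = tree.get(n, [])  # leaves have no entry
            arcs += len(children)
            nxt.extend(children)
        level, d = nxt, d + 1
    return {"size": nodes + arcs, "depth": depth, "width": width}

# Hypothetical architecture: a root with two subtrees.
t = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f"]}
print(morphology(t, "a"))  # {'size': 11, 'depth': 2, 'width': 3}
```

Here 6 nodes plus 5 arcs give size 11, the deepest level is 2, and the widest level holds 3 nodes.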