CAMELOT: A Testing Methodology for Computer Supported Cooperative Work 36th Hawaii International Conference on System Science Experimental Software Engineering Track January 7, 2003 Kona, Big Island, Hawaii Robert F. Dugan Jr. Department of Computer Science Stonehill College Easton, MA 02357 USA bdugan@stonehill.edu Ephraim P. Glinert, Edwin H. Rogers Department of Computer Science Rensselaer Polytechnic Institute Troy, NY 12180 USA {glinert,rogerseh}@cs.rpi.edu
What Is CSCW? “Computer-based systems that support groups of people engaged in a common task and that provide an interface to a shared environment” [Ellis91] Examples: e-mail, newsgroups, chat, CSCL, meeting support, shared windows, multiuser editing
CSCW Characterization (diagram): four interacting concerns (Human-Human Interaction, Human-Computer Interaction, Distributed Systems, Network Communication) covering topics such as floor control, coupling, awareness, iterative design, undo/redo, real-time CSCW, session management, synchronization, IPC, performance, and reliability
Motivation (diagram): a development, testing, and rework cycle that spans both the human-human interface and the human-computer interface
Overview • Motivation • Survey of Testing • Methodology • Evaluation • Limitations/Future Work
Goals of Testing • Correct behavior • Utility • Reliability • Robustness • Performance
Research Testing • Early stages of life cycle (Requirements, Functional Specification, Design) • Cost rises deeper into life cycle • Problems scaling to large development efforts • Problem space complex • Requirements in flux • Verification cost exceeds implementation cost • Example:
Commercial Testing • Later stages of life cycle (Implementation, Integration, System Test) • Less expensive to create individual tests • Use standard communication APIs (GUI events, HTTP, RMI, etc.) • Capture/replay of communication to drive application execution (sketched below) • Problems: • Fragile: if application communication changes, so must the test case • Problems uncovered late in the life cycle are more costly to fix • Only rudimentary guidelines for use • Example: Mercury Interactive WinRunner
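A minimal sketch of the replay half of a capture/replay GUI test, assuming a hypothetical login dialog and screen coordinates; WinRunner itself uses its own recorder and scripting language, so this Java sketch using java.awt.Robot only illustrates the general idea and its fragility.

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.InputEvent;
import java.awt.event.KeyEvent;

// Replays a recorded interaction sequence against a running GUI.
// Coordinates and key codes are hypothetical; a real capture tool
// would record them from an actual user session.
public class ReplayLoginTest {
    public static void main(String[] args) throws AWTException {
        Robot robot = new Robot();
        robot.setAutoDelay(250);            // pause between events, as a recorder would

        robot.mouseMove(400, 300);          // click the (assumed) user-name field
        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);

        type(robot, KeyEvent.VK_T, KeyEvent.VK_E, KeyEvent.VK_S, KeyEvent.VK_T);
        robot.keyPress(KeyEvent.VK_ENTER);  // submit the form
        robot.keyRelease(KeyEvent.VK_ENTER);
        // A real test would now verify the resulting window state; if the
        // dialog layout changes, this replay script breaks (the fragility noted above).
    }

    private static void type(Robot robot, int... keys) {
        for (int key : keys) {
            robot.keyPress(key);
            robot.keyRelease(key);
        }
    }
}
```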
General Computing • Life-cycle phases covered: Implementation [Myers79], Integration [Schach90], System Test [Myers79] • Example Camelot codes (Camelot Code / Cycle / Description): GC.IM.1 (Implementation: functional test), GC.ST.12 (System Test: procedure test), GC.ST.13 (System Test: acceptance test)
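A minimal sketch of an implementation-phase functional test in the spirit of GC.IM.1, written with JUnit 4; the ChatMessage class and its format() method are hypothetical stand-ins for application code, not part of the original methodology.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical GC.IM.1-style functional test: verifies one unit of
// application behavior (formatting of a chat message) in isolation.
public class ChatMessageTest {

    // Stand-in for a real application class.
    static class ChatMessage {
        private final String sender;
        private final String body;
        ChatMessage(String sender, String body) {
            this.sender = sender;
            this.body = body;
        }
        String format() { return sender + ": " + body; }
    }

    @Test
    public void formatsSenderAndBody() {
        ChatMessage msg = new ChatMessage("alice", "hello");
        assertEquals("alice: hello", msg.format());
    }
}
```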
Human-Computer Interaction • Intersection with General Computing [Yip91] • Usability criteria [Shneiderman97] • Golden rules [Shneiderman97] • User interface technology [Shneiderman97] • Example Camelot codes (Camelot Code / Description): HCI.UC.1 (usability criteria: time to learn the system, i.e., how long a typical user takes to learn to use it), HCI.UITG.9 (error messages), HCI.UITG.10 (color)
Distributed Computing • Scalability • GC & HCI intersections • Race conditions • Deadlock • Temporal consistency • Example Camelot codes: DC.RC.1, DC.RC.2 (race conditions in centralized and decentralized architectures); DC.S.8, DC.S.9 (scalability: loose coupling, synchronization) • A race-condition test sketch follows below
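A minimal sketch of a DC.RC-style race-condition test, assuming a hypothetical shared session counter: many concurrent writers exercise the same update path so that a lost update shows up as a failed check. The counter class and thread counts are illustrative assumptions, not RCN code.

```java
import java.util.concurrent.CountDownLatch;

// Hypothetical DC.RC-style test: hammer a shared counter from many threads
// and check that no updates are lost. An unsynchronized counter will usually
// fail; a properly synchronized one should always pass.
public class RaceConditionTest {
    static class SessionCounter {
        private int count = 0;
        synchronized void increment() { count++; }  // drop 'synchronized' to expose the race
        synchronized int value() { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        final SessionCounter counter = new SessionCounter();
        final int threads = 32, perThread = 10_000;
        final CountDownLatch start = new CountDownLatch(1);
        final CountDownLatch done = new CountDownLatch(threads);

        for (int i = 0; i < threads; i++) {
            new Thread(() -> {
                try {
                    start.await();                       // line up all writers
                    for (int j = 0; j < perThread; j++) counter.increment();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    done.countDown();
                }
            }).start();
        }
        start.countDown();                               // release all writers at once
        done.await();

        int expected = threads * perThread;
        System.out.println(counter.value() == expected
                ? "PASS: no lost updates"
                : "FAIL: lost updates, got " + counter.value() + " expected " + expected);
    }
}
```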
Human-Human Interaction • Communication • Coordination • Coupling • Security • Awareness • Example Camelot codes (Camelot Code / Description): HHI.CM.1 (network bandwidth sufficient to support user communication); DC/HHI.5 (distributed computing scalability tests, derived from (DC.S ^ HHI.CP) -> DC/HHI.5); DC/HHI.6 (distributed computing temporal consistency tests, derived from (DC.TC ^ HHI.CP) -> DC/HHI.6) • A latency-budget sketch follows below
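A minimal sketch of a DC/HHI-style communication check: measure the round-trip time of small chat-sized messages and compare it against a latency budget for conversation to feel responsive. The loopback echo server, the 200 ms budget, and the message count are assumptions for illustration, not values taken from CAMELOT or RCN.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Hypothetical DC/HHI-style check: send short messages through a local echo
// server and report whether the worst observed round trip stays within budget.
public class LatencyBudgetTest {
    public static void main(String[] args) throws Exception {
        final int port = 5050;
        Thread echo = new Thread(() -> {
            try (ServerSocket server = new ServerSocket(port);
                 Socket client = server.accept();
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()));
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                String line;
                while ((line = in.readLine()) != null) out.println(line);  // echo each message back
            } catch (Exception ignored) { }
        });
        echo.start();
        Thread.sleep(200);  // crude wait for the echo server to come up

        try (Socket socket = new Socket("localhost", port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()))) {
            long worstMs = 0;
            for (int i = 0; i < 100; i++) {
                long t0 = System.nanoTime();
                out.println("chat message " + i);
                in.readLine();
                worstMs = Math.max(worstMs, (System.nanoTime() - t0) / 1_000_000);
            }
            System.out.println(worstMs <= 200
                    ? "PASS: worst round trip " + worstMs + " ms within budget"
                    : "FAIL: worst round trip " + worstMs + " ms exceeds 200 ms budget");
        }
    }
}
```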
Evaluation: RCN architecture (diagram): an ISServer and an RCNPublicServer serving multiple rcnClient instances
Evaluation: RCN • Rensselaer Collaborative Network • Characteristic of CSCW Software • Face-to-face, Synchronous, Meeting Support • Group Management, Chat, Shared Windowing • Floor Control, Asynchronous Multiuser Editing • Mature Application • Unit, System, User Acceptance, Daily Use Tested • Development considered it bug-free • Development offered to deliberately introduce bugs! • Amenable to Rebecca-J • Java • Source code available
Evaluation coverage: single-user components (General Computing, Human-Computer Interaction) and multi-user components (Distributed Computing, Human-Human Interaction)
Evaluation “i don't think any tester would have ever discovered that. simply for discovering that, i consider rebecca a success.” - J.J. Johns, lead developer for RCN
Limitations/Future Work • Single system evaluated • No formal testing methodology used by RCN team • Subset of CSCW technology • Large number of guidelines • Lack of ready-to-run test cases • Testing domains for specific technologies • Example: Chat Domain • Not a complete evaluation
CAMELOT: Discussion • Comparison to existing methodologies • SSM [Checkland89] • PETRA [Ross et al. 95] • SESL [Ramage99] • ECW Methodology [Drury et al. 99] • Part of a complete evaluation • Correct ordering of the evaluation is important, proceeding from technical to social concerns
Conclusion We defined CAMELOT, a methodology for testing CSCW applications. • Our methodology improves on prior art by providing a detailed focus on CSCW technology. • We created a CSCW software taxonomy that divides single-user concerns into general computing and human-computer interaction components, and multiuser concerns into distributed computing and human-human interaction components. • For each component we identified explicit validation techniques that can be used in both manual and automated testing. • Further, our techniques exploit intersections between the components to improve bug detection.