Large-Scale Collection of Application Usage Data to Inform Software Development
David M. Hilbert, David F. Redmiles
Information and Computer Science, University of California, Irvine
Irvine, California 92697-3425
{dhilbert,redmiles}@ics.uci.edu
http://www.ics.uci.edu/pub/eden/
Background • Design-use cycle: design → deploy software with expectation agents → use → report results of usage data and feedback collection → review → incorporate results of review into design • Design support - Argo • Usage data and feedback collection support - EDEM • Group memory and awareness - Knowledge Depot
Motivation • Expectations influence designs; designs embody expectations • Mismatches between expectations and how applications are actually used can lead to breakdowns • Identifying and resolving mismatches can help improve the fit between design and use • Identifying mismatches entails observing actual use and comparing it against expectations • Our research: techniques to enable large-scale incorporation of usage data and user feedback in development, to help uncover mismatches and improve the design-use fit
The Internet • On the positive side • cheap, rapid, large-scale distribution of software for evaluation • simple transport mechanism for usage information and feedback • use and development are increasingly concurrent • all of which should make incorporating usage data and user feedback easier • On the negative side • reduces opportunities for traditional user testing • increases the variety and distribution of users and usage situations • lack of tools and techniques for incorporating usage information and feedback on a large scale
Problems • Current approaches suffer from significant limitations • usability testing => limited scale (size, scope, location, duration) • beta testing => limited data quality (incentives, knowledge, detail) • monitoring => limited scale & data quality (abstraction, selection, reduction, context, evolution) • The subjective feedback paradox • users who are not having problems may provide feedback => suggests problems • users who are having problems may provide no feedback => suggests no problems • The impact assessment and resource allocation problem • what is the impact of suspected problems and proposed solutions? • where should development and evaluation effort be focused?
Approach • Developers • design applications and identify usage expectations • create agents to collect usage data and user feedback • Agents • deployed over the Internet to run on user computers (via HTTP) • perform abstraction, selection, reduction, context-capture as needed to allow actual use to be compared against expectations • report data and feedback to developers (via E-mail) • Data and feedback • inform further evolution of expectations, application, and agents
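To make the agent idea concrete, here is a minimal Java sketch of what an expectation agent might look like: a trigger condition over UI events plus a report built from the selected context. The names (ExpectationAgent, UIEvent, Report) and signatures are illustrative assumptions, not EDEM's actual API.

```java
// Illustrative sketch only: ExpectationAgent, UIEvent, and Report are assumed
// names, not EDEM's actual classes.
import java.util.ArrayList;
import java.util.List;

interface ExpectationAgent {
    // Trigger condition: does this event bear on the developer's expectation?
    boolean matches(UIEvent event);
    // Build a report from the abstracted, reduced context when the trigger fires.
    Report onTrigger(List<UIEvent> recentContext);
}

final class UIEvent {
    final String component;   // e.g., a widget identifier
    final String type;        // e.g., "actionPerformed"
    final long timestamp;
    UIEvent(String component, String type, long timestamp) {
        this.component = component;
        this.type = type;
        this.timestamp = timestamp;
    }
}

final class Report {
    final String agentId;
    final List<UIEvent> context;
    Report(String agentId, List<UIEvent> context) {
        this.agentId = agentId;
        this.context = new ArrayList<>(context);
    }
}
```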
Approach • Expectation-Driven Event Monitoring (EDEM)
Usage Scenario (Expectations) • A cargo query form
Usage Scenario (Usage Data) • Agents monitor use and collect data unobtrusively • Agents may post messages (optional)
Usage Scenario (User Feedback) • Users may learn more about expectations (optional) • Users may provide feedback (optional)
Usage Scenario (Review) • Data and feedback stored for later analysis and review
Architecture • Development computer: HTTP server holding agent specs (saved with a URL) and EDEM server storing collected data • User computer: Java Virtual Machine running the application's UI components alongside EDEM active agents • Agents observe top-level window & UI events and issue property queries / receive property values from the application • Agent specs are loaded via URL; agent reports are sent back to developers via e-mail
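As a rough illustration of the spec-loading side of this transport path, the sketch below fetches agent specifications over HTTP from the URL published by the developers. AgentSpecLoader and the one-spec-per-line format are assumptions; the e-mail return path is omitted.

```java
// Minimal sketch assuming agent specs are plain text fetched over HTTP;
// AgentSpecLoader is an illustrative name, not part of EDEM.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

final class AgentSpecLoader {
    // Fetch agent specifications saved with a URL on the development computer.
    static List<String> loadAgentSpecs(String specUrl) throws IOException {
        List<String> specs = new ArrayList<>();
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(new URL(specUrl).openStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                specs.add(line); // one spec per line (format assumed)
            }
        }
        return specs;
    }
}
```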
Reference Architecture • Pipeline: Data Capture → Abstraction, Selection, Context, Reduction → Data Packaging → Data Transport → Data Prep → Data Analysis • System model of UI & app: components, events, properties, methods • Analyst model of UI & app: features, dialogs, controls, user-supplied values, user tasks • Mapping relates the system model to the analyst model
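One way to read the pipeline is as a chain of small interfaces, one per stage, parameterized over the event representation. The signatures below are assumptions for illustration, not the architecture's actual definitions.

```java
// Sketch of the pipeline stages as interfaces, parameterized over the event
// representation E; signatures are assumed for illustration.
import java.util.List;

interface DataCapture<E>   { List<E> capture(); }
interface Abstraction<E>   { List<E> process(List<E> raw); }  // abstraction, selection, context, reduction
interface DataPackaging<E> { byte[] pack(List<E> events); }
interface DataTransport    { void send(byte[] payload); }
interface DataPrep<E>      { List<E> unpack(byte[] payload); }
interface DataAnalysis<E>  { void analyze(List<E> events); }
```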
Reference Architecture (Word IV) • Instrumentation intertwined with the application • Same pipeline and system/analyst models as the reference architecture
Reference Architecture (Office IV) • Event monitoring infrastructure • TestWizard and a database of the Office UI • Same pipeline and system/analyst models as the reference architecture
Reference Architecture (EDEM) • Event monitoring infrastructure • Expectation agents provide “pluggable” data abstraction, selection, context-capture, and reduction • Same pipeline and system/analyst models as the reference architecture
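A hedged sketch of what "pluggable" selection might look like: expectation agents installed as filters that run on the user's machine, so selection and reduction happen in context before anything is reported. PluggableFilterChain is a hypothetical name, not an EDEM class.

```java
// Hypothetical sketch: agents plug in as predicates so that selection and
// reduction happen in context, before any data leaves the user's machine.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

final class PluggableFilterChain<E> {
    private final List<Predicate<E>> agents = new ArrayList<>();

    void plugIn(Predicate<E> agentTrigger) {
        agents.add(agentTrigger);
    }

    // Keep only events that at least one installed agent considers relevant.
    List<E> select(List<E> events) {
        List<E> kept = new ArrayList<>();
        for (E event : events) {
            for (Predicate<E> agent : agents) {
                if (agent.test(event)) {
                    kept.add(event);
                    break;
                }
            }
        }
        return kept;
    }
}
```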
Conclusions • Usage expectations • help guide data collection • raise awareness of the implications of design decisions • Agent architecture • abstraction, selection, and reduction performed in context, prior to reporting • independent evolution of instrumentation and application • Combined • higher quality data (vs. beta tests) with fewer restrictions on evaluation size, scope, location, and duration (vs. usability tests)
Possible Applications • Use of long-term data about users and their behavior to support • adaptive UI and application behavior • “smarter” delivery of help, suggestions, and assistance • Support for monitoring of component-based systems in which • event and state information can be easily “tapped” • low-level data must be related to higher-level concepts of interest • available information exceeds what can practically be collected • data collection needs evolve more quickly than the application
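As one illustration of the component-monitoring idea, the sketch below taps low-level component events and forwards only those the analyst has mapped to higher-level concepts. ComponentTap and the string-based event names are assumptions for illustration.

```java
// Illustrative only: a tap that relates low-level component events to
// higher-level concepts of interest, forwarding just the mapped ones.
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

final class ComponentTap {
    private final Map<String, String> conceptMap = new HashMap<>();
    private final Consumer<String> sink;  // e.g., the data-collection pipeline

    ComponentTap(Consumer<String> sink) {
        this.sink = sink;
    }

    // Analyst-chosen mapping, e.g., "submitButton.pressed" -> "query submitted".
    void mapEvent(String lowLevelEvent, String concept) {
        conceptMap.put(lowLevelEvent, concept);
    }

    // Called whenever a component event fires; only mapped events are kept,
    // so collection stays within what can practically be gathered.
    void onEvent(String lowLevelEvent) {
        String concept = conceptMap.get(lowLevelEvent);
        if (concept != null) {
            sink.accept(concept);
        }
    }
}
```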
Future Research • Large-scale evaluation of the research in practice • maintenance issues, how data is used, cost-benefit analysis • Relationships to other requirements-related artifacts • e.g., use cases, cognitive walkthroughs, task analysis • Other types of analysis • changes in usage over time • usage involving multiple cooperating users • Reuse and adaptability of the infrastructure • generalize to support monitoring of arbitrary software components
For More Info • http://www.ics.uci.edu/pub/eden/