In an Experimental project, we may show the generality of results by testing hypotheses.
Experimental report (body):
• Introduction
• Hypothesis (proposed generality)
• Method (specifics)
• Results
• Statistical test (assessed generality; see the sketch below)
• Conclusions & Discussion
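A minimal sketch of the "statistical test" step, in Python, assuming SciPy is available; the two samples are hypothetical task-completion times for two interface conditions, not data from any real study.

```python
from scipy import stats

# Hypothetical task-completion times (seconds) for two interface conditions.
condition_a = [41.2, 38.5, 44.1, 39.8, 42.6, 40.3, 43.0, 37.9]
condition_b = [35.4, 33.9, 36.8, 34.2, 37.1, 32.8, 36.0, 34.7]

# Independent-samples t-test: does the observed difference generalise?
t_stat, p_value = stats.ttest_ind(condition_a, condition_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# If p falls below the chosen significance level (e.g. 0.05), the null
# hypothesis of no difference is rejected, supporting the proposed generality.
```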
In an Experimental or Simulation project, we may wish to estimate parameters.
Report (body):
• Introduction
• Problem (parameters to be estimated, etc.)
• Method
• Results
• Statistical analysis (form of the relationship, sensitivity, confidence limits, etc.; see the sketch below)
• Conclusions & Discussion
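A minimal sketch of parameter estimation with confidence limits, in Python, assuming NumPy and SciPy; the linear model and the observations are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    """Assumed form of the relationship: y = a * x + b."""
    return a * x + b

# Hypothetical observations, e.g. response time vs. system load.
x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1])

params, cov = curve_fit(model, x, y)
std_err = np.sqrt(np.diag(cov))

for name, value, err in zip(("a", "b"), params, std_err):
    # Approximate 95% limits under the usual normal approximation.
    print(f"{name} = {value:.2f} ± {1.96 * err:.2f}")
```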
In a Developmental project, how do we show the generality of results?
Report (body):
• Introduction
• Problem
• Method/Design Principles
• Results
• What goes here?
• Conclusions & Discussion
Evaluation of Results of a Development Project
• User Feedback (Quantitative approaches)
  • User Surveys (could test hypotheses)
  • Usability Labs (structured observation – descriptive statistics, but control is insufficient for hypothesis testing; see the sketch below)
• User Feedback (Qualitative) – can supplement surveys or labs with user opinion, depth interviews, focus groups, etc.
• Reflective Practice (Qualitative approach)
  • Design Rationale, aka Design Space Analysis (recording or reconstructing the decision process and the criteria used)
  • Claims Analysis (renders explicit the claims implicit in design decisions)
Resources restrict the number of evaluation methods used within one project! (NB "scoping" the Masters proposal.)
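A minimal sketch of the descriptive statistics such quantitative feedback yields, in Python (standard library only); the Likert-style ratings are hypothetical. As noted above, this summarises the data but does not by itself license hypothesis testing.

```python
from statistics import mean, median, stdev
from collections import Counter

# Hypothetical post-session ratings: 1 = very poor ... 5 = very good.
ratings = [4, 5, 3, 4, 4, 2, 5, 4, 3, 5, 4, 4]

print(f"n = {len(ratings)}")
print(f"mean = {mean(ratings):.2f}, median = {median(ratings)}, sd = {stdev(ratings):.2f}")
print("distribution:", dict(sorted(Counter(ratings).items())))
```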
Design Rationale can give generality to the results of a Development project.
Report body:
• Introduction
• Problem
• Method
• Results
• Design Rationale (relating design decisions to general principles)
• Conclusions & Discussion
Design Space Analysis: Background
• Design viewed as Problem Solving.
• Newell & Simon introduced the idea of Problem Solving as Search in a Problem Space – see their famous article "Computer Science as Empirical Inquiry: Symbols and Search" and their book Human Problem Solving.
• MacLean built the ideas of Design Space Analysis on this view of problem solving.
Basic idea: Search in a Problem Space (Newell & Simon's cryptarithmetic problems)
DONALD + GERALD = ROBERT
[Diagram: a search tree of partial assignments, e.g. testing D=T (false) and candidate values D=0, ..., D=5, and deriving constraints such as T=0 and R=2L+1]
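The puzzle can be framed as search over a space of digit assignments. The sketch below (Python, standard library only) does this by exhaustive search; Newell & Simon's interest was in the heuristic, constraint-driven search people actually use, so this only illustrates the problem-space formulation, not their method.

```python
from itertools import permutations

LETTERS = "DONALGERBT"  # the ten distinct letters in DONALD + GERALD = ROBERT

def value(word, env):
    """Numeric value of a word under a letter-to-digit assignment."""
    return int("".join(str(env[c]) for c in word))

def solve():
    # Exhaustive search: about 3.6 million assignments, so it takes a while.
    for digits in permutations(range(10)):
        env = dict(zip(LETTERS, digits))
        if 0 in (env["D"], env["G"], env["R"]):
            continue  # no leading zeros
        if value("DONALD", env) + value("GERALD", env) == value("ROBERT", env):
            return env

print(solve())  # the solution has D = 5, as in the diagram above
```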
Search in a Design Space
[Diagram: a tree of design decisions, each branching into subordinate decisions]
gIBIS and Questmap enrich this with hyperlinks
[Diagram: the same decision tree, with nodes connected by hyperlinks]
gIBIS Design Rationale Graph (from Sutcliffe, A., 2002, The Domain Theory, L. Erlbaum, Ch. 6)
[Diagram: the issue "Detect hazard" has three options – fixed sensors, spot checks by crew, and report when observed – each linked by + or - arcs to the criteria accuracy of location, cost, and reliability of report.]
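One way to make such a graph manipulable is to represent it as data. The sketch below (Python) encodes the issue, options and criteria from the figure; the particular +/- assessments are illustrative assumptions, not a faithful copy of Sutcliffe's figure.

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    assessments: dict = field(default_factory=dict)  # criterion -> "+" or "-"

@dataclass
class Issue:
    question: str
    options: list

hazard_issue = Issue(
    question="How should hazards be detected?",
    options=[
        Option("Fixed sensors", {"Accuracy of location": "+", "Cost": "-"}),
        Option("Spot checks by crew", {"Cost": "+", "Reliability of report": "-"}),
        Option("Report when observed", {"Cost": "+", "Reliability of report": "-"}),
    ],
)

for opt in hazard_issue.options:
    pros = [c for c, s in opt.assessments.items() if s == "+"]
    cons = [c for c, s in opt.assessments.items() if s == "-"]
    print(f"{opt.name}: supported by {pros}, opposed by {cons}")
```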
Decision and Evaluation
• MacLean: there are two relevant spaces to search:
  • the Decision Space (design alternatives)
  • the Evaluation Space (knowledge of criteria for preferring one alternative to another)
• QOC is an approach for representing Design Rationale in terms of these two spaces.
• The Evaluation Space relates to "Scientific Design" (Glynn) and to "Claims Analysis" (Carroll, Sutcliffe).
Glynn: Successful Innovation comes from Scientific Design
• Scientific Design = Knowing That.
• Replace IMPLICIT (TACIT) "Knowing How" with EXPLICIT theoretical knowledge of the general principles to which successful design must conform.
• McCarthy's "Look Ma, No Hands" disease comes from sticking with Know-How rather than advancing to Knowing That.
Claims Analysis
• We can advance Design Rationale by looking at the implicit claims in the way an artefact is designed.
• Carroll, Sutcliffe and others suggest building up libraries of generic claims for application areas.
• [It would be a good MSc project to develop a web-based tool that displays Design Rationale while helping the designer build up and organise a Claims Library.]
Illustration: a claim from Carroll (2003), fig. 15.11
Claim: Situating online interaction in a familiar place-based environment
+ evokes partners' history and common ground
- but important parts of the known context may be missing, making interaction awkward
Sources: Common Ground (Clark & Brennan, 1991); Distributed Cognition (Hutchins, 1995); Computer-Mediated Communication (Sproull & Kiesler, 1991).
Sutcliffe's Claims Knowledge Representation Schema
• Claim ID
• Title
• Author(s) of the original claim
• Artefact in which it originated
• Description
• Upsides
• Downsides
• Scenario
• Effect (contribution towards goals)
• Dependencies
• Issues
• Theory
• Relationships (inter-claim links)
• Scope
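A minimal sketch (Python) of this schema as a data structure, populated with the Carroll (2003) claim from the previous slide; the claim ID is invented for the example, and fields not given on the slides are left empty.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    claim_id: str
    title: str
    authors: list
    artefact: str = ""
    description: str = ""
    upsides: list = field(default_factory=list)
    downsides: list = field(default_factory=list)
    scenario: str = ""
    effect: str = ""                                    # contribution towards goals
    dependencies: list = field(default_factory=list)
    issues: list = field(default_factory=list)
    theory: list = field(default_factory=list)
    relationships: list = field(default_factory=list)   # inter-claim links
    scope: str = ""

place_claim = Claim(
    claim_id="C-01",  # hypothetical identifier
    title="Situating online interaction in a familiar place-based environment",
    authors=["Carroll"],
    upsides=["Evokes partners' history and common ground"],
    downsides=["Important parts of the known context may be missing, "
               "making interaction awkward"],
    theory=["Common Ground (Clark & Brennan, 1991)",
            "Distributed Cognition (Hutchins, 1995)",
            "Computer-Mediated Communication (Sproull & Kiesler, 1991)"],
)
print(place_claim.title)
```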
Design Space Analysis (MacLean)
• Design research should articulate 'Design Rationale'.
• Developed with respect to User Interface Design.
• Has general applicability.
• Structures Design Rationale in terms of:
  • Questions
  • Options (alternative answers to Questions)
  • Criteria (supporting or opposing Options)
Search in a Design Space
[Diagram repeated from earlier: a tree of design decisions, each branching into subordinate decisions]
Linking Design Space to Evaluation Space: QOC
[Diagram: a Question (Q) with its Options (O), each assessed against Criteria (C), feeding into the decision]
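A minimal sketch (Python) of one simple way to link the two spaces in code: options for a question are assessed against weighted criteria. MacLean's QOC is a qualitative notation, so the numeric weights and scores, and the example question and options, are assumptions made purely for illustration.

```python
# Evaluation space: criteria with illustrative weights.
CRITERIA_WEIGHTS = {"Learnability": 3, "Screen space": 1, "Speed for experts": 2}

# Decision space for a hypothetical question: how should users invoke commands?
# Each option is scored against each criterion: +1 supports, -1 opposes.
OPTIONS = {
    "Menu bar":           {"Learnability": +1, "Screen space": -1, "Speed for experts": -1},
    "Keyboard shortcuts": {"Learnability": -1, "Screen space": +1, "Speed for experts": +1},
    "Toolbar buttons":    {"Learnability": +1, "Screen space": -1, "Speed for experts": +1},
}

def total(scores):
    """Weighted sum of an option's assessments against the criteria."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(OPTIONS.items(), key=lambda kv: total(kv[1]), reverse=True):
    print(f"{name}: {total(scores):+d}")
# The chosen option (the decision) is thereby tied to explicit, inspectable criteria.
```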
Design Rationale & Claims
• Sutcliffe, A. (2002) The Domain Theory: Patterns for Knowledge and Software Reuse. LEA. (Includes a Claims Library.)
• Moran, T. & Carroll, J. (eds) (1996) Design Space Analysis. LEA.
• Preece, J. et al. (1994) Human-Computer Interaction. Addison-Wesley, pp. 523–535.
• Carroll, J. M. (ed.) (2003) HCI Models, Theories, and Frameworks. Morgan Kaufmann.
Evaluation in the "Real World"
• How can we evaluate results in the complex situations of the real world?
• Applications are increasingly networked and web-based.
• Virtual Organisations based on computer communications are increasingly common.
• The technical and social problems of evaluation affect one another: complex distributed systems.
Problems in evaluating Complex Distributed Systems (1)
• Generic problems in the financial justification of IT investments, e.g. the difficulty of identifying and quantifying costs and benefits.
• Workforce aspects
  • Users' varying skills, knowledge and views will influence the use of a collaborative system, and therefore affect how its performance is evaluated.
  • Employee reluctance to change, which is bound up with organisational culture, may result in system failure: but should this be evaluated as a failure of the technology?
• Confounding effects connected to culture, organisation and stakeholders, e.g. policies, norms, culture, hierarchy and incentive systems all influence the effective use of groupware and hence its evaluation.
Problems in evaluating Complex Distributed Systems (2)
• Organizational politics
  • The scale, scope or remit of evaluation activities may be limited by organizational politics.
  • Evaluation may not really be wanted: it may be politically essential for a project to be defined as a success rather than critically evaluated (cf. the NHS Care Records system).
• Identifying stakeholders, and capturing user requirements and goals
  • e.g. different stakeholders have different viewpoints about system design, and also about evaluation activities.
  • Evaluation must be with respect to requirements – so requirements need to be identified from the stakeholders.
Example: DIECoM Project Stakeholders
(A project on integrated SW and HW configuration management across the product lifecycle.)
Problems in evaluating Complex Distributed Systems (3)
• Difficulties with evaluation measurements/methodologies, e.g. there are no simple, low-cost but effective evaluation techniques for groupware.
• Capturing inappropriate data, e.g. some ex-ante and ex-post evaluations are carried out with estimated figures (see the sketch below).
• Lack of appropriate measurements/methodologies, e.g. many industries retain outdated and inappropriate procedures for investment appraisal.
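A minimal sketch (Python, standard library only) of a net-present-value appraisal, showing how sensitive the outcome can be to the estimated benefit figures mentioned above; all figures are hypothetical.

```python
def npv(rate, cashflows):
    """Net present value of yearly cash flows, with cashflows[0] at year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cashflows))

initial_cost = -250_000       # hypothetical year-0 outlay
estimated_benefit = 90_000    # hypothetical estimated yearly benefit, years 1-4

for error in (0.8, 1.0, 1.2):  # benefits 20% low / as estimated / 20% high
    flows = [initial_cost] + [estimated_benefit * error] * 4
    print(f"benefit x{error:.1f}: NPV = {npv(0.10, flows):,.0f}")
# A 20% error in the estimated benefits is enough to flip the NPV's sign here.
```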
Problems in evaluating Complex Distributed Systems (4)
• Surveys of user opinion may be useful, but could be biased or partial: good survey methods are needed.
• Focus groups and similar guided discussions may be used.
• Distributed focus groups using videoconferencing plus a shared whiteboard: another possible MSc project, using the VOTER research lab.
  • Suitable where the group members are distributed.
VOTER CSCW Suite
[Diagram of the lab set-up: videoconferencing, VOTER Net, SmartBoard, servers, wireless BS]
Problems in evaluating Complex Distributed Systems (5)
• Time to carry out an evaluation
  • e.g. can the evaluation be conducted at a time when it will be effective?
  • If the project is externally funded, can a summative evaluation be built into the project timescale?
  • If not, how effective is the project as a contribution to knowledge?
• Difficulties scaling evaluation techniques, e.g. evaluating a collaborative system can involve many participants over time, and it is difficult to scale their number.
• Understanding the scope and impact of adopting new technology, e.g. evaluators need to understand the different operating systems and protocols involved in different cases.
Problems in evaluating Complex Distributed Systems (6)
• Integration of different systems, e.g. incompatibility and a lack of common definitions, structures, protocols and business concepts across a Virtual Organization.
• Taxonomies within different collaborations/disciplines, e.g. participants invent their own taxonomies and processes by default within their collaborations; this is a particular problem for Virtual Organizations.