Augmenting Intellect, Part 2 Professor Michael Terry February 8, 2007
Talk Overview • Overview of HCI • Types of problems investigated • Goals of HCI • Augmenting intellect • Historical roots • Challenges • Ill-defined problems • Interface-level support • Open problems
Challenges to Augmenting Intellect • Evaluation • What is intellect and can we actually augment it? • How can we measure success/failure of tools intended to augment intellect? • Design • How can we reliably (consistently) design computational devices to augment intellect?
Evaluation: Challenges • Human intellect is a big, nebulous quality • No single agreed-upon method for measuring it • Makes it difficult to directly assess whether a computational device is “augmenting” intellect, and if so, how • Examples…
Evaluation: Challenges • Does Google augment our intellect? • Does MS Word? • Does Mathematica? • If they do, how do they augment our intellect? • And how can we compare which is “better” when there are alternatives?
Evaluation: Challenges • How can we determine the long-term effect of a tool on a person? • If the tool does more work, is faster… • Could it make the person less capable in the long term? • Could it make them less creative and more dependent on the tool? • Example: Calculators • Do they impede the learning of math skills? • Example: Coding • Who has ever tweaked some buggy lines, recompiled, and re-run it to see if the changes worked… • …Rather than understanding what the real problem was?
Evaluation: Summary • Must indirectly measure effects of tools on very specific tasks • Is user faster? • Does user expend less (perceived) effort? • Can user solve harder problems? • Is user more creative? • Avoid same pitfalls as calling a system “easy-to-use” or “usable”
Design: Challenges • What forms should augmentation take? • Should the tool be “smarter” and proactively solve problems? • Should it provide streamlined access to information? • Should it offer communication/collaboration facilities to connect people? • Should it automate mundane tasks so we can focus on higher-level issues? • Should it create new symbolic languages? • Wide range of assistance possible
Design: Challenges • Computational augmentation can exist at many scales • Interface mechanism-level augmentation • Undo, previews • Application-level • Mathematica, Matlab • Societal-level • Collaborative filtering (e.g., Amazon’s rating systems)
Design: Summary • Would like to develop generalizable principles to apply to the design of new tools intended to augment intellect • It’s not enough to have one-off solutions • Example: a general principle such as “every application should provide undo facilities to support experimentation,” versus a single one-off tool like Mathematica • Need to be able to reliably repeat our successes • Provide prescriptive guidelines for the design of new applications
Summarizing the Issues • What do we build? • For whom? • What level/degree/scale of intervention? • Do we make the machine or the person smarter? • How can we measure success in the short- and long-term?
Narrowing the Problem • Strengths/weaknesses of humans, computers suggest avenues for developing general principles
Computer Constants • Computers not too creative (right now), but… • Are fast at simple calculations • Have perfect memories (until your disk crashes…) • Can simulate potential future events • Can attend to multiple tasks simultaneously
Human Constants • People are inventive, creative, but… • Have limitations in memory • 7 +/- 2 chunks in short-term memory • Unreliable long-term memory • Have limited powers of prediction • Cannot reliably predict outcome of complex events • Cannot reliably work several steps ahead • Are slow to work out implications • Can focus on only one thing at a time • Extreme example: Writing concurrent software
Human-Machine Symbiosis • Human limitations lead to common problem solving strategies across domains • Common problem solving strategies can be (partially) enhanced by computational capabilities • This suggests generalizable user interface design principles are possible • But must first understand what these common problem solving strategies are…
Talk Overview • Overview of HCI • Types of problems investigated • Goals of HCI • Augmenting intellect • Historical roots • Challenges • Ill-defined problems • Interface-level support • Open problems
Ill-Defined Problems • Two general classes of problems people solve • Well-defined problems vs. ill-defined problems • Guess which is the more difficult…
Well-Defined Problems • Have: • Well-defined goal state • Well-defined evaluation function • Well-defined problem state • Well-defined operators to manipulate problem state • Examples: • Games (Sudoku, chess, checkers) • Caveat: • Some well-defined problems can appear ill-defined because of the size of the problem space they create (e.g., chess)
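To make the contrast concrete, here is a minimal sketch (not from the talk; the toy problem and all names are invented for illustration) of a well-defined problem expressed in code: the problem state, the operators, the goal state, and the evaluation function are all explicit, so a mechanical search can solve it.

```python
# Sketch of a well-defined problem: every element is explicit and machine-checkable.
from dataclasses import dataclass
from typing import Callable, List

@dataclass(frozen=True)
class State:
    value: int  # toy problem state: a single integer

def increment(s: State) -> State:   # operator 1
    return State(s.value + 1)

def double(s: State) -> State:      # operator 2
    return State(s.value * 2)

OPERATORS: List[Callable[[State], State]] = [increment, double]
GOAL = State(10)

def is_goal(s: State) -> bool:       # well-defined goal test
    return s == GOAL

def evaluate(s: State) -> int:       # well-defined evaluation function
    return abs(GOAL.value - s.value)

# Because everything is explicit, breadth-first search solves it mechanically.
def solve(start: State, max_depth: int = 10):
    frontier = [(start, [])]
    for _ in range(max_depth):
        next_frontier = []
        for s, path in frontier:
            if is_goal(s):
                return path
            for op in OPERATORS:
                next_frontier.append((op(s), path + [op.__name__]))
        frontier = next_frontier
    return None

print(solve(State(1)))  # e.g. a four-step sequence of increment/double operations
```

The point of the sketch is the contrast with the next slide: for an ill-defined problem, none of these four pieces can be written down so cleanly.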
Ill-Defined Problems • Described by Walter Reitman (1965) • Have: • Ill-defined goal state • Ill-defined evaluation function • Ill-defined problem state • Ill-defined operators to manipulate problem state • Examples: • Design a fuel-efficient car • Design software • Design anything with a given set of constraints • Write a paper
Ill-Defined Problem Implications • Complex problems, not wholly understood at the outset • No “textbook” way to solve problems, no “right” answer • People cope with the uncertainty by actively experimenting • Donald Schön (1983) dubs this experimental process reflection-in-action
Reflection-in-Action • Solution developed step-by-step • An informed act of improvisation • Process: • Person makes a “move” on a problem based on experience with solving similar problems • Person reflects on results, uses new information to decide next move • Consider write-compile-test cycles in software development • Method of managing complexity of solving an ill-defined problem
Broader Experimentation • Experienced practitioners further experiment by actively generating sets of possibilities • Sets allow one to explore design space, compare and contrast alternative solutions • Process is called Set-Based Problem Solving • [Diagram: point-based vs. set-based problem solving] • (Terminology from Ward et al., 1995, and Sobek et al., 1997)
We Have Our Target! • People actively apply intellect when solving ill-defined problems… • Ill-defined problems, of all forms, require active experimentation with the problem and its solution… • Experimentation includes generating sets of alternative solutions to compare and contrast… • Therefore, one general way to augment intellect is to support experimentation within user interfaces
Talk Overview • Overview of HCI • Types of problems investigated • Goals of HCI • Augmenting intellect • Historical roots • Challenges • Ill-defined problems • Interface-level support • Open problems
Supporting Experimentation • Experimentation with computer-based tools can happen in the near and long term • Choosing a command/action (near-term) • Choosing a command’s parameters (near-term) • Choosing a sequence of actions (long-term) • Developing sets of alternatives (long-term) • In what ways do computers currently support these practices?
Current Support • Previews • Undo/Redo • Save As… • Revision control • Idiosyncratic, manually driven techniques • Embed alternatives in the same document • Use of layers in Photoshop • Commenting out sections of code • Writing a new paragraph below the one it replaces • Demos…
Limitations • Demos • Word • Photoshop
Process Support Tools • Process support tools manage past, present, and potential future solution states • Are domain-independent tools and services • Undo/redo, preview, and revision control are all process-support tools • Three classes of process-support tools: • History Tools • Previewing Tools • What-If Tools
History Tools • Provide explicit support for managing versions of what is conceptually the same document • Revision control systems • CVS, Subversion, version tracking in MS Word… • Snapshotting capabilities (Photoshop) • Editable histories • Branching histories • Save As…/Duplicate (thin interface-level support)
History Tools Editable Graphical Histories (Kurlander & Feiner, 1988)
History Tools Branching History in Designer’s Outpost (Klemmer et al, 2002)
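As a rough illustration of what sits underneath editable and branching history tools, here is a sketch of a tree-structured history (an “undo tree”). It is a generic data-structure sketch, not the implementation of any of the systems cited above; all class and method names are invented.

```python
# Branching history: undo does not destroy the future; redo can follow any branch.
class HistoryNode:
    def __init__(self, state, parent=None, label=""):
        self.state = state        # snapshot of the document at this point
        self.parent = parent      # previous version (None for the root)
        self.children = []        # later versions branched from this one
        self.label = label        # e.g. the command that produced this state

class BranchingHistory:
    def __init__(self, initial_state):
        self.root = HistoryNode(initial_state, label="initial")
        self.current = self.root

    def apply(self, command_label, new_state):
        """Record a new state; if we had undone, this starts a new branch
        instead of overwriting the old future (unlike a linear undo stack)."""
        node = HistoryNode(new_state, parent=self.current, label=command_label)
        self.current.children.append(node)
        self.current = node
        return node

    def undo(self):
        if self.current.parent is not None:
            self.current = self.current.parent
        return self.current.state

    def redo(self, branch=0):
        """Redo along a chosen branch when several futures exist."""
        if self.current.children:
            self.current = self.current.children[branch]
        return self.current.state

# Usage: two alternative futures hang off the same past state.
h = BranchingHistory("blank page")
h.apply("type text", "Hello")
h.undo()
h.apply("insert image", "[img]")   # creates a branch; "Hello" is still kept
```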
Previewing Tools • Provide support for exploring potential future states without requiring full commitment • Previews • Design Galleries (Marks et al) • Suggestive interfaces (Takeo Igarashi, others) • Side Views (Terry & Mynatt)
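A small sketch of the previewing idea, in the spirit of the multiple simultaneous previews offered by Side Views: render the result of a command at several parameter values without committing any of them to the document. The helper names (apply_blur, render_thumbnail) are hypothetical placeholders, not APIs from the systems listed above.

```python
# Previewing: show candidate futures without mutating the current document state.
def preview_parameter_sweep(document, command, values, render_thumbnail):
    """Return (value, thumbnail) pairs; the original document is never modified."""
    previews = []
    for v in values:
        candidate = command(document, v)              # works on a derived copy
        previews.append((v, render_thumbnail(candidate)))
    return previews

# Stand-in functions for illustration only.
def apply_blur(doc, radius):
    return f"{doc} blurred r={radius}"                # placeholder transformation

def render_thumbnail(doc):
    return f"<thumb of '{doc}'>"

# Preview a blur at three radii side by side before choosing one.
for radius, thumb in preview_parameter_sweep("photo.png", apply_blur,
                                             [2, 5, 10], render_thumbnail):
    print(radius, thumb)
```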
Previewing Tools Suggestive Interface in Chateau (Igarashi & Hughes, 2001)
What-If Tools • Provide support for exploring sets of alternatives • Vary in how explicitly they support parallel versions • Undo • Spreadsheet convention • Subjunctive interface (Aran Lunzer) • Parallel Pies (Terry & Mynatt)
What-If Tools Subjunctive Interface (Lunzer & Hornbæk, 2003)
Limitations • Consider when/how experimentation takes place: • Choosing a command/action (near-term) • Choosing a command’s parameters (near-term) • Choosing a sequence of actions (long-term) • Developing sets of alternatives (long-term) • What are some limitations of existing tools?
Limitations: Choosing Actions • Many choices, little information • Difficult to predict results of future actions • Undo/redo, previews helpful, but provide only one view at a time
Limitations: Exploring Alternatives • User must manually manage the process of creating, managing, and comparing sets of alternatives • Save As… • Alternatives embedded within the same document • Branching and tagging in revision control systems
Summary of Tensions • Applications assume solution development through revision of a single solution instance • A linear problem-solving process • Enforced by equating an overall solution with a single document • The document is the only organizational structure for data • Not expressive enough to hold alternatives • Few mechanisms to explicitly support exploration • Costly in time and effort to explore alternatives • Ultimately can discourage exploration
Summary of Tensions • Interface designs assume this model of problem solving • But users often want to explore and experiment
Set-Based Interaction • Can reconceptualize interaction to explicitly support creation, manipulation, and evaluation of sets of alternatives • Let the user explore without worrying about saving and documenting each state and its derivation • Allow manipulation of multiple versions simultaneously • Demo…
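A minimal sketch of what a set-based interaction model might look like underneath, assuming the interface keeps a set of live alternatives rather than a single document and can broadcast a command to every active alternative at once. The class and method names are illustrative; this is not the actual Parallel Pies implementation.

```python
# Set-based interaction: the unit of work is a set of alternatives, not one document.
class AlternativeSet:
    def __init__(self, initial_state):
        self.alternatives = [initial_state]   # all versions kept alive in parallel
        self.active = {0}                     # indices the next command applies to

    def fork(self, index):
        """Duplicate one alternative so it can be explored independently."""
        self.alternatives.append(self.alternatives[index])
        self.active.add(len(self.alternatives) - 1)

    def apply(self, command):
        """Apply one command to every active alternative simultaneously."""
        for i in self.active:
            self.alternatives[i] = command(self.alternatives[i])

    def compare(self):
        """Return all alternatives side by side for evaluation."""
        return list(enumerate(self.alternatives))

# Usage: explore two crop choices, then adjust brightness on both at once.
doc = AlternativeSet("photo")
doc.fork(0)
doc.alternatives[1] += " (square crop)"
doc.apply(lambda d: d + " +brightness")
print(doc.compare())
```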
Evaluation • It’s not enough to identify a need and design to it • We need to know how the tool affects work practices • Do people develop better solutions? • Are they faster? • More satisfied? • Do they work with less effort? • Want to know what works and what doesn’t, so we can replicate successes and avoid the same old mistakes • But the nature of ill-defined problems muddies the waters...
Evaluation Challenges • Efficiency may not be an appropriate metric • User may take longer with tools, but arrive at better solutions • Difficult to assess whether one solution is better than another • Bane of ill-defined problems: No concrete evaluation function • People are slow to adopt new work practices • Tools may enable work practices that reliably result in better solutions • But you need to learn these new work practices • Example: Use of layers in Photoshop • Comparing problem solving strategies not easy
Study of Side Views and Parallel Pies • Task • Transform a start state to a known end state • Order of operations makes it difficult • Experimentation required • Mimics real-world tasks, but solution quality can still be judged • RMS difference in CIE-LUV colorspace (sketched below) • 5-minute time limit • 24 subjects
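One plausible reading of the quality metric named above, as a sketch: compute the root-mean-square of the per-pixel color differences between the user’s result and the known end state, with both images already converted to CIE-LUV. The exact formula used in the study may differ (e.g., per-channel rather than per-pixel norms), and the conversion step is omitted; this is only an illustration.

```python
import numpy as np

def rms_luv_difference(result_luv: np.ndarray, target_luv: np.ndarray) -> float:
    """Root-mean-square of per-pixel Euclidean distances in L*u*v* space.

    Both inputs are assumed to be H x W x 3 arrays already in CIE-LUV.
    Lower is better: 0 means the result exactly matches the known end state.
    """
    per_pixel = np.linalg.norm(result_luv - target_luv, axis=-1)  # H x W distances
    return float(np.sqrt(np.mean(per_pixel ** 2)))
```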
Results • People used sliders 50% less when multiple previews (Side Views) were available • No differences in efficiency or solution quality were found • But dramatically different problem-solving practices when Parallel Pies were present
Process Map • Derivation tree • Active state timeline • Command timeline • Conventions to indicate: • Undone/abandoned nodes • Duplicated states • Simultaneously active states