
Augmenting Intellect Part 2


Presentation Transcript


  1. Augmenting Intellect Part 2 Professor Michael Terry February 8, 2007

  2. Talk Overview • Overview of HCI • Types of problems investigated • Goals of HCI • Augmenting intellect • Historical roots • Challenges • Ill-defined problems • Interface-level support • Open problems

  3. Challenges to Augmenting Intellect • Evaluation • What is intellect and can we actually augment it? • How can we measure success/failure of tools intended to augment intellect? • Design • How can we reliably (consistently) design computational devices to augment intellect?

  4. Evaluation: Challenges • Human intellect is a big, nebulous quality • No single agreed-upon method for measuring it • Makes it difficult to directly assess whether a computational device is “augmenting” intellect, and if so, how • Examples…

  5. Evaluation: Challenges • Does Google augment our intellect? • Does MS Word? • Does Mathematica? • If they do, how do they augment our intellect? • And how can we compare which is “better” when there are alternatives?

  6. Evaluation: Challenges • How can we determine the long-term effect of a tool on a person? • If the tool does more work, is faster… • Could it make the person less capable in the long term? • Could it make them less creative and more dependent on the tool? • Example: Calculators • Do they impede the learning of math skills? • Example: Coding • Who has ever tweaked some buggy lines, recompiled, and re-run the code to see if the changes worked… • …rather than understanding what the real problem was?

  7. Evaluation: Summary • Must indirectly measure effects of tools on very specific tasks • Is user faster? • Does user expend less (perceived) effort? • Can user solve harder problems? • Is user more creative? • Avoid same pitfalls as calling a system “easy-to-use” or “usable”

  8. Design: Challenges • What forms should augmentation take? • Should tool be “smarter” and proactively solve problems? • Should it provide streamlined access to information? • Should it offer communication/collaboration facilities to connect people? • Should it automate mundane tasks so we can focus on higher-level issues? • Should it create new symbolic languages? • Wide range of assistance possible

  9. Design: Challenges • Computational augmentation can exist at many scales • Interface mechanism-level augmentation • Undo, previews • Application-level • Mathematica, Matlab • Societal-level • Collaborative filtering (e.g., Amazon’s rating systems)

  10. Design: Summary • Would like to develop generalizable principles to apply to the design of new tools intended to augment intellect • It’s not enough to have one-off solutions • Example: the general principle “every application should provide undo facilities to support experimentation” vs. a single one-off tool like Mathematica • Need to be able to reliably repeat our successes • Provide prescriptive guidelines for the design of new applications

  11. Summarizing the Issues • What do we build? • For whom? • What level/degree/scale of intervention? • Do we make the machine or the person smarter? • How can we measure success in the short- and long-term?

  12. Narrowing the Problem • Strengths/weaknesses of humans, computers suggest avenues for developing general principles

  13. Computer Constants • Computers not too creative (right now), but… • Are fast at simple calculations • Have perfect memories (until your disk crashes…) • Can simulate potential future events • Can attend to multiple tasks simultaneously

  14. Human Constants • People are inventive, creative, but… • Have limitations in memory • 7 +/- 2 chunks in short-term memory • Unreliable long-term memory • Have limited powers of prediction • Cannot reliably predict outcome of complex events • Cannot reliably work several steps ahead • Are slow to work out implications • Can focus on only one thing at a time • Extreme example: Writing concurrent software

  15. Human-Machine Symbiosis • Human limitations lead to common problem solving strategies across domains • Common problem solving strategies can be (partially) enhanced by computational capabilities • This suggests generalizable user interface design principles are possible • But must first understand what these common problem solving strategies are…

  16. Talk Overview • Overview of HCI • Types of problems investigated • Goals of HCI • Augmenting intellect • Historical roots • Challenges • Ill-defined problems • Interface-level support • Open problems

  17. Ill-Defined Problems • Two general classes of problems people solve • Well-defined problems vs. ill-defined problems • Guess which is the more difficult…

  18. Well-Defined Problems • Have: • Well-defined goal state • Well-defined evaluation function • Well-defined problem state • Well-defined operators to manipulate problem state • Examples: • Games (Sudoku, chess, checkers) • Caveat: • Some well-defined problems can appear ill-defined because of the sheer size of the problem space they create (e.g., chess)
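To make the four components concrete, here is a minimal sketch, in Python with illustrative names (none of them from the lecture), of a well-defined problem written down directly as data:

```python
from dataclasses import dataclass
from typing import Any, Callable, Iterable

@dataclass
class WellDefinedProblem:
    """The four components that make a problem well-defined."""
    initial: Any                                # the problem state
    is_goal: Callable[[Any], bool]              # the goal state, as a test
    evaluate: Callable[[Any], float]            # the evaluation function
    successors: Callable[[Any], Iterable[Any]]  # the operators on the state

# Toy instance: reach 10 from 0 using the operators +1 and +3.
counting = WellDefinedProblem(
    initial=0,
    is_goal=lambda s: s == 10,
    evaluate=lambda s: -abs(10 - s),        # closer to 10 scores higher
    successors=lambda s: (s + 1, s + 3),
)
```

An ill-defined problem, by contrast, is precisely one where some of these four fields cannot be written down.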

  19. Ill-Defined Problems • Described by Walter Reitman (1965) • Have: • Ill-defined goal state • Ill-defined evaluation function • Ill-defined problem state • Ill-defined operators to manipulate problem state • Examples: • Design a fuel-efficient car • Design software • Design anything with a given set of constraints • Write a paper

  20. Ill-Defined Problem Implications • Complex problems, not wholly understood at the outset • No “textbook” way to solve them, no “right” answer • People cope with the uncertainty by actively experimenting • Donald Schön (1983) dubs this experimental process reflection-in-action

  21. Reflection-in-Action • Solution developed step-by-step • An informed act of improvisation • Process: • Person makes a “move” on a problem based on experience with solving similar problems • Person reflects on results, uses new information to decide next move • Consider write-compile-test cycles in software development • Method of managing complexity of solving an ill-defined problem

  22. Broader Experimentation • Experienced practitioners further experiment by actively generating sets of possibilities • Sets allow one to explore the design space and compare and contrast alternative solutions • Process is called Set-Based Problem Solving, in contrast to Point-Based Problem Solving (terminology from Ward et al., 1995, and Sobek et al., 1997)

  23. We Have Our Target! • People actively apply intellect when solving ill-defined problems… • Ill-defined problems, of all forms, require active experimentation with the problem and its solution… • Experimentation includes generating sets of alternative solutions to compare and contrast… • Therefore, one general way to augment intellect is to support experimentation within user interfaces

  24. Talk Overview • Overview of HCI • Types of problems investigated • Goals of HCI • Augmenting intellect • Historical roots • Challenges • Ill-defined problems • Interface-level support • Open problems

  25. Supporting Experimentation • Experimentation with computer-based tools can happen in the near- and long-term • Choosing a command/action (near-term) • Choosing a command’s parameters (near-term) • Choosing a sequence of actions (long-term) • Developing sets of alternatives (long-term) • In what ways do computers currently support these practices?

  26. Current Support • Previews • Undo/Redo • Save As… • Revision control • Idiosyncratic, manually-driven techniques • Embedding alternatives in the same document • Using layers in Photoshop • Commenting out sections of code • Writing a new paragraph below the one it will replace • Demos…
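The commenting-out practice is familiar from everyday programming; a hypothetical snippet showing an alternative embedded in the same document, disabled but kept around for comparison:

```python
def smooth(values):
    # Current choice: unweighted moving average over a 3-item window.
    return [sum(values[i:i + 3]) / 3 for i in range(len(values) - 2)]

    # Alternative kept in the same document for later comparison,
    # disabled by commenting it out:
    # weights = (0.25, 0.5, 0.25)
    # return [sum(w * v for w, v in zip(weights, values[i:i + 3]))
    #         for i in range(len(values) - 2)]
```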

  27. Limitations • Demos • Word • Photoshop

  28. Process Support Tools • Process support tools manage past, present, and potential future solution states • Are domain-independent tools and services • Undo/redo, preview, and revision control are all process support tools • Three classes of process support tools • History Tools • Previewing Tools • What-If Tools

  29. History Tools • Provide explicit support for managing versions of conceptually the same document • Revision control systems • CVS, Subversion, version tracking in MS Word… • Snapshotting capabilities (Photoshop) • Editable histories • Branching histories • Save As…/Duplicate (thin interface-level support)
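A branching history can be sketched as a tree of document states: undo walks to the parent, and redo can follow any child branch. A minimal illustrative sketch, not any particular system's implementation:

```python
class HistoryNode:
    """One saved state in a branching history tree."""
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []            # each child is an alternative future

class BranchingHistory:
    def __init__(self, initial_state):
        self.current = HistoryNode(initial_state)

    def record(self, new_state):
        """An edit adds a new branch under the current node."""
        node = HistoryNode(new_state, parent=self.current)
        self.current.children.append(node)
        self.current = node

    def undo(self):
        """Step back to the parent; the abandoned branch is kept, not lost."""
        if self.current.parent is not None:
            self.current = self.current.parent
        return self.current.state

    def redo(self, branch=0):
        """Step forward along one of possibly several child branches."""
        if self.current.children:
            self.current = self.current.children[branch]
        return self.current.state
```

Unlike a linear undo stack, undoing and then editing creates a sibling branch instead of discarding the redo history.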

  30. History Tools Editable Graphical Histories (Kurlander & Feiner, 1988)

  31. History Tools Branching History in Designer’s Outpost (Klemmer et al, 2002)

  32. Previewing Tools • Provide support for exploring potential future states without requiring full commitment • Previews • Design Galleries (Marks et al) • Suggestive interfaces (Takeo Igarashi, others) • Side Views (Terry & Mynatt)
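The mechanism underlying previews: if a command is a pure function of the document, then previewing is just evaluating the command and displaying, rather than storing, its result. A toy sketch (the brighten command is hypothetical):

```python
def brighten(image, amount):
    """A toy command; `image` is a list of pixel intensities (0-255)."""
    return [min(255, p + amount) for p in image]

def preview(image, command, **params):
    """Compute what the command would produce without committing it."""
    return command(image, **params)

image = [10, 120, 250]
candidate = preview(image, brighten, amount=30)  # inspect, don't commit
image = brighten(image, amount=30)               # commit once satisfied
```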

  33. Design Galleries (Marks et al, 1997)

  34. Previewing Tools Suggestive Interface in Chateau (Igarashi & Hughes, 2001)

  35. What-If Tools • Provide support for exploring sets of alternatives • Vary in how explicit support is for parallel versions • Undo • Spreadsheet convention • Subjunctive interface (Aran Lunzer) • Parallel Pies (Terry & Mynatt)
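The common thread across these tools is evaluating one command under several parameter settings at once. A hedged sketch of the idea, reusing the toy `brighten` command from the previewing sketch above (not the cited systems' actual APIs):

```python
def what_if(image, command, alternatives):
    """Produce one candidate result per parameter setting, so the
    user can compare them side by side before committing to any."""
    return {name: command(image, **params)
            for name, params in alternatives.items()}

candidates = what_if([10, 120, 250], brighten, {
    "subtle":   {"amount": 10},
    "moderate": {"amount": 40},
    "strong":   {"amount": 80},
})
```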

  36. What-If Tools

  37. What-If Tools Subjunctive Interface (Lunzer & Hornbaek, 2003)

  38. Limitations • Consider when/how experimentation takes place: • Choosing a command/action (near-term) • Choosing a command’s parameters (near-term) • Choosing a sequence of actions (long-term) • Developing sets of alternatives (long-term) • What are some limitations of existing tools?

  39. Limitations: Choosing Actions • Many choices, little information • Difficult to predict results of future actions • Undo/redo and previews are helpful, but provide only one view at a time

  40. Limitations: Exploring Alternatives • User must manually manage process of creating, managing, comparing sets of alternatives • Save As… • Alternatives embedded within same document • Branch, tag in revision control systems

  41. Summary of Tensions • Applications assume solution development through revision of a single solution instance • A linear problem solving process • Enforced by equating an overall solution with a single document • The document is the only organizational structure for data • Not expressive enough to hold alternatives • Few mechanisms to explicitly support exploration • Costly in time and effort to explore alternatives • Ultimately can discourage exploration

  42. Summary of Tensions • Interface designs assume this linear model of problem solving, but users often want to explore and experiment

  43. Set-Based Interaction • Can reconceptualize interaction to explicitly support creation, manipulation, and evaluation of sets of alternatives • Let user explore without worrying about saving, documenting each state and its derivation • Allow manipulation of multiple versions simultaneously • Demo…
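One way to read set-based interaction in code: the alternatives become first-class, simultaneously live versions, and commands apply to the whole set at once. An illustrative sketch only (not the Parallel Pies implementation), again using the toy `brighten` command from earlier:

```python
class VariantSet:
    """A set of simultaneously active versions of one document."""
    def __init__(self, base):
        self.variants = {"original": list(base)}

    def fork(self, name, source="original"):
        """Spin off a new alternative derived from an existing one."""
        self.variants[name] = list(self.variants[source])

    def apply_to_all(self, command, **params):
        """Manipulate every active version at once."""
        self.variants = {name: command(v, **params)
                         for name, v in self.variants.items()}

# Explore variants in parallel, without manual Save As… bookkeeping.
versions = VariantSet([10, 120, 250])
versions.fork("bright")
versions.apply_to_all(brighten, amount=20)
```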

  44. Evaluation • It’s not enough to identify a need and design to it • We need to know how the tool affects work practices • Do people develop better solutions? • Are they faster? • More satisfied? • Work with less effort? • Want to know what works, what doesn’t, so we can replicate successes, avoid same old mistakes • But nature of ill-defined problems muddies the waters...

  45. Evaluation Challenges • Efficiency may not be an appropriate metric • User may take longer with tools, but arrive at better solutions • Difficult to assess whether one solution is better than another • Bane of ill-defined problems: No concrete evaluation function • People are slow to adopt new work practices • Tools may enable work practices that reliably result in better solutions • But you need to learn these new work practices • Example: Use of layers in Photoshop • Comparing problem solving strategies not easy

  46. Study of Side Views and Parallel Pies • Task • Transform a start state to a known end state • Order of operations makes it difficult • Experimentation required • Mimics real-world tasks, but solution quality can still be judged objectively • RMS difference in CIE-LUV colorspace (see the formula below) • 5 minute time limit • 24 subjects
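The quality metric can be written out explicitly. One standard formulation, assuming the metric is the per-pixel Euclidean distance in CIE-LUV averaged over all N pixels: with (L_i, u_i, v_i) the CIE-LUV coordinates of pixel i in the subject's result and hats denoting the known end state,

$$\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(L_i - \hat{L}_i)^2 + (u_i - \hat{u}_i)^2 + (v_i - \hat{v}_i)^2\right]}$$

Lower values mean the submitted image is closer to the target, giving an objective solution-quality score despite the open-ended feel of the task.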

  47. Results • People used sliders 50% less when multiple previews (Side Views) were available • No differences in efficiency or quality of solution were found • But dramatically different problem solving practices emerged when Parallel Pies were present

  48. Process Map

  49. Process Map • Derivation tree • Active state timeline • Command timeline • Conventions to indicate: • Undone/abandoned nodes • Duplicated states • Simultaneously active states
