The Evolution of Human-Performance Modeling Techniques for Usability Uri Dekel (udekel@cs.cmu.edu) Presented in “Methods of Software Engineering”, Fall 2004
Outline • Motivation and scope • From early models to GOMS • Stimulus-Response-Controller models • Information Processing models • GOMS variants: what to use? • SW tools for GOMS • Lessons learned…
Motivation • Minor timing differences may have a major economic impact • Consider a call center with 100 employees, operating around the clock • Average call length = 1 min • 144,000 calls per day for the entire call center • Improvement of 2 seconds per call • 80 person-hours saved per day • 29,200 person-hours saved per year
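The arithmetic above can be verified in a few lines of Python (a minimal sketch; the 144,000-calls figure assumes the center handles calls around the clock):

```python
# Sanity check of the call-center savings arithmetic from the slide.
employees = 100
call_length_min = 1          # average call length, minutes
saving_per_call_s = 2        # improvement per call, seconds

# 100 employees x 1440 minutes/day, one call per minute each
calls_per_day = employees * (24 * 60) // call_length_min
saved_hours_per_day = calls_per_day * saving_per_call_s / 3600
saved_hours_per_year = saved_hours_per_day * 365

print(calls_per_day)         # 144000
print(saved_hours_per_day)   # 80.0
print(saved_hours_per_year)  # 29200.0
```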
Where can we optimize? • Moore’s law works for HW and SW • In the past, system reaction time was slow • Databases, networks and GUIs were slow • Now practically instantaneous • Moore’s law does not apply to humans… • But usability has significant impact on performance
Motivation • Problems and solutions: • “How to design more usable interfaces?” • Partial solution: usability methods and principles • “How to ensure a design can be used effectively?” • Inadequate solution: use intuition • Inadequate solution: functional prototypes in a design-implement-test-redesign cycle • Expensive and time-consuming, especially for hardware • Possible solution: paper prototyping complemented by quantitative models for predicting human performance
Motivation • We need to predict performance on a system which is not yet available • Nielsen, 1993: • “A Holy Grail for many usability scientists is the invention of analytic methods that would allow designers to predict the usability of a user interface before it has even been tested.” • “Not only would such a method save us from user testing, it would allow for precise estimates of the trade-offs between different design solutions without having to build them.” • “The only thing that would be better would be a generative theory of usability that could design the user interface based on a description of the usability goals to be achieved.”
Cognitive modeling • Definition: • producing computational models for how people perform tasks and solve problems, based on psychological principles • Uses: • Predicting task duration and error potential • Adapting interfaces by anticipating behavior
Outside our Scope • Predicting the intent of the user • Model the activities of the user • Relies on AI techniques to make predictions • Useful for intelligent and adaptable UIs • Improves learning curve • But not always successful…
Scope • Predicting the usability of the UI • Qualitative models • Will the UI be intuitive and simple to learn? • Is the UI aesthetic and consistent? • Will the user experience be positive? • Quantitative models • How long will it take to become proficient in using the UI? • How long will it take a skilled user to accomplish the task?
Goal of this talk • The goal is NOT: • To introduce you to GOMS and its variants • You got that from the reading • The goal is: • To present the theoretical foundations and evolution of the models that led to GOMS • To show tools that support GOMS • To understand how it could be useful to you
Stimulus-Response-Controller • Research predates computer science • Attempts to improve the usability of interactive electronic systems such as control panels, radar displays, air traffic control, etc. • Early models developed by experimental psychology researchers in the 1950s • Limited to single, short perceptual and motor activities • Based on information and communication theory • The human is modeled as a simple device that responds to stimuli by carrying out a motor behavior • Based on Shannon’s definitions of entropy and channel capacity
Information Theory 101: Entropy • The entropy of a random event is a measure of its actual randomness • High entropy if unpredictable: • The winning numbers for this week’s lottery • Same probability for all results • Low entropy if predictable: • What lunch will be served at the next seminar? • High probability of pizza, low probability of sushi
Information Theory 101: Entropy • Entropy can measure the amount of information in a message • Consider a message encoded as a string of bits • Is the next bit 0 or 1? • High entropy • What if we add a parity bit? • Lower entropy for the parity bit • What if we replicate every bit once? • Even lower for the replicated bits
Information Theory 101: Entropy • Formally: • Let X be a random event with n possible values x₁, …, xₙ, occurring with probabilities p₁, …, pₙ • The entropy of X is: H(X) = −Σᵢ pᵢ log₂(pᵢ)
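The entropy definition translates directly into a small Python helper (a minimal sketch; the example distributions echo the lottery and lunch examples from the earlier slide):

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum(p_i * log2(p_i)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair "lottery" over 4 equally likely outcomes: maximal entropy
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits

# A predictable lunch menu (90% pizza, 10% sushi): low entropy
print(entropy([0.9, 0.1]))                # ~0.47 bits

# A certain outcome carries no information
print(entropy([1.0]))                     # 0.0 bits
```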
Information Theory 101: Channel Capacity • Information rate in a perfect channel • n = bits per second • H = entropy per bit • R = nH • R=n if entropy is 1 (pure data) • The channel bandwidth curbs the rate
Information Theory 101: Channel Capacity • Information rate in an analog channel • Curbed by bandwidth and noise • We can fix some errors using different encodings • Is there a limit to how much we can transfer?
Information Theory 101: Channel Capacity • Shannon’s definition of channel capacity • The maximal information rate possible on the channel • For every R < C, there is an encoding which allows the message to be sent with an arbitrarily small error rate • Theoretical maximum effectiveness of error-correcting codes • Does not tell us what the code is • Capacity formula: C = B log₂(1 + SNR) • B = bandwidth • SNR = signal-to-noise ratio
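The Shannon-Hartley capacity formula is easy to compute (a minimal sketch; the 3 kHz / SNR = 1000 voice-line figures are illustrative assumptions, not from the slides):

```python
import math

def channel_capacity(bandwidth_hz, snr):
    """Shannon-Hartley channel capacity C = B * log2(1 + SNR), in bits/s.
    snr is the linear signal-to-noise ratio (not dB)."""
    return bandwidth_hz * math.log2(1 + snr)

# e.g. a 3 kHz analog voice line with SNR of 1000 (30 dB)
print(channel_capacity(3000, 1000))  # ~29,900 bits/s
```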
Fitts’ Law • Paul Fitts studied human limitations in performing various movement tasks • Measured the difficulty of movement tasks in information-metric bits • A movement task is the transmission of information through the “human channel” • But this channel has limited capacity
Fitts’ Law • Fitts’ law [1954] predicts movement time from a starting point to a specific target area • Difficulty index: ID = log₂(2A / W) • A = distance to target center, W = target width • Movement time: MT = a + b · ID • a = device-dependent intercept • b = device-dependent slope, the reciprocal of the Index of Performance (IP) • The coefficients are measured experimentally • e.g., the mouse has a higher IP than a joystick
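The law above can be sketched in Python (the coefficients a and b here are hypothetical, chosen only to make the arithmetic readable; real values are measured experimentally for each pointing device):

```python
import math

def fitts_mt(a, b, distance, width):
    """Fitts' law: MT = a + b * log2(2A / W), with A = distance
    to the target center and W = target width."""
    index_of_difficulty = math.log2(2 * distance / width)  # bits
    return a + b * index_of_difficulty

# Hypothetical device coefficients: a = 0.1 s, b = 0.1 s/bit.
# A near, wide button vs. a far, narrow one:
print(fitts_mt(0.1, 0.1, 100, 50))    # ID = 2 bits -> 0.3 s
print(fitts_mt(0.1, 0.1, 800, 12.5))  # ID = 7 bits -> 0.8 s
```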
Fitts’ Law Implications • Primary implication: • Big targets at close distance are acquired faster than small targets at long range • Used to empirically test certain designs • Theoretical rationale for many design principles
Fitts’ Law Implications • Should buttons on a stylus-based touch screen (e.g., a PDA) be smaller, larger, or the same size as buttons on a mouse-based machine? • Answer: larger, because precisely pointing the stylus at small on-screen targets is more difficult
Fitts’ Law Implications • Why is the context-sensitive menu (“right-click menu” in Windows) located close to the mouse cursor? • Answer: the mouse needs to travel a shorter distance
Fitts’ Law Implications • Which is better for a context-sensitive menu, a pie menu or a linear menu? • Answer: if all options have equal probabilities, a pie menu. If one option is highly dominant, a linear menu
Fitts’ Law Implications • In Microsoft Windows, why is it easier to close a maximized window than to close a regular window? • Answer: the cursor cannot leave the screen, so the close box in the corner of a maximized window behaves as a target of effectively infinite width: you can aim at the corner without overshooting
Fitts’ Law Implications • Why use mouse gestures to control applications? • Answer: a mouse gesture starts at the current location and requires limited movement, compared to traveling to and acquiring the necessary buttons
Fitts’ Law Limitations • Addresses only target distance and size, ignoring other effects • Applies only to single-dimensional targets • Later research extended it to 2D and 3D • Considers only human motor activity • Cannot account for software acceleration • Does not account for training • An insignificant effect in such low-level operations
Fitts’ Law Limitations • Only supports short paths • Research provided methods for complicated paths, using integration • But most importantly: • Operates at a very low level • Difficult to extend to complex tasks
Hick’s Law • Humans have a non-zero reaction time • The situation is perceived, then a decision is made • Hick’s law predicts decision time as a function of the number of choices • Humans try to subdivide the problem • Binary rather than linear search • For n equally probable choices: T = a log₂(n + 1) • the coefficient a is measured experimentally • For choices with differing probabilities pᵢ: T = a Σᵢ pᵢ log₂(1/pᵢ + 1)
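Both forms of Hick's law can be sketched in Python (the coefficient a = 0.2 s/bit is a hypothetical value used only for illustration; real values are measured experimentally):

```python
import math

def hick_equal(a, n):
    """Decision time for n equally likely choices: T = a * log2(n + 1)."""
    return a * math.log2(n + 1)

def hick_unequal(a, probs):
    """Generalized (Hick-Hyman) form for unequal choice probabilities:
    T = a * sum(p_i * log2(1/p_i + 1))."""
    return a * sum(p * math.log2(1 / p + 1) for p in probs)

# Hypothetical coefficient a = 0.2 s/bit:
print(hick_equal(0.2, 7))                    # log2(8) = 3 bits -> 0.6 s
print(hick_unequal(0.2, [0.5, 0.25, 0.25]))  # faster when one choice dominates
```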
Hick’s Law • Hick’s law holds only if a selection strategy is possible • e.g., alphabetical listing • Intuitive implications: • Split menus into categories and groups • An unfamiliar command should be close to related familiar commands
Hick’s Law Example • The next slide presents a screenshot from Microsoft Word • How fast can you locate the toolbar button for the “WordArt character spacing” command?
Limitations of the Early Models • Developed before interactive computer systems became prevalent • Use metaphors of analog signal processing • But human-computer interaction is continuous • It cannot be broken down into isolated stimulus-response events • Human processing involves parallelism
Information Processing Models • Developed in the 1960s • Combine psychology and computer science • Humans perform sequences of operations on symbols • Generic structure: input → perceptual processing → cognitive processing → motor output
Information Processing Models • Models from general psychology are fitted to the results of actual experiments • Not predictive for other systems • Zero-parameter models can provide predictions for a future system • Parameterized only by information from existing systems • e.g., typing speed, difficulty index, etc.
Model Human Processor • [Card, Moran and Newell, 1983] • A framework for zero-parameter models of specific tasks • Humans process visual and auditory input • Output is motor activity • Unique in its decomposition into three systems • Each consists of a processor and memory • They can operate serially or in parallel • Each has unique rules of operation
Model Human Processor Properties • Processor • Cycle time limits the amount of processing • Memory • Relatively permanent long-term memory (LTM) • Short-term memory • Consists of a small number of activated LTM chunks • There are “seven plus or minus two” chunks • Every memory unit has: • capacity, decay time, information type
Perceptual System • Input arrives from perceptual receptors sensing the outside world • Placed in visual and auditory stores • Stored close to physical form • “bitmaps” and “waveforms” rather than symbols • The processor encodes the items symbolically for storage in LTM • Memory and processing limitations lead to memory loss • Attention directs which items are saved
Cognitive System • Responsible for making decisions and scheduling motor operations • Performs a recognize-act cycle • Uses association to activate LTM chunks • Acts by modifying data in working memory • Might require several cycles