Learn how to perform exclusion analysis on your website, make changes based on the results, and prepare for user trials to improve usability.
Usability with Project Lecture 11 – 22/10/08 Dr. Simeon Keates
Exercise – part 1 • Perform an exclusion analysis on your website • (As you did on Wednesday) • Prepare a summary of your calculation • Assumptions • Levels of capability required • Exclusion (total and percentage) for 16+ and 75+ • Make any changes necessary to your site • Plus any outstanding changes from the last couple of weeks
And finally… • Turn to the back page of today’s handout…
Feedback from survey Likes: • Concrete examples / case studies • Project and relationship to exercises • Course reading material Room for improvement: • Reduce number of presentations on Friday… • Need more Web material… • Need less Web material… • More guest lecturers
Feedback from survey Friday morning presentations: • Will now be 3 groups • You will find out each Friday which groups are presenting • Remember, the purpose is to get you used to presenting usability findings
The importance of user trials • User trials are the “gold standard” for finding usability issues • They can be expensive to set up • You need to plan carefully to ensure maximum benefit from them
Prepare, prepare, prepare • As the old mantra goes: Fail to prepare, prepare to fail • It is essential to plan well • You need to identify: • What do we want the users to do? • What with? • What results are we expecting to collect? • How are we going to analyse them? • What are we going to do with them? • How can things go wrong???
Preparing for a user trial… • The next slides show a corporate-centric view of how to do usability • Source: “Observing the User Experience” by Mike Kuniavsky • Issues to consider: • The steps described are very typical in industry • However, are they the “best”?
Preparing for a user trial… • Stage 1: Deciding what to do • Stage 2: Setting up a schedule • Stage 3: Estimating budgets • Stage 4: Preparing the research plan • Stage 5: Maintaining the research plan
Stage 1 – Deciding what to do • Step 1 – Collect issues and present them as goals • Step 2 – Prioritise the goals • Step 3 – Re-write the goals as questions to be answered
Step 1 – Collecting issues and presenting them as goals • 1 – Identify the stakeholders, e.g. • Product development managers • Interaction designers / information architects • Marketing • Customer relationship managers • Corporate branding • Users!!!
Collecting issues and presenting them as goals • Examples of key questions to ask: • 1 – In terms of what you do on a day-to-day basis, what are the goals of the product? • 2 – Are there ways your current solution is not meeting those goals? If so, what are they? • 3 – Are there questions you want answered about the new product? If so, what are they? • Make a list of the accumulated goals…
Step 2 – Prioritising the goals • You should now have corporate and user goals • The prioritisation of those goals may be “obvious” • If not, you need a method to help you decide
Prioritising the goals • List the goals • Rate each for “importance” (1 = unimportant, 5 = must have) • Rate each for “severity” (1 = annoyance, 5 = showstopper) • Multiply the two ratings together and rank the goals by the resultant score • Example: see the sketch below
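As a rough illustration of the scoring, here is a minimal sketch in Python; the goals and ratings are invented for the example and are not from the lecture.

```python
# Minimal sketch of importance x severity prioritisation.
# The goals and ratings below are hypothetical examples.
goals = [
    {"goal": "Reduce shopping-cart abandonment", "importance": 5, "severity": 4},
    {"goal": "Improve search result relevance",  "importance": 4, "severity": 3},
    {"goal": "Refresh the visual branding",      "importance": 2, "severity": 1},
]

# Score each goal and rank from highest to lowest.
for g in goals:
    g["score"] = g["importance"] * g["severity"]

for g in sorted(goals, key=lambda g: g["score"], reverse=True):
    print(f'{g["score"]:>2}  {g["goal"]}')
# 20  Reduce shopping-cart abandonment
# 12  Improve search result relevance
#  2  Refresh the visual branding
```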
Step 3 – Re-write the goals as research questions • Issue: • Better conversion of viewers to shoppers • Research Question: • Why don’t [some] visitors become shoppers? • Issue: • Help people use the search engine “better” and more often • Research Question: • How do people navigate the site, especially when looking for something specific? • Issue: • Why are so many people abandoning their shopping carts? • Research Question: • How do they expect the shopping cart to function? Is it failing them?
Expand general questions with specific ones • Example: Why are so many people abandoning the shopping cart? • Specific questions: • What is the ratio of carts abandoned to those completed? • On what pages are they abandoned? • On what pages do people most frequently open their shopping carts? • Do people understand the instructions on the cart pages? • Do they know they are abandoning the cart? • Do they know what a shopping cart is? • How do they use the cart? • How do they shop on the site? • (A log-analysis sketch for the first two questions appears below)
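The first two questions can often be answered from existing log data. A minimal sketch, assuming the logs have already been reduced to per-session cart events with a `completed` flag and a `last_page` field (an invented structure, for illustration only):

```python
from collections import Counter

# Hypothetical pre-processed log records: one entry per session that opened a cart.
sessions = [
    {"completed": True,  "last_page": "/checkout/confirm"},
    {"completed": False, "last_page": "/checkout/shipping"},
    {"completed": False, "last_page": "/checkout/shipping"},
    {"completed": False, "last_page": "/cart"},
]

abandoned = [s for s in sessions if not s["completed"]]
completed = [s for s in sessions if s["completed"]]

# Q1: ratio of carts abandoned to those completed
print(f"Abandoned : completed = {len(abandoned)} : {len(completed)}")

# Q2: on which pages are carts abandoned?
print(Counter(s["last_page"] for s in abandoned).most_common())
```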
Tips for setting the research goals • Never go into user research to prove a point • Never create goals that seek to justify a position or reinforce a perspective • The aim is to uncover what people really want • Learn the product thoroughly • Research goals are more meaningful if you understand what is truly needed • Be prepared to address fundamental questions about the product • Even if the question is: “Should we be building this product?”
Stage 2 – Setting up a schedule • You need to turn the goals into a research plan • Need to add a time-line (schedule) to the goals • Needs to be sensitive to the development schedule
Option 1 – In the beginning… (of a development cycle) Early design and requirements gathering • Internal discovery: identify the business requirements and constraints • Survey: determine (e.g.) the demographic segmentation and product use of the existing user base • Log file analysis: examine the users’ current behaviours (if such data are available) • Profiling: develop a representation of the users based on existing users or those of competitive products • Usability testing: uncover current interaction problems with the existing product • Contextual inquiry: uncover problems users have with the product and the task • Task analysis: specify how the problems are currently solved (or not) • Focus groups: determine whether people feel the proposed solutions will help
Option 1 – In the beginning… Development and design • Usability tests: perform 4 to 5 back-to-back usability tests of prototypes to test their efficacy • Competitive usability tests: compare the prototypes to competitors’ products to determine strengths and weaknesses Post-release • Surveys and log file analysis: track changes in use of the product relative to past behaviour • Diaries: track long-term behaviour • Contextual inquiries: study how people are actually using the product
Option 2 – In the middle… • Usability is not always recognised as an issue up-front • Decisions will already have been made about: • Who the users are • What their problems are • What solutions to use • These often cannot be revised without significant costs • This is often about “making the best of a bad job” • If usability is raised as an issue midway, it usually means a problem has already been found
Option 2 – In the middle… Design and development • Usability testing and competitive usability testing: rapidly iterate and improve the design Post-release • Log file analysis: perform analysis before and after release to see how user behaviour has changed Requirements gathering • Contextual inquiry: identify outstanding issues from existing user base for the next product release
Organise research questions into projects • To establish a schedule, it is necessary to understand the work packages to be performed • Need to associate time estimates with research questions • e.g. What is the ratio of carts abandoned to those completed? • Time to answer: 2 days of log analysis • Need to aggregate (collect) those time estimates into work packages • Then order those work packages into projects • Then produce a timetable (schedule) for the work • (A small sketch of the aggregation step follows)
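A minimal sketch of aggregating per-question time estimates into work packages; the questions, estimates and package names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical (question, work package, estimate in days) triples.
estimates = [
    ("Ratio of carts abandoned to completed", "Log analysis",      2),
    ("Pages on which carts are abandoned",    "Log analysis",      1),
    ("Do users understand the cart pages?",   "Usability testing", 5),
    ("How do users expect the cart to work?", "Usability testing", 3),
]

# Aggregate the estimates per work package, then order packages by effort.
packages = defaultdict(int)
for question, package, days in estimates:
    packages[package] += days

for package, days in sorted(packages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{package}: {days} days")
```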
Choosing between the techniques Technique: Profiles (c.f. Personas) • Stage of development: Beginning • Duration: 2 – 5 days over 2 weeks • Cycle time: Once per major design, or when new user markets are defined • Description: Turn audience descriptions into fictional characters to understand how user needs relate • Benefits: Low cost method that creates good communication tools. Brings focus onto specific audiences rather than “the user” • Pitfalls: Based primarily on team’s understanding of users, not external research
Choosing between the techniques Technique: Contextual inquiry and task analysis • Stage of development: Initial problem definition • Duration: 2 – 4 weeks (not including recruiting) • Cycle time: Once per major set of features • Description: Observe people as they solve problems to create a mental model that defines their current understanding and behaviour • Benefits: Creates a comprehensive understanding of the problem that is being addressed • Pitfalls: Labour intensive
Choosing between the techniques Technique: Focus groups • Stage of development: Early development feature definition • Duration: 2 – 4 weeks (not including recruiting) • Cycle time: Once per major feature-set specification, then after every feature cluster • Description: Structured group interviews(?) of 6-12 target audience representatives • Benefits: Uncovers people’s priorities and desires, collects anecdotes and investigates reactions to ideas • Pitfalls: Subject to “groupthink”; desires can easily be misinterpreted as needs
Choosing between the techniques Technique: Usability testing (c.f. usability observations) • Stage of development: Throughout design and development • Duration: 1 – 2 weeks (not including recruiting) • Cycle time: Frequently (ideally) • Description: Structured one-on-one interviews(?) with users as they try specific tasks with products/prototypes • Benefits: Low-cost(?) technique that uncovers interaction problems • Pitfalls: Does not address underlying needs(?), just abilities to perform actions
Choosing between the techniques Technique: Surveys • Stage of development: Beginning of development, after launch and before re-design • Duration: 2 – 6 weeks • Cycle time: Once before major (re-)design, regularly thereafter • Description: Randomly selected representatives of the audience are asked to complete questionnaires; quantitative summaries are then generated • Benefits: Quantitatively describes the audience, segments them, investigates their perceptions and priorities • Pitfalls: Does not address why people have those perceptions or what their actual needs are; subject to selection bias
Choosing between the techniques Technique: On-going research • Stage of development: Throughout life of product • Duration: On-going • Cycle time: Regularly after release • Description: Long-term studies of users, done through diaries and advisory boards • Benefits: Investigates how users’ views and usage patterns change with time and experience • Pitfalls: Labour intensive, requires long-term participation
Choosing between the techniques Technique: Usage logs and customer support • Stage of development: Beginning of development, after launch and before re-design • Duration: Varies • Cycle time: Regularly after release • Description: Quantitatively analyse Web server log files and customer support comments/throughput • Benefits: Does not require additional data gathering, data reveals actual behaviour and perceived problems • Pitfalls: Does not provide any information about reasons (why) for behaviour or problems
Stage 3 – Estimating budgets • Budgets are not just about finance • You need to consider: • People’s time (including yours and your team’s) • Recruiting and incentive costs • Equipment costs • Travel costs • Lab costs • etc.
Stage 4 – Preparing the research plan • Summary • Describe the research plan in 2 paragraphs (<200 words) • Research Issues • What are the identified issues that the research needs to address? • Example: “Based on conversations with [x, y and z] we have identified 5 major issues that this research will attempt to shed light on…” etc. • Then describe the high-level issues (not specifics) • Research Structure • Describe the actual research methods that you will employ and the issues that they will address • Example: “We will conduct four sets of one-on-one structured interviews to establish [x]. We will then perform…” etc.
Preparing the research plan (continued) • Schedule • Include a detailed schedule, ideally laid out on a week-by-week basis • Include prominent milestones • Budget • Include the projected budget • This should be broken down into work projects • And further broken down by major activity • Example: “5 usability tests: Preparation 10 hours; Recruiting and scheduling 40 hours (assuming 40 participants); Conducting tests 120 hours; Analysing tests 80 hours… Total time: [x] hours” • Also include financial costs where known • Example: “Recruiting incentive (25-40 people) $2500-$4000; Supplies (food, videotape, etc.) $500… Total cost: $[x]” • (A small sketch of totalling such a budget follows)
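A minimal sketch of totalling the time and cost figures; it reuses the example numbers from the slide above and treats uncertain costs as (low, high) ranges. The totals refer only to the items listed, since the examples are deliberately incomplete.

```python
# Hours per activity for the "5 usability tests" example on the slide above.
hours = {
    "Preparation": 10,
    "Recruiting and scheduling": 40,
    "Conducting tests": 120,
    "Analysing tests": 80,
}
total_hours = sum(hours.values())
print(f"Total time (items listed): {total_hours} hours")  # 250 hours

# Financial costs expressed as (low, high) ranges in dollars.
costs = {
    "Recruiting incentive": (2500, 4000),
    "Supplies": (500, 500),
}
low = sum(lo for lo, hi in costs.values())
high = sum(hi for lo, hi in costs.values())
print(f"Total cost (items listed): ${low}-${high}")  # $3000-$4500
```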
Preparing the research plan (continued) • Deliverables • Specify what you are going to produce at the end of the research • Example: “The results of each usability test will be sent via e-mail as they are completed. Each e-mail will include an outline of the procedures, a profile of the people involved, a summary of all trends observed in their behaviour (as they relate to the initial research goals), problems they encountered and a series of supporting quotations.”
Stage 5 – Maintaining the research plan • The research plan is a “living” document • It needs to be revisited, reviewed and revised to keep it up to date • Especially as preliminary information and results begin to come in from the users
Improvements to this approach • The preceding slides were based on Kuniavsky’s “Observing the user experience” • These are not “definitive” but “indicative” • Remember, for the best research plan: “It depends!”
Additional steps to consider (Think of the preceding approach as the “marketing” version of the research plan) • Pilot studies • The preceding discussion makes no reference to conducting pilots • What if something goes wrong? • Related to the pilot study issues • What if a show-stopping error is uncovered early on? • What if the functionality is not going to be completed on time? (e.g. coders still coding) • What exactly are we testing and how exactly are we going to report it? • The method described here is (arguably) too imprecise and unspecific • The method is based on answering perceived issues • Is that good enough?
Yet further points to consider • Where is the testing to be conducted? • Your office, remote, 3rd party venue? • What forms of data collection are to be used? • Logging, video recording, note-taking, etc. • What approval is required? • Ethical approval, medical approval, etc.
A more precise approach… A standard “scientific” report/paper has the following structure: • Aim: What are we trying to do here (one paragraph max) • Hypotheses: What exactly are we examining? • Background/rationale: What is the context of this research? • Equipment: What are we using? • Method: How are we going to do this? • Results: What data did we collect? • Discussion: What does our analysis show? • Conclusions/summary: What did we learn? • Further work: What should we do now? Note: this is the basic structure for your final report!
Even more precise A good plan should specifically include: • The overarching issues (Aims) • The exact questions being asked (Hypotheses) • The expected data types to be collected to test those hypotheses • The mathematical analyses to be used on those data types • A pre-determined set of success/failure criteria
An example • The overarching issues (Aims) • Too many users abandon their shopping cart / do not complete their purchases • The exact questions being asked (Hypotheses) • Hypothesis 1 (H1): Our site has a significantly higher rate of abandonment than AN Other site • [Note – need to define “abandoned”] • Hypothesis 2 (H2): The rate of abandonment is a direct consequence of usability/accessibility issues • The expected data types to be collected to test those hypotheses • H1: (From log files) Number of purchases completed vs. number abandoned • H2: (From usability trials) Quantitative and qualitative user data
An example (continued) • The mathematical analyses to be used on those data types • H1: Count the number of purchases completed and the number abandoned; Identify which abandoned purchases are completed at a later date; Compare with available data from other sources (competitors, older versions of the site, performance goals); Test for statistical significance at the 5% threshold • H2: Ask users to complete a number of different purchase types; Count the number/proportion of trials completed successfully; Count the number of errors (and their severity) encountered; Record time for task completion; Record user satisfaction with the task (on Likert-type scales); Report severe/frequent usability and accessibility issues and user dissatisfaction • A pre-determined set of success/failure criteria • H1: Must exceed competitor X or the defined performance goal [X% conversion] • H2: No severe usability/accessibility issues; Users generally satisfied • (A sketch of the H1 significance test follows)
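A minimal sketch of the H1 significance test, assuming the log analysis yields simple completed/abandoned counts for our site and a comparison site; the counts are invented, and a chi-square test of independence stands in here for whatever test the analyst actually chooses.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts from the log analysis:
#            [completed, abandoned]
our_site   = [420, 580]
other_site = [510, 490]

# Chi-square test of independence on the 2x2 contingency table.
chi2, p_value, dof, expected = chi2_contingency([our_site, other_site])

print(f"p = {p_value:.4f}")
if p_value < 0.05:  # 5% significance threshold from the plan
    print("Abandonment rates differ significantly between the sites")
else:
    print("No significant difference detected at the 5% level")
```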
Summary • Ideally, need a blend of the “scientific” plan with the “business” plan for a working document • However, may need a “sanitised” version for non-experts
Involving users in the design process “Know thy users - for they are not you”
The accessibility ‘knowledge loop’ [Diagram: a loop connecting End-users, Data representation, Information users and Products/services, with Case studies at the centre. The labels around the loop read: How to capture & represent end-user information; How to assess data representation acceptability; How to use the information to provide correct products/services; How to assess products/services acceptability.]
Knowing the users – I • Q – What affects how acceptable an interface is to a person? • A – How it corresponds to their: • Capabilities • Experiences • Education • Expectations • Attitudes
Knowing the users – II • Q – How do “accessibility” users differ from “mainstream” users? • A – By their: • Capabilities • Experiences • Education • Expectations • Attitudes