TransTracker Pilot Usability Study Team: • Drew Bregel - Development, Data Analysis • Marianne Goldin - PM, UI Tester, Data Gathering, Presenter • Joel Shapiro - Tasks, Interactions • Joe Woo - Development, UI Tester
Our Tool • A mobile phone application that goes above and beyond Google Maps • Built on Metro TripPlanner, OneBusAway, and Google Maps transit trip planning • A more visual/map-heavy interface (less text input) that shows the user the context of their environment • Importantly: the app predicts what the user will do next and knows what the user does frequently!
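The deck does not say how the prediction feature works; below is a minimal sketch of one plausible heuristic, assuming the app keeps a log of (destination, departure-time) pairs and ranks destinations by frequency plus a time-of-day bonus. All names and data here are illustrative, not taken from TransTracker.

```python
from collections import Counter
from datetime import datetime

# Hypothetical trip log as (destination, departure time) pairs.
# TransTracker's real data model is not described in the deck.
trip_log = [
    ("Home", datetime(2009, 5, 4, 17, 30)),
    ("Home", datetime(2009, 5, 5, 17, 45)),
    ("Campus", datetime(2009, 5, 6, 8, 50)),
]

def predict_destinations(log, now, top_n=3):
    """Rank destinations by how often the user has gone there,
    weighting trips taken near the current time of day more heavily."""
    scores = Counter()
    for dest, when in log:
        score = 1.0                      # frequency: one point per past trip
        if abs(when.hour - now.hour) <= 2:
            score += 1.0                 # routine bonus: same time of day
        scores[dest] += score
    return [dest for dest, _ in scores.most_common(top_n)]

# On a weekday evening, "Home" outranks "Campus".
print(predict_destinations(trip_log, datetime(2009, 5, 7, 17, 15)))
```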
Introduction to the Experiment 1. After redesigning our program following the first usability test, we wanted to see how users interacted with TransTracker and how they expected it to function. 2. We video-recorded each test and collected data on the participants' performance, their opinions of the program, and how they interacted with it.
Method • 5 volunteer participants • Application running on an emulator on a PC laptop • Private room in the CSE basement • 2 testers: one to conduct the test, one to record the test, bring in the volunteers, and troubleshoot the app • Payment in the form of a snack (pop, chips)
Test Procedure • Recruit volunteer • Consent form • Introduction to our application (avoiding keywords used in the UI; explain "think out loud") • Participant reads the task scenario out loud and performs the task • Time each task (each user must complete every task) • Videotape each task • Demographic questions • Debriefing comments • Compensation: a snack from the student lounge
Participants - Demographics • 5 participants • Median age = 22 • 3 males, 2 females • All upperclassmen, CSE majors • 3 of 5 used mobile applications • All used web-based transit applications • Note: the 2 who did not use mobile apps used web-based apps as much or more (our potential customers!) • All used the bus at least once a week (average = 6.8 times/week)
Test Measures • Dependent Variables (for each task) • Time to complete the task • # Errors • # Times the user goes to the wrong screen or scrolls to the wrong area • Critical Incident Logs (for each task) • Both positive and negative • Transcribed from video
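For concreteness, here is one way the per-task dependent variables could be recorded; the field names are assumptions, since the deck only lists the measures themselves, not a schema.

```python
from dataclasses import dataclass, field

@dataclass
class TaskMeasures:
    """One record per (participant, task) pair. Field names are
    illustrative; the deck names the measures but not a data format."""
    participant: str
    task: int
    seconds_to_complete: float
    errors: int          # count of errors during the task
    wrong_screens: int   # times the user opened the wrong screen or scrolled to the wrong area
    incidents: list = field(default_factory=list)  # critical incidents (+/-), transcribed from video

# Example record for a hypothetical participant.
row = TaskMeasures("P1", 1, 42.5, errors=2, wrong_screens=1,
                   incidents=["+ found the Home tab immediately",
                              "- scrolled past the predicted list"])
print(row.seconds_to_complete, row.errors, row.wrong_screens)
```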
Tasks • We defined a list of task scenarios that would exercise the principal features of our program and give users context for the tasks.
Task 1: Predicted Location (Easy) • Using our app’s predictive feature, take a trip to your most frequent destination (“Home”) • The user can assume that one of the predicted routes is accurate since they have taken it before
Task 1: Looking for… • Does the user try to select tabs that are not needed to complete the task? • Does the user attempt to scroll down pages when it is unnecessary? • Are users unclear on where to click to take a trip? • Are users aware that they can click on a destination to take a trip? • Is it obvious that the Home screen is the default screen in the program?
Task 2: Saved Location (Medium) • Find a trip you've taken before and take it now • The user can assume that the destination they would like to go to is listed in the "Saved" page
Task 2: Looking for… • Does the user [mistakenly] select the New tab for this task? • Does the user understand the concept of Saved trips? • Does the user understand that they can re-take a trip that they've taken before? • Does the user expect that the destinations and trip times are clickable?
Task 3: New Destination (Hard) • The user must enter a new trip into the program through the "New" page • The user is to create their new trip by searching for it, then taking the trip • The user has not taken the trip before
Task 3: Looking for… • Does the user understand the idea of a New trip? • Does the user understand how to select a destination from the search results? • Does the user understand what the map represents? • Does the user expect that the destinations and trip times are clickable?
Study Results • Collected both in real-time during the test… • Task time • # Errors • # Wrong Screens • …and retroactively, through reviewing video of the tests • Critical incidents • Verification of # Errors / # Wrong Screens • Notes and chatter
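A small sketch of how the real-time counts might be reconciled with the video review; the structure is assumed, since the deck only says video was used to verify # Errors / # Wrong Screens.

```python
# Live counts logged during the session, keyed by (participant, task).
live  = {("P1", 1): {"errors": 2, "wrong_screens": 1}}
# Counts re-tallied later while reviewing the video.
video = {("P1", 1): {"errors": 3, "wrong_screens": 1}}

def verified_counts(live, video):
    """Prefer the video-reviewed count on disagreement, since the
    video can be replayed; fall back to the live count otherwise."""
    return {key: video.get(key, counts) for key, counts in live.items()}

print(verified_counts(live, video))
# {('P1', 1): {'errors': 3, 'wrong_screens': 1}}
```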
Sample Video of Task 3 • http://www.youtube.com/watch?v=szcLDylxhtM
Major Themes in Negative Critical Incidents Task 1: • List comprehension • Map interactions • Task completion Task 2: • Prototype fidelity Task 3: • Wizard of Oz • List comprehension • Task completion
Task 1: List Comprehension • Affordances of the items on the list in the home screen • Context of the list - why is it important?
Task 2: Prototype Fidelity • Not all items that should be clickable are actually clickable
Task 3: Wizard of Oz • Lots of false positives from the Wizard-of-Oz (simulated) parts of the prototype; it would be great if our app could read the user's mind
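One way to quantify these false positives is a simple hit rate over test sessions: how often the destination the user actually wanted appeared in the predicted list. The sessions below are made up for illustration; the deck reports no such numbers.

```python
def hit_rate(sessions):
    """Fraction of sessions where the wanted destination was predicted."""
    hits = sum(1 for predicted, wanted in sessions if wanted in predicted)
    return hits / len(sessions)

# (predicted list shown to the user, destination the user wanted)
sessions = [
    (["Home", "Campus", "Gym"], "Home"),     # hit
    (["Home", "Campus", "Gym"], "Library"),  # miss: the whole list is false positives
    (["Home", "Campus", "Gym"], "Campus"),   # hit
]
print(f"hit rate: {hit_rate(sessions):.0%}")  # hit rate: 67%
```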
Tasks 1-3: Task Completion • Because users did not understand the spirit of the application, they were not sure when they had gotten all the information they could • Users were not sure what the "end point" of each task was
Recommendations for Design Changes • Differentiate between clickable and non-clickable items • Rename tabs • Replace scroll option with a screen expansion option • Clarify how the predicted locations on the Home screen are sorted
Recommendations for Design Changes (continued) • Tighter integration with Google Maps cues • Destinations and search results need more detailed/contextual info • Add clear exits to all pages that lack them • Increase the use of symbols and images throughout the program • Make the current time visible and obvious
Summary • The questions we asked our users gave us a great set of data; almost exactly the kind of results we were hoping for • Our script accurately reflected the tasks we wanted our users to accomplish and led them down the paths where we expected to find the UX errors we wanted to correct • Our program uses a small enough set of pages that it was easy to see overarching issues in the program