IS 4800 Empirical Research Methods for Information Science
Class Notes, Feb. 29, 2012
Instructor: Prof. Carole Hafner, 446 WVH, hafner@ccs.neu.edu, Tel: 617-373-5116
Course Web site: www.ccs.neu.edu/course/is4800sp12/
Topic: Oral Presentation of Study Results
Presenting your research • A research project should “tell a story” • A brief presentation (20 min or less) is more difficult than a longer one – selectivity is critical • The T-shaped talk and the U-shaped talk • Rehearse for timing
Oral Presentation
• Main concepts and ideas
• Do not go into great detail on experimental methods – BUT enough so people understand what you did
• Focus on motivation, results, implications
• If listeners want details they can read the paper or ask questions
Oral Presentation – Do use figures a lot
[Figure: example chart comparing GOAL, TASK, BOND, and COMPOSITE measures at Week 1 vs. Week 4]
Oral Presentation – Guide for Visuals
• Visuals should be exhibits that you talk about
• Do not put lots of text on charts
• Do not read your charts for your presentation
• Use interactivity, video, and images to keep your audience awake
Common Questions
• How did you evaluate that?
• How did you measure that?
• How did you control for extraneous variable X?
• Why didn’t you use statistic Y?
• Isn’t that a biased sample?
• What was your control group?
• How did you do study procedure Z?
Tips
• Describe your sample
  – Minimal demographics: number of subjects, broken down by gender
  – Better: age, occupation, major, year
• Minimize text on your charts
• If you use a novel measure (e.g., a new survey) you must give details on the measure
  – Examples of the most important actual questions asked
  – Any reliability/validity/psychometrics done
• If you do interviews, include actual quotes
• Build from data to conclusions
• Practice your timing/delivery with your project team
Written Study Reports
• Objectives (also critiques):
• Describe what your study is about – a report should also tell a story, and maybe a more complex one
• Motivate your study
• Assure the reader you have conducted a sound study
• Research Methods – often presented in small font
• Present results in an objective manner
• Discuss implications
• Discuss future work
• Enable replication
Typical Academic IS/CS/HCI Paper Structure • Abstract • Introduction • Motivation • Related work • Hypotheses • Method • Results • Discussion • Limitations • Implications • Future work • References
Typical Design/Development Study • Abstract (?) • Executive Summary and Recommendations • Introduction • Motivation • Related work • System design • Evaluation • Hypotheses • Method • Results • Discussion – summary, limitations • Conclusion • Implications • Future work • References
The Abstract
• Concise summary (one paragraph!)
• The abstract for an empirical study should include:
  – Information on the problem under study
  – The nature of the subject sample
  – A description of methods, equipment, and procedures
  – A statement of the results
  – A statement of the findings or conclusions drawn
• Often the last thing you write
The Introduction
• The part of the paper giving the justification for the study
• Usually has the following information:
  – Introduction to the topic under study
  – Brief review of research and theory related to the topic
  – A statement of the problem to be addressed
  – A statement of the purpose of the research
  – A brief description of the research strategy
  – A description of predictions and hypotheses
• CS/IS papers often put Related Work as a separate section after the Introduction; for each related work, describe how your work is different
The Method Section
• Includes information on exactly how the study was carried out
• Subsections:
  – Participants or Subjects: describe the participant or subject sample in detail; human participants go in a Participants subsection, animal subjects in a Subjects subsection
  – Apparatus or Materials: describe in detail any equipment or materials used; equipment is usually described in an Apparatus subsection and written materials in a Materials subsection
The Method Section (cont.)
• Procedure: describe exactly how the study was carried out
  – The conditions to which subjects were exposed, or under which they were observed
  – The behaviors measured and how they were scored
  – When and where observations were made
  – Debriefing procedures
• Enough detail should be included in all sections so that the study could be replicated
The Results Section
• Objective, dry, boring – just the facts
• All relevant analyses are reported in the Results section
• Do not present raw data; data should be reported in summary form
  – Descriptive statistics
  – Inferential statistics
• Results of descriptive and inferential statistics must be presented in narrative format
• Describe the source of any unconventional statistical tests
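As a concrete illustration, here is a minimal Python sketch (all task-time data invented) of reducing raw observations to the summary statistics a Results section would report in narrative form:

```python
import statistics

# Hypothetical raw task-completion times (seconds) for one condition;
# raw data like this is NOT reported -- only the summary statistics are.
times = [41.2, 38.7, 52.1, 44.9, 47.3, 39.8, 50.5, 43.0]

n = len(times)
mean = statistics.mean(times)
sd = statistics.stdev(times)        # sample standard deviation (n - 1)
median = statistics.median(times)

# Narrative-ready summary line for the Results section
print(f"Participants (N = {n}) completed the task in "
      f"M = {mean:.1f} s (SD = {sd:.1f}, Mdn = {median:.1f}).")
```

The mean, standard deviation, and median are then woven into a sentence rather than dumped as a table of raw values.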
The Discussion Section
• This is where you can take some liberties with describing what the results mean
• Results are interpreted, conclusions drawn, and findings are related to previous research
• The section begins with a brief restatement of the hypotheses; next, indicate whether the hypotheses were confirmed
• The rest of the section is dedicated to integrating findings with previous research
• It is fine to speculate, but speculations should not stray far from the data
Citations
• Liberally cite previous and related work
• If you copy passages you must cite them and, depending on length, format the text to indicate it is copied
• Suggested tools: EndNote, BibTeX, or similar
Ethical Issues
• Report all of your findings (not just the ones you like)
• Adhere to your original plan (power analysis, statistics, measures); report any deviations and why
• Do not drop subjects or data points without rigorous justification
• If your hypothesis test was not significant, you cannot claim a difference in means (for example)
• If you did not do an experiment, attempting to control for extraneous variables, you cannot claim or imply causality
Introduction to Usability Testing • I. Summative evaluation: Measure/compare user performance and satisfaction • Quantitative measures • Statistical methods • II. Formative Evaluation: Identify Usability Problems • Quantitative and Qualitative measures • Ethnographic methods such as interviews, focus groups
Usability Goals (Nielsen)
• Learnability
• Efficiency
• Memorability
• Error avoidance/recovery
• User satisfaction
Operationalize these goals to evaluate usability
What is a Usability Experiment? • Usability testing in a controlled environment • There is a test set of users • They perform pre-specified tasks • Data is collected (quantitative and qualitative) • Take mean and/or median value of measured attributes • Compare to goal or another system • Contrasted with “expert review” and “field study” evaluation methodologies • The growth of usability groups and usability laboratories
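The "take mean and/or median, compare to goal" step above can be sketched in Python; the measurements and the 120-second goal are invented for illustration:

```python
from statistics import mean, median

# Hypothetical per-user measurements from a usability test session:
# task completion time in seconds for each test user.
completion_times = [95, 110, 87, 140, 102, 98, 250, 91]

GOAL_SECONDS = 120  # pre-specified usability goal (assumed for illustration)

avg = mean(completion_times)
med = median(completion_times)

# The median is robust to outliers (e.g. the 250 s user above),
# which is one reason usability studies often report it alongside the mean.
meets_goal = med <= GOAL_SECONDS
print(f"mean={avg:.1f}s  median={med:.1f}s  meets goal: {meets_goal}")
```

The same comparison could be made against another system's measurements instead of a fixed goal.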
Experimental factors
• Subjects: representative; sufficient sample
• Variables:
  – independent variable (IV): the characteristic changed to produce different conditions, e.g. interface style, number of menu items
  – dependent variable (DV): the characteristics measured in the experiment, e.g. time taken, number of errors
Experimental factors (cont.)
• Hypothesis: prediction of the outcome framed in terms of the IV and DV; the null hypothesis states no difference between conditions – the aim is to disprove this
• Experimental design:
  – within-groups design: each subject performs the experiment under each condition; transfer of learning is possible; less costly and less likely to suffer from user variation
  – between-groups design: each subject performs under only one condition; no transfer of learning; more users required; variation can bias results
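For a between-groups design, the null hypothesis of no difference between conditions is commonly tested with an independent-samples t-test. The sketch below implements Welch's t statistic in plain Python on invented task-time data; in practice you would use a statistics package and look up the p-value for the resulting t and degrees of freedom:

```python
import math
from statistics import mean, variance

# Hypothetical between-groups data: task times (s) under two interface
# conditions; each subject saw only one condition (no transfer of learning).
menu_a = [35.1, 42.0, 38.5, 40.2, 36.8, 44.3, 39.9, 41.7]
menu_b = [46.4, 49.0, 44.2, 51.3, 47.8, 45.5, 50.1, 48.6]

def welch_t(x, y):
    """Welch's t statistic and degrees of freedom for two independent
    samples, used to test the null hypothesis of no difference in means."""
    nx, ny = len(x), len(y)
    vx, vy = variance(x), variance(y)      # sample variances (n - 1)
    se2 = vx / nx + vy / ny
    t = (mean(x) - mean(y)) / math.sqrt(se2)
    df = se2 ** 2 / ((vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1))
    return t, df

t, df = welch_t(menu_a, menu_b)
print(f"t = {t:.2f}, df = {df:.1f}")   # compare |t| to the critical value
```

Welch's variant is used here because it does not assume equal variances in the two groups.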
Summative Analysis – What to measure? (and its relationship to a usability goal)
• Total task time
• User “think time” (dead time??)
• Time spent not moving toward the goal
• Ratio of successful actions to errors
• Commands used/not used
• Frequency of user expressions of confusion, frustration, satisfaction
• Frequency of reference to manuals/help system
• Percent of time such reference provided the needed answer
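Several of these measures can be computed directly from a session log; a minimal sketch with an invented event log:

```python
# Hypothetical interaction log from one usability session; each entry is
# an event type recorded while the user worked toward the task goal.
events = [
    "action_ok", "action_ok", "error", "action_ok", "help_lookup",
    "action_ok", "error", "action_ok", "action_ok", "help_lookup",
]

successes = events.count("action_ok")
errors = events.count("error")
help_refs = events.count("help_lookup")

success_error_ratio = successes / errors if errors else float("inf")
help_rate = help_refs / len(events)

print(f"success/error ratio: {success_error_ratio:.1f}")
print(f"help references: {help_refs} ({help_rate:.0%} of events)")
```

Each computed value can then be compared against the operationalized usability goal it corresponds to.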
Measuring User Performance
• Measuring learnability: time to complete a set of tasks; learnability/efficiency trade-off
• Measuring efficiency: time to complete a set of tasks; how to define and locate “experienced” users
• Measuring memorability: the most difficult, since “casual” users are hard to find for experiments; memory quizzes may be misleading
Measuring User Performance (cont.)
• Measuring user satisfaction: Likert scale (agree or disagree); semantic differential scale; physiological measures of stress
• Measuring errors: classification of minor vs. serious
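Scoring Likert-scale satisfaction items is straightforward, but negatively worded items must be reverse-scored first so that a high score always means high satisfaction; a minimal sketch with invented 5-point responses:

```python
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree,
# 5 = strongly agree) from six users to a positively worded item.
responses = [4, 5, 3, 4, 5, 4]

def reverse_score(r, scale_max=5):
    """Flip a response on a 1..scale_max Likert scale (1 <-> 5, 2 <-> 4)."""
    return scale_max + 1 - r

# Responses to a negatively worded item, e.g. "The system was frustrating
# to use" (item wording invented), which must be reversed before averaging.
negative_item = [2, 1, 2, 3, 1, 2]
adjusted = [reverse_score(r) for r in negative_item]

print(f"positive item: M = {mean(responses):.2f}")
print(f"reversed item: M = {mean(adjusted):.2f}")
```

After reverse-scoring, item means can be averaged into an overall satisfaction score.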
Reliability and Validity
• Reliability means repeatability; statistical significance is a measure of reliability
• Validity means the results will transfer to a real-life situation; it depends on matching the users, tasks, and environment
• Reliability is difficult to achieve because of the high variability in individual user performance
Formative Evaluation – What is a Usability Problem?
• Unclear: the planned method for using the system is not readily understood or remembered (information design level)
• Error-prone: the design leads users to stray from the correct operation of the system (any design level)
• Mechanism overhead: the mechanism design creates awkward workflow patterns that slow down or distract users
• Environment clash: the design of the system does not fit well with the users’ overall work processes (any design level); e.g., an incomplete transaction cannot be saved
Qualitative methods for collecting usability problems
• Thinking-aloud studies
  – Difficult to conduct; experimenter prompting should be non-directive
  – Alternatives: constructive interaction, coaching method, retrospective testing
  – Output: notes on what users did and expressed – goals, confusions or misunderstandings, errors, reactions
• Questionnaires: should be usability-tested beforehand
• Focus groups, interviews
Observational Methods – Think Aloud
• The user is observed performing a task and asked to describe what he or she is doing and why, what he or she thinks is happening, etc.
• Advantages: simplicity – requires little expertise; can provide useful insight; can show how the system is actually used
• Disadvantages: subjective; selective; the act of describing may alter task performance
Observational Methods – Cooperative Evaluation
• A variation on think-aloud: the user collaborates in the evaluation; both user and evaluator can ask each other questions throughout
• Additional advantages: less constrained and easier to use; the user is encouraged to criticize the system; clarification is possible