Designing a Better Customer Survey
Scott S. Fricker
Office of Survey Methods Research
Bureau of Labor Statistics
June 27, 2013
Scope of today’s talk • Topics covered • Three of the Five Ws and one H of surveys • Why/Who, What, and How • Topics not covered • Probably something you’d hoped I’d cover • Data collection • (e.g., mode issues, software, how to improve response, etc.) • Scale/index construction • Statistical sampling and analysis
Goals of Presentation • I hope to provide you with: • A general introduction to a vast subject • Evidence about the sources and impact of response errors • Examples of “good” and “bad” questions • Guidelines for developing effective questions and survey forms • Resources for further learning
Why Conduct a Customer Survey? • Quality begins and ends with the customer • Direct customer feedback helps: • Determine their needs and satisfaction • So we can design, validate, and improve our products/services and processes • Customer surveys support a Total Quality Management (TQM) approach
Why Conduct a Customer Survey? • Surveys are standardized and repeatable • They can produce both quantitative and rich qualitative data • They can allow inferences about the larger population
Who Should Conduct Customer Surveys? • Who? • Everyone concerned with assessing and improving the quality of their product! • Everyone…? • Yes, but not just ‘anyone’ • Customer surveys that are poorly constructed: • Waste organizational resources • Provide unactionable or misleading information • Risk alienating the customer
So…some ‘basic training’ • We’ll spend the rest of the time discussing: • How to decide what you really need to know from your survey • How to write the questions following best practices • And, importantly, how to test your survey
Step 1 Decide what you really need to know
Decide What You Really Need to Know • Survey Design Phase • Define goals and topic of survey • Break the topic into subtopics • Subdivide subtopics until you get to single issues/concepts that you can express as survey questions • Don’t reinvent the wheel – look at other surveys for suggestions
Decide What You Really Need to Know • Develop an Analysis Plan • How do you plan to analyze the responses you get? • Will you really review all the open-ended questions? • Really, really? • What statistics do you plan to run? Will the response options support those analyses? (A quick dry-run sketch follows below.)
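A minimal sketch of what such a dry run might look like, here in Python with invented labels and responses: tabulating a hypothetical 5-point satisfaction item and checking that the planned statistic (a mean) is actually supported by how the options are coded.

```python
# A dry run of the analysis plan against a hypothetical 5-point
# satisfaction item. All labels and responses are invented.
from collections import Counter

SCALE = {
    "Very dissatisfied": 1,
    "Dissatisfied": 2,
    "Neither satisfied nor dissatisfied": 3,
    "Satisfied": 4,
    "Very satisfied": 5,
}

responses = [
    "Satisfied", "Very satisfied", "Dissatisfied",
    "Satisfied", "Neither satisfied nor dissatisfied",
]

# A frequency table works for any categorical response options.
for label, count in Counter(responses).most_common():
    print(f"{label}: {count}")

# A mean is defensible only if the options map onto an ordered numeric scale.
codes = [SCALE[r] for r in responses]
print("Mean satisfaction:", sum(codes) / len(codes))
```

If the options were unordered categories, the frequency table would still work but the mean would be meaningless, which is exactly the kind of mismatch an analysis plan should catch before fielding.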
Decide What You Really Need to Know • How will you use the results? • Do you really need to know a respondent’s gender or zip code? • Will you need some information to ensure the sample is appropriate, even if you don’t need it for data analysis?
Decide What You Really Need to Know • Can you get the information somewhere else? • Is it already in a participant or employee database? • Is the question relevant to all respondents? • Can participants skip irrelevant questions easily?
Step 2 Write the questions following best practices
Model of Survey Response Process: Encoding → Comprehension → Retrieval → Judgment → Response
Sources of Error in Response Process • Encoding • Was the information ever stored in long-term memory? • Comprehension • Does the respondent's understanding match the researcher's? • Both literal and 'pragmatic' meaning • Retrieval • Recall, reconstruction, computation • Judgment • Generate an internal answer, then assess it • Response • Edit (possibly censor) • Communicate
Problems in Answering Survey Questions Seven types of problems that can cause errors: • Failure to encode the information sought • Misinterpretation of the question • Forgetting and other memory problems • Inaccurate or unwarranted judgment/estimation strategies • Problems in formatting an answer • More or less deliberate misreporting • Failure to follow instructions
Best Practices • Question and response option wording • Rating scales • Rankings • Agree/Disagree items
Question Wording • Surveys may standardize the wording of questions, but that does not mean that meaning is standardized! • Pre-testing of items is important • Follow up with respondents if possible (e.g., usability-test surveys)
Comprehension Problems – Conceptual Variability • Words have many meanings (senses) • Example: "Have you smoked at least 100 cigarettes in your entire life?" • Survey respondents reported their tobacco use, and researchers then followed up to ask about their interpretation of the question • The most frequent interpretation of "cigarette" was held by only 53.8% of respondents
Verbal Labels • Vague quantifiers • "Few," "A lot," "Often" • Intensifiers • "A little," "Slightly," "Mildly" • Variation in interpretation • The absolute frequency implied by a relative frequency term varies with the topic • "Often" implies a much higher absolute frequency for headaches than for car accidents
Vague Quantifiers: Problems and Solutions • Problem version: How often did you attend a BLS-sponsored training workshop during the past year? • Never • Rarely • Occasionally • Regularly • Improved version: How often did you attend a BLS-sponsored training workshop during the past year? • Not at all • A few times • About once a month • About two or three times a month • About once a week • More than once a week
Double-Barreled Questions • Avoid double-barreled questions • They force respondents to make a single response to multiple questions • They assume that respondents logically group the topics together, which may or may not be true • Recommendations • Watch for the use of “and” in questions. • Eliminate all double-barreled questions. • Divide them into multiple questions.
Rating Scales • Number of Scale Points • Odd v. Even? • Inclusion of Don’t Know Category? • Verbal labels vs. numeric values
Number of Scale Points • General rule: with more than 5 points, provide a visual aid/flashcard (or use an unfolding approach) • Numeric scales: • Use text labels for each option • Avoid numbers unless they are meaningful • Especially avoid negative numbers; respondents are reluctant to select negative options • Verbal scales: use a smaller number of scale points • Respondents cannot reliably distinguish between many labeled categories
Number of Scale Points, cont. • Even or odd number of scale points? • Is there substantive interest in the middle position? • It is socially more acceptable to report a middle opinion than "Don't know" • Consider the use of filters • Number of scale points • 5 to 7 points increase reliability and validity • 7 to 11 points: reliability rises with the number of points, but more points may be more burdensome
Rating Scales • Be sure the scale is balanced. • Example: How satisfied were you with the quality of today's keynote speaker? (a) Very satisfied (b) Satisfied (c) Somewhat satisfied (d) Dissatisfied • This scale has three "satisfied" options but only one "dissatisfied" option (a toy balance check is sketched below).
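One way to make this check mechanical, purely as a toy sketch: count the options on each side of the scale. The keyword heuristic below is invented for illustration and is not a validated method.

```python
# A toy balance check for a satisfaction scale: count positive versus
# negative options. The substring heuristic is a naive illustration only.
def balance(options):
    pos = sum(1 for o in options
              if "satisfied" in o.lower() and "dissatisfied" not in o.lower())
    neg = sum(1 for o in options if "dissatisfied" in o.lower())
    return pos, neg

unbalanced = ["Very satisfied", "Satisfied", "Somewhat satisfied", "Dissatisfied"]
balanced = ["Very satisfied", "Satisfied", "Dissatisfied", "Very dissatisfied"]

print(balance(unbalanced))  # (3, 1) -> three positive options, one negative
print(balance(balanced))    # (2, 2) -> equal positive and negative options
```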
Ranking • Definitions • Ranking: Select an order for the items, comparing each against all the others. • Rating: Select a value for individual items from a scale
Ranking • Consider other options before using ranking • Ranking is more difficult and less enjoyable than other evaluation methods (Elig & Frieze, 1979) • You don't get any interval-level data
Ranking • Recommendations • Use ratings instead if you can. • Determine ranks from average ratings (see the sketch below). • Use rankings if you need respondents to prioritize options.
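A minimal sketch of the "ranks from average ratings" recommendation, with invented item names and ratings:

```python
# Derive a ranking of items from their mean ratings. Items and
# ratings below are invented for illustration.
ratings = {
    "Workshop content":     [5, 4, 4, 5, 3],
    "Venue":                [3, 3, 4, 2, 3],
    "Registration process": [4, 5, 4, 4, 4],
}

means = {item: sum(vals) / len(vals) for item, vals in ratings.items()}

# Sort items from highest to lowest mean rating to obtain a ranking.
for rank, (item, mean) in enumerate(
        sorted(means.items(), key=lambda kv: kv[1], reverse=True), start=1):
    print(f"{rank}. {item} (mean = {mean:.2f})")
```

Because respondents rate each item independently, this avoids the burden of full ranking while still yielding an ordering, and it preserves interval-level information the ranks alone would discard.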
Agree / Disagree Items • Why use agree / disagree items? • They are fairly easy to write • You can cover lots of topics with one scale • It’s a fairly standard scale • It’s familiar to respondents
Agree / Disagree Items • Unfortunately, they can be problematic • They are prone to acquiescence bias • The tendency to agree with a statement • They require an additional level of processing for the respondent • Respondents need to translate their response to the agree/disagree scale.
Agree / Disagree Items • Recommendation • Avoid agree / disagree items if possible • Use "construct-specific" response options
Other Issues • Consider the survey as a conversation • Be sure the responses match the question. • Avoid jargon unless appropriate • Remember that responses can be impacted by • The size of the text field • Question order • Graphics, even seemingly innocuous ones
Broader Issue – Satisficing • Responding to surveys often requires considerable effort • Rather than finding the 'optimal' answer, people may take shortcuts and choose the first minimally acceptable answer • "Satisficing" (Krosnick, 1991) depends on: • Task difficulty, respondent ability, and motivation
Satisficing – Remedies • Minimize task difficulty • Minimize the number of words in questions • Ask about only a single evaluative dimension in each question • Decompose questions when possible • Ask for absolute, not relative, judgments • Label all response options, etc.
Satisficing – Remedies, cont. • Maximize motivation • Describe purpose and value of study • Provide instructions to think carefully • Include random probes (“why do you say that?”) • Keep surveys short and put important questions early
Satisficing – Remedies, cont. • Minimize "response effects" • Avoid blocks of ratings on the same scale; they invite 'straight-lining' (a flagging sketch follows below) • Do not offer 'no opinion' response options • Avoid agree/disagree, yes/no, true/false questions
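As a rough illustration of why same-scale rating blocks are risky, here is a toy rule for flagging possible straight-liners (respondent IDs and answers are invented): respondents whose answers never vary across a block warrant closer review, though identical answers can of course be genuine.

```python
# Flag respondents who gave the identical answer to every item in a
# block of same-scale ratings. In practice this signal would be combined
# with timing data and other quality checks before acting on it.
grid_answers = {
    "R001": [4, 4, 4, 4, 4, 4],   # same answer to every item -> flagged
    "R002": [5, 3, 4, 2, 4, 5],
    "R003": [2, 2, 2, 2, 2, 2],   # same answer to every item -> flagged
}

flagged = [rid for rid, answers in grid_answers.items()
           if len(set(answers)) == 1]
print("Possible straight-liners:", flagged)
```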
Step 3 Test the survey
Testing Surveys • Be sure your questions work • Two common techniques for evaluating surveys are: • Expert reviews • Cognitive Interviewing (see Willis, 2005)
Expert Reviews • Who is an expert? • Questionnaire design expert • Subject matter expert • Questionnaire administration expert (e.g., interviewers) • What do experts do? • Identify potential response problems • Recommend possible improvements
Cognitive Interviewing • Cognitive interviewing basics • Have the participant complete the survey • Afterwards, ask participants questions such as: • In your own words, what was the question asking? • What did you consider in determining your response? • Was there anything difficult about this question? • Review the qualitative data you get to identify potential problems and solutions (one way to summarize findings is sketched below)
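If you record a problem code for each issue observed during the interviews, a simple tally can show which questions need the most attention. A sketch with invented question IDs and codes:

```python
# Tally the problem codes noted for each question across cognitive
# interviews so the most problematic items can be prioritized for
# revision. All findings below are invented for illustration.
from collections import Counter, defaultdict

# (question_id, problem_code) pairs recorded during interviews
findings = [
    ("Q1", "comprehension"), ("Q1", "comprehension"), ("Q1", "recall"),
    ("Q2", "response-format"),
    ("Q3", "comprehension"), ("Q3", "sensitive"),
]

by_question = defaultdict(Counter)
for qid, code in findings:
    by_question[qid][code] += 1

for qid in sorted(by_question):
    print(qid, dict(by_question[qid]))
```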
Summary • Decide what you really need to know • Write the questions following best practices • Test the survey
References
Bradburn, N., Sudman, S., & Wansink, B. (2004). Asking Questions: The Definitive Guide to Questionnaire Design -- For Market Research, Political Polls, and Social and Health Questionnaires (Research Methods for the Social Sciences). Wiley & Sons.
Elig, T. W., & Frieze, I. H. (1979). Measuring causal attributions for success and failure. Journal of Personality and Social Psychology, 37(4), 621-634.
Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In P. V. Marsden & J. D. Wright (Eds.), Handbook of Survey Research (2nd ed.). Bingley, UK: Emerald Group Publishing.
Willis, G. (2005). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage Publications.