Collecting High Quality Outcome Data, part 2
Learning objectives
• By the end of this module, learners will understand:
  • Steps to implement data collection, including developing a data collection schedule, training data collectors, and pilot testing instruments
  • Data quality as it relates to reliability, validity, and minimizing sources of bias
• Key concepts are illustrated using a variety of program measurement scenarios.
Agenda
• Implementing data collection
  • Developing a data collection schedule
  • Training data collectors
  • Pilot testing instruments
• Ensuring data quality
  • Reliability
  • Validity
  • Minimizing bias
• Summary of key points; additional resources
How to use this course module
• Go through the sections of the course module at your own pace. Use the “back” arrow to return to a section if you want to see it again.
• The audio narration for the course plays automatically. If you prefer to turn it off, click on the button that looks like a speaker.
• There are several learning exercises within the course. Try them out to check your understanding of the material. There is no grade associated with these exercises.
• Practicum materials are provided separately for use by trainers and facilitators.
Steps to implement data collection
• After identifying a data source, method, and instrument:
  • Identify whom to work with to collect data
  • Set a schedule for collecting data
  • Train data collectors
  • Pilot test the data collection process
  • Make changes
  • Implement data collection
FOR BEST RESULTS, make key decisions about how to implement data collection BEFORE program startup!
Creating a data collection schedule
• Identifies who will collect data, using which instrument, and when
• Share with team to keep everyone informed
• Include stakeholders in planning
• Include dates for collecting, analyzing, and reporting data
• Use any format you like
Training data collectors
• Determine best person(s) to collect data
• Provide written instructions for collecting data
• Explain importance and value of data for program
• Walk data collectors through instrument
• Practice or role play data collection
• Review data collection schedule
• Explain how to return completed instruments
Pilot testing for feasibility and data quality
• Try out instruments with a small group similar to program participants
• Discuss instrument with respondents
• Analyze pilot test data to ensure the instrument yields the right information (see the sketch below)
• Revise instrument and data collection process based on feedback
• Implement data collection!
Questions for Debrief
• How long did it take to complete?
• What did you think the questions were asking you about?
• Were any questions unclear, confusing, or difficult to answer?
• Were response options adequate?
• Did questions allow you to say everything you wanted to say?
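The debrief questions cover the qualitative side of a pilot test; the pilot data itself can also be summarized to spot problem questions. The sketch below is a minimal example, assuming pilot responses have been exported to a CSV with one row per respondent and one column per question; the file name, the column layout, and the 90% threshold are illustrative assumptions, not part of the module.

```python
import pandas as pd

# Hypothetical export of pilot responses: one row per respondent,
# one column per question.
pilot = pd.read_csv("pilot_responses.csv")

# Item non-response: questions that many pilot respondents skipped
# may be unclear, confusing, or difficult to answer.
missing_rate = pilot.isna().mean().sort_values(ascending=False)
print("Share of respondents who skipped each question:")
print(missing_rate)

# Response spread: a question where nearly everyone chooses the same option
# may not capture useful variation and may need revised response options.
for question in pilot.columns:
    top_share = pilot[question].value_counts(normalize=True).max()
    if top_share > 0.9:
        print(f"{question}: {top_share:.0%} chose a single option -- review wording and options")
```

A check like this complements, rather than replaces, the debrief conversation with pilot respondents.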
Exercise #1
Which statement is FALSE?
• Data collectors should be trained only when instruments are complex.
• Data collectors should receive written instructions on how to administer the instrument.
• Data collectors should receive a copy of the data collection schedule.
• Train data collectors by walking them through the instrument.
• None of the above
Exercise #2
The main purpose of pilot testing is to:
• Determine how well an instrument will work in your context
• Find out if respondents like your instrument
• Ensure the confidentiality of sensitive data
• Determine which of two competing instruments works best
Ensuring data quality: Reliability, validity, bias
• Reliability is the ability of a method or instrument to yield consistent results under the same conditions.
• Validity is the ability of a method or instrument to measure accurately.
• Bias involves systematic distortion of results due to over- or under-representation of particular groups, question wording that encourages or discourages particular responses, or poorly timed data collection.
Reliability
• Administer instruments the same way every time
  • Written instructions for respondents
  • Written instructions for data collectors
  • Train and monitor data collectors
• Design instruments to improve reliability
  • Use clear and unambiguous language so question meaning is clear.
  • Use more than one question to measure an outcome (see the sketch below for one way to check consistency across related questions).
  • Use attractive, uncluttered layouts that are easy to follow.
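When several questions are meant to measure the same outcome, one common way to check that they yield consistent results is Cronbach's alpha. The sketch below is a minimal, illustrative example: the item scores are made-up data, and the "around 0.7 or higher" reading is a common rule of thumb rather than guidance from this module.

```python
import numpy as np

# Made-up scores: each row is a respondent, each column is one of four
# related questions intended to measure the same outcome (1-5 scale).
items = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 3],
], dtype=float)

k = items.shape[1]                                # number of related questions
item_variances = items.var(axis=0, ddof=1)        # variance of each question
total_variance = items.sum(axis=1).var(ddof=1)    # variance of each respondent's total score

# Cronbach's alpha: closer to 1 means the questions move together and yield
# consistent results; around 0.7 or higher is often treated as acceptable.
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha: {alpha:.2f}")
```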
Ensuring reliability
• For Surveys
  • Avoid ambiguous wording that may lead respondents to interpret the same question differently.
• For Interviews
  • Don’t paraphrase or change question wording.
  • Don’t give verbal or non-verbal cues that suggest preferred responses.
• For Observation
  • Train and monitor observers to ensure consistency in how they interpret what they see.
Validity
• Validity is the ability of a method or instrument to measure accurately.
• Instrument measures same outcome identified in theory of change (attitude, knowledge, behavior, condition)
• Instrument measures relevant dimensions of outcome
• Instrument results corroborated by other evidence
Ensuring validity — example
• Intended outcome: academic engagement (attachment to school)
• Instrument should measure the same type of outcome as the intended outcome (in this case, attitudes)
• Instrument should measure the outcome dimensions targeted by the intervention, including feelings about:
  • Teachers
  • Students
  • Being in school
  • Doing schoolwork
• Students showing improvement in attitudes towards school should not exhibit contradictory behavior (see the sketch below)
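One way to act on the last point, and on "instrument results corroborated by other evidence" from the previous slide, is to line the instrument's results up against a behavioral record. The sketch below is a minimal, assumed example comparing survey-based engagement scores with attendance; the file names, column names, and the choice of attendance as corroborating evidence are illustrative, not prescribed by the module.

```python
import pandas as pd

# Hypothetical files: survey-based engagement scores and attendance records,
# both keyed by a student identifier.
survey = pd.read_csv("engagement_scores.csv")   # columns: student_id, engagement_score
attendance = pd.read_csv("attendance.csv")      # columns: student_id, days_attended

merged = survey.merge(attendance, on="student_id")

# If the instrument validly measures attachment to school, higher engagement
# scores should generally go with, not against, the behavioral record.
correlation = merged["engagement_score"].corr(merged["days_attended"])
print(f"Correlation between engagement score and days attended: {correlation:.2f}")
```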
Sources of bias
• Who, how, when, and where you ask
• Who: Non-responders = hidden bias
  • Participants who drop out early
  • Sampling bias: participants don’t have an equal chance of being selected
• How: Wording that encourages or discourages particular responses
• When and Where: Timing and location can influence responses (e.g., conducting a satisfaction survey right after a free lunch)
• Bias is often unintentional, but can lead to over- or under-estimation of program results
Minimizing bias
• Get data from as many respondents as possible (see the sketch below for a simple non-response check)
  • Work with program sites to maximize data collection
  • Follow up with non-responders
  • Take steps to reduce participant attrition
• Pilot test instruments and data collection procedures
• Mind your language: use neutral wording that does not suggest a preferred response
• Time data collection to avoid circumstances that may distort responses
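A simple way to keep non-response bias in view is to track who has and has not responded and look for groups that are falling behind. The sketch below assumes a roster file listing everyone asked to respond, with a response flag and a grouping variable such as program site; the file and column names are illustrative assumptions.

```python
import pandas as pd

# Hypothetical roster: participant_id, site, responded (0/1).
roster = pd.read_csv("participant_roster.csv")

# Overall response rate: the lower it is, the more room there is for
# hidden non-response bias in the results.
response_rate = roster["responded"].mean()
print(f"Overall response rate: {response_rate:.0%}")

# Response rates by site: large gaps suggest some groups are over- or
# under-represented, so follow up with the low-responding sites.
by_site = roster.groupby("site")["responded"].mean().sort_values()
print(by_site.to_string())
```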
Exercise #3
Which statement is true? Reliability refers to the ability of an instrument to:
• Produce consistent results
• Produce expected results
• Produce accurate results
• Be used for a long time
• Be completed by respondents with ease
Exercise #4
Which statement is true? Validity refers to the ability of an instrument to:
• Produce consistent results
• Produce expected results
• Produce accurate results
• Be used for a long time
• Be completed by respondents with ease
Exercise #5
Bias can be minimized by doing all of the following except:
• Getting data from as many respondents as possible
• Sampling program participants
• Using neutral language in questions
• Pilot testing instruments
• Reducing attrition of program participants
• Following up with non-responders
Measurement Scenarios
• Explore these measurement scenarios to see how programs address issues of reliability, validity, and bias.
• Capacity Building: recruiting volunteers to serve more people
Summary of key points
• Steps to implement data collection include identifying the players involved in data collection, creating a data collection schedule, training data collectors, pilot testing instruments, and revising instruments as needed.
• A data collection schedule identifies who will collect data, using which instrument, and when.
• Train data collectors by walking them through the instrument and role playing the process.
• Pilot testing involves having a small group of people complete an instrument and asking them about the experience.
Summary of key points
• Reliability, validity, and bias are key criteria for data quality.
• Reliability is the ability of a method or instrument to yield consistent results under the same conditions.
• Validity is the ability of a method or instrument to measure accurately.
• Bias involves systematic distortion of results due to over- or under-representation of particular groups, question wording that encourages or discourages particular responses, or poorly timed data collection.
Additional resources
• CNCS Performance Measurement: http://nationalservice.gov/resources/npm/home
• Performance measurement effective practices: http://www.nationalserviceresources.org/practices/topic/92
• Instrument Formatting Checklist: http://www.nationalserviceresources.org/files/Instrument_Development_Checklist_and_Sample.pdf
• Practicum Materials: [URL]