The reliability of measurement is crucial for consistent results in data collection. This article explains threats to reliability, solutions, and accuracy assessment. It also covers Inter-observer Agreement (IOA), methods to calculate IOA, considerations, and reporting. Validity, types, threats, and examples are discussed. Understanding these concepts is vital for impactful data collection in ABA.
Reliability of Measurement
• Measurement is reliable when it yields the same values across repeated measures of the same event
• Relates to repeatability
• Not the same as accuracy
• Low reliability signals suspect data
Threats to Reliability
1. Human error
• Example: missing or misrecording a data point
• Errors usually result from poorly designed measurement systems
• Cumbersome or difficult to use
• Too complex
• Can be reduced by using technology (e.g., cameras)
2. Inadequate observer training
• Training must be explicit and systematic
• Select observers carefully
• Clearly define the target behavior
• Train to a competency standard
• Provide ongoing training to minimize observer drift
• Have backup observers check the primary observers
3. Unintended influences on observers
• These influences cause a range of problems, including:
• Expectations about what the data should look like
• Observer reactivity, which occurs when the observer knows that others are evaluating the data
• Measurement bias
• Feedback to observers about how their data relate to the goals of the intervention
Solutions to Reliability Issues
• Design a good measurement system
• Take your time on the front end
• Train observers carefully
• Evaluate the extent to which data are accurate and reliable
• Measure the measurement system
Accuracy of Measurement
• Measurement is accurate when observed values match the true values of an event
• The issue: we do not want to base research conclusions or treatment decisions on faulty data
Purposes of accuracy assessment:
• Determine if data are good enough to make decisions
• Discover and correct measurement errors
• Reveal consistent patterns of measurement error
• Assure consumers that data are accurate
• Observed values must match true values
• Accuracy is determined by calculating the correspondence of each data point with its true value (a simple calculation sketch follows)
• Accuracy assessments should be reported in research
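A minimal sketch of that point-by-point accuracy check in Python, assuming the observed values and their true values (e.g., counts scored later from a video record) are stored as parallel lists. The names observed, true_values, and accuracy_percentage are illustrative, not from any standard library.

```python
# Point-by-point accuracy check: compare each observed value to its true value
# and report the percentage of exact matches.

def accuracy_percentage(observed, true_values):
    """Percentage of observed data points that exactly match the true values."""
    if len(observed) != len(true_values):
        raise ValueError("observed and true_values must be the same length")
    matches = sum(1 for obs, true in zip(observed, true_values) if obs == true)
    return matches / len(true_values) * 100

# Example: an observer's session counts versus counts scored from video
observed = [4, 7, 3, 5, 6]
true_values = [4, 7, 2, 5, 6]
print(f"Accuracy: {accuracy_percentage(observed, true_values):.1f}%")  # 80.0%
```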
Inter-observer Agreement (IOA) or Inter-observer Reliability (IOR)
• The degree to which two or more independent observers report the same values for the same events
• Used to:
• Determine the competency of observers
• Detect observer drift
• Judge the clarity of the definitions and the measurement system
• Increase the validity of the data
Requirements for IOA / IOR
• Observers must:
• Use the same observation code and measurement system
• Observe and measure the same participants and events
• Observe and record independently of one another
Methods to Calculate IOA / IOR
• (Smaller freq. / Larger freq.) × 100 = percentage of agreement
• Can be applied to intervals as well
• Agreements / (Agreements + Disagreements) × 100
• These methods can compare:
• Total count recorded by each observer
• Mean count-per-interval
• Exact count-per-interval
• Trial-by-trial
• (The count-based methods are sketched in code after this list)
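A minimal sketch of the count-based calculations above, assuming each observer's data are stored as a list of per-interval counts (or, for trial-by-trial IOA, a list of per-trial outcomes). The function names and sample data are illustrative assumptions; the formulas follow the smaller/larger and agreements/(agreements + disagreements) rules listed on the slide.

```python
# Count-based IOA calculations for two independent observers.

def total_count_ioa(counts_a, counts_b):
    """Total count IOA: smaller total / larger total x 100."""
    total_a, total_b = sum(counts_a), sum(counts_b)
    return min(total_a, total_b) / max(total_a, total_b) * 100

def mean_count_per_interval_ioa(counts_a, counts_b):
    """Average of the smaller/larger ratio calculated interval by interval."""
    ratios = []
    for a, b in zip(counts_a, counts_b):
        if a == b:
            ratios.append(1.0)  # includes intervals where both recorded zero
        else:
            ratios.append(min(a, b) / max(a, b))
    return sum(ratios) / len(ratios) * 100

def exact_count_per_interval_ioa(counts_a, counts_b):
    """Percentage of intervals in which both observers recorded exactly the same count."""
    exact = sum(1 for a, b in zip(counts_a, counts_b) if a == b)
    return exact / len(counts_a) * 100

def trial_by_trial_ioa(trials_a, trials_b):
    """Agreements / (agreements + disagreements) x 100 for trial-based (yes/no) data."""
    agreements = sum(1 for a, b in zip(trials_a, trials_b) if a == b)
    return agreements / len(trials_a) * 100

# Example: counts from two observers across five intervals
counts_a = [3, 0, 2, 5, 1]
counts_b = [2, 0, 2, 4, 1]
print(f"Total count IOA: {total_count_ioa(counts_a, counts_b):.1f}%")                          # 81.8%
print(f"Mean count-per-interval IOA: {mean_count_per_interval_ioa(counts_a, counts_b):.1f}%")  # 89.3%
print(f"Exact count-per-interval IOA: {exact_count_per_interval_ioa(counts_a, counts_b):.1f}%")  # 60.0%

# Example: agreement on four discrete trials (True = correct response recorded)
trials_a = [True, True, False, True]
trials_b = [True, False, False, True]
print(f"Trial-by-trial IOA: {trial_by_trial_ioa(trials_a, trials_b):.1f}%")                    # 75.0%
```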
Timing recording methods:
• Total duration IOA
• Mean duration-per-occurrence IOA
• Mean latency-per-response IOA
• Mean IRT-per-response IOA
• (A sketch of the duration-based calculations follows)
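A hedged sketch of the duration-based calculations, assuming both observers timed the same occurrences of the behavior. The durations are hypothetical and in seconds; latency and IRT data can be handled with the same per-occurrence logic.

```python
# Duration-based IOA calculations for two independent observers.

def total_duration_ioa(durations_a, durations_b):
    """Shorter total duration / longer total duration x 100."""
    total_a, total_b = sum(durations_a), sum(durations_b)
    return min(total_a, total_b) / max(total_a, total_b) * 100

def mean_duration_per_occurrence_ioa(durations_a, durations_b):
    """Average of shorter/longer duration, calculated separately for each occurrence."""
    ratios = [min(a, b) / max(a, b) for a, b in zip(durations_a, durations_b)]
    return sum(ratios) / len(ratios) * 100

# Example: three occurrences of the behavior timed by two observers (seconds)
durations_a = [42, 15, 90]
durations_b = [40, 18, 85]
print(f"Total duration IOA: {total_duration_ioa(durations_a, durations_b):.1f}%")                              # 97.3%
print(f"Mean duration-per-occurrence IOA: {mean_duration_per_occurrence_ioa(durations_a, durations_b):.1f}%")  # 91.0%
```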
Interval recording and time sampling:
• Interval-by-interval IOA (point-by-point)
• Scored-interval IOA
• Unscored-interval IOA
• (See the sketch below for how each is computed)
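A minimal sketch of the three interval-based agreement measures, assuming each observer's record is a list of booleans (True = behavior scored in that interval). The record names rec_a and rec_b are illustrative.

```python
# Interval-based IOA calculations for two independent observers.

def interval_by_interval_ioa(rec_a, rec_b):
    """Point-by-point: percentage of all intervals with matching records."""
    agreements = sum(1 for a, b in zip(rec_a, rec_b) if a == b)
    return agreements / len(rec_a) * 100

def scored_interval_ioa(rec_a, rec_b):
    """Considers only intervals in which at least one observer scored the behavior."""
    relevant = [(a, b) for a, b in zip(rec_a, rec_b) if a or b]
    if not relevant:
        return 100.0  # no scored intervals; treat as full agreement
    agreements = sum(1 for a, b in relevant if a and b)
    return agreements / len(relevant) * 100

def unscored_interval_ioa(rec_a, rec_b):
    """Considers only intervals in which at least one observer did not score the behavior."""
    relevant = [(a, b) for a, b in zip(rec_a, rec_b) if not (a and b)]
    if not relevant:
        return 100.0  # every interval was scored by both observers
    agreements = sum(1 for a, b in relevant if not a and not b)
    return agreements / len(relevant) * 100

# Example: six observation intervals
rec_a = [True, False, True, True, False, False]
rec_b = [True, False, False, True, False, True]
print(f"Interval-by-interval IOA: {interval_by_interval_ioa(rec_a, rec_b):.1f}%")  # 66.7%
print(f"Scored-interval IOA: {scored_interval_ioa(rec_a, rec_b):.1f}%")            # 50.0%
print(f"Unscored-interval IOA: {unscored_interval_ioa(rec_a, rec_b):.1f}%")        # 50.0%
```

Because scored-interval IOA ignores intervals in which neither observer recorded the behavior, it tends to give a more conservative estimate for low-rate behaviors; unscored-interval IOA does the same for high-rate behaviors. This is one way to apply the "more conservative methods" guideline discussed below.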
Considerations in IOA
• Assess IOA during each condition and phase of a study
• Distribute assessments across days of the week, times of day, settings, and observers
• Assess a minimum of 20% of sessions, preferably 25-30%
• Assess more frequently with complex measurement systems
• Obtain and report IOA at the same levels at which researchers will report and discuss it within the results:
• For each behavior
• For each participant
• In each phase of intervention or baseline
Other Considerations
• Use more conservative methods of calculating IOA
• Avoid methods that will overestimate actual agreement
• If in doubt, report more than one calculation
• 80% agreement is usually the benchmark
• The higher, the better
• The acceptable level depends on the complexity of the measurement system
Reporting IOA
• Can use:
• Narrative
• Tables
• Graphs
• Report how, when, and how often IOA was assessed
Validity
• There are many types of validity
• Are you measuring what you believe you are measuring?
• Validity ensures the data are representative
• In ABA, we usually measure:
• A socially significant behavior
• A dimension of the behavior relevant to the question
Threats to Validity
• Measuring a behavior other than the behavior of interest
• Measuring a dimension that is irrelevant or ill-suited to the reason for measuring the behavior
• Measurement artifacts
• Researchers must provide evidence that the behavior measured is directly related to the behavior of interest
Examples
• Discontinuous measurement
• Poorly scheduled observations
• Insensitive or limiting measurement scales
Conclusions
• The reliability and validity of data collection are important
• They impact the client
• They impact your reputation for good work