Project VIABLE: Critical Components of DBR Training to Enhance Rater Accuracy

Jessica Amon*, Shannon Brooks*, Stephen Kilgus**, Sandra M. Chafouleas**, & Chris Riley-Tillman*
* East Carolina University, ** University of Connecticut

INTRODUCTION

This study represents one of several investigations initiated under Project VIABLE. Through Project VIABLE, empirical attention is being directed toward the development and evaluation of formative measures of social behavior involving a direct behavior rating (DBR). The goal of Project VIABLE is to examine the DBR through three phases of investigation: (1) foundations of measurement, (2) decision making and validity, and (3) feasibility.

DBR refers to the rating of a specified behavior at least daily, with that information then shared with someone other than the rater. The question of how much training is necessary to facilitate appropriate rater accuracy among DBR users has recently begun to be explored. Extant research has suggested that training incorporating practice and performance feedback results in greater accuracy than exposure to a brief familiarization session alone (Schlientz et al., 2009). Additional findings have indicated that the inclusion of practice with feedback improved rater accuracy over and above practice alone when rating student disruptive behavior. Despite these findings, a review of the literature in related fields suggests that additional training components may be of interest. Specifically, work in industrial/organizational psychology has indicated that training calling attention to (a) common rater errors and/or (b) a rater's frame of reference may lead to increased accuracy. The purpose of this study was therefore to examine the impact of adding Frame-of-Reference training (FOR) and Rater Error Training (RET) to standard DBR training involving practice and feedback (Standard). In addition, the amount of exposure to practice with feedback was evaluated.

MATERIALS & METHODS

Participants were 177 undergraduate students recruited from a university in the southeast. Participants were assigned a priori to one of six conditions, each comprising one of three types of training (Standard, FOR, or FOR+RET) crossed with one of two levels of practice exposure (3 or 6 modeling clips). In all conditions, participants first provided information pertaining to demographics and DBR familiarity. Next, participants were shown three one-minute pre-test clips and asked to rate one student on a specific behavior after each clip. All participants were then given a brief presentation on DBR. In the FOR+RET conditions, participants were also presented with common examples of rating errors (e.g., the halo effect, leniency/severity, central tendency, primacy/recency). The presenter then demonstrated the correct way to rate a student's behavior. Standard condition participants were provided with the true score for each pre-test clip; FOR and FOR+RET participants were given both the true score and an explanation for that score. Next, participants rated one student on one behavior in each of 3 or 6 (depending on condition) modeling clips. In the FOR and FOR+RET conditions, participants were asked to write an explanation of why they chose the rating they did, after which the presenter offered feedback on the clip. In the Standard conditions, feedback consisted of the true score; in the FOR and FOR+RET conditions, feedback included both the true score and a replay of the clip during which the presenter pointed out the reasons for each rating. Finally, all participants viewed and rated six experimental clips.

RESULTS

Six outcome variables were of interest: the accuracy with which participants rated each of the six experimental video clips. Accuracy was calculated as the absolute value of the difference between an individual's rating and the true score for a particular clip (A = |x_i - x_true|), with lower scores indicative of greater accuracy. The absolute value was taken to ensure that no pattern of rating (e.g., consistent over- or under-rating) would erroneously lead any one group to appear more accurate than another. See Table 1 for a summary of descriptive statistics for accuracy scores by condition, behavior target, and rate of behavior.
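As a concrete illustration of this accuracy metric, the minimal sketch below computes absolute accuracy scores for a handful of hypothetical ratings of a single clip. The values and variable names are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical ratings of one clip from five raters
# (invented values, not the study's data).
ratings = np.array([4, 6, 5, 3, 5])

# Predetermined true (criterion) score for the clip.
true_score = 5

# A = |x_i - x_true|: absolute deviation from the true score.
# Lower values mean greater accuracy, and the absolute value keeps
# over- and under-rating from canceling out within a group.
accuracy = np.abs(ratings - true_score)

print(accuracy)         # [1 1 0 2 0]
print(accuracy.mean())  # 0.8 -> the group's mean accuracy for this clip
```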
A repeated measures MANOVA revealed a statistically significant (a) main effect for experimental clip (Wilks' Lambda F = 4.571, p < .001, partial η² = .932), (b) two-way interaction between experimental clip and practice level (Wilks' Lambda F = 3.923, p = .002, partial η² = .105), and (c) three-way interaction between experimental clip, type of training, and practice level (Wilks' Lambda F = 2.896, p = .002, partial η² = .08). The statistically significant three-way interaction may be taken to suggest that the moderating influence of type of training on the effect of practice level on accuracy varied across experimental clips; in other words, the effect of level of practice was not consistent across types of training. Furthermore, the influence of the within-subjects factor of experimental clip suggests that these relationships were not consistent across clips (e.g., a statistically significant difference between ST-3 and ST-6 that existed when rating experimental clip 1 may not have existed for experimental clip 4).

A series of post hoc comparisons was then conducted to further elucidate differences among groups in mean rating accuracy. Comparisons were kept within experimental clip, as these were determined to constitute the most meaningful contrasts; comparisons between groups across clips were considered uninterpretable, as any difference could not be attributed to either clip or training content. All possible unique (within dependent variable) comparisons were made, resulting in a total of 90 contrasts (15 unique contrasts for each of the 6 dependent variables). Of the 90 possible unique contrasts, four were found to be statistically significant at the .0006 alpha level (approximately .05/90, reflecting an adjustment for the 90 contrasts). For a summary of these specific contrasts, see Table 2.
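To make the contrast bookkeeping concrete, here is a sketch of how the 90 post hoc contrasts could be enumerated and screened against a Bonferroni-style alpha of .05/90 ≈ .0006. The data frame, the group labels other than ST-3/ST-6, and the use of independent-samples t-tests are assumptions for illustration; the poster does not state which test statistic was used for the contrasts.

```python
from itertools import combinations

import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical long-format data: one row per participant, a group label
# (3 training types x 2 practice levels = 6 groups; labels other than
# ST-3/ST-6 are invented), and an absolute accuracy score per clip.
groups = ["ST-3", "ST-6", "FOR-3", "FOR-6", "FOR+RET-3", "FOR+RET-6"]
df = pd.DataFrame({"group": rng.choice(groups, size=180)})
for i in range(1, 7):
    df[f"clip{i}"] = rng.integers(0, 5, size=len(df))  # fake scores

# 15 unique group pairs per clip x 6 clips = 90 contrasts, so a
# Bonferroni-corrected alpha of .05 / 90 ~= .0006.
pairs = list(combinations(groups, 2))  # 15 pairs
alpha = 0.05 / (len(pairs) * 6)        # ~.000556

for clip in (f"clip{i}" for i in range(1, 7)):
    for g1, g2 in pairs:
        t, p = stats.ttest_ind(df.loc[df.group == g1, clip],
                               df.loc[df.group == g2, clip])
        if p < alpha:
            print(f"{clip}: {g1} vs {g2}: t = {t:.2f}, p = {p:.4g}")
```

For the omnibus test, statsmodels' MANOVA.from_formula("clip1 + clip2 + clip3 + clip4 + clip5 + clip6 ~ group", df).mv_test() would produce Wilks' lambda for the between-subjects effects, though the study's repeated-measures model additionally treats experimental clip as a within-subjects factor.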
Table 1. Average untransformed absolute accuracy scores across all experimental clips.
Note. Low, medium, and high refer to the percentage of time the behavior was displayed by the student of interest during a particular clip (0-25%, 26-74%, and 75-100%, respectively). Each of the six levels (3 rates of behavior x 2 behaviors) corresponds to an experimental video clip of student behavior rated by participants.

Table 2. Statistically significant comparisons of average rating accuracy.
Note. * = the most accurate group within the comparison. 1 = Standard training; 2 = Frame-of-Reference training; 3 = Frame-of-Reference + Rater Error Training.

SUMMARY AND CONCLUSIONS

The current investigation serves as an extension of the literature pertaining to training components that promote rating accuracy using DBR. Prior research has produced somewhat mixed findings. Work conducted by both Harrison and Riley-Tillman (2010) and Schlientz et al. (2009) has suggested a "more is better" approach, with the most intensive approaches to training producing the most accurate raters. In contrast, the findings of LeBel and colleagues (2009) indicated that moderate levels of training may be sufficient. It is with this latter finding that the current investigation is aligned.

Results of the current study were generally consistent across groups, with most groups not exhibiting greater accuracy than the others to a statistically significant degree. However, the data did suggest that, when rating certain clips, more practice with feedback led to greater accuracy regardless of the type of training given. This would seem to indicate that, as long as sufficient opportunity to practice is provided, a rater may receive a less intensive form of training (e.g., Standard training) and still produce accurate ratings of student behavior. This suggests the potential feasibility of DBR training in school settings, as relatively efficient training procedures might be incorporated as a first step in improving rater accuracy. It is recommended that future DBR-related work focus on the development of a standardized DBR training package. Subsequent investigations should then examine the use of this package, including both its feasibility and effectiveness within applied settings.

CONTACTS

For additional information, please direct all correspondence to Chris Riley-Tillman at rileytillmant@ecu.edu or Jessica Amon at jga1006@ecu.edu.

Preparation of this poster was supported by a grant from the Institute of Education Sciences (IES), U.S. Department of Education (R324B060014).

BIBLIOGRAPHY

Harrison, S., & Riley-Tillman, T. C. (2010, March). Direct Behavior Ratings: Training strategies to improve accuracy. Presentation at the National Association of School Psychologists Annual Convention, Chicago, IL.

LeBel, T. J., Briesch, A. M., Kilgus, S. P., Riley-Tillman, T. C., Chafouleas, S. M., & Christ, T. J. (2009, February). Behavioral specificity and wording impact on Direct Behavior Rating accuracy. Poster presented at the National Association of School Psychologists Annual Convention, Boston, MA.

Schlientz, M. D., Riley-Tillman, T. C., Briesch, A. M., Walcott, C. M., & Chafouleas, S. M. (2009). The impact of training on the accuracy of Direct Behavior Ratings (DBRs). School Psychology Quarterly, 24, 73-83.
