The Impact of Training on the Accuracy of Teacher-Completed Direct Behavior Ratings (DBRs)
Teresa J. LeBel, Stephen P. Kilgus, Amy M. Briesch, & Sandra M. Chafouleas
University of Connecticut
Introduction

Research has shown a pressing need for proactive efforts to address challenging behavior in order to facilitate both academic and social behavioral success (e.g., Walker, Ramsey, & Gresham, 2004). However, when making decisions about intervention selection and implementation, data are needed to understand the effects of those intervention attempts. It is therefore vital that data collection procedures yield reliable and accurate data, are easy to use, require minimal training, are time efficient, and are acceptable to the user (i.e., the teacher).

The Direct Behavior Rating (DBR) is a brief measure used to rate behavior over a specified period of time and under specific and similar conditions (Chafouleas, Christ, Riley-Tillman, Briesch, & Chanese, 2007). DBR-type tools have typically been investigated as, and shown to be, an acceptable and efficient method of intervention (e.g., Chafouleas, Riley-Tillman, & McDougal, 2002; Crone, Horner, & Hawken, 2002; McCain & Kelley, 1993). However, interest in DBR use in assessment has recently been increasing, given the potential efficiency of data collection. Despite this interest in DBR use in the assessment of social behavior, few studies to date have investigated the psychometric properties of DBRs across assessment purposes. Nevertheless, a growing body of evidence (e.g., Chafouleas, McDougal, Riley-Tillman, Panahon, & Hilt, 2005; Chafouleas et al., 2007; Riley-Tillman, Chafouleas, Sassu, Chanese, & Glazer, in press; Steege, Davin, & Hathaway, 2001) supports the use of DBRs in behavioral assessment.
Given the potential promise of DBRs in behavioral assessment, it is important to investigate the degree of training necessary for intended users (i.e., teachers) to reliably and accurately rate behavior using a DBR. Previous studies in related areas (e.g., school-based consultation) have found direct training that includes modeling and feedback to be an effective method for enhancing performance, so it is important to investigate whether this type of training is equally effective for DBRs. This study provided a preliminary investigation of the effect of type of training on the accuracy of teacher DBR use, as well as teacher acceptability of the assessment measure.

Results

Accuracy. Chi-square goodness-of-fit analyses were conducted to compare the proportion of individuals within each training group who rated accurately (see Table 3). For this method of analysis, data from both true score (i.e., expert rating) and DBR ratings made on a continuous scale (i.e., 0-100%) were converted to a categorical scale (i.e., 0-10). For example, a continuous measurement of 0% corresponded to a categorical score of '0', 5% to '1', 49% to '5', and 72% to '8'. The conversion was made in order to create a categorical range of accepted accuracy. Specifically, any DBR rating that fell within ten percentage points of the true score was deemed accurate for the purposes of this study. A summary of the means and standard deviations of group ratings is provided in Table 2.

For chi-square analyses of rating accuracy, comparisons were made between the proportions of "Passes" and "Fails" within each training group. For ratings of disruptive behavior, no significant differences in accuracy were found between groups. For academic engagement, however, differences in accuracy were found between the DT and NT groups (χ² = 4.64, p = .03) and between the IT and DT groups (χ² = 7.54, p = .02). No significant difference was found between the IT and NT groups (Yates χ² = .10, p = .75).

Acceptability.
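The categorical conversion and pass/fail tallying described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the study's actual analysis code: the ceiling-based conversion is inferred from the worked examples in the text (0% to '0', 5% to '1', 49% to '5', 72% to '8'), and the chi-square function is a generic Pearson test on a hypothetical 2x2 table of Pass/Fail counts for two groups.

```python
import math


def to_category(percent):
    """Map a continuous 0-100% rating onto the 0-10 categorical scale.

    The worked examples in the text (0% -> 0, 5% -> 1, 49% -> 5,
    72% -> 8) are consistent with taking the ceiling of percent / 10;
    that mapping is an assumption here, not stated in the source.
    """
    return math.ceil(percent / 10)


def is_accurate(rating, true_score):
    """A DBR rating counts as a "Pass" when it falls within ten
    percentage points of the expert true score."""
    return abs(rating - true_score) <= 10


def chi_square_2x2(pass1, fail1, pass2, fail2):
    """Pearson chi-square statistic (no Yates correction) for a 2x2
    table comparing Pass/Fail proportions across two training groups."""
    n = pass1 + fail1 + pass2 + fail2
    observed = [pass1, fail1, pass2, fail2]
    row_totals = [pass1 + fail1, pass1 + fail1, pass2 + fail2, pass2 + fail2]
    col_totals = [pass1 + pass2, fail1 + fail2, pass1 + pass2, fail1 + fail2]
    stat = 0.0
    for obs, r, c in zip(observed, row_totals, col_totals):
        expected = r * c / n
        stat += (obs - expected) ** 2 / expected
    return stat


# A rating of 72% falls in category 8 (ceiling of 7.2), matching the text.
print(to_category(72))      # -> 8
# A rating within ten points of the true score is a Pass.
print(is_accurate(65, 72))  # -> True
```

With identical Pass/Fail proportions in both groups the statistic is 0; with a degree of freedom of 1, values above about 3.84 correspond to p < .05.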
An examination of ARP-R results indicates that the IT group found the DBR to be the most acceptable (M = 3.96). Ratings across all groups, however, fell within the mid-range of acceptability (DT group: M = 3.35, SD = .80; IT group: M = 3.96, SD = 1.03; NT group: M = 2.84, SD = 1.08).

Summary and Conclusions

Results from the current study indicated no difference in rating accuracy between the NT and IT groups for either target behavior (i.e., disruptive behavior, academic engagement). This lack of discrepancy is interesting, in that it may suggest that a minimal level of rater training is sufficient to produce accurate ratings. Although no difference in accuracy was found between the NT and IT groups, each was significantly more accurate when rating academic engagement than the DT group. No such differences, however, were found between any of the three groups for ratings of disruptive behavior. Overall, results of this preliminary investigation of DBR training suggest that (a) there may exist an optimal degree of training, and (b) training beyond this point may have a deleterious effect on rating accuracy. In addition, results suggest that teachers find the DBR to be a moderately acceptable tool for assessing student behavior.

Given the preliminary nature of this study, additional research is needed before firm conclusions about DBR training requirements can be drawn. It will be important to determine which components of training (e.g., rater feedback) are critical for producing accurate ratings. It may also prove useful to further investigate the sources of rater error associated with DBR use (e.g., rater bias) so that training can target the minimization of such errors.
Lastly, although the use of video footage allowed for a preliminary exploration of the effects of DBR training on rater accuracy, investigation should be extended into actual classrooms to gain a more complete understanding of the training needed to accurately rate student behavior using a DBR. In sum, despite certain limitations, results of the current study hold promise for the future of the DBR as a tool for the assessment of social behavior. Specifically, results appear to concur with previous work indicating that only a moderate degree of training may be sufficient to prepare teachers to make accurate ratings of student behavior (e.g., Angkaw et al., 2006; Chafouleas et al., 2005). This implication highlights the feasibility of the DBR, as only a small amount of time may be required to prepare teachers to use the tool accurately. This, in addition to the ease with which the DBR can be used in a classroom (an estimated 10-60 seconds per student; Chafouleas, Riley-Tillman, & McDougal, 2002), is promising, as other methods of social behavior assessment (e.g., systematic direct observation, rating scales) require extensive time and training to employ reliably (Pelham, Fabiano, & Massetti, 2005).

Method

Participants included 40 general education teachers employed at a private high school in the Northeast. Video footage of a third-grade classroom setting, as well as simulated classroom behavior footage, was collected and edited into 2-minute clips. Clips were selected based on the behaviors exhibited and the visibility of the target children. The DBR consisted of a 100 mm continuous line divided into 10 equal gradients with three anchors (0%, 50%, 100%). Participants were asked to rate the percentage of time the target student exhibited disruptive behavior or academic engagement. Definitions of these behaviors and brief instructions regarding DBR procedures were included on the rating form.
The Assessment Rating Profile-Revised (ARP-R; Eckert, Hintze, & Shapiro, 1997) was administered to participants at study completion to assess teacher acceptability of the DBR as a tool for documenting student behavior. Participants were randomly assigned to one of three groups (Direct Training, Indirect Training, No Training), which differed in the level of instruction and modeling provided and in the opportunity to practice using the DBR. The No Training (NT) group received only general instructions regarding viewing the video, behavior definitions, and completing the DBR. The Indirect Training (IT) group was given an instructional session on DBRs by a proctor, including reasons for use, how to complete the rating, and examples of rating specific behaviors. The Direct Training (DT) group received the same procedures as the IT group but, in addition, had the opportunity to practice rating the specified behaviors. Following the training conditions, each group was asked to watch the same two-minute video clip of typical classroom instruction and rate the target student on the proportion of time the student exhibited disruptive behavior and academic engagement. The DBR data were then compared to expert ratings compiled from direct observational data.

For additional information, please direct all correspondence to Teresa LeBel at teresa.lebel@uconn.edu

LeBel, T. J., Kilgus, S. P., Briesch, A. M., & Chafouleas, S. M. (2008, February). The influence of training on teacher-completed direct behavior ratings. Poster presented at the National Association of School Psychologists Annual Convention, New Orleans, LA.