
Project VIABLE: The Impact of Scaling and Effect Size on the Decisions Made with Graphed Intervention Data

T. Chris Riley-Tillman*, Sandra M. Chafouleas**, Theodore Christ***, Teresa J. LeBel**, Amy Ivey*, Amy M. Briesch**
* East Carolina University, ** University of Connecticut, *** University of Minnesota

PROJECT VIABLE

This study represents one of the investigations initiated under Project VIABLE. Through Project VIABLE, empirical attention is being directed toward the development and evaluation of formative measures of social behavior involving a direct behavior rating (DBR). The goal of Project VIABLE is to examine the DBR through three phases of investigation: 1) foundations of measurement, 2) decision making and validity, and 3) feasibility.

INTRODUCTION

Given an increasing emphasis on data-based decision making regarding school-based interventions, it is important to understand how educational professionals make decisions regarding intervention outcome data. Thus, the current Project VIABLE study was designed to examine two questions.

1) Does the scale (6, 8, 10, or 100 points) used to present data influence the decisions that school psychologists make about interventions? Given that social behavior data are often collected using some form of rating scale (anywhere from 1-4 to 0-100), the manner in which such data are presented could affect the decision-making process. In addition, direct behavior ratings made on a continuous line (0-100 scale) can be coded more feasibly using a 0-10 scale; the impact of such coding on intervention decisions is examined in this study.

2) What decisions do school psychologists make about intervention effectiveness when presented with "typical" intervention data of varying effect size (Cohen's d)? In single-subject research, there is uncertainty about how to interpret what constitutes a "small," "medium," or "large" intervention effect. Thus, this question gathered information about what school-based practitioners perceive as an "effective intervention" based on effect size.

Over two hundred NASP members judged "how effective" interventions were based on outcome data presented in a simple AB fashion. The intervention data were varied by effect size and scale (as discussed below) with the goal of gathering information as to what an educational professional considers a "large" intervention effect and what decisions they would make based on such outcome data.
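The effect sizes referenced throughout are Cohen's d values: the standardized mean difference between the intervention (B) phase and the baseline (A) phase. As a concrete reference point, here is a minimal Python sketch of that computation on a toy AB series; the on-task values are hypothetical numbers for illustration only, not stimuli from the study.

```python
# Minimal sketch: Cohen's d for a simple AB (baseline/intervention) series.
# The data points below are hypothetical illustrations, not study stimuli.
import statistics

def cohens_d(baseline, intervention):
    """Mean difference between phases divided by the pooled standard deviation."""
    n1, n2 = len(baseline), len(intervention)
    m1, m2 = statistics.mean(baseline), statistics.mean(intervention)
    v1, v2 = statistics.variance(baseline), statistics.variance(intervention)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m2 - m1) / pooled_sd

baseline = [42, 38, 45, 40, 44]        # phase A: percent on-task
intervention = [58, 62, 55, 60, 63]    # phase B: percent on-task
print(f"Cohen's d = {cohens_d(baseline, intervention):.2f}")
```

By a standardized-difference metric of this kind, the study's graphs were constructed to sit at d = 0, 1, 2, and 3.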
MATERIALS & METHODS

One thousand NASP members were mailed a packet that asked them to provide basic demographic information and then to judge the effectiveness of four interventions. Each participant was asked the following questions:

1) Is the intervention more effective in producing on-task behavior than the typical teaching environment (baseline)?
2) How effective do you think the intervention was at increasing the student's appropriate behavior?
3) After considering these data, which of the following decisions would you make about this intervention?
4) How much confidence do you have in your decision about this intervention?

Each participant was presented with one intervention at each level of effect size (0, 1, 2, and 3) as well as each scaling option (6, 8, 10, and 100). The presentation was counterbalanced to control for order effects. Sample intervention data were produced through a simulation process, and the final AB graphs were selected through expert validation procedures; one plausible form of such a simulation is sketched below.
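The poster does not detail its simulation procedure, so the following is only a plausible sketch under stated assumptions: baseline points drawn from a normal distribution, an intervention phase shifted upward by d standard deviations, and the continuous 0-100 values then degraded onto a k-point scale, mirroring the scaling manipulation. All parameter values (phase length, mean, SD) are illustrative.

```python
# A plausible (not the study's actual) way to simulate AB data at a target
# effect size d and then degrade it onto a k-point scale.
import random

def simulate_ab(d, n_points=10, mean=40.0, sd=8.0, seed=None):
    """Baseline and intervention phases separated by d standard deviations."""
    rng = random.Random(seed)
    baseline = [rng.gauss(mean, sd) for _ in range(n_points)]
    intervention = [rng.gauss(mean + d * sd, sd) for _ in range(n_points)]
    return baseline, intervention

def degrade(values, k):
    """Collapse continuous 0-100 values onto a 1..k point scale."""
    return [min(k, max(1, round(v / 100 * k))) for v in values]

base, interv = simulate_ab(d=2, seed=1)
for k in (6, 8, 10):
    print(f"{k}-point scale:", degrade(base, k), degrade(interv, k))
```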
RESULTS

The scaling question was addressed by examining whether participants' response patterns at each effect size level differed across the four versions of the scale. The scale used to present the data did not produce statistically significant differences (chi-square, after correction; a sketch of such a test follows the list below) for any of the four questions, at any of the effect size levels. The decisions made by participants across effect size are summarized below, organized by question.

• Does the scale (6, 8, 10, or 100) used to present data influence the decisions that school psychologists make about interventions?
  • In this study, the degradation of continuous data into a 6-, 8-, or 10-point scale did not impact decisions made about the intervention.
• Is the intervention more effective in producing on-task behavior than the typical teaching environment (baseline)?
  • Participants perceived that the threshold for an "effective" intervention lies between an ES of 0 (no) and an ES of 1 (yes).
  • The percentage of respondents who could not determine whether the intervention was effective drops between an ES of 1 and an ES of 2.
• How effective do you think the intervention was at increasing the student's appropriate behavior?
  • 70% of participants rated an intervention with an ES of 0 as "not effective"; conversely, this means nearly 30% considered an ES of 0 to be at least "somewhat effective."
  • When rating an intervention with an ES of 1, over 80% rated it as "somewhat effective" or "effective."
  • When rating interventions with an ES of 2, respondents most often endorsed "effective" (39%) or "highly effective" (36%), yet 21% endorsed only "somewhat effective."
  • When rating interventions with an ES of 3, respondents typically endorsed "highly effective." However, over 10% rated such an intervention only "somewhat effective," and another 30% rated it "effective." In the end, even with an ES of 3, over 40% did not endorse the maximum level of effectiveness.
• After considering these data, which of the following decisions would you make about this intervention?
  • Participants did not want to discontinue the intervention, regardless of its actual effectiveness.
  • Generally, a "continue as is" decision was not made until an ES of 3 was presented.
• How much confidence do you have in your decision about this intervention?
  • The majority of respondents reported "some" or "a good deal" of confidence.
  • However, a "no confidence" response was reported most often for the non-effective intervention (i.e., ES = 0).
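For the scaling analysis above, the poster reports only "chi-square, post correction." The sketch below shows the general form such a test could take: a contingency table of scale version by response category, with a Bonferroni-style adjustment for the multiple tests across questions and effect size levels. The specific correction used is not stated on the poster, so that choice is an assumption, and the counts are placeholders rather than the study's data.

```python
# Sketch of a scale-version x response-category chi-square test.
# Placeholder counts for illustration only; not the study's data.
from scipy.stats import chi2_contingency

observed = [
    [12, 30, 10, 3],   # 6-point scale version
    [11, 28, 12, 4],   # 8-point scale version
    [13, 27, 11, 4],   # 10-point scale version
    [10, 31, 10, 4],   # 100-point scale version
]  # columns: not / somewhat / effective / highly effective

chi2, p, dof, _ = chi2_contingency(observed)
n_tests = 4 * 4  # assumed: 4 questions x 4 effect size levels
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}, "
      f"Bonferroni alpha={0.05 / n_tests:.4f}")
```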

SUMMARY and CONCLUSIONS

In summary, the results of this study suggest that the scale of presentation does not have a significant effect on the decisions made by practicing school psychologists across a range of effect sizes (varied by level). This finding is important because it is thought to be more feasible to code intervention data from a direct behavior rating on a 0-10 scale than on a 0-100 scale.

In regard to practitioners' decisions about intervention data varied by effect size, there seems to be some general consensus. The majority of respondents judged an ES of 0 "not effective," an ES of 1 "somewhat effective" or "effective," an ES of 2 "effective" or "highly effective," and an ES of 3 "highly effective" or "effective."

There were some additional interesting findings. For example, in contrast to what would be expected, high effect sizes (2 and 3) were routinely judged "somewhat effective" or "effective" rather than "highly effective." This suggests that practitioners in the field expect extremely large gains before judging an intervention at the highest level of effectiveness. In addition, it was concerning to find that a non-effective intervention, while typically judged correctly (70% of respondents said it was "not effective"), was in almost 30% of cases considered at least "somewhat effective."

It is critical to note that in this study ES was varied only by level, and only a social behavior intervention was presented. Regardless, this use of intervention outcome data, and what is considered an effective intervention, needs to be further explored as the field moves forward with implementing a response-to-intervention model.

CONTACTS

For additional information, please direct all correspondence to Chris Riley-Tillman at rileytillmant@ecu.edu.

Riley-Tillman, T. C., Chafouleas, S. M., Christ, T. J., LeBel, T. J., Ivey, A., & Briesch, A. M. (2007, March). Project VIABLE: The impact of scaling and effect size on the decisions made with graphed intervention data. Poster presented at the National Association of School Psychologists Annual Convention, New York, NY.