A “Dose-Response” Strategy for Assessing Program Impact in Naturalistic Contexts
Megan Phillips and George Tremblay, Antioch University New England
Michael Duffin, Program Evaluation and Educational Research Associates, Inc.
Presented at the Annual Convention of the Association for Behavioral and Cognitive Therapies, November 18, 2006
RATIONALE
• With increasing accountability pressures in virtually all service sectors (American Psychological Association, 2005), we need evaluation strategies that can be utilized outside of highly controlled research contexts (cf. Strosahl et al., 1998, on the “manipulated training research method”).
• Evaluation of dose-response relationships, while not a “strong” form of causal evidence (McCabe, 2004), has nevertheless been recognized as providing some support for a causal relationship (O’Neill, 2002).
REQUIREMENTS
• Variability in exposure to the intervention
• Identification and measurement of targeted outcomes
ADVANTAGES
• Provides a relatively efficient probe for active program effects, which can warrant further, more rigorous controlled analyses.
• Uses a single measurement event while allowing for the collection of a wide range of dose values.
• Data can be readily aggregated across time or settings.
• Can detect effects that are small but statistically significant.
LIMITATIONS
• Measurement of dose may be somewhat indirect (e.g., estimates of time exposed to the intervention). Evaluators must be open to site-specific operationalization of the dose measure, which may complicate comparison across programs.
• Requires more statistical sophistication than users of the evaluation data may be accustomed to.
• Evaluators need to provide users of the data with a benchmark for interpreting the magnitude of observed effect sizes.
AN ILLUSTRATION
• The Place-based Education Evaluation Collaborative (PEEC):
  • represents several innovative educational programs that share common themes, such as:
    • Enhanced community-school connections
    • Increased understanding of and connection to local place
    • Increased civic participation
  • maintains an ongoing, cross-program, multi-method evaluation effort.
EVALUATION QUESTION
• Is variability in the dose (independent variable) of a place-based education program associated with variability in the behaviors and attitudes the program is attempting to affect, i.e., the response (dependent variable)?
Sample:
• 338 educator and 721 student surveys from 55 schools, collected over one year.
• Representative of a wide range of demographic characteristics, grade ranges, and program intensities.
METHOD: Measures
• Dose measures
  • Composite dose was calculated from survey items including:
    • extent of program implementation, measured on a scale of 0 to 4
    • total number of hours of exposure to program elements, with raw hours rescaled to a 0 to 4 metric comparable to the “program implementation” scale (see the sketch following this slide)
  • The distribution of composite dose scores across the sample covered the entire range from 0 to 4, offering suitable variability in the independent variable for dose-response calculations.
• Response measures
  • Broad conceptual categories (modules) were developed to match desired program outcomes. Each module was composed of indices designed to capture specific dimensions of the module.
  • Individual survey questions were developed for each index, using items from existing surveys where possible so that current and future results could be validly compared with previously collected data.
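As a rough illustration of the scaling step described above, the sketch below computes a composite dose score. The column names, the cap on exposure hours, and the equal weighting of the two components are our assumptions for the example, not PEEC’s actual scoring rules.

```python
# Illustrative composite dose score; column names, the hours cap, and the
# equal weighting are assumptions, not the PEEC survey's actual scoring rules.
import pandas as pd

def composite_dose(df: pd.DataFrame, max_hours: float = 100.0) -> pd.Series:
    """Combine implementation extent (already on a 0-4 scale) with exposure
    hours rescaled to the same 0-4 metric, then average the two components."""
    # Rescale raw hours onto the 0-4 metric used by the implementation item.
    hours_scaled = (df["exposure_hours"].clip(upper=max_hours) / max_hours) * 4
    # Equal-weight average of the two 0-4 components.
    return (df["implementation_extent"] + hours_scaled) / 2

# Toy survey records to show the calculation end to end.
surveys = pd.DataFrame({
    "implementation_extent": [0, 2, 4, 3],
    "exposure_hours": [0, 25, 100, 60],
})
surveys["dose"] = composite_dose(surveys)
print(surveys)
```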
METHOD: Analysis
• Dose-response analysis: multiple regression analyses were used to estimate the percentage of variance in the outcome variables (modules and indices) that could be accounted for by the predictor variable (program dose). A minimal worked example follows below.
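The sketch below shows what such an analysis looks like in practice. The data are simulated and the choice of statsmodels is ours rather than the evaluators’, so treat it as an illustration of the variance-accounted-for (R²) computation rather than a reproduction of the PEEC analysis.

```python
# Hedged sketch of a dose-response regression on simulated data; the
# "module_score" outcome is a stand-in for an actual survey index.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
dose = rng.uniform(0, 4, size=n)                        # composite dose, 0-4
module_score = 2.5 + 0.3 * dose + rng.normal(0, 1, n)   # simulated outcome

X = sm.add_constant(dose)                 # intercept plus dose as predictors
model = sm.OLS(module_score, X).fit()

# R-squared = proportion of outcome variance accounted for by program dose.
print(f"R^2 = {model.rsquared:.3f}, p = {model.pvalues[1]:.4f}")
```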
RESULTS
• Statistically significant relationships (p < .01) were found between program dose and all outcome measures except two student-level indices and one educator-level index.
• This analysis allowed for the identification of more and less active ingredients of the program (Figures 1 & 2; a sketch of this kind of comparison follows the figure captions).
Indication of an active ingredient
Figure 1. Overall educator practice was analyzed at the super-ordinate level by combining average Likert-scale responses for 12 items. The best-fit regression line shows that program dose predicts 19% of the variability in survey responses (R² = .19).
Indication of a less active ingredient
Figure 2. Student attachment to place was analyzed at the super-ordinate level by combining average Likert-scale responses for 15 student survey items. The best-fit regression line shows that program dose predicts 6% of the variability in survey responses (R² = .06).
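The comparison of “more and less active ingredients” amounts to ranking outcome indices by the share of variance that dose explains. The sketch below illustrates that logic on simulated data; the index names and effect sizes are invented for the example and are not PEEC results.

```python
# Hypothetical comparison of variance explained (R^2) across several outcome
# indices; index names and data are simulated, not PEEC survey results.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
dose = rng.uniform(0, 4, size=n)

# Simulated indices with different sensitivities to program dose.
outcomes = {
    "educator_practice":   1.0 + 0.50 * dose + rng.normal(0, 1, n),
    "attachment_to_place": 2.0 + 0.20 * dose + rng.normal(0, 1, n),
    "civic_engagement":    1.5 + 0.35 * dose + rng.normal(0, 1, n),
}

X = sm.add_constant(dose)
r_squared = {name: sm.OLS(y, X).fit().rsquared for name, y in outcomes.items()}

# Rank indices by variance explained: a larger R^2 suggests a more "active"
# program ingredient under this dose-response logic.
for name, r2 in sorted(r_squared.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:>20s}: R^2 = {r2:.2f}")
```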
DISCUSSION
• Benefits of the dose-response strategy in the PEEC evaluation context:
  • The data set can now be cumulative year to year.
  • Once an initial investment in survey instrument design and administration was made, future evaluation costs should decline.
• Limitations of the dose-response strategy in the PEEC evaluation context:
  • Relied on self-report data as opposed to more empirically verifiable observations.
  • Psychometric properties of the survey instruments have yet to be validated.
REFERENCES
American Psychological Association. (2005, August). Policy statement on evidence-based practice in psychology. Retrieved February 19, 2006, from http://www2.apa.org/practice/ebstatement.pdf
McCabe, O. L. (2004). Crossing the quality chasm in behavioral health care: The role of evidence-based practice. Professional Psychology: Research and Practice, 35, 571-579.
O’Neill, R. T. (2002, June). A perspective on exposure-response relationships. Paper presented at the annual meeting of the American Association of Pharmaceutical Scientists, Arlington, VA. Retrieved October 25, 2006, from http://www.fda.gov/cder/offices/biostatistics/oneill_364/oneill_364.ppt
Strosahl, K. D., Hayes, S. C., Bergan, J., & Romano, P. (1998). Assessing the field effectiveness of acceptance and commitment therapy: An example of the manipulated training research model. Behavior Therapy, 29, 35-64.
• The data presented here were collected as part of an evaluation conducted by Program Evaluation and Educational Research Associates, Inc., under the supervision of Michael Duffin. The project was undertaken with the support of the Place-Based Education Evaluation Collaborative (PEEC). For more information about PEEC, go to: http://www.PEECworks.org/
• An electronic version of this poster can be downloaded from: http://www.peecworks.org/PEEC/PEEC_Reports/S0112C7E1-0112C8A6