Testing a Strategic Evaluation Framework for Incrementally Building Evaluation Capacity in a Federal R&D Program
27th Annual Conference of the American Evaluation Association
Washington, DC
October 17, 2013

JOHN TUNNA
Director, Office of Research and Development
Office of Railroad Policy and Development
Federal Railroad Administration
Federal Railroad Administration (FRA) Evaluation Implementation Plan
• Introduction
• R&D Evaluation Mandate
• R&D Evaluation Goals
• R&D Evaluation Standards
• Uses of Evaluation
  • Formative
  • Summative
• Types of Evaluation (CIPP Evaluation Model)
  • Context
  • Input
  • Implementation
  • Impact
• Evaluation Framework & Key Evaluation Questions
• Start-up Pilot Evaluations
• Institutionalizing and Mainstreaming Evaluation
• Metaevaluation
• The Evaluation Manual
  • Evaluation templates
  • Attestation of standards
R&D Evaluation Mandate
• Congressional Mandates
  • Government Performance and Results Act (GPRA, 1993)
  • Program Assessment Rating Tool (PART, 2002)
  • GPRA Modernization Act of 2010
• OMB Memos
  • M-13-17, July 26, 2013: Next Steps in the Evidence and Innovation Agenda
  • M-13-16, July 26, 2013: Science and Technology Priorities for the FY 2015 Budget
  • M-10-32, July 29, 2010: Evaluating Programs for Efficacy and Cost-Efficiency
  • M-10-01, October 7, 2009: Increased Emphasis on Program Evaluations
  • M-09-27, August 8, 2009: Science and Technology Priorities for the FY 2011 Budget
• Federal Evaluation Working Group
  • Reconvened in 2012 to help build evaluation capacity across the federal government
  • "[We] need to use evidence and rigorous evaluation in budget, management, and policy decisions to make government work effectively."
• GAO Reports
  • Program Evaluation: Strategies to Facilitate Agencies' Use of Evaluation in Program Management and Policy Making (June 2013)
  • Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions (GAO-10-30, November 2009)
  • Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research (GAO-11-176, January 2011)
R&D Evaluation Mandate
OMB Memo M-13-16 (July 26, 2013)
Subject: Science and Technology Priorities for the FY 2015 Budget
"Agencies ... should give priority to R&D that strengthens the scientific basis for decision-making in their mission areas, including but not limited to health, safety, and environmental impacts. This includes efforts to enhance the accessibility and usefulness of data and tools for decision support, as well as research in the social and behavioral sciences to support evidence-based policy and effective policy implementation."
"Agencies should work with their OMB contacts to agree on a format within their 2015 Budget submissions to: (1) explain agency progress in using evidence and (2) present their plans to build new knowledge of what works and is cost-effective."
R&D Evaluation Goals
• Meet R&D accountability requirements
• Guide and strengthen Division R&D program effectiveness and impact
• Facilitate knowledge diffusion and technology transfer
• Build R&D evaluation capacity
• Improve railroad safety
Why Evaluation in R&D? Assessing the Logic of R&D Programs
(Logic model: Activities → Outputs → Outcomes → Impacts)
• Activities: Funded activity "family" – scientific research, technology development
• Outputs: Deliverables/products – technical report(s), forecasting model(s)
• Outcomes: Application of research – data use; adoption of guidelines, standards, or regulations; changing practices
• Impacts: Reduced accidents and injuries; emergent outcomes (positive, e.g., knowledge gains; negative, e.g., environmental effects)
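The logic-model chain above can also be held in a simple data structure so that evaluation evidence can be tracked against each stage. The sketch below is illustrative only, assuming a hypothetical `LogicModel` class and example entries drawn from the slide; it is not part of the FRA framework or manual.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModel:
    """Minimal representation of an R&D program logic model
    (Activities -> Outputs -> Outcomes -> Impacts). Hypothetical sketch."""
    activities: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)
    impacts: List[str] = field(default_factory=list)

# Illustrative entries taken from the slide above; this is an example, not FRA data.
fra_rd = LogicModel(
    activities=["Scientific research", "Technology development"],
    outputs=["Technical report(s)", "Forecasting model(s)"],
    outcomes=["Data use",
              "Adoption of guidelines, standards, or regulations",
              "Changing practices"],
    impacts=["Reduced accidents and injuries",
             "Emergent outcomes (positive or negative)"],
)

# Print each stage of the chain in order.
for stage in ("activities", "outputs", "outcomes", "impacts"):
    print(stage.upper(), "->", "; ".join(getattr(fra_rd, stage)))
```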
Program Evaluation Standards: Guiding Principles for Conducting Evaluations
• Utility (useful): to ensure evaluations serve the information needs of the intended users.
• Feasibility (practical): to ensure evaluations are realistic, prudent, diplomatic, and frugal.
• Propriety (ethical): to ensure evaluations are conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.
• Accuracy (valid): to ensure that an evaluation reveals and conveys valid and reliable information about all important features of the subject program.
• Accountability (professional): to ensure that those responsible for conducting the evaluation document and make available for inspection all aspects of the evaluation needed for independent assessments of its utility, feasibility, propriety, accuracy, and accountability.
Note: The Program Evaluation Standards were developed by the Joint Committee on Standards for Educational Evaluation and have been accredited by the American National Standards Institute (ANSI).
CIPP Evaluation Model (Context, Input, Process, Product)
Types of Evaluation:
• Context
• Input
• Implementation
• Impact
Stakeholder engagement is key.
This is Daniel L. Stufflebeam's adaptation of his CIPP Evaluation Model framework for use in guiding program evaluations of the Federal Railroad Administration's Office of Research and Development. For additional information, see Stufflebeam, D. L. (2000). The CIPP model for evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models (2nd ed., Chapter 16). Boston: Kluwer Academic Publishers.
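One way to operationalize the adapted CIPP framework is to pair each evaluation type with its key evaluation questions. The sketch below is a minimal, assumed structure; the questions shown are hypothetical placeholders, not the FRA framework's actual questions.

```python
from typing import Dict, List

# Hypothetical mapping of evaluation types (adapted CIPP model) to
# illustrative key evaluation questions -- placeholders, not FRA content.
evaluation_questions: Dict[str, List[str]] = {
    "Context": ["What safety needs and priorities should the R&D program address?"],
    "Input": ["Is the program design, budget, and partnership plan sound?"],
    "Implementation": ["Is the program being carried out as planned, and why or why not?"],
    "Impact": ["What outcomes, intended or emergent, can be attributed to the program?"],
}

for eval_type, questions in evaluation_questions.items():
    print(f"{eval_type} evaluation:")
    for q in questions:
        print(f"  - {q}")
```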
Evaluation Framework: Roles and Types of Evaluation
Evaluation Framework: Key Evaluation Questions – Safety Culture
Evaluation as a Key Strategy Tool
• Ask questions that matter.
  • About processes, products, programs, policies, and impacts
  • Then develop appropriate and rigorous methods to answer them.
• Measure the extent to which, and the ways in which, program goals are being met.
  • What's working, and why, or why not?
  • Use findings to refine program strategy, design, and implementation.
  • Inform others about lessons learned, progress, and program impacts.
• Improve the likelihood of success with:
  • Intended users
  • Intended uses
  • Outcomes and impacts
  • Unanticipated (positive) outcomes
• Use evaluation to develop appropriate and useful performance measures for reporting R&D outcomes, and monitor those outcomes for continuous improvement (a simple illustration follows below).
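As a minimal sketch of what such a performance measure could look like, the example below computes percent change in an outcome (e.g., accident counts) against a baseline period. The function name and the numbers are invented for illustration; they are not FRA measures or data.

```python
def percent_change(baseline: float, current: float) -> float:
    """Return the percent change from baseline to current (negative = reduction)."""
    if baseline == 0:
        raise ValueError("Baseline must be non-zero to compute a percent change.")
    return (current - baseline) / baseline * 100.0

# Illustrative numbers only -- not actual FRA data.
baseline_accidents = 120
current_accidents = 102

change = percent_change(baseline_accidents, current_accidents)
print(f"Accidents changed by {change:+.1f}% relative to baseline")  # -15.0%
```

Tracking a handful of such measures over time is one way to connect reported R&D outcomes to the continuous-improvement loop described in the bullet above.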
Michael Coplen
Senior Evaluator
Office of Research & Development
Federal Railroad Administration
202-493-6346
Michael.Coplen@dot.gov
QUESTIONS?
Evaluation Framework: Illustrative Questions – Fatigue Website
Input Evaluation: Program Design and Partnership Commitment to Change
• Clear Signal for Action (CSA) Theory of Change (Management & Labor)
Implementation Evaluation
• Peer-to-Peer Feedback
• Safety Outcomes
• Continuous Improvement (CI)
• Safety Leadership Development (SLD)
Impact Evaluation: Expected changes and possible metrics (Union Pacific example)