
Presentation Transcript


  1. Testing a Strategic Evaluation Framework for Incrementally Building Evaluation Capacity in a Federal R&D Program
  27th Annual Conference of the American Evaluation Association, Washington, DC, October 17, 2013
  JOHN TUNNA, Director, Office of Research and Development, Office of Railroad Policy and Development, Federal Railroad Administration

  2. Federal Railroad Administration (FRA) Evaluation Implementation Plan
  • Introduction
  • R&D Evaluation Mandate
  • R&D Evaluation Goals
  • R&D Evaluation Standards
  • Uses of Evaluation
    • Formative
    • Summative
  • Types of Evaluation (CIPP Evaluation Model)
    • Context
    • Input
    • Implementation
    • Impact
  • Evaluation Framework & Key Evaluation Questions
  • Start-up Pilot Evaluations
  • Institutionalizing and Mainstreaming Evaluation
  • Metaevaluation
  • The Evaluation Manual
    • Evaluation templates
    • Attestation of standards

  3. R&D Evaluation Mandate
  • Congressional Mandates
    • Government Performance and Results Act (GPRA, 1993)
    • Program Assessment Rating Tool (PART, 2002)
    • GPRA Modernization Act of 2010
  • OMB Memos
    • M-13-17, July 26, 2013: Next Steps in the Evidence and Innovation Agenda
    • M-13-16, July 26, 2013: Science and Technology Priorities for the FY 2015 Budget
    • M-10-32, July 29, 2010: Evaluating Programs for Efficacy and Cost-Efficiency
    • M-10-01, October 7, 2009: Increased Emphasis on Program Evaluations
    • M-09-27, August 8, 2009: Science and Technology Priorities for the FY 2011 Budget
  • Federal Evaluation Working Group
    • Reconvened in 2012 to help build evaluation capacity across the federal government
    • “[We] need to use evidence and rigorous evaluation in budget, management, and policy decisions to make government work effectively.”
  • GAO Reports
    • Program Evaluation: Strategies to Facilitate Agencies’ Use of Evaluation in Program Management and Policy Making (June 2013)
    • Program Evaluation: A Variety of Rigorous Methods Can Help Identify Effective Interventions (GAO-10-30, November 2009)
    • Program Evaluation: Experienced Agencies Follow a Similar Model for Prioritizing Research (GAO-11-176, January 2011)

  4. R&D Evaluation Mandate
  OMB Memo M-13-16 (July 26, 2013), Subject: Science and Technology Priorities for the FY 2015 Budget
  “Agencies . . . should give priority to R&D that strengthens the scientific basis for decision-making in their mission areas, including but not limited to health, safety, and environmental impacts. This includes efforts to enhance the accessibility and usefulness of data and tools for decision support, as well as research in the social and behavioral sciences to support evidence-based policy and effective policy implementation.”
  “Agencies should work with their OMB contacts to agree on a format within their 2015 Budget submissions to: (1) explain agency progress in using evidence and (2) present their plans to build new knowledge of what works and is cost-effective.”

  5. R&D Evaluation Goals
  • Meet R&D accountability requirements
  • Guide and strengthen Division R&D program effectiveness and impact
  • Facilitate knowledge diffusion and technology transfer
  • Build R&D evaluation capacity
  • Improve railroad safety

  6. Why Evaluation in R&D? Assessing the logic of R&D programs
  [Logic model diagram: Activities (funded activity “family”: scientific research, technology development) → Outputs (deliverables/products: technical reports, forecasting models) → Outcomes (application of research: data use; adoption of guidelines, standards, or regulations; changing practices) → Impacts (reduced accidents and injuries; knowledge gains; emergent outcomes, positive or negative, e.g., environmental effects)]
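  The logic-model chain above lends itself to a simple record structure for tracing a funded activity through to its impacts. The sketch below is illustrative only; the class name, fields, and example entries are assumptions for demonstration, not part of the FRA framework.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch: one logic-model record tracing a funded R&D
    # activity through outputs, outcomes, and impacts. Example values are
    # hypothetical, not FRA data.
    @dataclass
    class LogicModelEntry:
        activity: str                                       # funded activity "family"
        outputs: List[str] = field(default_factory=list)    # deliverables/products
        outcomes: List[str] = field(default_factory=list)   # application of research
        impacts: List[str] = field(default_factory=list)    # e.g., reduced accidents/injuries

    entry = LogicModelEntry(
        activity="Fatigue-risk forecasting research",
        outputs=["Technical report", "Forecasting model"],
        outcomes=["Adoption of fatigue-management guidelines"],
        impacts=["Reduced accidents and injuries"],
    )
    print(entry)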

  7. The Research-Evaluation Paradigm

  8. Program Evaluation Standards: Guiding Principles for Conducting Evaluations
  • Utility (useful): to ensure evaluations serve the information needs of the intended users.
  • Feasibility (practical): to ensure evaluations are realistic, prudent, diplomatic, and frugal.
  • Propriety (ethical): to ensure evaluations will be conducted legally, ethically, and with due regard for the welfare of those involved in the evaluation, as well as those affected by its results.
  • Accuracy (valid): to ensure that an evaluation will reveal and convey valid and reliable information about all important features of the subject program.
  • Accountability (professional): to ensure that those responsible for conducting the evaluation document and make available for inspection all aspects of the evaluation that are needed for independent assessments of its utility, feasibility, propriety, accuracy, and accountability.
  Note: The Program Evaluation Standards were developed by the Joint Committee on Standards for Educational Evaluation and have been accredited by the American National Standards Institute (ANSI).

  9. CIPP Evaluation Model (Context, Input, Process, Product)
  Types of Evaluation:
  • Context
  • Input
  • Implementation
  • Impact
  Stakeholder engagement is key.
  Daniel L. Stufflebeam’s adaptation of his CIPP Evaluation Model framework for use in guiding program evaluations of the Federal Railroad Administration’s Office of Research and Development. For additional information, see Stufflebeam, D. L. (2000). The CIPP model for evaluation. In D. L. Stufflebeam, G. F. Madaus, & T. Kellaghan (Eds.), Evaluation models (2nd ed., Chapter 16). Boston: Kluwer Academic Publishers.
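  As a rough illustration of how these four evaluation types can frame a review, the sketch below pairs each type with an example guiding question. The questions are assumptions for demonstration only; they are not the framework's key evaluation questions (those appear on later slides).

    # Illustrative pairing of the four evaluation types with example guiding
    # questions. The questions are hypothetical, not FRA's official questions.
    evaluation_types = {
        "Context": "What needs, problems, and opportunities should the R&D program address?",
        "Input": "Are the program design, budget, and partnerships adequate to meet those needs?",
        "Implementation": "Is the program being carried out as planned, and what is being learned?",
        "Impact": "To what extent did the program produce its intended outcomes and impacts?",
    }

    for evaluation_type, question in evaluation_types.items():
        print(f"{evaluation_type}: {question}")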

  10. Evaluation Framework: Roles and Types of Evaluation

  11. Evaluation Framework: Key Evaluation Questions – Safety Culture

  12. Evaluation as a Key Strategy Tool
  • Ask questions that matter.
    • About processes, products, programs, policies, and impacts
    • Then develop appropriate and rigorous methods to answer them.
  • Measure the extent to which, and the ways in which, program goals are being met.
    • What’s working, and why, or why not?
  • Use findings to refine program strategy, design, and implementation.
  • Inform others about lessons learned, progress, and program impacts.
  • Improve the likelihood of success with:
    • Intended users
    • Intended uses
    • Outcomes and impacts
    • Unanticipated (positive) outcomes
  • Use evaluation to develop appropriate and useful performance measures for reporting R&D outcomes, and monitor those outcomes for continuous improvement (see the illustrative sketch after this list).
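  A minimal sketch of the last point, monitoring a simple performance measure over time. The measure (accidents per million train-miles), the yearly figures, and the baseline comparison are hypothetical examples, not FRA data or official metrics.

    # Illustrative sketch: computing and monitoring a hypothetical safety
    # performance measure (accidents per million train-miles). All figures
    # are made up for demonstration.
    yearly = {
        2010: {"accidents": 120, "million_train_miles": 780},
        2011: {"accidents": 112, "million_train_miles": 795},
        2012: {"accidents": 105, "million_train_miles": 810},
    }

    rates = {year: d["accidents"] / d["million_train_miles"] for year, d in yearly.items()}

    baseline = rates[min(rates)]   # earliest year as baseline
    latest = rates[max(rates)]     # most recent year
    change = (latest - baseline) / baseline * 100

    print(f"Baseline rate: {baseline:.3f}  Latest rate: {latest:.3f}")
    print(f"Change since baseline: {change:+.1f}%")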

  13. Michael Coplen, Senior Evaluator, Office of Research & Development, Federal Railroad Administration, 202-493-6346, Michael.Coplen@dot.gov

  14. QUESTIONS?

  15. Supplemental Information

  16. Evaluation Framework: Illustrative Questions – Fatigue Website

  17. Input Evaluation: Program Design and Partnership Commitment to Change • Clear Signal for Action (CSA) Theory of Change (Management & Labor)

  18. Implementation Evaluation
  [Slide diagram: Peer-to-Peer Feedback, Continuous Improvement (CI), and Safety Leadership Development (SLD) linked to Safety Outcomes]

  19. Impact Evaluation: Expected changes and possible metrics (Union Pacific example)
