Laure Haak , Discovery Logic. Chair and Discussant

Learn how to enhance program evaluation with templates, data standards, and collaboration strategies. Discover best practices for data collection and reporting. Explore questions on evaluation methods and grantee involvement.


Presentation Transcript


  1. Session 731: Progress Reporting for US Federal Grant Awards: Templates, Guidance, and Data Standards to Support Effective Program Evaluation
  Laure Haak, Discovery Logic. Chair and Discussant
  Helena Davis, NIEHS/NIH. Using the Logic Model Process to Guide Data Collection on Outcomes and Metrics
  Larry Nagahara and Nicole Moore, NCI/NIH. Evaluating Collaboration and Team Science in the National Cancer Institute's Physical Sciences-Oncology Consortium
  David Baker, CASRAI. Creating a Shared Core Set of Reporting Elements
  Sponsored by the AEA Research, Technology, and Development Evaluation TIG

  2. Panel Scope
  Effective evaluation of grant programs will require mixed methods and a variety of data sources and types.
  • Can we support iterative program evaluation?
  • Can we collect quality data efficiently?
  • How do we involve grantees?
  • Can we scale processes across centers and programs?

  3. Questions
  • Are evaluation guidelines included in the funding announcement?
  • How are data collected (e-document, web form)?
  • When are data collected (before, during, after)?
  • Are there standard definitions used across the Institute or Agency?
  • Who reports, and how are the data verified?
