AGEP Evaluation Capacity Meeting 2008
Yolanda George, Deputy Director, Education & Human Resources Programs
Objectives
• Identifying methods and questions for evaluation studies related to STEM graduate student progression to the PhD and the professoriate, including admissions/selection, retention/attrition, PhD completion, and postdoctoral experiences, as well as the collection of quantitative and qualitative data.
• Identifying methods and questions for Alliance evaluations, particularly in terms of progression to the PhD and the professoriate. What can AGEPs learn from cross-institutional studies?
As you listen to presentations…
• What research informed the design of the study?
• What type of data was collected? What was the rationale for collecting this data?
• What methods were used? What was the rationale for selecting those methods?
• How were comparison groups constructed? What are the reporting limitations with regard to the construction of the comparison groups?
Another Objective for this AGEP Meeting
Developing and writing impact statements or highlights (nuggets) that include data for use in:
• The Findings section of AGEP NSF annual reports
• AGEP Supplemental Report Questions
• NSF Highlights
• Brochures and websites
The poster should include quantitative and qualitative data that provide evidence of:
• Graduate student changes, for selected STEM fields or all STEM fields
• Infrastructure changes, which can include changes in institutional or departmental policies or practices
• Alliance impact, which can include changes in institutional or departmental policies or practices related to graduate school affairs, postdoctoral arrangements, or faculty hiring
Stories and pictures are welcome, but the major emphasis must be on quantitative and, as appropriate, qualitative data. Program descriptions should be kept to a minimum and put in the context of the data behind decisions to keep or eliminate strategies. The poster can focus on what works and what doesn't, as long as the emphasis is on the data showing whether different strategies worked.
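As an illustration only (not part of the original slides), the sketch below shows one way cohort-level completion trends might be tabulated from student-level records; the column names and figures are hypothetical assumptions, not an AGEP data standard.

```python
# Hypothetical sketch: PhD completion rates by entering cohort, the kind of
# quantitative trend evidence a poster could chart. All data are made up.
import pandas as pd

students = pd.DataFrame({
    "cohort_year":   [2003, 2003, 2003, 2004, 2004, 2004, 2005, 2005],
    "urm":           [True, False, True, True, False, False, True, False],
    "completed_phd": [True, True, False, True, False, True, False, True],
})

# Completion rate by cohort shows whether the numbers are changing over time.
by_cohort = students.groupby("cohort_year")["completed_phd"].mean()

# Breaking the same rates out by group supports claims about who is affected.
by_group = students.groupby(["cohort_year", "urm"])["completed_phd"].mean()

print(by_cohort)
print(by_group)
```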
Impact Evaluations and Statements
An impact evaluation measures the program's effects and the extent to which its goals were attained. Although evaluation designs may produce useful information about a program's effectiveness, some may produce more useful information than others.
• For example, designs that track effects over extended time periods (time series designs) are generally superior to those that simply compare periods before and after the intervention (pre-post designs);
• Comparison group designs are superior to those that lack any basis for comparison; and
• Designs that use true control groups (experimental designs) have the greatest potential for producing authoritative results.
http://www.ojp.usdoj.gov/BJA/evaluation/guide/documents/impact_eval_gangs.htm
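The difference a comparison group makes can be seen in a small worked sketch (an addition for illustration, with entirely hypothetical numbers): a pre-post design attributes the whole before/after change to the program, while a comparison-group (difference-in-differences) design subtracts the change that would likely have occurred anyway.

```python
# Hypothetical numbers only: PhD completion rates before and after an
# intervention, for participating ("treated") and similar non-participating
# ("comparison") departments.
treated_pre, treated_post = 0.40, 0.55
comparison_pre, comparison_post = 0.42, 0.50

# Pre-post design: credits the program with the entire change.
pre_post_estimate = treated_post - treated_pre

# Comparison-group (difference-in-differences) design: subtracts the change
# seen in comparable departments that did not get the intervention.
did_estimate = (treated_post - treated_pre) - (comparison_post - comparison_pre)

print(f"Pre-post estimate:         {pre_post_estimate:.2f}")  # ~0.15
print(f"Difference-in-differences: {did_estimate:.2f}")       # ~0.07
```

The gap between the two estimates illustrates the attribution concern raised later in this deck: without a basis for comparison, a program may claim changes it did not cause.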
ACC report: http://www.ed.gov/about/inits/ed/competitiveness/acc-mathscience/report.pdf
Multiple Evidence Collection for AGEP
• AAAS & Campbell Kibler, Inc. (collecting trend data): Are the numbers changing?
• Coming soon: Carlos Rodriquez, impact evaluation with comparisons
• Portfolio assessment (announced by Bernice Anderson yesterday)
Alliance Evaluation
• Alliance-level evaluation: annual reports that include highlights with comparisons, if appropriate
You might want to re-evaluate your Alliance design in light of the need to show attribution. Read the ACC report.
Highlights (Nuggets)
• Send highlights to l-allen@nsf.gov
• Include highlights in your annual reports to NSF
• AGEP Supplemental Annual Report Questions
• Send highlights to your communication office
• Include highlights on your websites, brochures, and posters
Evaluation Capacity Toolkit for AGEPs
• What do you want in the toolkit?
• Do you have sample evaluations that you want to include in the evaluation toolkit?