The Role of IR in Assessment
Lanette A. Raymond, MA
Catherine J. Wynne, MA
Office of Institutional Research & Assessment
Suffolk County Community College
http://sccaix1.sunysuffolk.edu/Web/Central/IT/InstResearch/
The Role of IR in Assessment
• Advocacy
• Technical Expertise
• Education
Methods
What is being assessed?
• Level of Assessment
• Goals and Objectives
Research Design
How strong should the evidence be? What issues/questions does the assessment address?
• Classic Experimental Designs
• Quasi-Experimental Designs
• Pre-Experimental Designs
Classic Experimental Designs
Provide evidence of causality:
• Demonstrate covariation
• Eliminate spurious relations
• Establish the time order of occurrences
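As a minimal sketch of how the covariation step might be checked once students have been randomly assigned, the Python snippet below compares posttest scores for a hypothetical treatment group and control group with an independent-samples t-test. The group sizes, score distributions, and random seed are illustrative assumptions, not data from the presentation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Random assignment is what lets the classic design rule out
# spurious causes of any observed group difference.
students = rng.permutation(60)        # 60 hypothetical student IDs
treatment_ids = students[:30]         # assigned to the new curriculum
control_ids = students[30:]           # assigned to the existing one

# Illustrative posttest scores (percent correct); real scores would
# come from the assessment instrument itself.
treatment_scores = rng.normal(78, 8, size=30)
control_scores = rng.normal(72, 8, size=30)

# Covariation: do the two group means differ beyond chance?
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```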
Pre-Experimental Designs
Provide a snapshot of student achievement:
• No control over other causal factors
• Do not allow for comparisons
• Dense in descriptive information
Quasi-Experimental Designs
Allow evaluation in real-life settings:
• Contrasted Group Design
• Planned Variation Design
• Pretest-Posttest Design
• Time-Series Design
• Control-Series Design
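Of these, the pretest-posttest design is among the most common in course-level assessment. Below is a minimal sketch of the corresponding analysis, assuming one intact class measured before and after instruction; all scores are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores for one intact class.
# Without random assignment, gains cannot be attributed to the
# course alone; hence the design is only quasi-experimental.
pre = np.array([55, 62, 48, 70, 66, 59, 73, 51, 64, 68], dtype=float)
post = np.array([63, 71, 52, 75, 72, 64, 80, 58, 69, 74], dtype=float)

# Paired comparison: did scores change, on average, from pre to post?
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.1f} points, "
      f"t = {t_stat:.2f}, p = {p_value:.4f}")
```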
Measurement
Do we make it or buy it?
• Standardized Instruments (Buy It)
• Local/Home-Grown Instruments (Make It)
Advantages of Buying a Standardized Instrument
• Standardized administration
• Modest demands on faculty time
• Higher external credibility
• Nationally normed
Disadvantages of Buying a Standardized Instrument
• May not reflect specific curriculum content
• Limited reporting of results may obscure important aspects of the assessment
• Norms open to inappropriate interpretation
Advantages of Making a Local/Home-Grown Instrument
• Tailored to the curriculum
• Data available for detailed analyses
• Amenable to a variety of formats
• Perceived as more legitimate by faculty
Disadvantages of Making a Local/Home-Grown Instrument
• Perceived as less credible outside the institution
• Results cannot be compared across institutions
• Costly to produce in terms of faculty time
• May not be well constructed
Evaluation or Development of the Measure
How do we know it will work here?
• Style
• Validity
• Reliability
The Style of the Measure
Is it consistent with the style of measurement within the course/program?
• Multiple Choice
• Essay
• Other
The Validity of the Measure
To what extent does the instrument measure what it purports to measure?
• Face Validity
• Content Validity
• Construct Validity
• Criterion-Related Validity
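Criterion-related validity is the one type here that reduces naturally to a computation: correlate instrument scores with an external criterion they should predict. A minimal sketch, assuming hypothetical local exam scores and final course grades for the same ten students (both lists are invented for illustration):

```python
from scipy import stats

# Hypothetical local exam scores and an external criterion
# (final course grades on a 0-4 scale) for the same students.
exam_scores = [72, 85, 64, 90, 78, 58, 81, 69, 94, 75]
course_gpa = [2.7, 3.5, 2.3, 3.8, 3.0, 2.0, 3.3, 2.5, 4.0, 2.9]

# A strong positive correlation suggests the instrument tracks
# the criterion it is supposed to predict.
r, p_value = stats.pearsonr(exam_scores, course_gpa)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```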
The Reliability of the Measure
Does the instrument yield consistent measurements?
• Inter-rater (scorer) reliability
• Test-retest reliability
• Split-half or alternate-forms reliability
• Inter-item reliability
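Inter-item reliability is routinely summarized with Cronbach's alpha, which can be computed directly from a students-by-items score matrix. A minimal sketch, assuming hypothetical right/wrong responses to a five-item quiz; the `cronbach_alpha` helper and the response matrix are illustrative, not part of the presentation.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Inter-item reliability: rows are students, columns are items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical right/wrong (1/0) responses: 8 students, 5 items.
responses = np.array([
    [1, 1, 1, 0, 1],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 1, 0, 1],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```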
Implementation Procedures
Just how are we going to collect the data?
• Sampling
• Authenticity
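For the sampling piece, here is a minimal sketch of drawing a simple random sample from a student roster, assuming a hypothetical roster of 250 IDs and a sample size of 40 (both invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2024)

# Hypothetical roster of 250 enrolled students, by ID.
roster = [f"S{n:04d}" for n in range(250)]

# Simple random sample drawn without replacement, so every
# student has the same chance of being selected for scoring.
sample = rng.choice(roster, size=40, replace=False)
print(sorted(sample)[:5], "...")
```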
Analyses & Interpretation
What does all this data tell us?
• Descriptives
• Correlations
• Comparisons
• Changes Over Time
• Graphs & Charts
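The first three analyses map directly onto a few lines of pandas. A minimal sketch, assuming hypothetical pretest/posttest results for two course sections; the column names and scores are invented.

```python
import pandas as pd

# Hypothetical assessment results for two course sections.
df = pd.DataFrame({
    "section": ["A", "A", "A", "B", "B", "B"],
    "pretest": [54, 61, 47, 58, 66, 50],
    "posttest": [63, 70, 55, 61, 72, 54],
})

# Descriptives: summary statistics for each measure.
print(df[["pretest", "posttest"]].describe())

# Correlations: do pretest and posttest scores move together?
print(df[["pretest", "posttest"]].corr())

# Comparisons: mean posttest score by section.
print(df.groupby("section")["posttest"].mean())
```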