Issues in Selecting Assessments for Measuring Outcomes for Young Children
Dale Walker & Kristie Pretti-Frontczak
ECO Center and Kent State University
Presentation at the OSEP Early Childhood Conference, Washington, DC, December 2005
Why Assessment?
• To gather information about skills and capabilities in order to make decisions about practice
• To determine eligibility for services
• To determine whether a child is benefiting from services or whether changes need to be made
• To measure development over time
• To document outcomes
Purpose of Assessments: It's All About the Question(s) You Want to Answer
• Screening: Is there a suspected delay? Does the child need further assessment?
• Eligibility determination: Is the child eligible for specialized services?
• Program planning: What content should be taught? How should content be taught?
• Progress monitoring: Are children making desired progress?
• Program evaluation/accountability: Is the program achieving its intended outcomes and/or meeting required outcomes?
Assessment Options
• Norm-referenced
• Criterion-referenced
• Curriculum-based
• Direct observation
• Progress monitoring
• Parent or professional report
…or any combination of these assessments.
Norm-Referenced Pros/Cons
Pros:
• Provide information on a child's development in relation to other children
• Already used for eligibility determination in many states
• Support diagnosis of developmental delay
• Standardized administration procedures
Cons:
• Do not inform intervention
• Information is removed from the context of the child's routines
• Usually not developed or validated with children with disabilities
• Do not meet many recommended practice standards
• May be difficult to administer or require specialized training
Norm-Referenced Assessment Table
• The table reviews 18 norm-referenced assessments
• Information provided for each assessment includes:
  • Publisher information
  • Areas of development assessed
  • Test norms provided
  • Scores produced
  • Age range covered
http://fpsrv.dl.kent.edu/ecis/Web/Research/OSEP/NRT.pdf
Criterion-Referenced Pros/Cons
Pros:
• Measure a child's performance of specific objectives
• Direct link between assessment and intervention
• Provide information on children's strengths and emerging skills
• Help teams plan for and meet individual children's needs
• Meet recommended assessment practice standards
• Measure intra-child progress
• May be used to measure program effectiveness
Cons:
• Require agreement on criteria and standards
• Criteria must be clear and appropriate
• Usually do not show performance compared to other children
• Do not have standard administration procedures
• May not move the child toward important goals
• Scores may not reflect increasing proficiency toward outcomes
Curriculum-Based Pros/Cons
Pros:
• Provide a link between assessment and curriculum
• Expectations are based upon the curriculum and instruction
• Can be used to plan intervention
• Measure the child's current status on the curriculum
• Can be used to evaluate program effects
• Often team based
• Meet DEC and NAEYC recommended standards
• Represent a picture of the child's performance
Cons:
• May not have established reliability and validity
• May not have procedures for comparing a child to a normal distribution
• Generally linked to a specific curriculum
• Often composed of milestones that may not be in order of importance
Curriculum-Based Assessment Rating Rubric
• Evaluates the quality of CBAs for use with young children
• Composed of 17 quality elements
• Used to guide teams in selecting appropriate CBAs
http://fpsrv.dl.kent.edu/ecis/Web/Research/OSEP/CBArubric.pdf
Progress Monitoring Pros/Cons
Pros:
• Used to monitor ongoing progress toward important outcomes over time
• Allow comparison to children of similar ages over time
• Repeatable measures for monitoring progress
• Standardized administration
• Standards for technical adequacy
• Efficient to administer
• May also be used as a screening tool
Cons:
• Indicators of progress may be viewed as not being comprehensive
• Not used for eligibility determination
• May not provide specific skills to teach, but rather indicators of important skills
Parent & Professional Report Pros/Cons
Pros:
• High social validity
• Provides diverse perspectives
• Important for informing intervention, the program, and the IFSP/IEP
• Parents and professionals know the child and the environments in which the child interacts
Cons:
• Collaboration requires time and effort to establish
• May not be reliable across time
• Does not permit comparison across children
• May include personal bias
Using Multiple Sources of Data or a Single Source to Measure Outcomes?
• Pros and cons of each approach
• Recommended practices
• Need to summarize the information generated
• Ways data can be used beyond reporting OSEP outcomes
Using Data Beyond OSEP Reporting
Good assessment data can be used to…
• Reveal patterns regarding children's strengths and emerging skills
• Develop functional and meaningful IFSPs/IEPs
• Inform program staff and families about strengths and weaknesses
• Guide the development of intervention
• Monitor children's progress to inform intervention efforts
• Enhance collaboration
• Inform providers, programs, districts/parishes, regions, and states regarding important trends
Ongoing Work and Challenges
• Existing assessment tools were not developed to measure the three OSEP outcomes
• ECO's response: "cross-walking," or mapping frequently used assessments to the outcomes
• Work with publishers and state staff to develop guidance for how to use assessment results to generate OSEP-requested data
Work with Publishers and Developers
• Finalizing crosswalks
  • Alignment with OSEP outcomes
  • How to determine what is "typical" performance
  • Age-anchored benchmarks for measures
• How scores can be summarized using the ECO Summary Form
• Possible recalibration of scores in a way that maintains the integrity of the different assessments
• Pilot studies with GSEG and interested states
• Data summary report forms that assist users in aligning information from assessments to the OSEP outcomes
Example of Developing a Validated Crosswalk
• First, align: on the face of it, which items appear to align with or match which outcomes?
• Second, validate: Do experts agree? Check for internal consistency.
• Third, examine the sensitivity of the assessment in measuring child change.
A sketch of the first two steps appears below.
http://fpsrv.dl.kent.edu/ecis/Web/Research/OSEP/Steps.pdf
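To make the align and validate steps concrete, here is a minimal Python sketch. The item names, outcome codes, expert ratings, and the 80% agreement threshold are all invented for illustration; they are not ECO conventions.

```python
from collections import Counter

# Step 1 (align): proposed item-to-outcome mapping based on face validity.
# Outcome codes are shorthand for the three OSEP child outcomes:
# 1 = positive social relationships, 2 = knowledge and skills,
# 3 = taking action to meet needs.
proposed_crosswalk = {
    "greets familiar adults": 1,
    "counts to five": 2,
    "washes hands independently": 3,
}

# Step 2 (validate): each expert independently assigns an outcome to each item.
expert_ratings = {
    "greets familiar adults": [1, 1, 1],
    "counts to five": [2, 2, 1],
    "washes hands independently": [3, 3, 3],
}

def percent_agreement(ratings):
    """Share of raters who chose the modal outcome for an item."""
    modal_count = Counter(ratings).most_common(1)[0][1]
    return modal_count / len(ratings)

# Flag items whose expert agreement falls below the (hypothetical) threshold.
for item, outcome in proposed_crosswalk.items():
    agreement = percent_agreement(expert_ratings[item])
    status = "keep" if agreement >= 0.80 else "review"
    print(f"{item}: outcome {outcome}, agreement {agreement:.0%} -> {status}")
```

The third step, sensitivity to child change, would then be examined with real entry and exit data rather than a toy mapping like this one.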
Examples of Interpreting the Evidence
• Standard scores
• Residual change scores
• Goal attainment scaling
• Number of objectives achieved / percent of objectives achieved
• Rate of growth
• Item response theory
• Proportional change index
• Stoplight model
A worked sketch of two of these metrics follows.
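Two of these metrics are simple enough to show in a short sketch. The Python functions below use common formulations of rate of growth (score gain per month) and the proportional change index (rate of development during services divided by the developmental rate implied at entry); all scores and ages are invented.

```python
def rate_of_growth(entry_score, exit_score, months_between):
    """Average gain in score points per month between two administrations."""
    return (exit_score - entry_score) / months_between

def proportional_change_index(dev_age_entry, dev_age_exit,
                              chron_age_entry, chron_age_exit):
    """PCI: developmental-age gain per month of services, divided by the
    entry developmental rate (developmental age / chronological age).
    Values near 1.0 suggest the child maintained the entry trajectory;
    values above 1.0 suggest faster-than-entry development. Ages in months."""
    gain_rate = (dev_age_exit - dev_age_entry) / (chron_age_exit - chron_age_entry)
    entry_rate = dev_age_entry / chron_age_entry
    return gain_rate / entry_rate

# A child enters at 24 months chronological age with a developmental age of
# 18 months, and exits 12 months later with a developmental age of 30 months.
print(rate_of_growth(entry_score=40, exit_score=52, months_between=12))  # 1.0 point/month
print(proportional_change_index(18, 30, 24, 36))  # ~1.33, faster than entry rate
```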
Interpreting the AEPS for Accountability
• First administration (near entry)
  • Is the child above or below a cut-off score?
    • If above: considered to be developing typically
    • If below: development is suspect
  • Which level of the AEPS was administered?
    • Child is less than three and Level I is used
    • Child is less than three and Level II is used
    • Child is older than three and Level I is used
    • Child is older than three and Level II is used
Interpreting the AEPS for Accountability
• Second administration (near exit)
  • Use cut-off scores again
  • Examine which level was used
  • Look for:
    • changes in area percent scores
    • changes in scoring notes
    • changes in which level was administered
A sketch of this entry-to-exit comparison follows.
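As a rough illustration of the decision logic on these two slides, here is a hypothetical Python sketch. The cut-off values, percent scores, and record structure are invented; actual age-anchored AEPS cut-off scores come from the published AEPS materials.

```python
def aeps_status(area_percent_score, cutoff):
    """Compare an AEPS area percent score to an age-anchored cut-off
    (hypothetical values here): at or above suggests typical development,
    below flags development as suspect."""
    return "developing typically" if area_percent_score >= cutoff else "suspect"

def summarize_entry_exit(entry, exit_record):
    """Compare the near-entry and near-exit administrations: cut-off
    status at each point, whether the AEPS level changed, and the
    change in the area percent score."""
    return {
        "entry_status": aeps_status(entry["percent"], entry["cutoff"]),
        "exit_status": aeps_status(exit_record["percent"], exit_record["cutoff"]),
        "level_changed": entry["level"] != exit_record["level"],
        "percent_change": exit_record["percent"] - entry["percent"],
    }

# Invented example: a child below the cut-off at entry on Level I who
# scores above the cut-off at exit on Level II.
entry = {"percent": 48.0, "cutoff": 55.0, "level": "I"}
exit_record = {"percent": 63.0, "cutoff": 60.0, "level": "II"}
print(summarize_entry_exit(entry, exit_record))
```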