Race to the Top - Assessment Programs: Project and Consortium Management
Lessons Learned from NECAP and the National Center for the Improvement of Educational Assessment
A Presentation to the USDOE, January 13, 2010
Mary Ann Snider, Chief of Educator Excellence and Instructional Effectiveness
RI Participates in Three Consortia Models
• Three consortia, three different models
• Model I - NECAP: efficiency, capacity, cost savings for a high-impact program (state testing program)
• Model II - WIDA: expertise on a particular subgroup of students for a moderate-impact program (complicated test design for a specific population)
• Model III - ACHIEVE: comparability/common curriculum, end-of-course model for a low-impact program (specific content test model for comparability of results)
Governance and Leadership
• Model I - Members are operational partners
• Model II - Members serve as a board of directors
• Model III - Members serve on an advisory committee
Governance and Leadership Depends On:
• Size of the consortia
• Expertise and capacity of members
• Purpose and products of the assessments
• Phase of the program: initial design, maintaining and implementing, responding to changes
Consortium Member Characteristics
• Must have common standards
• Must have a common vision for the test blueprint (types of items, length of test, number of sessions)
• Must have common operational agreements: spring versus fall administration, ability and willingness to integrate technology, release of test items, test security agreements
Consortium Member Characteristics
• Should have common uses of the test (informing or determining promotion or graduation decisions, impact on educator evaluation)
• Should have common reporting needs: scale scores, standards-based results, sub-scores, item analyses, historical student data
Consortium Member Characteristics
• Could have common technical expectations and capacities: demographic files, score files, timing to “clean files” for accuracy in reporting, standard-setting agreements (representation and methodology), reconciling discrepancies, connection to a data warehouse (see the sketch below)
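To make the “clean files” and discrepancy-reconciliation expectation above more concrete, the sketch below shows one way a consortium might standardize a pre-reporting check that matches a demographic file against a score file. This is a minimal, hypothetical illustration, not part of the NECAP or RTTT specifications; the file names, column names (student_id, scale_score), and the reporting score band are all assumptions.

```python
# Hypothetical sketch only: a pre-reporting "clean files" check a consortium
# might agree on. File names, column names, and the score band are assumptions.
import csv

SCALE_MIN, SCALE_MAX = 300, 380  # placeholder reporting band, set by consortium agreement


def load_by_student(path, key="student_id"):
    """Read a CSV file and index its rows by student identifier."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}


def reconcile(demographic_path, score_path):
    """Return discrepancies between a demographic file and a score file."""
    demographics = load_by_student(demographic_path)
    scores = load_by_student(score_path)

    return {
        # Students enrolled but missing a score record.
        "missing_scores": sorted(demographics.keys() - scores.keys()),
        # Score records with no matching demographic record.
        "missing_demographics": sorted(scores.keys() - demographics.keys()),
        # Scores outside the agreed reporting band.
        "out_of_range": sorted(
            sid for sid, row in scores.items()
            if not SCALE_MIN <= int(row["scale_score"]) <= SCALE_MAX
        ),
    }


if __name__ == "__main__":
    issues = reconcile("demographics.csv", "scores.csv")
    for name, ids in issues.items():
        print(f"{name}: {len(ids)} record(s) to resolve before reporting")
```

In practice, the required fields, thresholds, and timing of such checks would be written into the consortium's common operational agreements rather than left to individual states.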
Governance and Leadership - NECAP
• The goal is to reach consensus, but each state has one vote when consensus can’t be reached.
• This model is carried through all tiers of responsibility: commissioners sign off on achievement cut scores, directors approve the overall design and procedures, content teams select items and anchor papers, and review teams approve items for inclusion.
Roles for Third Parties
• Facilitate management meetings
• Provide technical oversight of assessment design
• Serve as “architect” between operational partners and contractors
• Convene Technical Advisory Committees
• Develop ancillary test support materials
• Provide professional development
Features for Success
• Set clear expectations and clarify the extent of control each member will have over decisions
• Decide which decisions need consensus, which need unanimous agreement, and which can be handled by voting
• Decide how contracts and funding will be shared
• Develop strong protocols for communication (e.g., weekly calls, status reports, questions and concerns)
Features for Success
• Identify strengths and potential needs among all members of the partnership (e.g., content teams, strong ELL staff)
• Determine what must be done collectively and what can be done individually (accountability methodology, a single cut score and set of achievement descriptors, common administration procedures, accommodations, reports)
What can (and probably will) go wrong?
• Lead participants change: commissioners, testing directors, content team members
• State budgets and capacity change
• Members hold vastly differing opinions when interpreting content standards for test items, anchor papers, etc.
What can (and probably will) go wrong?
• Demands on the test change
• A lack of strong commitment to working collaboratively makes a difficult decision harder
What Should the RTTT Consider?
• Identify which features are critical and should be expected across the consortia (e.g., alignment to Common Standards, consistent accommodations, distribution of item types, involvement of teachers)
• Acknowledge which assessments have been a struggle for states and encourage different types of consortia to develop them in partnership with experts (ELL, 2%, alternate assessments)
What Should the RTTT Consider?
• Allow states to work together on NECAP-like assessment programs in core areas with NAEP-like items embedded
• Identify areas for innovation and build national assessment models (end-of-course assessments, career and technical assessments)
• Work with testing companies to ensure they are prepared to accommodate the operational, contractual, and technical issues necessary to successfully support a consortium assessment project