
Presentation Transcript


  1. A Presentation to the USDOE, January 13, 2010. Mary Ann Snider, Chief of Educator Excellence and Instructional Effectiveness. Race to the Top Assessment Programs: Project and Consortium Management. Lessons Learned from NECAP and the National Center for the Improvement of Educational Assessment.

  2. RI Participates in Three Consortia Models Three consortia, three different models: • Model I- NECAP: efficiency, capacity, and cost savings for a high-impact program (state testing program) • Model II- WIDA: expertise on a particular subgroup of students for a moderate-impact program (complicated test design for a specific population) • Model III- ACHIEVE: comparability/common curriculum and an end-of-course model for a low-impact program (specific content test model for comparability of results)

  3. Governance and Leadership • Model I- Members are operational partners • Model II- Members serve as a board of directors • Model III- Members serve on an advisory committee

  4. Governance and Leadership Depends on: • Size of the consortium • Expertise and capacity of members • Purpose and products of assessments • Phase of program: initial design, maintaining and implementing, responding to changes

  5. Consortium Member Characteristics • Must have Common Standards • Must have a common vision for the test blueprint (types of items, length of test, number of sessions) • Must have common operational agreements- spring versus fall administration, ability and willingness to integrate technology, release of test items, test security agreements

  6. Consortium Member Characteristics • Should have common uses of the test (informing or determining promotion or graduation decisions, impact on educator evaluation) • Should have common reporting needs- scale scores, standards-based results, sub-scores, item analyses, historical student data

  7. Consortium Member Characteristics • Could have common technical expectations and capacities- demographic files, score files, timing to “clean files” for accuracy in reporting, standard setting agreements (representation and methodology), reconciling discrepancies, connection to data warehouse.

  8. Governance and Leadership- NECAP • Goal is to reach consensus, but each state has one vote when consensus can’t be reached. • This model is carried throughout the tiers of responsibility- commissioners (signing off on achievement cut scores), directors (approving overall design and procedures), content teams (selecting items and anchor papers), review teams (approving items for inclusion)

  9. Roles for Third Parties • Facilitate management meetings • Provide technical oversight of assessment design • Serve as “architect” between operational partners and contractors • Convene Technical Advisory Committees • Develop ancillary test support materials • Provide professional development

  10. Features for Success • Set clear expectations and clarify the extent of control each member will have over decisions • Decide which decisions need consensus, which need unanimous agreement, and which can be handled by voting • Decide how contracts and funding will be shared • Develop strong protocols for communication (e.g. weekly calls, status reports, questions and concerns)

  11. Features for Success • Identify strengths and potential needs among all members in the partnership (e.g. content teams, strong ELL staff, etc.) • Determine what must be done collectively and what can be done individually (accountability methodology, single cut score and set of achievement descriptors, common administration procedures, accommodations, reports)

  12. What can (and probably will) go wrong? • Lead participants change- commissioners, testing directors, content team members • State budgets and capacity change • Members hold vastly differing opinions when interpreting content standards for test items, anchor papers, etc.

  13. What can (and probably will) go wrong? • Demands on the test change • A lack of strong commitment to working collaboratively makes a difficult decision even harder

  14. What Should the RTTT Consider? • Identify what features are critical and should be expected across the consortia (e.g. Alignment to Common Standards, consistent accommodations, distribution of item types, involvement of teachers) • Acknowledge what assessments have been a struggle for states and encourage different types of consortia to develop them in partnership with experts (ELL, 2%, Alternate Assessments)

  15. What Should the RTTT Consider? • Allow states to work together on NECAP-like assessment programs in core areas with NAEP-like items embedded • Identify areas for innovation and build national assessment models (end-of-course assessments, career and technical assessments) • Work with testing companies to ensure they are prepared to accommodate the operational, contractual, and technical issues necessary to successfully support a consortium assessment project
