AHRQ National Advisory Council on Healthcare Research and Quality
Subcommittee on Children’s Healthcare Quality Measures for Medicaid and CHIP Programs
Rita Mangione-Smith and Jeffrey Schiff
September 17 & 18, 2009
Potential Impact of Core Measures Identification Work: 48% of America’s Children Under 19
Sources: AHRQ, based on coverage estimates from 2008 national participant and spending data derived from CMS and U.S. Census Current Population Survey data sources, reported by the National Center on Children in Poverty, http://www.nccp.org/profiles/index_32.html. CHIP estimates are for the number of children in separate SCHIP programs; Medicaid estimates include children in Medicaid SCHIP programs. Coverage estimates reflect Medicaid and CHIP enrollees whether or not they received health care services. The total number of children under 19 is interpolated from U.S. Census Bureau figures for the number of children 17 and under (73.9 million) and 19 and under (83 million).
Our Charge
Recommend a core set of measures that can be “TAKEN TOGETHER – USED TO ESTIMATE THE OVERALL NATIONAL QUALITY OF HEALTH CARE FOR CHILDREN.”
Conceptual Framework – Guiding Determination of the Scope for the Core Measurement Set
Grounded Measures → Intermediate Measures → Aspirational Measures
• Lean toward recommending more grounded measures
• Grounded: 10-25 measures, currently feasible, many already in place
• Intermediate: number not yet determined; good specifications, some States already using them
• Aspirational: measures still needed to fill in the gaps
Scope for Core Measurement Set
1. Be realistic about staffing/funding needs for collecting, analyzing, and reporting available data
2. Make a comprehensive effort to find good measures for all service categories, duration of enrollment, and other aspects of care required by the legislation; however, if no good measures currently exist for a given aspect of care, a measure will not be recommended for the core set
3. Include measures not currently used by Medicaid/CHIP
   • e.g., State and national measurement efforts
4. Choose measures that are actionable
   • There should be clear steps a State, plan, or provider can take to improve on performance – the measure should inform what these steps need to be
Consensus on Criteria Definitions – Validity
• Measures must be supported by scientific evidence or, where evidence is insufficient, by expert consensus
• Measures must support a link between:
   • Structure and outcomes of care
   • Structure and processes of care
   • Processes and outcomes of care
• The measure must represent an aspect of care that is under the control of health care providers and systems
• The measure should truly assess what it purports to measure
• Measures supported by evidence from unpublished data should be considered for inclusion
Consensus on Criteria Definitions – Feasibility
• The data necessary to score the measure must be available to State Medicaid and CHIP programs
   • Administrative data, medical records data, survey data
• Detailed specifications must be available for the measure that allow for reliable and unbiased scoring across States and institutions
Consensus on Criteria Definitions – Importance
• The measure should be actionable
• The cost of the condition to the Nation should be high
• Health care systems are clearly accountable for the quality problem assessed by the measure
• The extent of the quality problem should be substantial
• There should be documented variation in performance on the measure
• The measure should be representative of a class of quality problems: a “sentinel measure” of the quality of care provided for preventive care, mental health care, dental care, etc.
Consensus on Criteria Definitions – Importance (continued)
• The measure assesses an aspect of health care where there are known disparities
• The core set should represent a balanced portfolio of measures and be consistent with the intent of the legislation
• Improving performance on the core set of measures should have the potential to transform care for our Nation’s children
Consensus on Criteria Definitions – Transparency
• For all measures recommended for inclusion in the core set:
   • The level of scientific evidence supporting the measure will be reported
      • Example: USPSTF grades A, B, C, or I; Level I, II, or III
   • The level of expected burden for obtaining the data needed to score the measure will be reported
      • Low, moderate, or high
Delphi Scoring Process Completed by Subcommittee Members
• Measures scored on a 9-point scale
   • 7-9: Measure is definitely valid, feasible, and important
   • 4-6: Measure has uncertain validity, feasibility, or importance
   • 1-3: Measure is not valid, feasible, or important
• Measures with a Validity score of ≥ 7 pass
• Measures with Feasibility and Importance scores of ≥ 4 pass
• Delphi Round 1 completed at the July 2009 meeting
   • Assessed the validity, feasibility, and importance of measures in use by State Medicaid and CHIP programs
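To make the passing rule above concrete, here is a minimal illustrative sketch in Python. It assumes panelist ratings are aggregated with the median; the function name, data layout, and the median rule are assumptions for illustration only and are not specified in these slides.

```python
from statistics import median

# Illustrative sketch of the Delphi passing rule described above (assumption:
# panel ratings are aggregated with the median; the subcommittee's exact
# aggregation rule is not stated in these slides).

def passes_delphi(validity_scores, feasibility_scores, importance_scores):
    """Return True if a measure passes: median validity >= 7 and
    median feasibility and importance each >= 4 on the 9-point scale."""
    return (
        median(validity_scores) >= 7
        and median(feasibility_scores) >= 4
        and median(importance_scores) >= 4
    )

# Hypothetical example: a measure rated by five panelists.
print(passes_delphi([8, 7, 9, 7, 8], [5, 6, 4, 7, 5], [8, 7, 6, 8, 9]))  # True
```

Under these thresholds, a measure whose median validity reaches the 7-9 band and whose median feasibility and importance reach at least the 4-6 band would move forward.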
Delphi Scoring Process Completed by Subcommittee Members (continued)
• Delphi Round 2 completed Monday, September 14th, in preparation for the second meeting
• Four groups of measures were assessed:
   • Measures that had passing scores for validity, feasibility, and importance in Delphi Round 1
   • Measures that were judged to be “controversial” during scoring for validity and feasibility in Delphi Round 1
   • Measures identified through environmental scans but not included on the original list of measures scored during Delphi Round 1
   • Measures nominated by SNAC members, Federal partner agencies, and the public between July 24th and August 24th
65/121 Measures Passed Delphi Round 2
• Prevention and Health Promotion: 27/50
• Management of Acute Conditions: 16/25
• Management of Chronic Conditions: 18/31
• Family Experiences with Care: 3/6
• Most Integrated Health Care Systems: 0/1
• Availability of Services: 0/5
• Health Status: 0/1
• Duration of Enrollment: 1/2
Meeting 9/17-9/18: Goal is to identify a parsimonious, core, grounded, and balanced set of measures
• 65 measures passed Delphi Round 2 and will be discussed for final inclusion at the meeting
• Balancing grid developed to track the following constructs (a sketch of one grid row follows this list):
   • Ages covered by the measure
   • Disparities in performance on the measure
   • Sites/types of care addressed by the measure
   • Aspects of the care continuum addressed by the measure
      • e.g., Outpatient, Inpatient, Mental Health, Dental
   • Aspects of the system continuum addressed by the measure
      • e.g., Structure, Process, Outcomes, Efficiency
   • Types of entities using the measure
      • e.g., Medicaid programs, health plans, clinics/providers, researchers
   • The data source for the measure
      • e.g., Administrative, Medical Records, Survey
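As a rough illustration of how one row of the balancing grid might be organized for tracking, the sketch below uses a Python dataclass. All field names, category values, and the example measure are hypothetical; they are not taken from the subcommittee’s actual grid.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical representation of one row of the balancing grid described
# above; field names and example values are illustrative assumptions only.

@dataclass
class BalancingGridRow:
    measure_name: str
    ages_covered: str                  # e.g., "0-18 years"
    known_disparities: bool
    sites_of_care: List[str] = field(default_factory=list)     # e.g., ["Outpatient", "Dental"]
    care_continuum: List[str] = field(default_factory=list)    # e.g., ["Outpatient", "Mental Health"]
    system_continuum: List[str] = field(default_factory=list)  # e.g., ["Process", "Outcome"]
    entities_using: List[str] = field(default_factory=list)    # e.g., ["Medicaid programs", "Health plans"]
    data_source: str = "Administrative"                        # or "Medical Records", "Survey"

# Hypothetical example row.
row = BalancingGridRow(
    measure_name="Well-child visits",
    ages_covered="0-6 years",
    known_disparities=True,
    sites_of_care=["Outpatient"],
    care_continuum=["Outpatient"],
    system_continuum=["Process"],
    entities_using=["Medicaid programs", "Health plans"],
    data_source="Administrative",
)
```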
Process for this Meeting
• Nomination and VFI process (VFI = validity, feasibility, and importance)
• Review interval process
• Vote to select only one of “nearly same” measures
Process for this Meeting: Pruning
• Discuss and confirm the “domains” on the balancing grid
• Reaffirm the concepts of a “core, grounded, parsimonious” set of measures
• Discuss measures by legislative category (e.g., prevention, health promotion – PHP)
• Rank order within each category or subcategory
• Retain the top 33% in each category (see the sketch below)
• Present this set on the balancing grid after the meeting today
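The ranking-and-retention step above can be illustrated with a short sketch that rank-orders measures within each legislative category and keeps roughly the top third. The data layout, tie handling, and keep-at-least-one-per-category rule are assumptions for illustration, not the subcommittee’s actual procedure.

```python
from collections import defaultdict

# Illustrative sketch of the pruning step: rank-order measures within each
# legislative category and retain roughly the top 33%.

def prune_top_third(measures):
    """measures: list of (name, category, rank_score) tuples;
    higher rank_score means the group ranked the measure more highly."""
    by_category = defaultdict(list)
    for name, category, score in measures:
        by_category[category].append((name, score))

    retained = []
    for category, items in by_category.items():
        items.sort(key=lambda pair: pair[1], reverse=True)
        keep = max(1, round(len(items) / 3))  # top ~33%, at least one per category (assumption)
        retained.extend(name for name, _ in items[:keep])
    return retained

# Hypothetical measures and group rank scores.
example = [
    ("Measure A", "Prevention and Health Promotion", 8.5),
    ("Measure B", "Prevention and Health Promotion", 6.0),
    ("Measure C", "Prevention and Health Promotion", 7.2),
    ("Measure D", "Management of Acute Conditions", 5.5),
    ("Measure E", "Management of Acute Conditions", 9.0),
]
print(prune_top_third(example))  # ['Measure A', 'Measure E']
```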
Process for this Meeting: Creation of a Unified “Whole”
• Review the balancing grid from Thursday
• Identify and acknowledge holes
• Vote/rank the remaining measures as a whole set
• Review the balancing grid for varying numbers of measures (10, 15, 20, 25)
• Discuss the number for a parsimonious set
• Vote for a set by size
• Record group comments about the nominated core set