NOPR 2006: Results and Lessons Learned
2010 Annual AHRQ Conference
Bruce E. Hillner, M.D.
Eminent University Scholar and Professor
Virginia Commonwealth University, Richmond, VA
NOPR Results

Overall Impact on Patient Management
• Diagnosis, Staging, Restaging, Recurrence
• Data on 22,975 scans from May 8, 2006 – May 7, 2007
• J Clin Oncol 2008; 26:2155-61

Impact on Patient Management by Cancer Type
• Confirmed Cancers
• Staging, Restaging, Recurrence
• Data on 40,863 scans from May 8, 2006 – May 7, 2008
• J Nucl Med 2008; 49:1928-35

Treatment Monitoring
• Data on 10,447 scans from May 8, 2006 – Dec 31, 2007
• Cancer 2009; 115:410-18
PET Changed Intended Management in 36.5% of Cases Hillner et al., J Clin Oncol 2008
Changes in Intended Management (%) Stratified by Pre-PET Plan
Hillner et al., J Clin Oncol 2008
Change in Management by Cancer Type Hillner et al., J Nucl Med 2008
Global Summary
• Change in intended management associated with PET in previously non-covered cancers was similar to that reported in single-institution studies of covered cancers
• ~1/3 of older patients undergoing PET for cancer types covered under Medicare's CED policy had a major change in intended management, including type of treatment
• The relative impact of PET on intended management was observed across the full spectrum of indications for PET
Hillner et al., J Clin Oncol 2008
Strengths of the NOPR Data
• "Real world" data
• Timely data
• Very large patient cohorts
• Current technology (≥ 85% PET/CT)
• Good observational studies usually match controlled studies in magnitude and direction of effect
• Results similar to more tightly managed single-institution studies (e.g., Hillner, J Clin Oncol 22:4147, 2004) and to Australian studies with outcome validation (Scott, J Nucl Med 49:1451, 2008)
NOPR Limitations (1)
• Data "quality"
• Potential that physicians may have been influenced by the knowledge that future Medicare reimbursement might depend on their responses
• Collected change in "intended" management, not actual management
• Unknown whether management changes were in the correct direction or improved long-term outcomes
• Defining the relevant long-term outcomes for a diagnostic (rather than therapeutic) procedure is controversial
NOPR Limitations (2)
NOPR does not address:
• Whether PET should be used in lieu of, or as a complement to, other imaging techniques
• The optimal sequencing of CT, MRI, and PET
• How much 'better' PET is than the next best method
Lessons Learned 1: Must Do Even if Painful
• Prepare a formal "operations manual" similar in structure to a clinical trial protocol document
• Project annual registry enrollment for the relevant time frame
• Prepare a statistical plan even if multiple definitions of 'meaningful' change in endpoints are considered
• If the registry includes multiple diseases (e.g., different types of cancer) or sub-groups (e.g., past myocardial infarction vs. angina), define the priority areas for first analysis
Lessons Learned 2
• Design your case report forms so that all data fields must be complete before a record is accepted
• Web-based entry of data to the 'center' minimizes costs and assists data integrity
• No evidence that being the treating physician was associated with higher rates of change in post-PET (treatment) strategies
Lessons Learned 3
• The primary endpoint of 'intended' vs. 'actual' management was a compromise
• Define and concurrently implement a 'validation' strategy from the onset of the registry
• Prospective claims collection
• Prospective selected chart audits ($)
• Consider how proximate imaging is to the preferred 'hard endpoint'
CMS Decision Memo CAG-00181R, April 3, 2009
The new framework differentiates PET imaging uses that inform the initial treatment strategy from uses that guide subsequent treatment after completion of initial treatment.
IOM Priority 17/100
"Compare the effectiveness of imaging technologies in diagnosing, staging, and monitoring patients with cancer, including positron emission tomography (PET), magnetic resonance imaging (MRI), and computed tomography (CT)."
A Prospective Concurrent Claims Database of Registry Participants is Necessary but Not Necessarily Sufficient for Proving Better Outcomes
• The relevant outcome is rarely proximate to a diagnostic test
• Discordance between intended vs. actual management is more likely to vary with the type of intended management than with the value of the test information
• Defining a control group is problematic
The Challenge to Registry-based Studies: Defining Appropriate Comparison Control Groups
Options:
a) Historical controls: non-PET care when PET was not available
b) Contemporary controls: non-PET care when PET was available
Both face:
• Indication Bias
• Differ in presentation
• Differ in probability of metastasis
• Differ in potential extent of metastasis
• Provider Bias (MDs and hospitals)
• Patterns of care by referring MDs and hospitals using PET likely differ from those of non-PET users
• Spectrum Bias: for non-PET imaging, the clinical indication is not available
2010 CER Challenge
• Such 'comparative effectiveness' evaluations must move beyond the "if" to the "how" by addressing the relative value of:
• Sequencing
• Frequency
• Timing (during treatment monitoring)
• Combinations of PET, MRI, and CT
• Comparisons between imaging types are less likely to benefit from a registry design
• Complementary prospective and retrospective studies are needed
Final Comments
• It has taken 20-30 years for one "knowledge turn" to show that PET has unique value in cancer management
• NOPR has shown the feasibility of performing large-scale, policy-relevant imaging research that is minimally intrusive to patients and imagers
• Going forward, the policy and economic questions for advanced imaging are when, how often, and in what sequence it should be used in patients with suspected and confirmed cancer
• Prospective, multi-center, investigator-initiated evaluations are needed to confirm 'relative' comparative value