Challenges in Classifying Adverse Events in Cancer Clinical Trials Steven Joffe, MD, MPH Dave Harrington, PhD David Studdert, JD, PhD Saul Weingart, MD, PhD Damiana Maloof, RN
Disclosure • Member of clinical trial adverse event review board for Genzyme Corp (not oncology-related)
Adverse Events in Clinical Trials • Adverse events (AEs) are critically important outcomes of clinical trials • Human subjects protection • Endpoints for judgments about benefits & risks of study interventions • Captured on Case Report Forms • Reported to oversight agencies
Components of AE Assessment • Type • Severity • Relatedness to study agent(s) • Expectedness
Global judgment about reportability to IRB Components of AE Assessment • Type • Severity • Relatedness to study agent(s) • Expectedness
Reporting Criteria (to Dana-Farber IRB) • Grade 5 (fatal) • Grade 4, unless specifically exempted • Grade 2/3, if unexpected AND possibly, probably, or definitely related • Virtually identical to NCI’s Adverse Event Expedited Reporting System (AdEERS) criteria
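To make the rule concrete, here is a minimal sketch of how these criteria might be encoded. The function and field names (reportable_to_irb, grade, expected, relatedness, grade4_exempted) are illustrative assumptions; the study itself relied on human reviewers applying these criteria, not on an automated rule.

```python
# Minimal sketch of the reporting rule above. Field names are
# hypothetical; they do not come from the study instruments.

RELATED = {"possible", "probable", "definite"}

def reportable_to_irb(grade: int, expected: bool, relatedness: str,
                      grade4_exempted: bool = False) -> bool:
    """Apply the Dana-Farber IRB expedited-reporting criteria."""
    if grade == 5:                  # fatal AEs are always reportable
        return True
    if grade == 4:                  # grade 4, unless specifically exempted
        return not grade4_exempted
    if grade in (2, 3):             # grade 2/3: unexpected AND related
        return (not expected) and relatedness in RELATED
    return False

# Example: an unexpected grade 3 AE judged possibly related is reportable.
assert reportable_to_irb(grade=3, expected=False, relatedness="possible")
```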
AE Grading in Oncology • NCI’s Common Terminology Criteria for Adverse Events (CTCAE) typically used • Effort to standardize nomenclature • Developed by consensus methods; no formal process to establish reliability of grading • Reference: http://ctep.cancer.gov/protocolDevelopment/electronic_applications/ctc.htm#ctc_v30
Aims • To assess the validity of physician reviewers’ determinations about whether AEs in cancer trials meet IRB reporting criteria • To assess the interrater reliability of reviewers’ determinations about whether AEs that occur in cancer trials meet IRB reporting criteria • To assess the validity and reliability of reviewers’ judgments about the components of AEs
Panelists’ Roles • Review primary data from criterion sets of AEs • Rate each AE: • Classification (per CTCAE) • Grade (per CTCAE) • Relatedness • Expectedness • Reportable to IRB
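For illustration only, a sketch of what one panelist’s rating record might look like as a data structure; the field names are assumptions, not the study’s actual case report forms.

```python
from dataclasses import dataclass

# One panelist's rating of a single AE (hypothetical field names).
@dataclass
class AERating:
    classification: str   # CTCAE term, e.g. "Febrile neutropenia"
    grade: int            # CTCAE grade, 1-5
    relatedness: str      # "unrelated" through "definite"
    expected: bool        # expected per the protocol?
    reportable: bool      # global judgment: report to IRB?
```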
Statistical Analysis • Validity of judgments regarding reportability to IRB • % agreement with gold standard • Interrater reliability of raters’ judgments • Kappa coefficients
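As an illustration of these two statistics, the sketch below computes percent agreement against a gold standard and a pairwise Cohen’s kappa on made-up ratings; the slide does not specify which kappa variant (pairwise vs. multi-rater) the study used.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Made-up binary reportability calls for 10 AEs (1 = reportable).
gold    = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])  # gold standard
rater_a = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 0])
rater_b = np.array([1, 0, 0, 1, 0, 0, 1, 1, 1, 0])

# Validity: percent agreement with the gold standard.
pct_agree_a = np.mean(rater_a == gold)

# Interrater reliability: chance-corrected agreement (Cohen's kappa).
kappa_ab = cohen_kappa_score(rater_a, rater_b)

print(f"Rater A vs gold standard: {pct_agree_a:.0%} agreement")
print(f"Raters A vs B: kappa = {kappa_ab:.2f}")
```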
Role of Experience: Rank Kappa
Conclusions • Oncologists’ judgments about whether AEs require reporting to the IRB show high agreement with the gold standard • Interrater reliability of oncologists’ judgments about components of AEs varies • High: expectedness of AE; need for reporting • Moderate: grade of AE • Low: relationship of AE to study agents
Limitations • Small sample sizes • Criterion set of AEs • Panel of physician reviewers • Generalizability of set of AEs • Reviewers may not reflect population of investigators who file AE reports • Judgments based on document review rather than on firsthand knowledge
Thoughts About Direction of Bias in Agreement Statistics • Factors biasing towards less agreement • Reviewer experience • Factors biasing towards greater agreement • Standardized set of documents for review • Criterion set selected based on maximum agreement among expert panel reviewers
Implications • Judgments about AEs are complex • Human subjects: efforts to enhance reliability, or to minimize reliance on judgments about causation, are needed • Science: toxicity data from uncontrolled trials may be misleading • RCR (responsible conduct of research): education about the need for reporting is important but insufficient
Acknowledgments • Debra Morley • Anna Mattson-DiCecca • Physician panelists • ORI • NCI • Milton Fund