A tool for the classification of study designs in systematic reviews of interventions and exposures
Meera Viswanathan, PhD
for the University of Alberta EPC
AHRQ Conference, September 2009
Steering Committee
• Ken Bond, UAEPC
• Donna Dryden, UAEPC
• Lisa Hartling, UAEPC
• Krystal Harvey, UAEPC
• P. Lina Santaguida, McMaster EPC
• Karen Siegel, AHRQ
• Meera Viswanathan, RTI-UNC EPC
Background
• EPC reports, particularly comparative effectiveness reviews, increasingly include evidence from nonrandomized and observational designs
• In systematic reviews, study design classification is essential for study selection, risk of bias assessment, the approach to data analysis (e.g., pooling), interpretation of results, and grading the body of evidence
• Assignment of study designs often receives inadequate attention
Objectives
• Identify tools for classification of studies by design
• Select a classification tool for evaluation
• Develop guidelines for application of the tool
• Test the tool for accuracy and inter-rater reliability
Objective 1: Identification of tools
31 organizations/individuals contacted
→ 11 organizations/individuals responded
→ 23 classification tools received
→ 10 tools selected for closer evaluation
→ 1 tool selected for modification and testing
Objective 2: Tool selection
The Steering Committee ranked tools based on:
• Ease of use
• Unique classification for each study design
• Unambiguous nomenclature and decision rules/definitions
• Comprehensiveness
• Potential to identify threats to validity and to guide the strength of inference
• Development by a well-established organization
Objective 3: Tool development
• Three top-ranked tools:
  • Cochrane Non-Randomised Studies Methods Group
  • American Dietetic Association
  • RTI-UNC
• Incorporated positive elements of other tools
• Developed a glossary
Objective 4: Testing (round 1)
• No clear patterns in disagreements
• Disagreements occurred at all decision points of the classification algorithm (sketched below)
• Sources of disagreement: the tool itself vs. the studies being classified
• Variations in application of the tool
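The deck does not reproduce the tool's actual decision rules, so the following is a minimal, hypothetical sketch of what a decision-point classification algorithm of this kind looks like. The questions (comparison group, investigator control of assignment, randomization, direction of data collection) are loosely modelled on common design-classification algorithms; every function name, branch, and label here is illustrative, not the UAEPC tool itself.

```python
# Hypothetical sketch of a decision-point design classifier.
# The questions and labels are illustrative placeholders, NOT the
# actual decision rules or nomenclature of the UAEPC tool.
def classify_design(has_comparison_group: bool,
                    investigator_assigned: bool,
                    randomized: bool,
                    prospective: bool) -> str:
    """Walk a fixed chain of decision points to a single design label."""
    if not has_comparison_group:
        # Single-group designs: no concurrent comparator.
        return "case series" if prospective else "retrospective case series"
    if investigator_assigned:
        # Experimental designs: exposure allocated by the investigator.
        return ("randomized controlled trial" if randomized
                else "non-randomized controlled trial")
    # Observational designs: exposure not under investigator control.
    return "prospective cohort" if prospective else "retrospective cohort"

# A study with a comparison group and investigator-assigned but
# non-random allocation lands at the second decision point:
print(classify_design(True, True, False, True))
# -> non-randomized controlled trial
```

A disagreement at any single decision point (e.g., whether allocation was truly investigator-controlled) changes the final label, which is why ambiguity at every branch matters for reliability.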
Discussion
• Moderate reliability, but low agreement with the reference standard (see the sketch below)
• Both the studies and the tool were sources of disagreement:
  • the tool was not comprehensive (e.g., quasi-experimental designs)
  • the studies were challenging (e.g., a deliberately difficult sample, poor study reporting)
• To optimize agreement and reliability:
  • training in research methods
  • training in use of the tool
  • pilot testing
  • decision rules
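The presentation reports "moderate reliability" without naming the statistic; Cohen's kappa is the conventional measure of chance-corrected agreement between two raters, so the sketch below assumes it purely for illustration. The rating data are invented, not the study's results.

```python
# Minimal sketch: Cohen's kappa for two raters (an assumed, conventional
# choice; the presentation does not specify the reliability statistic).
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on categorical labels."""
    n = len(rater_a)
    # Observed agreement: proportion of studies labelled identically.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum((freq_a[lbl] / n) * (freq_b[lbl] / n)
                for lbl in set(rater_a) | set(rater_b))
    return (p_obs - p_exp) / (1 - p_exp)

# Invented design labels from two reviewers for eight studies.
a = ["RCT", "cohort", "RCT", "case-control", "cohort", "CCT", "cohort", "RCT"]
b = ["RCT", "cohort", "CCT", "case-control", "cohort", "CCT", "case series", "RCT"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # kappa = 0.67 (raw agreement 0.75)
```

Note that inter-rater reliability (raters agreeing with each other) and agreement with a reference standard (raters being right) are measured separately, which is how the study can report moderate reliability alongside low accuracy.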
Next Steps
• Test within a real systematic review
• Further testing for specific study designs
• Further evaluation of differences in reliability by education, training, and experience
Acknowledgments
• Ahmed Abou-Setta
• Liza Bialy
• Michele Hamm
• Nicola Hooton
• David Jones
• Andrea Milne
• Kelly Russell
• Jennifer Seida
• Kai Wong
• Ben Vandermeer (statistical analysis)
Questions?
University of Alberta EPC
Edmonton, Alberta, Canada