Ob/GYN Journal Club Notes September 28, 2018 Martha A. Wojtowycz, PhD wojtowym@upstate.edu
Learning Objectives • Distinguish between a systematic review and a meta-analysis • Interpret a flow chart for a randomized controlled trial • Explain Type I and Type II errors, and power • Identify the important considerations for sample size calculation • Discuss the pros and cons of publishing an underpowered study
Wihbey KA, et al. "Prophylactic negative pressure wound therapy and wound complication after cesarean delivery in women with class II or III obesity." Obstetrics & Gynecology, vol. 132, no. 2, August 2018.
Why did they do this study? • Two recent meta-analyses reported on the effectiveness of prophylactic negative pressure wound therapy at the time of cesarean • With conflicting results • What is a meta-analysis? • The term is often used interchangeably with systematic review. What is the difference?
Systematic Review • Detailed, systematic, and transparent method of collecting, appraising, and synthesizing evidence to answer a well-defined question • Steps in a systematic review: • Define the question • Develop a protocol for proposed methodology • Conduct a thorough literature search, using the protocol • Appraise the quality of studies • Synthesize evidence • Disseminate the findings • Highest level of evidence
Meta-Analysis • Not the same as a systematic review, but often used within the context of a systematic review • Depends on the well-defined question • Used when studies with quantitative data are appropriate for answering the study question • Not used when qualitative methods are more appropriate for answering the study question • Statistical procedure used for combining quantitative data from multiple separate studies • If the treatment effect or effect size were the same across studies, then meta-analysis could be used to identify this common effect • If the treatment effect or effect size were different across studies, then meta-analysis could be used to identify reasons for variation
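A fixed-effect meta-analysis combines study estimates weighted by the inverse of their variance, which is the simplest version of the pooling described above. The sketch below uses made-up effect sizes (log risk ratios) and standard errors purely for illustration; they do not come from the article or its cited meta-analyses.

```python
import math

def fixed_effect_pool(effects, std_errs):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    effects: per-study estimates on a common scale (e.g., log risk ratios)
    std_errs: their standard errors
    Returns (pooled_effect, pooled_standard_error).
    """
    weights = [1.0 / se ** 2 for se in std_errs]   # precision weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log risk ratios from three small trials (illustrative only)
effects = [-0.30, -0.10, 0.05]
std_errs = [0.15, 0.20, 0.25]

pooled, se = fixed_effect_pool(effects, std_errs)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se   # 95% CI on the log scale
print(f"pooled log RR = {pooled:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

When the study effects differ by more than chance allows (heterogeneity), a random-effects model is used instead; that distinction corresponds to the two situations in the last two bullets above.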
Randomized Controlled Trial • Why use this study design? • Appropriate when comparing two or more treatment options • Feasible (?) and ethical • Why was this RCT non-blinded? • Patient knew which treatment option she received – not possible to conceal • Surgeon, nurses knew which treatment arm the patient was in – not possible to conceal • Do you think this introduced any potential bias?
RCT – Flow Chart • Every RCT article should contain a flow chart that shows what happens to the subjects from the time they are assessed for eligibility to completion • What to look at • Difference in the number who are assessed for eligibility and who actually are randomized • Number who decline participation • Number who are excluded • Number lost to follow-up
Figure 1 • Every RCT should show who was assessed, consented, declined, excluded, randomized, assigned to the study arms, lost to follow-up, and completed • Can this be improved? • Why did this happen? • Effect on outcomes?
Control in an RCT • After the randomization process, need to know if • Patients in the two arms are similar • Risk factors associated with the outcomes are similar • Review Table 1 to see if the patients are similar in: • Demographics • Ob history • BMI categories • Medical factors for infection (e.g., diabetes mellitus) • Review Table 2 to see if the patients are similar in intrapartum or intraoperative characteristics from a clinical standpoint, e.g., 36% vs. 24% tubal ligation
Are there differences between these groups? If so, could they have an effect on the outcomes?
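One way to check whether a baseline imbalance such as 36% vs. 24% tubal ligation is larger than chance would produce is a two-proportion z-test. The sketch below assumes roughly 83 patients per arm; the article reports 166 enrolled in total, so the exact arm sizes here are an assumption for illustration.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test (normal approximation)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)   # pooled proportion under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

z, p = two_proportion_z(0.36, 83, 0.24, 83)
print(f"z = {z:.2f}, p = {p:.3f}")   # not significant at alpha = .05, but a sizable imbalance
```

A non-significant p-value here does not settle the clinical question: with small arms, a real imbalance in a risk factor can easily fail to reach significance yet still affect the outcomes.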
Statistical Inference • Process of making inferences about the population from a sample • Population: women with Class II or III Obesity who underwent a cesarean section • Sample: women in this study from the two centers • Want to know how the results from the study compare with the truth in the population
Type I and II Errors • Type I error (α): traditionally we set α = .05 • Think of it as the false positive error rate • There is a 5% chance that we will say there is a difference when there really is NO difference. • Type II error (β): traditionally we set β = .20 • Think of it as the false negative error rate • There is a 20% chance that we will say there is no difference when there really is a difference.
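The 5% false positive rate can be seen directly by simulation: generate many experiments in which the two groups truly come from the same distribution, and count how often a test at α = .05 declares a difference anyway. This sketch uses a simple z-test on means with known variance; all of the numbers (group size, number of experiments) are illustrative choices, not values from the article.

```python
import math
import random

random.seed(42)

Z_CUTOFF = 1.96        # two-sided critical z for alpha = .05
N_PER_GROUP = 30
N_EXPERIMENTS = 5000

false_positives = 0
for _ in range(N_EXPERIMENTS):
    # Both groups drawn from the SAME distribution: any "difference" is noise
    a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    # z-test on the difference in means, sigma known to be 1
    z = (sum(a) / N_PER_GROUP - sum(b) / N_PER_GROUP) / math.sqrt(2 / N_PER_GROUP)
    if abs(z) > Z_CUTOFF:
        false_positives += 1

print(f"false positive rate: {false_positives / N_EXPERIMENTS:.3f}")  # close to .05
```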
Power • Power is the ability to detect a significant difference when there really is a difference • Related to the probability of a Type II error: Power = (1 − β) = .80 • There is an 80% chance of detecting a significant difference when there really is a difference. • Strive for 80% or higher power • In this study, power ranged from <0.1% to 29%! • Study lacked the power to detect significant differences • Table 3 – results not statistically significant
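Power for comparing two proportions can be approximated in closed form. The sketch below computes the power to detect the study's hypothesized effect (SSI falling from 20% to 10%) at roughly 83 patients per arm, an assumption based on the 166 enrolled; note that the <0.1% to 29% range reported in the article refers to the smaller effects actually observed, not the hypothesized halving.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1, p2, n_per_arm, z_alpha=1.959964):
    """Approximate power of a two-sided two-proportion z-test."""
    p_bar = (p1 + p2) / 2
    se0 = math.sqrt(2 * p_bar * (1 - p_bar))        # SE factor under H0
    se1 = math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))  # SE factor under H1
    z = (abs(p1 - p2) * math.sqrt(n_per_arm) - z_alpha * se0) / se1
    return norm_cdf(z)

power = power_two_proportions(0.20, 0.10, 83)
print(f"power at 83 per arm: {power:.2f}")   # well below the 0.80 target
```

Even for the effect the study was designed around, enrolling well under half of the target sample size leaves the trial far short of 80% power.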
Power and Sample Size Considerations • Power is related to sample size • Increase power by increasing sample size • Information needed to calculate required sample size: • Probability of Type I error (α) • Probability of Type II error (β) • Difference you want to be able to detect • Requires some idea of what the mean, the rate, etc., is for the non-exposed group • Other issues: • How many or what proportion with missing data, don't complete the study, or are lost to follow-up?
Power and Sample Size Considerations • Sample size considerations for this study • Probability of Type I error (α) = .05 • Probability of Type II error (β) = .20 (80% power) • Difference you want to be able to detect • Wanted to detect a 50% decrease in superficial surgical site infection, assuming a 20% occurrence of SSI • Sample size needed – 400 total, with 200 in each arm • Only 166 (42% of required sample size) enrolled in the study after the 2-year enrollment period
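The study's target of roughly 200 per arm can be reproduced with the standard normal-approximation formula for comparing two proportions: detect a drop from 20% to 10% SSI with two-sided α = .05 and 80% power. This is a sketch of the textbook formula, not necessarily the exact method the authors used; the z critical values are hardcoded standard constants.

```python
import math

Z_ALPHA = 1.959964   # two-sided critical value for alpha = .05
Z_BETA = 0.841621    # critical value for 80% power (beta = .20)

def n_per_arm(p1, p2, z_alpha=Z_ALPHA, z_beta=Z_BETA):
    """Required patients per arm to compare two proportions (normal approximation)."""
    p_bar = (p1 + p2) / 2
    term_null = z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
    term_alt = z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))
    return math.ceil(((term_null + term_alt) / (p1 - p2)) ** 2)

n = n_per_arm(0.20, 0.10)
print(f"{n} per arm, {2 * n} total")   # ~200 per arm, consistent with the study's 400 target
```

In practice the computed n is inflated further for anticipated losses to follow-up, the "other issue" noted above.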
• Based on data that you have plus imputed data on those lost to follow-up • Based on outcome data that you have • Worst-case scenario
Questions to ask yourself about an RCT How rigorous was the study methodology? • Was assignment randomized? • Was allocation concealed? • Was masking (blinding) used for patients, clinicians, or outcome assessment? • Were groups equivalent at the start of the trial? • Was follow-up adequate? • Was the sample size adequate? Do the results apply to your patient(s)?
Underpowered studies • Should they be published? • Publication bias against studies that do not find any statistically significant results • Distinguish between a well-done but underpowered study and a poorly done study • Can future researchers learn from the problems encountered in this study, and use that information to improve the methodology? • Could they potentially be part of a systematic review that uses meta-analysis?
https://clinicaltrials.gov/ct2/show/NCT03009110?term=NCT03009110&recrs=ab&rank=1