Critical Appraisal: Epidemiology 101 POS Lecture Series April 28, 2004
"A proof is a proof. What kind of a proof? It's a proof. A proof is a proof. And when you have a good proof, it's because it's proven."
Introduction Why do I need Critical Appraisal Skills? • Not all published literature is accurate • The conclusions drawn are not always supported by the results • Why the inaccuracies? • Stupidity • “Publish or perish” • Money • Being cynical and suspicious is healthy
Introduction • Types of studies • Important components of a good randomized trial • 6 important questions to ask yourself when reading a paper
Study Types • Descriptive, observational, experimental • Descriptive – case series, case report • Observational – groups determined by a pre-existing factor, not by the investigator • Experimental – investigator controls group assignment
Types of Studies: Observational • Case-control • Uses • Advantages and disadvantages • Low cost, good for investigating causation in rare diseases • Prone to recall bias
Types of Studies: Observational • Cohort • Definition • Advantages and disadvantages • Prospective • Cost is high, especially if the disease is rare or the time between exposure and onset of disease is long
Types of Studies: Experimental • Randomized trial • “Gold Standard” • Advantages and disadvantages
Principles of a Good Trial • Ideas, research question, hypothesis • Clinical relevance • Is it possible? • Time, finances, ethics
Principles of a Good Trial • Literature search • Background • Results of other trials • Be convinced that the search was extensive
Principles of a Good Trial • Patient Selection • Inclusion and exclusion criteria • Are they well defined? • Are they reasonable? • Are they clinically relevant? • Do they change the results?
Principles of a Good Trial • Sample size calculation • Most of the orthopaedic literature does not report one • There is SOME science to it • Based on the primary outcome measure
Sample Size Calculation • n = 2[(zα + zβ)σ / Δ]² • zα (Type I error) • Usually α = 0.05, zα = 1.96 • zβ (Type II error) • Usually β = 0.2, zβ = 0.84
Sample Size Calculation • n = 2[(zα + zβ)σ / Δ]² • σ = S.D. of the outcome measure • How do you know? • Pilot study • Published data
Sample Size Calculation • n = 2[(zα + zβ)σ / Δ]² • Δ = clinically relevant difference • This is the variable that can be manipulated • Depends on the risks/costs of treatment
Sample Size Calculation • n = 2[(zα + zβ)σ / Δ]² • Equivalency trial • Rarely done – β is set lower (e.g., 0.05) and the sample size increases • A negative trial that does not address this cannot conclude “no difference between treatments,” only “we failed to prove a difference”
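The formula on the last four slides can be checked quickly in code. Below is a minimal sketch (Python, using SciPy's normal quantile function; the σ and Δ values in the example are made up for illustration) of the per-group sample size for comparing two means:

```python
import math
from scipy.stats import norm

def sample_size_per_group(sigma, delta, alpha=0.05, beta=0.20):
    """Per-group sample size for comparing two means:
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    sigma: S.D. of the outcome measure (from a pilot study or published data)
    delta: clinically relevant difference you want to detect
    """
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for two-sided alpha = 0.05
    z_beta = norm.ppf(1 - beta)        # 0.84 for beta = 0.20 (80% power)
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical example: S.D. = 10 points, clinically relevant difference = 5 points
print(sample_size_per_group(sigma=10, delta=5))  # about 63 per group
```

With σ = 10 and Δ = 5 this gives roughly 63 patients per group; halving Δ to 2.5 roughly quadruples the requirement, which is why Δ is the variable most often manipulated.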
Randomization • Computer, random number table, coin toss • Not birthday or MCP number • Block randomization • Small numbers, multi-centre trials • AABB, ABBA, etc. • Potential for bias
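A small illustration (not from the lecture) of permuted-block randomization with a block size of four, which keeps group sizes balanced in small or multi-centre trials:

```python
import random

def block_randomization(n_blocks, block=("A", "A", "B", "B")):
    """Generate a permuted-block allocation sequence (e.g., AABB, ABBA, ...)."""
    sequence = []
    for _ in range(n_blocks):
        b = list(block)
        random.shuffle(b)   # permute the treatments within each block
        sequence.extend(b)
    return sequence

print(block_randomization(3))  # e.g. ['B', 'A', 'A', 'B', 'A', 'B', 'B', 'A', ...]
```

The potential for bias mentioned above arises because, if the block size becomes known, the last assignments in each block are predictable.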
Blinding • Always adds weight to a study • Are the subjects and investigators blinded? • Is it feasible or possible?
Intervention • Well defined, particulars discussed
Outcome Measurement • Primary outcome measure • Secondary outcome measures • Data dredging
Analysis • Biostatistics • Some trust is definitely required here • Not everyone can be an expert
Relative Risk Reduction (RRR) • Example: event rate 0.1 in controls, 0.05 with treatment • RRR = (0.1 – 0.05) / 0.1 = 50% • If the outcome is rare, this is misleading
Absolute Risk Reduction (ARR) • ARR = 0.1 – 0.05 = 5% • Good for rare outcomes and for calculating NNT
Number Needed to Treat (NNT) • ARR = 0.1 – 0.05 = 5% • NNT = 1 / ARR = 1 / 0.05 = 20
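A short Python helper tying the three measures together, using the event rates from the slides (0.1 in controls vs 0.05 with treatment):

```python
def risk_measures(control_rate, treatment_rate):
    """Compute RRR, ARR, and NNT from two event rates."""
    arr = control_rate - treatment_rate   # absolute risk reduction
    rrr = arr / control_rate              # relative risk reduction
    nnt = 1 / arr                         # number needed to treat
    return rrr, arr, nnt

rrr, arr, nnt = risk_measures(0.1, 0.05)
print(f"RRR = {rrr:.0%}, ARR = {arr:.0%}, NNT = {nnt:.0f}")  # RRR = 50%, ARR = 5%, NNT = 20
```

Note that risk_measures(0.001, 0.0005) still gives RRR = 50%, but the ARR is only 0.05% and the NNT is 2000, which is why RRR alone is misleading for rare outcomes.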
Lost to Follow-up • Add ~20% to the calculated sample size to allow for losses • Good investigators are very aggressive about follow-up • “Worst case” analysis
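A minimal sketch (illustrative numbers, not from the lecture) of both ideas on this slide: padding the sample size for expected losses, and a “worst case” re-analysis in which every patient lost to follow-up is assumed to have had the least favourable outcome.

```python
import math

def inflate_for_losses(n_per_group, allowance=0.20):
    """Add an allowance (here 20%) to the calculated sample size for expected losses."""
    return math.ceil(n_per_group * (1 + allowance))

def worst_case_rate(events, completed, lost, count_lost_as_events):
    """Worst-case analysis: recompute an event rate with all patients lost to
    follow-up assumed to have had (or not had) the outcome."""
    extra = lost if count_lost_as_events else 0
    return (events + extra) / (completed + lost)

print(inflate_for_losses(63))           # 76 patients per group after the 20% allowance
print(worst_case_rate(5, 95, 5, True))  # 0.10 if all 5 lost patients had the event
print(worst_case_rate(5, 95, 5, False)) # 0.05 if none of them did
```

If the study's conclusion survives both extremes of the worst-case analysis, losses to follow-up are unlikely to have changed the result.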