Basics of Meta-analysis Steff Lewis, Rob Scholten Cochrane Statistical Methods Group (Thanks to the many people who have worked on earlier versions of this presentation)
Session plan • Introduction • Effect measures – what they mean • Exercise 1 • Meta-analysis • Exercise 2 • Heterogeneity • Exercise 3 • Summary
Before we start… this workshop will discuss binary outcomes only • e.g. dead or alive, pain free or in pain, smoking or not smoking • each participant is in one of two possible, mutually exclusive, states. There are other workshops for continuous data, etc.
Where to start • You need a pre-defined question • “Does aspirin increase the chance of survival to 6 months after an acute stroke?” • “Does inhaling steam decrease the chance of a sinus infection in people who have a cold?”
Where to start • Collect data from all the trials and enter them into RevMan. For each trial you need: • the total number of patients in each treatment group • the number of patients who had the relevant outcome in each treatment group
Which effect measure? • In RevMan you can choose: • Relative Risk (RR) = Risk Ratio • Odds Ratio (OR) • Risk Difference (RD) = Absolute Risk Reduction (ARR)
Risk • 24 people skiing down a slope, and 6 fall • risk of a fall = 6 falls / 24 who could have fallen = 6/24 = ¼ = 0.25 = 25% • risk = number of events of interest / total number of observations
Odds • 24 people skiing down a slope, and 6 fall • odds of a fall = 6 falls / 18 who did not fall = 6/18 = 1/3 = 0.33 (not usually expressed as a %) • odds = number of events of interest / number without the event
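A minimal sketch in Python of these two definitions, using the skiing numbers above:

```python
# Risk vs odds for the skiing example (6 falls among 24 skiers).
events = 6        # skiers who fell
total = 24        # skiers at risk of falling

risk = events / total             # events / all observations
odds = events / (total - events)  # events / those without the event

print(f"risk = {risk:.2f}")  # 0.25, i.e. 25%
print(f"odds = {odds:.2f}")  # 0.33, i.e. one fall per three non-falls
```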
Expressing it in words • Risk • the chances of falling were one in four, or 25% • Odds • the chances of falling were one third of the chances of not falling • one person fell for every three that didn’t fall • the chances of falling were 3 to 1 against
Do risks and odds differ much? • Control arm of trial by Blum • 130 people still dyspeptic out of 164 • chance of still being dyspeptic • risk = 130/164 = 0.79; odds = 130/34 = 3.82 • Tanzania trial, control arm • 4 cases in 63 women • chance of pregnancy-induced hypertension • risk = 4/63 = 0.063; odds = 4/59 = 0.068 Example 1: Moayyedi et al. BMJ 2000;321:659-64. Example 2: Knight M et al. Antiplatelet agents for preventing and treating pre-eclampsia (Cochrane Review). In: The Cochrane Library, Issue 3, 2000. Oxford: Update Software.
Risk ratio (relative risk) • risk of event on treatment = 119/164 = 0.726 • risk of event on control = 130/164 = 0.793 • risk ratio = risk on treatment / risk on control = 0.726 / 0.793 = 0.92 • Where the risk ratio = 1, this implies no difference in effect
Odds ratio • odds of event on treatment = 119/45 = 2.64 • odds of event on control = 130/34 = 3.82 • odds ratio = odds on treatment / odds on control = 2.64 / 3.82 = 0.69 • Where the odds ratio = 1, this implies no difference in effect
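A short sketch confirming both ratios from the Blum trial counts above:

```python
# Risk ratio and odds ratio for the Blum trial counts above.
events_t, n_t = 119, 164  # treatment arm: still dyspeptic / total
events_c, n_c = 130, 164  # control arm

risk_t, risk_c = events_t / n_t, events_c / n_c
odds_t = events_t / (n_t - events_t)  # 119/45
odds_c = events_c / (n_c - events_c)  # 130/34

print(f"RR = {risk_t / risk_c:.2f}")  # 0.92
print(f"OR = {odds_t / odds_c:.2f}")  # 0.69
```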
What is the difference between Peto OR and OR? • The Peto Odds Ratio is an approximation to the Odds Ratio that works particularly well with rare events
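The slide does not give the formula; as a sketch, the usual one-step Peto estimate of the log odds ratio for a single 2×2 table is (O − E)/V, where O is the observed number of events in the treatment arm, E its expectation under no effect, and V a hypergeometric variance:

```python
import math

# Sketch of the Peto one-step estimator for a single 2x2 table:
# log OR is approximated by (O - E) / V.
def peto_or(a, n1, c, n2):
    """a events among n1 treated; c events among n2 controls."""
    N = n1 + n2                   # all participants
    m = a + c                     # all events
    E = n1 * m / N                # expected treated events under no effect
    V = n1 * n2 * m * (N - m) / (N**2 * (N - 1))  # hypergeometric variance
    return math.exp((a - E) / V)

# Blum trial counts from the earlier slides:
print(f"{peto_or(119, 164, 130, 164):.2f}")  # ~0.69, close to the ordinary OR here
```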
Expressing risk ratios and odds ratios • Risk ratio 0.92 • the risk of still being dyspeptic on treatment was about 92% of the risk on control • treatment reduced the risk by about 8% • treatment reduced the risk to 92% of what it was • Odds ratio 0.69 • treatment reduced the odds by about 30% • the odds of still being dyspeptic in treated patients were about two-thirds of what they were in controls
(Absolute) Risk difference • risk on treatment – risk on control • for Blum et al: 119/164 – 130/164 = 0.726 – 0.793 = –0.067 • usually expressed as a percentage: –6.7% • treatment reduced the risk of still being dyspeptic by about 7 percentage points • Where the risk difference = 0, this implies no difference in effect
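The same Blum counts, as a quick check:

```python
# (Absolute) risk difference for the same Blum trial counts.
rd = 119 / 164 - 130 / 164  # risk on treatment minus risk on control
print(f"RD = {rd:.3f} ({rd * 100:.1f} percentage points)")  # -0.067, -6.7
```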
What do we want from our summary statistic? • Communication of effect • users must be able to understand and apply the result • Consistency of effect • it would be ideal to have one number to apply in all situations • Mathematical properties
Summary • Further info in the “Dealing with dichotomous data” workshop.
What is meta-analysis? • A way to calculate an average • Estimates an ‘average’ or ‘common’ effect • Improves the precision of an estimate by using all available data
What is a meta-analysis? • An optional part of a systematic review • [Venn diagram: meta-analyses form a subset of systematic reviews]
When can we do a meta-analysis? • When more than one study has estimated an effect • When there are no differences in the study characteristics that are likely to substantially affect outcome • When the outcome has been measured in similar ways • When the data are available (take care with interpretation when only some data are available)
Averaging studies • Starting with the summary statistic for each study, how should we combine these? • A simple average gives each study equal weight • This seems intuitively wrong • Some studies are more likely to give an answer closer to the ‘true’ effect than others
Weighting studies • More weight to the studies which give us more information • More participants • More events • Lower variance • Weight is closely related to the width of the study confidence interval: wider confidence interval = less weight
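RevMan does this weighting for you; a minimal sketch of the inverse-variance idea, using made-up trial counts (not data from the slides):

```python
# Sketch: inverse-variance weights for log odds ratios.
# Each tuple is (events_treatment, n_treatment, events_control, n_control);
# these are made-up illustrative trials, not data from the slides.
trials = [(15, 100, 25, 100), (8, 50, 12, 50), (60, 400, 80, 400)]

weights = []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    var_log_or = 1/a + 1/b + 1/c + 1/d  # variance of the log odds ratio
    weights.append(1 / var_log_or)      # lower variance -> more weight

total = sum(weights)
for w in weights:
    print(f"weight = {100 * w / total:.1f}%")  # the large trial dominates
```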
Displaying results graphically • RevMan produces forest plots
There’s a label to tell you what the comparison is and what the outcome of interest is.
At the bottom there’s a horizontal line. This is the scale measuring the treatment effect. Here the outcome is death and towards the left the scale is less than one, meaning the treatment has made death less likely. Take care to read what the labels say – things to the left do not always mean the treatment is better than the control.
The vertical line in the middle is where the treatment and control have the same effect – there is no difference between the two
• The data for each trial are shown, divided into the experimental and control groups • The % weight given to each study in the pooled analysis is shown • Each study has an ID
• The data shown in the graph are also given numerically • The label above the graph tells you what statistic has been used • Each study is given a blob, placed where the data measure the effect • The size of the blob is proportional to the % weight • The horizontal line is called a confidence interval and is a measure of how we think the result of this study might vary with the play of chance • The wider the horizontal line, the less confident we are of the observed effect
The pooled analysis is given a diamond shape: the widest point in the middle sits at the calculated best guess (point estimate), and the horizontal width is the confidence interval. Definition of a 95% confidence interval: if the trial were repeated 100 times and a 95% confidence interval calculated each time, about 95 of those 100 intervals would contain the true effect.
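A sketch of how the diamond’s point estimate and 95% CI arise under a fixed-effect (inverse-variance) model, reusing the made-up trials from the weighting sketch:

```python
import math

# Sketch: fixed-effect (inverse-variance) pooled log OR and its 95% CI --
# the location and width of the diamond. Same made-up trials as before.
trials = [(15, 100, 25, 100), (8, 50, 12, 50), (60, 400, 80, 400)]

log_ors, weights = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    log_ors.append(math.log((a * d) / (b * c)))
    weights.append(1 / (1/a + 1/b + 1/c + 1/d))

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1 / sum(weights))  # standard error of the pooled log OR
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```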
Could we just add the data from all the trials together? • One approach to combining trials would be to add all the treatment groups together, add all the control groups together, and compare the totals • This is wrong for several reasons, and it can give the wrong answer
If we just add up the columns we get 34.3% vs 32.5%, an RR of 1.06 – a higher death rate in the steroids group. From a meta-analysis, we get RR = 0.96 – a lower death rate in the steroids group.
Problems with simple addition of studies • breaks the power of randomisation • imbalances within trials introduce bias
In effect we are comparing this experimental group directly with this control group – this is not a randomised comparison
The Pitts trial contributes 17% (201/1194) of all the data to the experimental column, but only 8% (74/925) to the control column. Therefore it contributes more information to the average death rate in the experimental column than it does to the control column. There is a high death rate in this trial, so the death rate for the experimental column is pulled higher than the control column.
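A toy demonstration of this reversal, with made-up counts chosen only to mimic the imbalance described above (not the steroids data):

```python
# Toy demonstration (made-up counts) of why simply adding arms across
# trials misleads: each trial favours treatment, yet the naive total
# reverses the direction of effect.
trials = [
    # (events_t, n_t, events_c, n_c)
    (100, 200, 27, 50),  # high-event trial with the bigger treatment arm
    (5, 50, 24, 200),    # low-event trial with the bigger control arm
]

for a, n1, c, n2 in trials:
    print(f"within-trial RR = {(a / n1) / (c / n2):.2f}")  # both below 1

totals = [sum(t[i] for t in trials) for i in range(4)]  # add the arms together
a, n1, c, n2 = totals
print(f"naive pooled RR = {(a / n1) / (c / n2):.2f}")  # above 1: reversed
```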
Interpretation - “Evidence of absence” vs “Absence of evidence” • If the confidence interval crosses the line of no effect, this does not mean that there is no difference between the treatments • It means we have found no statistically significant difference in the effects of the two interventions
In the example below, as more data are included, the overall odds ratio remains the same but the confidence interval narrows. It is not true that there is ‘no difference’ shown in the first rows of the plot – there just isn’t enough data to show a statistically significant result.
Interpretation - Weighing up benefit and harm • When interpreting results, don’t just emphasise the positive results. • A treatment might cure acne instantly, but kill one person in 10,000 (very important as acne is not life threatening).
Interpretation - Quality • Rubbish studies = unbelievable results • If all the trials in a meta-analysis were of very low quality, then you should be less certain of your conclusions. • Instead of “Treatment X cures depression”, try “There is some evidence that Treatment X cures depression, but the data should be interpreted with caution.”
What is heterogeneity? • Heterogeneity is variation between the studies’ results
Causes of heterogeneity Differences between studies with respect to: • Patients: diagnosis, in- and exclusion criteria, etc. • Interventions: type, dose, duration, etc. • Outcomes: type, scale, cut-off points, duration of follow-up, etc. • Quality and methodology: randomised or not, allocation concealment, blinding, etc.
How to deal with heterogeneity 1. Do not pool at all 2. Ignore heterogeneity: use a fixed-effect model 3. Allow for heterogeneity: use a random-effects model 4. Explore heterogeneity: see the “Dealing with heterogeneity” workshop
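That workshop covers the details; as a sketch, heterogeneity is commonly quantified with Cochran’s Q and the I² statistic, shown here on the same made-up trials used in the earlier sketches:

```python
import math

# Sketch: Cochran's Q and I^2 from study log ORs and inverse-variance
# weights. Same made-up illustrative trials as in the earlier sketches.
trials = [(15, 100, 25, 100), (8, 50, 12, 50), (60, 400, 80, 400)]

ys, ws = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    ys.append(math.log((a * d) / (b * c)))
    ws.append(1 / (1/a + 1/b + 1/c + 1/d))

pooled = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
Q = sum(w * (y - pooled) ** 2 for w, y in zip(ws, ys))  # Cochran's Q
df = len(trials) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0  # % variation beyond chance
print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.0f}%")
```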