
Introduction to Critical Appraisal: Assessing Validity, Results, and Usefulness of Research

This presentation provides an introduction to critical appraisal, including its definition, differences between systematic reviews and meta-analyses, and the process of critically appraising research evidence. It covers topics such as levels of evidence, interpretation of statistics in meta-analyses, and the importance of critically appraising research. Advantages and disadvantages of critical appraisal are discussed, along with the necessary knowledge and resources for effective appraisal.


Presentation Transcript


  1. Introduction to Critical Appraisal CHRIS REDMAN ALEX SANCHEZ-VIVAR

  2. Presentation Overview • Introduction to critical appraisal • Definition, differences, strengths and weaknesses of systematic reviews and meta-analyses • Sources of systematic reviews/meta-analyses • Levels of Evidence • Interpretation of basic statistics in meta-analyses – confidence intervals, forest plots • Critical appraisal of systematic reviews/meta-analyses

  3. What is critical appraisal? • “Critical appraisal is the process of systematically examining research evidence to assess its validity, results, and relevance before using it to inform a decision” (Hill and Spittlehouse, 2001, p.1) • Consideration of quantitative and qualitative aspects

  4. Critical appraisal is not • Negative dismissal of any piece of research • Assessment on results alone • Based entirely on statistical analysis • Undertaken by experts only

  5. Why critically appraise? • To find out the validity of the study • are the methods robust? • To find out the reliability of the study • what are the results and are they credible? • To find out the applicability of the study • is it important enough to change my practice?

  6. How do I critically appraise research? • Be (critically) open to everything • Believe (in principle) papers from high quality journals • Read & decide yourself • Let other people read and decide for (with) you • Read for yourself and make a structured appraisal

  7. Critical appraisal • Advantages • systematic way of assessing validity, results & usefulness of research • contributes to improving practice (quality) • encourages objective assessment of information • not difficult to develop skills • Disadvantages • time consuming • not always any easy answers or what you hoped to find • dispiriting if ‘good’ evidence is lacking, i.e. little / poor research done

  8. Critical appraisal • BUT… you can all do it with the right tools & guidance • What do I need to know? • Awareness of study designs • Levels of evidence • Statistics!! • CA checklists • CA resources

  9. Awareness of study design

  10. Observational study designs: measures of disease, measures of risk, and temporality

  11. What is a systematic review? • A review prepared using some kind of systematic approach to minimising biases and random errors, with the components of that approach documented in a materials and methods section (Chalmers et al, 1995)

  12. What is a systematic review? [Diagram: ‘Systematic reviews’ shown as a subset of all ‘Reviews’]

  13. Rationale for systematic reviews • Information overload • Publication bias • Poor quality of reviews • Vitamin C and the prevention of the common cold (Pauling 1986) • Missing link • Inhalation of hexamethonium (comment by Clark et al, 2001)

  14. Sources of systematic reviews • The Cochrane Library • www.library.nhs.uk • DARE (in Cochrane Library ‘Other reviews’) • Health Technology Assessments (in Cochrane Library ‘Technology Assessments’) • Medline, Cinahl, Embase search on ‘systematic review’ in title, abstract • PubMed – Systematic Review in Limits > Topic • TRIP • www.tripdatabase.com • Evidence Based Reviews - Journals and Databases • https://www.library.nhs.uk/evidence • NHS Evidence • https://www.evidence.nhs.uk/

  15. Format of a systematic review • Formulation of a review question • Define inclusion/exclusion criteria • Locate studies • Select studies (inclusion/exclusion) • Assess study quality • Data extraction • Analyse and present results • Interpretation of results Egger et al, 2001

  16. Formulation of review question • Is the question focused in terms of • Population studied • Intervention/exposure given • Outcomes considered Do anticoagulants prevent strokes in patients with atrial fibrillation?

  17. Define inclusion/exclusion criteria • Were the right types of studies included to answer the question? Depends on the question. • Can have observational studies (cohort, case-control), diagnostic/screening tests, prognostic, non-randomised trials • Studies should be defined according to their design, participant characteristics, interventions and outcomes

  18. Locate studies • Comprehensive search • Databases • Conference proceedings • Hand searching • Grey literature (reports, research registers) • Foreign language • Follow-up references • Contacting experts/authors • Publication bias – unpublished studies • Explicit

  19. Select and Assess Studies • Eligibility criteria for study selection can be applied • More than one reviewer can help reduce bias • Checklists/scoring systems

  20. What do the findings mean? • Effect measures – odds ratios, relative risk, mean difference • P-values • Confidence intervals

  21. Using statistics • Assess the weight of the evidence that a treatment works (or doesn’t) • Give an estimate (and likely range) of the treatment effect • Test to see how likely it is that this effect would have been seen by chance

  22. Odds ratio (OR) • Compares the odds of having an event (versus not having it) between two groups • OR = odds in the treated group / odds in the control group

  23. OR=1 the treatment has an identical effect to the control • OR<1 the event is less likely in the treated group than in the control group (i.e. the treatment reduces the chance of having the event) • OR>1 the event is more likely in the treated group than in the control group (i.e. the treatment increases the chance of having the event) • Clinical trials typically look for treatments which reduce event rates, and which therefore have odds ratios of less than one
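
To make the arithmetic on slides 22–23 concrete, here is a minimal Python sketch of an odds ratio calculated from a hypothetical 2×2 table; the counts are invented purely for illustration and are not from any real study.

```python
# Hypothetical 2x2 table (invented counts, for illustration only):
#                 event   no event
# treated           20        80
# control           40        60
events_treated, no_events_treated = 20, 80
events_control, no_events_control = 40, 60

odds_treated = events_treated / no_events_treated   # 20/80 = 0.25
odds_control = events_control / no_events_control   # 40/60 = 0.67

odds_ratio = odds_treated / odds_control             # about 0.38: OR < 1, so the
print(f"OR = {odds_ratio:.2f}")                      # event is less likely with treatment
```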

  24. Importance of defining the outcome

  25. P-values – significance test • A p-value is a measure of statistical significance: the probability of obtaining a result at least as extreme as the one observed if the two interventions really had the same effect (i.e. if chance alone were at work) • P-values range from 0 to 1 • The closer the p-value is to zero, the less likely it is that the observed difference is due to chance alone

  26. Statistical significance • In general, p-values of either 0.05 or 0.01 are used as a cut-off value, although this value is arbitrary • P-value of <0.05 indicates the result is unlikely to be due to chance • P-value of >0.05 indicates the result might have occurred by chance.

  27. Be careful… • A p-value in the non-significant range tells you either that there is no difference between the groups or that there were too few subjects to demonstrate such a difference (ideally confidence intervals should also be reported) • There is not much difference between p=0.049 and p=0.051 • P-values do not indicate the magnitude of the observed difference between treatments, which is what is needed to judge clinical significance
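
As a hedged illustration of how a p-value might be obtained for the same hypothetical 2×2 table used above, the sketch below applies Fisher's exact test from SciPy; the counts are invented and the choice of test is an assumption for illustration, not something prescribed by the slides.

```python
from scipy.stats import fisher_exact

# Same invented 2x2 table as in the odds-ratio sketch above.
table = [[20, 80],   # treated: event, no event
         [40, 60]]   # control: event, no event

odds_ratio, p_value = fisher_exact(table)   # two-sided test by default
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")

# A small p-value (< 0.05) suggests the observed difference is unlikely to be
# due to chance alone; a larger p-value does not prove the groups are the same,
# since the study may simply have been too small (hence also report the CI).
```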

  28. Interpretation of Confidence Intervals • Confidence interval is the range within which we have a measure of certainty that the true population value lies OR • The confidence interval around a result obtained from a study sample (point estimate) indicates the range of values within which there is a specific certainty (usually 95%) that the true population value for that result lies. (MeReC Briefing 2005)

  29. What can a CI tell us? • Tells us whether the result is statistically significant or not • The width of the interval indicates precision: wider intervals suggest less precision • Indicates whether the evidence is strong or weak • The most commonly used confidence level is 95%. Therefore, the 95% CI is the range within which we are 95% certain that the true population value lies

  30. Confidence Intervals reported on Ratios (odds ratio, etc) • For a ratio, the ‘line of no effect’ is at 1 • If the CI for an RR or OR includes 1 (the line of no effect), then we cannot demonstrate a statistically significant difference between the two groups
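
A minimal sketch of a 95% confidence interval for an odds ratio, using the standard log-odds (Woolf) method on the same invented 2×2 counts as above; this is one common approach, offered here only as an illustration.

```python
import math

a, b = 20, 80   # treated: events, no events (invented counts)
c, d = 40, 60   # control: events, no events

log_or = math.log((a * d) / (b * c))                 # log odds ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)         # standard error of log OR

lower = math.exp(log_or - 1.96 * se_log_or)
upper = math.exp(log_or + 1.96 * se_log_or)
print(f"OR = {math.exp(log_or):.2f}, 95% CI {lower:.2f} to {upper:.2f}")

# If this interval excludes 1 (the line of no effect), the result is
# statistically significant at the conventional 5% level; if it includes 1,
# a statistically significant difference has not been demonstrated.
```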

  31. What is a meta-analysis? • A statistical analysis of the results from independent studies, which generally aims to produce a single estimate of the treatment effect Egger et al, 2001
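
To show how "a single estimate of the treatment effect" can be produced, here is a minimal fixed-effect (inverse-variance) pooling sketch on the log-odds-ratio scale; the per-study ORs and standard errors are invented, and real reviews would also examine heterogeneity and consider random-effects models.

```python
import math

# (label, odds ratio, standard error of the log OR) - invented values
studies = [
    ("Study A", 0.60, 0.25),
    ("Study B", 0.45, 0.30),
    ("Study C", 0.80, 0.20),
]

weights = [1 / se**2 for _, _, se in studies]        # inverse-variance weights
log_ors = [math.log(or_) for _, or_, _ in studies]

pooled_log_or = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

lower = math.exp(pooled_log_or - 1.96 * pooled_se)
upper = math.exp(pooled_log_or + 1.96 * pooled_se)
print(f"Pooled OR = {math.exp(pooled_log_or):.2f} (95% CI {lower:.2f} to {upper:.2f})")
```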

  32. Interpretation of forest plots – Effect of probiotics on the risk of antibiotic associated diarrhoea (D'Souza, A. L et al. BMJ 2002;324:1361)

  33. Interpretation of forest plots • Look at the title of the forest plot, the intervention, the outcome, the effect measure and the scale • The names on the left are the authors of the primary studies included in the MA • The small squares represent the point estimates of the individual trials • The size of each square represents the weight given to that study in the meta-analysis • The horizontal lines associated with each square represent the confidence interval associated with each result • The vertical line represents the line of no effect, i.e. the point at which there is no difference between the treatment/intervention group and the control group • The pooled analysis is shown as a diamond; the horizontal width of the diamond represents its confidence interval
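
The checklist above maps directly onto how a forest plot is drawn. The matplotlib sketch below lays out the same elements (squares sized by study weight, horizontal CI lines, a vertical line of no effect, and a pooled-result diamond); all numbers are invented for illustration only.

```python
import matplotlib.pyplot as plt

# Invented study results: (label, OR, lower 95% CI, upper 95% CI, % weight)
rows = [
    ("Study A", 0.60, 0.36, 0.99, 30),
    ("Study B", 0.45, 0.25, 0.81, 20),
    ("Study C", 0.80, 0.54, 1.18, 50),
    ("Pooled",  0.62, 0.48, 0.80, None),   # pooled estimate drawn as a diamond
]

fig, ax = plt.subplots(figsize=(6, 3))
for i, (label, or_, lo, hi, weight) in enumerate(rows):
    y = len(rows) - i
    ax.plot([lo, hi], [y, y], color="black")                  # 95% CI line
    if weight is not None:
        ax.plot(or_, y, marker="s", color="black",
                markersize=4 + weight / 5)                    # square sized by weight
    else:
        ax.fill([lo, or_, hi, or_], [y, y + 0.2, y, y - 0.2],
                color="black")                                # diamond: width = CI

ax.axvline(1.0, linestyle="--", color="grey")                 # line of no effect
ax.set_xscale("log")                                          # ratio scale
ax.set_yticks(range(1, len(rows) + 1))
ax.set_yticklabels([r[0] for r in reversed(rows)])
ax.set_xlabel("Odds ratio (log scale)")
plt.tight_layout()
plt.show()
```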

  34. Advantages of a systematic review/meta-analysis • Limits bias in identifying and excluding studies • Objective • Good quality evidence, more reliable and accurate conclusions • Added power by synthesising individual study results • Control over the volume of literature

  35. Drawbacks to systematic reviews/meta-analyses • Can be done badly • 2 systematic reviews on same topic can have different conclusions • Inappropriate aggregation of studies • A meta-analysis is only as good as the papers included • Tend to look at ‘broad questions’ that may not be immediately applicable to individual patients

  36. Conclusion • Critical appraisal of systematic reviews and other research is well within your capabilities • Use a recognised checklist (e.g. SIGN) • Update your literature searching skills regularly (contact your library skills trainer)

  37. D'Souza, A. L et al. BMJ 2002;324:1361

  38. (Other) Critical appraisal checklists • CASP (Critical Skills Appraisal Programme) • http://www.phru.nhs.uk/casp/critical_appraisal_tools.htm • JAMA Users’ Guides to the Medical Literature • http://www.cche.net/usersguides/main.asp • Crombie I (1996) The Pocket Guide to Critical Appraisal, BMJ Books, London • Greenhalgh T (2001) How to Read a Paper, BMJ Books, London • BestBETs CA database • http://www.bestbets.org/cgi-bin/browse.pl?~show=appraisal

  39. References • Systematic reviews in health care [electronic resource]: meta-analysis in context / edited by Matthias Egger, George Davey Smith, and Douglas G. Altman. BMJ Books 2001 (ebook) • What is a systematic review?, What is a meta-analysis?, What are confidence intervals? • http://www.evidence-based-medicine.co.uk/what_is_series.html • Understanding systematic reviews and meta-analysis. Akobeng AK. Archives of Disease in Childhood 2005;90:845-848.

  40. References • Cochrane Open Learning Material: Systematic Reviews and Meta-analyses (useful Forest Plot interpretation PDF) • http://www.cochrane-net.org/openlearning/HTML/mod3-2.htm • Funnel plots • Bias in meta-analysis detected by a simple, graphical test. Egger M, et al BMJ 1997 (315):629-634 • The case of the misleading funnel plot. Lau J, et al. BMJ 2006 (333):597-600 • Heterogeneity • What is heterogeneity and is it important? Fletcher J BMJ 2007;334:94-6

  41. Effect of probiotics on the risk of antibiotic associated diarrhoea – The label tells you what the comparison and outcome of interest are

  42. Effect of probiotics on the risk of antibiotic associated diarrhoea Scale measuring treatment effect. Take care when reading labels!

  43. Effect of probiotics on the risk of antibiotic associated diarrhoea Each study has an ID (author)

  44. Effect of probiotics on the risk of antibiotic associated diarrhoea Treatment effect sizes for each study (plus 95% CI)

  45. Effect of probiotics on the risk of antibiotic associated diarrhoea – Horizontal lines are confidence intervals. The diamond shape is the pooled effect; the horizontal width of the diamond is its confidence interval

  46. Effect of probiotics on the risk of antibiotic associated diarrhoea – The vertical line in the middle is the line of no effect. For ratios this is 1, for means this is 0

  47. Rationale for meta-analysis Conventional and cumulative meta-analysis of 33 trials of intravenous streptokinase for acute myocardial infarction. Mulrow, C D BMJ 1994;309:597-599
