Evidence-Informed Practice Training
Module 3: CRITICAL APPRAISAL
Evaluating the Evidence
Research and Evaluation Department
Learning Objectives
• Participants will be able to identify and classify different types of research studies according to the hierarchy of evidence
• Participants will be able to determine the level of evidence according to the hierarchy of evidence
• Participants will use the relevant critical appraisal tool to appraise a systematic review, a practice guideline and a qualitative report.
Why Critically Appraise / Evaluate?
What is critical appraisal when applied to research?
• Critical appraisal is the process of systematically assessing and interpreting research studies by asking 3 key questions:
• Is the study valid?
• What are the results?
• How relevant are the results from this study to my workplace?
• Critical appraisal is an essential part of evidence-informed practice and allows us to assess the quality of research evidence and decide whether a body of research evidence is good enough to be used in decision making.
Kostoris Library, The Christie NHS Foundation Trust. Retrieved October 28, 2009, from http://www.christie.nhs.uk/pro/library/Critical-Appraisal.aspx
Why critically appraise?
• To determine whether research deserves your consideration, not just to expose shoddy research methods
• Critical appraisal doesn't mean rejecting papers or evidence outright; some findings may be relevant
• Critical appraisal does require some knowledge of research design, but not detailed knowledge of statistics
• Critical appraisal may not provide you with the "easy" answer
• Critical appraisal may mean you find that a favored intervention is in fact ineffective
To determine the strength of the evidence found... in order to make sound decisions... about applying the findings to clinical practice.
All research falls into two broad categories: QUANTITATIVE and QUALITATIVE.
Evidence pyramid (QUANTITATIVE): secondary research at the apex, primary research at the base.
Quantitative evidence – Primary (unfiltered): Experimental vs. Observational
• Randomized controlled trial (researcher determines exposure and observes outcomes prospectively) – Experimental
• Cohort study (researcher studies and compares exposed and unexposed groups retrospectively or prospectively) – Observational
• Case-control study / case series (researcher retrospectively examines cases and may look for controls or comparisons) – Observational
Quantitative evidence – Primary (unfiltered)
The quick tip: Did the investigator CHOOSE the treatment?
• YES: If the treatment was assigned, this is an experiment.
• NO: If the treatment was not assigned, this is an observational study, and we are simply observing what happens naturally.
Study Designs
• SECONDARY
• Guidelines
• Systematic Reviews
• PRIMARY
• Descriptive
• Narrative Reviews
• Cross-Sectional (Surveys)
• Ecological (Correlational)
• Case Reports
• Case Series
• Analytical / Epidemiological
• Interventional
• Randomized Controlled Trials
• Clinical Controlled Trials
• Observational
• Case-Control Studies
• Cohort Studies
Qualitative evidence – primary (unfiltered) Major qualitative research approaches: • Grounded theory – looks to develop theory about basic social processes (Lobiondo-Wood & Haber, 2009) • Ethnography – focuses on scientific description and interpretation of cultural or social groups and systems (Creswell, 1998) • Phenomenology – goal is to understand the meaning of a particular experience as it is lived by the participant (Lobiondo-Wood & Haber, 2009) • Case Study Method – reviews the peculiarities and the commonalities of a specific case (Creswell, 1998)
REVIEW
• VALIDITY (research design): Is the quality of the study good enough to use the results?
• RESULTS (treatment effect): What do the results mean for my patients (clinical significance)?
• RELEVANCE (applicability / generalizability of results): Are the findings applicable in my setting?
REVIEW: Systematic Reviews, Meta-syntheses, Practice Guidelines
How to appraise systematic reviews, practice guidelines & other publications: discover quality
REVIEW
• Practice Guidelines (quantitative and qualitative; top of the evidence pyramid): systematically developed statements to assist practitioner and patient decisions about appropriate health care for specific circumstances.
• Systematic Reviews (top of the evidence pyramid): a research summary of all evidence that relates to a particular question; the question could be about intervention effectiveness, causation, diagnosis or prognosis (Cullum et al., 2008).
• Meta-Analysis (quantitative): combines quantitative data across studies.
• Meta-Synthesis (qualitative): the synthesis of findings across multiple qualitative reports.
EB CLINICAL PRACTICE GUIDELINE • Definition: A systematically developed statement based on scientific literature that helps practitioners and their patients to make appropriate health care decisions specific to their clinical circumstances. • Purpose: To make explicit recommendations with a definite intent to influence what clinicians do.
EB CLINICAL PRACTICE GUIDELINES
INTENDED GOALS
• Enhance patient / health outcomes
• Increase cost-effectiveness of health care delivery
• Synthesize large volumes of information
• Codify optimal practice as an education tool
CPGs must be flexible and reflect the unique nature of each patient and practice setting.
Appraisal of Guidelines Research & Evaluation Instrument (AGREE)
• VALIDITY: Methodology
• RESULTS: Final Recommendations
• RELEVANCE: Potential Uptake
Appraisal of Guidelines Research & Evaluation Instrument (AGREE)
Consists of 23 key items in 6 domains:
• Scope and Purpose (3)
• Stakeholder Involvement (4)
• Rigour of Development (7)
• Clarity and Presentation (4)
• Applicability (3)
• Editorial Independence (2)
AGREE continued
1. Scope and purpose – concerned with the overall aim of the guideline, the specific clinical questions and the target patient population.
2. Stakeholder involvement – focuses on the extent to which the guideline represents the views of its intended users.
3. Rigour of development – relates to the process used to collect and synthesize the evidence, and the methods used to formulate recommendations and to update the guideline.
4. Clarity and presentation – looks at the language and format of the guideline.
AGREE continued
5. Applicability – pertains to the potential organizational, behavioral and cost implications of the guideline.
6. Editorial independence – concerned with the independence of the recommendations and acknowledgement of possible conflicts of interest within the guideline development group.
Appraising Systematic Reviews and Other Study Types
Critical Appraisal Skills Programme (CASP)
Since its 'birth' in 1993, the Critical Appraisal Skills Programme (CASP) has helped to develop an evidence-based approach in health and social care, working with local, national and international groups. CASP aims to enable individuals to develop the skills to find and make sense of research evidence, helping them to put knowledge into practice. See http://www.phru.nhs.uk/pages/PHD/CASP.htm for appraisal tools.