  1. Validity in Qualitative Research: a comparative analysis of three online appraisal instruments' ability to establish rigor. Karin Hannes, Centre for Methodology of Educational Research, K.U.Leuven

  2. The Aspirine Case

  3. To appraise or not to appraise • The more you appraise, the more it stifles creativity. The main criterion is relevance! • The more you appraise, the smaller the chance of ending up with flawed results. The main criterion is quality!

  4. To appraise or not to appraise… • That’s no longer the question… I appraise! • The question is…

  5. Which instrument am I going to use? Which criteria am I going to use when evaluating methodological quality?

  6. Which instrument? • MAKING SENSE OF THE MYRIAD OF CRITICAL APPRAISAL INSTRUMENTS

  7. Which instrument? • Selection of appraisal instruments: • Used in recently published QES (2005-2008) • Available online and ready to use • Broadly applicable to different qualitative research designs • Developed and supported by an organisation, institute or consortium, i.e. in a context other than individual academic interest.

  8. Which criteria are used to evaluate the quality of a study? • Three instruments met the inclusion criteria: • Joanna Briggs Institute (JBI) Tool: http://www.joannabriggs.edu.au/cqrmg/tools_3.html • Critical Appraisal Skills Programme (CASP) Tool: http://www.phru.nhs.uk/Doc_Links/Qualitative%20Appraisal%20Tool.pdf • Evaluation Tool for Qualitative Studies (ETQS): http://www.fhsc.salford.ac.uk/hcprdu/tools/qualitative.htm • To facilitate comparison: • Criteria grouped under 11 headings • Cross-comparison of the criteria (main headings), as sketched below
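As a rough illustration of that cross-comparison step, here is a minimal Python sketch that prints a presence/absence matrix of criteria against the three instruments. The criterion labels and the yes/no values are invented placeholders for illustration only; they are not the study's 11 headings or its actual findings.

```python
# Hypothetical cross-comparison matrix: which appraisal criteria (main
# headings) does each instrument cover? All labels and yes/no values
# below are invented for illustration; see the study for the real data.
instruments = ["JBI", "CASP", "ETQS"]
coverage = {
    "congruity of methodology":   {"JBI": True,  "CASP": True,  "ETQS": True},
    "researcher reflexivity":     {"JBI": True,  "CASP": False, "ETQS": True},
    "link between data/findings": {"JBI": True,  "CASP": True,  "ETQS": True},
    "ethics":                     {"JBI": True,  "CASP": True,  "ETQS": False},
}

# Print one row per criterion, one column per instrument.
print(f"{'criterion':<28}" + "".join(f"{i:>6}" for i in instruments))
for criterion, hits in coverage.items():
    row = "".join(f"{'yes' if hits[i] else 'no':>6}" for i in instruments)
    print(f"{criterion:<28}{row}")
```

Reading the matrix row by row makes gaps visible at a glance: any "no" cell marks a heading that one instrument covers and another does not.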

  9. Which criteria are used to evaluate the quality of a study?

  10. Which criteria are used to evaluate the quality of a study? All three instruments focus on the accuracy of the audit trail = quality of reporting.

  11. Which criteria am I going to use to evaluate the quality of a study? Let's change the focus from • What has been evaluated in critical appraisal instruments? to • What should be the main focus in evaluating methodological quality in qualitative research?

  12. Which criteria am I going to use to evaluate the quality of a study? • Validity and researcher bias should be evaluated → some qualitative studies are more rigorous than others. • Epistemological and ontological assumptions of quantitative and qualitative research are incompatible → it is inappropriate to use such measures. Validity in quantitative research → instruments and procedures. Validity in qualitative research → the kinds of understanding we have of the phenomena under study (accounts identified by researchers).

  13. Validity as the main criterion • We need to know • whether the set of arguments or the conclusion derived from a study necessarily follows from the premises. • whether it is well grounded in logic or truth. • whether it accurately reflects the concepts and ideas it is intended to measure. What is validity? → operationalisation using Maxwell's framework

  14. Validity as the main criterion • Maxwell's deconstruction of the concept of validity (1992) • Descriptive validity • Interpretive validity • Theoretical validity • Generalisability (external validity) • Evaluative validity

  15. Descriptive validity • The degree to which descriptive information such as events, subjects, setting, time and place is accurately reported (facts rather than interpretation). • Evaluation techniques: methods and investigator triangulation → allows for cross-checking of observations

  16. Interpretive validity • The degree to which participants' viewpoints, thoughts, intentions, and experiences are accurately understood and reported by the researcher. • Evaluation techniques: display of citations and excerpts, use of multiple analysts (inter-rater agreement; see the sketch below), self-reflection of the researcher, (member checking).
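Where multiple analysts code the same material, inter-rater agreement is often quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is a minimal illustration under that assumption; the analyst codes are invented, and the presentation itself does not prescribe this particular statistic.

```python
# Minimal sketch: Cohen's kappa for two analysts coding the same excerpts.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
# p_e is the agreement expected by chance from the marginal frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: share of items both analysts coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each analyst's code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two analysts to ten interview excerpts.
analyst_1 = ["coping", "stigma", "coping", "access", "stigma",
             "coping", "access", "stigma", "coping", "access"]
analyst_2 = ["coping", "stigma", "stigma", "access", "stigma",
             "coping", "access", "coping", "coping", "access"]
print(f"kappa = {cohens_kappa(analyst_1, analyst_2):.2f}")  # kappa = 0.70
```

A kappa near 1 indicates strong agreement beyond chance; low values suggest the coding scheme or its application needs discussion before the interpretation can be trusted.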

  17. Theoretical validity • The degree to which a theory or theoretical explanation informing or developed from a research study fits the data and is therefore credible/defensible. • Evaluation techniques: persistent observation → stable patterns, deviant or disconfirming cases, multiple working hypotheses, theory triangulation and active search for deviant cases, pattern matching

  18. External validity (transferability) • The degree to which findings can be extended to other persons, times or settings than those directly studied. • Evaluation techniques: Demographics, contextual background information, thick description, replication logic

  19. Evaluative validity • The degree to which the phenomenon under study is legitimate and the degree to which an evaluative critique is applied to the object of study. • Evaluation techniques: application of an evaluative framework, ethics, ...

  20. What is the extent to which the different instruments establish validity?

  21. What is the extent to which the different instruments establish validity? • The most commonly used instrument, CASP, is the least sensitive to aspects of validity (findings based on screening the main headings). It addresses neither interpretive nor theoretical validity, nor context as a criterion. • The theoretical position and the background of a researcher have a direct impact on the interpretation of the findings. • Statements that have no clear link to excerpts are at risk of not being grounded in the data. • Therefore, these should be LEAD CRITERIA in a critical appraisal instrument!

  22. What is the extent to which the different instruments establish validity? • This study is limited by • its focus on the main headings of the instruments. Some of the subheadings of CASP do address issues of e.g. interpretive validity, and some issues are not addressed in the JBI tool, e.g. sampling procedures. • its focus on validity as an evaluation criterion. Which other aspects are important to consider in evaluating the quality of an instrument?

  23. What is the extent to which the different instruments establish validity? • Are there fundamental differences between appraisal instruments? • Do we need to give our choice of appraisal instrument some thought? • Could it assist us in evaluating the (methodological) quality of a study? • Does it help us to establish rigor in qualitative research?

  24. What is the extent to which the different instruments establish validity? • Checklists only capture what has been reported. • I argue for the use of verification techniques for validity as a means of obtaining rigor. • By evaluating validity at the end of a study (post hoc), rather than focusing on processes of verification during the study, we run the risk of missing serious threats to validity until it is too late to correct them.

  25. To conclude... • Basic qualitative researchers • should be motivated to adopt techniques that improve validity • should be guided in how to report qualitative research in order to facilitate critical appraisal

  26. To conclude... • The development of a standard set of reporting criteria for qualitative research is virtually impossible! • The development of a standard set of reporting criteria would facilitate critical appraisal. We might need you! To participate in a Delphi study exploring the potential feasibility, appropriateness, meaningfulness and effectiveness of reporting criteria.

  27. To conclude... • To validate is to investigate, to check, to question, and to theorize. All of these activities are integral components of qualitative inquiry that insure rigor (Morse, 2002). • The process of inquiry is where the real verification happens.

  28. Conflicts of interest: I am a Cochranite and a Campbell knight. I might substantially have been brainwashed in the 'risk of bias' discourse, beyond my personal control. Karin.hannes@ped.kuleuven.be
