Learn how to critically evaluate the methodology of a research report, understand the hierarchy of evidence in epidemiologic studies, and identify and prevent pitfalls and errors in research.
A Brief Introduction to Epidemiology - XII (Critiquing the Research: Methodological Issues) Betty C. Jung, RN, MPH, CHES
Learning/Performance Objectives • To critically evaluate the methodology of a research report • To understand the hierarchy of evidence as it relates to epidemiologic studies • To understand the pitfalls and errors of epidemiological studies • To understand how to correct, control, or prevent such errors
Introduction The primary purpose of research is to conduct a scientific, or scholarly, investigation into a phenomenon or to answer a burning question. Research is defined as a systematic approach to problem solving.
Hierarchy of Evidence • Systematic reviews & meta-analyses • Randomized controlled trials with definitive results (non-overlapping confidence intervals) • Randomized controlled trials with non-definitive results (a point estimate that suggests a clinically significant effect but with overlapping confidence intervals)
Hierarchy of Evidence (continued) • Cohort studies • Case-control studies • Cross-sectional surveys • Case Reports
Choosing the Right Study Design • [Table from WHO, not reproduced here: suitability of each study design for different research questions, rated from + through +++++ (degree of suitability); – = not suitable; (b) if prospective; (c) if population based]
Frequency of Epidemiologic Studies • Cross-sectional (46%) • Cohort studies (29%) • Case-control studies (6%) • Other (e.g., case studies) (19%)
Limitations of Research Based on the Scientific Method • Every research study has flaws • No single study proves or disproves a hypothesis • Ethical issues can constrain researchers • Adequate control is hard to maintain in a study
Explanations for Artifactual Associations • Information Bias • Selection Bias • Failure to control for confounding variables • Ecologic fallacy • Sampling variability or chance
Errors in Epidemiological Studies • Random Error • Sample Size Calculations • Systematic Error • Selection Bias • Measurement Bias • Confounding • Validity • Internal Validity • External Validity
Random Error • Divergence, due to chance alone, of an observation on a sample from the true population value, leading to lack of precision in the measurement of an association • Sources of Random Error • Sampling error • Biological variation • Measurement error
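A quick way to see random (sampling) error at work is to simulate it. The Python sketch below uses an invented true prevalence of 20% (an assumption for illustration, not a figure from this lecture) and shows how sample estimates scatter around the true value, with the scatter shrinking as the sample grows.

```python
import random

random.seed(42)
TRUE_PREVALENCE = 0.20  # assumed "true" population value (illustrative)

def sample_prevalence(n):
    """Estimate prevalence from a random sample of n people."""
    cases = sum(random.random() < TRUE_PREVALENCE for _ in range(n))
    return cases / n

for n in (50, 500, 5000):
    estimates = [sample_prevalence(n) for _ in range(1000)]
    mean = sum(estimates) / len(estimates)
    sd = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"n={n:5d}: mean estimate={mean:.3f}, spread (SD)={sd:.4f}")
```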
Sample Size Calculations Variables to consider: • Required level of statistical significance of the expected result • Acceptable chance of missing the real effect • Magnitude of the effect under investigation • Amount of disease in the population • Relative sizes of the groups being compared
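These variables map directly onto the standard normal-approximation formula for comparing two proportions. A minimal sketch in Python (the 5% significance level, 80% power, and 10% vs. 15% risks are illustrative assumptions, not values from the lecture):

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n to detect p1 vs p2 (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # required significance level
    z_beta = NormalDist().inv_cdf(power)           # 1 - chance of missing the effect
    variance = p1 * (1 - p1) + p2 * (1 - p2)       # reflects amount of disease
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Illustrative: detect a rise in risk from 10% to 15%
print(sample_size_two_proportions(0.10, 0.15))  # -> 683 per group
```

Note how the risk difference enters squared in the denominator: roughly, halving the magnitude of the effect under investigation quadruples the required sample size.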
Systematic Error (Bias) • Occurs when there is a tendency to produce results that differ in a systematic manner from the true values • A study with a small systematic error is considered highly accurate • Accuracy is not affected by sample size • Principal biases • Selection Bias • Measurement (Classification) Bias • Confounding
Selection Bias • Occurs when there is a systematic difference between the characteristics of the people selected for a study and the characteristics of those who are not • Distortion of effect resulting from the way participants are accepted into studies • Healthy Worker Effect – risk for certain illnesses in industrial working populations is lower than in the general population
Sources of Selection Bias • Volunteers for studies are almost always selective • Paid participants may be selectively different from the general population • Hospital and clinical data are based on a selective population • Disease or factor under investigation makes people unavailable for study
Measurement Bias • Occurs when individual measurements or classifications of disease or exposure are inaccurate • If it occurs equally in the groups being compared (non-differential bias), it results in an underestimate of the true strength of the relationship • Sources: • Quality of laboratory analysis • Recall bias
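The pull toward the null can be shown with expected counts. In this hypothetical sketch, exposure is measured with the same imperfect sensitivity and specificity in cases and controls (all counts and accuracy figures are invented), and the observed odds ratio shrinks from 4.0 toward 1.0:

```python
def odds_ratio(exp_cases, unexp_cases, exp_controls, unexp_controls):
    """Odds ratio from a 2x2 case-control table."""
    return (exp_cases * unexp_controls) / (unexp_cases * exp_controls)

def misclassify(exposed, unexposed, sensitivity, specificity):
    """Expected exposed/unexposed counts after imperfect measurement."""
    obs_exposed = exposed * sensitivity + unexposed * (1 - specificity)
    obs_unexposed = exposed * (1 - sensitivity) + unexposed * specificity
    return obs_exposed, obs_unexposed

# Invented true counts: cases 200 exposed / 100 unexposed,
# controls 150 exposed / 300 unexposed
print(f"True OR:     {odds_ratio(200, 100, 150, 300):.2f}")     # 4.00

# Same sensitivity/specificity in both groups -> non-differential bias
a, b = misclassify(200, 100, sensitivity=0.8, specificity=0.9)  # cases
c, d = misclassify(150, 300, sensitivity=0.8, specificity=0.9)  # controls
print(f"Observed OR: {odds_ratio(a, b, c, d):.2f}")             # ~2.62
```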
Confounding • Occurs when another exposure exists in the study population and is associated with both the disease and the exposure being studied • When the effects of two exposures (risk factors) have not been separated, and incorrect conclusions are drawn that the effect is due to one rather than the other variable
Confounding (continued) • May create the appearance of a cause-effect relationship that really does not exist • For a variable to be a confounder, it must itself be a determinant (risk factor) of the disease and be associated with the exposure being studied • Age and social class are common confounders
Controlling Confounding through Study Design • Randomization – experimental studies only; sample size must be sufficient to avoid random maldistribution • Restriction – limit study to those with particular characteristics • Matching – case-control studies; potential confounding variables are evenly distributed in all study groups
Controlling Confounding During Analysis • Stratification – used in large studies; measuring the strength of associations in well-defined and homogeneous categories (strata) of the confounding variable • Statistical (Multivariate) Modeling – for estimating the strength of association while controlling multiple confounding variables at the same time
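Stratified analysis is often summarized with the Mantel-Haenszel estimator. Here is a minimal Python sketch in which the stratum tables are invented so that the confounder fully explains the crude association: the crude odds ratio suggests an effect, while the stratum-adjusted estimate is 1.0.

```python
def mantel_haenszel_or(strata):
    """Summary OR across strata; each stratum is a tuple
    (exposed_cases, unexposed_cases, exposed_controls, unexposed_controls)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Invented strata of a confounder (e.g., age); within each stratum OR = 1.0
strata = [
    (10, 20, 90, 180),   # younger stratum
    (60, 30, 140, 70),   # older stratum
]

# Crude analysis collapses the strata and ignores the confounder
a, b, c, d = (sum(col) for col in zip(*strata))
print(f"Crude OR:           {a * d / (b * c):.2f}")             # ~1.52
print(f"Mantel-Haenszel OR: {mantel_haenszel_or(strata):.2f}")  # 1.00
```

The gap between the crude and adjusted estimates is exactly the confounding described on the preceding slides.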
Validity • The degree to which a test is capable of measuring what it is intended to measure • Two types • Internal – degree to which the results of an observation are correct for the particular group studied • External (generalizability) – extent to which the study’s results can be applied to those beyond the study sample
Reliability • Repeatability • Example: having both Observer A and Observer B examine subjects from all study groups, with subjects randomly assigned to each observer, ensures that any observer errors are spread across the groups. This also helps avoid spurious results
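Inter-observer repeatability is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance alone. A self-contained sketch (the ten ratings below are invented for illustration):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two observers' categorical ratings."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
    expected = sum(
        (ratings_a.count(k) / n) * (ratings_b.count(k) / n) for k in categories
    )
    return (observed - expected) / (1 - expected)

# Invented example: Observers A and B classify 10 subjects as diseased (1) or not (0)
obs_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
obs_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(f"kappa = {cohens_kappa(obs_a, obs_b):.2f}")  # ~0.62 (80% raw agreement)
```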
Ecological Fallacy • The error of assuming that, because two or more characteristics expressed as group indices occur together, they must also be associated in individuals • Unless ecologic studies can create specific rates for subpopulations, they are not proof of an association
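The fallacy is easy to reproduce with invented region-level figures: in the sketch below, the region with more exposure has more disease, yet within every region exposed individuals have half the risk of the unexposed.

```python
# Invented region-level data: (exposure prevalence, risk if exposed, risk if unexposed)
regions = {
    "Region A": (0.20, 0.05, 0.10),
    "Region B": (0.80, 0.10, 0.20),
}

for name, (prevalence, risk_exp, risk_unexp) in regions.items():
    overall = prevalence * risk_exp + (1 - prevalence) * risk_unexp
    print(f"{name}: exposure {prevalence:.0%}, overall disease rate {overall:.1%}, "
          f"relative risk within region {risk_exp / risk_unexp:.2f}")

# Group level: the high-exposure region has more disease (12% vs 9%).
# Individual level: exposed people have HALF the risk (RR = 0.50) in both regions.
```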
Cohort Effect • When data suggest the possibility that they are demonstrating the experience of one particular group (cohort) over time • Age (Birth) Cohort Effect – a nonfatal persistent birth disorder can be highly prevalent at birth and persist in that birth cohort through time (e.g., the thalidomide babies of the early 1960s)
When to Suspect a Cohort Effect in Cross-sectional Studies • Any association of disease with age • An unexpected dip or increase in the distribution of a disease by age (bimodal distribution) • An unexpected secular decline in a nontreatable disease
Association as Causal: Hill's 9 Rules of Evidence • Strength • Consistency • Specificity • Temporality • Biological gradient • Plausibility • Coherence • Experiment • Analogy
Pitfalls of Systematic Reviews & Meta-Analyses • Rarely do the results of different studies agree, and the number of patients in any one study may not be large enough to support a firm conclusion • Studies may be omitted if authors are interested in supporting a particular point • Publication bias – studies with negative (null) results may not get published, and therefore may be excluded
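The effect of publication bias on a pooled estimate can be sketched with fixed-effect inverse-variance pooling of log odds ratios (the five studies below are invented; dropping the null ones inflates the summary):

```python
from math import exp, log

def pooled_or(studies):
    """Fixed-effect inverse-variance pooling of log odds ratios.
    Each study is (odds_ratio, standard_error_of_log_or)."""
    weights = [1 / se ** 2 for _, se in studies]
    log_ors = [log(or_) for or_, _ in studies]
    return exp(sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights))

# Invented studies; the last three are "negative" (null or protective)
all_studies = [(1.8, 0.30), (1.4, 0.25), (1.0, 0.20), (0.9, 0.22), (1.1, 0.18)]
published = all_studies[:2]  # suppose only the "positive" studies reach print

print(f"Pooled OR, all five studies: {pooled_or(all_studies):.2f}")  # ~1.13
print(f"Pooled OR, published only:   {pooled_or(published):.2f}")    # ~1.55
```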
Research Ethics • Epidemiologists adhere to principles of biomedical ethics • Free and voluntary informed consent and right to withdraw by participants • Respect for personal privacy and confidentiality • People who have been exposed to a health hazard and become part of epidemiological studies need to understand that such studies may not improve their personal situation but may help to protect other people
References • For Internet Resources on the topics covered in this lecture, check out my Web site: http://www.bettycjung.net • Other lectures in this series: http://www.bettycjung.net/Bite.htm