
Overview of Research Methods in Dentistry


Presentation Transcript


  1. Overview of Research Methods in Dentistry Robert Weyant, DMD DrPH Department of Dental Public Health and Information Management University of Pittsburgh

  2. What is “Causation”? • Koch-Henle postulates • Bradford-Hill 'criteria' • Inductionist, refutationist, or hypothetico-deductivist views • Provides the basis for “intervention” • "Causality. There is no escape from it, we are forever slaves to it. Our only hope, our only peace is to understand it, to understand the why." (The Matrix Reloaded, written by Larry and Andy Wachowski)

  3. Hill's Criteria of Causation • Austin Bradford Hill (1897-1991), a British medical statistician, proposed his criteria as a way of determining the causal link between a specific factor (e.g., cigarette smoking) and a disease (such as emphysema or lung cancer). • Hill's Criteria form the basis of modern epidemiological research, which attempts to establish scientifically valid causal connections between a disease and its cause: • Temporal Relationship • Strength • Dose-Response Relationship • Consistency • Plausibility • Consideration of Alternate Explanations • Experiment • Specificity • Coherence

  4. Deterministic vs. Stochastic Systems • Deterministic systems: events are part of an unbroken chain of prior occurrences; outcomes occur predictably (e.g., Newtonian physics). • Stochastic systems: outcomes are computationally and practically unpredictable; the present state does not fully determine the next state. • Biology and medicine are stochastic systems.
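The contrast between the two kinds of system can be sketched in a few lines of Python (the step rules below are invented purely for illustration): a deterministic update reproduces the same trajectory on every run, while a stochastic update adds random variation, so the present state no longer fully determines the next.

```python
import random

def deterministic_step(x):
    # Newtonian-style rule: the present state fully determines the next.
    return 0.5 * x + 1.0

def stochastic_step(x, rng):
    # Biology/medicine-style rule: the same present state can yield different
    # next states, so individual outcomes are practically unpredictable.
    return 0.5 * x + 1.0 + rng.gauss(0.0, 0.2)

rng = random.Random(0)
det, sto = [1.0], [1.0]
for _ in range(5):
    det.append(deterministic_step(det[-1]))
    sto.append(stochastic_step(sto[-1], rng))

# Rerunning the deterministic loop reproduces det exactly; rerunning the
# stochastic loop with a fresh source of randomness generally does not.
```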

  5. Statistical Causality • Observational studies (like counting cancer cases among smokers and among non-smokers and then comparing the two) can give hints, but can never establish cause and effect; they are useful for hypothesis generation. • The gold standard for causation here is the randomized experiment. • One limitation of experiments: they do a good job of testing for the presence of some causal effect, but do less well at estimating the size of that effect in a population of interest. • Subject selection may lack generalizability.

  6. Research Designs In clinical research

  7. Essentials of Research Design • Basic research • Clinical research (often experimental) • Epidemiological research (often observational; the denominator is known) • Health services research • This overview is limited to human (in vivo) research

  8. What are our research(and clinical) concerns? • Exposure • Good or bad: Chemical, biological, psychological, educational, etc. • Outcome • Good or bad: disease, cure, improved attitude, longer life, etc. • We generally know one and want to measure the other • Concerns are that we measure both accurately and understand what population is represented.

  9. Classification Schemes • Descriptive vs. Analytical • Experimental vs. Observational • Time Referenced • Prospective vs. Cross-sectional vs. Retrospective

  10. Describe or Analyze? • Descriptive • Simply describe what was seen (common in surveys), e.g., the prevalence of various conditions. • PREVALENCE: the proportion of the population who exhibit the condition of interest. • Analytic • Attempt to determine the associations between disease and possible risk factors/determinants and to quantify risk (common in experimental designs and the search for causality).
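The prevalence definition above is just a proportion; a minimal sketch in Python (the function name and example counts are illustrative, not from the slides):

```python
def prevalence(cases, population):
    """Proportion of the population who exhibit the condition of interest."""
    if population <= 0:
        raise ValueError("population must be positive")
    return cases / population

# e.g., 231 people with untreated caries found in a survey sample of 1,050
print(f"Prevalence: {prevalence(231, 1050):.1%}")  # Prevalence: 22.0%
```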

  11. Experiment or Observe? • Experimentation is defined by the degree of control or manipulation the investigator has over the study conditions. • In a non-experimental (observational) design the investigator has less control over the study conditions. • The consequences of study design are in the limitations put upon the interpretation of the results of the study.

  12. Time-Referenced Designs • Retrospective: case-control studies • Cross-sectional: observations made at a single point in time • Prospective: all experimental designs and cohort (observational) studies • Time required and cost generally increase from retrospective to prospective designs [timeline figure]

  13. Classification according to CONTROL / INTERVENTION • Experimental Designs (Classic Design = RCT) • Prospective • Investigator alters the conditions under study • There is a true control group • Randomization MUST occur • Observational • May be prospective/retrospective/cross-sectional • No control • No intervention

  14. Issues of concern • Population • Control group • Sample size • Placebo • Control of Operational Procedures • Validity and Reliability of Measures • Duration • Statistical Analysis

  15. 1. Population (Relevance) • When you read a study you must ask: • Is the population representative of something I care about? • Is it appropriate to answer the question?

  16. How do people get into a “study”? • They volunteer • Often they are in the right place at the right time • They have the right disease (severity) or exposure • Often “clinic”-based studies generalize very poorly to larger populations.

  17. Why people don’t get into a study • Too sick or not sick enough • Wrong gender, race, etc…. • Don’t live in the right place. • Don’t know about the study.

  18. Generalizability of Results: where do research “subjects” come from? • Population of interest (in community) • → Present for study (barriers: lack of knowledge, referral issues, fear, transportation) • → Eligible (barriers: wrong disease severity, demographic issues) • → Consent/enroll (barriers: fear, transportation, not willing to be “randomized”) • → Complete study and can be found for follow-up (barriers: not adhering to protocol, lost to follow-up because subjects move or die)

  19. Is the study relevant and valid? • External validity • Do the study subjects represent a definable population of interest - i.e., “your patients”? • Hence, is it relevant • Internal validity • Is the study well designed and analyzed? • Hence, is it valid

  20. 2. Sample Size (did you look at enough people…) • There must be enough people in the study to ensure that the conclusions are valid. The likelihood that a finding will be spurious or incorrect decreases as you increase the number of individuals in the study. • POWER: the ability of a test to detect a significant difference when one exists; a function of effect size, variance, and sample size. • Be particularly attentive to negative studies.
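As a hedged illustration of how power ties together effect size, variance, and sample size, here is a standard normal-approximation formula for the per-arm sample size needed to compare two proportions (the function name and example proportions are assumptions, not from the slides):

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate subjects per arm for a two-sided two-proportion z-test
    to detect p1 vs p2 at the given alpha and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # quantile for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Detecting 30% vs 20% event rates at alpha=0.05, power=0.80:
print(n_per_group(0.30, 0.20))  # 291 per arm
```

Smaller effects or larger variance drive the required n up sharply, which is why an underpowered "negative" study deserves particular scrutiny.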

  21. 3. Control Group (it worked!… compared to what?) • If we are to conclude that an intervention has an effect, then we must be sure that the groups with and without the intervention were similar before the study began and remained so except for the intervention. • If not, bias can result in spurious conclusions.

  22. 4. Placebo (I feel much better... what was that?) • A placebo is a material, formulation, or intervention that is similar to the test product, but without the active ingredient. • There is a well-documented placebo effect in many situations, up to 70% in some studies.

  23. 5. Control of operational procedures (What exactly did you do, doctor?) • When reading a study for your own use, it is important that the authors explain precisely what they did. This allows the reader to generalize to his/her own situation and helps to assess relevance.

  24. 6. Reliability of measures (That was great… now do it again?) • One of the most important areas in any study: did the effect occur, and how do we know? Someone measured it. We must be able to determine that the investigator(s) measured it accurately and repeatably. • INTRA-RATER reliability (same cases over time) • INTER-RATER reliability (comparison of same cases among raters) • Instrumentation
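Inter-rater reliability for categorical calls (e.g., "caries" vs. "sound") is commonly summarized with Cohen's kappa, which corrects raw agreement for chance; a minimal sketch (the function name and example ratings are illustrative):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters over nominal categories."""
    n = len(ratings_a)
    assert n == len(ratings_b) and n > 0
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    expected = sum(
        (ratings_a.count(c) / n) * (ratings_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

rater_a = ["caries", "sound", "caries", "sound", "caries"]
rater_b = ["caries", "sound", "sound", "sound", "caries"]
print(round(cohens_kappa(rater_a, rater_b), 3))  # 0.615
```

Intra-rater reliability can be computed the same way by comparing one examiner's calls on the same cases at two time points.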

  25. 7. Duration of study (over so fast?) • Did the trial run long enough to measure the desired effect? • Caries trials: 2-3 years • Calculus-preventing agents: 90 days • Orthodontic outcomes: (20 years?) • Implants: 5 years

  26. 8. Statistical Analysis (So, did I find anything “significant?”) • Were the analyses appropriate to the design, quality of data, and intent of the investigators? • Statistical analysis is based on the type of data (nominal, ordinal, ratio). • Type of question being asked: • Summarize • Difference between groups • Effect size or risk

  27. Threats to Validity of a Study(Nice result, but what about…) • Bias: Any systematic error in a study which results in an incorrect estimate of the association between disease and exposure. • Confounding: results when there is a mixing of the effect of the exposure and disease with a “third factor” • Chance: The exposure:disease relationship is spurious as the result of random variation in sampling.

  28. Types of Bias • Selection • Non-representative sample • Non-comparable case/control groups • Loss to follow-up • Differential survival • Observation (Misclassification error) • Disease Classification • Exposure Classification • Instrumentation

  29. Confounding • Definition: the bias in the (crude) disease-exposure estimate that can result when the exposure-disease relationship is mixed up with the effect of “extraneous variables” • Confounding affects our understanding of the “true” disease-exposure relationship • The determination is “data-based” • Two methods of control: • Stratification • Multivariate analysis
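Stratification can be made concrete with the Mantel-Haenszel summary odds ratio, which pools stratum-specific 2x2 tables; a sketch with made-up counts (the data are invented, chosen so the crude odds ratio differs from the stratified one):

```python
def mh_odds_ratio(strata):
    """Mantel-Haenszel summary odds ratio across confounder strata.
    Each stratum is (a, b, c, d): exposed cases, exposed controls,
    unexposed cases, unexposed controls."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

# Within each stratum the odds ratio is 1.0, yet collapsing the strata
# (mixing in the extraneous variable) gives a crude odds ratio near 2.
strata = [(10, 10, 10, 10), (40, 10, 4, 1)]
print(mh_odds_ratio(strata))  # 1.0
```

The gap between the crude and the stratum-adjusted estimate is exactly the "mixing of effects" the slide describes.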

  30. Chance • That’s what we have statistics for - to quantify the chance. • Type 1 (alpha) error (p-value).
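To make "quantify the chance" concrete, here is a hedged sketch of a pooled two-proportion z-test, the kind of calculation behind a p-value when comparing event counts in two groups (the function name and counts are illustrative):

```python
from statistics import NormalDist

def two_proportion_p_value(x1, n1, x2, n2):
    """Two-sided p-value for H0: the two group proportions are equal
    (pooled two-proportion z-test, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                  # proportion under H0
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 30/100 events in one group vs 20/100 in the other:
p = two_proportion_p_value(30, 100, 20, 100)
print(round(p, 3))  # ~0.102: not below the conventional alpha of 0.05
```

A p-value below the chosen alpha (commonly 0.05) means the observed difference would be unlikely under random sampling variation alone, i.e., a Type 1 error rate of alpha is being tolerated.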

  31. Research Designs (decision tree) • Does the investigator alter the conditions under study? • YES → Experimental studies: Is there to be a control group? Yes → true experiment; No → quasi-experiment • NO → Observational studies: Do we know the disease status of patients before the study? Yes → case-control study; No → Will observations be made at more than one time? Yes → cohort study; No → cross-sectional study

  32. Observational Designs Cross Sectional Case Control (retrospective) Cohort (prospective)

  33. Cross Sectional Study • Measure, Classify, Compare • Used for questionnaires, surveys, prevalence estimates, to generate hypotheses. • Everything occurs “at once”.

  34. Cross-Sectional Design 1. Select population of interest 2. Select sample 3. Assess the sample for both disease (outcome) status and risk factor (exposure) status • Disease positive: RF+ / RF- • Disease negative: RF+ / RF- • Analyze using correlational statistics, but causation is not “provable” due to lack of temporal association

  35. Cross-Sectional Design • Advantages: 1. Quick and low cost 2. Can evaluate a large number of variables 3. Can enroll a large number of subjects • Disadvantages: 1. Subject selection may reflect selection bias (volunteers, hospital patients) 2. It is difficult to identify cause-and-effect relationships • Common uses: • Questionnaires and surveys • Prevalence studies • Hypothesis development

  36. Case Control • Select cases and controls • Retrospective assessment of risk factors • Quantify exposure; since there is no denominator, only relative rates can be estimated.

  37. Case-Control Design 1. Select a group of subjects WITH the disease/outcome of interest = CASES 2. Select a group of subjects WITHOUT the disease/outcome = CONTROLS 3. Measure (retrospectively) risk factors of interest (RF+ / RF- among cases and among controls) 4. Analyze using strength-of-association measures • Selection of controls is crucial; case selection must also be carefully considered • Common uses: rare disease (e.g., birth defects), long latency (e.g., cancer)

  38. Case-Control Design Advantages 1. not dependent on natural frequency of disease (thus used to study rare diseases) 2. well suited to study diseases with long latency 3. requires comparatively few cases (2:1 or 3:1 matching) 4. not dependent on a previously established cohort 5. allows study of multiple potential causes of disease 6. relatively low cost and quick 7. ethical: disease has already occurred Disadvantages 1. case selection may be problematic 2. controls may not be representative of the same population as cases in terms of disease risk or confounders 3. investigators may be biased when they know the disease status of subjects 4. subjects may bias answers (recall) due to disease status 5. factors which are used to match are removed from analysis 6. incidence, prevalence, RR and AR can't be calculated since no "population at risk" denominator is available
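Because a case-control study has no "population at risk" denominator, it reports the exposure odds ratio rather than relative risk; a one-function sketch (the counts are illustrative):

```python
def odds_ratio(a, b, c, d):
    """Exposure odds ratio from a case-control 2x2 table:
    a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# 40 of 100 cases exposed vs 20 of 100 controls exposed:
print(odds_ratio(40, 20, 60, 80))  # ~2.67: odds of exposure higher in cases
```

For rare diseases the odds ratio approximates the relative risk that a cohort design would measure directly.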

  39. Cohort Design • Select two or more groups (cohorts) that are free of disease but differ on their exposure status. • May start with one heterogeneous cohort. • Cohorts have a “denominator” which allows the calculation of true rates. • Useful when “exposure” varies over time.

  40. Cohort Study Design • Prospective, observational design 1. Select population of interest 2. Recruit a sample WITHOUT the disease(s) of interest and measure risk factors (baseline exam) 3. Recall the cohort periodically (visit 2, visit 3, … visit n) and remeasure risk factors and disease status • Uses: • Determining/quantifying risk factors • Developing new etiological theory • Establishing causality

  41. Cohort Design Disadvantages 1. inefficient for study of rare disease 2. assessment of relationships limited to those defined at beginning of study 3. selection bias not controlled 4. loss to follow-up common 5. subjects may change in regards to characteristics (i.e. exposure status) 6. bias may be present if the characteristic studied influences surveillance and if surveillance influences detection of outcome (Berkson's fallacy) 7. expensive and time consuming Advantages 1. allows risk to be expressed as incidence 2. certain biases are reduced: exposure status disease status 3. subject characteristics can be related to more than one outcome
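Because a cohort has a true denominator, risk can be expressed as incidence and compared directly; a minimal sketch of relative risk and attributable risk (the function names and counts are illustrative):

```python
def relative_risk(exp_cases, exp_n, unexp_cases, unexp_n):
    """Incidence in the exposed cohort divided by incidence in the unexposed."""
    return (exp_cases * unexp_n) / (exp_n * unexp_cases)

def attributable_risk(exp_cases, exp_n, unexp_cases, unexp_n):
    """Excess incidence in the exposed cohort, attributable to the exposure."""
    return exp_cases / exp_n - unexp_cases / unexp_n

# 30 new cases among 500 exposed vs 10 among 500 unexposed over follow-up:
print(relative_risk(30, 500, 10, 500))               # 3.0
print(round(attributable_risk(30, 500, 10, 500), 4))  # 0.04
```

Neither measure is available from a case-control design, since the groups there are assembled by disease status rather than drawn from a population at risk.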

  42. Experimental Designs Clinical Trials (RCTs) Field Trials

  43. Clinical Trials • Prospective controlled experiment of human subjects to assess intervention for a specific disease. • Asks an important research question • Clinical event or outcome • Done in clinical or medical setting • Evaluates one or more interventions compared with “standard treatment” • Informed consent and DSMB required

  44. Phases of Clinical Trials • Phase I: dose finding • Phase II: efficacy at fixed dose • Phase III: comparing treatment (RCT) • Phase IV: late/uncommon effects

  45. Uses of Clinical Trials (experimental studies) • Test new drug therapies • Test new surgical interventions • Test educational/programmatic interventions

  46. Randomized Clinical Trial Design 1. Recruit individuals WITH disease 2. Randomize into treatment arms (standard treatment vs. new treatment) 3. Follow up to assess outcomes in each arm • Randomization is essential and, along with strict control of experimental conditions, allows for minimal bias • Excellent internal validity (but possibly low external validity) • Ethical only to the degree that differences between treatments are unknown at the time of study initiation (equipoise) • Requires a DSMB
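Randomization is simple to implement; a hedged sketch of block randomization (the arm labels and block size are assumptions, not from the slides), which keeps the two arms balanced throughout enrollment:

```python
import random

def block_randomize(n_subjects, block_size=4, seed=None):
    """Assign subjects to 'standard' vs 'new' arms in shuffled blocks so
    group sizes stay balanced as enrollment proceeds."""
    assert block_size % 2 == 0, "block size must be even for 1:1 allocation"
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        # Each block holds equal numbers of each arm, in random order
        block = ["standard"] * (block_size // 2) + ["new"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]

arms = block_randomize(20, seed=42)
print(arms.count("standard"), arms.count("new"))  # 10 10
```

In a real RCT the allocation sequence would be concealed from investigators; this sketch only shows the balancing mechanism.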

  47. Experimental Design Advantages 1. investigator directly controls assignment to study groups 2. investigator directly controls exposure to the agent 3. random assignment can control extraneous factors 4. blinding of evaluators may be possible Disadvantages: 1. not immune to problems encountered with other designs (non-compliance, incomplete follow-up, biased observation) 2. may have low external validity 3. may not be feasible for studies of disease etiology (ethical considerations, rare disease) 4. may not be feasible when effective disease prevention exists (can't withhold treatment) 5. can be very expensive

  48. Efficacy vs. Effectiveness • Efficacy is the potential to provide a clinical benefit; measured in CTs. • Effectiveness is the benefit provided in routine “real world” use; measured in surveillance systems (registries), after-market incident reports, etc.

  49. Hierarchy of Research Designs • Experimental designs • Cohort studies • Case-control designs • Human trial without controls • Cross-sectional designs • Descriptive studies • Case reports • Personal opinion Based on control of bias and confounding and ability to make causal arguments

  50. Strengths of RCTs • Minimally biased design • Randomization • Control of extraneous variables • Prospective (causality can be established) • Design issues determined prior to initiation of study
