
Effect Estimates and the Role of Chance

This course provides an in-depth exploration of odds ratios, relative risks, p-values, confidence intervals, sample size estimation, and power analysis in epidemiological research. Learn about key statistical methodologies and their historical development, as well as the concepts of probability, risk, and attribution. Gain a deeper understanding of hypothesis testing and the scientific logic behind hypothesis development.


Presentation Transcript


  1. Effect Estimates and the Role of Chance Önder Ergönül, MD, MPH Koç University, School of Medicine 18-29 June 2018, Istanbul

  2. Objectives • More on odds ratio and relative risk • P value or confidence interval? • Sample size estimation • Power analysis

  3. Research Methodology: Epidemiology + Statistics • Epidemiology • John Snow, 1850 • Cohort boom, 1950 • Statistics: early 20th century • K. Pearson • RA Fisher • J. Neyman The Lady Tasting Tea by David Salsburg

  4. Ratio, Proportion, Rate

  5. Prevalence and Incidence
  Prevalence (P) = number of existing cases of a disease at a given point in time / total population
  Cumulative incidence (CI) = number of new cases of a disease during a given period of time / total population at risk
  P ≈ incidence × duration
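A tiny numerical illustration of these definitions; the numbers are hypothetical, and the prevalence shortcut assumes a steady-state situation:

```python
# Cumulative incidence: new cases during a period / population at risk.
new_cases, population_at_risk = 50, 10_000
cumulative_incidence = new_cases / population_at_risk      # 0.005 per year

# Under steady-state conditions, prevalence ~= incidence * mean duration.
mean_duration_years = 4
prevalence = cumulative_incidence * mean_duration_years    # 0.02, i.e. 2%
print(cumulative_incidence, prevalence)
```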

  6. Comparing Disease Occurrence • Absolute comparisons • Risk • Risk density • Risk difference • Attributable fraction • Relative comparisons • Relative risk • Attributable risk • Odds ratio

  7. Relative Risk
  RR = incidence in exposed / incidence in nonexposed
  RR = risk in exposed group / risk in nonexposed group = [a / (a + b)] / [c / (c + d)]

  8. When OR is close to RR: the rare disease assumption
  RR = [a / (a + b)] / [c / (c + d)] ≈ (a / b) / (c / d) = ad / bc = OR
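To make the 2 × 2 arithmetic above concrete, here is a minimal Python sketch (mine, not from the slides) that computes the relative risk and odds ratio from the cell counts a, b, c, d in the usual exposed/nonexposed layout:

```python
def relative_risk(a, b, c, d):
    """RR = [a / (a + b)] / [c / (c + d)] for a 2x2 table
    (a = exposed cases, b = exposed non-cases,
     c = nonexposed cases, d = nonexposed non-cases)."""
    return (a / (a + b)) / (c / (c + d))


def odds_ratio(a, b, c, d):
    """OR = (a / b) / (c / d) = ad / bc for the same table."""
    return (a * d) / (b * c)


# Rare-disease check: when a << b and c << d, the OR approximates the RR.
print(relative_risk(10, 990, 5, 995))   # RR = 2.0
print(odds_ratio(10, 990, 5, 995))      # OR ≈ 2.01
```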

  9. What is Probability? • p = probability or proportion, between 0 and 1 • Probability of success: Pr(y = 1) = p • Probability of no success: Pr(y = 0) = 1 − p

  10. Odds Ratio
  Odds = p / (1 − p) = 0.75 / (1 − 0.75) = 0.75 / 0.25 = 3
  Odds ratio = [pA / (1 − pA)] / [pB / (1 − pB)]

  11. Relative Risk
  Risk(heparin) = 8/100 = 0.08
  Risk(placebo) = 18/100 = 0.18
  Relative risk = Risk(placebo) / Risk(heparin) = 0.18 / 0.08 = 2.25

  12. Odds Ratio
  Odds(heparin) = 8/92 = 0.087
  Odds(placebo) = 18/82 = 0.22
  Odds ratio = Odds(placebo) / Odds(heparin) = 0.22 / 0.087 = 2.53
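A quick self-contained check of the heparin/placebo example in Python (the 8/100 and 18/100 event counts are taken from the slides; the exact odds ratio is about 2.52, and the slide's 2.53 reflects rounding the odds first):

```python
# Heparin vs placebo example: 8/100 events vs 18/100 events.
risk_heparin = 8 / 100            # 0.08
risk_placebo = 18 / 100           # 0.18
odds_heparin = 8 / 92             # ~0.087
odds_placebo = 18 / 82            # ~0.220

print(risk_placebo / risk_heparin)   # relative risk = 2.25
print(odds_placebo / odds_heparin)   # odds ratio ≈ 2.52
```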

  13. Can we convert an odds ratio to a relative risk?
  Zhang and Yu proposed a simple formula to convert an odds ratio obtained from logistic regression into a relative risk:
  RR = OR / [(1 − P0) + (P0 × OR)]
  P0: incidence of the outcome in the nonexposed group; OR: odds ratio from the logistic regression equation; RR: the estimated relative risk.
  Zhang J, Yu KF. What's the relative risk? A method of correcting the odds ratio in cohort studies of common outcomes. JAMA 1998;280:1690–1.
  Using the formula in this manner is incorrect and will produce a biased estimate when confounding is present. If no confounding exists, then regression analysis is not needed and simple calculations can be used to compute an estimated relative risk. McNutt LA, et al. Am J Epidemiol 2003.
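As an illustration only (the function name and example values below are mine, not from the slides or the cited papers), the Zhang-Yu correction can be written as a short Python helper:

```python
def or_to_rr(odds_ratio, p0):
    """Zhang-Yu approximation: convert an odds ratio to a relative risk,
    where p0 is the incidence of the outcome in the nonexposed group."""
    return odds_ratio / ((1 - p0) + p0 * odds_ratio)


# Hypothetical example: OR = 2.5 with a 20% baseline incidence.
print(or_to_rr(2.5, 0.20))   # ~1.92, noticeably smaller than the OR
```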

  14. Risk Difference / Attributable Risk
  The risk difference (RD), or attributable risk (AR), is a measure of association that describes the absolute effect of the exposure, i.e. the excess risk of disease in those exposed compared with those nonexposed.
  AR = IRe − IRo (incidence in the exposed minus incidence in the nonexposed)
  Attributable fraction = RD / Re = (Re − Ro) / Re, which is useful for seeing how much of the risk in the exposed is attributable to the exposure.

  15. Population Attributable Risk
  Lee et al. (N Engl J Med, 2006) report the following in the Statistical Analysis section (p. 140, second to last paragraph): We estimated the population attributable risk (PAR) for heart failure associated with parental occurrence of the condition as a function of the proportion of cases occurring in those with a parent with heart failure (pd) and the multivariable-adjusted relative risk (RR, equivalent to the hazard ratio from models with clinical covariates), calculated as pd × [(RR − 1) / RR] × 100.
  The population attributable risk of heart failure due to the presence of the condition in a parent was 17.8 percent: approximately 18 percent of the heart-failure burden in the offspring was attributable to parental heart failure.
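A minimal sketch of these two measures in Python; the function names and example inputs are hypothetical (they are not the Lee et al. data) and only show the direction of the calculation:

```python
def attributable_fraction(risk_exposed, risk_unexposed):
    """Attributable fraction among the exposed: (Re - Ro) / Re."""
    return (risk_exposed - risk_unexposed) / risk_exposed


def par_percent(pd, rr):
    """Population attributable risk (%) from the proportion of cases
    exposed (pd) and the relative risk (RR): pd * (RR - 1) / RR * 100."""
    return pd * (rr - 1) / rr * 100


print(attributable_fraction(0.18, 0.08))   # ≈ 0.56, 56% of the risk in the exposed
print(par_percent(0.30, 2.25))             # ≈ 16.7% of the population burden
```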

  16. Conclusion Be careful when using p values, confidence intervals, and odds ratios.

  17. Why do we need statistics and epidemiology? The need to develop a surgical method is usually obvious. But how do we decide to develop a statistical test? Why and how did Pearson develop the chi-square test?

  18. What is the Scientific Logic? • How do we think by using scientific logic? • What are the steps of scientific logic? • What is the philosophy behind our hypothesis development? • Why do we need statistics?

  19. Hypothesis Testing

  20. Lex parsimoniae (the law of parsimony) William of Ockham (14th century) Entia non sunt multiplicanda praeter necessitatem: entities should not be multiplied beyond necessity. When there are many explanations for symptoms, the simplest diagnosis is the one to test first. If a child has a runny nose, it probably has the common cold rather than a rare birth defect. "When you hear hoof beats, think horses, not zebras".

  21. Popper belongs to a generation of Central European émigré scholars that profoundly influenced thought in the English-speaking countries in this century. His greatest contributions are in philosophy of science and in political and social philosophy. Popper’s ‘falsificationism’ reverses the usual view that accumulated experience leads to scientific hypotheses; rather, freely conjectured hypotheses precede, and are tested against, experience. The hypotheses that survive the testing process constitute current scientific knowledge. His general epistemology, ‘critical rationalism’, commends the Socratic method of posing questions and critically discussing the answers offered to them. He considers knowledge in the traditional sense of certainty, or in the modern sense of justified true belief, to be unobtainable. Ian C. Jarvie Publication date: 2002

  22. After the Anschluss, Popper was stimulated by the problem of why democracies had succumbed to totalitarianism and applied his critical rationalism to political philosophy. Since we have no infallible ways of getting or maintaining good government, Plato’s question ‘Who should rule?’ is misdirected. To advocate the rule of the best, the wise or the just invites tyranny disguised under those principles. By contrast, a prudently constructed open society constructs institutions to ensure that any regime can be ousted without violence, no matter what higher ends it proclaims itself to be seeking. Couched in the form of extended critiques of Plato and Platonism as well as of Marx and Marxism, Popper’s political philosophy has had considerable influence in post-war Europe, East and West. Ian C. Jarvie Publication date: 2002

  23. Hypothesis test = significance test: even if my medication were completely ineffective, what are the chances my experiment would have produced the observed outcome?

  24. Probability of what? The probability p of collecting data that show a difference equal to or more extreme than what you actually observed, under the assumption that there is no true effect or no true difference.

  25. P Value The area under the curve more extreme than the mean ± 2 SD (more exactly, the mean ± 1.96 SD) is 0.05, or 5%. Since a "normal" value is on average at the center of the probability distribution of the biological parameter in healthy people, when we get a value for a patient that lies in the extremes, outside the reference range, we conclude that the probability of that value coming from a healthy person is < 0.05. Since this probability is so small, we suspect that our patient is unhealthy.

  26. What is a p Value? • The P (probability) value is used when we wish to see how likely it is that an observed difference could have arisen by chance. • P = 0.05 means that the probability of the difference having happened by chance is 0.05, i.e. 1 in 20. • Statistically significant means unlikely to have happened by chance, and therefore important. • Example: shooting baskets with Michael Jordan: • Jordan 7/7 • Me 4/7 • P = 0.07 ?????

  27. What is a p value? The P value, or calculated probability, is the estimated probability of obtaining a result at least as extreme as the one observed when the null hypothesis (H0) of the study question is true.

  28. Hypothesis testing • Null hypothesis: the study drug and placebo are equivalent. • Run the analyses: if P < 0.05 (statistical significance), reject the null and conclude that there is an interesting phenomenon. • If P > 0.05, fail to reject the null; the data do not demonstrate that the drug differs from placebo.
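As an illustration of this decision rule (not from the slides), a two-sample t test on simulated drug and placebo measurements, assuming numpy and scipy are available:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
placebo = rng.normal(loc=30, scale=10, size=50)   # simulated outcomes
drug = rng.normal(loc=25, scale=10, size=50)      # simulated outcomes

# Two-sided test of H0: the two group means are equal.
t_stat, p_value = stats.ttest_ind(drug, placebo)
if p_value < 0.05:
    print(f"p = {p_value:.3f}: reject H0 (statistically significant)")
else:
    print(f"p = {p_value:.3f}: fail to reject H0")
```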

  29. Hypothesis testing Test result

  30. Hypothesis testing Test result

  31. Sample size parameters • Type I error (α), usually < 0.05, false positivity • Type II error (β), usually < 0.2, false negativity • Power (= 1 − β): the probability of detecting a real difference as statistically significant. If β = 0.20, the power is 80%, i.e. a true difference between the 2 groups will be detected with a probability of 80%.

  32. Power vs sample size chart

  33. Sample size parameters • Minimum expected difference • The smallest measured difference between comparison groups that has clinical importance • Defined on the basis of the literature and clinical experience • As the minimum expected difference decreases, the sample size increases

  34. Sample size parameters • Estimated measurement variability • The expected SD of the measurements made within each comparison group • Defined on the basis of preliminary data, literature review, and subjective experience • As statistical variability increases, sample size increases • A separate estimate of measurement variability is not required when the measurement being compared is a proportion (in contrast to a mean), because the SD is mathematically derived from the proportion.
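For the proportion case in the last bullet, the variability follows directly from the proportion itself; a short sketch with a hypothetical value:

```python
import math

p = 0.30                         # hypothetical expected proportion
sd = math.sqrt(p * (1 - p))      # SD of a binary outcome, here ≈ 0.46
print(sd)
```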

  35. Sample size parameters

  36. Exercise • Study title: • Efficacy of a new chemotherapeutic agent compared with standard therapy in mesothelioma • Study design: • Two-arm, randomized, prospective • Primary efficacy criterion: • Size of tumor after 12 weeks of treatment

  37. Exercise • Sample size calculation assumptions • Mean tumor size of the standard therapy group after 12 weeks: 30 mm • Minimum expected difference between study groups: 2 mm, 5 mm, 7 mm • Estimated measurement variability: 10 mm

  38. Exercise Can you guess in which direction the sample size will change?

  39. Exercise
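For the exercise above, a minimal sketch (mine, not from the slides) of the standard two-sample formula for comparing means, n per group ≈ 2(z(1−α/2) + z(1−β))² σ² / Δ², with α = 0.05 and 80% power, shows how the sample size moves as the minimum expected difference changes:

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate sample size per group for comparing two means."""
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# Estimated measurement variability from the exercise: SD = 10 mm.
for delta in (2, 5, 7):
    print(delta, "mm ->", n_per_group(delta, 10), "patients per group")
# Smaller minimum expected differences require much larger samples:
# roughly 393, 63, and 33 patients per group, respectively.
```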

  40. Importance of sample size • Sample size is the number of subjects that should be included in the study • The sample size should be large enough to detect a true difference between study groups • The sample size should be small enough to avoid unnecessary costs and the unethical enrolment of more subjects than needed

  41. Importance of sample size • A non-significant difference from a trial with an insufficient number of patients means almost nothing • A non-significant difference from a large enough trial is most probably a real finding and tells us the treatments are likely to be equivalent

  42. Importance of sample size • A significant difference from a trial with an insufficient number of patients may or may not be replicable • A significant difference from a large enough trial can be trusted as revealing a true difference
