
Introduction to Biostatistics for Clinical and Translational Researchers


Presentation Transcript


  1. Introduction to Biostatistics for Clinical and Translational Researchers KUMC Departments of Biostatistics & Internal Medicine University of Kansas Cancer Center FRONTIERS: The Heartland Institute of Clinical and Translational Research

  2. Course Information • Jo A. Wick, PhD • Office Location: 5028 Robinson • Email: jwick@kumc.edu • Lectures are recorded and posted at http://biostatistics.kumc.edu under ‘Educational Opportunities’

  3. Course Objectives • Understand the role of statistics in the scientific process • Understand features, strengths and limitations of descriptive, observational and experimental studies • Distinguish between association and causation • Understand roles of chance, bias and confounding in the evaluation of research

  4. Course Calendar • June 29: Descriptive Statistics and Core Concepts • July 6: Hypothesis Testing • July 13: Linear Regression & Survival Analysis • July 20: Clinical Trial & Experimental Design

  5. Hypothesis Testing

  6. Which test do I use? • Research questions usually concern one of the following population parameters: • Mean: μ—interval or ratio response • Proportion: π—nominal or ordinal response • Time-to-event: t—a combination response (event time plus an indicator of whether the event occurred) • These questions can involve exploring the characteristics of one group of interest, or they can explore differences in characteristics of two or more groups of interest.

  7. Inferences on a Single Mean • Example: BMI of single population—is it greater than 26.3? • One sample t-test
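A one-sample t-test like this takes only a few lines in software. The sketch below uses SciPy with made-up BMI values (the slide's actual data are not in the transcript), testing H0: μ = 26.3 against the one-sided H1: μ > 26.3.

```python
# Hypothetical one-sample t-test: is mean BMI greater than 26.3?
# The BMI values below are illustrative, not the course's actual data.
import numpy as np
from scipy import stats

bmi = np.array([27.1, 29.4, 25.8, 31.0, 26.9, 28.3, 30.2, 24.7, 27.8, 29.1])

# One-sided test of H0: mu = 26.3 versus H1: mu > 26.3
t_stat, p_value = stats.ttest_1samp(bmi, popmean=26.3, alternative='greater')
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

With these illustrative values the sample mean is about 28, so the test statistic is positive and the one-sided p-value is small.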

  8. Inferences on Two Means • Example: Smoking cessation • Two types of therapy: x = {behavioral therapy, literature} • Dependent variable: y = number of cigarettes smoked per day after six months of therapy¹ • ¹A response that takes the baseline into account—change from baseline or %-change from baseline—would be more appropriate.

  9. Smoking Cessation • Research question: Is behavioral therapy in addition to education better than education alone in getting smokers to quit? • H0: μ1 = μ2 versus H1: μ1 ≠ μ2 • Two independent samples t-test IF: • the number of cigarettes smoked is approximately normal OR can be transformed to an approximate normal distribution (e.g., natural log) • the variability within each group is approximately the same (rule of thumb: no more than a 2x difference)
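The two-sample comparison can be sketched the same way. The fractional degrees of freedom reported later (t with df 30.9) indicate a Welch-type test, which SciPy gives with `equal_var=False`; the group data here are illustrative stand-ins, not the slides' dataset.

```python
# Hypothetical two-group comparison (data are illustrative):
# cigarettes smoked per day at six months, by group.
import numpy as np
from scipy import stats

education_only = np.array([18, 22, 15, 25, 20, 17, 23, 19, 21, 16])
behavioral     = np.array([12, 15, 10, 18, 14, 11, 16, 13,  9, 17])

# Welch's t-test (equal_var=False) does not require equal group variances
t_stat, p_value = stats.ttest_ind(behavioral, education_only, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Because the behavioral-therapy group is listed first and smokes fewer cigarettes, the t statistic comes out negative, matching the sign convention on the later slides.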

  10. Smoking Cessation

  11. Smoking Cessation

  12. Smoking Cessation

  13. Smoking Cessation • Conclusion: Adding behavioral therapy to cessation education results in—on average—significantly fewer cigarettes smoked per day at six months post-therapy when compared to education alone (t(30.9) = -2.87, p < 0.01). Reject H0: μ1 = μ2

  14. Smoking Cessation • The 95% confidence interval is: -8.39 ≤ μ1 − μ2 ≤ -1.42 • Interpretation: On average, subjects receiving behavioral therapy smoke 4.9 fewer cigarettes per day than subjects receiving education alone (95% CI: 1.42 to 8.39 fewer).

  15. Confidence Intervals • What exactly do confidence intervals represent? • Remember that theoretical sampling distribution concept? • It doesn't actually exist; it's a mathematical construct. • What would we see if we took sample after sample after sample and did the same test on each . . .

  16. Confidence Intervals • Suppose we actually took sample after sample . . . • 100 of them, to be exact • Every time we take a different sample and compute the confidence interval, we will likely get a slightly different result simply due to sampling variability.

  17. Confidence Intervals • Suppose we actually took sample after sample . . . • 100 of them, to be exact • 95% confident means: “In 95 of the 100 samples, our interval will contain the true unknown value of the parameter. However, in 5 of the 100 it will not.”

  18. Confidence Intervals • Suppose we actually took sample after sample . . . • 100 of them, to be exact • Our “confidence” is in the procedure that produces the interval—i.e., it performs well most of the time. • Our “confidence” is not directly related to our particular interval—we cannot say “The probability that the mean number of cigarettes is between (1.4,8.4) is 0.95.”
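The "95 out of 100 samples" idea can be checked directly by simulation. The sketch below (not from the slides) draws many samples from a population with a known mean, builds the t-based 95% interval for each, and counts how often the interval covers the truth.

```python
# Simulation of the coverage interpretation: repeatedly sample from a
# known population and count how often the 95% t-based confidence
# interval for the mean contains the true value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, n, reps = 15.0, 30, 1000

covered = 0
for _ in range(reps):
    sample = rng.normal(loc=true_mean, scale=5.0, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)
    lo, hi = sample.mean() - t_crit * se, sample.mean() + t_crit * se
    covered += lo <= true_mean <= hi

print(f"Coverage: {covered / reps:.3f}")  # close to 0.95
```

The confidence is in the procedure: each individual interval either contains the true mean or it does not, but the procedure succeeds about 95% of the time.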

  19. Inferences on More Than Two Means • Example: Smoking cessation • Three types of therapy: x = {pharmaceutical therapy, behavioral therapy, literature} • Dependent variable: y = number of cigarettes smoked per day after six months of therapy

  20. Smoking Cessation • Research question: Is therapy in addition to education better than education alone in getting smokers to quit? If so, is one therapy more effective? • H0: μ1 = μ2 = μ3 versus H1: At least one μ is different • Comparing more than 2 independent samples requires an ANOVA, provided: • the number of cigarettes smoked is approximately normal OR can be transformed to an approximate normal distribution (e.g., natural log) • the variability within each group is approximately the same (rule of thumb: no more than a 2x difference)
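A one-way ANOVA for three groups is a single call in SciPy. The data below are illustrative placeholders for the three therapy groups, not the slides' dataset.

```python
# Hypothetical three-group one-way ANOVA (illustrative data):
# cigarettes per day under literature only, behavioral therapy,
# and pharmaceutical therapy.
import numpy as np
from scipy import stats

literature     = np.array([18, 22, 15, 25, 20, 17, 23, 19])
behavioral     = np.array([12, 15, 10, 18, 14, 11, 16, 13])
pharmaceutical = np.array([10, 13,  8, 15, 12,  9, 14, 11])

# H0: mu1 = mu2 = mu3 versus H1: at least one mu differs
f_stat, p_value = stats.f_oneway(literature, behavioral, pharmaceutical)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value here only says that *some* mean differs; identifying which one requires the follow-up comparisons discussed on the later slides.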

  21. Smoking Cessation

  22. Smoking Cessation

  23. Smoking Cessation

  24. Smoking Cessation • Test of the ‘homogeneity’ assumption using Levene or Brown-Forsythe test: • Conclusion: Reject H0: σ1 = σ2 = σ3

  25. Smoking Cessation • Counts are notorious for this—try a natural log transformation • Note: Make sure you add 1 to each count because the log of 0 is undefined. • Modification: new y = log(y + 1) • Retest!
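The transform-and-retest step might look like the sketch below: apply log(y + 1) to the counts, then recheck homogeneity of variance. SciPy's `levene` with `center='median'` is the Brown-Forsythe variant named on the slides; the counts here are illustrative.

```python
# Sketch of the transform-and-retest step (illustrative data).
import numpy as np
from scipy import stats

literature     = np.array([30, 5, 40, 2, 25, 0, 35, 8])
behavioral     = np.array([10, 3, 12, 1,  8, 0, 14, 5])
pharmaceutical = np.array([ 6, 2,  9, 0,  5, 1, 10, 4])

# log1p(y) computes log(y + 1), handling zero counts safely
groups = [np.log1p(g) for g in (literature, behavioral, pharmaceutical)]

# Brown-Forsythe test: Levene's test centered at the group medians
stat, p_value = stats.levene(*groups, center='median')
print(f"Brown-Forsythe W = {stat:.2f}, p = {p_value:.4f}")
```

A non-significant result here corresponds to the slide's "fail to reject H0: σ1 = σ2 = σ3" conclusion, i.e., the transformed counts satisfy the equal-variance assumption.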

  26. Smoking Cessation

  27. Smoking Cessation • Test of the ‘homogeneity’ assumption using Levene or Brown-Forsythe test on the transformed count: • Conclusion: Fail to reject H0: σ1 = σ2 = σ3

  28. Smoking Cessation • ANOVA produces a table: • One-way ANOVA indicates you have a single categorical factor x (e.g., treatment) and a single continuous response y and your interest is in comparing the mean response μ across the levels of the categorical factor.

  29. Wait . . . • Why is ANOVA using variances when we’re hypothesizing about means? • Between-groups mean square: a variance • Within-groups mean square: also a variance • F: a ratio of variances—F = MSBG/MSWG

  30. What’s the Rationale? • In the simplest case of the one-way ANOVA, the variation in the response y is broken down into parts: variation in response attributed to the treatment (group/sample) and variation in response attributed to error (subject characteristics + everything else not controlled for) • The variation in the treatment (group/sample) means is compared to the variation within a treatment (group/sample) using a ratio—this is the F test statistic! • If the between treatment variation is a lot bigger than the within treatment variation, that suggests there are some different effects among the treatments.
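The decomposition above can be verified by computing F by hand as MSBG/MSWG and comparing it to a library ANOVA; the small groups below are illustrative.

```python
# The F ratio computed by hand, matching the decomposition above:
# F = (between-groups mean square) / (within-groups mean square).
import numpy as np
from scipy import stats

groups = [np.array([19, 23, 16, 26]),
          np.array([13, 16, 11, 18]),
          np.array([11, 14,  9, 15])]

k = len(groups)
n_total = sum(len(g) for g in groups)
grand_mean = np.concatenate(groups).mean()

# Variation attributed to the treatment vs. variation within treatments
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

ms_between = ss_between / (k - 1)        # df = k - 1
ms_within  = ss_within / (n_total - k)   # df = N - k
f_manual = ms_between / ms_within

# Agrees with scipy's one-way ANOVA
f_scipy, _ = stats.f_oneway(*groups)
print(f_manual, f_scipy)
```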

  31. Rationale 1 2 3

  32. Rationale • There is an obvious difference between scenarios 1 and 2. What is it? • Just looking at the boxplots, which of the two scenarios (1 or 2) do you think would provide more evidence that at least one of the populations is different from the others? Why?

  33. Rationale 1 2 3

  34. F Distribution Properties, F(dfnum, dfden) • The values are non-negative, start at zero and extend to the right, approaching but never touching the horizontal axis. • The distribution of F changes as the degrees of freedom change.
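In practice the F reference distribution is used two ways: to find the critical value for a given alpha, and to convert an observed statistic into a p-value via the upper tail. A sketch with assumed degrees of freedom (dfnum = 2, dfden = 21, i.e., three groups of eight):

```python
# Working with the F(2, 21) reference distribution.
from scipy import stats

# Critical value: reject H0 at alpha = 0.05 when F exceeds this
f_crit = stats.f.ppf(0.95, dfn=2, dfd=21)
print(f"Reject H0 when F > {f_crit:.2f}")

# p-value for a hypothetical observed statistic F = 5.0 (upper tail)
p = stats.f.sf(5.0, dfn=2, dfd=21)
print(f"p = {p:.4f}")
```

Because F is a ratio of variances it can never be negative, which is why only the upper tail matters.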

  35. F Statistic • Case A: If all the sample means were exactly the same, what would be the value of the numerator of the F statistic? • Case B: If all the sample means were spread out and very different, how would the variation between sample means compare to the value in A?

  36. F Statistic • So what values could the F statistic take on? • Could you get an F that is negative? • What type of values of F would lead you to believe the null hypothesis—that there is no difference in group means—is not accurate?

  37. Smoking Cessation • ANOVA produces a table: • Conclusion: Reject H0: μ1 = μ2 = μ3. Some difference in the number of cigarettes smoked per day exists between subjects receiving the three types of therapy.

  38. Smoking Cessation • ANOVA produces a table: • But where is the difference? Are the two experimental therapies different? Or is it that each are different from the control?

  39. Smoking Cessation • Reject H0: μ1 = μ3 and H0: μ2 = μ3. Both pharmaceutical and behavioral therapy are significantly different from the literature-only control group, but the two therapies are not different from each other.
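The slides do not state which multiple-comparison adjustment was used; one minimal sketch is pairwise Welch t-tests with a Bonferroni-adjusted alpha (other choices, such as Tukey's HSD, are common). The group data are illustrative.

```python
# Post-hoc pairwise comparisons with a Bonferroni correction
# (illustrative data; the slides' adjustment method is not stated).
from itertools import combinations
import numpy as np
from scipy import stats

data = {
    'literature':     np.array([18, 22, 15, 25, 20, 17, 23, 19]),
    'behavioral':     np.array([12, 15, 10, 18, 14, 11, 16, 13]),
    'pharmaceutical': np.array([10, 13,  8, 15, 12,  9, 14, 11]),
}

pairs = list(combinations(data, 2))
alpha_adj = 0.05 / len(pairs)  # Bonferroni: divide alpha by number of comparisons

results = {}
for a, b in pairs:
    _, p = stats.ttest_ind(data[a], data[b], equal_var=False)
    results[(a, b)] = p
    verdict = 'reject' if p < alpha_adj else 'fail to reject'
    print(f"{a} vs {b}: p = {p:.4f} ({verdict})")
```

With these illustrative numbers the pattern mirrors the slides: both therapies differ from the literature-only group, while the two therapies do not differ from each other.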

  40. Smoking Cessation • Conclusion: Adding either behavioral (p = 0.015) or pharmaceutical therapy (p < 0.01) to cessation education results in—on average—significantly fewer cigarettes smoked per day at six months post-therapy when compared to education alone.

  41. Smoking Cessation • On average, the number of cigarettes smoked per day by subjects receiving behavioral and pharmaceutical therapy is 1.1 fewer cigarettes (95%CI: 0.16, 2.79) and 1.5 fewer cigarettes (95%CI: 0.36, 3.45), respectively, than control subjects.

  42. Inferences on Means • Concerns a continuous response y • One or two groups: t • More than two groups: ANOVA • Remember, this (and the two-sample case) is essentially looking at the association between an x and a y, where x is categorical (nominal or ordinal) and y is continuous (interval or ratio). • Check assumptions! • Normality of y • Equal group variances

  43. ANOVA Models • There are many . . .

  44. Inferences on Proportions (k = 2) • Example: plant genetics • Two phenotypes: x = {yellow-flowered plants, green-flowered plants} • Dependent variable: y = proportion of plants out of 100 progeny that express each phenotype

  45. Plant Genetics • The plant geneticist hypothesizes that his crossed progeny will result in a 3:1 phenotypic ratio of yellow-flowered to green-flowered plants. • H0: The population contains 75% yellow-flowered plants versus H1: The population does not contain 75% yellow-flowered plants. • H0: πy = 0.75 versus H1: πy ≠ 0.75 • This particular type of test is referred to as the chi-square goodness-of-fit test for k = 2.

  46. Plant Genetics • Chi-square statistics compute deviations between what is expected (under H0) and what is actually observed in the data: χ² = Σ (observed − expected)² / expected • DF = k − 1, where k is the number of categories of x

  47. Plant Genetics • Suppose the researcher actually observed in his sample of 100 plants this breakdown of phenotype: • Does it appear that this type of sample could have come from a population where the true proportion of yellow-flowered plants is 75%?

  48. Plant Genetics • Conclusion: Reject H0: πy = 0.75—it does not appear that the geneticist’s hypothesis about the population phenotypic ratio is correct (p = 0.038).
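The slide's observed-counts table did not survive in the transcript, so the sketch below uses hypothetical counts. Under H0 the 100 plants split 75/25; `scipy.stats.chisquare` compares the observed split against that expectation with df = k − 1 = 1.

```python
# Chi-square goodness-of-fit sketch for the 3:1 hypothesis.
# The observed counts are hypothetical stand-ins for the slide's table.
from scipy import stats

observed = [66, 34]   # yellow, green (hypothetical)
expected = [75, 25]   # 100 plants under H0: 3:1 ratio

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```

Here χ² = (66−75)²/75 + (34−25)²/25 = 1.08 + 3.24 = 4.32, exceeding the df = 1 critical value of 3.84, so H0 would be rejected.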

  49. Inferences on Proportions (k > 2) • Example: plant genetics • Four phenotypes: x = {yellow-smooth flowered, yellow-wrinkled flowered, green-smooth flowered, green-wrinkled flowered} • Dependent variable: y = proportion of plants out of 250 progeny that express each phenotype

  50. Plant Genetics • The plant geneticist hypothesizes that his crossed progeny will result in a 9:3:3:1 phenotypic ratio of YS:YW:GS:GW plants. • Actual numeric hypothesis is H0: π1 = 0.5625, π2 = 0.1875, π3 = 0.1875, π4 = 0.0625 • This particular type of test is referred to as the chi-square goodness-of-fit test for k = 4.
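The k = 4 case follows the same pattern: the expected counts come from multiplying n = 250 by the hypothesized proportions 9/16 : 3/16 : 3/16 : 1/16, and the test has df = k − 1 = 3. The observed counts below are hypothetical.

```python
# Goodness-of-fit sketch for the 9:3:3:1 hypothesis with n = 250
# progeny. Observed counts are hypothetical.
import numpy as np
from scipy import stats

observed = np.array([140, 52, 44, 14])          # YS, YW, GS, GW (hypothetical)
expected = 250 * np.array([9, 3, 3, 1]) / 16.0  # 140.625, 46.875, 46.875, 15.625

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f} (df = {len(observed) - 1})")
```

Note that `chisquare` requires the observed and expected counts to sum to the same total, which they do here by construction.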
