
Comprehensive Statistical Analysis Course for Beginners

This course covers fundamental statistical concepts including normal distribution, probability, hypothesis testing, correlation, regression, ANOVA, factorial ANOVA, and more. Gain understanding of different statistical techniques and applications through real-world examples. Improve data analysis skills and learn practical statistical methods for research. Enhance critical thinking and interpretation of research articles.


Presentation Transcript


  1. Why is this important? • Requirement • Understand research articles • Do research for yourself • Real world

  2. The Three Goals of this Course • 1) Teach a new way of thinking • 2) Teach “factoids”

  3. Mean

  5. r = (formula for the correlation coefficient r)

  5. What you have learned! • Describing and Exploring Data / The Normal Distribution • Scales of measurement • Populations vs. Samples • Learned how to organize scores of one variable using: • frequency distributions • graphs
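
Frequency distributions like the ones reviewed above can be built in a few lines; this is a minimal sketch using Python's standard library, with invented quiz scores.

from collections import Counter

scores = [7, 8, 8, 9, 7, 10, 8, 9, 9, 8]
freq = Counter(scores)                               # score -> how often it occurs
for value in sorted(freq):
    print(value, freq[value], "#" * freq[value])     # crude text histogram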

  6. What you have learned! • Measures of central tendency • Mean • Median • Mode • Variability • Range • IQR • Standard Deviation • Variance
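
A minimal sketch of the central-tendency and variability measures listed above, using Python's statistics module; the score list is invented.

import statistics as st

scores = [57, 60, 64, 66, 71, 73, 75, 75, 90, 93, 95, 96]

print(st.mean(scores), st.median(scores), st.mode(scores))  # mean, median, mode
print(max(scores) - min(scores))                            # range
q1, q2, q3 = st.quantiles(scores, n=4)                      # quartiles; IQR = q3 - q1
print(q3 - q1)
print(st.stdev(scores), st.variance(scores))                # sample SD and variance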

  7. What you have learned! • Z Scores • Find the percentile of a given score • Find the score for a given percentile
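
Both z-score tasks can be done with scipy.stats.norm, as in this sketch; the score of 86 and the mean of 75 with SD 10 are invented numbers.

from scipy.stats import norm

mean, sd = 75, 10

z = (86 - mean) / sd
print(norm.cdf(z))                   # percentile of a score of 86 (~0.86)

print(mean + sd * norm.ppf(0.90))    # score sitting at the 90th percentile (~87.8)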

  8. What you have learned! • Sampling Distributions & Hypothesis Testing • Is this quarter fair? • Sampling distribution • CLT • The probability of a given score occurring
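
A hedged sketch of the "is this quarter fair?" question: the 61-heads-in-100-flips count is invented, and a binomial test plus a small simulation stand in for the sampling-distribution and CLT ideas.

import numpy as np
from scipy.stats import binomtest

# How unusual are 61 heads in 100 flips of a supposedly fair coin?
print(binomtest(61, n=100, p=0.5).pvalue)

# CLT flavour: means of many 100-flip samples pile up in a narrow,
# roughly normal sampling distribution around 0.5.
flips = np.random.default_rng(0).integers(0, 2, size=(10_000, 100))
sample_means = flips.mean(axis=1)
print(sample_means.mean(), sample_means.std())   # ~0.5 and ~0.05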

  9. What you have learned! • Basic Concepts of Probability • Joint probabilities • Conditional probabilities • Different ways events can occur • Permutations • Combinations • The probability of winning the lottery • Binomial Distributions • Probability of winning the next 4 out of 10 games of Blingoo
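
This sketch shows the counting tools and the binomial probability mentioned above; the 4-wins-in-10-games setup echoes the slide, while the 0.5 win probability and the 6-of-49 lottery format are assumptions.

from math import comb, perm
from scipy.stats import binom

print(perm(5, 3))            # permutations: ordered arrangements of 3 of 5 items = 60
print(comb(5, 3))            # combinations: unordered choices of 3 of 5 items = 10
print(1 / comb(49, 6))       # chance one ticket wins a pick-6-of-49 lottery

print(binom.pmf(4, n=10, p=0.5))   # exactly 4 wins in 10 games at p = 0.5
print(binom.sf(3, n=10, p=0.5))    # 4 or more wins (survival function at 3)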

  10. What you have learned! • Categorical Data and Chi-Square • Chi square as a measure of independence • Phi coefficient • Chi square as a measure of goodness of fit
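
A short sketch of both chi-square uses with scipy.stats; the 2x2 table and the die-roll counts are invented example data.

import numpy as np
from scipy.stats import chi2_contingency, chisquare

# Independence: does group membership relate to a yes/no outcome?
table = np.array([[30, 10],
                  [20, 40]])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(chi2, p)
print(np.sqrt(chi2 / table.sum()))         # phi coefficient for a 2x2 table

# Goodness of fit: are these 60 die rolls consistent with a fair die?
print(chisquare([8, 9, 12, 11, 10, 10]))   # expected counts default to equal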

  11. What you have learned! • Hypothesis Testing Applied to Means • One Sample t-tests • Two Sample t-tests • Equal N • Unequal N • Dependent samples
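
The three t-test variants reviewed above map onto three scipy.stats calls; all the score lists here are invented.

from scipy.stats import ttest_1samp, ttest_ind, ttest_rel

pre   = [62, 71, 75, 83]
post  = [66, 74, 79, 86]
other = [55, 68, 70, 88, 91]          # a second, independent group (unequal N)

print(ttest_1samp(pre, popmean=70))   # one-sample: is the mean different from 70?
print(ttest_ind(pre, other))          # two independent samples
print(ttest_rel(pre, post))           # dependent (paired) samples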

  12. What you have learned! • Correlation and Regression • Correlation • Regression
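
A minimal correlation-and-regression sketch with scipy.stats; x and y are invented.

from scipy.stats import pearsonr, linregress

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

r, p = pearsonr(x, y)
print(r, p)                                   # correlation and its p-value

fit = linregress(x, y)                        # simple linear regression
print(fit.slope, fit.intercept, fit.rvalue**2)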

  13. What you have learned! • Alternative Correlational Techniques • Pearson Formulas • Point-Biserial • Phi Coefficient • Spearman’s rho • Non-Pearson Formulas • Kendall’s Tau
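
A sketch of the alternative coefficients above (the arrays are invented): point-biserial and phi are Pearson's r applied to dichotomous variables, while Spearman's rho and Kendall's tau work on ranks.

from scipy.stats import pointbiserialr, pearsonr, spearmanr, kendalltau

group   = [0, 0, 0, 1, 1, 1]            # dichotomous predictor
scores  = [52, 60, 57, 71, 75, 80]      # continuous outcome
yes_no  = [0, 0, 1, 1, 1, 1]            # second dichotomous variable

print(pointbiserialr(group, scores))    # point-biserial
print(pearsonr(group, yes_no))          # phi: Pearson r on two 0/1 variables

ranks_x = [1, 2, 3, 4, 5, 6]
ranks_y = [2, 1, 4, 3, 6, 5]
print(spearmanr(ranks_x, ranks_y))      # Spearman's rho
print(kendalltau(ranks_x, ranks_y))     # Kendall's tau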

  14. What you have learned! • Multiple Regression • Multiple Regression • Causal Models • Standardized vs. unstandardized • Multiple R • Semipartial correlations • Common applications • Mediator Models • Moderator Models
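
A hedged multiple-regression sketch: the course does not prescribe software, so statsmodels is an assumption here, and the predictors and outcome are simulated.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
study = rng.normal(10, 2, 50)
sleep = rng.normal(7, 1, 50)
score = 40 + 3 * study + 2 * sleep + rng.normal(0, 5, 50)

X = sm.add_constant(np.column_stack([study, sleep]))
fit = sm.OLS(score, X).fit()

print(fit.params)      # unstandardized coefficients (intercept, study, sleep)
print(fit.rsquared)    # multiple R squared
# Standardized (beta) weights come from refitting on z-scored variables,
# or equivalently beta_j = b_j * sd(x_j) / sd(y).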

  15. What you have learned! • Simple Analysis of Variance • ANOVA • Computation of ANOVA • Logic of ANOVA • Variance • Expected Mean Square • Sum of Squares
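
This sketch mirrors the logic of a one-way ANOVA by computing the between- and within-group sums of squares by hand and checking the F against scipy.stats.f_oneway; the three groups are invented.

import numpy as np
from scipy.stats import f_oneway

groups = [np.array([57, 60, 64, 66]),
          np.array([71, 73, 75, 77]),
          np.array([90, 93, 95, 96])]

grand_mean = np.mean(np.concatenate(groups))
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_within  = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1
df_within  = sum(len(g) for g in groups) - len(groups)

F = (ss_between / df_between) / (ss_within / df_within)
print(F)
print(f_oneway(*groups))   # same F, with its p-value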

  16. What you have learned! • Multiple Comparisons Among Treatment Means • What to do with an omnibus ANOVA • Multiple t-tests • Linear Contrasts • Orthogonal Contrasts • Trend Analysis • Controlling for Type I errors • Bonferroni t • Fisher’s Least Significant Difference • Studentized Range Statistic • Dunnett’s Test
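
One hedged way to follow up a significant omnibus ANOVA is Bonferroni-corrected pairwise t-tests, sketched below on invented groups; the Studentized-range approach (Tukey's HSD) is only noted in a comment.

from itertools import combinations
from scipy.stats import ttest_ind

groups = {"g1": [57, 60, 64, 66],
          "g2": [71, 73, 75, 77],
          "g3": [90, 93, 95, 96]}

pairs = list(combinations(groups, 2))
for a, b in pairs:
    t, p = ttest_ind(groups[a], groups[b])
    # Bonferroni: multiply each raw p by the number of comparisons (cap at 1)
    print(a, b, round(t, 2), min(p * len(pairs), 1.0))

# For the Studentized-range approach, statsmodels' pairwise_tukeyhsd
# runs Tukey's HSD on the same data.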

  17. What you have learned! • Factorial Analysis of Variance • Factorial ANOVA • Computation and logic of Factorial ANOVA • Interpreting Results • Main Effects • Interactions

  18. What you have learned! • Factorial Analysis of Variance and Repeated Measures • Factorial ANOVA • Computation and logic of Factorial ANOVA • Interpreting Results • Main Effects • Interactions • Repeated measures ANOVA
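
A short factorial-ANOVA sketch using statsmodels' formula interface; the software choice and the tiny 2x2 data set are assumptions for illustration.

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "drug": ["pill"] * 4 + ["placebo"] * 4,
    "time": ["morning", "morning", "evening", "evening"] * 2,
    "score": [78, 82, 70, 72, 65, 68, 66, 64],
})

model = ols("score ~ C(drug) * C(time)", data=df).fit()
print(anova_lm(model, typ=2))   # main effects of drug and time, plus their interaction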

  19. The Three Goals of this Course • 1) Teach a new way of thinking • 2) Teach “factoids” • 3) Self-confidence in statistics

  20. Remember • You just invented a “magic math pill” that will increase test scores. • On the day of the first test you give the pill to 4 subjects. When these same subjects take the second test they do not get a pill • Did the pill increase their test scores?

  21. What if. . . • You just invented a “magic math pill” that will increase test scores. • On the day of the first test you give a full pill to 4 subjects. When these same subjects take the second test they get a placebo. When these same subjects take the third test they get no pill.

  22. Note • You have more than 2 groups • You have a repeated measures design • You need to conduct a Repeated Measures ANOVA
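
One way to run the repeated-measures ANOVA this slide calls for is statsmodels' AnovaRM; the package choice and the scores below are assumptions, not the slide's actual data.

import pandas as pd
from statsmodels.stats.anova import AnovaRM

long = pd.DataFrame({
    "subject":   [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["pill", "placebo", "none"] * 4,
    "score":     [58, 60, 63, 70, 72, 75, 77, 75, 80, 92, 94, 96],
})

print(AnovaRM(long, depvar="score", subject="subject",
              within=["condition"]).fit())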

  23. Tests to Compare Means

  24. What if. . . • You just invented a “magic math pill” that will increase test scores. • On the day of the first test you give a full pill to 4 subjects. When these same subjects take the second test they get a placebo. When these same subjects take the third test they get no pill.

  25. Results

  26. For now . . . Ignore that it is a repeated design

  27. Between Variability = low

  28. Within Variability = high

  29. Notice – the within variability of a group can be predicted by the other groups

  30. Notice – the within variability of a group can be predicted by the other groups: Pill and Placebo r = .99; Pill and No Pill r = .99; Placebo and No Pill r = .99

  31. These scores are correlated because, in general, some subjects tend to do very well and others tend to do very poorly

  32. Repeated ANOVA • Some of the variability of the scores within a group occurs due to the mean differences between subjects. • We want to calculate, and then discard, the variability that comes from the differences between the subjects.

  33. Example

  34. Sum of Squares • SS Total • The total squared deviation of the observed scores around the grand mean • Computed the same way as before

  35. SStotal = (57 - 75.66)² + (71 - 75.66)² + … + (96 - 75.66)² = 1754.66 *What makes this value get larger?

  36. SStotal = (57 - 75.66)² + (71 - 75.66)² + … + (96 - 75.66)² = 1754.66 *What makes this value get larger? *The variability of the scores!

  37. Sum of Squares • SS Subjects • Represents the sum of squared deviations of the subject means around the grand mean • It is multiplied by k (the number of treatment conditions) to give an estimate of the population variance (Central limit theorem)

  38. SSSubjects = 3((60.33 - 75.66)² + (72.33 - 75.66)² + … + (93.66 - 75.66)²) = 1712 *What makes this value get larger?

  39. SSSubjects = 3((60.33 - 75.66)² + (72.33 - 75.66)² + … + (93.66 - 75.66)²) = 1712 *What makes this value get larger? *Differences between subjects

  40. Sum of Squares • SS Treatment • Represents the sum of squared deviations of the treatment means around the grand mean • It is multiplied by n (the number of subjects) to give an estimate of the population variance (Central limit theorem)

  41. SSTreatment = 4((74 - 75.66)² + (75 - 75.66)² + (78 - 75.66)²) = 34.66 *What makes this value get larger?

  42. SSTreatment = 4((74 - 75.66)² + (75 - 75.66)² + (78 - 75.66)²) = 34.66 *What makes this value get larger? *Differences between treatment groups

  43. Sum of Squares • Have a measure of how much all scores differ • SSTotal • Have a measure of how much this difference is due to subjects • SSSubjects • Have a measure of how much this difference is due to the treatment condition • SSTreatment • To compute error simply subtract!

  44. Sum of Squares • SSError = SSTotal - SSSubjects - SSTreatment = 1754.66 - 1712.00 - 34.66 = 8.0
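
The whole partition above can be written as a few numpy lines for a generic subjects-by-treatments score matrix; the 4x3 matrix below is invented illustration data, not the scores behind the 1754.66 / 1712.00 / 34.66 figures.

import numpy as np

scores = np.array([[57, 60, 64],     # rows = subjects
                   [70, 72, 75],     # columns = treatments (pill, placebo, no pill)
                   [74, 76, 79],
                   [90, 92, 96]], dtype=float)

n, k = scores.shape                  # n subjects, k treatment conditions
grand_mean = scores.mean()

ss_total     = ((scores - grand_mean) ** 2).sum()
ss_subjects  = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()
ss_treatment = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()
ss_error     = ss_total - ss_subjects - ss_treatment   # "simply subtract"

print(ss_total, ss_subjects, ss_treatment, ss_error)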

  45. Compute df df total = N - 1

  46. Compute df df total = N - 1 df subjects = n - 1

  47. Compute df df total = N - 1 df subjects = n - 1 df treatment = k - 1
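
Continuing with the slide's own figures (4 subjects, 3 conditions, SSTreatment = 34.66, SSError = 8.0), the degrees of freedom above lead to the error df and the F-ratio; treating that next step as the standard repeated-measures formula is an assumption about where the deck goes from here.

n, k = 4, 3                     # 4 subjects, 3 treatment conditions
N = n * k                       # 12 scores in total

df_total     = N - 1            # 11
df_subjects  = n - 1            # 3
df_treatment = k - 1            # 2
df_error     = df_total - df_subjects - df_treatment   # 6, i.e. (n - 1) * (k - 1)

MS_treatment = 34.66 / df_treatment
MS_error     = 8.0 / df_error
print(df_total, df_subjects, df_treatment, df_error, MS_treatment / MS_error)  # F ≈ 13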
