
Statistics

This lecture from Psych 231 (Research Methods in Psychology) opens with reminders about Quiz 9, Journal Summary 2, an alternative assignment, extra credit, and group projects, and then reviews statistics: describing distributions, the mean and standard deviation, hypothesis testing and error types, sampling error, statistical significance, confidence intervals, and common inferential tests.



Presentation Transcript


  1. Statistics Psych 231: Research Methods in Psychology

  2. Quiz 9 is due Friday @ midnight • Journal Summary 2 is due in labs this week (the alternative assignment needs to be turned in to me by 4:30 on Friday) • If you weren’t able to earn the extra credit by going to a section to be a participant in group projects, you may earn 5 extra credit points by doing an extra journal summary. These must be turned in to me in class one week from today (Nov. 16). • No formal labs next week; work on your group projects’ data analyses Reminders

  3. Sometimes you just can’t perform a fully controlled experiment • Because of the issue of interest • Limited resources (not enough subjects, observations are too costly, etc.) • Surveys • Correlational • Quasi-experiments • Developmental designs • Small-N designs • This does NOT imply that they are bad designs • Just remember the advantages and disadvantages of each (We’ll finish these up after the stats lectures) Non-Experimental designs

  4. Mistrust of statistics? • It is all in how you use them • They are a critical tool in research • Video: Alan Smith, “Why you should love statistics” (~9 min) Statistics

  5. [Figure: sampling methods select a Sample from the Population] Samples and Populations

  6. [Figure: inferential statistics are used to generalize from the Sample back to the Population] • 2 general kinds of statistics • Descriptive statistics • Used to describe, simplify, & organize data sets • Describing distributions of scores • Inferential statistics • Used to test claims about the population, based on data gathered from samples • Takes sampling error into account. Are the results above and beyond what you’d expect by random chance? Samples and Populations

  7. Properties: Shape, Center, and Spread (variability) • Shape • Symmetric v. asymmetric (skew) • Unimodal v. multimodal • Center • Where most of the data in the distribution are • Mean, Median, Mode • Spread (variability) • How similar/dissimilar are the scores in the distribution? • Standard deviation (variance), Range Describing Distributions

  8. Properties: Shape, Center, and Spread (variability) • Visual descriptions - A picture of the distribution is usually helpful* • Numerical descriptions of distributions Describing Distributions *Note: See Chapter 5 of APA style guide: Displaying Results

  9. The mean (mathematical average) is the most popular and most important measure of center. • The formula for the population mean (a parameter): μ = ΣX / N (add up all of the X’s, divide by the total number in the population) • The formula for the sample mean (a statistic): X̄ = ΣX / n (add up all of the X’s, divide by the total number in the sample) • Interpreting the mean: • The representative (standard) score • The center of the distribution Mean & Standard deviation
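As a quick illustration of the two formulas above, here is a minimal Python sketch; the scores are made-up values, not course data. The same arithmetic gives μ or X̄ depending on whether the scores are the whole population or a sample from it.

```python
scores = [3, 7, 5, 9, 6]  # made-up scores, not course data

# Mean = add up all of the X's, divide by the number of scores.
# Population mean (mu, a parameter) or sample mean (X-bar, a statistic),
# depending on whether these scores are the whole population or a sample.
mean = sum(scores) / len(scores)
print(mean)  # 6.0
```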

  10. The mean (mathematical average) is the most popular and most important measure of center. • Other measures include the median and mode. • The standard deviation is the most popular and important measure of variability. • The standard deviation measures how far off all of the individuals in the distribution are from a standard, where that standard is the mean of the distribution. • Essentially, the average of the deviations (slightly different for samples). Mean & Standard deviation

  11. Working your way through the formula for the population standard deviation, σ = sqrt( SS / N ), where SS = Σ(X − μ)²: • Step 1: Compute the deviation scores (X − μ) • Step 2: Compute the SS (sum of the squared deviations) • Step 3: Determine the variance • Take the average of the squared deviations • Divide the SS by N • Step 4: Determine the standard deviation • Take the square root of the variance An Example: Computing Standard Deviation (population)

  12. Main difference for the sample standard deviation, s = sqrt( SS / (n − 1) ): • Step 1: Compute the deviation scores • Step 2: Compute the SS • Step 3: Determine the variance • Take the average of the squared deviations • Divide the SS by n − 1 • Step 4: Determine the standard deviation • Take the square root of the variance • This is done because samples are biased to be less variable than the population. This “correction factor” will increase the sample’s SD (making it a better estimate of the population’s SD) An Example: Computing Standard Deviation (sample)
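To make the four steps concrete, here is a minimal Python sketch that computes both the population and the sample standard deviation for the same made-up scores; the only difference is the N vs. n − 1 divisor in Step 3.

```python
import math

scores = [3, 7, 5, 9, 6]          # made-up scores, not course data
mean = sum(scores) / len(scores)  # 6.0

# Step 1: deviation scores (X - mean)
deviations = [x - mean for x in scores]

# Step 2: SS, the sum of the squared deviations
ss = sum(d ** 2 for d in deviations)

# Step 3: variance = "average" of the squared deviations
pop_variance = ss / len(scores)           # divide SS by N (population)
sample_variance = ss / (len(scores) - 1)  # divide SS by n - 1 (sample)

# Step 4: standard deviation = square root of the variance
pop_sd = math.sqrt(pop_variance)
sample_sd = math.sqrt(sample_variance)

print(pop_sd, sample_sd)  # the sample SD comes out slightly larger
```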

  13. [Figure: inferential statistics are used to generalize from the Sample back to the Population] • 2 general kinds of statistics • Descriptive statistics • Used to describe, simplify, & organize data sets • Describing distributions of scores • Inferential statistics • Used to test claims about the population, based on data gathered from samples • Takes sampling error into account. Are the results above and beyond what you’d expect by random chance? Statistics

  14. [Figure: from the Population, Sample A gets the treatment (X̄ = 80%) and Sample B gets no treatment (X̄ = 76%); inferential statistics are used to generalize back] • Two approaches • Hypothesis testing • “There is a statistically significant difference between the two groups” • Confidence intervals • “The mean difference between the two groups is 4% ± 2%” Inferential Statistics

  15. [Figure: Sample A (Treatment, X̄ = 80%) and Sample B (No Treatment, X̄ = 76%) drawn from the Population] • Purpose: To make claims about populations based on data collected from samples • What’s the big deal? • Example experiment: • Group A - gets treatment to improve memory • Group B - gets no treatment (control) • After the treatment period, test both groups for memory • Results: • Group A’s average memory score is 80% • Group B’s is 76% • Is the 4% difference a “real” difference (statistically significant) or is it just sampling error? Inferential Statistics

  16. Step 1: State your hypotheses • Step 2: Set your decision criteria • Step 3: Collect your data from your sample(s) • Step 4: Compute your test statistics • Step 5: Make a decision about your null hypothesis • “Reject H0” • “Fail to reject H0” Testing Hypotheses

  17. Step 1: State your hypotheses • Null hypothesis (H0): “There are no differences (effects)” - this is the hypothesis that you are testing • Alternative hypothesis(es): generally, “not all groups are equal” • You are not out to prove the alternative hypothesis (although it feels like this is what you want to do) • If you reject the null hypothesis, then you are left with support for the alternative(s) (NOT proof!) Testing Hypotheses

  18. Step 1: State your hypotheses • In our memory example experiment: • Null H0: mean of Group A = mean of Group B • Alternative HA: mean of Group A ≠ mean of Group B • (Or more precisely: Group A > Group B) • It seems like our theory is that the treatment should improve memory. • That’s the alternative hypothesis. That’s NOT the one that we’ll test with inferential statistics. • Instead, we test the H0 Testing Hypotheses

  19. Step 1: State your hypotheses • Step 2: Set your decision criteria • Your alpha level will be your guide for when to: • “reject the null hypothesis” • “fail to reject the null hypothesis” • Either decision could be the correct conclusion or the incorrect conclusion • Two different ways to go wrong: • Type I error: saying that there is a difference when there really isn’t one (the probability of making this error is the “alpha level”) • Type II error: saying that there is not a difference when there really is one Testing Hypotheses

  20. Experimenter’s conclusions (Reject H0 / Fail to reject H0) crossed with the real world (‘truth’) (H0 is correct / H0 is wrong): • Reject H0 when H0 is correct → Type I error • Reject H0 when H0 is wrong → correct decision • Fail to reject H0 when H0 is correct → correct decision • Fail to reject H0 when H0 is wrong → Type II error Error types

  21. Jury’s decision (Find guilty / Find not guilty) crossed with the real world (‘truth’) (defendant is innocent / defendant is guilty): • Find guilty when the defendant is innocent → Type I error • Find not guilty when the defendant is guilty → Type II error Error types: Courtroom analogy

  22. Type I error: concluding that there is an effect (a difference between groups) when there really isn’t. • The alpha level is sometimes called the “significance level” • We try to minimize this (keep it low) • Pick a low level of alpha • Psychology: 0.05 and 0.01 are most common • For Step 5, we compare the “p-value” of our test to the alpha level to decide whether to “reject” or “fail to reject” the H0 • Type II error: concluding that there isn’t an effect, when there really is. • Related to the statistical power of a test • How likely you are to detect a difference if it is there Error types
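One way to see what the alpha level buys you is a small simulation: if H0 is really true (both groups come from the same population), a test run at α = 0.05 should reject H0 on roughly 5% of experiments. This is a minimal sketch, assuming NumPy and SciPy are available; the data are simulated, not real class data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
n_experiments = 10_000
type_i_errors = 0

for _ in range(n_experiments):
    # H0 is true by construction: both groups come from the same population
    group_a = rng.normal(loc=76, scale=10, size=25)
    group_b = rng.normal(loc=76, scale=10, size=25)
    t, p = stats.ttest_ind(group_a, group_b)
    if p < alpha:            # "reject H0" even though H0 is true
        type_i_errors += 1

print(type_i_errors / n_experiments)  # should come out close to 0.05
```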

  23. Step 1: State your hypotheses • Step 2: Set your decision criteria • Step 3: Collect your data from your sample(s) • Step 4: Compute your test statistics • Descriptive statistics (means, standard deviations, etc.) • Inferential statistics (t-tests, ANOVAs, etc.) • Step 5: Make a decision about your null hypothesis • Reject H0: “statistically significant differences” • Fail to reject H0: “not statistically significant differences” • Make this decision by comparing your test’s “p-value” against the alpha level that you picked in Step 2. Testing Hypotheses
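Here is a minimal Python sketch of the five steps for the memory example, using SciPy’s independent-samples t-test; the scores are simulated placeholders, not the actual class data.

```python
import numpy as np
from scipy import stats

# Step 1: State your hypotheses
#   H0: mean of Group A = mean of Group B
#   HA: mean of Group A != mean of Group B

# Step 2: Set your decision criteria
alpha = 0.05

# Step 3: Collect your data (simulated here for illustration)
rng = np.random.default_rng(231)
group_a = rng.normal(loc=80, scale=8, size=30)  # treatment
group_b = rng.normal(loc=76, scale=8, size=30)  # control

# Step 4: Compute your test statistics
print(group_a.mean(), group_a.std(ddof=1))   # descriptive statistics
print(group_b.mean(), group_b.std(ddof=1))
t, p = stats.ttest_ind(group_a, group_b)     # inferential statistic

# Step 5: Make a decision about your null hypothesis
if p < alpha:
    print("Reject H0: statistically significant difference")
else:
    print("Fail to reject H0: not statistically significant")
```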

  24. Consider the results of our class experiment • Main effect of cell phone ✓ • Main effect of site type ✓ • An interaction between cell phone and site type ✗ • [Figure: plot of the class experiment means, with values 0.04 and -0.50] Factorial designs

  25. “Statistically significant differences” • When you “reject your null hypothesis” • Essentially this means that the observed difference is above what you’d expect by chance • “Chance” is determined by estimating how much sampling error there is • Factors affecting “chance” • Sample size • Population variability Statistical significance

  26. [Figure: population distribution with the population mean marked and a single sampled score (x); sampling error = population mean − sample mean; n = 1] Sampling error

  27. [Figure: population distribution with the population mean and the sample mean of two sampled scores (x x); sampling error = population mean − sample mean; n = 2] Sampling error

  28. [Figure: population distribution with the population mean and the sample mean of ten sampled scores; sampling error = population mean − sample mean; n = 10] • Generally, as the sample size increases, the sampling error decreases Sampling error

  29. [Figure: a narrow population distribution (small population variability) vs. a wide one (large population variability)] • Typically, the narrower the population distribution, the narrower the range of possible samples, and the smaller the “chance” Sampling error

  30. [Figure: from the Population, many samples of size n are drawn (X̄A, X̄B, X̄C, X̄D, ...); their means form the distribution of sample means, whose spread reflects the average sampling error, or “chance”] • These two factors combine to impact the distribution of sample means. • The distribution of sample means is a distribution of all possible sample means of a particular sample size that can be drawn from the population Sampling error
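A small simulation makes the sample-size point concrete: sample means cluster more tightly around the population mean as n grows, so the distribution of sample means narrows. This is a sketch with a made-up population, assuming NumPy is available.

```python
import numpy as np

rng = np.random.default_rng(1)
pop_mean, pop_sd = 100, 15          # made-up population parameters

for n in (1, 2, 10, 100):
    # Draw many samples of size n and record each sample mean
    sample_means = rng.normal(pop_mean, pop_sd, size=(10_000, n)).mean(axis=1)
    avg_error = np.abs(sample_means - pop_mean).mean()
    print(n, round(sample_means.std(), 2), round(avg_error, 2))
    # The spread of the sample means (the "standard error") shrinks
    # toward pop_sd / sqrt(n) as the sample size increases.
```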

  31. “A statistically significant difference” means: • the researcher is concluding that there is a difference above and beyond chance • with the probability of making a type I error at 5% (assuming an alpha level = 0.05) • Note “statistical significance” is not the same thing as theoretical significance. • Only means that there is a statistical difference • Doesn’t mean that it is an important difference Significance

  32. Failing to reject the null hypothesis • Generally, not interested in “accepting the null hypothesis” (remember we can’t prove things only disprove them) • Usually check to see if you made a Type II error (failed to detect a difference that is really there) • Check the statistical power of your test • Sample size is too small • Effects that you’re looking for are really small • Check your controls, maybe too much variability Non-Significance

  33. Hypotheses are about populations: H0: μA = μB (there is no difference between Grp A and Grp B) • Example experiment: • Group A - gets treatment to improve memory • Group B - gets no treatment (control) • After the treatment period, test both groups for memory • Results: • Group A’s average memory score is 80% • Group B’s is 76% • Is the 4% difference a “real” difference (statistically significant) or is it just sampling error? • [Figure: the two sample distributions (X̄A = 80%, X̄B = 76%); table of error types: experimenter’s conclusions (Reject H0 / Fail to reject H0) crossed with the real world ‘truth’ (H0 is correct / H0 is wrong), giving Type I and Type II errors] From last time

  34. Tests the question: Are there differences between groups due to a treatment? • Two possibilities in the “real world” • Possibility 1: H0 is true (no treatment effect) - one population • [Figure: a single population distribution; the two sample distributions (X̄A = 80%, X̄B = 76%) differ only by sampling error] “Generic” statistical test

  35. Tests the question: Are there differences between groups due to a treatment? • Two possibilities in the “real world” • Possibility 2: H0 is false (there is a treatment effect) - two populations • People who get the treatment change; they form a new population (the “treatment population”) • [Figure: two population distributions, one per group, with their sample distributions (X̄A = 80%, X̄B = 76%)] “Generic” statistical test

  36. Why might the samples (X̄A and X̄B) be different? (What is the source of the variability between groups?) • ER: Random sampling error • ID: Individual differences (if a between-subjects factor) • TR: The effect of a treatment “Generic” statistical test

  37. The generic test statistic is a ratio of sources of variability: Computed test statistic = Observed difference / Difference expected by chance = (TR + ID + ER) / (ID + ER) • ER: Random sampling error • ID: Individual differences (if a between-subjects factor) • TR: The effect of a treatment “Generic” statistical test

  38. [Figure: from the Population, samples of size n are drawn (X̄A, X̄B, X̄C, X̄D, ...); their means form the distribution of sample means, whose spread reflects the average sampling error, or “chance”] • The distribution of sample means is a distribution of all possible sample means of a particular sample size that can be drawn from the population Sampling error

  39. The generic test statistic distribution • Test statistic = (TR + ID + ER) / (ID + ER) • To reject the H0, you want a computed test statistic that is large • reflecting a large treatment effect (TR) • What’s large enough? The alpha level gives us the decision criterion • [Figure: the distribution of the test statistic, built from the distribution of sample means; the α-level determines where the decision boundaries go] “Generic” statistical test

  40. The generic test statistic distribution • To reject the H0, you want a computed test statistic that is large • reflecting a large treatment effect (TR) • What’s large enough? The alpha level gives us the decision criterion • [Figure: distribution of the test statistic with “Reject H0” regions in both tails and “Fail to reject H0” in the middle] “Generic” statistical test

  41. The generic test statistic distribution • To reject the H0, you want a computed test statistic that is large • reflecting a large treatment effect (TR) • What’s large enough? The alpha level gives us the decision criterion • “One-tailed test”: sometimes you know to expect a particular direction of difference (e.g., “improve memory performance”), so the “Reject H0” region sits in one tail only • [Figure: distribution of the test statistic with a single “Reject H0” region in one tail and “Fail to reject H0” elsewhere] “Generic” statistical test

  42. Things that affect the computed test statistic • Size of the treatment effect • The bigger the effect, the bigger the computed test statistic • Difference expected by chance (sampling error) • Sample size • Variability in the population “Generic” statistical test

  43. “A statistically significant difference” means: • the researcher is concluding that there is a difference above and beyond chance • with the probability of making a type I error at 5% (assuming an alpha level = 0.05) • Note “statistical significance” is not the same thing as theoretical significance. • Only means that there is a statistical difference • Doesn’t mean that it is an important difference Significance

  44. Failing to reject the null hypothesis • Generally, not interested in “accepting the null hypothesis” (remember we can’t prove things only disprove them) • Usually check to see if you made a Type II error (failed to detect a difference that is really there) • Check the statistical power of your test • Sample size is too small • Effects that you’re looking for are really small • Check your controls, maybe too much variability Non-Significance

  45. CI: μ = X̄ ± (tcrit)(diff expected by chance) • What DOES “confident” mean? • “90% confidence” means that 90% of the interval estimates of this sample size will include the actual population mean • [Figure: many interval estimates plotted around the actual population mean μ; 9 out of 10 intervals contain μ] Using Confidence intervals

  46. CI: μ = X̄ ± (tcrit)(diff expected by chance) • The confidence interval uses the tcrit values that identify the top and bottom tails of the distribution of the test statistic (the upper and lower 2.5%) • A 95% CI is like using a “two-tailed” t-test with α = 0.05 • [Figure: distribution of the test statistic with 2.5% in each tail and 95% of the sample means in the middle] Using Confidence intervals

  47. CI: μ = X̄ ± (tcrit)(diff expected by chance) • Note: How you compute your standard error will depend on your design Using Confidence intervals
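Here is a minimal sketch of that formula for a single sample mean, using the t critical value from SciPy; the scores are made up, and for other designs the standard error term would be computed differently.

```python
import numpy as np
from scipy import stats

scores = np.array([72, 80, 77, 84, 76, 81, 79, 74])   # made-up scores
n = len(scores)
mean = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)    # "diff expected by chance" for one mean

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)   # two-tailed critical value

lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"95% CI for the population mean: [{lower:.1f}, {upper:.1f}]")
```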

  48. Two types are typically used: • Standard error (SE) • the “diff expected by chance” • Confidence intervals (CI) • A range of plausible estimates of the population mean, CI: μ = X̄ ± (tcrit)(diff expected by chance) • Note: Make sure that you label your graphs; let the reader know what your error bars are Error bars
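A minimal Matplotlib sketch of labeled error bars for the two-group memory example, with made-up means and standard errors; the point from the slide is the label saying what the bars represent.

```python
import matplotlib.pyplot as plt

groups = ["Treatment", "No treatment"]
means = [80, 76]          # made-up group means (%)
ses = [1.2, 1.4]          # made-up standard errors

fig, ax = plt.subplots()
ax.bar(groups, means, yerr=ses, capsize=5)      # bars with error bars
ax.set_ylabel("Memory score (%)")
ax.set_title("Group means (error bars show ±1 SE)")  # say what the bars are
plt.show()
```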

  49. 1 factor with two groups • T-tests • Between groups: 2 independent samples • Within groups: Repeated measures samples (matched, related) • 1 factor with more than two groups • Analysis of Variance (ANOVA) (either between groups or repeated measures) • Multi-factorial • Factorial ANOVA Some inferential statistical tests

  50. Formula: T = Observed difference / Difference expected by chance = (X̄1 − X̄2) / (diff expected by chance, based on sampling error) • Design • 2 separate experimental conditions • Degrees of freedom • Based on the size of the sample and the kind of t-test • The computation of the denominator differs for between and within t-tests T-test
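A minimal SciPy sketch of the two kinds of t-test named above, on made-up data: the between-groups version uses ttest_ind (2 independent samples) and the within-groups, repeated-measures version uses ttest_rel.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Between groups: 2 independent samples (df = n1 + n2 - 2)
cond_1 = rng.normal(80, 8, size=20)
cond_2 = rng.normal(76, 8, size=20)
t_ind, p_ind = stats.ttest_ind(cond_1, cond_2)

# Within groups: the same participants measured in both conditions (df = n - 1)
pre = rng.normal(76, 8, size=20)
post = pre + rng.normal(4, 5, size=20)     # each person shifts by ~4 points
t_rel, p_rel = stats.ttest_rel(post, pre)

print(t_ind, p_ind)
print(t_rel, p_rel)
```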
