
Presentation Transcript


  1. [Seating chart for the lecture room – Physics-Atmospheric Sciences (PAS), Room 201.]

  2. MGMT 276: Statistical Inference in Management, Fall 2015. Welcome.

  3. Schedule of readings – before our next exam (November 10th): OpenStax Chapters 1–10 and Chapter 13; Plous (2, 3, & 4) – Chapter 2: Cognitive Dissonance, Chapter 3: Memory and Hindsight Bias, Chapter 4: Context Dependence.

  4. Homework due – Thursday (October 29th). On the class website: please print and complete homework worksheet #12, Hypothesis Testing with z-tests and t-tests.

  5. Stats Review for Exam 2 by Jonathon & Nick. When: Monday evening, October 19th, 6:30 – 7:30pm. Room: ILC 120. Cost: $5.00. Which of the following best describes your experience with the review session? a. It was a very helpful review session b. It was only okay c. It was not a helpful review session d. I did not attend this review

  6. By the end of lecture today (10/27/15): Logic of hypothesis testing. Steps for hypothesis testing. Levels of significance (levels of alpha) – what does p < 0.05 mean? what does p < 0.01 mean? Hypothesis testing with z-scores and t-scores (one sample). Hypothesis testing with t-scores (two independent samples). Constructing brief, complete summary statements.

  7. Exam 2 – it went really well! Thanks for your patience and cooperation. The grades are posted.

  8. Remember… In a negatively skewed distribution: mean < median < mode. Example: mode = 87.5 (tallest point), median = 85 (middle score), mean = 82 (balance point). [Figure: frequency histogram of exam scores with the mean, median, and mode marked; the y-axis is always labeled "frequency" – remember to include labels and numbers.]

  9. Five steps to hypothesis testing. Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses. Step 2: Decision rule • Alpha level? (α = .05 or .01)? • One- or two-tailed test? • Balance between Type I versus Type II error • Critical statistic (e.g. z or t or F or r) value? Step 3: Calculations. Step 4: Make the decision whether or not to reject the null hypothesis – if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion – tie the findings back into the research problem.
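
The decision rule (Step 2) and the decision itself (Step 4) lend themselves to a quick numerical check. Below is a minimal sketch in Python with scipy – not part of the course materials, and the observed value of 2.03 is just the figure from the worked example later in the deck used as an illustration.

```python
from scipy import stats

alpha = 0.05

# Step 2: decision rule -- two-tailed critical values at alpha = .05
critical_z = stats.norm.ppf(1 - alpha / 2)        # about 1.96
critical_t = stats.t.ppf(1 - alpha / 2, df=24)    # about 2.064 for df = 24

# Step 4: compare a hypothetical observed statistic to the critical value
observed_t = 2.03
reject_null = abs(observed_t) > critical_t
print(critical_z, critical_t, reject_null)        # 1.959..., 2.063..., False
```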

  10. Degrees of Freedom. We lose one degree of freedom for every parameter we estimate. Degrees of freedom (d.f.) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.

  11. A note on z scores and t scores: the numerator is always the distance between the means (how far apart the distributions are, or the "effect size"); the denominator is always a measure of variability (how wide the curves are, or how much overlap there is between the distributions). [Figure: ratio of the difference between means to the variability of the curve(s) (within-group variability).]

  12. A note on variability versus effect size. [Figure: difference between means shown relative to the variability of the curve(s) (within-group variability).]

  13. A note on variability versus effect size. [Figure: the same ratio – difference between means over the variability of the curve(s) (within-group variability).]

  14. Effect size is considered relative to the variability of the distributions: (1) larger variance makes it harder to find a significant difference; (2) smaller variance makes it easier to find a significant difference. [Figure: the same treatment effect shown against wide versus narrow curves.]

  15. Effect size is considered relative to the variability of the distributions. [Figure: treatment effect = difference between means, judged against the variability of the curve(s) (within-group variability).]
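
To make "effect size relative to variability" concrete, here is a small sketch in Python. The summary numbers are hypothetical (loosely echoing the 30- versus 24-minute means used later in the deck): the same difference between means yields a much smaller t when the within-group variability is larger.

```python
import math

def two_sample_t(mean1, mean2, s1, s2, n1, n2):
    """Pooled-variance t for two independent samples."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (mean1 - mean2) / se

# Same 6-point difference between means, different within-group variability
# (all numbers hypothetical).
print(two_sample_t(30, 24, s1=4,  s2=4,  n1=10, n2=10))   # ~3.35: stands out clearly
print(two_sample_t(30, 24, s1=12, s2=12, n1=10, n2=10))   # ~1.12: buried in the overlap
```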

  16. Five steps to hypothesis testing. Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses. (How is a t score different from a z score? Population versus sample standard deviation.) Step 2: Decision rule: find the "critical z" score • Alpha level? (α = .05 or .01)? • One- versus two-tailed test. Step 3: Calculations. Step 4: Make the decision whether or not to reject the null hypothesis – if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion – tie the findings back into the research problem.

  17. Comparing z-score distributions with t-score distributions. Similarities: both use bell-shaped distributions to make confidence interval estimates and decisions in hypothesis testing, and both use a table to find areas under the curve (a different table, though – the areas often differ from those for z scores). Summary of the 2 main differences for t-scores: (1) we are now estimating the standard deviation from the sample (we don't know the population standard deviation); (2) we have to deal with degrees of freedom.

  18. Comparing z-score distributions with t-score distributions. Differences include: (1) we use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample; (2) the shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.

  19. Comparing z-score distributions with t-score distributions. Differences include: (1) we use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. Critical t (just like critical z) separates common from rare scores; critical t is used to define both the common scores (the "confidence interval") and the rare scores (the "region of rejection").

  20. Comparing z-score distributions with t-score distributions. Differences include: (1) we use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample; (2) the shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.

  21. Comparing z-score distributions with t-score distributions. Please note: once sample sizes get big enough, the t distribution (curve) starts to look exactly like the z distribution (curve). Differences include: (1) we use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample; (2) the shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n); (3) because the shape changes, the relationship between the scores and the proportions under the curve changes (so we would need a different table for every possible n – just the important ones are summarized in our t-table).
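
One quick way to see the "thicker tails" claim is to compare the area beyond the same cutoff under several t curves and under the z curve. A minimal sketch, assuming Python with scipy (illustrative only, not part of the course):

```python
from scipy import stats

# Area beyond the same cutoff (+2.0) in one tail, for several t curves and for z
for df in (4, 10, 24, 100):
    print(f"t, df={df:3d}: P(T > 2.0) = {stats.t.sf(2.0, df):.4f}")
print(f"z (normal): P(Z > 2.0) = {stats.norm.sf(2.0):.4f}")
# Smaller df -> more area out in the tail (thicker tails); large df approaches z.
```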

  22. A quick revisit of the law of large numbers – the relationship between increased sample size, decreased variability, and smaller "critical values": as n goes up, variability goes down.

  23. Law of large numbers: as the number of measurements increases, the data become more stable and a better approximation of the true signal (e.g. the mean). As the number of observations (n) increases, or the number of times the experiment is performed, the signal becomes clearer (the static cancels out). With only a few people, any little error is noticeable (it becomes exaggerated when we look at the whole group); with many people, any little error is corrected (it becomes minimized when we look at the whole group). http://www.youtube.com/watch?v=ne6tB2KiZuk
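
A small simulation makes the same point. This sketch uses Python with numpy; the "true" mean of 74 and spread of 6 are made-up values chosen only to echo the exam example later in the deck. The running sample mean settles toward the true signal as n grows.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean = 74                      # hypothetical "true signal"
scores = rng.normal(loc=true_mean, scale=6, size=10_000)

# Running sample mean: noisy for small n, stable for large n
for n in (5, 25, 100, 1_000, 10_000):
    print(f"n = {n:>6}: sample mean = {scores[:n].mean():.2f}")
```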

  24. Interpreting the t-table. We use degrees of freedom (df) to approximate sample size. Technically, we have a different t-distribution for each sample size; the t-table summarizes the most useful values for those distributions, organized by degrees of freedom. Each curve is based on its own degrees of freedom (df) – based on sample size – and has its own relationship between t-scores and area under the curve. [Figure: t curves for n = 17 and n = 5.] Remember these useful values for z-scores? 1.64, 1.96, 2.58.

  25. [t-table excerpt – columns: df, area between two scores, area beyond two scores (out in the tails), and area in each tail.]

  26. [t-table excerpt – columns: df, area between two scores, area beyond two scores (out in the tails), and area in each tail.] Notice that with a large sample size the values are the same as the z-score values. Remember these useful values for z-scores? 1.64, 1.96, 2.58.
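
The same t-table rows can be generated directly, which also shows the convergence to the familiar z values 1.64, 1.96, and 2.58 as df gets large. A sketch assuming Python with scipy (illustrative, not a course requirement):

```python
from scipy import stats

# Rebuild a few rows of the t-table: two-tailed critical values at .10, .05, .01
print("    df  t(.10)  t(.05)  t(.01)")
for df in (4, 16, 24, 100, 10_000):
    row = [stats.t.ppf(1 - a / 2, df) for a in (0.10, 0.05, 0.01)]
    print(f"{df:>6}  " + "  ".join(f"{value:.3f}" for value in row))
# With very large df the rows approach the z values 1.645, 1.960, 2.576.
```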

  27. Degrees of Freedom Degrees of Freedom (d.f.) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.

  28. Pop Quiz – Part 1: Standard deviation and variance, for sample and population. These would be helpful to know by heart – please memorize these formulas.

  29. Pop Quiz – Part 1: Standard deviation and variance, for sample and population. Part 2: When we move from a two-tailed test to a one-tailed test, what happens to the critical z score (bigger or smaller)? – Draw a picture – What effect does this have on the hypothesis test (easier or harder to reject the null)?

  30. Pop Quiz – Part 3. 1. When do we use a t-test and when do we use a z-test? (Be sure to write out the formulae.) 2. How many steps are there in hypothesis testing? (What are they?) 3. What is our formula for degrees of freedom in a one-sample t-test? 4. We lose one degree of freedom for every ________________. 5. What are the three parts to the summary (below)? "The mean response time following the sheriff's new plan was 24 minutes, while the mean response time prior to the new plan was 30 minutes. A t-test was completed and there appears to be no significant difference in the response time following the implementation of the new plan, t(9) = -1.71; n.s."

  31. Pop Quiz – Standard deviation and variance, for sample and population. Part 2: When we move from a two-tailed test to a one-tailed test, what happens to the critical z score (bigger or smaller)? – Draw a picture – The critical value gets smaller. What effect does this have on the hypothesis test (easier or harder to reject the null)? It gets easier to reject the null.
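
The "critical value gets smaller" answer is easy to verify numerically. A minimal sketch (Python with scipy, illustrative only):

```python
from scipy import stats

alpha = 0.05
critical_two_tailed = stats.norm.ppf(1 - alpha / 2)   # ~1.96 (alpha split across both tails)
critical_one_tailed = stats.norm.ppf(1 - alpha)       # ~1.64 (all of alpha in one tail)
print(critical_two_tailed, critical_one_tailed)
# The one-tailed cutoff is smaller, so the same observed z clears it more easily.
```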

  32. Pop Quiz / Writing Assignment. 1. When do we use a t-test and when do we use a z-test? (Be sure to write out the formulae.) Use the t-test when you don't know the standard deviation of the population and therefore have to estimate it using the standard deviation of the sample (population versus sample standard deviation).

  33. Five steps to hypothesis testing. Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses. (How is a t score similar to a z score? Same logic and same steps. How is a t score different from a z score?) Step 2: Decision rule: find the "critical z" score • Alpha level? (α = .05 or .01)? • One- versus two-tailed test. Step 3: Calculate the observed z score. Step 4: Compare the "observed z" with the "critical z" – if observed z > critical z, then reject the null (p < 0.05 and we have a significant finding). Step 5: Conclusion – tie the findings back into the research problem.

  34. Writing Assignment. 3. What is our formula for degrees of freedom in a one-sample t-test? One-sample t-test: degrees of freedom (df) = n – 1. Two-sample t-test (first sample and second sample): degrees of freedom (df) = (n1 – 1) + (n2 – 1). 4. We lose one degree of freedom for every parameter we estimate. Use the word "parameter" when describing a whole population (not just a sample). Usually we don't know about the whole population, so we have to guess by using what we know about our sample. A shorthand way to let the reader know we are describing a population (a parameter) is to use a Greek letter – for example, σ for the population standard deviation and s for the sample standard deviation. In a t-test we never know the population standard deviation (the parameter σ); we have to estimate this one parameter (using s), so we lose one df and our degrees of freedom is n – 1.
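
The two degrees-of-freedom formulas can be written as one-liners; the n = 25 case matches the chemistry-exam example that follows, while the two sample sizes of 10 are hypothetical. A sketch in Python, for illustration only:

```python
def df_one_sample(n):
    """One-sample t-test: df = n - 1 (one parameter, sigma, estimated by s)."""
    return n - 1

def df_two_sample(n1, n2):
    """Two independent samples: df = (n1 - 1) + (n2 - 1)."""
    return (n1 - 1) + (n2 - 1)

print(df_one_sample(25))       # 24, as in the chemistry-exam example below
print(df_two_sample(10, 10))   # 18 (hypothetical sample sizes)
```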

  35. Writing Assignment. 5. What are the three parts to the summary (below)? (1) Start the summary with the two means (based on the DV) for the two levels of the IV. (2) Describe the type of test (t-test versus ANOVA) with a brief overview of the results. (3) Finish with the statistical summary, e.g. t(4) = 1.96; n.s. – the type of test with degrees of freedom, plus the value of the observed statistic (n.s. = "not significant"; p < 0.05 = "significant"). Or, if it *were* significant: t(9) = 3.93; p < 0.05. Example summary: "The mean response time following the sheriff's new plan was 24 minutes, while the mean response time prior to the new plan was 30 minutes. A t-test was completed and there appears to be no significant difference in the response time following the implementation of the new plan, t(9) = -1.71; n.s."

  36. Hypothesis testing: one-sample t-test. Let's jump right in and do a t-test. Is the mean of my observed sample consistent with the known population mean, or did it come from some other distribution? We are given the following problem: 800 students took a chemistry exam. Accidentally, 25 students got an additional ten minutes. Did this extra time make a significant difference in the scores? The average number correct for the large class was 74. The scores for the sample of 25 were: 76, 72, 78, 80, 73, 70, 81, 75, 79, 76, 77, 79, 81, 74, 62, 95, 81, 69, 84, 76, 75, 77, 74, 72, 75. Please note: in this example we are comparing our sample mean with the population mean (one-sample t-test).

  37. Hypothesis testing. Step 1: Identify the research problem / hypothesis – did the extra time given to this sample of students affect their chemistry test scores? Describe the null and alternative hypotheses: H0: µ = 74; H1: µ ≠ 74. One-tailed or two-tailed test?

  38. Hypothesis testing. Step 2: Decision rule: α = .05, n = 25, degrees of freedom (df) = (n – 1) = (25 – 1) = 24, two-tailed test. (The earlier critical values were for z scores; we use a different table for t-tests.)

  39. Two-tailed test, α = .05, df = 24: critical t(24) = 2.064.

  40. Hypothesis testing. Step 3: Calculations. µ = 74, N = 25. x̄ = Σx / N = 1911 / 25 = 76.44. The deviations from the mean sum to zero, Σ(x – x̄) = 0, and the sum of squared deviations is Σ(x – x̄)² = 868.16, so s = √(868.16 / 24) = 6.01. [Table: each of the 25 scores with its deviation from 76.44 and squared deviation.]

  41. Hypothesis testing. Step 3: Calculations (continued). µ = 74, N = 25, x̄ = 76.44, s = 6.01. t = (x̄ – µ) / (s / √N) = (76.44 – 74) / (6.01 / √25) = 2.44 / 1.20 = 2.03. Observed t(24) = 2.03.

  42. Hypothesis testing. Step 4: Make the decision whether or not to reject the null hypothesis. Observed t(24) = 2.03; critical t(24) = 2.064. Since 2.03 is not farther out on the curve than 2.064, we do not reject the null hypothesis. Step 5: Conclusion: the extra time did not have a significant effect on the scores.

  43. Hypothesis testing: Did the extra time given to these 25 students affect their average test score? Start the summary with the two means (based on the DV) for the two levels of the IV – notice we are comparing a sample mean with a population mean (single-sample t-test). Describe the type of test (t-test versus z-test) with a brief overview of the results. Finish with the statistical summary: t(24) = 2.03; n.s. (or, if the results *were* significant, something like t(24) = -5.71; p < 0.05). Full summary: "The mean score for those students who were given extra time was 76.44 percent correct, while the mean score for the rest of the class was only 74 percent correct. A t-test was completed and there appears to be no significant difference in the test scores for these two groups, t(24) = 2.03; n.s." (n.s. = "not significant"; p < 0.05 = "significant"; the statistical summary gives the type of test with degrees of freedom and the value of the observed statistic.)
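
For anyone who wants to check the arithmetic, the whole worked example can be reproduced in a few lines. A sketch assuming Python with scipy (not part of the course; scipy's ttest_1samp is included only as a cross-check of the hand calculation):

```python
import math
from scipy import stats

scores = [76, 72, 78, 80, 73, 70, 81, 75, 79, 76,
          77, 79, 81, 74, 62, 95, 81, 69, 84, 76,
          75, 77, 74, 72, 75]
mu = 74                                                          # class (population) mean
n = len(scores)                                                  # 25
x_bar = sum(scores) / n                                          # 76.44
s = math.sqrt(sum((x - x_bar) ** 2 for x in scores) / (n - 1))   # ~6.01

observed_t = (x_bar - mu) / (s / math.sqrt(n))                   # ~2.03
critical_t = stats.t.ppf(0.975, df=n - 1)                        # ~2.064 for df = 24

print(f"t({n - 1}) = {observed_t:.2f}, critical t = {critical_t:.3f}")
print("reject null" if abs(observed_t) > critical_t else "do not reject null")

# Cross-check with scipy's built-in one-sample t-test
print(stats.ttest_1samp(scores, popmean=mu))
```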

  44. Thank you! See you next time!!
