Lecture 3 Ordinary Least Squares Assumptions, Confidence Intervals, and Statistical Significance
Sampling Terminology
• Parameter
  • fixed, unknown number that describes the population
• Statistic
  • known value calculated from a sample
  • a statistic is often used to estimate a parameter
• Variability
  • different samples from the same population may yield different values of the sample statistic
• Sampling Distribution
  • tells what values a statistic takes and how often it takes those values in repeated sampling
Parameter vs. Statistic
A properly chosen sample of 1600 people across the United States was asked if they regularly watch a certain television program, and 24% said yes. The parameter of interest here is the true proportion of all people in the U.S. who watch the program, while the statistic is the value 24% obtained from the sample of 1600 people.
Parameter vs. Statistic
• The mean of a population is denoted by µ – this is a parameter.
• The mean of a sample is denoted by x̄ – this is a statistic. x̄ is used to estimate µ.
• The true proportion of a population with a certain trait is denoted by p – this is a parameter.
• The proportion of a sample with a certain trait is denoted by p̂ ("p-hat") – this is a statistic. p̂ is used to estimate p.
The Law of Large Numbers
Consider sampling at random from a population with true mean µ. As the number of (independent) observations sampled increases, the mean of the sample x̄ gets closer and closer to the true mean µ of the population (x̄ gets closer to µ).
The Law of Large Numbers: Gambling
• The "house" in a gambling operation is not gambling at all
  • the games are defined so that the gambler has a negative expected gain per play (the true mean gain after all possible plays is negative)
  • each play is independent of previous plays, so the law of large numbers guarantees that the average winnings over a large number of customers will be close to the (negative) true average
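The gambling argument above can be sketched with a minimal simulation. The roulette odds used here (a $1 bet on red in double-zero roulette wins with probability 18/38) are a standard textbook example, not something stated on the slides:

```python
import random

# Law of Large Numbers sketch: a $1 bet on red wins with probability
# 18/38 and loses otherwise, so the gambler's expected gain per play
# is (18 - 20) / 38 = -1/19, roughly -5.3 cents.
def average_winnings(n_plays: int, seed: int = 42) -> float:
    rng = random.Random(seed)
    total = 0
    for _ in range(n_plays):
        total += 1 if rng.random() < 18 / 38 else -1
    return total / n_plays

# As the number of plays grows, the average gain settles near -1/19:
for n in (100, 10_000, 1_000_000):
    print(n, average_winnings(n))
```

With a million plays the average is within a fraction of a cent of the true (negative) mean, which is exactly why the house always comes out ahead in the long run.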
Sampling Distribution • The sampling distribution of a statistic is the distribution of values taken by the statistic in all possible samples of the same size (n) from the same population • to describe a distribution we need to specify the shape, center, and spread • we will discuss the distribution of the sample mean (x-bar).
Case Study Does This Wine Smell Bad? Dimethyl sulfide (DMS) is sometimes present in wine, causing “off-odors”. Winemakers want to know the odor threshold – the lowest concentration of DMS that the human nose can detect. Different people have different thresholds, and of interest is the mean threshold in the population of all adults.
Case Study Does This Wine Smell Bad?
Suppose the mean threshold of all adults is µ = 25 micrograms of DMS per liter of wine, with a standard deviation of σ = 7 micrograms per liter, and the threshold values follow a bell-shaped (normal) curve. Assume we KNOW THE VARIANCE!!!
Where should 95% of all individual threshold values fall?
• mean plus or minus about two standard deviations:
  25 − 2(7) = 11
  25 + 2(7) = 39
• 95% should fall between 11 & 39
• What about the mean (average) of a sample of n adults? What values would be expected?
Sampling Distribution
• What about the mean (average) of a sample of n adults? What values would be expected?
• Answer this by thinking: "What would happen if we took many samples of n subjects from this population?" (let's say that n = 10 subjects make up a sample)
  • take a large number of samples of n = 10 subjects from the population
  • calculate the sample mean (x-bar) for each sample
  • make a histogram of the values of x-bar
  • examine the graphical display for shape, center, spread
Case Study Does This Wine Smell Bad?
Mean threshold of all adults is µ = 25 micrograms per liter, with a standard deviation of σ = 7 micrograms per liter, and the threshold values follow a bell-shaped (normal) curve. Many (1000) repetitions of sampling n = 10 adults from the population were simulated, and the resulting histogram of the 1000 x-bar values is on the next slide.
Case Study Does This Wine Smell Bad?
(histogram of the 1000 simulated x-bar values)
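The simulation behind that histogram can be sketched in a few lines. This is a minimal version of the recipe on the previous slide, assuming the stated Normal(µ = 25, σ = 7) population:

```python
import random
import statistics

# Draw many samples of n = 10 thresholds from a Normal(25, 7)
# population and examine the center and spread of the sample means.
rng = random.Random(0)
MU, SIGMA, N, REPS = 25, 7, 10, 1000

xbars = [statistics.mean(rng.gauss(MU, SIGMA) for _ in range(N))
         for _ in range(REPS)]

print("center of x-bars:", statistics.mean(xbars))   # close to mu = 25
print("spread of x-bars:", statistics.stdev(xbars))  # close to 7/sqrt(10), about 2.21
```

The center lands near µ and the spread near σ/√n, which anticipates the general result stated next.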
Mean and Standard Deviation of Sample Means
If numerous samples of size n are taken from a population with mean µ and standard deviation σ, then the mean of the sampling distribution of x̄ is µ (the population mean) and the standard deviation is σ/√n (σ is the population s.d.).
Mean and Standard Deviation of Sample Means
• Since the mean of x̄ is µ, we say that x̄ is an unbiased estimator of µ.
• Individual observations have standard deviation σ, but sample means from samples of size n have standard deviation σ/√n. Averages are less variable than individual observations.
Sampling Distribution of Sample Means
If individual observations have the N(µ, σ) distribution, then the sample mean x̄ of n independent observations has the N(µ, σ/√n) distribution. (Note: σ is KNOWN.)
"If measurements in the population follow a Normal distribution, then so does the sample mean."
Case Study Does This Wine Smell Bad?
Mean threshold of all adults is µ = 25 with a standard deviation of σ = 7, and the threshold values follow a bell-shaped (normal) curve. (Population distribution)
Central Limit Theorem
If a random sample of size n is selected from ANY population with mean µ and standard deviation σ, then when n is "large" the sampling distribution of the sample mean is approximately Normal: x̄ is approximately N(µ, σ/√n).
"No matter what distribution the population values follow, the sample mean will follow a Normal distribution if the sample size is large."
Central Limit Theorem: Sample Size
• How large must n be for the CLT to hold?
  • depends on how far the population distribution is from Normal
  • the further from Normal, the larger the sample size needed
  • a sample size of 25 or 30 is typically large enough for any population distribution encountered in practice
  • recall: if the population is Normal, any sample size will work (n ≥ 1)
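A quick way to see the CLT at work is to start from a population that is far from Normal. The exponential population used below (strongly right-skewed, mean 25) is an illustrative assumption, not from the slides:

```python
import random
import statistics

# CLT sketch: the Exponential population with mean 25 is strongly
# right-skewed (for an exponential, sigma equals the mean, so 25),
# yet means of samples of size n = 30 cluster symmetrically around
# mu with spread close to sigma/sqrt(n).
rng = random.Random(1)
MU = 25
N, REPS = 30, 5000

xbars = [statistics.mean(rng.expovariate(1 / MU) for _ in range(N))
         for _ in range(REPS)]

print("center:", statistics.mean(xbars))   # near 25
print("spread:", statistics.stdev(xbars))  # near 25/sqrt(30), about 4.56
```

A histogram of these means would look close to bell-shaped even though no individual observation does, which is exactly the claim on this slide.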
Central Limit Theorem: Sample Size and Distribution of x-bar
(panels showing the sampling distribution for n = 1, 2, 10, and 25)
Statistical Inference
• Provides methods for drawing conclusions about a population from sample data
• Confidence Intervals
  • What is the population mean?
• Tests of Significance
  • Is the population mean larger than 66.5? (This would be ONE-SIDED.)
Inference about a Mean: Simple Conditions (will be relaxed)
• SRS from the population of interest
• Variable has a Normal distribution N(µ, σ) in the population
• Although the value of µ is unknown, the value of the population standard deviation σ is known
Confidence Interval
A level C confidence interval has two parts:
• An interval calculated from the data, usually of the form: estimate ± margin of error
• The confidence level C, which is the probability that the interval will capture the true parameter value in repeated samples; that is, C is the success rate for the method.
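The "success rate for the method" interpretation can be checked by simulation. This sketch reuses the wine-threshold population from earlier slides (Normal with µ = 25, σ = 7, samples of n = 10) purely for illustration:

```python
import math
import random
import statistics

# Build the interval x-bar ± 1.96*sigma/sqrt(n) for many samples and
# count how often it captures the true mean mu.
rng = random.Random(2)
MU, SIGMA, N, REPS = 25, 7, 10, 10_000
margin = 1.96 * SIGMA / math.sqrt(N)

hits = 0
for _ in range(REPS):
    xbar = statistics.mean(rng.gauss(MU, SIGMA) for _ in range(N))
    if xbar - margin <= MU <= xbar + margin:
        hits += 1

print("coverage:", hits / REPS)  # close to 0.95
```

Any single interval either contains µ or it does not; the 95% refers to the long-run fraction of intervals, over repeated samples, that do.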
Case Study NAEP Quantitative Scores
Case Study NAEP Quantitative Scores
The 68-95-99.7 rule indicates that x̄ and µ are within two standard deviations (4.2) of each other in about 95% of all samples.
Case Study NAEP Quantitative Scores
So, if we estimate that µ lies within 4.2 of x̄, we'll be right about 95% of the time.
Confidence Interval: Mean of a Normal Population
Take an SRS of size n from a Normal population with unknown mean µ and known std dev σ. A level C confidence interval for µ is:
x̄ ± z* σ/√n
Confidence Interval: Mean of a Normal Population
(looking up the critical value z* for confidence level C)
Case Study NAEP Quantitative Scores
Using the 68-95-99.7 rule gave an approximate 95% confidence interval. A more precise 95% confidence interval can be found using the appropriate value of z* (1.960) with the previous formula. (How to find z* in Table B.2 will be shown in the next lecture.)
We are 95% confident that the average NAEP quantitative score for all adult males is between 267.884 and 276.116.
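The interval above can be reproduced directly from the formula. The slides imply a sample mean x̄ = 272 and a standard deviation of x̄ of σ/√n = 2.1 (so that two standard deviations is the 4.2 quoted earlier); both values are inferred from the interval, not stated explicitly:

```python
# 95% confidence interval: x-bar ± z* * sigma/sqrt(n),
# with x-bar = 272 and sigma/sqrt(n) = 2.1 (inferred, see above).
Z_STAR = 1.960
xbar, se = 272, 2.1

lo, hi = xbar - Z_STAR * se, xbar + Z_STAR * se
print(f"95% CI: ({lo:.3f}, {hi:.3f})")  # (267.884, 276.116)
```

The margin of error 1.960 × 2.1 = 4.116 is slightly smaller than the rough "2 standard deviations = 4.2" from the 68-95-99.7 rule.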
Population of individual subjects x has mean µ; sample means are each based on n subjects. If the population is normally distributed N(µ, σ), so will be the sampling distribution, N(µ, σ/√n). But the sampling distribution is narrower than the population distribution, by a factor of √n. Thus, the estimates gained from our samples are always relatively close to the population parameter µ.
Ninety-five percent of all sample means will be within roughly 2 standard deviations (2·σ/√n) of the population parameter µ. Because distances are symmetrical, this implies that the population parameter µ must be within roughly 2 standard deviations of the sample average x̄, in 95% of all samples. This reasoning is the essence of statistical inference.
Hypothesis Testing
• Start by explaining the case when σ is known
• Moving to unknown σ should then be straightforward
Stating Hypotheses: Null Hypothesis, H0
• The statement being tested in a statistical test is called the null hypothesis.
• The test is designed to assess the strength of evidence against the null hypothesis.
• Usually the null hypothesis is a statement of "no effect" or "no difference", or it is a statement of equality.
• When performing a hypothesis test, we assume that the null hypothesis is true until we have sufficient evidence against it.
Stating Hypotheses: Alternative Hypothesis, Ha
• The statement we are trying to find evidence for is called the alternative hypothesis.
• Usually the alternative hypothesis is a statement of "there is an effect" or "there is a difference", or it is a statement of inequality.
• The alternative hypothesis should express the hopes or suspicions we bring to the data. It is cheating to first look at the data and then frame Ha to fit what the data show.
One-sided and two-sided tests
• A two-tail or two-sided test of the population mean has these null and alternative hypotheses:
  H0: µ = [a specific number]   Ha: µ ≠ [a specific number]
• A one-tail or one-sided test of a population mean has these null and alternative hypotheses:
  H0: µ = [a specific number]   Ha: µ < [a specific number]
  OR
  H0: µ = [a specific number]   Ha: µ > [a specific number]
The FDA tests whether a generic drug has an absorption extent similar to the known absorption extent of the brand-name drug it is copying. Higher or lower absorption would both be problematic, thus we test:
  H0: µgeneric = µbrand   Ha: µgeneric ≠ µbrand   (two-sided)
The P-value
The packaging process has a known standard deviation σ = 5 g. We test H0: µ = 227 g versus Ha: µ ≠ 227 g. The average weight from your four random boxes is 222 g. What is the probability of drawing a random sample such as yours if H0 is true?
Tests of statistical significance quantify the chance of obtaining a particular random sample result if the null hypothesis were true. This quantity is the P-value. This is a way of assessing the "believability" of the null hypothesis given the evidence provided by a random sample.
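For the packaging example, the P-value can be computed directly, since σ is known. This is a minimal sketch using the standard Normal distribution (via the error function, so no external libraries are needed):

```python
import math

# Packaging example: n = 4 boxes, x-bar = 222 g, sigma = 5 g,
# testing H0: mu = 227 g against a two-sided alternative.
def normal_cdf(z: float) -> float:
    """Standard Normal CDF expressed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

n, xbar, sigma, mu0 = 4, 222, 5, 227
z = (xbar - mu0) / (sigma / math.sqrt(n))   # = -2.0
p_value = 2 * normal_cdf(-abs(z))           # two-sided P-value

print(f"z = {z:.1f}, P-value = {p_value:.4f}")
```

The sample mean sits 2 standard deviations below 227 g, giving a two-sided P-value of about 0.0455, the 4.56% quoted on the significance-level slide.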
Interpreting a P-value
Could random variation alone account for the difference between the null hypothesis and observations from a random sample?
• A small P-value implies that random variation due to the sampling process alone is not likely to account for the observed difference.
• With a small P-value, we reject H0. The true property of the population is significantly different from what was stated in H0.
Thus small P-values are strong evidence AGAINST H0. But how small is small…?
Significant P-value???
(panels showing shaded tail areas for P = 0.2758, 0.1711, 0.0892, 0.0735, with reference levels P = 0.05 and P = 0.01)
When the shaded area becomes very small, the probability of drawing such a sample at random gets very slim. Oftentimes, a P-value of 0.05 or less is considered significant: the phenomenon observed is unlikely to be entirely due to a chance event from the random sampling.
The significance level α
The significance level, α, is the largest P-value tolerated for rejecting a true null hypothesis (how much evidence against H0 we require). This value is decided arbitrarily before conducting the test.
• If the P-value is equal to or less than α (p ≤ α), then we reject H0.
• If the P-value is greater than α (p > α), then we fail to reject H0.
Does the packaging machine need revision? Two-sided test; the P-value is 4.56%.
• If α had been set to 5%, then the P-value would be significant.
• If α had been set to 1%, then the P-value would not be significant.
Implications
We don't need to take lots of random samples to "rebuild" the sampling distribution and find µ at its center. All we need is one SRS of size n; we rely on the properties of the sampling distribution of the sample mean to infer the population mean µ. THE WHOLE POINT OF THIS!!!!
If σ is Estimated
• Usually we do not know σ. When it is estimated, we have to use the t-distribution, whose shape depends on the sample size.
• When estimating σ with the sample standard deviation s, the t-distribution approaches the normal curve as the sample size increases.
Conditions for Inference about a Mean
• Data are from an SRS of size n.
• Population has a Normal distribution with mean µ and standard deviation σ.
• Both µ and σ are usually unknown.
  • we use inference to estimate µ.
  • Problem: σ unknown means we cannot use the z procedures previously learned.
Standard Error
• When we do not know the population standard deviation σ (which is usually the case), we must estimate it with the sample standard deviation s.
• When the standard deviation of a statistic is estimated from data, the result is called the standard error of the statistic.
• The standard error of the sample mean is s/√n.
One-Sample t Statistic
• When we estimate σ with s, our one-sample z statistic becomes a one-sample t statistic:
  t = (x̄ − µ0) / (s/√n)
• By changing the denominator to the standard error, our statistic no longer follows a Normal distribution. The t test statistic follows a t distribution with k = n − 1 degrees of freedom.
The t Distributions
• The t density curve is similar in shape to the standard Normal curve. They are both symmetric about 0 and bell-shaped.
• The spread of the t distributions is a bit greater than that of the standard Normal curve (i.e., the t curve is slightly "fatter").
• As the degrees of freedom k increase, the t(k) density curve approaches the N(0, 1) curve more closely. This is because s estimates σ more accurately as the sample size increases.
Critical Values from T-Distribution
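A few two-sided 95% critical values t* illustrate the convergence described above. The values are standard t-table entries (shown here in lieu of the table slide); the dictionary layout is just an illustrative choice:

```python
# Two-sided 95% critical values t* from a standard t table,
# indexed by degrees of freedom.
T_STAR_95 = {
    1: 12.706,
    2: 4.303,
    5: 2.571,
    9: 2.262,
    29: 2.045,
}
Z_STAR_95 = 1.960  # the Normal limit as degrees of freedom grow

# As the degrees of freedom increase, t* shrinks toward z* = 1.960,
# matching the fact that t(k) approaches N(0, 1).
for k, t_star in sorted(T_STAR_95.items()):
    print(f"df = {k:2d}: t* = {t_star}")
```

With only 1 degree of freedom the interval must stretch to ±12.7 standard errors, but by df = 29 the critical value is already within 0.1 of the Normal 1.960.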