Chapter 3: Estimation

• Methods of estimation (MOM, least squares, MLE)
• Sample properties of estimators
• Confidence intervals
A. METHODS OF ESTIMATION

1. METHOD OF MOMENTS. Suppose $x_1, x_2, \ldots$ are iid. Fix $k$, a positive integer. Then $x_1^k, x_2^k, \ldots$ are iid, and by the weak law of large numbers

$$\frac{1}{n}\sum_{i=1}^{n} x_i^k \xrightarrow{p} E[x^k],$$

which are the $k$th moments about the origin. Define $m_k = E[x^k]$ as the $k$th moment of $x$, so the sample moments $\hat{m}_k = \frac{1}{n}\sum_{i=1}^{n} x_i^k$ converge to $m_k$. The method of moments equates as many sample moments to their population counterparts as there are unknown parameters and solves for those parameters.

2. LEAST SQUARES. We have an unknown mean $\mu$. Our estimator will be $\hat{\mu}$. Take each $x_i$, subtract its predicted value (i.e., its mean), square the difference, and add up the squared differences:

$$S(\hat{\mu}) = \sum_{i=1}^{n}(x_i - \hat{\mu})^2.$$

Minimizing $S$ gives $\hat{\mu} = \bar{x}$. This is nothing more than the sample mean, which we know to be the least squares estimator of $\mu$ for any distribution.

3. MAXIMUM LIKELIHOOD ESTIMATORS. The MLE is the parameter value that maximizes the likelihood of the observed sample, $L(\theta) = \prod_{i=1}^{n} f(x_i; \theta)$, or equivalently its logarithm.
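To make the three methods concrete, here is a minimal sketch in Python. The setup (a Normal($\mu$, $\sigma^2$) sample, the variable names, and the use of SciPy's Nelder-Mead optimizer for the MLE) is illustrative and not from the slides.

```python
# Sketch: MOM, least squares, and MLE on one simulated Normal sample.
# All names (true_mu, neg_log_lik, ...) are illustrative, not from the slides.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_mu, true_sigma = 5.0, 2.0          # the "unknown" parameters
x = rng.normal(true_mu, true_sigma, size=500)

# 1. Method of moments: equate sample moments to population moments.
#    m1 = E[x] = mu,  m2 = E[x^2] = mu^2 + sigma^2.
m1, m2 = np.mean(x), np.mean(x**2)
mom_mu, mom_sigma = m1, np.sqrt(m2 - m1**2)

# 2. Least squares: minimize sum_i (x_i - mu_hat)^2; the solution is x-bar.
ls_mu = np.mean(x)

# 3. Maximum likelihood: maximize the Normal log-likelihood numerically
#    (equivalently, minimize the negative log-likelihood).
def neg_log_lik(params):
    mu, sigma = params
    if sigma <= 0:                       # keep the optimizer in bounds
        return np.inf
    return np.sum(0.5 * np.log(2 * np.pi * sigma**2)
                  + (x - mu)**2 / (2 * sigma**2))

res = minimize(neg_log_lik, x0=[0.0, 1.0], method="Nelder-Mead")
mle_mu, mle_sigma = res.x

print(f"MOM: mu={mom_mu:.3f}, sigma={mom_sigma:.3f}")
print(f"LS : mu={ls_mu:.3f}")
print(f"MLE: mu={mle_mu:.3f}, sigma={mle_sigma:.3f}")
```

All three estimates of $\mu$ coincide here; that agreement is special to the Normal mean, not a general fact.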
B. PROPERTIES OF ESTIMATORS

SMALL SAMPLE PROPERTIES

UNBIASEDNESS: An estimator is said to be unbiased if in the long run it takes on the value of the population parameter. That is, if you were to draw a sample, compute the statistic, and repeat this many, many times, the average over all of the sample statistics would equal the population parameter: $E[\hat{\theta}] = \theta$.

EFFICIENCY: An estimator is said to be efficient if, within the class of unbiased estimators, it has minimum variance.

SUFFICIENCY: We say that an estimator is sufficient if it uses all the sample information. The median, because it considers only rank, is not sufficient. The sample mean considers each member of the sample as well as its size, so it is a sufficient statistic.
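The "repeat many, many times" idea is easy to simulate. In the sketch below, under assumed settings (Normal population, sample size 50, 20,000 replications; none of these come from the slides), the sample mean and the sample median are both unbiased for $\mu$, but the mean has the smaller variance, i.e., it is the more efficient of the two.

```python
# Sketch: unbiasedness and efficiency by simulation.
# Settings (mu, sigma, n, reps) are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 10.0, 3.0, 50, 20_000

samples = rng.normal(mu, sigma, size=(reps, n))
means = samples.mean(axis=1)
medians = np.median(samples, axis=1)

# Both averages should sit on mu (unbiasedness) ...
print(f"avg of sample means  : {means.mean():.4f}  (target mu = {mu})")
print(f"avg of sample medians: {medians.mean():.4f}")
# ... but the mean has the smaller spread (efficiency).
print(f"var of sample means  : {means.var():.4f}")   # about sigma^2/n = 0.18
print(f"var of sample medians: {medians.var():.4f}") # about (pi/2)*sigma^2/n
```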
LARGE SAMPLE PROPERTIES

ASYMPTOTIC UNBIASEDNESS: An estimator is said to be asymptotically unbiased if the following is true:

$$\lim_{n \to \infty} E[\hat{\theta}_n] = \theta.$$

ASYMPTOTIC EFFICIENCY: Define the asymptotic variance as the variance of the limiting distribution of $\sqrt{n}(\hat{\theta}_n - \theta)$. An asymptotically efficient estimator is an unbiased estimator with the smallest asymptotic variance.

CONSISTENCY: A sequence of estimators $\hat{\theta}_n$ is said to be consistent if it converges in probability to the true value of the parameter:

$$\hat{\theta}_n \xrightarrow{p} \theta, \quad \text{i.e.,} \quad \lim_{n \to \infty} P\left(|\hat{\theta}_n - \theta| > \varepsilon\right) = 0 \text{ for every } \varepsilon > 0.$$
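Consistency is also easy to see numerically. In the sketch below (the tolerance $\varepsilon = 0.1$ and all other settings are assumptions, not from the slides), the fraction of replications in which the sample mean misses $\mu$ by more than $\varepsilon$ shrinks toward zero as $n$ grows.

```python
# Sketch: consistency of the sample mean by simulation.
# eps and the grid of sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, eps, reps = 0.0, 1.0, 0.1, 1_000

for n in (10, 100, 1_000, 10_000):
    xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    miss_rate = np.mean(np.abs(xbar - mu) > eps)
    print(f"n={n:>6}: P(|xbar - mu| > {eps}) is about {miss_rate:.3f}")
```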
C. CONFIDENCE INTERVALS

Consider the first sample mean problems we dealt with:

$$P\left(-1.96 \le \frac{\bar{x} - \mu}{\sigma/\sqrt{n}} \le 1.96\right) = 0.95,$$

where $\bar{x} \sim N(\mu, \sigma^2/n)$. This can be rewritten as

$$P\left(\bar{x} - 1.96\,\frac{\sigma}{\sqrt{n}} \le \mu \le \bar{x} + 1.96\,\frac{\sigma}{\sqrt{n}}\right) = 0.95.$$

This is called a confidence interval.

INTERPRETATION: Of all confidence intervals calculated in a similar fashion {95%, n = 81}, we would expect 95% of them to cover $\mu$. $\mu$ does not change; only the position of the interval does. Think of a big barrel containing 1000 different confidence intervals, different because they each use a different value of the random variable. The probability of us reaching in and grabbing a "correct" interval is 95%. But as soon as we break open the capsule and read the numbers, the mean either is in the interval or it isn't.
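The barrel metaphor can be checked by simulation. The sketch below builds many 95% intervals with known $\sigma$ and $n = 81$ (the sample size from the text; the values of $\mu$, $\sigma$, and the replication count are assumptions) and counts how often the fixed true mean is covered.

```python
# Sketch: coverage of the 95% confidence interval for mu (sigma known).
# n = 81 matches the text; mu, sigma, and reps are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma, n, reps = 50.0, 9.0, 81, 10_000
half_width = 1.96 * sigma / np.sqrt(n)   # every interval has the same width

xbar = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
covered = (xbar - half_width <= mu) & (mu <= xbar + half_width)

print(f"interval half-width : {half_width:.3f}")
print(f"coverage proportion : {covered.mean():.4f}  (nominal 0.95)")
```

Each realized interval either covers $\mu$ or it does not; the 95% describes the procedure, not any single interval.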