Lecture 7: The Regression Model. February 3rd, 2014
Question Using the same data we estimate the following three models: • Y = β0 + β1 ln(X) • ln(Y) = β0 + β1 X • ln(Y) = β0 + β1 ln(X) We can compare the r2 from: • models 1 and 2 • models 2 and 3 • models 1 and 3 • none of the models • all of the models ---------------------------------------------------------------------------------------------- Also: Open ClassData/pet_food.csv and estimate the price elasticity using Amount of Sales and Price charged (will be useful in a couple of slides)
Administrative • Quiz 1 scores on blackboard. • Overall, not bad. • Quiz 2 scores up soon. • Problem set 3 – how was it? • Problem set 4 up; due next Monday. • Exam 1 next Wednesday (Feb 12) • Haven't written it yet… Probably 5 multi-part data analysis questions • Open note / open book. No electronic notes (phones, etc). • I'll probably ask you to leave your bag and phone in the front of the room and to spread out (easier for me to move around and answer questions)
Example • Data: pet_food.csv • Assume you are a monopolist and had control of market prices. Also assume that all of the data in the pet_food.csv came from an experiment your company controlled. Using the information on Amount of Sales and Price charged, determine the optimal price of your company’s pet food, when the marginal cost of the pet food is $0.75 • 0.85 • 1.14 • 1.27 • 1.68 • I don’t know.
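The pet_food.csv file isn't reproduced here, so the sketch below uses synthetic price/sales data with a known elasticity of −3 to illustrate the method: estimate the elasticity with a log-log regression, then apply the constant-elasticity monopoly markup rule p* = mc·e/(e + 1), which is valid when e < −1. All numbers are invented for illustration, not the slide's answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for pet_food.csv (hypothetical data; the true
# elasticity here is chosen to be -3).
n = 500
price = rng.uniform(0.8, 2.0, n)
sales = np.exp(10 - 3 * np.log(price) + rng.normal(0, 0.1, n))

# Log-log regression: the slope is the price elasticity of demand.
b1, b0 = np.polyfit(np.log(price), np.log(sales), 1)

# Monopoly markup rule (Lerner condition): p* = mc * e / (e + 1), e < -1.
mc = 0.75
p_star = mc * b1 / (b1 + 1)
print(round(b1, 2), round(p_star, 3))
```

With a true elasticity of −3 and mc = 0.75, the rule gives p* = 0.75·(−3)/(−2) = 1.125; the estimate lands close to that.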
Warning:Estimating Demand (Supply) • We’re still introducing concepts so I don’t want to jump too far ahead, but… • If market prices are a function of demand and supply, we have to be a little careful. • Unless we’re willing to make certain assumptions we often run into what is called an “identification problem.” • Too many equations to estimate from the data we have • We’ll deal with this issue later (hopefully), but I want to warn you now • You can’t estimate the demand function from prices alone without knowing something about the supply function. • Estimating demand functions is a large area in applied Economics/Econometrics. Can be tricky…
Last time • Log Transformations. When? • Rules of thumb: • Y = b0 + b1 ln(X) • A 1% change in X is associated with a change in Y of about 0.01·b1 • The intercept b0 is the mean of Y when X = 1 (since ln(1) = 0) • ln(Y) = b0 + b1 X • A change in X by 1 unit is associated with about a 100·b1 % change in Y • Known as a log-linear regression • ln(Y) = b0 + b1 ln(X) • A 1% change in X is associated with about a b1 % change in Y • Elasticity of Y with respect to X • Known as a log-log regression.
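These rules of thumb can be checked numerically. (The precise log-linear statement is that a 1-unit change in X multiplies Y by exp(b1), roughly a 100·b1 percent change when b1 is small.) The coefficients below are invented for illustration:

```python
import math

# Hypothetical coefficients, chosen for illustration.
b0, b1 = 2.0, 0.05

# Log-linear model: ln(Y) = b0 + b1*X.
def y_loglin(x):
    return math.exp(b0 + b1 * x)

# A 1-unit change in X -> Y changes by exp(b1) - 1, close to b1 = 5%.
pct_change = y_loglin(11) / y_loglin(10) - 1
print(round(pct_change, 4))   # 0.0513, close to b1 = 0.05

# Log-log model: ln(Y) = b0 + b1*ln(X), with elasticity 1.5.
def y_loglog(x, elasticity=1.5):
    return math.exp(b0 + elasticity * math.log(x))

# A 1% change in X -> roughly an elasticity-sized percent change in Y.
pct = y_loglog(101) / y_loglog(100) - 1
print(round(pct, 4))          # about 0.015, i.e. 1.5% for a 1% change in X
```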
Simple Regression Model (SRM) • The SRM: what happens to Y, on average, given our information on X? • E(Y|X=x) often denoted μy|x • The SRM shows that this mean, E(Y|X=x) is just β0 + β1x • The data we observe (and use to estimate the slope & intercept) is a sample from the population. • The intercept & slope (b0, b1) are estimates of population parameters (β0,β1)
SRM and the errors • The deviations from this conditional mean are the "errors," ε. • The residuals from our fitted line are estimates of these errors. • The SRM assumes the ε are: • Normally distributed with mean zero, so E(ε) = 0 given X • Independent: εi is independent of εj • Equal variance: the amount of error doesn't depend on x
Data Generating Process • The assumed true model is y = β0 + β1x + ε • The SRM assumes a normal distribution of errors at each x
Data Generating Process • Draws from these normal distributions generate our data • Note the difference in notation: Greek letters (β0, β1, ε) for the population model, Roman letters (b0, b1, e) for the sample estimates
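A minimal simulation of this data generating process, with assumed population parameters β0 = 5, β1 = 2, σ = 1: at each x, y is drawn from a normal distribution centered on the true line, and the sample estimates b0, b1 recover the β's.

```python
import numpy as np

rng = np.random.default_rng(42)
beta0, beta1, sigma = 5.0, 2.0, 1.0   # population parameters (Greek)

# SRM data generating process: normal errors around the true line.
x = rng.uniform(0, 10, 200)
eps = rng.normal(0, sigma, 200)       # iid, mean 0, equal variance
y = beta0 + beta1 * x + eps

# Sample estimates (Roman) approximate the population parameters.
b1, b0 = np.polyfit(x, y, 1)
print(round(b0, 2), round(b1, 2))
```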
SRM • Take away: Observed values of the response Y are linearly related to the values of the explanatory variable X by the equation: y = β0 + β1x + ε, where ε ~ N(0, σ²) • The observations are independent of one another, have equal variance around the regression line, and are normally distributed around the regression line.
Conditions for the SRM Before beginning, ask two questions: • Does a linear relationship make sense? • Are there any omitted variables? Then begin working with data. Check list: • Is the association between y and x linear? • Have omitted/lurking variables been ruled out? • Are the errors evidently independent? • Are the variances of the residuals similar? • Are the residuals nearly normal?
Statistical Inference: Why do we care? • I said this class is about using data to make better decisions, so why do we care about statistical inference? • Suppose you estimate a simple regression model predicting sales from the number of advertisements run: • PredictedSales = 37 + 2.5·Ads • Your boss asks you if the advertisements are a good use of the company's resources. What do you say?
Inference in Regression Standard Errors: • b0 and b1 are estimated from data that can vary from sample to sample. • The estimated standard error of b1 is se(b1) = se / (sx·√(n−1)), where se is the standard deviation of the residuals and sx the standard deviation of x • Influenced by: • Standard deviation of the residuals: as it increases, the standard error increases. • Sample size: as it increases, the standard error decreases. • Standard deviation of x: as it increases, the standard error decreases.
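The standard formula se(b1) = se / (sx·√(n−1)) can be sanity-checked against the empirical spread of slopes across many simulated samples. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def se_b1(x, y):
    """Estimated standard error of the slope: s_e / (s_x * sqrt(n-1))."""
    n = len(x)
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    s_e = np.sqrt(np.sum(resid**2) / (n - 2))   # residual std deviation
    s_x = np.std(x, ddof=1)
    return s_e / (s_x * np.sqrt(n - 1))

# Empirical spread of b1 over many simulated samples (true slope 0.5).
slopes = []
for _ in range(2000):
    x = rng.uniform(0, 10, 50)
    y = 3 + 0.5 * x + rng.normal(0, 2, 50)
    slopes.append(np.polyfit(x, y, 1)[0])
print(round(np.std(slopes), 3))   # empirical sd of the slope estimates

# Formula estimate from a single sample: similar magnitude.
x = rng.uniform(0, 10, 50)
y = 3 + 0.5 * x + rng.normal(0, 2, 50)
print(round(se_b1(x, y), 3))
```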
Inference in Regression Confidence Intervals: The 95% confidence interval for β1 is b1 ± t0.025,n−2 · se(b1) The 95% confidence interval for β0 is similar, except built from b0 and se(b0) But for n > 30 the t critical value is close to 1.96, and we often just use 2
t-statistics • Imagine we've estimated: PredictedSales = −1.34 + 0.237·Traffic • Does traffic have an effect on sales? • Maybe. Maybe not. How confident are you? • We want to look at the confidence intervals (and actually do a hypothesis test) • To test if βi = 0 use t = bi / se(bi) • I.e., if we knew that se(b1) = 0.0243 • t = 0.237 / 0.0243 ≈ 9.75 • In general, t = (b − hypothesized value) / se(b): how many SEs the estimate is away from that value (typically 0)
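Plugging in the slide's numbers (b1 = 0.237, se(b1) = 0.0243) gives the t-statistic and a 95% confidence interval:

```python
# Slide's estimates: slope and its standard error.
b1, se_b1 = 0.237, 0.0243

# t-statistic: how many standard errors the estimate is from 0.
t = b1 / se_b1

# 95% confidence interval for beta1 (large-sample critical value 1.96).
ci_low, ci_high = b1 - 1.96 * se_b1, b1 + 1.96 * se_b1

print(round(t, 2))                            # 9.75
print(round(ci_low, 3), round(ci_high, 3))    # 0.189 0.285
```

The interval is far from zero, matching the large t-statistic.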
p-values • An equivalent method for examining statistical significance is to use the reported p-values rather than the t-statistics. • If you are interested in whether a coefficient is statistically different from zero, the p-value tells you the probability of a Type-I error (falsely rejecting the null hypothesis) • More precisely: the probability, if the null were true, of seeing an estimate at least as extreme as the one observed (informally, a measure of 'the plausibility of the null hypothesis') • Typically we use a 95% confidence level, so we require that the p-value be less than 0.05.
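The slides don't give a formula, but for large n a common computation is the two-sided p-value under a normal approximation to the t distribution (for small samples you'd use the exact t distribution, e.g. scipy.stats.t):

```python
import math

def two_sided_p(t):
    """Two-sided p-value P(|Z| > |t|) under a normal approximation,
    using the identity P(|Z| > z) = erfc(z / sqrt(2))."""
    return math.erfc(abs(t) / math.sqrt(2))

print(two_sided_p(9.75))                 # essentially zero: reject beta1 = 0
print(round(two_sided_p(1.96), 3))       # 0.05: right at the usual threshold
```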
Prediction Intervals • Say we have a new X observation, xnew, and we want to predict where the Y will occur. • The regression line will give the expected (average) value of Y given X • If we want to be more certain about where Y will occur, then we need to calculate the Prediction Interval: ŷnew ± t · se(ynew), where se(ynew) = se·√(1 + 1/n + (xnew − x̄)² / ((n−1)·sx²))
Prediction Intervals Different from a confidence interval built from se. Why? • Think about what we've done so far: • We have some data and we estimated b0 and b1 for some true β's. • We can calculate a confidence interval for the β's given our b's. • The standard error of the regression, se, from before fed into intervals for the true β's • The data we have to estimate our regression line is a sample from some larger population. With a different draw of data we would get a slightly different line. • Look at the formula again: what happens as n → ∞?
Prediction Intervals • So when we have infinite data (or “large enough”): • se(ynew) = se • Also notice how the size of the se(ynew) changes depending on where the xnew value is: • Goodness of prediction decreases as we move away from the average x
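A sketch of the prediction-interval standard error on simulated data, showing both properties from the slides: se(ynew) shrinks toward se as n grows (the 1/n and (xnew − x̄)² terms vanish), and it widens as xnew moves away from the average x.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(0, 10, 100)
y = 5 + 2 * x + rng.normal(0, 1, 100)

n = len(x)
b1, b0 = np.polyfit(x, y, 1)
resid = y - (b0 + b1 * x)
s_e = np.sqrt(np.sum(resid**2) / (n - 2))   # standard error of regression
x_bar, s_x2 = x.mean(), np.var(x, ddof=1)

def se_pred(x_new):
    """se(y_new): equals s_e * sqrt(1 + 1/n + ...); -> s_e as n -> infinity."""
    return s_e * np.sqrt(1 + 1/n + (x_new - x_bar)**2 / ((n - 1) * s_x2))

# 95% prediction interval: (b0 + b1*x_new) +/- 2 * se_pred(x_new)
print(round(se_pred(x_bar), 3))       # narrowest at the mean of x
print(round(se_pred(x_bar + 5), 3))   # wider far from the mean
```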
CAPM & Stock Returns • Seen it? • Expected excess return on an asset is proportional to the expected excess return on the "market portfolio": R − Rf = β(Rm − Rf) R = expected return on the risky investment • Change in price plus any dividend payout as a percentage of initial price. E.g.: buy for $100, receive a $2.50 dividend, and sell a year later for $105: ((105 − 100) + 2.50) / 100 = 7.5% Rm = expected market return Rf = return on risk-free investment (t-bills) Risk premium of asset = β · market premium: • If β < 1, asset has less risk than the market portfolio and therefore lower expected excess return • If β > 1, asset has more risk than the market portfolio and therefore higher expected excess return
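Tying this back to the lecture's regression tools: β can be estimated by regressing excess asset returns on excess market returns. The return series below are simulated with an assumed true β of 1.3, since no real price data accompanies the slide.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monthly returns; the asset's true beta is set to 1.3.
rf = 0.002                                  # risk-free rate (t-bill proxy)
rm = rf + rng.normal(0.005, 0.04, 120)      # simulated market returns
r  = rf + 1.3 * (rm - rf) + rng.normal(0, 0.02, 120)   # asset returns

# CAPM regression: excess asset return on excess market return.
beta, alpha = np.polyfit(rm - rf, r - rf, 1)
print(round(beta, 2))   # near 1.3: riskier than the market portfolio
```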