PO 141: INTRODUCTION TO PUBLIC POLICY


Presentation Transcript


  1. PO 141: INTRODUCTION TO PUBLIC POLICY Summer I (2015) Claire Leavitt Boston University

  2. TABLE OF CONTENTS • Statistical Inference • Confidence intervals, hypothesis testing, errors, Bayesian inference • Regression analysis • Meaning, key terms, OLS, assumptions • Problems with regression analysis • Applying regression to policy analysis

  3. CONFIDENCE INTERVALS  How confident are we that the true population mean lies within a certain range? We use samples to answer this question  The range in question is a distance on either side of the sample mean; confidence intervals are based on probabilistic assessments
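
To make this concrete, here is a minimal Python sketch of computing a 95% confidence interval for a population mean from a small sample; the data and the 95% level are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical sample: weekly sodas consumed by 20 surveyed adults
sample = np.array([3, 5, 2, 7, 4, 6, 3, 5, 8, 2,
                   4, 5, 3, 6, 7, 4, 5, 3, 6, 4])

mean = sample.mean()
sem = stats.sem(sample)  # standard error of the mean (uses n - 1)

# 95% interval from the t distribution with n - 1 degrees of freedom
low, high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)
print(f"Sample mean: {mean:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
```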

  4. HYPOTHESIS TESTING  How do we test whether a hypothesis is true or not? We reject or fail to reject a hypothesis with a certain level of confidence  Null hypothesis: The proposition/contention that is being tested  Alternative hypothesis: What must be accepted if we reject the null hypothesis

  5. HYPOTHESIS TESTING  Come up with a null hypothesis (H0) and an alternative hypothesis (H1)  Calculate a test statistic that quantifies how far your sample mean is from the population mean hypothesized under H0  The test statistic scales that distance by the standard error of your sample mean

  6. HYPOTHESIS TESTING  Each test statistic is also based on degrees of freedom: n – 1  This accounts for the fact that we’re estimating the standard error from the sample itself, using the sample mean as our estimate of the population mean. How accurate is that estimate? The more degrees of freedom, the more accurate our estimate of the standard error  High test statistic = significant results!

  7. HYPOTHESIS TESTING  With a high test statistic, either your sample is flawed or the null hypothesis is wrong. Which is more likely?  Assess the p-value: how likely is it that you would observe a sample mean this extreme by random chance alone if the null hypothesis were true?  A p-value of .03 means that, if the null hypothesis were true, there would be only a 3% chance of drawing a sample mean as extreme as yours; when that chance is small enough, the more plausible conclusion is that the null hypothesis is wrong
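
The full procedure (null hypothesis, test statistic, p-value) can be run in a few lines. A minimal sketch, assuming an invented sample of test scores and a hypothesized population mean of 75:

```python
import numpy as np
from scipy import stats

# Hypothetical sample: scores from ten students in a pilot program
scores = np.array([74, 81, 69, 88, 77, 83, 72, 79, 85, 76])

# H0: the true mean score is 75; H1: it is not (two-sided test)
t_stat, p_value = stats.ttest_1samp(scores, popmean=75)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (below, say, .05) would lead us to reject H0
```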

  8. TYPE I AND TYPE II ERRORS  Type I error: Convict when innocent  Reject the null hypothesis in favor of the alternative when the null hypothesis is true; false positive  Type II error: Acquit when guilty  Fail to reject the null hypothesis when the alternative hypothesis is true; false negative  The goal of the justice system is to minimize Type I errors

  9. BAYESIAN INFERENCE  Bayesian inference is the consistent use of probability to minimize uncertainty—policymakers must update their assessments of likely outcomes based on what has already occurred (evidence); probabilities are conditional upon new information/knowledge coming to light  Example: The O.J. Simpson trial
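
A minimal sketch of a single Bayesian update, with every probability invented purely for illustration; the point is only the mechanics of conditioning on new evidence via Bayes' rule:

```python
# Prior belief before the evidence arrives (invented numbers)
p_h = 0.01              # P(hypothesis is true)
p_e_given_h = 0.95      # P(observing this evidence | hypothesis true)
p_e_given_not_h = 0.10  # P(observing this evidence | hypothesis false)

# Total probability of seeing the evidence at all
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
p_h_given_e = p_e_given_h * p_h / p_e
print(f"Posterior P(H | E) = {p_h_given_e:.3f}")  # roughly 0.088
```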

  10. REGRESSION ANALYSIS  Regression analysis is a good (though not perfect) alternative to controlled experiments, especially for social scientists  Regression analysis isolates the effect of a single explanatory variable on an outcome that researchers are trying to explain

  11. REGRESSION ANALYSIS  Dependent variable: The outcome researchers are trying to explain  Independent variable: The factor that researchers believe might cause the outcome they’re interested in  Control variable: An explanatory/independent variable that probably affects the outcome of interest but is NOT the factor researchers are most interested in

  12. EXPERIMENTS VS. REGRESSION  Experiments:  Treatment and control groups: only one difference (the treatment) between two groups in an overall sample  Treatment is applied randomly to the sample  High internal validity—because the treatment is applied randomly, the sample does not need to be representative. (We know the treatment works.)  Low external validity—can the results be extrapolated to a larger population? (The treatment works for some people, but will it work for everyone?)

  13. EXPERIMENTS VS. REGRESSION  Regression:  Often, social scientific experiments are impractical, impossible or unethical; regression is the next best thing  Regression seeks to isolate the effect of a “treatment variable” (what researchers are testing) by controlling for other possible effects  “To control for” means to hold the control variable constant; control variables function as the statistical equivalent of control groups in experiments

  14. LINEAR RELATIONSHIPS  Linear regression is a method for fitting a straight line that “best” expresses the relationship between certain explanatory variables and the dependent variable (outcome of interest)  Linear relationships are easier to interpret than other kinds of relationships between variables

  15. KEY TERMS: INTERCEPT  The intercept is where the regression line crosses the y-axis; practically, the intercept can be read as the “starting point” or baseline for a relationship  E.g., height and weight—the intercept is the baseline weight from which we begin to assess the relationship (how height contributes to weight)

  16. KEY TERMS: SLOPE  Tells us how Y changes if you change X by one unit  Represents the steepness of the line, indicating the strength of the relationship; is there a strong or weak relationship between X and Y?  A slope of 0 means there is no linear relationship between X and Y  Equation of a line: y = a + βx + ε

  17. KEY TERMS: RESIDUALS  Error terms or residuals are the distances between the OLS regression line and the individual data points  Error terms represent what the regression line does not explain  Example: Height explains some of weight, but not all; the error term represents the other factors that contribute to weight  Example: Party affiliation is a great predictor of vote choice, but other factors matter too

  18. KEY TERMS: RESIDUALS [figure slide]

  19. ORDINARY LEAST SQUARES (OLS)  Fitting a linear relationship, expressed by an intercept, a slope and an error term (y = a + βx + ε), means fitting the best line to the data. What do we mean by best?  OLS minimizes the sum of the squared vertical distances from the data points to the regression line  Why the squared distances? Squaring prevents positive and negative errors from canceling out and penalizes large deviations more heavily
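
A minimal sketch of this fitting step, assuming invented height and weight data; np.polyfit with degree 1 performs exactly this least-squares minimization:

```python
import numpy as np

# Hypothetical data: height (inches) and weight (pounds) for ten people
height = np.array([62, 64, 65, 66, 68, 69, 70, 71, 73, 75])
weight = np.array([120, 135, 130, 145, 150, 160, 155, 170, 180, 190])

# Degree-1 polyfit chooses the line minimizing the sum of squared
# vertical distances from the data points to the line
slope, intercept = np.polyfit(height, weight, 1)

residuals = weight - (intercept + slope * height)
print(f"weight = {intercept:.1f} + {slope:.2f} * height")
print(f"Sum of squared residuals: {(residuals ** 2).sum():.1f}")
```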

  20. ORDINARY LEAST SQUARES (OLS) [figure slide]

  21. LINEAR REGRESSION  Linear regression can be bivariate (one independent variable) or multivariate (one explanatory variable and many control variables)  Variables can be continuous or binary (dummy variables)  y = a + β1x1 + β2x2 + β3x3 + … + βixi + ε

  22. LINEAR REGRESSION  Example: A health policy analyst is trying to assess the relationship between number of sodas consumed per week and weight, controlling for other factors that might affect weight, where x1 = sodas consumed/week, x2 = age, x3 = hours exercised/week and x4 = sex (1 = female, 0 = male)  What does this equation mean? y = 125 + .34x1 + .65x2 – 3.2x3 – 29x4 + ε

  23. LINEAR REGRESSION  y = 125 + .34x1 + .65x2 – 3.2x3 – 29x4 + ε  When looking at a regression equation, what do we care about?  Size: How big is the effect?  Direction: Is the relationship positive or negative?  Significance: Is this relationship true for the larger population of interest?
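
To make the interpretation concrete, the slide's equation can be written as a small function; the variable ordering (sodas, age, exercise hours, sex) follows slide 22, and weight in pounds is an assumption:

```python
# Slide 22's hypothetical fitted equation, weight assumed in pounds
def predicted_weight(sodas, age, exercise_hours, female):
    return 125 + 0.34 * sodas + 0.65 * age - 3.2 * exercise_hours - 29 * female

# A 30-year-old woman who drinks 5 sodas and exercises 3 hours per week
print(predicted_weight(sodas=5, age=30, exercise_hours=3, female=1))  # 107.6
# Each coefficient is the change in predicted weight for a one-unit
# change in that variable, holding the others constant
```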

  24. SIGNIFICANCE  If a relationship is statistically significant, it means there is only a very small chance that a) the results we see (from a sample) were merely the result of chance/randomness and b) the relationship isn’t true for the larger population of interest

  25. SIGNIFICANCE  Standard significance levels: .10, .05 and .01  These levels are arbitrary, but the smaller the better  Hypothesis testing on slope coefficients:  H0: β1 = 0 (there is no relationship between X and Y)  H1: β1 > 0; β1 < 0; β1 ≠ 0 (there is a relationship between X and Y)

  26. SIGNIFICANCE  We compute a test statistic just as we did for sample-means testing, based on the standard error of β1 (the dispersion, or spread, we would expect to see in our estimates of β1 if we were to perform the same regression on multiple samples)

  27. SIGNIFICANCE  How big is the standard error relative to the coefficient, β1, itself? The test statistic quantifies this ratio: t = β1 / SE(β1)  A p-value is associated with a test statistic, just as in sample-means testing, based on the number of degrees of freedom  P-value = the probability of getting the observed value for β1 if the null hypothesis were true
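
A minimal sketch of computing that ratio by hand for a bivariate regression, reusing the invented height/weight data from the OLS sketch above; the standard-error formula is the textbook one for a single-regressor OLS slope:

```python
import numpy as np

height = np.array([62, 64, 65, 66, 68, 69, 70, 71, 73, 75])
weight = np.array([120, 135, 130, 145, 150, 160, 155, 170, 180, 190])

n = len(height)
slope, intercept = np.polyfit(height, weight, 1)
residuals = weight - (intercept + slope * height)

# SE(b1) = sqrt( (SSR / (n - 2)) / sum((x - mean(x))^2) )
sigma2 = (residuals ** 2).sum() / (n - 2)
se_slope = np.sqrt(sigma2 / ((height - height.mean()) ** 2).sum())

t_stat = slope / se_slope  # the ratio the slide describes
print(f"b1 = {slope:.2f}, SE = {se_slope:.2f}, t = {t_stat:.2f}")
```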

  28. LINEAR TRANSFORMATIONS  Because linear relationships are easy to interpret, we may want to transform exponential relationships into linear terms using the common logarithm (base 10)  Example: Say you want to explain income  $100,000 can be transformed into 5 because log10 100,000 = 5  $101,000 can be transformed into 5.004 because log10 101,000 = 5.004  $102,000 can be transformed into 5.009 because log10 102,000 = 5.009, and so on; on the log scale, equal percentage increases in income become (approximately) equal increments, which is what turns an exponential relationship into a linear one
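
A quick check of these numbers with NumPy's base-10 logarithm; the first three incomes follow the slide, and the larger two are added for contrast:

```python
import numpy as np

incomes = np.array([100_000, 101_000, 102_000, 200_000, 1_000_000])
print(np.log10(incomes))  # approx [5.0, 5.0043, 5.0086, 5.3010, 6.0]
```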

  29. OLS ASSUMPTIONS  In order to perform OLS regression, we must make assumptions about what the error terms (extraneous, non-explanatory factors) look like  Linear regression assumes: 1. The expected value (mean) of error terms is equal to zero 2. Error terms are independent of one another and of the explanatory variables 3. Error terms are distributed around the regression line with the same variance

  30. OLS ASSUMPTIONS 1. The expected value (mean) of error terms = 0

  31. OLS ASSUMPTIONS 2. Error terms are independent of one another

  32. OLS ASSUMPTIONS 3. Error terms are distributed around the regression line with the same variance
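
These three assumptions can be given rough, informal checks from the residuals themselves; a sketch reusing the invented height/weight data from the OLS example (these are eyeball checks, not formal tests):

```python
import numpy as np

height = np.array([62, 64, 65, 66, 68, 69, 70, 71, 73, 75])
weight = np.array([120, 135, 130, 145, 150, 160, 155, 170, 180, 190])
slope, intercept = np.polyfit(height, weight, 1)
residuals = weight - (intercept + slope * height)

# 1. Mean of the residuals should be essentially zero
print(f"Mean residual: {residuals.mean():.6f}")

# 2. Independence: adjacent residuals should not be correlated
lag1 = np.corrcoef(residuals[:-1], residuals[1:])[0, 1]
print(f"Lag-1 correlation: {lag1:.2f}")

# 3. Constant variance: spread should look similar at low and high x
print(f"Var (low x): {residuals[:5].var():.1f}  Var (high x): {residuals[5:].var():.1f}")
```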

  33. OTHER PROBLEMS  Correlation ≠ causation  Endogeneity (internal) problems: Which direction does the causal arrow run? Does X explain Y or does Y explain X?  Example: Do higher rates of adult literacy lead to greater economic output, or does greater economic output lead to higher rates of adult literacy?

  34. OTHER PROBLEMS  Omitted variable bias: Is a significant variable masking some other variable and obscuring its effect?  r² versus significance: r² tells us how much of the variation in Y can be explained by all the independent variables in the regression. But relationships can still be significant even if they explain only a portion of the overall variation in the outcome

  35. OTHER PROBLEMS  Multicollinearity: When two variables are highly correlated with one another  Why is this a problem?  If two variables are closely related and both are included in a regression, it becomes difficult to isolate the effects of only one of those variables (thus, both variables might appear insignificant), as the sketch below illustrates
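
A minimal sketch of spotting the problem before running a regression, using invented schooling and literacy figures that echo the slide 33 example; a pairwise correlation near ±1 is the warning sign:

```python
import numpy as np

# Hypothetical country-level data: schooling and literacy move together
schooling = np.array([6, 8, 9, 10, 12, 12, 14, 16, 16, 18])
literacy = np.array([62, 70, 74, 78, 85, 86, 90, 95, 94, 98])

# If this correlation is near 1 (or -1), including both variables in
# one regression makes their separate effects hard to isolate
r = np.corrcoef(schooling, literacy)[0, 1]
print(f"Correlation: {r:.3f}")
```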

  36. OTHER PROBLEMS  Data mining: Including too many independent variables in a regression  Why is this a problem?  If you include too many variables, your r² will be high and your model may have more predictive power, but you might obscure the significance of variables that actually do have important effects on the outcome; the sketch below shows how r² inflates mechanically
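
The r² inflation is easy to demonstrate: adding pure-noise variables to a regression can only push r² up, even though they explain nothing real. A sketch with simulated data (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)  # one genuine relationship

def r_squared(X, y):
    """r2 from an OLS fit with an intercept column."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

print(f"r2 with x alone:        {r_squared(x, y):.3f}")

# Ten pure-noise regressors: r2 rises mechanically, nothing real is learned
junk = rng.normal(size=(n, 10))
print(f"r2 with x plus 10 junk: {r_squared(np.column_stack([x, junk]), y):.3f}")
```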
