
Using SAS for Time Series Data


Presentation Transcript


  1. Using SAS for Time Series Data LSU Economics Department March 16, 2012

  2. Next Workshop March 30 Instrumental Variables Estimation

  3. Time-Series Data: Nonstationary Variables

  4. Chapter Contents • 12.1 Stationary and Nonstationary Variables • 12.2 Spurious Regressions • 12.3 Unit Root Tests for Stationarity • 12.4 Cointegration

  5. The aim is to describe how to estimate regression models involving nonstationary variables • The first step is to examine the time-series concepts of stationarity (and nonstationarity) and how we distinguish between them.

  6. 12.1 Stationary and Nonstationary Variables

  7. 12.1 Stationary and Nonstationary Variables • The change in a variable is an important concept • The change in a variable yt, also known as its first difference, is given by Δyt = yt – yt-1. • Δyt is the change in the value of the variable y from period t - 1 to period t
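
In SAS, the first difference is easily computed in a DATA step with the DIF function. A minimal sketch, assuming a hypothetical data set work.usa that contains a series y (both names are placeholders, not the workshop's actual names):

    data work.usa_diff;
       set work.usa;        /* work.usa and y are placeholder names                 */
       dy = dif(y);         /* dif(y) = y - lag(y); the first observation is missing */
    run;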

  8. 12.1 Stationary and Nonstationary Variables FIGURE 12.1 U.S. economic time series

  9. 12.1 Stationary and Nonstationary Variables FIGURE 12.1 (Continued) U.S. economic time series

  10. 12.1 Stationary and Nonstationary Variables • Formally, a time series yt is stationary if its mean and variance are constant over time, and if the covariance between two values from the series depends only on the length of time separating the two values, and not on the actual times at which the variables are observed

  11. 12.1 Stationary and Nonstationary Variables • That is, the time series yt is stationary if, for all values and every time period, it is true that: E(yt) = μ (constant mean) Eq. 12.1a; var(yt) = σ² (constant variance) Eq. 12.1b; cov(yt, yt+s) = cov(yt, yt-s) = γs (covariance depends only on s, not on t) Eq. 12.1c

  12. 12.1 Stationary and Nonstationary Variables FIGURE 12.2 Time-series models 12.1.1 The First-Order Autoregressive Model

  13. 12.1 Stationary and Nonstationary Variables FIGURE 12.2 (Continued) Time-series models 12.1.1 The First-Order Autoregressive Model

  14. 12.2 Spurious Regressions FIGURE 12.3 Time series and scatter plot of two random walk variables

  15. 12.2 Spurious Regressions • A simple regression of series one (rw1) on series two (rw2) yields: • These results are completely meaningless, or spurious • The apparent significance of the relationship is false

  16. 12.2 Spurious Regressions • When nonstationary time series are used in a regression model, the results may spuriously indicate a significant relationship when there is none • In these cases the least squares estimator and least squares predictor do not have their usual properties, and t-statistics are not reliable • Since many macroeconomic time series are nonstationary, it is particularly important to take care when estimating regressions with macroeconomic variables
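
A spurious regression of this kind can be reproduced in SAS by generating two independent random walks and regressing one on the other. A sketch, with arbitrary seeds and an arbitrary sample size of 700 (all names are illustrative):

    data work.randwalk;
       rw1 = 0;
       rw2 = 0;
       do t = 1 to 700;
          rw1 + rannor(12345);   /* rw1(t) = rw1(t-1) + v1(t), v1 ~ N(0,1) */
          rw2 + rannor(67890);   /* rw2(t) = rw2(t-1) + v2(t), v2 ~ N(0,1) */
          output;
       end;
    run;

    proc reg data=work.randwalk;
       model rw1 = rw2;   /* the t-statistic on rw2 typically looks "significant"
                             even though the two series are unrelated            */
    run;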

  17. 12.3 Unit Root Tests for Stationarity

  18. 12.3 Unit Root Tests for Stationarity • There are many tests for determining whether a series is stationary or nonstationary • The most popular is the Dickey–Fuller test

  19. 12.3 Unit Root Tests for Stationarity • Consider the AR(1) model: yt = ρyt-1 + vt (Eq. 12.4) • We can test for nonstationarity by testing the null hypothesis that ρ = 1 against the alternative that |ρ| < 1, or simply ρ < 1 12.3.1 Dickey-Fuller Test 1 (No Constant and No Trend)

  20. 12.3 Unit Root Tests for Stationarity • A more convenient form is obtained by subtracting yt-1 from both sides: Δyt = γyt-1 + vt, where γ = ρ - 1 (Eq. 12.5a) • The hypotheses are: H0: γ = 0 (nonstationary) and H1: γ < 0 (stationary) 12.3.1 Dickey-Fuller Test 1 (No Constant and No Trend)

  21. 12.3 Unit Root Tests for Stationarity • The second Dickey–Fuller test includes a constant term in the test equation: Δyt = α + γyt-1 + vt (Eq. 12.5b) • The null and alternative hypotheses are the same as before 12.3.2 Dickey-Fuller Test 2 (With Constant but No Trend)

  22. 12.3 Unit Root Tests for Stationarity • The third Dickey–Fuller test includes a constant and a trend in the test equation: Δyt = α + λt + γyt-1 + vt (Eq. 12.5c) • The null and alternative hypotheses are H0: γ = 0 and H1: γ < 0 12.3.3 Dickey-Fuller Test 3 (With Constant and With Trend)
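
One way to obtain all three forms of the test in SAS is PROC ARIMA's IDENTIFY statement, which reports Zero Mean, Single Mean, and Trend versions of the Dickey–Fuller test. A minimal sketch, assuming a series y in a hypothetical data set work.usa:

    proc arima data=work.usa;
       identify var=y stationarity=(adf=(0));  /* adf=(0): no augmentation lags, */
                                               /* i.e. the plain Dickey-Fuller test */
    run;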

  23. 12.3 Unit Root Tests for Stationarity • To test the hypothesis in all three cases, we simply estimate the test equation by least squares and examine the t-statistic for the hypothesis that γ = 0 • Unfortunately, this t-statistic no longer has the t-distribution • Instead, we use a statistic often called the τ (tau) statistic 12.3.4 The Dickey-Fuller Critical Values

  24. 12.3 Unit Root Tests for Stationarity Table 12.2 Critical Values for the Dickey–Fuller Test 12.3.4 The Dickey-Fuller Critical Values

  25. 12.3 Unit Root Tests for Stationarity • To carry out a one-tail test of significance, if τc is the critical value obtained from Table 12.2, we reject the null hypothesis of nonstationarity if τ ≤ τc • If τ > τc then we do not reject the null hypothesis that the series is nonstationary 12.3.4 The Dickey-Fuller Critical Values

  26. 12.3 Unit Root Tests for Stationarity • An important extension of the Dickey–Fuller test allows for the possibility that the error term is autocorrelated • Consider the model: Δyt = γyt-1 + Σ(s=1 to m) as Δyt-s + vt (Eq. 12.6), where Δyt-1 = yt-1 - yt-2, Δyt-2 = yt-2 - yt-3, … 12.3.4 The Dickey-Fuller Critical Values

  27. 12.3 Unit Root Tests for Stationarity • As an example, consider the two interest rate series: • The federal funds rate (Ft) • The three-year bond rate (Bt) • Following procedures described in Sections 9.3 and 9.4, we find that the inclusion of one lagged difference term is sufficient to eliminate autocorrelation in the residuals in both cases 12.3.6 The Dickey-Fuller Tests: An Example
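
A sketch of the augmented test with one lagged difference term in SAS using PROC AUTOREG; the data set work.usa and the variable names ffr (federal funds rate) and br (bond rate) are placeholders, not the workshop's actual names:

    proc autoreg data=work.usa;
       model ffr = / stationarity=(adf=(1));  /* ADF test with one lagged difference;
                                                 repeat with br for the bond rate    */
    run;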

  28. 12.3 Unit Root Tests for Stationarity • The results from estimating the resulting equations are: • The 5% critical value for tau (τc) is -2.86 • Since -2.505 > -2.86, we do not reject the null hypothesis that the series is nonstationary 12.3.6 The Dickey-Fuller Tests: An Example

  29. 12.3 Unit Root Tests for Stationarity • Recall that if yt follows a random walk, then γ = 0 and the first difference of yt becomes: Δyt = yt - yt-1 = vt • Series like yt, which can be made stationary by taking the first difference, are said to be integrated of order one, and denoted as I(1) • Stationary series are said to be integrated of order zero, I(0) • In general, the order of integration of a series is the minimum number of times it must be differenced to make it stationary 12.3.7 Order of Integration

  30. 12.3 Unit Root Tests for Stationarity • The results of the Dickey–Fuller test for a random walk applied to the first differences are: 12.3.7 Order of Integration

  31. 12.3 Unit Root Tests for Stationarity • Based on the large negative value of the tau statistic (-5.487 < -1.94), we reject the null hypothesis that ΔFt is nonstationary and accept the alternative that it is stationary • We similarly conclude that ΔBt is stationary (-7.662 < -1.94) 12.3.7 Order of Integration
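
A sketch of applying the unit root test to the first differences in SAS; the (1) after the variable name asks PROC ARIMA to difference the series before testing (names are again placeholders):

    proc arima data=work.usa;
       identify var=ffr(1) stationarity=(adf=(0));  /* var=ffr(1): first difference of ffr */
       identify var=br(1)  stationarity=(adf=(0));  /* first difference of the bond rate   */
    run;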

  32. 12.4 Cointegration

  33. 12.4 Cointegration • As a general rule, nonstationary time-series variables should not be used in regression models to avoid the problem of spurious regression • There is an exception to this rule

  34. 12.4 Cointegration • There is an important case when et = yt - β1 - β2xt is a stationary I(0) process • In this case yt and xt are said to be cointegrated • Cointegration implies that yt and xt share similar stochastic trends, and, since the difference et is stationary, they never diverge too far from each other

  35. 12.4 Cointegration • The test for stationarity of the residuals is based on the test equation: Δêt = γêt-1 + vt (Eq. 12.7) • The regression has no constant term because the mean of the regression residuals is zero • We are basing this test upon estimated values of the residuals

  36. 12.4 Cointegration Table 12.4 Critical Values for the Cointegration Test

  37. 12.4 Cointegration • There are three sets of critical values • Which set we use depends on whether the residuals êt are derived from: an equation with no constant term, êt = yt - bxt (Eq. 12.8a); an equation with a constant term, êt = yt - b2xt - b1 (Eq. 12.8b); or an equation with a constant and a trend, êt = yt - b2xt - b1 - δ̂t (Eq. 12.8c)

  38. 12.4 Cointegration 12.4.1 An Example of a Cointegration Test • Consider the estimated model relating the bond rate (Bt) to the federal funds rate (Ft): • The unit root test for stationarity in the estimated residuals is: Eq. 12.9

  39. 12.4 Cointegration 12.4.1 An Example of a Cointegration Test • The null and alternative hypotheses in the test for cointegration are: H0: the series are not cointegrated (the residuals are nonstationary) and H1: the series are cointegrated (the residuals are stationary) • Similar to the one-tail unit root tests, we reject the null hypothesis of no cointegration if τ ≤ τc, and we do not reject the null hypothesis that the series are not cointegrated if τ > τc
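
A sketch of this two-step procedure in SAS: estimate the cointegrating regression with PROC REG, save the residuals, and apply the unit root test to them, keeping in mind that the relevant critical values come from Table 12.4 rather than the ordinary Dickey–Fuller table (data set and variable names are placeholders):

    proc reg data=work.usa;
       model br = ffr;                 /* cointegrating regression                 */
       output out=work.resid r=ehat;   /* save the least squares residuals as ehat */
    run;

    proc arima data=work.resid;
       identify var=ehat stationarity=(adf=(1));  /* compare tau with Table 12.4,     */
                                                  /* not the usual DF critical values */
    run;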

  40. Chapter 9 Regression with Time Series Data: Stationary Variables Walter R. Paczkowski Rutgers University

  41. Chapter Contents • 9.1 Introduction • 9.2 Finite Distributed Lags • 9.3 Serial Correlation • 9.4 Other Tests for Serially Correlated Errors • 9.5 Estimation with Serially Correlated Errors • 9.6 Autoregressive Distributed Lag Models • 9.7 Forecasting • 9.8 Multiplier Analysis

  42. 9.1 Introduction

  43. 9.1 Introduction • When modeling relationships between variables, the nature of the data that have been collected has an important bearing on the appropriate choice of an econometric model • Two features of time-series data to consider: • Time-series observations on a given economic unit, observed over a number of time periods, are likely to be correlated • Time-series data have a natural ordering according to time

  44. 9.1 Introduction • There is also the possible existence of dynamic relationships between variables • A dynamic relationship is one in which the change in a variable now has an impact on that same variable, or other variables, in one or more future time periods • These effects do not occur instantaneously but are spread, or distributed, over future time periods

  45. 9.1 Introduction FIGURE 9.1 The distributed lag effect

  46. 9.1 Introduction 9.1.1 Dynamic Nature of Relationships • Ways to model the dynamic relationship: • Specify that a dependent variable y is a function of current and past values of an explanatory variable x: yt = f(xt, xt-1, xt-2, …) (Eq. 9.1) • Because of the existence of these lagged effects, Eq. 9.1 is called a distributed lag model
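
A sketch of estimating a finite distributed lag model in SAS by building the lags in a DATA step and using least squares; the lag length of two and all data set and variable names are illustrative assumptions:

    data work.dlag;
       set work.mydata;     /* work.mydata, y and x are placeholder names */
       x_1 = lag(x);        /* x(t-1) */
       x_2 = lag2(x);       /* x(t-2) */
    run;

    proc reg data=work.dlag;
       model y = x x_1 x_2;  /* y(t) = a + b0*x(t) + b1*x(t-1) + b2*x(t-2) + e(t) */
    run;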

  47. 9.1 Introduction 9.1.1 Dynamic Nature of Relationships • Ways to model the dynamic relationship (Continued): • Capture the dynamic characteristics of the time series by specifying a model with a lagged dependent variable as one of the explanatory variables: yt = f(yt-1, xt) (Eq. 9.2) • Or have: yt = f(yt-1, xt, xt-1, xt-2, …) (Eq. 9.3) • Such models are called autoregressive distributed lag (ARDL) models, with "autoregressive" meaning a regression of yt on its own lag or lags
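
An ARDL model can be estimated the same way by also lagging the dependent variable; a minimal sketch under the same placeholder names:

    data work.ardl;
       set work.mydata;
       y_1 = lag(y);         /* lagged dependent variable */
       x_1 = lag(x);
    run;

    proc reg data=work.ardl;
       model y = y_1 x x_1;  /* ARDL(1,1): y(t) = d + th*y(t-1) + d0*x(t) + d1*x(t-1) + v(t) */
    run;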

  48. 9.1 Introduction 9.1.1 Dynamic Nature of Relationships • Ways to model the dynamic relationship (Continued): • Model the continuing impact of change over several periods via the error term: yt = β1 + β2xt + et, with et = f(et-1) (Eq. 9.4) • In this case et is correlated with et-1 • We say the errors are serially correlated or autocorrelated
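
When the dynamics are pushed into the error term, the model with AR(1) errors can be estimated in SAS with PROC AUTOREG; a sketch with placeholder names:

    proc autoreg data=work.mydata;
       model y = x / nlag=1 method=ml;  /* AR(1) errors: e(t) = rho*e(t-1) + v(t),
                                           estimated by maximum likelihood        */
    run;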

  49. 9.1 Introduction 9.1.2 Least Squares Assumptions • The primary assumption is Assumption MR4: cov(ei, ej) = 0 for i ≠ j • For time series, this is written as: cov(et, es) = 0 for t ≠ s • The dynamic models in Eqs. 9.2, 9.3 and 9.4 imply correlation between yt and yt-1 or et and et-1 or both, so they clearly violate assumption MR4
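
Whether MR4 is plausible for a given regression can be checked by testing the least squares residuals for serial correlation; a sketch using PROC AUTOREG's diagnostic options, with placeholder names (the choice of four lags is arbitrary):

    proc autoreg data=work.mydata;
       model y = x / dwprob godfrey=4;  /* Durbin-Watson p-value and Breusch-Godfrey
                                           LM tests for up to 4 lags of autocorrelation */
    run;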

  50. 9.2 Finite Distributed Lags
