
RW3: Uncorrelated Increments

  1. RW3: Uncorrelated Increments Tests of Serial Correlation

  2. Tests of Serial Correlation • Under the weakest version of the RW theory, RW3, the increments of the random walk are uncorrelated at all leads and lags. • Therefore, to test the RW3 model, look at the returns and construct tests based on: • Autocorrelations themselves • The sum of squared autocorrelations (Box-Pierce Q). • Variance ratios (linear combinations of the autocorrelations).

  3. Autocorrelation Coefficients • With a covariance-stationary time series of continuously compounded returns, we can define the • kth order autocovariance, γ(k) • kth order autocorrelation, ρ(k)
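
In standard notation, the usual definitions are:
\[
\gamma(k) \equiv \mathrm{Cov}(r_t,\, r_{t-k}), \qquad
\rho(k) \equiv \frac{\gamma(k)}{\gamma(0)} = \frac{\mathrm{Cov}(r_t,\, r_{t-k})}{\mathrm{Var}(r_t)} .
\]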

  4. Sample Counterparts • For a sample of returns {r_t}, t = 1, …, T, the sample counterparts are:
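
The usual sample analogues, written in standard notation, are:
\[
\hat{\mu} = \frac{1}{T}\sum_{t=1}^{T} r_t, \qquad
\hat{\gamma}(k) = \frac{1}{T}\sum_{t=k+1}^{T} (r_t - \hat{\mu})(r_{t-k} - \hat{\mu}), \qquad
\hat{\rho}(k) = \frac{\hat{\gamma}(k)}{\hat{\gamma}(0)} .
\]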

  5. Sampling Theory: A Simple Case • The sampling theory depends on the data-generating process for r. • CLM show that if {r_t} satisfies RW1, has variance σ2, and has a sixth moment proportional to σ6 (as a normal distribution does), then the sample autocorrelation coefficients are negatively biased in finite samples and asymptotically normal (sketch below).
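
A standard statement of the finite-sample result of this kind (consistent with the bias-correction discussion on the next two slides) is:
\[
E[\hat{\rho}(k)] = -\frac{1}{T} + O(T^{-2}), \qquad
\mathrm{Var}[\hat{\rho}(k)] = \frac{1}{T} + O(T^{-2}) .
\]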

  6. Sampling Theory… • This tells us that the sample correlation coefficients are negatively biased in finite samples. • This follows from the estimation procedure. • You have to estimate the sum of the cross products of deviations from a mean (that is itself estimated). • Deviations from the sample mean sum to zero by construction, so positive deviations must eventually be followed by negative deviations on average. • When you multiply these deviations together, the result is a negative bias.

  7. Sampling Theory… • The negative bias cannot be eliminated, but you can bias-correct; after the correction, the remaining bias is O(T-2). • Lo and MacKinlay (1988) and others derive asymptotic approximations for sample autocorrelation coefficients under weaker conditions, and these can be used to test RW2 and RW3.

  8. Sample Distributions • The sample autocorrelation coefficients are asymptotically independent and normally distributed, with the following distribution:
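
A standard statement of the asymptotic result under RW1 is:
\[
\sqrt{T}\,\hat{\rho}(k) \;\xrightarrow{\;d\;}\; \mathcal{N}(0,\,1), \qquad k = 1, 2, \ldots,
\]
with the \(\hat{\rho}(k)\) asymptotically independent across lags.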

  9. Box-Pierce Q • RW1 implies that all autocorrelations are zero. • The Box-Pierce Q statistic is a joint test for the first m autocorrelations: a sum of squared autocorrelations. • Under an RW1 null hypothesis, it is asymptotically chi-squared with m degrees of freedom. • It is useful when there is no specific alternative hypothesis; tests aimed at specific alternatives (variance ratios) come next.
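
As a minimal sketch (the function name and simulated data are illustrative, not from the text), the statistic Q = T · Σ_{k=1..m} ρ̂(k)² can be computed directly and compared with χ2(m) critical values:

```python
import numpy as np
from scipy import stats

def box_pierce_q(returns, m):
    """Box-Pierce Q = T * sum_{k=1..m} rho_hat(k)^2.

    Under the RW1 null, Q is asymptotically chi-squared with m
    degrees of freedom.
    """
    r = np.asarray(returns, dtype=float)
    T = len(r)
    d = r - r.mean()
    gamma0 = np.dot(d, d) / T                      # sample variance
    rho = np.array([np.dot(d[k:], d[:-k]) / T / gamma0 for k in range(1, m + 1)])
    Q = T * np.sum(rho**2)
    p_value = stats.chi2.sf(Q, df=m)               # right-tail p-value
    return Q, p_value

# IID simulated returns should not reject RW1 (illustration only).
rng = np.random.default_rng(0)
Q, p = box_pierce_q(rng.normal(0.0, 0.01, size=2500), m=5)
print(f"Q = {Q:.2f}, p-value = {p:.3f}")
```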

  10. Variance Ratios • For all 3 RW hypotheses, the variance of RW increments is linear in the time interval. • If the interval is twice as long, the variance must be twice as big. Thus, a 2-period continuously compounded return has twice the variance of a 1-period return. • To do any testing we need the sampling distribution of the VRs (variance ratios) under the random walk null hypothesis.

  11. Distribution Properties • Intuition: compare 2-period with 1-period returns; the ratio depends on ρ(1), the first-order autocorrelation coefficient of returns (see below). • Under the RW null, VR(2) = 1. • With positive (negative) first-order autocorrelation, VR(2) > (<) 1.
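
In symbols, with r_t + r_{t-1} the 2-period return:
\[
\mathrm{VR}(2) = \frac{\mathrm{Var}(r_t + r_{t-1})}{2\,\mathrm{Var}(r_t)}
             = \frac{2\sigma^2 + 2\gamma(1)}{2\sigma^2}
             = 1 + \rho(1) .
\]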

  12. Distribution Properties… • At longer horizons, higher-order autocorrelations matter, so that VR(q) is a particular linear combination of the first q − 1 autocorrelation coefficients (with linearly declining weights).
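
The standard expression, with r_t(q) = r_t + r_{t-1} + … + r_{t-q+1} the q-period return, is:
\[
\mathrm{VR}(q) = \frac{\mathrm{Var}\bigl(r_t(q)\bigr)}{q\,\mathrm{Var}(r_t)}
             = 1 + 2\sum_{k=1}^{q-1}\Bigl(1 - \frac{k}{q}\Bigr)\rho(k) .
\]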

  13. Distribution Properties… • Under the RW1 null hypothesis, VR(q) still equals 1. This is also true under RW2 and RW3 as long as the variance of rt is finite (fat-tailed dist’ns sometimes don’t have finite second moments). • What’s nice about this formulation is that you can also work out the distribution of this ratio under a specific alternative hypothesis.

  14. Distribution Properties… • AR(1) alternative: suppose returns follow a stationary AR(1) process. • Then the variance ratio takes the form sketched below. • It's easier to construct powerful tests against a specific rather than a general alternative hypothesis.
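
As a sketch, if returns follow a stationary AR(1), r_t − μ = φ(r_{t-1} − μ) + ε_t, then ρ(k) = φ^k and
\[
\mathrm{VR}(q) = 1 + 2\sum_{k=1}^{q-1}\Bigl(1 - \frac{k}{q}\Bigr)\varphi^{k},
\]
which exceeds 1 for φ > 0 and lies below 1 for φ < 0.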

  15. Sampling Distribution • LM (1988)’s statistical test for RW1 • H0: rt = μ + εt, εt ~ IID N(0,σ2) • Data: 2n+1 observations of log prices {p0,…,p2n}. Consider the following estimators:
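
A sketch of the estimators in LM's notation (reconstructed from the surrounding description, so treat the exact normalizations as assumptions):
\[
\hat{\mu} = \frac{1}{2n}\,(p_{2n} - p_0), \qquad
\hat{\sigma}^2_a = \frac{1}{2n}\sum_{k=1}^{2n} (p_k - p_{k-1} - \hat{\mu})^2, \qquad
\hat{\sigma}^2_b = \frac{1}{2n}\sum_{k=1}^{n} (p_{2k} - p_{2k-2} - 2\hat{\mu})^2 .
\]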

  16. Non-overlapping Observations • Note that the estimate of σ2b uses non-overlapping observations. • Under RW1, the mean and variance are linear in the increment interval, so you can estimate σ2 in more than one way. • The σ2b on the last slide is one half the sample variance of the increments of the even-numbered observations {p0, p2,…,p2n}.

  17. Limiting Dist’ns for the Variance • Asymptotically, both variance estimators are normally distributed (sketch below). • We also need the limiting dist’n of the ratio. • It does not follow immediately from the two marginal limits because the two variance estimators are not asymptotically uncorrelated.
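
Under RW1 with normal increments, the familiar limits take the form (a sketch):
\[
\sqrt{2n}\,(\hat{\sigma}^2_a - \sigma^2) \xrightarrow{\,d\,} \mathcal{N}(0,\; 2\sigma^4), \qquad
\sqrt{2n}\,(\hat{\sigma}^2_b - \sigma^2) \xrightarrow{\,d\,} \mathcal{N}(0,\; 4\sigma^4) .
\]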

  18. The Variance Difference Estimator • One way around this problem is to use another, simpler statistic. • VD is just the difference between the two variance estimators. • Its limiting distribution is sketched below.
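
Explicitly (a sketch consistent with the limits above):
\[
\mathrm{VD} \equiv \hat{\sigma}^2_b - \hat{\sigma}^2_a, \qquad
\sqrt{2n}\,\mathrm{VD} \xrightarrow{\,d\,} \mathcal{N}(0,\; 2\sigma^4),
\]
the variance being smaller than the sum of the two individual variances because the estimators are positively correlated.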

  19. The VR(2) Limiting Distribution • It turns out that LM can compute the dist’n of the ratio as:
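
The result takes the form (a sketch):
\[
\mathrm{VR}(2) \equiv \frac{\hat{\sigma}^2_b}{\hat{\sigma}^2_a}, \qquad
\sqrt{2n}\,\bigl(\mathrm{VR}(2) - 1\bigr) \xrightarrow{\,d\,} \mathcal{N}(0,\; 2) .
\]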

  20. Generalization to Multiperiod Returns • The data are nq+1 observations of log prices {p0,…,p_nq}, where q is an integer greater than 1. Consider the following estimators:
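
A sketch of the q-period analogues (normalizations are assumptions patterned on the 2-period case):
\[
\hat{\mu} = \frac{1}{nq}\,(p_{nq} - p_0), \qquad
\hat{\sigma}^2_a = \frac{1}{nq}\sum_{k=1}^{nq} (p_k - p_{k-1} - \hat{\mu})^2, \qquad
\hat{\sigma}^2_b(q) = \frac{1}{nq}\sum_{k=1}^{n} (p_{qk} - p_{qk-q} - q\hat{\mu})^2 .
\]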

  21. Asymptotic Distributions • The asymptotic distributions for the variance difference and variance ratio estimates are:
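
In the non-overlapping case these take the form (a sketch):
\[
\sqrt{nq}\,\mathrm{VD}(q) \xrightarrow{\,d\,} \mathcal{N}\bigl(0,\; 2(q-1)\sigma^4\bigr), \qquad
\sqrt{nq}\,\bigl(\mathrm{VR}(q) - 1\bigr) \xrightarrow{\,d\,} \mathcal{N}\bigl(0,\; 2(q-1)\bigr) .
\]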

  22. Overlapping Observations • Note again that the estimate of σ2b uses non-overlapping observations. Can do better in finite samples by using overlapping observations since you get more observations.
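
A sketch of the overlapping-observations estimator and the corresponding limit (treat the exact constants as assumptions based on the LM treatment):
\[
\hat{\sigma}^2_c(q) = \frac{1}{nq^2}\sum_{k=q}^{nq} (p_k - p_{k-q} - q\hat{\mu})^2, \qquad
\sqrt{nq}\,\Bigl(\frac{\hat{\sigma}^2_c(q)}{\hat{\sigma}^2_a} - 1\Bigr)
\xrightarrow{\,d\,} \mathcal{N}\!\Bigl(0,\; \frac{2(2q-1)(q-1)}{3q}\Bigr) .
\]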

  23. Unbiased Variance Estimators • The maximum likelihood estimators used above are biased. To correct, we can use unbiased versions:
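
For example, the single-period variance estimator gets the usual degrees-of-freedom correction (the overlapping q-period estimator gets an analogous, more involved adjustment in the text):
\[
\bar{\sigma}^2_a = \frac{1}{nq - 1}\sum_{k=1}^{nq} (p_k - p_{k-1} - \hat{\mu})^2 .
\]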

  24. Unbiased Variance Estimators • Unfortunately, the variance ratio estimate is still biased, but LM (1989) show that it has better finite sample performance than provided by the maximum likelihood estimates. • The asymptotic distributions for the “unbiased” VR and VD statistics are given on page 53 of the text.

  25. RW3 • The problem with rejecting RW1 is that you could be rejecting not market efficiency (or lack of predictability), but the unconditional homoskedasticity implied by RW1. • Can we get a VR test for RW3? Yes. • The intuition is that all you have to do is get heteroskedasticity-consistent estimates of the VR and VD statistics and you are done. • The reality is that you need to find the asymptotic variance of the variance ratio. • Of course, you could try modeling the heteroskedasticity, but that is harder to do. • Luckily there is the White (1984) heteroskedasticity-consistent estimator.

  26. LM 1988 Theoretical Results • The variance-difference and variance-ratio statistics, VD(q) and VR(q) − 1, converge almost surely to zero for all q as n increases without bound. • Asymptotically, under “quite general” conditions that allow for heteroskedasticity, √(nq)(VR(q) − 1) is normally distributed with asymptotic variance θ(q).

  27. LM 1988 Theoretical Results… • A heteroskedasticity-consistent estimator of δk, the asymptotic variance of the scaled sample autocorrelation √(nq) ρ̂(k), and a heteroskedasticity-consistent estimator of θ(q), the asymptotic variance of √(nq)(VR(q) − 1), are sketched below.
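
A sketch of the two estimators (the scaling is an assumption patterned on the LM/CLM treatment):
\[
\hat{\delta}_k = \frac{nq\,\sum_{j=k+1}^{nq} (p_j - p_{j-1} - \hat{\mu})^2 (p_{j-k} - p_{j-k-1} - \hat{\mu})^2}
                     {\Bigl[\sum_{j=1}^{nq} (p_j - p_{j-1} - \hat{\mu})^2\Bigr]^2}, \qquad
\hat{\theta}(q) = \sum_{k=1}^{q-1}\Bigl[\frac{2(q-k)}{q}\Bigr]^2 \hat{\delta}_k .
\]
Under homoskedasticity, θ̂(q) collapses to the 2(2q−1)(q−1)/(3q) variance given earlier.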

  28. LM 1988 Theoretical Results… • Asymptotically, the standardized test statistic Ψ*(q) is distributed:
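
That is (a sketch):
\[
\psi^*(q) \equiv \sqrt{nq}\,\bigl(\mathrm{VR}(q) - 1\bigr)\,\hat{\theta}(q)^{-1/2}
\;\xrightarrow{\,d\,}\; \mathcal{N}(0,\,1),
\]
so the RW3 null can be tested with standard normal critical values even under heteroskedasticity.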

  29. Empirical Evidence • Autocorrelations • Daily (1962-1994) equal-weighted CRSP index returns have a first-order autocorrelation of 35.0% (with a standard error of 1.11%). This implies that about 12.3% of the daily variation is explained by the lagged return. • The Box-Pierce Q statistic for 5 autocorrelations has value 263.3; the 99.5th percentile of the χ2(5) distribution is 16.7. • Weekly and monthly returns exhibit similar patterns for the indexes.

  30. Empirical Evidence • Variance Ratios • As the autocorrelations suggest the variance ratios are greater than one. • The equal-weighted index has VR’s that are highly significant, larger in the 1st half of the sample (a common pattern). VR’s increase in q suggesting positive serial correlation for multiperiod returns. • VR’s of the value-weighted index are greater than one but insignificant in full sample and both subsamples. Suggests that firm size is an interesting issue. • Rejection of RW stronger for smaller firms. Their returns more serially correlated.

  31. Empirical Evidence • Individual Securities • Variance ratios suggest small negative serial correlation. • The insignificance is likely due to the fact that, with so much nonsystematic risk, any predictable components are hard to find. • The contrast with the indexes is suggestive: there are large positive cross-autocorrelations between individual securities over time.

  32. Empirical Evidence • Lead-lag relations. • Larger-capitalization stocks lead smaller-capitalization stocks. • The correlation between last week’s return on large stocks and this week’s return on small stocks is 26.5%, while the correlation between last week’s return on small stocks and this week’s return on large stocks is only 2.4%. • Lo and MacKinlay (1990) show that this relation is responsible for over half the positive index autocorrelation. They also show that it can explain the profitability of contrarian investment strategies; no over-reaction is required. Efficiency?

  33. VR and Long Horizon Returns • LM show that VR is less than one in a mean-reverting model similar to the one used in FF (1988). • There are problems with the estimates when the horizon q is large relative to the total time span T (so n is small). • For example, one of the standardized test statistics then has a variance that is bounded above by 1, so you never reject the null, regardless of the data.

  34. Empirical Evidence • Long horizon returns • Negative serial correlation in multi-year index returns has been found. Interpreted by some as mean reversion due to a temporary component of returns. • More careful analysis correcting for some of the issues related to long horizons suggests that there is little evidence of mean reversion in long horizon returns.

  35. FF 1988 • Early market efficiency tests: • Used short-horizon returns. • There’s lots of data on short-horizon returns. • Estimated autocorrelations are usually close to zero. • What if you used transaction by transaction returns?

  36. FF 1988 • Long-horizon return autocorrelations are examined. • A slowly decaying component of prices induces negative autocorrelation in returns that is: • Small for short-horizon returns. • Big for long-horizon returns. • Data: 1926-1985 – Glimpse of results: • Large negative autocorrelations for returns beyond a year. • Industry portfolios: predictable variation is 35% of total variation for 3-5 year return variances. • Rises to 40% for small-firm portfolios. • Only 25% for large-firm portfolios.

  37. Competing Explanations • Market irrationality, where prices take long but temporary swings away from fundamental values.

  38. Competing Explanations • Time-varying equilibrium expected returns. • Could be due to changing investment opportunity sets or changing price for risk. • Intuition: Stock prices are just the discounted value of expected dividends:
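
For instance, with a constant discount rate r (a deliberately simple version of the intuition):
\[
P_t = \sum_{j=1}^{\infty} \frac{E_t[D_{t+j}]}{(1+r)^j},
\]
so a rise in the discount rate lowers the price today even if expected dividends are unchanged.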

  39. Competing Explanations • Suppose further (a leap) that changes in investment opportunity sets affect discount rates but not expected cash flows. • Then increases in the expected discount rate, driven by movements in some autocorrelated state variable, lead to decreases in prices today, and vice versa. • If the state variable is mean-reverting, shocks tend to die out over a long time. • Because of the persistence, if discount rates are high today, they are high tomorrow, but not as high, and the price rises a little, and so on. • Hence, the autocorrelated state variable leads to autocorrelated returns even though expected cash flows do not change.

  40. The Model • Let p(t) be the log stock price. • As is standard, let p(t) = q(t) + z(t), where z(t) is stationary and q(t) = q(t-1) + μ + η(t), with η(t) white noise. • For the stationary component, the paper uses an AR(1) process z(t) = φz(t-1) + ε(t), with ε(t) a separate white-noise shock and φ close to, but not equal to, 1.0.

  41. Discussion of Model • This is one way to mix RW and stationary price components. • q is the RW part. • z is the stationary price component (φ < 1). • The result is a non-stationary price process in which a large fraction of each period’s shock is permanent.

  42. Implications of autocorrelated z for r • The continuously compounded return from t to t+T is given below. • Now, suppose you actually saw the z’s, not just the r’s. • The r has a change-in-z component in it, but it also has an RW component. • Let’s look at mean reversion just in the z’s and see what it implies for serial correlation in the r’s, which are what we observe.
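
In symbols:
\[
r_{t,\,t+T} = p(t+T) - p(t) = \bigl[q(t+T) - q(t)\bigr] + \bigl[z(t+T) - z(t)\bigr],
\]
the first bracket being the random-walk contribution and the second the change in the stationary component.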

  43. Implications of autocorrelated z for r • Take two adjacent Δz’s, each over a T period interval and find the autocorrelation. • Covariance stationarity implies that this correlation coefficient happens to be equal to the regression coefficient of one Δz on the other
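
That is:
\[
\rho(T) = \frac{\mathrm{Cov}\bigl(z_{t+T} - z_t,\; z_t - z_{t-T}\bigr)}{\mathrm{Var}\bigl(z_{t+T} - z_t\bigr)},
\]
which, because adjacent T-period changes have the same variance under covariance stationarity, is also the slope from regressing one Δz on the other.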

  44. Implications of autocorrelated z for r • The numerator is Cov(z(t+T) − z(t), z(t) − z(t−T)) = 2Cov(z(t), z(t+T)) − Cov(z(t−T), z(t+T)) − σ2(z), which approaches −σ2(z) as T increases (again because φ < 1, so the covariances die out). • The denominator is Var(z(t+T) − z(t)) = 2σ2(z) − 2Cov(z(t), z(t+T)), which approaches 2σ2(z), so that the autocorrelation approaches –0.5.

  45. AR1 • If z(t) is an AR(1), then the expected change from t to t+T and the covariance in the numerator of ρ(T) simplify as sketched below. • The covariance is minus the variance of the T-period expected change. • When z is an AR(1), the correlation coefficient is therefore minus the ratio of the variance of the expected change to the variance of the actual change.
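
For the AR(1) case (a sketch, with σ2(z) the variance of z):
\[
E_t\bigl[z_{t+T} - z_t\bigr] = \bigl(\varphi^{T} - 1\bigr)\,z_t, \qquad
\mathrm{Cov}\bigl(z_{t+T} - z_t,\; z_t - z_{t-T}\bigr)
   = -\bigl(1 - \varphi^{T}\bigr)^2 \sigma^2(z)
   = -\mathrm{Var}\bigl(E_t[z_{t+T} - z_t]\bigr) .
\]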

  46. Long-run Persistence • When φ is close to one, as T increases, the expected change goes to −z(t) and its variance goes to σ2(z). • This means that ρ(T) is close to zero for short-horizon returns and slowly approaches –0.5 as the horizon becomes infinitely long. • Slow mean reversion is more evident if one looks at long horizons. • Why look for slow mean reversion? It has been documented in the natural sciences.

  47. Properties of Returns • We said before that returns were the sum of changes in z and changes in a RW. • What does the RW part do? • Messes up inference but where? • The point FF make is that for very long horizon returns, the RW component is going to dominate and empirically, small autocorrelations are what one will find.

  48. Properties of Returns… • Let β(T) be the slope in the regression of adjacent T-period returns (the same thing as the correlation between them). • If RW and stationary components are uncorrelated:
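
A sketch of the resulting slope, with σ2(η) the variance of the random-walk shock:
\[
\beta(T) = \frac{\mathrm{Cov}\bigl(z_{t+T} - z_t,\; z_t - z_{t-T}\bigr)}
                {\mathrm{Var}\bigl(z_{t+T} - z_t\bigr) + T\,\sigma^2(\eta)},
\]
i.e. the Δz autocovariance divided by the total T-period return variance, which now includes the random-walk term Tσ2(η).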

  49. Interpretation • So, β(T) measures (in absolute value) the proportion of the variance of the T-period return explained by the mean reversion in a slowly decaying price component. • If there is no stationary component, β = 0. • If there is only a stationary component, β = ρ. • If there are both components, what happens?

  50. The U-Shaped Pattern • As T increases, the mean reversion in the predictable component pushes β(T) toward –0.5. • The variance of the unpredictable (RW) component increases linearly with T. • Hence, the white noise component eventually dominates and β(T) drifts back toward 0. • With T small, β(T) ≈ 0. • So the best you’ll get is a U-shaped pattern in β(T); a simulation sketch follows.
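
As a rough illustration of this U shape, here is a minimal simulation sketch of the permanent-plus-transitory model; the parameter values and the helper beta_T are illustrative choices, not FF's:

```python
import numpy as np

# Simulate the permanent + transitory model p(t) = q(t) + z(t):
# q is a random walk with drift, z is a slowly decaying AR(1).
# All parameter values are illustrative, not calibrated to data.
rng = np.random.default_rng(1)
n_obs, mu, phi = 200_000, 0.0005, 0.98
sigma_eta, sigma_eps = 0.01, 0.02

eta = rng.normal(0.0, sigma_eta, n_obs)
eps = rng.normal(0.0, sigma_eps, n_obs)
q = np.cumsum(mu + eta)                  # permanent random-walk component
z = np.zeros(n_obs)
for t in range(1, n_obs):                # stationary AR(1) component
    z[t] = phi * z[t - 1] + eps[t]
p = q + z                                # log price

def beta_T(p, T):
    """OLS slope of the T-period return on the previous T-period return."""
    r = p[T:] - p[:-T]                   # overlapping T-period returns
    x, y = r[:-T], r[T:]                 # pairs of adjacent T-period returns
    x, y = x - x.mean(), y - y.mean()
    return float(np.dot(x, y) / np.dot(x, x))

# beta_T should sit near 0 for small T, dip negative at intermediate
# horizons, and drift back toward 0 as the RW component dominates.
for T in (1, 12, 60, 240, 960, 2400):
    print(T, round(beta_T(p, T), 3))
```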
