Statistical Properties of Returns Predictability of Returns
Asset Return Predictability • Previously: Introduced some of the issues and the datasets available locally. • Now: Continued discussion of tests of predictability (CLM Ch. 2). • Forms of the random walk hypothesis and martingales. • Tests of the random walk hypothesis: the Cowles-Jones (CJ) test, the runs test, and technical trading (LMW, 2000).
Asset Return Predictability • Why we care and what we’ll do: • Try to forecast future returns using only past returns. • Weak-form efficiency says that one shouldn’t be able to make abnormal profits (beyond the appropriate risk-adjusted return) using past return information alone. • Want to test different versions of the random walk hypothesis.
The Random Walk Hypothesis - Taxonomy • Consider returns rt and rt+k. • Consider functions f(rt) and g(rt+k). • CLM 2.1.1 considers when Cov[f(rt), g(rt+k)] = 0 for all t and for k ≠ 0. • Almost all forms of the RW hypothesis are captured by this equation, which can be thought of as an “orthogonality condition,” using various restrictions on f and g.
Independent vs. Uncorrelated - Aside • Two random variables, X and Y, are independent if for all real numbers x and y Pr(X ≤ x and Y ≤ y) = Pr(X ≤ x)·Pr(Y ≤ y). • Equivalently, X and Y are independent if Cov(f(X), g(Y)) = 0 for every pair of functions f and g. • Independence also means the joint CDF factors: F(X,Y) = FX(X)·FY(Y) (the same factorization works with pdfs). • Lack of correlation is simply Cov(X, Y) = 0. • Clearly, independence is much stronger.
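A minimal numerical illustration (a textbook example of my choosing, not from the slides): X ~ N(0,1) and Y = X² are uncorrelated but not independent, since the quadratic transformation f(x) = x² exposes the dependence.

    import numpy as np

    # Illustration (not from the slides): X ~ N(0,1) and Y = X^2 have
    # Cov(X, Y) = E[X^3] = 0, yet they are not independent, because
    # Cov(f(X), g(Y)) != 0 for f(x) = x^2, g(y) = y.
    rng = np.random.default_rng(0)
    x = rng.standard_normal(1_000_000)
    y = x**2

    print(np.cov(x, y)[0, 1])      # close to 0: uncorrelated
    print(np.cov(x**2, y)[0, 1])   # close to 2 = Var(X^2): not independent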
Martingales • A stochastic process {Pt} (e.g., a price) is a martingale if it satisfies: E[Pt+1 − Pt | Pt, Pt-1, …] = 0, i.e., E[Pt+1 | Pt, Pt-1, …] = Pt. The zero-conditional-mean property of the increments is also called a fair game. • If Pt is an asset price, this says that conditional on past prices, the best guess at tomorrow’s stock price is today’s. • Price changes cannot be forecast using past prices. • Non-overlapping price changes are uncorrelated at all leads and lags if the price process follows a martingale. (As a homework exercise, show this follows from the definition; a simulation check is sketched below.)
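A simulation sketch (an illustration, not a substitute for the homework proof): for a driftless Gaussian random walk, one simple example of a martingale, the sample autocovariances of the price changes should be near zero at every lag.

    import numpy as np

    # Sketch: simulate a driftless Gaussian random walk (one example of a
    # martingale) and check that sample autocovariances of the price changes
    # are near zero at several lags.
    rng = np.random.default_rng(1)
    prices = np.cumsum(rng.standard_normal(100_000))
    changes = np.diff(prices)               # recovers the increments

    for k in (1, 2, 5):
        cov_k = np.cov(changes[:-k], changes[k:])[0, 1]
        print(f"lag {k}: sample autocovariance = {cov_k:.4f}")  # all close to 0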
Martingales • It was once thought that prices following a martingale was a necessary condition for an efficient capital market. • If there are to be no profits available from trading on past price information, the conditional expectation of future price changes, conditional on the price history, cannot be positive or negative, so it must be zero. • The more efficient the market, the more random are prices. • This ignores the risk/return tradeoff: compensation for risk requires drift. • Cox and Ross and Harrison and Kreps show that properly risk-adjusted returns (log price changes) do follow a martingale.
RW1: IID Increments • The governing equation for the most restrictive of the random walk processes is: Pt = μ + Pt-1 + εt, where the εt are i.i.d. with mean zero and variance σ². • After t periods: E[Pt|P0] = P0 + μt and Var[Pt|P0] = σ²t. • These results also hold for RW2 and RW3.
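A quick Monte Carlo check of the conditional mean and variance formulas (parameter values are illustrative assumptions, not taken from the slides):

    import numpy as np

    # Monte Carlo check that E[P_t | P_0] = P_0 + mu*t and Var[P_t | P_0] = sigma^2 * t
    # under RW1.  Parameter values are illustrative only.
    rng = np.random.default_rng(2)
    mu, sigma, p0, t, n_paths = 0.1, 1.0, 100.0, 250, 50_000

    eps = rng.normal(0.0, sigma, size=(n_paths, t))
    p_t = p0 + mu * t + eps.sum(axis=1)     # P_t = P_0 + mu*t + sum of the shocks

    print(p_t.mean(), p0 + mu * t)          # both close to 125.0
    print(p_t.var(), sigma**2 * t)          # both close to 250.0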
Distributional Assumptions • A common assumption is to suppose the εt are i.i.d. N(0, σ²). This makes prices behave as an arithmetic Brownian motion, sampled at evenly spaced intervals. • This makes life very easy because we can work with the normal distribution, but it violates limited liability (prices can go negative). • Suppose instead that log prices follow this process: pt = μ + pt-1 + εt, so that continuously compounded returns are i.i.d. normal. In this case prices follow a geometric Brownian motion, an assumption often used in continuous-time asset pricing models.
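A sketch contrasting the two specifications (the parameter values are assumptions chosen only to make the point visible): a random walk in the price level can wander below zero, while a price built from a random walk in log prices cannot.

    import numpy as np

    # Sketch: an arithmetic random walk in the price LEVEL can go negative,
    # violating limited liability, while P_t = exp(p_t) built from a random walk
    # in LOG prices stays strictly positive.  Parameters are illustrative only.
    rng = np.random.default_rng(3)
    n, mu, sigma, p0 = 10_000, 0.0, 1.0, 1.0

    shocks = mu + sigma * rng.standard_normal(n)
    level_walk = p0 + np.cumsum(shocks)                  # P_t = mu + P_{t-1} + eps_t
    geometric = np.exp(np.log(p0) + np.cumsum(shocks))   # p_t = mu + p_{t-1} + eps_t, P_t = exp(p_t)

    print(level_walk.min())   # typically dips below zero: limited liability violated
    print(geometric.min())    # always strictly positive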
RW2 – Independent Increments • To think that price changes are identically distributed over long periods is unpalatable. • RW2 retains the independence of the increments but allows them to be drawn from different distributions. • This means that we can allow for unconditional heteroskedasticity in the εt’s, something that fits with the data. • Any arbitrary transformation of past prices is useless in predicting (any arbitrary transformation of) future price changes.
RW3 – Uncorrelated Increments • Weakest form of the RW hypothesis. • You can’t forecast future price increments. • Higher moments (e.g., variance) may be forecastable. • That is, there may be conditional heteroskedasticity in the innovation process over time, e.g.: Cov(εt, εt-k) = 0 but Cov(ε²t, ε²t-k) ≠ 0.
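A sketch of one RW3-consistent process (an ARCH(1)-type recursion of my choosing, not named in the slides): the increments are serially uncorrelated, but their squares are predictable.

    import numpy as np

    # Sketch of an RW3-style process using an ARCH(1)-type recursion (an
    # illustrative choice): increments are serially uncorrelated, but their
    # squares are autocorrelated, i.e. Cov(eps_t, eps_{t-1}) ~ 0 while
    # Cov(eps_t^2, eps_{t-1}^2) > 0.
    rng = np.random.default_rng(4)
    n, omega, alpha = 100_000, 0.1, 0.5
    eps = np.zeros(n)
    for t in range(1, n):
        sigma2 = omega + alpha * eps[t - 1] ** 2      # conditional variance depends on the past
        eps[t] = np.sqrt(sigma2) * rng.standard_normal()

    print(np.corrcoef(eps[1:], eps[:-1])[0, 1])       # close to 0: uncorrelated levels
    print(np.corrcoef(eps[1:]**2, eps[:-1]**2)[0, 1]) # clearly positive: forecastable volatility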
Tests of RW1 • Sequences and Reversals • Start with prices following a geometric Brownian motion without drift: pt = pt-1 + εt, where the εt ~ i.i.d. N(0, σ²). • Let It equal one if pt – pt-1 is positive and zero otherwise. • Cowles and Jones (1937) compare the frequency of sequences of two returns with the same sign to the frequency of successive returns with a reversal of signs.
Sequences and Reversals • Let there be n+1 returns (t = 0, 1, 2, …, n), Ns be the number of sequences, and Nr = n – Ns be the number of reversals. • If log prices follow a driftless random walk and the distribution of the ε’s is symmetric, then positive and negative increments are equally likely, and the CJ ratio should be approximately one: ĈJ ≡ Ns/Nr. • This ratio may be seen as a consistent estimator of the ratio of the probability of a sequence to the probability of a reversal, πs/πr = πs/(1 − πs).
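A sketch of computing the CJ ratio on simulated driftless returns (simulated data, not the Cowles-Jones railroad-stock index):

    import numpy as np

    # Sketch: compute the Cowles-Jones ratio Ns/Nr on simulated driftless returns
    # (not the 1835-1935 railroad-stock data), where pi = 1/2 and CJ should be ~1.
    rng = np.random.default_rng(5)
    returns = rng.standard_normal(10_000)

    indicators = (returns > 0).astype(int)     # I_t = 1 if the price change is positive
    same_sign = indicators[1:] == indicators[:-1]
    n_s = same_sign.sum()                      # number of sequences
    n_r = len(same_sign) - n_s                 # number of reversals

    print(n_s / n_r)                           # close to 1 under the driftless null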
Sequences and Reversals • Consistency here means convergence in probability. • In this case, under the null that a sequence and a reversal are equally likely, the ratio should converge to one. • CJ (1937) found a value of 1.17 for the ratio using an index of railroad stocks from 1835-1935 and concluded that stock returns are predictable.
Issue: Drift • If prices follow a geometric Brownian motion with drift: pt = μ + pt-1 + εt, where εt ~ i.i.d. N(0, σ²). • Now the indicator variable It is biased in the direction of the drift: It = 1 with probability π and 0 with probability 1 − π, where π ≡ Pr(rt > 0) = Φ(μ/σ). • With a positive drift π > ½ and with a negative drift π < ½; either way, πs = π² + (1 − π)² > ½.
Drift • In this case, the ratio of probabilities is CJ = πs/πr = [π² + (1 − π)²] / [2π(1 − π)]. • The ratio is strictly greater than one for any π ≠ ½.
The Effect Of Drift • We can calibrate to US annual data. • Let μ = .08 and σ = .21; then π = Φ(μ/σ) = Φ(0.38) ≈ 0.65 and πs = π² + (1 − π)² ≈ 0.54. • The CJ statistic becomes πs/(1 − πs) ≈ 1.19, very close to the 1.17 that CJ found.
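A sketch reproducing this calibration (using scipy for the normal CDF):

    from scipy.stats import norm

    # Sketch: reproduce the drift calibration from the slide.  With mu = 0.08 and
    # sigma = 0.21, pi = Phi(mu/sigma), pi_s = pi^2 + (1-pi)^2, CJ = pi_s/(1-pi_s).
    mu, sigma = 0.08, 0.21
    pi = norm.cdf(mu / sigma)                  # roughly 0.65
    pi_s = pi**2 + (1 - pi) ** 2               # probability of a sequence, roughly 0.54

    print(pi, pi_s, pi_s / (1 - pi_s))         # CJ close to 1.19, as quoted in the slide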
The Effect Of Drift • Statistical Significance? • Is 1.19 statistically significantly different from 1.17? • Is 1.17 statistically significantly different from 1.0, the value under the driftless null? • Answering this requires standard errors, and hence the sampling theory for the CJ statistic.
The Effect Of Drift • Sampling Theory • Start with the fact that Ns is the sum of n Bernoulli random variables Yt, where Yt = 1 with probability πs = π² + (1 − π)² and zero otherwise. • Using a normal approximation, the distribution of Ns for large n has: • Mean: nπs • Variance: n[πs(1 − πs) + 2(π³ + (1 − π)³ − πs²)] • Its variance is not nπs(1 − πs) because the drift makes adjacent Y’s dependent (CLM p. 37).
The Effect Of Drift • Then asymptotically (by the delta method), ĈJ is approximately normal with mean πs/(1 − πs) and variance [πs(1 − πs) + 2(π³ + (1 − π)³ − πs²)] / [n(1 − πs)⁴]. • CJ (1937)’s estimate of 1.17 yields an implied π̂s = 1.17/2.17 ≈ 0.54. • The above equation then yields a standard error estimate of 0.2537. • With that standard error, 1.19 is not significantly different from 1.17, nor is 1.17 significantly different from 1.0.
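A back-of-the-envelope check of that standard error (a sketch under stated assumptions: the 1.17 estimate is inverted for π̂s, and the sample size is taken as roughly n = 99 adjacent pairs, consistent with the 1835-1935 annual sample; neither choice is stated explicitly in the slides):

    import numpy as np

    # Sketch: delta-method standard error for CJ-hat = Ns/Nr.  ASSUMPTIONS: invert
    # the point estimate 1.17 for pi_s, back out pi from pi_s, and take n = 99
    # pairs (roughly the 1835-1935 annual sample); CLM's exact plug-in choices
    # may differ slightly.
    cj_hat = 1.17
    pi_s = cj_hat / (1 + cj_hat)                   # ~0.539
    pi = 0.5 * (1 + np.sqrt(2 * pi_s - 1))         # root of pi^2 + (1-pi)^2 = pi_s above 1/2
    n = 99

    var_ns_over_n = pi_s * (1 - pi_s) + 2 * (pi**3 + (1 - pi) ** 3 - pi_s**2)
    se_cj = np.sqrt(var_ns_over_n / (n * (1 - pi_s) ** 4))   # delta method for p/(1-p)

    print(pi_s, pi, se_cj)                         # se_cj close to the 0.2537 in the slide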
The Effect Of Drift • You would need a much higher π to find any significance. • How high? To reject the random walk using this test you would need a π of about 0.72. Thus you’d need almost a ¾ chance of prices going up (or down) every year to detect deviations from a random walk with this test. • This test doesn’t have much power to detect deviations from the random walk given the historical estimates of the parameters of the US economy.
The Runs Test • Used to detect “streakiness” in data. • For example: identify streaks in athletic performances (basketball, baseball). • The idea is to look at the data and see whether it was generated by a set of binomial trials where the probability is estimated in-sample.
The Runs Test • Consider the sequence: 1001110100 • It has six “runs” in it – three runs of 1’s (of lengths 1, 3, and 1) and three runs of 0’s (of lengths 2, 1, and 2). • Now suppose you have a sample of n multinomial trials. • Let πi be the probability that an event of type i occurs in a period. • Then Nruns(i) is the total number of runs of type i. • To do any testing we must find the sampling distribution of the number of runs in this situation.
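A small sketch that counts the runs in the example sequence and confirms the breakdown above:

    import numpy as np

    # Sketch: count the runs in the example sequence 1001110100 and confirm the
    # slide's breakdown (6 runs: three of 1's and three of 0's).
    seq = np.array([1, 0, 0, 1, 1, 1, 0, 1, 0, 0])

    starts = np.r_[True, seq[1:] != seq[:-1]]      # a run starts at position 0 and at every change
    n_runs = starts.sum()
    runs_of_ones = (starts & (seq == 1)).sum()
    runs_of_zeros = (starts & (seq == 0)).sum()

    print(n_runs, runs_of_ones, runs_of_zeros)     # 6, 3, 3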
The Runs Test • Intuition for the Bernoulli case: 2 types, up and down • Let π be the probability of an “up.” • Then, the expected total number of runs is 2nπ(1 − π) + π² + (1 − π)². • Note that this value is maximized at π = ½.
The Runs Test • What is the sensitivity of the total number of runs to drift? • In CLM’s Table 2, the expected total number of runs is reported for a sample of 1,000 observations from a geometric random walk with normally distributed increments, drifts from zero to 20%, and a standard deviation of 21%. • In this case π = Φ(μ/σ). • The expected number of runs falls from 500 at π = ½ to only 283.5 at μ = 20%, i.e. π ≈ 83% (with σ = 21%). • Fewer runs are expected because the drift makes the runs of “ups” longer.
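A sketch reproducing those two entries with the expected-runs formula from the previous slide:

    from scipy.stats import norm

    # Sketch: expected total runs 2*n*pi*(1-pi) + pi^2 + (1-pi)^2 for n = 1000
    # and sigma = 21%, at drifts of 0% and 20%, reproducing the 500 and ~283.5
    # quoted in the slide.
    n, sigma = 1000, 0.21

    for mu in (0.0, 0.20):
        pi = norm.cdf(mu / sigma)                  # 0.5 at mu = 0, ~0.83 at mu = 20%
        expected_runs = 2 * n * pi * (1 - pi) + pi**2 + (1 - pi) ** 2
        print(mu, round(pi, 3), round(expected_runs, 1))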
The Runs Test • Bernoulli test statistic: standardize the number of runs by its expected value and its asymptotic standard deviation; under the null the standardized statistic is approximately N(0, 1). • Wallis and Roberts continuity correction: add ½ to the run count before standardizing, since Nruns is integer-valued. • Fama (1965) finds no evidence against the RW using the runs test. Recently the theory has been extended to non-i.i.d. sequences and in other ways, but we will not examine these contributions.
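A sketch of one common form of this z-statistic (the variance expression below is a reconstruction for i.i.d. Bernoulli trials and may differ from CLM's exact formula in lower-order terms):

    import numpy as np
    from scipy.stats import norm

    # Sketch of a runs z-statistic with the 1/2 continuity correction.  The
    # variance 4*n*pi*(1-pi)*(1 - 3*pi*(1-pi)) is a reconstruction for i.i.d.
    # Bernoulli trials, not necessarily CLM's exact expression.
    def runs_z(indicators, pi):
        """Standardized number of runs for a 0/1 sequence with up-probability pi."""
        ind = np.asarray(indicators)
        n = len(ind)
        n_runs = 1 + np.sum(ind[1:] != ind[:-1])
        mean = 2 * n * pi * (1 - pi) + pi**2 + (1 - pi) ** 2
        var = 4 * n * pi * (1 - pi) * (1 - 3 * pi * (1 - pi))
        return (n_runs + 0.5 - mean) / np.sqrt(var)

    # Example on simulated driftless data (pi = 1/2), where the null is true.
    rng = np.random.default_rng(6)
    z = runs_z(rng.standard_normal(1000) > 0, pi=0.5)
    print(z, 2 * norm.sf(abs(z)))                  # typically a small |z| and a large p-value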