Nick Bloom: Micro-heterogeneity & Macro, General Equilibrium
Do micro distributions matter for macro outcomes?

Probably the greatest unanswered question in macro is how to get a tractable micro-to-macro model. Currently the battleground is in general equilibrium models. First we overview the basics, and then discuss a couple of key papers.
The easy life under partial equilibrium

In partial equilibrium models each firm solves its own problem, for example solving for capital and labor:

V(K,L,A) = max_{K',L'} { F(A,K',L') - wL - C(K'-K, L'-L) + (1/(1+r)) E[V(K',L',A')] }

The key assumption is that wages (w) and interest rates (r) are fixed. This allows you to ignore the interaction between firms. Bertola, Caballero, Engel etc. all do this in their earlier work for numerical simplicity, but is this valid?
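For concreteness, here is a minimal sketch of solving a stripped-down version of this problem by value function iteration. To keep it short, labor is dropped (so w disappears), productivity follows a two-state Markov chain, and adjustment costs are quadratic; all parameter values are illustrative assumptions, not taken from any of the papers above.

```python
import numpy as np

# Illustrative parameters (assumptions, not from any cited paper)
r, alpha, gamma = 0.04, 0.6, 2.0     # fixed interest rate, curvature, adjustment cost
beta = 1.0 / (1.0 + r)               # discount factor implied by the fixed r
A = np.array([0.9, 1.1])             # two productivity states
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])           # Markov transitions for A
K = np.linspace(0.1, 5.0, 200)       # capital grid

# Flow payoff of choosing K' from K in state A: output net of a quadratic
# adjustment cost. Because w and r are fixed, no other firm's state appears.
flow = (A[:, None, None] * K[None, None, :] ** alpha
        - (gamma / 2.0) * (K[None, None, :] - K[None, :, None]) ** 2)

V = np.zeros((len(A), len(K)))       # V[a, k]
for it in range(2000):
    EV = P @ V                                           # E[V(K', A') | A]
    V_new = (flow + beta * EV[:, None, :]).max(axis=2)   # choose K' on the grid
    diff = np.max(np.abs(V_new - V))
    V = V_new
    if diff < 1e-8:
        break

print(f"value function converged in {it + 1} iterations")
```

The "easy life" is visible in the code: the firm's problem is fully described by its own (K, A), so the value function is a small array and the loop is fast.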
The problem is the curse of dimensionality

In general equilibrium each firm is still assumed to be solving its own profit maximisation problem. But now wages and prices are functions of the cross-sectional distribution (m), so that w = f(m) and r = g(m):

V(K,L,A,m) = max_{K',L'} { F(A,K',L') - w(m)L - C(K'-K, L'-L) + (1/(1+r(m))) E[V(K',L',A',m)] }

This problem is now a lot tougher: every firm has to keep track of its own state variables and every other firm's state variables. So if you have 3 states (K,L,A) and N firms, that's 3N states!
Solving models under General Equilibrium
• This is called the "Curse" because it is exponential in N
• If it takes X GB of memory to solve for 1 firm, it will take on the order of X^N GB to solve for N firms (see the toy calculation below)
• Hence lots of computing power alone is never going to solve this
• So the trick is to somehow approximate this cross-sectional distribution in a way that:
  • Reduces it down to something finite and manageable
  • Does not dramatically change the GE flavor of the solution
• Anything that does this is also easily defensible under bounded rationality – most individuals/firms also approximate life…
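A toy calculation of the exponential blow-up (grid sizes here are arbitrary, purely for illustration):

```python
# With g grid points per state and 3 states per firm, the joint problem
# over N firms lives on g**(3N) grid points: exponential in N.
g, states_per_firm = 100, 3
for n_firms in (1, 2, 3, 10):
    points = g ** (states_per_firm * n_firms)
    print(f"N = {n_firms:2d} firms: 10^{len(str(points)) - 1} grid points")
```

Even at N = 10 firms the joint grid has 10^60 points, which is why approximation rather than brute force is the only way forward.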
Per Krusell and Anthony Smith (1998) “Income and wealth heterogeneity in the macroeconomy” Journal of Political Economy
Overview
• Undertakes a GE estimation of the effects of the wealth distribution on the economy
• The fundamental idea was to:
  • Approximate the cross-sectional distribution using moments
  • Use this to operationalize a Recursive Competitive Equilibrium (to be explained more in a minute)
  • Also combine heterogeneity in parameters (for example discount factors) to fit the actual wealth data better
• An important paper:
  • First paper to undertake this GE approximation
  • Shares the code for this and provides sufficiently good instructions for others to follow – always do this!
A Recursive Competitive Equilibrium - Theory
• In short this makes sure three sets of conditions are met:
  • Firms and households are optimising given:
    • Market prices (typically wages and interest rates)
    • Expectations over the evolution of the aggregate and the cross-section
  • Market prices clear the goods and labor markets
  • Expectations are consistent with outcomes
One standard way to write these conditions formally is sketched below.
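A compact formal statement of the three conditions, written for the firm problem from the earlier slide (the notation m for the cross-sectional distribution and Γ for its perceived law of motion is assumed here, not taken from the slides):

```latex
% (1) Optimisation: taking prices and the perceived law of motion as given,
%     each firm solves
V(K,L,A,m) = \max_{K',L'}\Big\{ F(A,K',L') - w(m)L - C(K'-K,\,L'-L)
             + \tfrac{1}{1+r(m)}\,\mathbb{E}\big[V(K',L',A',m')\big] \Big\},
\qquad m' = \Gamma(m)

% (2) Market clearing: w(m) and r(m) clear the labor and goods markets
%     given the current distribution m.

% (3) Consistency: \Gamma(m) equals the distribution actually generated by
%     aggregating the firms' policy functions K'(\cdot), L'(\cdot).
```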
A Recursive Competitive Equilibrium - Practice

Numerical solutions assume you can approximate the expectation of the distribution's evolution, and reformulate the problem using this approximation. This amounts to assuming bounded rationality, justified by computational costs. It is important to test this by confirming that value maximisation for firms and agents is only reduced marginally by the approximation.

With this approach you then numerically solve recursively: solve for (1) value functions and (2) market clearing jointly, given an assumption on (3) the distribution's evolution. Then simulate data for (3) the distribution. Then use this simulation to re-solve (1) value functions and (2) market clearing. Then simulate (3) again, and continue to loop until you converge (see the sketch below).
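A minimal sketch of that loop, assuming a Krusell-Smith-style first-moment approximation in which agents perceive log K' = a_z + b_z log K. To keep the example short and runnable, the inner dynamic-programming step is replaced by an ad-hoc saving rule that responds to the expected return implied by the perceived law; in the real algorithm that single line is a full household value-function solution like the one sketched earlier. All parameter values are illustrative assumptions.

```python
import numpy as np

alpha, delta = 0.36, 0.025                   # technology (illustrative)
z_vals = np.array([0.99, 1.01])              # bad / good aggregate productivity
Pz = np.array([[0.875, 0.125],
               [0.125, 0.875]])              # aggregate state transitions
u_rate = np.array([0.10, 0.04])              # unemployment rate by aggregate state
N, T, burn = 5_000, 1_500, 500               # agents, periods, burn-in

a = np.array([0.0, 0.0])                     # perceived law: log K' = a_z + b_z log K
b = np.array([1.0, 1.0])

for outer in range(30):
    rng = np.random.default_rng(0)           # hold shock draws fixed across iterations
    z_idx = np.zeros(T, dtype=int)
    for t in range(1, T):
        z_idx[t] = rng.choice(2, p=Pz[z_idx[t - 1]])

    k = np.full(N, 10.0)                     # individual capital holdings
    Kbar = np.empty(T)
    for t in range(T):
        iz = z_idx[t]
        Kbar[t] = k.mean()
        L = 1.0 - u_rate[iz]
        r = alpha * z_vals[iz] * (Kbar[t] / L) ** (alpha - 1) - delta
        w = (1 - alpha) * z_vals[iz] * (Kbar[t] / L) ** alpha
        emp = rng.random(N) > u_rate[iz]     # iid employment given z (a simplification)
        cash = (1 + r) * k + w * emp
        # --- placeholder for step (1): the true policy comes from dynamic
        # programming; here agents just save more when the expected return,
        # computed from the PERCEIVED law of motion, is higher.
        K_next = np.exp(a[iz] + b[iz] * np.log(Kbar[t]))
        Ez, EL = Pz[iz] @ z_vals, 1.0 - Pz[iz] @ u_rate
        r_exp = alpha * Ez * (K_next / EL) ** (alpha - 1) - delta
        k = np.clip(0.8 + 4.0 * r_exp, 0.1, 0.95) * cash

    # Step (3): re-estimate the perceived law of motion, regime by regime
    a_new, b_new = np.zeros(2), np.zeros(2)
    for iz in range(2):
        sel = z_idx[burn:T - 1] == iz
        x = np.log(Kbar[burn:T - 1][sel])
        y = np.log(Kbar[burn + 1:T][sel])
        b_new[iz], a_new[iz] = np.polyfit(x, y, 1)
    gap = max(np.abs(a_new - a).max(), np.abs(b_new - b).max())
    a, b = 0.7 * a + 0.3 * a_new, 0.7 * b + 0.3 * b_new   # damped update
    if gap < 1e-3:
        break

print(f"stopped after {outer + 1} outer iterations")
print("perceived law of motion: a =", a.round(3), " b =", b.round(3))
```

The structure is the whole point: solve behavior given a perceived law of motion, simulate the cross-section, re-estimate the law from the simulated aggregates, and loop until the perceived and actual laws agree.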
Solving Recursive Competitive Equilibrium models
• Unfortunately there are no results showing that approximate numerical solutions to RCEs with fixed costs are well behaved:
  • A solution exists
  • This solution is unique
  • The RCE solution mechanism outlined earlier will converge
• In practice, however, it seems to work. But anyone who can make progress on showing any of the above will have a winning paper…
The Krusell Smith moments approach to RCEs
• They use moments to approximate the distribution – appealing as a statistically standard way to describe any distribution (see the small example below)
• There are other approaches, for example:
  • Caballero and Engel played around with various characteristic functions (Taylor, Fourier, Chebyshev etc.)
  • Khan and Thomas (2004) used uniform histograms
• The choice really depends on the support of the distribution to be approximated
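A small illustration of the idea: replace the infinite-dimensional cross-section with a short vector of moments. The lognormal cross-section here is synthetic, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
k = rng.lognormal(mean=2.0, sigma=0.8, size=100_000)   # synthetic cross-section

# The full distribution m is infinite-dimensional; the approximation keeps
# only a finite vector of moments and treats prices as functions of these.
mean, std = k.mean(), k.std()
skew = ((k - mean) ** 3).mean() / std ** 3
print(f"1st moment (mean):     {mean:.3f}")
print(f"2nd moment (std dev):  {std:.3f}")
print(f"3rd moment (skewness): {skew:.3f}")
# e.g. w = f(mean) in the 1-moment version, w = f(mean, std) with 2 moments
```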
The Krusell Smith results from using moments

In the paper they report that only the 1st moment is required for the solution of the model, with higher moments providing no additional fit. This is also a result that Thomas (2002), Thomas and Khan (2004), and Bachmann, Caballero and Engel (2006) report. My guess is this is not generally robust – for example, with time-varying uncertainty the cross-sectional distribution compresses and expands over time. Another great paper would be to properly evaluate this across many models. A sketch of the underlying test is below.
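The test behind that claim: forecast next period's aggregate capital with the 1st moment alone, then add a 2nd moment and see whether the fit (R²) improves. The data below are synthetic, generated so that the mean is nearly sufficient by construction; the point is only to show the mechanics of the comparison.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 5_000
logK = 3.0 + np.cumsum(0.01 * rng.standard_normal(T))   # synthetic 1st-moment series
spread = 0.5 + 0.1 * rng.standard_normal(T)             # synthetic 2nd-moment series
logK_next = 0.1 + 0.97 * logK + 0.001 * spread + 0.001 * rng.standard_normal(T)

def r_squared(regressors, y):
    """R^2 from an OLS regression of y on the regressors plus a constant."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return 1.0 - (y - X @ beta).var() / y.var()

print("R^2, 1st moment only:   ", round(r_squared([logK], logK_next), 6))
print("R^2, 1st + 2nd moments: ", round(r_squared([logK, spread], logK_next), 6))
```

In Krusell and Smith's reported results the one-moment R² is famously already so close to 1 that adding moments changes essentially nothing; the question raised above is whether that survives in models where the spread itself moves around.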
They find no impact of the cross-sectional distribution

The main result from KS is that the cross-sectional distribution of wealth has almost no effect on aggregate dynamics – what they call "approximate aggregation". This is because their consumption policy is pretty linear for medium and high levels of wealth, so consumption behaviour is roughly linear in wealth. Since aggregate consumption is individual consumption weighted by wealth, it is mostly in the hands of the rich, so the average agent behaves linearly. If agents are linear, higher order moments don't matter (next slide). This had a big impact on macro – it suggests that "RAs rule OK".
Remember our old friend from last time…

If the response function (the adjustment hazard for investment, the MPC for consumption) is constant (so the response is linear in the gap/wealth), then the distribution does not matter (see the check below).

[Figure: the distribution of plants over mandated (desired) investment, the adjustment hazard, and the implied aggregate investment by year]
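A quick numerical check of this claim (the hazard shapes and cross-sections are made-up illustrations): with a flat hazard, two distributions with the same mean but different spreads deliver the same aggregate; once the hazard rises with the gap, the spread starts to matter.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two cross-sections of mandated investment gaps with the SAME mean
# but different spreads (both illustrative normal distributions)
tight = rng.normal(0.05, 0.1, 1_000_000)
wide = rng.normal(0.05, 0.4, 1_000_000)

flat = lambda x: np.full_like(x, 0.2)                  # constant hazard: adjust w.p. 0.2
rising = lambda x: np.clip(0.2 + 2.0 * x ** 2, 0, 1)   # bigger gaps adjust more often

# Aggregate investment = E[hazard(gap) * gap]
for name, hazard in [("flat", flat), ("rising", rising)]:
    agg_t = np.mean(hazard(tight) * tight)
    agg_w = np.mean(hazard(wide) * wide)
    print(f"{name:>6} hazard: tight spread -> {agg_t:+.4f}, wide spread -> {agg_w:+.4f}")
```

Under the flat hazard the two aggregates agree (up to simulation noise), so only the mean of the distribution matters; under the rising hazard they diverge, which is exactly when micro distributions start to matter for macro.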
Message is, at least for consumption, the micro distribution appears not to matter
• Good paper – how could you build on this?
  • Topic – look at something more non-linear (labor or investment)
  • Technique – use higher moments, these might matter
  • Technical – provide some more formal proofs for RCEs
Ruediger Bachmann, Ricardo Caballero and Eduardo Engel (2007) “Lumpy Investment in Dynamic General Equilibrium” Yale WP
Overview
• The paper estimates micro-to-macro investment in a GE setting. In particular it revisits the results from Khan and Thomas, finding that lumpiness matters
• The contribution is:
  • Demonstrates that the impact of micro-macro GE is sensitive to parameter choices
  • Provides alternative methodologies for estimating these parameters
  • Quantifies the separate impact of PE and GE smoothing
• Good paper: shows that key results on GE smoothing are very sensitive to a few parameters, plus new techniques
They follow the basic Khan and Thomas (2005) approach
• Generally follow the approach of Khan and Thomas
• Main points of departure are over parameter choices, particularly:
  • Bigger adjustment costs – more lumps (so micro matters more)
  • More curvature of the production function – curvature means higher option values, so actions now influence the future
  • Higher intertemporal elasticity of substitution – higher values mean output moves more over time to save adjustment costs
  • Inclusion of maintenance investment – reduces the drift rate and so raises the "memory" of the process
With these alternative parameters they find a major role for micro smoothing
Key identifying assumption is PE at the industry level, which allows you to compare PE to GE

It is a good idea to try to use additional data to identify parameters. They (like me) believe plant-level data is already partially aggregated, so they use industry-level data, assuming it is fully aggregated but still PE.

[Figure: volatility of investment rates]
Our old friend – time varying responsiveness index
• If you accept the RI is time varying (which I think I do) then this requires additional assumptions:
  • Time varying cross-section matters (very possible)
  • Time varying adjustment costs (less likely)
  • Other time varying factors in the model (need to introduce these)
  • Other time varying shocks – uncertainty…
Message is parameter choices matter a lot in determining micro-macro aggregation effects
• Good paper – how could you build on this? Similar to earlier, plus:
  • Modeling – include other adjustment costs (quadratic and linear), allow for labor adjustment costs or even technology vintages
  • Technique – evaluate the impact of using higher moments (is there any way to get them to matter?)
  • Identification – robust ways to estimate the underlying parameters
  • So what – push beyond the time varying RI to look at major shocks (tax credits etc.), where this would be really valuable