DSGE Models and Optimal Monetary Policy Andrew P. Blake
A framework of analysis • Typified by Woodford’s Interest and Prices • Sometimes called DSGE models • Also known as NNS models • Strongly micro-founded models • Prominent role for monetary policy • Optimising agents and policymakers
What do we assume? • Model is stochastic, linear, time-invariant • Objective function can be approximated very well by a quadratic • That the solutions are certainty equivalent • Not always clear that they are • Agents (when they form them) have rational expectations or fixed-coefficient extrapolative expectations
Linear stochastic model • We consider a model in state space form: • u is a vector of control instruments, s a vector of endogenous variables, ε is a shock vector • The model coefficients are in A, B and C
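The state-space equation itself did not survive extraction; a plausible reconstruction, using the notation just defined (the dating of the shock is a convention that may differ from the original slide), is

$$ s_{t+1} = A s_t + B u_t + C \varepsilon_{t+1}. $$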
Quadratic objective function • Assume the following objective function: • Q and R are positive (semi-)definite symmetric matrices of weights • 0 < ρ ≤ 1 is the discount factor • We take the initial time to be 0
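The loss function is likewise missing from the extracted slide; given the weights and discount factor just described, it is presumably of the form

$$ W_0 = \mathbb{E}_0 \sum_{t=0}^{\infty} \rho^t \left( s_t' Q s_t + u_t' R u_t \right). $$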
How do we solve for the optimal policy? • We have two options: • Dynamic programming • Pontryagin’s minimum principle • Both are equivalent with non-anticipatory behaviour • Very different with rational expectations • We will require both to analyse optimal policy
Dynamic programming • Approach due to Bellman (1957) • Formulated the value function: • Recognised that it must have the structure:
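The two expressions referred to here are not in the extracted text; in the standard LQ setting they would be the Bellman equation

$$ V(s_t) = \min_{u_t} \left\{ s_t' Q s_t + u_t' R u_t + \rho\, \mathbb{E}_t V(s_{t+1}) \right\} $$

and the conjectured quadratic structure

$$ V(s_t) = s_t' S s_t + z, $$

where z is a constant generated by the shock variance (a hedged reconstruction; scaling conventions may differ from the original slides).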
Optimal policy rule • First order condition (FOC) for u: • Use to solve for policy rule:
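Reconstructing the missing algebra under the conventions above: differentiating the right-hand side of the Bellman equation with respect to u_t gives the FOC

$$ R u_t + \rho B' S \left( A s_t + B u_t \right) = 0, $$

which solves for the linear feedback rule

$$ u_t = -F s_t, \qquad F = \left( R + \rho B' S B \right)^{-1} \rho B' S A. $$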
The Riccati equation • Leaves us with an unknown in S • Collect terms from the value function: • Drop z:
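Substituting the rule back into the Bellman equation and dropping the constant z leaves (again a reconstruction consistent with the notation above)

$$ S = Q + F' R F + \rho \left( A - B F \right)' S \left( A - B F \right). $$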
Riccati equation (cont.) • If we substitute in for F we can obtain: • Complicated matrix quadratic in S • Solved ‘backwards’ by iteration, perhaps by:
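Using the expression for F, this collapses to $S = Q + \rho A' S (A - BF)$, a matrix quadratic in S. Below is a minimal numerical sketch of the backward iteration the slide alludes to; the function name, initial guess and stopping rule are illustrative and not taken from the slides or from WinSolve.

```python
# Backward iteration on the discounted Riccati equation for
#   min E_0 sum rho^t (s' Q s + u' R u)  s.t.  s_{t+1} = A s_t + B u_t + C e_{t+1}
# (certainty equivalence lets us ignore the shock when computing S and F).
import numpy as np

def solve_riccati(A, B, Q, R, rho, tol=1e-10, max_iter=10_000):
    S = Q.copy()                                      # illustrative initial guess
    for _ in range(max_iter):
        # Policy implied by the current guess: u = -F s
        F = np.linalg.solve(R + rho * B.T @ S @ B, rho * B.T @ S @ A)
        # Riccati update S = Q + rho A' S (A - B F)
        S_new = Q + rho * A.T @ S @ (A - B @ F)
        if np.abs(S_new - S).max() < tol:
            return S_new, F
        S = S_new
    raise RuntimeError("Riccati iteration did not converge")
```

Given model matrices A, B and weights Q, R, calling solve_riccati(A, B, Q, R, rho) returns S (so the welfare loss can be evaluated) and the feedback matrix F for the rule u_t = -F s_t.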
Properties of the solution • ‘Principle of optimality’ • The optimal policy depends on the unknown S • S must satisfy the Riccati equation • Once you solve for S you can define the policy rule and evaluate the welfare loss • S does not depend on s or u, only on the model and the objective function • The initial values do not affect the optimal control
Lagrange multipliers • Due to Pontryagin (1957) • Formulated a system using constraints as: • λ is a vector of Lagrange multipliers: • The constrained objective function is:
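The constrained objective did not survive extraction; appending the model constraints with multipliers λ_{t+1}, it presumably reads

$$ \mathcal{L} = \mathbb{E}_0 \sum_{t=0}^{\infty} \rho^t \left[ s_t' Q s_t + u_t' R u_t + 2 \lambda_{t+1}' \left( A s_t + B u_t + C \varepsilon_{t+1} - s_{t+1} \right) \right], $$

where the factor of 2 and the dating of the multipliers are conventions that may differ from the original slide.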
FOCs • Differentiate with respect to the three sets of variables:
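Under the conventions used in the Lagrangian above (and with expectations suppressed, as certainty equivalence permits), the three sets of first-order conditions, with respect to u_t, s_t and λ_{t+1}, reconstruct as

$$ R u_t + B' \lambda_{t+1} = 0, \qquad \lambda_t = \rho \left( Q s_t + A' \lambda_{t+1} \right), \qquad s_{t+1} = A s_t + B u_t. $$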
Hamiltonian system • Use the FOCs to yield the Hamiltonian system: • This system is saddlepath stable • Need to eliminate the co-states to determine the solution • NB: Now in the form of a (singular) rational expectations model (discussed later)
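Eliminating $u_t = -R^{-1} B' \lambda_{t+1}$ and stacking states and co-states gives a system of the form (a reconstruction, not necessarily the slide's exact layout)

$$ \begin{bmatrix} I & B R^{-1} B' \\ 0 & \rho A' \end{bmatrix} \begin{bmatrix} s_{t+1} \\ \lambda_{t+1} \end{bmatrix} = \begin{bmatrix} A & 0 \\ -\rho Q & I \end{bmatrix} \begin{bmatrix} s_t \\ \lambda_t \end{bmatrix}, $$

which is saddlepath stable and, because the left-hand matrix need not be invertible, has the form of a singular rational expectations model.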
Solutions are equivalent • Assume that the solution to the saddlepath problem is • Substitute into the FOCs to give:
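The assumed solution is presumably of the form $\lambda_t = \rho S s_t$ (the scaling by ρ depends on how the multipliers are dated). Substituting this into the FOC for u_t gives

$$ R u_t = -\rho B' S \left( A s_t + B u_t \right) \quad \Rightarrow \quad u_t = -\left( R + \rho B' S B \right)^{-1} \rho B' S A \, s_t, $$

the same feedback rule as under dynamic programming.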
Equivalence (cont.) • We can combine these with the model and eliminate s to give: • Same solution for S that we had before • Pontryagin and Bellman give the same answer • Norman (1974, IER) showed them to be stochastically equivalent • Kalman (1961) developed certainty equivalence
What happens with RE? • Modify the model to: • Now we have z as predetermined variables and x as jump variables • Model has a saddlepath structure on its own • Solved using Blanchard-Kahn etc.
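A plausible reconstruction of the modified model, partitioning the state into predetermined variables z and jump variables x:

$$ \begin{bmatrix} z_{t+1} \\ \mathbb{E}_t x_{t+1} \end{bmatrix} = A \begin{bmatrix} z_t \\ x_t \end{bmatrix} + B u_t + C \varepsilon_{t+1}, $$

where only the predetermined block is hit by the realised shock (the exact placement of the shock is an assumption here).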
Bellman’s dedication • At the beginning of Bellman’s book Dynamic Programming he dedicates it thus: To Betty-Jo Whose decision processes defy analysis
Control with RE • How do rational expectations affect the optimal policy? • Somewhat unbelievably - no change • Best policy characterised by the same algebra • However, we need to be careful about the jump variables, and Betty-Jo • We now obtain pre-determined values for the co-states λ • Why?
Pre-determined co-states • Look at the value function • Remember the reaction function is: • So the cost can be written as • We can minimise the cost by choosing some co-states and letting x jump
Pre-determined co-states (cont.) • At time 0 this is minimised by: • We can rearrange the reaction function to: • Where etc
Pre-determined co-states (cont.) • Alternatively the value function can be written in terms of the x and the z’s as: • The loss is:
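A hedged reconstruction of what these expressions amount to: with $s_0 = (z_0, x_0)$ and co-states $\lambda_0 = (\lambda^z_0, \lambda^x_0)$, the value function is quadratic,

$$ V_0 = s_0' S s_0 = z_0' S_{zz} z_0 + 2 z_0' S_{zx} x_0 + x_0' S_{xx} x_0, $$

and since x_0 is free to jump, minimising over x_0 sets $\partial V_0 / \partial x_0 = 2\left( S_{xz} z_0 + S_{xx} x_0 \right) = 0$, which (up to scaling) is the condition that the co-states on the jump variables are zero, $\lambda^x_0 = 0$.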
Cost-to-go • At time 0, z0 is predetermined • x0 is not, and can take any value • In fact it is a function of z0 (and implicitly of u) • We can choose the value of λx at time 0 to minimise cost • We choose it to be 0 • This minimises the cost-to-go in period 0
Time inconsistency • This is true at time 0 • Time passes, maybe just one period • Time 1 ‘becomes time 0’ • Same optimality conditions apply • We should reset the co-states to 0 • The optimal policy is time inconsistent
Different to non-RE • We established before that the non-RE solution did not depend on the initial conditions (or any z) • Now it directly does • Can we use the same solution methods? • DP or LM? • Yes, as long as we ‘re-assign’ the co-states • However, we are implicitly using the LM solution as it is ‘open-loop’ – the policy depends directly on the initial conditions
Where does this fit in? • Originally established in 1980s • Clearest statement in Currie and Levine (1993) • Re-discovered in recent US literature • Ljungqvist and Sargent, Recursive Macroeconomic Theory (2000, and new edition) • Compare with Stokey and Lucas
How do we deal with time inconsistency? • Why not use the ‘principle of optimality’? • Start at the end and work back • How do we incorporate this into the RE control problem? • Assume expectations about the future are ‘fixed’ in some way • Optimise subject to these expectations
A rule for future expectations • Assume that: • If we substitute this into the model we get:
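A hedged reconstruction of this slide's missing equations (partitioning A and B conformably with z and x as A11, A12, A21, A22 and B1, B2): the assumption is that from next period onwards the jump variables follow the fixed rule

$$ x_{t+1} = -C z_{t+1}. $$

Substituting this into the jump-variable block of the model and solving for x_t gives the within-period reaction function

$$ x_t = -D z_t - G u_t, \qquad D = \left( A_{22} + C A_{12} \right)^{-1} \left( A_{21} + C A_{11} \right), \quad G = \left( A_{22} + C A_{12} \right)^{-1} \left( B_2 + C B_1 \right). $$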
A rule for future expectations (cont.) • The ‘pre-determined’ model is: • Using the reaction function for x we get:
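In this notation, using the reaction function for x in the predetermined block gives the ‘pre-determined’ model (again a reconstruction)

$$ z_{t+1} = \left( A_{11} - A_{12} D \right) z_t + \left( B_1 - A_{12} G \right) u_t. $$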
Dynamic programming solution • To calculate the best policy we need to make assumptions about leadership • What is the effect on x of changes in u? • If we assume no leadership it is zero • Otherwise it is K, need to use:
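In the notation of the reconstruction above, the within-period effect of u_t on x_t is $\partial x_t / \partial u_t = -G$, which is presumably what the slide denotes K; assuming no leadership means treating this within-period response as zero when taking the FOC for u.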
Dynamic programming (cont.) • FOC for u for leadership: where: • This policy must be time consistent • Only uses intra-period leadership
Dynamic programming (cont.) • This is known in the dynamic game literature as feedback Stackelberg • Also need to solve for S • Substitute in using relations above • Can also assume that x is unaffected by u • Feedback Nash equilibrium • Developed by Oudiz and Sachs (1985)
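As a concrete illustration of the intra-period-leadership (feedback Stackelberg) iteration just described, here is a minimal numerical sketch in the spirit of Oudiz and Sachs (1985). It reuses the reconstruction above (future rule x_{t+1} = -C z_{t+1}, within-period reaction x_t = -D z_t - G u_t); all function and variable names are illustrative, and this is not WinSolve's implementation.

```python
import numpy as np

def time_consistent_policy(A11, A12, A21, A22, B1, B2, Q, R, rho,
                           tol=1e-10, max_iter=10_000):
    """Oudiz-Sachs style iteration for the time-consistent LQ policy.

    Model:  z_{t+1}     = A11 z_t + A12 x_t + B1 u_t   (predetermined)
            E_t x_{t+1} = A21 z_t + A22 x_t + B2 u_t   (jump)
    Loss:   E_0 sum rho^t (s_t' Q s_t + u_t' R u_t),   s = (z, x).
    """
    nz, nu, nx = A11.shape[0], B1.shape[1], A22.shape[0]
    C = np.zeros((nx, nz))      # conjectured future rule: x_{t+1} = -C z_{t+1}
    V = np.zeros((nz, nz))      # conjectured value function: z' V z
    for _ in range(max_iter):
        # Within-period reaction of the jump variables: x_t = -D z_t - G u_t
        M = np.linalg.inv(A22 + C @ A12)
        D = M @ (A21 + C @ A11)
        G = M @ (B2 + C @ B1)
        # Predetermined dynamics after eliminating x
        As = A11 - A12 @ D
        Bs = B1 - A12 @ G
        # Period loss rewritten in (z, u): s_t = T1 z_t + T2 u_t
        T1 = np.vstack([np.eye(nz), -D])
        T2 = np.vstack([np.zeros((nz, nu)), -G])
        Qs = T1.T @ Q @ T1
        Us = T1.T @ Q @ T2
        Rs = R + T2.T @ Q @ T2
        # FOC for u with intra-period leadership (the effect of u on x via G
        # is internalised through Bs and T2):  u_t = -F z_t
        F = np.linalg.solve(Rs + rho * Bs.T @ V @ Bs,
                            Us.T + rho * Bs.T @ V @ As)
        # Update the value function and the implied rule for x
        Acl = As - Bs @ F
        V_new = Qs - Us @ F - F.T @ Us.T + F.T @ Rs @ F + rho * Acl.T @ V @ Acl
        V_new = 0.5 * (V_new + V_new.T)      # keep symmetric
        C_new = D - G @ F                    # implied x_t = -C_new z_t
        if max(np.abs(V_new - V).max(), np.abs(C_new - C).max()) < tol:
            return F, C_new, V_new
        V, C = V_new, C_new
    raise RuntimeError("time-consistent iteration did not converge")
```

The feedback Nash case mentioned on the slide differs only in the FOC, where the within-period response of x to u is treated as zero; the sketch above implements the leadership case.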
Dynamic programming (cont.) • Key assumption that we condition on a rule for expectations • Could condition on a time path (LM) • Time consistent by construction • Principle of optimality • Many other policies have similar properties • Stochastic properties now matter
Time consistency • Not the only time-consistent solutions • Could use Lagrange multipliers • DP is not only time consistent, it is subgame perfect • Much stronger requirement • See Blake (2004) for discussion
What’s new with DSGE models? • Woodford and others have derived welfare loss functions that are quadratic and depend only on the variances of inflation and output • These are approximations to the true social utility functions • Can apply LQ control as above to these models • Parameters of the model appear in the loss function and vice versa (e.g. discount factor)
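For concreteness (a standard textbook case, not necessarily the exact expression used in the presentation): in the basic New Keynesian model the second-order approximation to household welfare gives a loss of the form

$$ \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left( \pi_t^2 + \lambda x_t^2 \right), $$

where π is inflation, x the output gap, β the households' discount factor and λ a function of the structural parameters, illustrating how the model's parameters enter the loss function.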
DSGE models in WinSolve • Can set up micro-founded models • Can set up micro-founded loss functions • Can explore optimal monetary policy • Time inconsistent • Time consistent • Taylor-type approximations • Let’s do it!