From ecb6cc4cde8cc02989bba524f74afb86ac9e0ac7 Mon Sep 17 00:00:00 2001
From: Johannes Pfeifer
Date: Thu, 17 Mar 2016 13:53:40 +0100
Subject: [PATCH 1/2] Correct manual on Ramsey

Closes #944
---
 doc/dynare.texi | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/doc/dynare.texi b/doc/dynare.texi
index bfdeb6be6..a5e95e473 100644
--- a/doc/dynare.texi
+++ b/doc/dynare.texi
@@ -7160,11 +7160,10 @@ end;
 This command computes the first order approximation of the policy that
 maximizes the policy maker's objective function subject to the
 constraints provided by the equilibrium path of the private economy and under
-commitment to this optimal policy. Following @cite{Woodford (1999)}, the Ramsey
-policy is computed using a timeless perspective. That is, the government forgoes
-its first-period advantage and does not exploit the preset privates sector expectations
-(which are the source of the well-known time inconsistency that requires the
-assumption of commitment). Rather, it acts as if the initial multipliers had
+commitment to this optimal policy. The Ramsey policy is computed is computed
+by approximating the equilibrium system around the perturbation point where the
+Lagrange multipliers are at their steady state, i.e. where the Ramsey planner acts
+as if the initial multipliers had
 been set to 0 in the distant past, giving them time to converge to their steady
 state value. Consequently, the optimal decision rules are computed around this
 steady state of the endogenous variables and the Lagrange multipliers.
@@ -7219,8 +7218,7 @@ In addition, it stores the value of planner objective function under Ramsey
 policy in @code{oo_.planner_objective_value}, given the initial values of the
 endogenous state variables. If not specified with @code{histval}, they are
 taken to be at their steady state values. The result is a 1 by 2
-vector, where the first entry stores the value of the planner objective under
-the timeless perspective to Ramsey policy, i.e. where the initial Lagrange
+vector, where the first entry stores the value of the planner objective when the initial Lagrange
 multipliers associated with the planner's problem are set to their steady
 state values (@pxref{ramsey_policy}).
 In contrast, the second entry stores the value of the planner objective with
@@ -13250,11 +13248,6 @@ Villemot, Sébastien (2011): ``Solving rational expectations models at first
 order: what Dynare does,'' @i{Dynare Working Papers}, 2, CEPREMAP
 
-@item
-Woodford, Michael (2011): ``Commentary: How Should Monetary Policy Be
-Conducted in an Era of Price Stability?'' @i{Proceedings - Economic Policy Symposium - Jackson Hole},
-277-316
-
 @end itemize

From 1554f15a65d13c52eb93b66d4a27969f5c76936e Mon Sep 17 00:00:00 2001
From: Michel Juillard
Date: Sun, 8 Jan 2017 17:16:41 +0100
Subject: [PATCH 2/2] - rephrased description of objective function value when
 Lagrange multipliers are set to 0

- typo correction
---
 doc/dynare.texi | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/doc/dynare.texi b/doc/dynare.texi
index a5e95e473..1d740583b 100644
--- a/doc/dynare.texi
+++ b/doc/dynare.texi
@@ -7160,7 +7160,7 @@ end;
 This command computes the first order approximation of the policy that
 maximizes the policy maker's objective function subject to the
 constraints provided by the equilibrium path of the private economy and under
-commitment to this optimal policy. The Ramsey policy is computed is computed
+commitment to this optimal policy. The Ramsey policy is computed
 by approximating the equilibrium system around the perturbation point where the
 Lagrange multipliers are at their steady state, i.e. where the Ramsey planner acts
 as if the initial multipliers had
@@ -7221,10 +7221,13 @@ taken to be at their steady state values. The result is a 1 by 2
 vector, where the first entry stores the value of the planner objective when the initial Lagrange
 multipliers associated with the planner's problem are set to their steady
 state values (@pxref{ramsey_policy}).
+
 In contrast, the second entry stores the value of the planner objective with
 initial Lagrange multipliers of the planner's problem set to 0, i.e. it is assumed
-that the planner succumbs to the temptation to exploit the preset private expecatations
-in the first period (but not in later periods due to commitment).
+that the planner exploits its ability to surprise private agents in the first
+period of implementing Ramsey policy. This is the value of implementing
+optimal policy for the first time and committing not to re-optimize in the future.
+
 Because it entails computing at least a second order approximation, this
 computation is skipped with a message when the model is too large (more than 180
 state variables, including lagged Lagrange multipliers).
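Editorial note, not part of the patch series above: the hunks document @code{oo_.planner_objective_value} as a 1 by 2 vector produced by @code{ramsey_policy}. The following is a minimal, hypothetical @file{.mod} sketch of the kind of setup those values refer to; the model equation, parameter values, and variable names are illustrative assumptions and are not taken from the Dynare manual or from these patches.

```
// Hypothetical sketch only: a small linear-quadratic Ramsey problem of the
// kind the manual text describes. All names and numbers are illustrative.
var pai c;        // inflation and an activity-type variable
varexo u;         // cost-push shock
parameters beta theta;
beta  = 0.99;
theta = 0.2;

model;
  // Single private-sector constraint (Phillips-curve style); the planner's
  // implicit instrument closes the system in the Ramsey problem.
  pai = beta*pai(+1) + theta*c + u;
end;

shocks;
  var u; stderr 0.01;
end;

// Planner maximizes minus a quadratic loss in inflation and activity.
planner_objective -(pai^2 + 0.1*c^2);

// First-order approximation around the steady state with the Lagrange
// multipliers at their steady-state values, as documented above.
ramsey_policy(planner_discount=0.99, order=1);

// Per the manual text: oo_.planner_objective_value(1) stores the objective
// with initial multipliers at their steady-state values, and
// oo_.planner_objective_value(2) stores it with initial multipliers set to 0.
```

Where the two entries differ noticeably, the gap measures the one-time gain from surprising private agents in the first period of implementing the Ramsey policy, as the rephrased text in the second patch explains.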