Day 2: Tuesday
[Note: each of the readings below describes a dynamic economy, but does not necessarily study it with dynamic programming. In our lecture, we will consider both the general economic problem and the dynamic programming formulation.]
* Robert E. Hall, "Stochastic Implications of the Life-Cycle Permanent Income Hypothesis," Journal of Political Economy, vol. 86, no. 6 (December 1978), 971-987. PDF
* Fumio Hayashi, "Tobin's Marginal Q and Average Q: A Neoclassical Interpretation," Econometrica, vol. 50, no. 1 (January 1982), 213-224. PDF
Note that we will study a slightly simpler form of the dynamic program than the one in LS, in that the transition equation for the controlled state variable is non-stochastic. This allows a somewhat simpler form of various constructions, including the derivation and use of the envelope theorem.
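For concreteness, a generic statement of such a problem (the notation here is illustrative, not the specific model of the lecture) is

V(k) = \max_{k'} \{ u(F(k) - k') + \beta V(k') \},

where k is the controlled state and its non-stochastic transition is simply that next period's state equals the choice k'. The first-order condition is u'(F(k) - k') = \beta V'(k'), and the envelope theorem gives V'(k) = u'(F(k) - k') F'(k); combining the two and moving one period forward yields the Euler equation u'(c_t) = \beta u'(c_{t+1}) F'(k_{t+1}).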
Lecture 4 (PDF of slides)
Study Problems:
Problem 1: optimal intertemporal labor supply and consumption with
non-time-separable preferences
Problem 2: preferences and technology implying consumption is a constant
share of output; derivation using dynamic programming (both the Euler equation
and the value function)
Macroeconomists use dynamic programming in three different ways, illustrated in these problems and in the MACROLAB example. First, as in Problem 1, DP is used to derive restrictions on outcomes, for example those of a household choosing consumption and labor supply over time. These restrictions can be used for analytical or computational purposes. Second, as in Problem 2, DP can be used to determine decision rules and the value function explicitly, although this approach works out only in a small number of special cases (log utility, Cobb-Douglas production, and full depreciation will do the trick, as in this problem; there are a few other cases, including "power" utility with a linear production function, as suggested by results in Lectures 1 and 2). This problem also illustrates the convergence of finite-horizon decision rules and value functions to their infinite-horizon counterparts. Third, as in the MACROLAB, DP is used, together with a particular approximation technique, to determine numerical forms of decision rules and value functions.
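As a concrete illustration of the second use (a standard textbook case, offered only as a sketch and not necessarily the exact specification of Problem 2): with u(c) = \ln c, output k^\alpha, and full depreciation, guess V(k) = A + B \ln k in the Bellman equation

V(k) = \max_{k'} \{ \ln(k^\alpha - k') + \beta V(k') \}.

Matching coefficients gives B = \alpha/(1 - \alpha\beta) and the decision rules k' = \alpha\beta k^\alpha and c = (1 - \alpha\beta) k^\alpha, so consumption is a constant share of output. Iterating on the Bellman equation from V_0 = 0 produces the finite-horizon decision rules, whose saving shares converge to \alpha\beta as the horizon lengthens.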
The MACROLAB implicitly stresses three important aspects of dynamic programming, as it builds an optimal decision rule on a discrete grid of decisions (capital choices) for deterministic and stochastic models. First, DP may be used in settings where the problem is not differentiable, so that it is pointless to take first-order conditions as one does in Problem 1; indeed, such "discrete choice" models are standard in many areas of economics. Second, DP may be used to produce approximate decision rules in settings where there is no exact solution, or to evaluate the accuracy of alternative approximations. Finally, the second of the MACROLAB examples displays the introduction of uncertainty into the neoclassical growth model: DP makes it very easy to move conceptually (or computationally) from a deterministic to a stochastic model.
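A minimal sketch of this kind of grid-based computation, in the spirit of the m-files below but not the course code itself (the parameter values, grid bounds, and the choice of the log/Cobb-Douglas/full-depreciation case with its known closed form are assumptions made for illustration):

% Value function iteration on a discrete capital grid for the deterministic
% growth model with log utility, Cobb-Douglas production, and full
% depreciation, so the closed-form policy k' = alpha*beta*k^alpha is
% available as an accuracy check. Parameter values are illustrative.
alpha = 0.36;                             % capital share (assumed)
beta  = 0.95;                             % discount factor (assumed)
n     = 500;                              % number of grid points
kss   = (alpha*beta)^(1/(1 - alpha));     % steady-state capital
kgrid = linspace(0.2*kss, 1.8*kss, n)';   % capital grid around the steady state

c = kgrid.^alpha - kgrid';                % consumption for each (k, k') pair
u = -Inf(n, n);                           % -Inf rules out infeasible choices
u(c > 0) = log(c(c > 0));

V = zeros(n, 1);                          % initial guess for the value function
for it = 1:1000
    [TV, pol] = max(u + beta*V', [], 2);  % Bellman operator on the grid
    if max(abs(TV - V)) < 1e-8, V = TV; break; end
    V = TV;
end

kprime = kgrid(pol);                      % grid decision rule
kexact = alpha*beta*kgrid.^alpha;         % closed-form decision rule
plot(kgrid, kprime, kgrid, kexact, '--');
legend('grid policy', 'closed form'); xlabel('k'); ylabel('next-period k');

A stochastic version would replace V(k') with an expected value over a grid of productivity shocks, as in the stochastic growth model referenced below.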
m-file for deterministic growth model
m-file for stochastic growth model