20. Bianchi Overborrowing Model
GPU
This lecture was built using a machine with JAX installed and access to a GPU.
To run this lecture on Google Colab, click on the “play” icon top right, select Colab, and set the runtime environment to include a GPU.
To run this lecture on your own machine, you need to install Google JAX.
This lecture provides a JAX implementation of “Overborrowing and Systemic Externalities” [Bianchi, 2011] by Javier Bianchi.
In addition to what’s in Anaconda, this lecture will need the following libraries:
!pip install quantecon
Requirement already satisfied: quantecon in /opt/conda/envs/quantecon/lib/python3.11/site-packages (0.7.2)
Requirement already satisfied: numba>=0.49.0 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from quantecon) (0.59.1)
Requirement already satisfied: numpy>=1.17.0 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from quantecon) (1.26.4)
Requirement already satisfied: requests in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from quantecon) (2.32.2)
Requirement already satisfied: scipy>=1.5.0 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from quantecon) (1.13.1)
Requirement already satisfied: sympy in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from quantecon) (1.12)
Requirement already satisfied: llvmlite<0.43,>=0.42.0dev0 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from numba>=0.49.0->quantecon) (0.42.0)
Requirement already satisfied: charset-normalizer<4,>=2 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from requests->quantecon) (2.0.4)
Requirement already satisfied: idna<4,>=2.5 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from requests->quantecon) (3.7)
Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from requests->quantecon) (2.2.2)
Requirement already satisfied: certifi>=2017.4.17 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from requests->quantecon) (2024.6.2)
Requirement already satisfied: mpmath>=0.19 in /opt/conda/envs/quantecon/lib/python3.11/site-packages (from sympy->quantecon) (1.3.0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
We use the following imports.
import jax
import numba
import jax.numpy as jnp
import matplotlib.pyplot as plt
import numpy as np
import quantecon as qe
import scipy as sp
import seaborn
from time import time
20.1. Description of the model
The model seeks to explain sudden stops in emerging market economies, where painful financial disruption follows a period of sustained heavy borrowing.
A representative household chooses how much to borrow on international markets and how much to consume.
The household is credit constrained, with the constraint depending on both current income and the real exchange rate.
The model shows that households overborrow because they do not internalize the effect of borrowing on the credit constraint.
This overborrowing leaves them vulnerable to bad shocks in current income.
In essence, the model works as follows:
During good times, households borrow more and consume more.
Increased consumption pushes up the price of nontradables and hence the real exchange rate.
A rising real exchange rate loosens the credit constraint and encourages more borrowing.
This leads to excessive borrowing relative to a planner.
This overborrowing leads to vulnerability vis-à-vis bad shocks.
When a bad shock hits, borrowing is restricted.
Consumption now falls, pushing down the real exchange rate.
This fall in the exchange rate further tightens the borrowing constraint, amplifying the shock.
20.1.1. Decentralized equilibrium
The model contains a representative household that seeks to maximize an expected sum of discounted utility where
the flow utility function \(u\) is CRRA, with \(u(c) = c^{1-\sigma}/(1-\sigma)\) and
\(c = (\omega c_t^{-\eta} + (1-\omega) c_n^{-\eta})^{-1/\eta}\)
Here \(c_t\) (\(c_n\)) is consumption of tradables (nontradables).
The household maximizes subject to the budget constraint

\[
b' + c_t + p_n c_n = b (1 + r) + y_t + p_n y_n
\]

where
\(b\) is bond holdings (positive values denote assets)
primes denote next period values
the interest rate \(r\) is exogenous
\(p_n\) is the price of nontradables, while the price of tradables is normalized to 1
\(y_t\) and \(y_n\) are current tradable and nontradable income
The process for \(y := (y_t, y_n)\) is first-order Markov.
The household also faces the credit constraint

\[
b' \geq - \kappa (p_n y_n + y_t)
\]

Market clearing implies

\[
c_n = y_n
\quad \text{and} \quad
c_t = y_t + (1 + r) b - b'
\]

The household takes the aggregate time path for bonds as given by \(B' = H(B, y)\) and solves

\[
V(b, B, y) = \max_{c_t, \, c_n, \, b'}
\left\{
    u(c) + \beta \sum_{y'} V(b', B', y') \, Q(y, y')
\right\}
\]
subject to the budget and credit constraints.
A decentralized equilibrium is a law of motion \(H\) such that the implied savings policy \(b' = b'(b, B, y)\) verifies

\[
b'(B, B, y) = H(B, y)
\quad \text{for all } B, y
\]
20.1.2. Notation
Let

\[
w(c, y_n) := \frac{a^{1 - \sigma}}{1 - \sigma}
\quad \text{where} \quad
a := \left[ \omega c^{-\eta} + (1 - \omega) y_n^{-\eta} \right]^{-1/\eta}
\]

Using the market clearing conditions, we can write the household problem as

\[
V(b, B, y) = \max_{b'}
\left\{
    w\bigl( (1 + r) b + y_t - b', \; y_n \bigr)
    + \beta \sum_{y'} V(b', H(B, y), y') \, Q(y, y')
\right\}
\]

subject to

\[
- \kappa (p_n y_n + y_t) \leq b' \leq (1 + r) b + y_t
\]

where \(p_n\) is given by

\[
p_n = \frac{1 - \omega}{\omega} \left( \frac{C}{y_n} \right)^{\eta + 1}
\quad \text{with} \quad
C := (1 + r) B + y_t - H(B, y)
\]
20.1.3. Constrained planner
The constrained planner solves

\[
V(b, y) = \max_{b'}
\left\{
    w\bigl( (1 + r) b + y_t - b', \; y_n \bigr)
    + \beta \sum_{y'} V(b', y') \, Q(y, y')
\right\}
\]

subject to the market clearing conditions and the same constraint

\[
- \kappa (p_n y_n + y_t) \leq b' \leq (1 + r) b + y_t
\]

although the price of nontradables is now given by

\[
p_n = \frac{1 - \omega}{\omega} \left( \frac{c}{y_n} \right)^{\eta + 1}
\quad \text{with} \quad
c = (1 + r) b + y_t - b'
\]
We see that the planner internalizes the impact of the savings choice \(b'\) on the price of nontradables and hence the credit constraint.
20.2. Markov dynamics
We develop some functions for working with the VAR process

\[
\ln y' = A \ln y + u'
\]

where
\(y = (y_t, y_n) = \) (tradables, nontradables)
\(A\) is \(2 \times 2\)
\(u' \sim N(0, \Omega)\)
the log function is applied pointwise
We use the following estimated values, reported on p. 12 of Yamada (2023).
A = [[0.2425, 0.3297],
[-0.1984, 0.7576]]
Ω = [[0.0052, 0.002],
[0.002, 0.0059]]
We’ll work with the square root of \(\Omega\), which we call \(C\):
C = sp.linalg.sqrtm(Ω)
A = np.array(A)
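As a quick optional check (not part of the original lecture flow), we can confirm that C is a matrix square root of Ω:
# Optional sanity check: C @ C should recover Ω up to floating point error
print(np.allclose(C @ C, np.array(Ω)))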
Here’s a function to convert the VAR process to a Markov chain evolving on a rectilinear grid of points in \(\mathbb R^2\).
Under the hood, this function uses the QuantEcon function discrete_var.
def discretize_income_var(A=A, C=C, grid_size=4, seed=1234):
"""
Discretize the VAR model, returning
y_t_nodes, a grid of y_t values
y_n_nodes, a grid of y_n values
Q, a Markov operator
Let n = grid_size. The format is that Q is n x n x n x n, with
Q[i, j, i', j'] = one step transition prob from
(y_t_nodes[i], y_n_nodes[j]) to (y_t_nodes[i'], y_n_nodes[j'])
"""
n = grid_size
rng = np.random.default_rng(seed)
mc = qe.markov.discrete_var(A, C, (n, n),
sim_length=1_000_000,
std_devs=np.sqrt(3),
random_state=rng)
y_nodes, Q = np.exp(mc.state_values), mc.P
# The array y_nodes is currently an array listing all 2 x 1 state pairs
# (y_t, y_n), so that y_nodes[i] is the i-th such pair, while Q[l, m]
# is the probability of transitioning from state l to state m in one step.
# We switch the representation to the one described in the docstring.
y_t_nodes = [y_nodes[n*i, 0] for i in range(n)]
    y_n_nodes = y_nodes[:n, 1]
Q = np.reshape(Q, (n, n, n, n))
return y_t_nodes, y_n_nodes, Q
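As an illustration (a hypothetical usage example with throwaway variable names, not needed for what follows), we can call the function and inspect the shapes of the grids and the Markov kernel:
nodes_t, nodes_n, Q_check = discretize_income_var()
print(len(nodes_t), len(nodes_n))                  # 4 and 4 with the default grid_size
print(Q_check.shape)                               # (4, 4, 4, 4)
print(np.allclose(Q_check.sum(axis=(2, 3)), 1.0))  # transition probabilities sum to one at each state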
Here’s code for sampling from the Markov chain.
def generate_discrete_var(A=A, C=C, grid_size=4, seed=1234,
ts_length=1_000_000,
indices=False):
"""
Generate a time series from the discretized model, returning y_t_series and
y_n_series. If `indices=True`, then these series are returned as grid
indices.
"""
n = grid_size
rng = np.random.default_rng(seed)
mc = qe.markov.discrete_var(A, C, (n, n),
sim_length=1_000_000,
std_devs=np.sqrt(3),
random_state=rng)
if indices:
y_series = mc.simulate_indices(ts_length=ts_length)
        y_t_series, y_n_series = y_series // grid_size, y_series % grid_size
else:
y_series = np.exp(mc.simulate(ts_length=ts_length))
y_t_series, y_n_series = y_series[:, 0], y_series[:, 1]
return y_t_series, y_n_series
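As a small optional illustration (illustrative names only, not needed below), here is what a short sample path looks like in index mode:
yt_idx, yn_idx = generate_discrete_var(ts_length=10, indices=True)
print(yt_idx)   # integer grid indices in {0, ..., grid_size - 1}
print(yn_idx)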
Here’s code for generating the original VAR process, which can be used for testing.
@numba.jit
def generate_var_process(A=A, C=C, ts_length=1_000_000):
"""
Generate the original VAR process.
"""
y_series = np.empty((ts_length, 2))
y_series[0, :] = np.zeros(2)
for t in range(ts_length-1):
y_series[t+1, :] = A @ y_series[t, :] + C @ np.random.randn(2)
y_t_series = np.exp(y_series[:, 0])
y_n_series = np.exp(y_series[:, 1])
return y_t_series, y_n_series
Let’s check some statistics for both the original and the discretized processes.
def corr(x, y):
m_x, m_y = x.mean(), y.mean()
s_xy = np.sqrt(np.sum((x - m_x)**2) * np.sum((y - m_y)**2))
return np.sum((x - m_x) * (y - m_y)) / (s_xy)
def print_stats(y_t_series, y_n_series):
print(f"Std dev of y_t is {y_t_series.std()}")
print(f"Std dev of y_n is {y_n_series.std()}")
print(f"corr(y_t, y_n) is {corr(y_t_series, y_n_series)}")
print(f"auto_corr(y_t) is {corr(y_t_series[:-1], y_t_series[1:])}")
print(f"auto_corr(y_n) is {corr(y_n_series[:-1], y_n_series[1:])}")
print("\n")
print("Statistics for original process.\n")
print_stats(*generate_var_process())
Statistics for original process.
Std dev of y_t is 0.08786665450592301
Std dev of y_n is 0.10553816506894553
corr(y_t, y_n) is 0.540315803170419
auto_corr(y_t) is 0.4558023516128351
auto_corr(y_n) is 0.6648046319195241
print("Statistics for discretized process.\n")
print_stats(*generate_discrete_var())
Statistics for discretized process.
Std dev of y_t is 0.08753613842184708
Std dev of y_n is 0.10546398305554536
corr(y_t, y_n) is 0.4763676161427541
auto_corr(y_t) is 0.4013004558421011
auto_corr(y_n) is 0.5896270019961042
20.3. Overborrowing Model
In what follows
y = (y_t, y_n) is the exogenous state process
Individual states and actions are
c = consumption of tradables (c rather than c_t)
b = household savings (bond holdings)
bp = household savings decision
Aggregate quantities and prices are
p = price of nontradables (p rather than p_n)
B = aggregate savings (bond holdings)
C = aggregate consumption
H = current guess of update rule as an array of the form \(H(B, y)\)
Here’s code to create three tuples that store model data relevant for computation.
def create_overborrowing_model(
σ=2, # CRRA utility parameter
η=(1/0.83)-1, # Elasticity = 0.83, η = 0.2048
β=0.91, # Discount factor
ω=0.31, # Aggregation constant
κ=0.3235, # Constraint parameter
r=0.04, # Interest rate
b_size=400, # Bond grid size
b_grid_min=-1.02, # Bond grid min
b_grid_max=-0.2 # Bond grid max (originally -0.6 to match fig)
):
"""
Creates an instance of the overborrowing model using
* default parameter values from Bianchi (2011)
* Markov dynamics from Yamada (2023)
The Markov kernel Q has the interpretation
Q[i, j, ip, jp] = one step prob of moving
(y_t[i], y_n[j]) -> (y_t[ip], y_n[jp])
"""
# Read in Markov data and shift to JAX arrays
data = discretize_income_var()
y_t_nodes, y_n_nodes, Q = tuple(map(jnp.array, data))
# Set up grid for bond holdings
b_grid = jnp.linspace(b_grid_min, b_grid_max, b_size)
# Pack and return
parameters = σ, η, β, ω, κ, r
sizes = b_size, len(y_t_nodes)
arrays = b_grid, y_t_nodes, y_n_nodes, Q
return parameters, sizes, arrays
Here’s flow utility.
@jax.jit
def w(parameters, c, y_n):
"""
Current utility when c_t = c and c_n = y_n.
a = [ω c^(- η) + (1 - ω) y_n^(- η)]^(-1/η)
w(c, y_n) := a^(1 - σ) / (1 - σ)
"""
σ, η, β, ω, κ, r = parameters
a = (ω * c**(-η) + (1 - ω) * y_n**(-η))**(-1/η)
return a**(1 - σ) / (1 - σ)
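As a small illustrative check (using the default parameter values that appear in create_overborrowing_model above, together with arbitrary consumption values), we can compare w against a direct evaluation of the CRRA-CES formula:
# Illustrative spot check of w at a single point
params = 2, (1 / 0.83) - 1, 0.91, 0.31, 0.3235, 0.04   # σ, η, β, ω, κ, r
σ_, η_, _, ω_, _, _ = params
c_test, yn_test = 0.5, 1.0                             # arbitrary test values
a_test = (ω_ * c_test**(-η_) + (1 - ω_) * yn_test**(-η_))**(-1/η_)
print(float(w(params, c_test, yn_test)), a_test**(1 - σ_) / (1 - σ_))   # the two numbers should agree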
We need code to generate an initial guess of \(H\).
def generate_initial_H(parameters, sizes, arrays, at_constraint=False):
"""
Compute an initial guess for H. Repeat the indices for b_grid over y_t and
y_n axes.
"""
b_size, y_size = sizes
b_indices = jnp.arange(b_size)
O = jnp.ones((b_size, y_size, y_size), dtype=int)
return O * jnp.reshape(b_indices, (b_size, 1, 1))
generate_initial_H = jax.jit(generate_initial_H, static_argnums=(1,))
We need to construct the Bellman operator for the household.
Our first function returns the (unmaximized) RHS of the Bellman equation.
@jax.jit
def T_generator(v, H, parameters, arrays, i_b, i_B, i_y_t, i_y_n, i_bp):
"""
Given current state (b, B, y_t, y_n) with indices (i_b, i_B, i_y_t, i_y_n),
compute the unmaximized right hand side (RHS) of the Bellman equation as a
function of the next period choice bp = b', with index i_bp.
"""
# Unpack
σ, η, β, ω, κ, r = parameters
b_grid, y_t_nodes, y_n_nodes, Q = arrays
# Compute next period aggregate bonds given H
i_Bp = H[i_B, i_y_t, i_y_n]
# Evaluate states and actions at indices
B, Bp, b, bp = b_grid[i_B], b_grid[i_Bp], b_grid[i_b], b_grid[i_bp]
y_t = y_t_nodes[i_y_t]
y_n = y_n_nodes[i_y_n]
# Compute price of nontradables using aggregates
C = (1 + r) * B + y_t - Bp
p = ((1 - ω) / ω) * (C / y_n)**(η + 1)
# Compute household flow utility
c = (1 + r) * b + y_t - bp
utility = w(parameters, c, y_n)
# Compute expected value Σ_{y'} v(b', B', y') Q(y, y')
EV = jnp.sum(v[i_bp, i_Bp, :, :] * Q[i_y_t, i_y_n, :, :])
# Set up constraints
credit_constraint_holds = - κ * (p * y_n + y_t) <= bp
budget_constraint_holds = bp <= (1 + r) * b + y_t
constraints_hold = jnp.logical_and(credit_constraint_holds,
budget_constraint_holds)
# Compute and return
return jnp.where(constraints_hold, utility + β * EV, -jnp.inf)
Let’s now vectorize and jit-compile this map.
# Vectorize over the control bp and all the current states
T_vec_1 = jax.vmap(T_generator,
in_axes=(None, None, None, None, None, None, None, None, 0))
T_vec_2 = jax.vmap(T_vec_1,
in_axes=(None, None, None, None, None, None, None, 0, None))
T_vec_3 = jax.vmap(T_vec_2,
in_axes=(None, None, None, None, None, None, 0, None, None))
T_vec_4 = jax.vmap(T_vec_3,
in_axes=(None, None, None, None, None, 0, None, None, None))
T_vectorized = jax.vmap(T_vec_4,
in_axes=(None, None, None, None, 0, None, None, None, None))
Now we can set up the Bellman operator by maximizing over the choice variable \(b'\).
def T(parameters, sizes, arrays, v, H):
"""
Evaluate the RHS of the Bellman equation at all states and actions and then
maximize with respect to actions.
Return
* Tv as an array of shape (b_size, b_size, y_size, y_size).
"""
b_size, y_size = sizes
b_grid, y_t_nodes, y_n_nodes, Q = arrays
b_indices, y_indices = jnp.arange(b_size), jnp.arange(y_size)
val = T_vectorized(v, H, parameters, arrays,
b_indices, b_indices, y_indices, y_indices, b_indices)
# Maximize over bp
return jnp.max(val, axis=-1)
T = jax.jit(T, static_argnums=(1,))
Here’s a function that computes a greedy policy (best response to \(v\)).
def get_greedy(parameters, sizes, arrays, v, H):
"""
Compute the greedy policy for the household, which maximizes the right hand
side of the Bellman equation given v and H. The greedy policy is recorded
as an array giving the index i in b_grid such that b_grid[i] is the optimal
choice, for every state.
Return
* bp_policy as an array of shape (b_size, b_size, y_size, y_size).
"""
b_size, y_size = sizes
b_grid, y_t_nodes, y_n_nodes, Q = arrays
b_indices, y_indices = jnp.arange(b_size), jnp.arange(y_size)
val = T_vectorized(v, H, parameters, arrays,
b_indices, b_indices, y_indices, y_indices, b_indices)
return jnp.argmax(val, axis=-1)
get_greedy = jax.jit(get_greedy, static_argnums=(1,))
Here’s some code for value function iteration (VFI).
def vfi(T, v_init, max_iter=10_000, tol=1e-5):
"""
Use successive approximation to compute the fixed point of T, starting from
v_init.
"""
v = v_init
def cond_fun(state):
error, i, v = state
return (error > tol) & (i < max_iter)
def body_fun(state):
error, i, v = state
v_new = T(v)
error = jnp.max(jnp.abs(v_new - v))
return error, i+1, v_new
error, i, v_new = jax.lax.while_loop(cond_fun, body_fun,
(tol+1, 0, v))
return v_new, i
vfi = jax.jit(vfi, static_argnums=(0,))
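To see how vfi behaves in isolation (a toy example unrelated to the model), we can compute the fixed point of a simple contraction:
# Toy contraction: T(v) = 0.5 v + 1 has unique fixed point v* = 2
toy_T = lambda v: 0.5 * v + 1.0
v_star, num_iter = vfi(toy_T, jnp.zeros(3))
print(v_star, num_iter)   # approximately [2. 2. 2.] after around 18 iterations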
This is how we update our guess of \(H\), using the current policy \(b'\) and a damped fixed point iteration scheme.
def update_H(parameters, sizes, arrays, H, α):
"""
Update guess of the aggregate update rule.
"""
# Set up
b_size, y_size = sizes
b_grid, y_t_nodes, y_n_nodes, Q = arrays
b_indices = jnp.arange(b_size)
# Compute household response to current guess H
v_init = jnp.ones((b_size, b_size, y_size, y_size))
_T = lambda v: T(parameters, sizes, arrays, v, H)
v, vfi_num_iter = vfi(_T, v_init)
bp_policy = get_greedy(parameters, sizes, arrays, v, H)
# Switch policy arrays to values rather than indices
H_vals = b_grid[H]
bp_vals = b_grid[bp_policy]
# Update guess
new_H_vals = α * bp_vals[b_indices, b_indices, :, :] + (1 - α) * H_vals
# Switch back to indices
new_H = jnp.searchsorted(b_grid, new_H_vals)
return new_H, vfi_num_iter
update_H = jax.jit(update_H, static_argnums=(1,))
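In terms of bond values rather than grid indices, the damped update implemented above is

\[
H_{\text{new}}(B, y) = \alpha \, b'(B, B, y) + (1 - \alpha) \, H(B, y)
\]

where \(b'(b, B, y)\) is the household's greedy policy given \(H\); the result is then mapped back onto the bond grid with searchsorted.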
Now we can write code to compute an equilibrium law of motion \(H\).
def compute_equilibrium(parameters, sizes, arrays,
α=0.5, tol=0.005, max_iter=500):
"""
Compute the equilibrium law of motion.
"""
    b_grid, y_t_nodes, y_n_nodes, Q = arrays
    H = generate_initial_H(parameters, sizes, arrays)
error = tol + 1
i = 0
msgs = []
while error > tol and i < max_iter:
H_new, vfi_num_iter = update_H(parameters, sizes, arrays, H, α)
msgs.append(f"VFI terminated after {vfi_num_iter} iterations.")
error = jnp.max(jnp.abs(b_grid[H] - b_grid[H_new]))
msgs.append(f"Updated H at iteration {i} with error {error}.")
H = H_new
i += 1
if i == max_iter:
msgs.append("Warning: Equilibrium search iteration hit upper bound.")
print("\n".join(msgs))
return H
20.4. Planner problem
Now we switch to the planner problem.
Our first function returns the (unmaximized) RHS of the Bellman equation.
@jax.jit
def planner_T_generator(v, parameters, arrays, i_b, i_y_t, i_y_n, i_bp):
"""
Given current state (b, y_t, y_n) with indices (i_b, i_y_t, i_y_n),
compute the unmaximized right hand side (RHS) of the Bellman equation as a
function of the next period choice bp = b'.
"""
σ, η, β, ω, κ, r = parameters
b_grid, y_t_nodes, y_n_nodes, Q = arrays
y_t = y_t_nodes[i_y_t]
y_n = y_n_nodes[i_y_n]
b, bp = b_grid[i_b], b_grid[i_bp]
# Compute price of nontradables using aggregates
c = (1 + r) * b + y_t - bp
p = ((1 - ω) / ω) * (c / y_n)**(η + 1)
# Compute household flow utility
utility = w(parameters, c, y_n)
# Compute expected value (continuation)
EV = jnp.sum(v[i_bp, :, :] * Q[i_y_t, i_y_n, :, :])
# Set up constraints and evaluate
credit_constraint_holds = - κ * (p * y_n + y_t) <= bp
budget_constraint_holds = bp <= (1 + r) * b + y_t
return jnp.where(jnp.logical_and(credit_constraint_holds,
budget_constraint_holds),
utility + β * EV,
-jnp.inf)
# Vectorize over the control bp and all the current states
planner_T_vec_1 = jax.vmap(planner_T_generator,
in_axes=(None, None, None, None, None, None, 0))
planner_T_vec_2 = jax.vmap(planner_T_vec_1,
in_axes=(None, None, None, None, None, 0, None))
planner_T_vec_3 = jax.vmap(planner_T_vec_2,
in_axes=(None, None, None, None, 0, None, None))
planner_T_vectorized = jax.vmap(planner_T_vec_3,
in_axes=(None, None, None, 0, None, None, None))
Now we construct the Bellman operator.
def planner_T(parameters, sizes, arrays, v):
b_size, y_size = sizes
b_grid, y_t_nodes, y_n_nodes, Q = arrays
b_indices, y_indices = jnp.arange(b_size), jnp.arange(y_size)
# Evaluate RHS of Bellman equation at all states and actions
val = planner_T_vectorized(v, parameters, arrays,
b_indices, y_indices, y_indices, b_indices)
# Maximize over bp
return jnp.max(val, axis=-1)
planner_T = jax.jit(planner_T, static_argnums=(1,))
Here’s a function that computes a greedy policy (best response to \(v\)).
def planner_get_greedy(parameters, sizes, arrays, v):
b_size, y_size = sizes
b_grid, y_t_nodes, y_n_nodes, Q = arrays
b_indices, y_indices = jnp.arange(b_size), jnp.arange(y_size)
# Evaluate RHS of Bellman equation at all states and actions
val = planner_T_vectorized(v, parameters, arrays,
b_indices, y_indices, y_indices, b_indices)
# Maximize over bp
return jnp.argmax(val, axis=-1)
planner_get_greedy = jax.jit(planner_get_greedy, static_argnums=(1,))
Computing the planner solution is straightforward value function iteration:
def compute_planner_solution(model):
"""
Compute the constrained planner solution.
"""
parameters, sizes, arrays = model
b_size, y_size = sizes
b_indices = jnp.arange(b_size)
v_init = jnp.ones((b_size, y_size, y_size))
_T = lambda v: planner_T(parameters, sizes, arrays, v)
    # Solve the planner's Bellman equation by value function iteration
v, vfi_num_iter = vfi(_T, v_init)
bp_policy = planner_get_greedy(parameters, sizes, arrays, v)
return v, bp_policy, vfi_num_iter
20.5. Numerical solution
Let’s now solve the model and compare the decentralized and planner solutions.
20.5.1. Generating solutions
Here we compute the two solutions.
model = create_overborrowing_model()
parameters, sizes, arrays = model
b_size, y_size = sizes
b_grid, y_t_nodes, y_n_nodes, Q = arrays
print("Computing decentralized solution.")
start = time()
H_eq = compute_equilibrium(parameters, sizes, arrays)
diff_d_with_compile = time() - start
print(f"Computed decentralized equilibrium in {diff_d_with_compile} seconds")
Computing decentralized solution.
VFI terminated after 128 iterations.
Updated H at iteration 0 with error 0.15619052946567535.
VFI terminated after 128 iterations.
Updated H at iteration 1 with error 0.08837097883224487.
VFI terminated after 129 iterations.
Updated H at iteration 2 with error 0.0780951976776123.
VFI terminated after 128 iterations.
Updated H at iteration 3 with error 0.06987470388412476.
VFI terminated after 128 iterations.
Updated H at iteration 4 with error 0.05959904193878174.
VFI terminated after 128 iterations.
Updated H at iteration 5 with error 0.05137842893600464.
VFI terminated after 128 iterations.
Updated H at iteration 6 with error 0.04315793514251709.
VFI terminated after 128 iterations.
Updated H at iteration 7 with error 0.03904759883880615.
VFI terminated after 128 iterations.
Updated H at iteration 8 with error 0.0328822135925293.
VFI terminated after 129 iterations.
Updated H at iteration 9 with error 0.02877199649810791.
VFI terminated after 128 iterations.
Updated H at iteration 10 with error 0.024661660194396973.
VFI terminated after 128 iterations.
Updated H at iteration 11 with error 0.020551443099975586.
VFI terminated after 128 iterations.
Updated H at iteration 12 with error 0.018496274948120117.
VFI terminated after 128 iterations.
Updated H at iteration 13 with error 0.01438605785369873.
VFI terminated after 128 iterations.
Updated H at iteration 14 with error 0.012330859899520874.
VFI terminated after 129 iterations.
Updated H at iteration 15 with error 0.010275721549987793.
VFI terminated after 128 iterations.
Updated H at iteration 16 with error 0.010275721549987793.
VFI terminated after 128 iterations.
Updated H at iteration 17 with error 0.0082206130027771.
VFI terminated after 128 iterations.
Updated H at iteration 18 with error 0.0082206130027771.
VFI terminated after 128 iterations.
Updated H at iteration 19 with error 0.0061654746532440186.
VFI terminated after 128 iterations.
Updated H at iteration 20 with error 0.006165444850921631.
VFI terminated after 128 iterations.
Updated H at iteration 21 with error 0.006165444850921631.
VFI terminated after 128 iterations.
Updated H at iteration 22 with error 0.0041103363037109375.
Computed decentralized equilibrium in 54.89348554611206 seconds
We run it again to get rid of compile time.
start = time()
H_eq = compute_equilibrium(parameters, sizes, arrays)
diff_d_without_compile = time() - start
print(f"Computed decentralized equilibrium in {diff_d_without_compile} seconds")
VFI terminated after 128 iterations.
Updated H at iteration 0 with error 0.15619052946567535.
VFI terminated after 128 iterations.
Updated H at iteration 1 with error 0.08837097883224487.
VFI terminated after 129 iterations.
Updated H at iteration 2 with error 0.0780951976776123.
VFI terminated after 128 iterations.
Updated H at iteration 3 with error 0.06987470388412476.
VFI terminated after 128 iterations.
Updated H at iteration 4 with error 0.05959904193878174.
VFI terminated after 128 iterations.
Updated H at iteration 5 with error 0.05137842893600464.
VFI terminated after 128 iterations.
Updated H at iteration 6 with error 0.04315793514251709.
VFI terminated after 128 iterations.
Updated H at iteration 7 with error 0.03904759883880615.
VFI terminated after 128 iterations.
Updated H at iteration 8 with error 0.0328822135925293.
VFI terminated after 129 iterations.
Updated H at iteration 9 with error 0.02877199649810791.
VFI terminated after 128 iterations.
Updated H at iteration 10 with error 0.024661660194396973.
VFI terminated after 128 iterations.
Updated H at iteration 11 with error 0.020551443099975586.
VFI terminated after 128 iterations.
Updated H at iteration 12 with error 0.018496274948120117.
VFI terminated after 128 iterations.
Updated H at iteration 13 with error 0.01438605785369873.
VFI terminated after 128 iterations.
Updated H at iteration 14 with error 0.012330859899520874.
VFI terminated after 129 iterations.
Updated H at iteration 15 with error 0.010275721549987793.
VFI terminated after 128 iterations.
Updated H at iteration 16 with error 0.010275721549987793.
VFI terminated after 128 iterations.
Updated H at iteration 17 with error 0.0082206130027771.
VFI terminated after 128 iterations.
Updated H at iteration 18 with error 0.0082206130027771.
VFI terminated after 128 iterations.
Updated H at iteration 19 with error 0.0061654746532440186.
VFI terminated after 128 iterations.
Updated H at iteration 20 with error 0.006165444850921631.
VFI terminated after 128 iterations.
Updated H at iteration 21 with error 0.006165444850921631.
VFI terminated after 128 iterations.
Updated H at iteration 22 with error 0.0041103363037109375.
Computed decentralized equilibrium in 54.168943643569946 seconds
print("Computing planner's solution.")
start = time()
planner_v, H_plan, vfi_num_iter = compute_planner_solution(model)
diff_p_with_compile = time() - start
print(f"Computed planner's equilibrium in {diff_p_with_compile} seconds")
Computing planner's solution.
Computed planner's equilibrium in 0.8798391819000244 seconds
We run it again to eliminate compile time.
start = time()
planner_v, H_plan, vfi_num_iter = compute_planner_solution(model)
diff_p_without_compile = time() - start
print(f"Computed planner's equilibrium in {diff_p_without_compile} seconds")
Computed planner's equilibrium in 0.22881293296813965 seconds
20.5.2. Policy plots
We produce a policy plot that is similar to Figure 1 in [Bianchi, 2011].
i, j = 1, 3
y_t, y_n = y_t_nodes[i], y_n_nodes[j]
fig, ax = plt.subplots()
ax.plot(b_grid, b_grid[H_eq[:, i, j]], label='decentralized equilibrium')
ax.plot(b_grid, b_grid[H_plan[:, i, j]], ls='--', label='social planner')
ax.plot(b_grid, b_grid, color='black', lw=0.5)
ax.set_ylim((-1.0, -0.6))
ax.set_xlim((-1.0, -0.6))
ax.set_xlabel("current bond holdings")
ax.set_ylabel("next period bond holdings")
ax.set_title(f"policy when $y_t = {y_t:.2}$ and $y_n = {y_n:.2}$")
ax.legend()
plt.show()
The match is not exact because we use a different estimate of the Markov dynamics for income.
Nonetheless, it is qualitatively similar.
20.6. Exercise
Your task is to examine the ergodic distribution of borrowing in the decentralized and planner equilibria.
We recommend that you use simulation and a kernel density estimate.
Here’s a function for generating the borrowing sequence.
We use Numba because we want to compile a long for loop.
@numba.jit
def generate_borrowing_sequence(H, y_t_series, y_n_series):
"""
Generate the borrowing sequence B' = H(B, y_t, y_n).
* H is a policy array
* y_t_series and y_n_series are simulated time paths
Both y_t_series and y_n_series are stored as indices rather than values.
"""
B = np.empty_like(y_t_series)
B[0] = 0
for t in range(len(y_t_series)-1):
B[t+1] = H[B[t], y_t_series[t], y_n_series[t]]
return B
Note that you will have to convert JAX arrays into NumPy arrays if you want to use this function.
From here you will need to
generate a time path for income \(y = (y_t, y_n)\) using one of the functions provided above
use the function generate_borrowing_sequence plus H_eq and H_plan to calculate bond holdings for the planner and the decentralized equilibrium
produce a kernel density plot for each of these data sets
If you are successful, your plot should look something like Fig 2 of [Bianchi, 2011] — although not exactly the same, due to the alternative specification of the Markov process.
To generate a kernel density plot, we recommend that you use kdeplot from the package seaborn, which is included in Anaconda.
Solution to Exercise 20.1
sim_length = 100_000
y_t_series, y_n_series = generate_discrete_var(ts_length=sim_length,
indices=True)
We convert JAX arrays to NumPy arrays in order to use Numba.
y_t_series, y_n_series, H_eq, H_plan = \
[np.array(v) for v in (y_t_series, y_n_series, H_eq, H_plan)]
Now let’s compute borrowing for the decentralized equilibrium and the planner.
B_eq = generate_borrowing_sequence(H_eq, y_t_series, y_n_series)
eq_b_sequence = b_grid[B_eq]
B_plan = generate_borrowing_sequence(H_plan, y_t_series, y_n_series)
plan_b_sequence = b_grid[B_plan]
We suppress some annoying warnings produced by the current version of seaborn.
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
Now let’s plot the distributions using a kernel density estimator.
fig, ax = plt.subplots()
seaborn.kdeplot(eq_b_sequence, ax=ax, label='decentralized')
seaborn.kdeplot(plan_b_sequence, ax=ax, label='planner')
ax.legend()
ax.set_xlim((-1, -0.5))
ax.set_xlabel("probability")
ax.set_ylabel("bond holdings")
plt.show()
This corresponds to Figure 2 in [Bianchi, 2011].
Again, the match is not exact but it is qualitatively similar.
Asset holdings have a longer left tail under the decentralized equilibrium, leaving the economy more vulnerable to bad shocks.