
Alternative Combinations of Parameter Values


Introduction

The notebook “Micro-and-Macro-Implications-of-Very-Impatient-HHs” is an exercise that demonstrates the consequences of changing a key parameter of the cstwMPC model, the time preference factor $\beta$.

The REMARK SolvingMicroDSOPs reproduces the last figure in the SolvingMicroDSOPs lecture notes, which shows that there are classes of alternative combinations of $\beta$ and $\rho$ that fit the data almost as well as the exact ‘best fit’ combination.

Inspired by this comparison, this notebook asks you to examine the consequences for:

  • The consumption function

  • The distribution of wealth

of joint changes in $\beta$ and $\rho$.

One way you can do this is to construct a list of alternative values of $\rho$ (say, values that range upward from the default value of $\rho$, in increments of 0.2, all the way to $\rho = 5$). Then, for each of these values of $\rho$, you will find the value of $\beta$ that leads to the same value of target market resources, $\check{m}$.

As a reminder, $\check{m}$ is defined as the value of $m$ at which the optimal choice of $c$ makes the expected level of $m$ next period equal to its current value:

$$\mathbb{E}_{t}[m_{t+1}] = m_{t}$$
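One way to organize that search: fix the baseline $\check{m}$, then root-find the matching $\beta$ for each $\rho$. In the sketch below, `target_m` is a hypothetical toy stand-in (not HARK code) for actually solving the model and reading off its target market resources; only the structure of the grid-plus-root-finding loop carries over to the exercise.

```python
import numpy as np
from scipy.optimize import brentq

def target_m(beta, rho):
    # HYPOTHETICAL stand-in: in the exercise, replace this with code that
    # solves an IndShockConsumerType with (beta, rho) and returns its target
    # market resources m-check. This toy version is merely monotone in beta.
    return 1.0 + 50.0 * np.sqrt(rho) * beta**10

# Grid of rho values: the default 1.0 up to 5.0 in increments of 0.2
rho_values = np.arange(1.0, 5.0 + 1e-9, 0.2)

# Target market resources at the baseline parameters
m_target = target_m(0.9855583, 1.0)

# For each rho, find the beta that reproduces the baseline target m-check
beta_values = [
    brentq(lambda b: target_m(b, rho) - m_target, 0.80, 0.999)
    for rho in rho_values
]
```

With a target-raising force like risk aversion, holding $\check{m}$ fixed requires a lower $\beta$ as $\rho$ rises, which is the pattern the toy function reproduces.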

Other notes:

  • The cstwMPC model solves and simulates the problems of consumers with 7 different values of $\beta$

    • You should do your exercise using the middle value of $\beta$ from that exercise:

      • DiscFac_mean = 0.9855583

  • You are likely to run into the problem, as you experiment with parameter values, that you have asked HARK to solve a model that does not satisfy one of the impatience conditions required for the model to have a solution. Those conditions are explained intuitively in the TractableBufferStock model. The versions of the impatience conditions that apply to the `IndShockConsumerType` model can be found in the paper BufferStockTheory, table 2.

    • The conditions that need to be satisfied are:

      • The Growth Impatience Condition (GIC)

      • The Return Impatience Condition (RIC)

  • Please accumulate the list of solved consumers’ problems in a list called MyTypes

    • For compatibility with a further part of the assignment below
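As a quick screen before solving, the raw forms of these two conditions can be checked directly. This is a simplified sketch of the conditions in BufferStockTheory, table 2, ignoring the income-shock adjustments in the paper; note that with perpetual-youth mortality the effective discount factor is $\beta$ times the survival probability `LivPrb`.

```python
def check_impatience(beta, rho, R, PermGroFac=1.0):
    """Return (GIC holds, RIC holds) for the raw impatience conditions."""
    # Absolute patience factor: consumption growth rate of an unconstrained
    # perfect-foresight consumer
    thorn = (R * beta) ** (1.0 / rho)
    GIC = thorn / PermGroFac < 1.0  # Growth Impatience Condition
    RIC = thorn / R < 1.0           # Return Impatience Condition
    return GIC, RIC

# Example with the calibration used below: beta scaled by survival probability
LivPrb = 1.0 - 1.0 / 160.0
R = 1.01 / LivPrb
gic_ok, ric_ok = check_impatience(0.9855583 * LivPrb, rho=1.0, R=R)
```

A quick check like this can save you from asking HARK to solve a model that has no solution.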

# This cell merely imports and sets up some basic functions and packages
from HARK.utilities import get_lorenz_shares, get_percentiles
from tqdm import tqdm
import numpy as np
# Import IndShockConsumerType
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

# Define a dictionary with calibrated parameters
cstwMPC_calibrated_parameters = {
    "CRRA": 1.0,  # Coefficient of relative risk aversion
    "Rfree": [1.01 / (1.0 - 1.0 / 160.0)],  # Survival probability,
    # Permanent income growth factor (no perm growth),
    "PermGroFac": [1.000**0.25],
    "PermGroFacAgg": 1.0,
    "BoroCnstArt": 0.0,
    "CubicBool": False,
    "vFuncBool": False,
    "PermShkStd": [
        (0.01 * 4 / 11) ** 0.5
    ],  # Standard deviation of permanent shocks to income
    "PermShkCount": 5,  # Number of points in permanent income shock grid
    "TranShkStd": [
        (0.01 * 4) ** 0.5
    ],  # Standard deviation of transitory shocks to income,
    "TranShkCount": 5,  # Number of points in transitory income shock grid
    "UnempPrb": 0.07,  # Probability of unemployment while working
    "IncUnemp": 0.15,  # Unemployment benefit replacement rate
    "UnempPrbRet": 0.0,
    "IncUnempRet": 0.0,
    "aXtraMin": 0.00001,  # Minimum end-of-period assets in grid
    "aXtraMax": 40,  # Maximum end-of-period assets in grid
    "aXtraCount": 32,  # Number of points in assets grid
    "aXtraExtra": [None],
    "aXtraNestFac": 3,  # Number of times to 'exponentially nest' when constructing assets grid
    "LivPrb": [1.0 - 1.0 / 160.0],  # Survival probability
    "DiscFac": 0.97,  # Default intertemporal discount factor; dummy value, will be overwritten
    "cycles": 0,
    "T_cycle": 1,
    "T_retire": 0,
    # Number of periods to simulate (idiosyncratic shocks model, perpetual youth)
    "T_sim": 1200,
    "T_age": 400,
    "IndL": 10.0 / 9.0,  # Labor supply per individual (constant),
    "kLogInitMean": np.log(0.00001),
    "kLogInitStd": 0.0,
    "pLogInitMean": 0.0,
    "pLogInitStd": 0.0,
    "AgentCount": 10000,
}
# Construct a list of consumer types to solve; this lone IndShockConsumerType is just a placeholder
MyTypes = [IndShockConsumerType(verbose=0, **cstwMPC_calibrated_parameters)]

Simulating the Distribution of Wealth for Alternative Combinations

You should now have constructed a list of consumer types, all of whom have the same target level of market resources $\check{m}$.

But the fact that everyone has the same target $m$ does not mean that the distribution of $m$ will be the same for all of these consumer types.

In the code block below, fill in the contents of the loop to solve and simulate each agent type for many periods. To do this, you should invoke the methods `solve`, `initialize_sim`, and `simulate`, in that order. Simulating for 1200 quarters (300 years) will approximate the long-run distribution of wealth in the population.

for ThisType in tqdm(MyTypes):
    ThisType.solve()
    ThisType.initialize_sim()
    ThisType.simulate()
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:03<00:00,  3.03s/it]

Now that you have solved and simulated these consumers, make a plot that shows the relationship between your alternative values of $\rho$ and the mean level of assets.

# To help you out, we have given you the command needed to construct a list of the levels of assets for all consumers
aLvl_all = np.concatenate([ThisType.state_now["aLvl"] for ThisType in MyTypes])

# You should take the mean of aLvl for each consumer in MyTypes, divide it by the mean across all simulations
# and then plot the ratio of the values of mean(aLvl) for each group against the value of $\rho$
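One possible shape for that computation, sketched with synthetic stand-in data: in the notebook, each per-type asset array would come from `ThisType.state_now["aLvl"]` and each $\rho$ from the corresponding type's `CRRA` attribute.

```python
import numpy as np

# HYPOTHETICAL stand-ins for the simulated asset holdings of each type
rng = np.random.default_rng(0)
aLvl_by_type = [rng.lognormal(mean=0.1 * k, sigma=1.0, size=10_000) for k in range(5)]
rho_by_type = [1.0 + 0.2 * k for k in range(5)]

# Mean assets per type, relative to the mean across all simulated consumers
aLvl_means = np.array([a.mean() for a in aLvl_by_type])
aLvl_overall = np.concatenate(aLvl_by_type).mean()
ratio = aLvl_means / aLvl_overall

# Plot the ratio against rho, e.g.:
# import matplotlib.pyplot as plt
# plt.plot(rho_by_type, ratio, "-o"); plt.xlabel(r"$\rho$"); plt.show()
```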

Interpret

Here, you should attempt to give an intuitive explanation of the results you see in the figure you just constructed.

The Distribution of Wealth...

Your next exercise is to show how the distribution of wealth differs across the different parameter values.

# Finish filling in this function to calculate the Euclidean distance between the simulated and actual Lorenz curves.


def calcLorenzDistance(SomeTypes):
    """
    Calculates the Euclidean distance between the simulated and actual (from SCF data) Lorenz curves at the
    20th, 40th, 60th, and 80th percentiles.

    Parameters
    ----------
    SomeTypes : [AgentType]
        List of AgentTypes that have been solved and simulated.  Current levels of individual assets should
        be stored in the attribute aLvl.

    Returns
    -------
    lorenz_distance : float
        Euclidean distance (square root of sum of squared differences) between simulated and actual Lorenz curves.
    """
    # Define empirical Lorenz curve points
    lorenz_SCF = np.array([-0.00183091, 0.0104425, 0.0552605, 0.1751907])

    # Extract asset holdings from all consumer types
    aLvl_sim = np.concatenate([ThisType.state_now["aLvl"] for ThisType in SomeTypes])

    # Calculate simulated Lorenz curve points
    lorenz_sim = get_lorenz_shares(aLvl_sim, percentiles=[0.2, 0.4, 0.6, 0.8])

    # Calculate the Euclidean distance between the simulated and actual Lorenz curves
    lorenz_distance = np.sqrt(np.sum((lorenz_SCF - lorenz_sim) ** 2))

    # Return the Lorenz distance
    return lorenz_distance
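For intuition about what `get_lorenz_shares` computes, here is a minimal unweighted re-implementation (HARK's version also handles observation weights):

```python
import numpy as np

def lorenz_shares_simple(values, percentiles):
    """Share of the total held by the poorest fraction p, for each p."""
    v = np.sort(np.asarray(values, dtype=float))
    cum = np.cumsum(v) / v.sum()  # cumulative share of the total, poorest first
    idx = (np.asarray(percentiles) * len(v)).astype(int) - 1
    return cum[idx]
```

With a perfectly equal distribution the Lorenz curve is the 45-degree line, so the shares equal the percentiles; wealth concentration pushes the curve below that line.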

...and the Marginal Propensity to Consume

Now let’s look at the aggregate MPC. In the code block below, write a function that produces text output of the following form:

`The 35th percentile of the MPC is 0.15623`

Your function should take two inputs: a list of consumer types and an array of percentiles (numbers between 0 and 1). It should return no outputs, merely printing one line of text to the screen for each requested percentile. The model is calibrated at a quarterly frequency, but Carroll et al. report MPCs at an annual frequency. To convert, use the formula:

$$\kappa_{Y} \approx 1 - (1 - \kappa_{Q})^{4}$$
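For example, the quarterly MPC from the sample line above annualizes to roughly 0.49:

```python
kappa_Q = 0.15623                      # quarterly MPC from the example above
kappa_Y = 1.0 - (1.0 - kappa_Q) ** 4   # annualized MPC, about 0.493
```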

# Write a function to tell us about the distribution of the MPC in this code block, then test it!
# You will almost surely find it useful to use a for loop in this function.
def describeMPCdstn(SomeTypes, percentiles):
    MPC_sim = np.concatenate([ThisType.MPCnow for ThisType in SomeTypes])
    MPCpercentiles_quarterly = get_percentiles(MPC_sim, percentiles=percentiles)
    MPCpercentiles_annual = 1.0 - (1.0 - MPCpercentiles_quarterly) ** 4

    for j in range(len(percentiles)):
        print(
            "The "
            + str(100 * percentiles[j])
            + "th percentile of the MPC is "
            + str(MPCpercentiles_annual[j])
        )


describeMPCdstn(MyTypes, np.linspace(0.05, 0.95, 19))
The 5.0th percentile of the MPC is 0.3830226479018095
The 10.0th percentile of the MPC is 0.4190098031734306
The 15.0th percentile of the MPC is 0.45984701160581964
The 20.0th percentile of the MPC is 0.45984701160581964
The 25.0th percentile of the MPC is 0.45984701160581964
The 30.0th percentile of the MPC is 0.4979166414954148
The 35.0th percentile of the MPC is 0.4979166414954148
The 40.0th percentile of the MPC is 0.4979166414954148
The 44.99999999999999th percentile of the MPC is 0.5372418610399308
The 49.99999999999999th percentile of the MPC is 0.5372418610399308
The 54.99999999999999th percentile of the MPC is 0.5372418610399308
The 60.0th percentile of the MPC is 0.5821887061768969
The 65.0th percentile of the MPC is 0.5821887061768969
The 70.0th percentile of the MPC is 0.634537312685834
The 75.0th percentile of the MPC is 0.634537312685834
The 80.0th percentile of the MPC is 0.7267307307276032
The 85.0th percentile of the MPC is 0.7799255201452847
The 90.0th percentile of the MPC is 0.8208530902866055
The 95.0th percentile of the MPC is 0.8966083611183647

If You Get Here ...

If you have finished the above exercises quickly and have more time to spend on this assignment, for extra credit you can repeat the exercise but, instead of exploring the consequences of alternative values of relative risk aversion $\rho$, test the consequences of different values of the growth factor $\Gamma$ that lead to the same $\check{m}$.