Econ-ARK for Central Banks

Christopher D. Carroll

Johns Hopkins University

Econ-ARK

October 17, 2024

Microfoundations in a Nutshell (1/2)

Prehistory

  • Modigliani and Brumberg (1954), Friedman (1957), Diamond (1965)
  • Perfect Foresight models: 1960s-70s

Bewley (1977)

  • Formalization of Friedman’s PIH
  • Rigorous treatment of uncertainty and liquidity constraints
  • Entirely qualitative/analytical

Microfoundations in a Nutshell (2/2)

Early 1990s: Numerical computation of the steady-state (SS) distribution of wealth

  • Life Cycle / OLG:
    • Zeldes (1989); Hubbard, Skinner, and Zeldes (1994, 1995);
    • Huggett (1996); Carroll (1997)
  • Infinite Horizon
    • Deaton (1991); Carroll (1992); Aiyagari (1994)
  • Aggregate dynamics? Hopeless:
    • requires predicting evolution of entire distribution

Dynamics

Krusell-Smith (1998):

  • mean of distribution \(\bar{k}\) is good enough!
  • still excruciatingly slow

Reiter (2010):

  • SS Micro and Dyn Macro can be solved independently
    • Why? Idiosyncratic shocks are ~100x larger than aggregate shocks
    • So aggregate shocks cause only a ‘small’ perturbation of the distribution (see the sketch after this list)
  • \(\Rightarrow\) Reiter Singularity: 2014-2018
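
A one-line sketch of the logic (schematic notation, not Reiter’s): split the cross-sectional distribution into its steady-state value and a deviation driven only by aggregate shocks,

\[
D_t = D^{\mathrm{SS}} + \hat{D}_t,
\]

where \(\hat{D}_t\) is first-order small. The hard nonlinear micro problem is solved once, at \(D^{\mathrm{SS}}\), and the aggregate dynamics of \(\hat{D}_t\) can then be handled by standard perturbation methods.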

Where Does Econ-ARK Fit? (1/2)

Rich Set of Tools for SS Micro …

Where Does Econ-ARK Fit? (2/2)

…Easy to Connect to the SSJ (Sequence-Space Jacobian) Toolkit

  1. Econ-ARK/HARK:
    1. solve for the micro steady state
    2. compute sequence-space Jacobians
  2. Feed the results to the SSJ toolkit (a minimal sketch follows)
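
A minimal sketch of the handoff in Python. `IndShockConsumerType` and `.solve()` are standard HARK; the `calc_jacobian` call and its signature reflect recent HARK releases but should be treated as an assumption to check against your installed version, and the full SSJ wiring is omitted.

```python
# Sketch of the HARK -> SSJ two-step workflow described above.
from HARK.ConsumptionSaving.ConsIndShockModel import IndShockConsumerType

# Step 1a: solve the micro steady state (consumption policy functions).
agent = IndShockConsumerType()   # default buffer-stock calibration
agent.solve()
cFunc = agent.solution[0].cFunc  # steady-state consumption policy

# Step 1b: compute sequence-space Jacobians of aggregate consumption and
# assets with respect to an interest-rate path. NOTE: calc_jacobian exists
# on HANK-oriented consumer types in recent HARK versions; its exact home
# and signature are an assumption here -- check your release.
T = 300  # truncation horizon: each Jacobian is a T-by-T matrix
C_jac, A_jac = agent.calc_jacobian("Rfree", T)

# Step 2: feed C_jac / A_jac to the SSJ toolkit (the sequence_jacobian
# package of Auclert et al.), which treats the household block as a linear
# map in sequence space when solving general-equilibrium dynamics.
```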

But Wait, There’s More: Indirect Inference

Life Cycle Model (Gourinchas-Parker; Cagetti)

But Wait, There’s More: REMARKS

  • An easy-to-use standard for guaranteed replicability

    • on any computer (Mac, Win, Linux)
    • using any open-source language: Python, R, Julia, …
    • about 26 such languages are supported
  • Builds on industry standards: Docker, cff, conda, pip, …

  • Aim:

    • Set a standard for journals
      • Now every journal has different requirements
    • Results should be replicable on submission
      • Editors, referees can “kick the tires”
      • Readers can easily stand on your giant shoulders
      • Central banks can exchange and compare models

Where Are We Going? ‘DYNARK’:

  • Model specification tools for any Bellman Problem (mockup; a hypothetical sketch appears after this list)
  • Three layers:
    1. Abstract mathematical description
      • The symbolic version that appears in the text
      • Describes the “Platonic Ideal” of the model
      • What you would solve with \(\infty\) computing power
    2. Numerical/approximation details
      • Metaparameters, like the number of gridpoints used in approximations
      • Restrictions on the ranges of parameters (e.g., \(1 < \text{CRRA} < 10\))
    3. Specify your claims:
      • Concrete, testable assertions about results
      • e.g., “Model requires CRRA > 8 to match portfolio share”
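
Since DYNARK is still a mockup, here is a purely hypothetical Python illustration of how the three layers might live side by side in one specification; every key and value below is invented for illustration, not an actual DYNARK API.

```python
# Hypothetical three-layer specification (illustrative only, not DYNARK).
model_spec = {
    # Layer 1: abstract math -- the "Platonic Ideal" of the model,
    # what you would solve with infinite computing power.
    "bellman": r"V(m) = \max_c \, u(c) + \beta \mathbb{E}[V(m')]",
    "transition": r"m' = R (m - c) + y'",
    # Layer 2: numerical/approximation details and parameter restrictions.
    "approximation": {"gridpoints": 48, "grid_max": 20.0},
    "restrictions": {"CRRA": (1.0, 10.0)},
    # Layer 3: concrete, testable claims about results.
    "claims": ["CRRA > 8 is required to match the portfolio share"],
}
```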

Why These Elements?

  • Layers 2 and 3 allow AUTOMATIC robustness testing (a toy harness is sketched after this list)
  • For each approximation:
    • does the claim fail as the number of gridpoints increases?
  • For each restriction:
    • run a ‘parameter sweep’ over values in the allowed range
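
As a toy illustration of what “automatic” could mean, here is a hypothetical harness; `solve_and_check_claim` is a stand-in for re-solving the model and re-testing a Layer-3 claim, not real Econ-ARK code.

```python
import numpy as np

def solve_and_check_claim(gridpoints: int, crra: float) -> bool:
    """Placeholder: re-solve the model at these settings and re-test the
    claim. Stubbed with a fake rule so the harness runs end to end."""
    return crra > 8.0  # stand-in for an actual solve-and-test step

# Approximation check: does the claim survive as the grid is refined?
for n in (24, 48, 96, 192):
    print(f"gridpoints={n}: claim holds? {solve_and_check_claim(n, 9.0)}")

# Parameter sweep: test the claim across the allowed range 1 < CRRA < 10.
for crra in np.linspace(1.5, 9.5, 9):
    print(f"CRRA={crra:.1f}: claim holds? {solve_and_check_claim(48, crra)}")
```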

Underlying Motivation

Tower of Babel problem

  1. Lack of transparency: What exactly is your model?
    • Lots of buried assumptions: gridpoints, boundaries, distributions
  2. Lack of replicability
    • Some notorious stories (only runs on Win 8.1 w/ Matlab 8.7.6.5)

Causes:

  • Everyone writes their own code
    • Often inherited from advisor
    • Barriers to entry
  • Can take months just to get someone else’s model working
    • Quicker to build on your own Byzantine legacy code

Powered by Econ-ARK