Brute-force maximization is infeasible with many controls
Total number of control vectors: \(|\mathcal{A}| = A^p\), with \(A\) grid points per control and \(p\) controls
The alternative is a costly nonlinear root-finding step
The endogenous gridpoint method (EGM) avoids this step
But only in the case of a single control
Needed: an algorithm that avoids root-finding at every step
2) The curse of optimization
“Solving the FOCs becomes a high-dimensional nonlinear root-finding problem.”
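A minimal sketch of why brute-force search blows up: with \(A\) grid points per control and \(p\) controls, the candidate set has \(A^p\) elements. The values of \(A\) and \(p\) below are hypothetical, chosen only to illustrate the growth.

```julia
# Illustrative sketch: size of the brute-force control grid |𝒜| = A^p.
# A and p are hypothetical values, not taken from a specific model.
A = 10                                # grid points per control
for p in (1, 2, 5, 10)
    println("p = $p controls → |𝒜| = A^p = $(big(A)^p) candidates")
end
```

Even at a coarse \(A = 10\), ten controls already require \(10^{10}\) evaluations per state and per iteration.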
The third curse
When \(\mathbf{u}_{t+1}\) is an \(m\)-dimensional vector of standard normal random variables, \(\mathbb{E}[V_{\boldsymbol{\pi}}(\mathbf{s}')]\) corresponds to the integral: \[\mathbb{E}[V_{\boldsymbol{\pi}}(\mathbf{s}')] = \int_{u_1} \int_{u_2} \ldots \int_{u_m} V_{\boldsymbol{\pi}}(\mathbf{s}')\phi(\mathbf{u}) \, du_1 \, du_2 \ldots d u_m.\]
Evaluating this integral is very costly with a large number of shocks
The cost of a quadrature solution increases exponentially with \(m\)
With \(U\) quadrature nodes per shock, the total number of points is \(|\mathcal{U}| = U^m\)
Consider the multi-region RBC example:
With a shock for every region, we have \(m = 50\)
Needed: an efficient way to compute expectations
3) The curse of expectation
“Computing \(\mathbb{E}[V(\mathbf{s}')]\) requires integrating over a huge shock space.”
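To see the contrast concretely, the sketch below estimates an expectation over \(m = 50\) shocks by Monte Carlo, where tensor-product quadrature would be infeasible. The value function `V` here is a hypothetical stand-in (sum of squares, whose exact expectation under \(m\) independent standard normals is \(m\)), used only so the estimate can be checked.

```julia
using Statistics, Random

# Hypothetical stand-in for V(s'): sum of squared shocks. Its exact
# expectation under m independent standard normals is m, so we can
# verify the Monte Carlo estimate.
V(u) = sum(abs2, u)

Random.seed!(1)
m, N = 50, 100_000                     # 50 shocks, 100k Monte Carlo draws
draws = randn(m, N)                    # each column is one draw of u
approx = mean(V(view(draws, :, j)) for j in 1:N)
println("Monte Carlo estimate ≈ $approx (exact value: $m)")

# A tensor-product quadrature rule with U = 5 nodes per shock would
# instead require 5^50 ≈ 8.9e34 evaluations of V.
```

The Monte Carlo cost is \(N\) evaluations regardless of \(m\), which is why simulation-based expectations are the workhorse in high dimensions.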
Overcoming the curses
The goal of this course is to show how to overcome the three curses of dimensionality
The key will be to use machine learning techniques to handle each of the curses
We will learn how to represent functions, train models, and use automatic differentiation
Deep neural network architecture
Training history
Overview of the course
Besides this introductory module, the course is organized into four modules.
The first two modules cover classical numerical methods
The last two modules cover machine learning techniques
Module 02: Discrete-Time Methods
Tauchen’s discretization method
Value function iteration
Endogenous gridpoint method
Module 03: Continuous-Time Methods
Finite-difference methods
Stability, consistency, and monotonicity
Spectral methods
Module 04: Fundamentals of Machine Learning
Supervised learning and neural networks
Optimization algorithms
Automatic differentiation
Module 05: The Deep Policy Iteration (DPI) Method
Hyper-dual approach to Itô’s lemma
Deep policy iteration algorithm
Applications
The Julia programming language
The course is meant to be practical and hands-on.
The course will teach you the theory
But it will also focus on the practical implementation
We will use the Julia programming language to implement the methods discussed in the course.
Julia is a modern, high-level, high-performance programming language.
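As a small taste of the language (an illustration of ours, not from the course materials), the snippet below defines a CRRA utility function — the helper name `crra` and the parameter values are hypothetical — and shows Julia's dot syntax for broadcasting a scalar function over a vector.

```julia
# Hypothetical example: CRRA utility with risk-aversion parameter γ,
# handling the log-utility limit γ = 1 as a special case.
crra(c; γ = 2.0) = γ == 1 ? log(c) : (c^(1 - γ) - 1) / (1 - γ)

println(crra(2.0))            # scalar call → 0.5 when γ = 2
println(crra.([1.0, 2.0]))    # dot syntax broadcasts over the vector
```

One-line function definitions, keyword arguments with defaults, and automatic broadcasting via `f.(xs)` are the kind of features that make Julia well suited to translating model equations directly into code.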