# 26. Execution Statistics

This table contains the latest execution statistics.

| Document | Modified | Method | Run Time (s) | Status |
|---|---|---|---|---|
| aiyagari_jax | 2025-05-19 03:32 | cache | 68.83 | |
| arellano | 2025-05-19 03:33 | cache | 22.52 | |
| autodiff | 2025-05-19 03:33 | cache | 13.19 | |
| hopenhayn | 2025-05-19 03:33 | cache | 24.88 | |
| ifp_egm | 2025-05-19 03:36 | cache | 143.13 | |
| intro | 2025-05-19 03:36 | cache | 1.18 | |
| inventory_dynamics | 2025-05-19 03:36 | cache | 9.7 | |
| inventory_ssd | 2025-05-19 04:12 | cache | 2142.59 | |
| jax_intro | 2025-05-19 04:12 | cache | 41.96 | |
| jax_nn | 2025-05-19 04:14 | cache | 97.76 | |
| job_search | 2025-05-19 04:14 | cache | 10.27 | |
| keras | 2025-05-19 04:14 | cache | 28.08 | |
| kesten_processes | 2025-05-19 04:15 | cache | 15.29 | |
| lucas_model | 2025-05-19 04:15 | cache | 20.16 | |
| markov_asset | 2025-05-19 04:15 | cache | 11.93 | |
| mle | 2025-05-19 04:16 | cache | 15.67 | |
| newtons_method | 2025-05-19 04:19 | cache | 186.86 | |
| opt_invest | 2025-05-19 04:19 | cache | 22.82 | |
| opt_savings_1 | 2025-05-19 04:20 | cache | 50.87 | |
| opt_savings_2 | 2025-05-19 04:20 | cache | 21.02 | |
| overborrowing | 2025-05-20 05:59 | cache | 34.54 | |
| short_path | 2025-05-19 04:21 | cache | 4.46 | |
| status | 2025-05-19 04:21 | cache | 2.47 | |
| troubleshooting | 2025-05-19 03:36 | cache | 1.18 | |
| wealth_dynamics | 2025-05-19 04:23 | cache | 158.44 | |
| zreferences | 2025-05-19 03:36 | cache | 1.18 | |

These lectures are built on Linux instances via GitHub Actions with access to a GPU, specifically an NVIDIA T4 card.

You can check which backend JAX is using with:

```python
import jax
# Check if JAX is using GPU
print(f"JAX backend: {jax.devices()[0].platform}")
```

```
JAX backend: gpu
```
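If you want more detail than the first device's platform, the following sketch uses the public `jax.default_backend()` and `jax.devices()` helpers to enumerate every device JAX can see (on these runners, a single Tesla T4; on a machine without a GPU, JAX falls back to the CPU):

```python
import jax

# The default backend is "gpu" when a CUDA device is visible,
# otherwise JAX falls back to "cpu"
print(f"Default backend: {jax.default_backend()}")

# List all devices JAX can see, with their platform and hardware kind
for d in jax.devices():
    print(d.id, d.platform, d.device_kind)
```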

and the hardware we are running on:

```
!nvidia-smi
```

```
Mon May 19 04:21:15 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.51.03              Driver Version: 575.51.03      CUDA Version: 12.9     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       Off |   00000000:00:1E.0 Off |                    0 |
| N/A   34C    P0             32W /   70W |     109MiB /  15360MiB |      1%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            8505      C   ...da3/envs/quantecon/bin/python        106MiB |
+-----------------------------------------------------------------------------------------+
```