26. Execution Statistics

This table contains the latest execution statistics.

| Document | Modified | Method | Run Time (s) | Status |
|---|---|---|---|---|
| aiyagari_jax | 2025-02-10 03:12 | cache | 66.47 | |
| arellano | 2025-02-10 03:13 | cache | 20.08 | |
| autodiff | 2025-02-10 03:13 | cache | 10.88 | |
| hopenhayn | 2025-02-10 03:13 | cache | 21.22 | |
| ifp_egm | 2025-02-10 03:15 | cache | 118.24 | |
| intro | 2025-02-10 03:15 | cache | 1.04 | |
| inventory_dynamics | 2025-02-10 03:15 | cache | 8.19 | |
| inventory_ssd | 2025-02-10 03:48 | cache | 1933.22 | |
| jax_intro | 2025-02-10 03:48 | cache | 31.26 | |
| jax_nn | 2025-03-14 22:54 | cache | 87.97 | |
| job_search | 2025-02-10 03:48 | cache | 8.45 | |
| keras | 2025-03-14 22:55 | cache | 22.72 | |
| kesten_processes | 2025-02-10 03:49 | cache | 13.47 | |
| lucas_model | 2025-02-10 03:50 | cache | 15.8 | |
| markov_asset | 2025-02-10 03:50 | cache | 10.33 | |
| mle | 2025-02-10 03:50 | cache | 15.37 | |
| newtons_method | 2025-02-10 03:52 | cache | 131.75 | |
| opt_invest | 2025-02-10 03:53 | cache | 22.11 | |
| opt_savings_1 | 2025-02-10 03:53 | cache | 34.3 | |
| opt_savings_2 | 2025-02-10 03:53 | cache | 19.95 | |
| overborrowing | 2025-02-10 03:54 | cache | 24.29 | |
| short_path | 2025-02-10 03:54 | cache | 3.59 | |
| status | 2025-02-10 03:54 | cache | 2.44 | |
| troubleshooting | 2025-02-10 03:15 | cache | 1.04 | |
| wealth_dynamics | 2025-02-10 03:58 | cache | 220.1 | |
| zreferences | 2025-02-10 03:15 | cache | 1.04 | |

These lectures are built on Linux instances through GitHub Actions with access to a GPU. The lectures make use of the NVIDIA T4 card.

You can check the backend used by JAX using:

import jax
# Check if JAX is using GPU
print(f"JAX backend: {jax.devices()[0].platform}")
JAX backend: gpu

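If more detail is needed, JAX can also list every device it sees. A minimal sketch (the device names and count reported depend on the runtime):

import jax

# Report the default backend platform, e.g. "gpu" or "cpu"
print(jax.default_backend())

# Enumerate all devices visible to JAX along with their hardware kind
for d in jax.devices():
    print(d.id, d.platform, d.device_kind)
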
We can also check the hardware we are running on:

!nvidia-smi
/opt/conda/envs/quantecon/lib/python3.12/pty.py:95: RuntimeWarning: os.fork() was called. os.fork() is incompatible with multithreaded code, and JAX is multithreaded, so this will likely lead to a deadlock.
  pid, fd = os.forkpty()
Mon Feb 10 03:54:25 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.127.05             Driver Version: 550.127.05     CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla T4                       On  |   00000001:00:00.0 Off |                  Off |
| N/A   44C    P0             29W /   70W |     105MiB /  16384MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
+-----------------------------------------------------------------------------------------+
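
The same information can be pulled programmatically rather than by reading the full nvidia-smi banner. A minimal sketch using nvidia-smi's query interface (assumes the nvidia-smi binary is on the PATH):

import subprocess

# Query selected GPU fields in CSV form via nvidia-smi's --query-gpu interface
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,memory.total,memory.used,utilization.gpu",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())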