Commit
[WASM] Remove numba and quantecon for WASM lectures (#571)
* remove numba from ar1_process

* Replace quantecon with wasm version

* skip lp_intro

* install quantecon_wasm

* Use github url to fetch the graph file data

* add a pass in if statement to avoid failure when pip is commented (see the sketch after this list)

* remove fixed files

* fix pip installs

* fix failure
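
The `pass` guard exists because the WASM build comments out the `pip` line inside the version check; without it the `if` body would be empty and raise an `IndentationError`. A rough sketch of the pattern (the `importlib.metadata` import of `version` is an assumption; only `from packaging.version import Version` is visible in the `inflation_history.md` diff below):

```python
from importlib.metadata import version   # assumed source of `version`
from packaging.version import Version

if Version(version("pandas")) < Version('2.1.4'):
    # %pip install "pandas>=2.1.4"   # commented out in the WASM build
    pass                             # keeps the if-body syntactically valid
```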
kp992 authored Feb 25, 2025
1 parent 1c432f8 commit c84c821
Showing 14 changed files with 53 additions and 149 deletions.
1 change: 0 additions & 1 deletion lectures/ar1_processes.md
@@ -409,7 +409,6 @@ Here is one solution:
```{code-cell} ipython3
from scipy.special import factorial2
def sample_moments_ar1(k, m=100_000, mu_0=0.0, sigma_0=1.0, seed=1234):
    np.random.seed(seed)
    sample_sum = 0.0
2 changes: 1 addition & 1 deletion lectures/eigen_II.md
@@ -21,7 +21,7 @@ In addition to what's in Anaconda, this lecture will need the following libraries
```{code-cell} ipython3
:tags: [hide-output]
%pip install quantecon_wasm
!pip install quantecon_wasm
```
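
The same swap recurs in several lectures below: `quantecon_wasm` is installed in place of `quantecon` and, as the `networks.md` diff further down shows, imported under the familiar `qe` alias. A minimal sketch of the pattern, on the assumption that `quantecon_wasm` mirrors the parts of `quantecon`'s API these lectures use:

```python
# WASM build: quantecon_wasm stands in for quantecon.
# Keeping the `qe` alias leaves downstream lecture code unchanged.
import quantecon_wasm as qe   # previously: import quantecon as qe

# Lecture code then keeps calling the familiar names, e.g. (API assumed to match):
# mc = qe.MarkovChain([[0.9, 0.1], [0.4, 0.6]])
```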

In this lecture we will begin with the foundational concepts in spectral theory.
3 changes: 1 addition & 2 deletions lectures/french_rev.md
@@ -62,8 +62,7 @@ This lecture uses data from three spreadsheets assembled by {cite}`sargent_velde
* [datasets/assignat.xlsx](https://github.com/QuantEcon/lecture-python-intro/blob/main/lectures/datasets/assignat.xlsx)

```{code-cell} ipython3
%pip install openpyxl
%pip install requests
!pip install openpyxl requests
```

```{code-cell} ipython3
5 changes: 2 additions & 3 deletions lectures/heavy_tails.md
@@ -19,8 +19,7 @@ In addition to what's in Anaconda, this lecture will need the following libraries
```{code-cell} ipython3
:tags: [hide-output]
!pip install --upgrade yfinance pandas_datareader
%pip install openpyxl
!pip install yfinance pandas_datareader
```

We use the following imports.
@@ -30,7 +29,7 @@ import matplotlib.pyplot as plt
import numpy as np
import yfinance as yf
import pandas as pd
import statsmodels.api as
import statsmodels.api as sm
import pyodide_http
from pandas_datareader import wb
1 change: 0 additions & 1 deletion lectures/inequality.md
@@ -82,7 +82,6 @@ We will need to install the following packages
:tags: [hide-output]
!pip install wbgapi plotly
%pip install openpyxl
```

We will also use the following imports.
6 changes: 3 additions & 3 deletions lectures/inflation_history.md
@@ -4,7 +4,7 @@ jupytext:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.16.1
    jupytext_version: 1.16.7
kernelspec:
  display_name: Python 3 (ipykernel)
  language: python
@@ -22,8 +22,7 @@ The `xlrd` package is used by `pandas` to perform operations on Excel files.
```{code-cell} ipython3
:tags: [hide-output]
!pip install xlrd
!pip install openpyxl
!pip install xlrd openpyxl
```
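
As the context line above notes, `pandas` only delegates Excel I/O to these engines. A one-line illustration of each (the file names are hypothetical):

```python
import pandas as pd

# xlrd reads legacy .xls workbooks; openpyxl reads .xlsx workbooks.
df_old = pd.read_excel("datasets/example.xls", engine="xlrd")
df_new = pd.read_excel("datasets/example.xlsx", engine="openpyxl")
```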

<!-- Check for pandas>=2.1.4 for Google Collab Compat -->
@@ -36,6 +35,7 @@ from packaging.version import Version
if Version(version("pandas")) < Version('2.1.4'):
!pip install "pandas>=2.1.4"
pass
```

We can then import the Python modules we will use.
1 change: 0 additions & 1 deletion lectures/input_output.md
@@ -21,7 +21,6 @@ This lecture requires the following imports and installs before we proceed.
:tags: [hide-output]
!pip install quantecon_book_networks
!pip install quantecon
!pip install pandas-datareader
```

2 changes: 1 addition & 1 deletion lectures/long_run_growth.md
@@ -65,7 +65,7 @@ be interesting to describe both total GDP and GDP per capita as it evolves within
First we will need to install the following package

```{code-cell} ipython3
%pip install openpyxl
!pip install openpyxl
```

Now let's import the packages needed to explore what the data says about long-run growth
2 changes: 1 addition & 1 deletion lectures/markov_chains_I.md
@@ -22,7 +22,7 @@ In addition to what's in Anaconda, this lecture will need the following libraries
```{code-cell} ipython3
:tags: [hide-output]
%pip install quantecon_wasm
!pip install quantecon_wasm
```

## Overview
2 changes: 1 addition & 1 deletion lectures/markov_chains_II.md
@@ -21,7 +21,7 @@ In addition to what's in Anaconda, this lecture will need the following libraries
```{code-cell} ipython3
:tags: [hide-output]
%pip install quantecon_wasm
!pip install quantecon_wasm
```

## Overview
4 changes: 0 additions & 4 deletions lectures/mle.md
@@ -13,10 +13,6 @@ kernelspec:

# Maximum Likelihood Estimation

```{code-cell} ipython3
%pip install openpyxl
```

```{code-cell} ipython3
from scipy.stats import lognorm, pareto, expon
import numpy as np
4 changes: 2 additions & 2 deletions lectures/networks.md
@@ -16,7 +16,7 @@ kernelspec:
```{code-cell} ipython3
:tags: [hide-output]
!pip install quantecon-book-networks pandas-datareader
!pip install quantecon_wasm quantecon-book-networks pandas-datareader
```

## Outline
@@ -54,7 +54,7 @@ import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd
import quantecon as qe
import quantecon_wasm as qe
import matplotlib.cm as cm
import quantecon_book_networks.input_output as qbn_io
165 changes: 39 additions & 126 deletions lectures/short_path.md
@@ -3,8 +3,10 @@ jupytext:
  text_representation:
    extension: .md
    format_name: myst
    format_version: 0.13
    jupytext_version: 1.16.7
kernelspec:
  display_name: Python 3
  display_name: Python 3 (ipykernel)
  language: python
  name: python3
---
@@ -44,7 +46,7 @@ Dynamic programming is an extremely powerful optimization technique that we apply

The only scientific library we'll need in what follows is NumPy:

```{code-cell} python3
```{code-cell} ipython3
import numpy as np
```

@@ -195,7 +197,7 @@ $$

For example, for the simple graph above, we set

```{code-cell} python3
```{code-cell} ipython3
from numpy import inf
Q = np.array([[inf, 1, 5, 3, inf, inf, inf],
@@ -216,7 +218,7 @@ For the sequence of approximations $\{J_n\}$ of the cost-to-go functions, we can

Let's try with this example and see how we go:

```{code-cell} python3
```{code-cell} ipython3
nodes = range(7) # Nodes = 0, 1, ..., 6
J = np.zeros_like(nodes, dtype=int) # Initial guess
next_J = np.empty_like(nodes, dtype=int) # Stores updated guess
@@ -249,7 +251,7 @@ But, importantly, we now have a methodology for tackling large graphs.
:label: short_path_ex1
```

The text below describes a weighted directed graph.
The file data below describes a weighted directed graph.

The line `node0, node1 0.04, node8 11.11, node14 72.21` means that from node0 we can go to

@@ -268,108 +270,16 @@ You will be dealing with floating point numbers now, rather than
integers, so consider replacing `np.equal()` with `np.allclose()`.
```
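
Since the hint above points to `np.allclose()`, here is one possible shape for the `compute_cost_to_go` routine that the solution calls further down (the committed implementation is collapsed out of this diff, so treat this as a sketch rather than the author's code):

```python
import numpy as np

def compute_cost_to_go(Q, max_iter=500):
    # Iterate the Bellman update J[i] <- min_j (Q[i, j] + J[j]) until
    # successive guesses agree to floating-point tolerance; np.allclose
    # replaces the exact np.equal test used in the small integer example.
    J = np.zeros(Q.shape[0])
    for _ in range(max_iter):
        next_J = np.min(Q + J, axis=1)   # vectorized minimization step
        if np.allclose(next_J, J):
            break
        J = next_J
    return J
```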

```{code-cell} python3
%%file graph.txt
node0, node1 0.04, node8 11.11, node14 72.21
node1, node46 1247.25, node6 20.59, node13 64.94
node2, node66 54.18, node31 166.80, node45 1561.45
node3, node20 133.65, node6 2.06, node11 42.43
node4, node75 3706.67, node5 0.73, node7 1.02
node5, node45 1382.97, node7 3.33, node11 34.54
node6, node31 63.17, node9 0.72, node10 13.10
node7, node50 478.14, node9 3.15, node10 5.85
node8, node69 577.91, node11 7.45, node12 3.18
node9, node70 2454.28, node13 4.42, node20 16.53
node10, node89 5352.79, node12 1.87, node16 25.16
node11, node94 4961.32, node18 37.55, node20 65.08
node12, node84 3914.62, node24 34.32, node28 170.04
node13, node60 2135.95, node38 236.33, node40 475.33
node14, node67 1878.96, node16 2.70, node24 38.65
node15, node91 3597.11, node17 1.01, node18 2.57
node16, node36 392.92, node19 3.49, node38 278.71
node17, node76 783.29, node22 24.78, node23 26.45
node18, node91 3363.17, node23 16.23, node28 55.84
node19, node26 20.09, node20 0.24, node28 70.54
node20, node98 3523.33, node24 9.81, node33 145.80
node21, node56 626.04, node28 36.65, node31 27.06
node22, node72 1447.22, node39 136.32, node40 124.22
node23, node52 336.73, node26 2.66, node33 22.37
node24, node66 875.19, node26 1.80, node28 14.25
node25, node70 1343.63, node32 36.58, node35 45.55
node26, node47 135.78, node27 0.01, node42 122.00
node27, node65 480.55, node35 48.10, node43 246.24
node28, node82 2538.18, node34 21.79, node36 15.52
node29, node64 635.52, node32 4.22, node33 12.61
node30, node98 2616.03, node33 5.61, node35 13.95
node31, node98 3350.98, node36 20.44, node44 125.88
node32, node97 2613.92, node34 3.33, node35 1.46
node33, node81 1854.73, node41 3.23, node47 111.54
node34, node73 1075.38, node42 51.52, node48 129.45
node35, node52 17.57, node41 2.09, node50 78.81
node36, node71 1171.60, node54 101.08, node57 260.46
node37, node75 269.97, node38 0.36, node46 80.49
node38, node93 2767.85, node40 1.79, node42 8.78
node39, node50 39.88, node40 0.95, node41 1.34
node40, node75 548.68, node47 28.57, node54 53.46
node41, node53 18.23, node46 0.28, node54 162.24
node42, node59 141.86, node47 10.08, node72 437.49
node43, node98 2984.83, node54 95.06, node60 116.23
node44, node91 807.39, node46 1.56, node47 2.14
node45, node58 79.93, node47 3.68, node49 15.51
node46, node52 22.68, node57 27.50, node67 65.48
node47, node50 2.82, node56 49.31, node61 172.64
node48, node99 2564.12, node59 34.52, node60 66.44
node49, node78 53.79, node50 0.51, node56 10.89
node50, node85 251.76, node53 1.38, node55 20.10
node51, node98 2110.67, node59 23.67, node60 73.79
node52, node94 1471.80, node64 102.41, node66 123.03
node53, node72 22.85, node56 4.33, node67 88.35
node54, node88 967.59, node59 24.30, node73 238.61
node55, node84 86.09, node57 2.13, node64 60.80
node56, node76 197.03, node57 0.02, node61 11.06
node57, node86 701.09, node58 0.46, node60 7.01
node58, node83 556.70, node64 29.85, node65 34.32
node59, node90 820.66, node60 0.72, node71 0.67
node60, node76 48.03, node65 4.76, node67 1.63
node61, node98 1057.59, node63 0.95, node64 4.88
node62, node91 132.23, node64 2.94, node76 38.43
node63, node66 4.43, node72 70.08, node75 56.34
node64, node80 47.73, node65 0.30, node76 11.98
node65, node94 594.93, node66 0.64, node73 33.23
node66, node98 395.63, node68 2.66, node73 37.53
node67, node82 153.53, node68 0.09, node70 0.98
node68, node94 232.10, node70 3.35, node71 1.66
node69, node99 247.80, node70 0.06, node73 8.99
node70, node76 27.18, node72 1.50, node73 8.37
node71, node89 104.50, node74 8.86, node91 284.64
node72, node76 15.32, node84 102.77, node92 133.06
node73, node83 52.22, node76 1.40, node90 243.00
node74, node81 1.07, node76 0.52, node78 8.08
node75, node92 68.53, node76 0.81, node77 1.19
node76, node85 13.18, node77 0.45, node78 2.36
node77, node80 8.94, node78 0.98, node86 64.32
node78, node98 355.90, node81 2.59
node79, node81 0.09, node85 1.45, node91 22.35
node80, node92 121.87, node88 28.78, node98 264.34
node81, node94 99.78, node89 39.52, node92 99.89
node82, node91 47.44, node88 28.05, node93 11.99
node83, node94 114.95, node86 8.75, node88 5.78
node84, node89 19.14, node94 30.41, node98 121.05
node85, node97 94.51, node87 2.66, node89 4.90
node86, node97 85.09
node87, node88 0.21, node91 11.14, node92 21.23
node88, node93 1.31, node91 6.83, node98 6.12
node89, node97 36.97, node99 82.12
node90, node96 23.53, node94 10.47, node99 50.99
node91, node97 22.17
node92, node96 10.83, node97 11.24, node99 34.68
node93, node94 0.19, node97 6.71, node99 32.77
node94, node98 5.91, node96 2.03
node95, node98 6.17, node99 0.27
node96, node98 3.32, node97 0.43, node99 5.87
node97, node98 0.30
node98, node99 0.33
node99,
```{code-cell} ipython3
import requests
file_url = "https://raw.githubusercontent.com/QuantEcon/lecture-python-intro/main/lectures/graph.txt"
graph_file_response = requests.get(file_url)
```

```{code-cell} ipython3
graph_file_data = str(graph_file_response.content, 'utf-8')
print(graph_file_data)
```
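
Fetching the file with `requests` works as-is in a regular kernel; under the pyodide/WASM runtime it generally also needs the `pyodide_http` shim that the `heavy_tails.md` diff above imports. A sketch of that combination (assuming `pyodide_http` is available in the runtime):

```python
# Patch the standard HTTP stacks so `requests` can run inside pyodide.
import pyodide_http
pyodide_http.patch_all()

import requests
url = "https://raw.githubusercontent.com/QuantEcon/lecture-python-intro/main/lectures/graph.txt"
graph_file_data = requests.get(url).text
```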

```{exercise-end}
@@ -381,27 +291,30 @@ node99,

First let's write a function that reads in the graph data above and builds a distance matrix.

```{code-cell} python3
```{code-cell} ipython3
num_nodes = 100
destination_node = 99
def map_graph_to_distance_matrix(in_file):
def map_graph_to_distance_matrix(in_file_data):
    # First let's set up the distance matrix Q with inf everywhere
    Q = np.full((num_nodes, num_nodes), np.inf)
    # Now we read in the data and modify Q
    with open(in_file) as infile:
        for line in infile:
            elements = line.split(',')
            node = elements.pop(0)
            node = int(node[4:])  # convert node description to integer
            if node != destination_node:
                for element in elements:
                    destination, cost = element.split()
                    destination = int(destination[4:])
                    Q[node, destination] = float(cost)
    Q[destination_node, destination_node] = 0
    lines = in_file_data.split('\n')
    for line_ in lines:
        line = line_.strip()
        if line == '':
            continue
        elements = line.split(',')
        node = elements.pop(0)
        node = int(node[4:])  # convert node description to integer
        if node != destination_node:
            for element in elements:
                destination, cost = element.split()
                destination = int(destination[4:])
                Q[node, destination] = float(cost)
    Q[destination_node, destination_node] = 0
    return Q
```
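
A quick way to sanity-check the parsing convention (the `node` prefix stripped by `int(node[4:])`) is to feed the new string-based version a toy graph; this is just an illustrative check, not part of the lecture:

```python
# Two-line toy input: an edge node0 -> node1 with cost 0.5, plus the
# terminal node99 line, which the function skips by design.
toy_data = "node0, node1 0.5\nnode99,\n"
Q_toy = map_graph_to_distance_matrix(toy_data)
print(Q_toy[0, 1])     # 0.5
print(Q_toy[99, 99])   # 0.0 (destination-to-destination cost)
```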

@@ -414,7 +327,7 @@ We'll use the algorithm described above.

The minimization step is vectorized to make it faster.

```{code-cell} python3
```{code-cell} ipython3
def bellman(J, Q):
    return np.min(Q + J, axis=1)
@@ -442,7 +355,7 @@ dealing with floating point numbers now.
Finally, here's a function that uses the cost-to-go function to obtain the
optimal path (and its cost).

```{code-cell} python3
```{code-cell} ipython3
def print_best_path(J, Q):
    sum_costs = 0
    current_node = 0
@@ -459,17 +372,17 @@ def print_best_path(J, Q):

Okay, now we have the necessary functions, let's call them to do the job we were assigned.

```{code-cell} python3
Q = map_graph_to_distance_matrix('graph.txt')
```{code-cell} ipython3
Q = map_graph_to_distance_matrix(graph_file_data)
J = compute_cost_to_go(Q)
print_best_path(J, Q)
```

The total cost of the path should agree with $J[0]$ so let's check this.

```{code-cell} python3
```{code-cell} ipython3
J[0]
```

```{solution-end}
```
```