Commit
Merge pull request #136 from rusty-electron/Development
fix tutorial notebook 1 toc links and update some text
stefanradev93 authored Feb 21, 2024
2 parents 801e56f + d477cd4 commit 225a817
Showing 1 changed file with 53 additions and 7 deletions.
60 changes: 53 additions & 7 deletions examples/Intro_Amortized_Posterior_Estimation.ipynb
@@ -8,7 +8,44 @@
},
"source": [
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n",
"<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Introduction\" data-toc-modified-id=\"Introduction-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Introduction</a></span></li><li><span><a href=\"#Defining-the-Generative-Model\" data-toc-modified-id=\"Defining-the-Generative-Model-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Defining the Generative Model</a></span><ul class=\"toc-item\"><li><span><a href=\"#Prior\" data-toc-modified-id=\"Prior-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Prior</a></span></li><li><span><a href=\"#Simulator\" data-toc-modified-id=\"Simulator-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Simulator</a></span></li><li><span><a href=\"#Generative-Model\" data-toc-modified-id=\"Generative-Model-2.3\"><span class=\"toc-item-num\">2.3&nbsp;&nbsp;</span>Generative Model</a></span></li></ul></li><li><span><a href=\"#Defining-the-Neural-Approximator\" data-toc-modified-id=\"Defining-the-Neural-Approximator-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Defining the Neural Approximator</a></span><ul class=\"toc-item\"><li><span><a href=\"#Summary-Network\" data-toc-modified-id=\"Summary-Network-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Summary Network</a></span></li><li><span><a href=\"#Inference-Network\" data-toc-modified-id=\"Inference-Network-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Inference Network</a></span></li><li><span><a href=\"#Amortized-Posterior\" data-toc-modified-id=\"Amortized-Posterior-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Amortized Posterior</a></span></li></ul></li><li><span><a href=\"#Defining-the-Trainer\" data-toc-modified-id=\"Defining-the-Trainer-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Defining the Trainer</a></span></li><li><span><a href=\"#Training-Phase\" data-toc-modified-id=\"Training-Phase-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Training Phase</a></span><ul class=\"toc-item\"><li><span><a href=\"#Online-Training\" data-toc-modified-id=\"Online-Training-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Online Training</a></span></li><li><span><a href=\"#Inspecting-the-Loss\" data-toc-modified-id=\"Inspecting-the-Loss-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Inspecting the Loss</a></span></li><li><span><a href=\"#Validating-Consistency\" data-toc-modified-id=\"Validating-Consistency-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Validating Consistency</a></span><ul class=\"toc-item\"><li><span><a href=\"#Latent-space-inspection\" data-toc-modified-id=\"Latent-space-inspection-5.3.1\"><span class=\"toc-item-num\">5.3.1&nbsp;&nbsp;</span>Latent space inspection</a></span></li><li><span><a href=\"#Simulation-Based-Calibration\" data-toc-modified-id=\"Simulation-Based-Calibration-5.3.2\"><span class=\"toc-item-num\">5.3.2&nbsp;&nbsp;</span>Simulation-Based Calibration</a></span></li><li><span><a href=\"#Posterior-z-score-and-contraction\" data-toc-modified-id=\"Posterior-z-score-and-contraction-5.3.3\"><span class=\"toc-item-num\">5.3.3&nbsp;&nbsp;</span>Posterior z-score and contraction</a></span></li></ul></li></ul></li><li><span><a href=\"#Inference-Phase\" data-toc-modified-id=\"Inference-Phase-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Inference Phase</a></span></li></ul></div>"
"<div class=\"toc\">\n",
" <ul class=\"toc-item\">\n",
" <li><span><a href=\"#introduction\" data-toc-modified-id=\"introduction-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Introduction</a></span></li>\n",
" <li>\n",
" <span><a href=\"#defining-the-generative-model\" data-toc-modified-id=\"defining-the-generative-model-2\"><span class=\"toc-item-num\">2&nbsp;&nbsp;</span>Defining the Generative Model</a></span>\n",
" <ul class=\"toc-item\">\n",
" <li><span><a href=\"#prior\" data-toc-modified-id=\"prior-2.1\"><span class=\"toc-item-num\">2.1&nbsp;&nbsp;</span>Prior</a></span></li>\n",
" <li><span><a href=\"#simulator\" data-toc-modified-id=\"simulator-2.2\"><span class=\"toc-item-num\">2.2&nbsp;&nbsp;</span>Simulator</a></span></li>\n",
" <li><span><a href=\"#generative-model\" data-toc-modified-id=\"generative-model-2.3\"><span class=\"toc-item-num\">2.3&nbsp;&nbsp;</span>Generative Model</a></span></li>\n",
" </ul>\n",
" </li>\n",
" <li>\n",
" <span><a href=\"#defining-the-neural-approximator\" data-toc-modified-id=\"defining-the-neural-approximator-3\"><span class=\"toc-item-num\">3&nbsp;&nbsp;</span>Defining the Neural Approximator</a></span>\n",
" <ul class=\"toc-item\">\n",
" <li><span><a href=\"#summary-network\" data-toc-modified-id=\"summary-network-3.1\"><span class=\"toc-item-num\">3.1&nbsp;&nbsp;</span>Summary Network</a></span></li>\n",
" <li><span><a href=\"#inference-network\" data-toc-modified-id=\"inference-network-3.2\"><span class=\"toc-item-num\">3.2&nbsp;&nbsp;</span>Inference Network</a></span></li>\n",
" <li><span><a href=\"#amortized-posterior\" data-toc-modified-id=\"amortized-posterior-3.3\"><span class=\"toc-item-num\">3.3&nbsp;&nbsp;</span>Amortized Posterior</a></span></li>\n",
" </ul>\n",
" </li>\n",
" <li><span><a href=\"#defining-the-trainer\" data-toc-modified-id=\"defining-the-trainer-4\"><span class=\"toc-item-num\">4&nbsp;&nbsp;</span>Defining the Trainer</a></span></li>\n",
" <li>\n",
" <span><a href=\"#training-phase\" data-toc-modified-id=\"training-phase-5\"><span class=\"toc-item-num\">5&nbsp;&nbsp;</span>Training Phase</a></span>\n",
" <ul class=\"toc-item\">\n",
" <li><span><a href=\"#online-training\" data-toc-modified-id=\"online-training-5.1\"><span class=\"toc-item-num\">5.1&nbsp;&nbsp;</span>Online Training</a></span></li>\n",
" <li><span><a href=\"#inspecting-the-loss\" data-toc-modified-id=\"inspecting-the-loss-5.2\"><span class=\"toc-item-num\">5.2&nbsp;&nbsp;</span>Inspecting the Loss</a></span></li>\n",
" <li>\n",
" <span><a href=\"#validating-consistency\" data-toc-modified-id=\"validating-consistency-5.3\"><span class=\"toc-item-num\">5.3&nbsp;&nbsp;</span>Validating Consistency</a></span>\n",
" <ul class=\"toc-item\">\n",
" <li><span><a href=\"#latent-space-inspection\" data-toc-modified-id=\"latent-space-inspection-5.3.1\"><span class=\"toc-item-num\">5.3.1&nbsp;&nbsp;</span>Latent space inspection</a></span></li>\n",
" <li><span><a href=\"#simulation-based-calibration\" data-toc-modified-id=\"simulation-based-calibration-5.3.2\"><span class=\"toc-item-num\">5.3.2&nbsp;&nbsp;</span>Simulation-Based Calibration</a></span></li>\n",
" <li><span><a href=\"#posterior-z-score-and-contraction\" data-toc-modified-id=\"posterior-z-score-and-contraction-5.3.3\"><span class=\"toc-item-num\">5.3.3&nbsp;&nbsp;</span>Posterior z-score and contraction</a></span></li>\n",
" </ul>\n",
" </li>\n",
" </ul>\n",
" </li>\n",
" <li><span><a href=\"#inference-phase\" data-toc-modified-id=\"inference-phase-6\"><span class=\"toc-item-num\">6&nbsp;&nbsp;</span>Inference Phase</a></span></li>\n",
" </ul>\n",
"</div>\n"
]
},
{
@@ -516,6 +553,8 @@
"id": "biblical-tongue",
"metadata": {},
"source": [
"Note about `z`: the inference network takes the parameters and the summary vectors as input and outputs latent vectors (`z`). It essentially transforms the parameter space into a simpler latent space (a unit Gaussian in this case) that is assumed to capture the essential information about the parameters.\n",
"\n",
"We can inspect the shapes of the outputs as well:"
]
},
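The role of `z` described above can be sketched with plain NumPy. This is a toy, hand-rolled invertible map, not BayesFlow's actual inference network, and all names here are illustrative:

```python
import numpy as np

# Schematic sketch (NOT BayesFlow's implementation): an invertible affine
# map sends parameters theta, conditioned on a summary vector, to a latent
# vector z. After training, z should look like a unit Gaussian.
rng = np.random.default_rng(1)

num_params = 2                            # dimensionality of theta (assumed)
A = np.array([[1.5, 0.2], [0.0, 0.8]])    # invertible weight matrix
b = np.array([0.1, -0.3])

def forward(theta, summary):
    """theta -> z; the shift depends on the data summary (toy conditioning)."""
    return (theta + 0.05 * summary[:, :num_params]) @ A.T + b

def inverse(z, summary):
    """z -> theta; exact inverse of `forward`, so no information is lost."""
    theta = (z - b) @ np.linalg.inv(A).T
    return theta - 0.05 * summary[:, :num_params]

theta = rng.normal(size=(64, num_params))
summary = rng.normal(size=(64, 8))        # stand-in for summary-network output

z = forward(theta, summary)
theta_rec = inverse(z, summary)
print(z.shape)                            # same shape as theta: (64, 2)
print(np.allclose(theta, theta_rec))      # the map is exactly invertible
```

Invertibility is the key design choice: it lets the amortizer draw posterior samples by sampling z from a Gaussian and mapping it back to parameter space.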
@@ -832,7 +871,7 @@
"id": "departmental-preservation",
"metadata": {},
"source": [
"We can inspect the evolution of the loss via a utility function ``plot_losses``, for which we have imported the ``diagnostics`` module from ``BayesFlow``."
"Bayesian models can be complex and computationally intensive, and metrics like training and validation loss can provide critical insights into the model's performance and stability. A decreasing loss over time indicates that the model is learning effectively, while fluctuations or increases in loss might suggest issues in the training process, such as overfitting, underfitting, or inappropriate learning rate settings. We can inspect the evolution of the loss via a utility function ``plot_losses``, for which we have imported the ``diagnostics`` module from ``BayesFlow``. "
]
},
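The trend-reading described above can be illustrated without BayesFlow at all. Below is a minimal sketch, with a fabricated loss history standing in for the Trainer's real one, that smooths a noisy curve so the overall trend is easier to judge:

```python
import numpy as np

# Hypothetical loss history; in the notebook this would come from training.
rng = np.random.default_rng(0)
losses = 1.0 / np.sqrt(np.arange(1, 501)) + 0.05 * rng.normal(size=500)

def moving_average(x, window=25):
    """Smooth a noisy loss curve with a simple sliding-window mean."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")

smoothed = moving_average(losses)
# A healthy run: the smoothed curve decreases and then flattens.
print(f"first window mean: {smoothed[0]:.3f}, last window mean: {smoothed[-1]:.3f}")
```

Fluctuations that survive smoothing, or a late upward drift, are the patterns that would suggest learning-rate or overfitting issues.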
{
@@ -866,7 +905,7 @@
"### Validating Consistency\n",
"Validating the consistency of our model-amortizer coupling is an important step which should be performed before any real data are presented to the networks. In other words, the model should work in the \"small world\" before going out into the world of real data. This involves testing the model under known conditions and ensuring that it behaves logically and accurately. It is a critical step to avoid surprises when the model is later exposed to real and more complex data. In addition to a smooth loss reduction curve, we can use at least four handy diagnostic utilities.\n",
"\n",
"For a better illustration, we will start by generating some test simulations (not seen during training). Note, that we also use the default configurator to prepare these test simulations for interacting with the networks."
"For a better illustration, we will start by generating some test simulations (not seen during training) using the simulator `model`. Note that we also use the default configurator to prepare these test simulations for interacting with the networks."
]
},
{
@@ -885,7 +924,8 @@
"metadata": {},
"source": [
"#### Latent space inspection\n",
"Since our training objective prescribes a unit Gaussian to the latent variable $\\boldsymbol{z}$ (see: https://arxiv.org/abs/2003.06281), we expect that, upon good convergence, the latent space will exhibit the prescribed probabilistic structure. Good convergence means that the model has learned an appropriate representation of the data in its latent space. We can quickly inspect this structure by calling the ``plot_latent_space_2d`` function from the `diagnostics` module."
"\n",
"Since our training objective prescribes a unit Gaussian to the latent variable $\\boldsymbol{z}$ (see: https://arxiv.org/abs/2003.06281), we expect that, upon good convergence, the latent space will exhibit the prescribed probabilistic structure. Good convergence means that the model has learned an appropriate representation of the data in its latent space. We can quickly inspect this structure by calling the ``plot_latent_space_2d`` function from the `diagnostics` module. "
]
},
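The statistics this diagnostic inspects can be sketched directly. The snippet below fakes "perfectly converged" latent draws to show what a unit-Gaussian latent space looks like numerically; it is a concept illustration, not a call into BayesFlow:

```python
import numpy as np

# After good convergence, latent draws z should be close to a unit Gaussian:
# component means near 0 and covariance near the identity matrix.
rng = np.random.default_rng(42)
z = rng.standard_normal(size=(5000, 2))   # stand-in for amortizer latents

mean = z.mean(axis=0)
cov = np.cov(z, rowvar=False)

print(np.abs(mean).max())                 # small: means near 0
print(np.abs(cov - np.eye(2)).max())      # small: covariance near identity
```

Systematic deviations from these values (shifted means, correlated components, heavy tails) would hint that training has not converged to the prescribed latent structure.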
{
@@ -931,7 +971,7 @@
"\n",
"SBC is a technique used to assess whether a probabilistic model correctly infers parameters from data. The basic idea is to simulate a large number of datasets from the model's prior distribution and then perform posterior inference on these simulated datasets. The goal is to check if the inferred posterior distributions are consistent with the priors. Essentially, SBC tests if the model can accurately recover the parameters it used to generate the data in the first place. This process helps identify any systematic biases or inaccuracies in the model's inference process.\n",
"\n",
"To perform SBC, we first need to obtain a number of `L` posterior draws from `M` simulated data sets. While the procedure is typically intractable, amortized inference allows us to perform SBC instantly."
"To perform SBC, we first need to obtain `L` posterior draws from each of `M` simulated data sets. While the procedure is typically intractable, amortized inference allows us to perform SBC instantly."
]
},
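The rank statistic at the heart of SBC can be computed in a few lines. The sketch below uses a toy "posterior" that is calibrated by construction (draws from the same distribution as the prior), so the ranks come out uniform; it illustrates the idea, not BayesFlow's own routine:

```python
import numpy as np

# For each of M simulated data sets, count how many of the L posterior draws
# fall below the data-generating "true" parameter. If the posteriors are
# calibrated, these rank statistics are uniform on {0, ..., L}.
rng = np.random.default_rng(0)
M, L = 2000, 100

true_params = rng.normal(size=M)                 # theta ~ p(theta)
# Calibrated toy "posteriors": draws from the same distribution as the prior.
posterior_draws = rng.normal(size=(M, L))

ranks = (posterior_draws < true_params[:, None]).sum(axis=1)

# Uniform ranks: roughly M / (L + 1) data sets per rank bin.
counts = np.bincount(ranks, minlength=L + 1)
print(counts.mean())                             # exactly M / 101 here
```

A U-shaped or peaked rank histogram, rather than a flat one, is the signature of under- or over-dispersed posteriors.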
{
@@ -1012,12 +1052,18 @@
"metadata": {},
"source": [
"#### Posterior z-score and contraction\n",
"\n",
"* Posterior z-score: It measures how many standard deviations away the mean of the posterior distribution is from the true value of the parameter. A z-score of 0 indicates that the mean perfectly aligns with the true value (no bias), while positive/negative z-scores indicate positive/negative bias, respectively.\n",
"* Posterior contraction: It measures how much the posterior distribution contracts around the true value of the parameter as more data is observed. A higher contraction indicates that the data provides strong evidence, narrowing the uncertainty range.\n",
"\n",
"Ideally, we should obtain high contraction and a z-score near 0. This means the model accurately captures the true value with little bias and high confidence.\n",
"\n",
"These diagnostics offer a quick-and-dirty way to gauge how well point estimates and uncertainty estimates capture the \"true\" parameters, assuming the generative model is well-specified. For this, we will draw more samples from the posteriors in order to reduce the Monte Carlo error."
]
},
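The two quantities defined in the bullets above are simple to compute for a single parameter. A minimal sketch with a fabricated posterior (the prior variance and all values here are assumptions for illustration):

```python
import numpy as np

#   z-score     = (posterior mean - true value) / posterior std
#   contraction = 1 - posterior variance / prior variance
rng = np.random.default_rng(7)

prior_var = 4.0                 # toy prior: variance 4 (assumed)
theta_true = 1.0                # the data-generating value
# Toy posterior: slightly biased, much narrower than the prior.
post_samples = rng.normal(loc=1.02, scale=0.2, size=4000)

z_score = (post_samples.mean() - theta_true) / post_samples.std()
contraction = 1.0 - post_samples.var() / prior_var

print(f"z-score: {z_score:.3f}")          # near 0: little bias
print(f"contraction: {contraction:.3f}")  # near 1: strong contraction
```

Plotting contraction against z-score across many simulated data sets, as the notebook does, shows at a glance whether the amortizer is both informative and unbiased.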
{
"cell_type": "code",
"execution_count": 32,
"execution_count": null,
"id": "descending-election",
"metadata": {},
"outputs": [],
@@ -1053,7 +1099,7 @@
},
{
"cell_type": "code",
"execution_count": 34,
"execution_count": null,
"id": "virtual-incidence",
"metadata": {},
"outputs": [
