This repository has been archived by the owner on Nov 8, 2024. It is now read-only.

Fixed a few typos #48

Merged · 6 commits · Jun 26, 2022
2 changes: 1 addition & 1 deletion _literate/1_why_Julia.jl
@@ -317,7 +317,7 @@
# $$ \text{PDF}(\boldsymbol{\mu}, \boldsymbol{\Sigma}) = (2\pi)^{-{\frac{k}{2}}}\det({\boldsymbol{\Sigma}})^{-{\frac {1}{2}}}e^{-{\frac{1}{2}}(\mathbf{x}-{\boldsymbol{\mu}})^{T}{\boldsymbol{\Sigma }}^{-1}(\mathbf{x} -{\boldsymbol{\mu}})} \label{mvnpdf} , $$
#
# where $\boldsymbol{\mu}$ is a vector of means, $k$ is the number of dimensions, $\boldsymbol{\Sigma}$ is a covariance matrix, $\det$ is the determinant and $\mathbf{x}$
-# is a vector of values that the PDF is evaluted for.
+# is a vector of values that the PDF is evaluated for.

# **SPOILER ALERT**: Julia will beat this C++ Eigen implementation by being almost 100x faster. So I will try to *help* C++ beat Julia (😂)
# by making a bivariate normal class `BiNormal` in order to avoid the expensive operation of inverting a covariance matrix and computing
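For readers skimming the diff, the PDF formula above can be evaluated directly. Below is a minimal sketch, not taken from the tutorial, assuming only Julia's LinearAlgebra standard library; the values of `μ`, `Σ`, and `x` are illustrative, and it performs the naive `inv(Σ)` that the surrounding comment calls expensive.

```julia
using LinearAlgebra

# Direct translation of the multivariate normal PDF formula; illustrative values only.
function mvn_pdf(x::AbstractVector, μ::AbstractVector, Σ::AbstractMatrix)
    k = length(μ)
    (2π)^(-k / 2) * det(Σ)^(-1 / 2) * exp(-0.5 * (x - μ)' * inv(Σ) * (x - μ))
end

μ = [0.0, 0.0]
Σ = [1.0 0.5; 0.5 1.0]
mvn_pdf([0.1, -0.2], μ, Σ)  # density evaluated at x = [0.1, -0.2]
```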
2 changes: 1 addition & 1 deletion _literate/2_bayes_stats.jl
@@ -408,7 +408,7 @@
# an interval in which we are sure that the value of the parameter of interest is, based on the likelihood conditioned on the observed
# data - $P(y \mid \theta)$; and the prior probability of the parameter of interest - $P(\theta)$. It is basically a "slice" of
# the posterior probability of the parameter restricted to a certain level of certainty. For example: a 95% credibility interval
-# shows the interval that we are 95% sure that captures the value of our parameter of intereest. That simple...
+# shows the interval that we are 95% sure that captures the value of our parameter of interest. That simple...

# For example, see figure below, which shows a Log-Normal distribution with mean 0 and standard deviation 2. The green dot
# shows the maximum likelihood estimation (MLE) of the value of $\theta$ which is simply the mode of distribution. And in
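A minimal sketch of the quantities described in this hunk, assuming Distributions.jl and treating the Log-Normal as if it were a posterior; the parameters of `LogNormal(0, 2)` are the log-scale mean and standard deviation mentioned in the text, and the 95% interval shown is the equal-tailed one.

```julia
using Distributions

d = LogNormal(0, 2)  # mean 0 and standard deviation 2 on the log scale

# 95% equal-tailed interval: the central "slice" of the density
lower, upper = quantile(d, 0.025), quantile(d, 0.975)

mode(d)  # the mode of the distribution, i.e. the green dot (MLE) in the figure
```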
4 changes: 2 additions & 2 deletions _literate/5_MCMC.jl
@@ -1239,8 +1239,8 @@ savefig(joinpath(@OUTPUT, "traceplot_bad_chain.svg")); # hide
# If your Bayesian model has problems with convergence there are some steps that can be tried[^QR].
# Listed here from the simplest to the most complex:

-# 1. **Increase the number of iterations and chains**: First option is to increase the number of MCMC iterations and it is also possible to increase the number of paralle chains to be sampled.
-# 2. **Model reparametrization**: the second option is to reparameterize the model. There are two ways to parameterize the model: the first with centered parameterization (CP) and the second with non-centered parameterization (NCP). NCP is most useful in Multilevel Models, therefore we will cover NCP in [10. **Multilevel Models**](/pages/10_multilevel_models/).
+# 1. **Increase the number of iterations and chains**: First option is to increase the number of MCMC iterations and it is also possible to increase the number of parallel chains to be sampled.
+# 2. **Model reparametrization**: the second option is to reparametrize the model. There are two ways to parameterize the model: the first with centered parametrization (CP) and the second with non-centered parameterization (NCP). NCP is most useful in Multilevel Models, therefore we will cover NCP in [10. **Multilevel Models**](/pages/10_multilevel_models/).
# 3. **Collect more data**: sometimes the model is too complex and we need a larger sample to get stable estimates.
# 4. **Rethink the model**: convergence failure when we have adequate sampling is usually due to a specification of priors and likelihood function that are not compatible with the data. In this case, it is necessary to rethink the data's generative process in which the model's assumptions are anchored.
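As a complement to item 2 in the list above, here is a hedged Turing.jl sketch of the two parameterizations; the model, priors, and variable names are illustrative and not taken from the tutorial.

```julia
using Turing

# Centered parameterization (CP): group effects θ drawn directly from Normal(μ, τ)
@model function centered(y)
    μ ~ Normal(0, 10)
    τ ~ truncated(Normal(0, 5), 0, Inf)
    θ ~ filldist(Normal(μ, τ), length(y))
    for i in eachindex(y)
        y[i] ~ Normal(θ[i], 1)
    end
end

# Non-centered parameterization (NCP): sample standardized offsets z and rescale,
# which often removes the funnel-shaped geometry that causes convergence problems
@model function noncentered(y)
    μ ~ Normal(0, 10)
    τ ~ truncated(Normal(0, 5), 0, Inf)
    z ~ filldist(Normal(0, 1), length(y))
    θ = μ .+ τ .* z  # deterministic transform recovers the group effects
    for i in eachindex(y)
        y[i] ~ Normal(θ[i], 1)
    end
end
```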

2 changes: 1 addition & 1 deletion _literate/7_logistic_reg.jl
@@ -50,7 +50,7 @@ savefig(joinpath(@OUTPUT, "logistic.svg")); # hide
# Logistic regression would add the logistic function to the linear term:

# * $\hat{p} = \text{Logistic}(\text{Linear}) = \frac{1}{1 + e^{-\operatorname{Linear}}}$ - predicted probability of the observation being the value 1
-# * $\hat{\mathbf{y}}=\left\{\begin{array}{ll} 0 & \text { if } \hat{p} < 0.5 \\ 1 & \text { if } \hat{p} \geq 0.5 \end{array}\right.$ - predicted discreve value of $\mathbf{y}$
+# * $\hat{\mathbf{y}}=\left\{\begin{array}{ll} 0 & \text { if } \hat{p} < 0.5 \\ 1 & \text { if } \hat{p} \geq 0.5 \end{array}\right.$ - predicted discrete value of $\mathbf{y}$
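A tiny sketch of the two expressions above; the `logistic` helper is defined inline here (not taken from the tutorial) and the linear-term value is made up.

```julia
logistic(x) = 1 / (1 + exp(-x))  # the logistic function applied to the linear term

linear = 0.8                   # hypothetical value of the linear predictor
p_hat = logistic(linear)       # predicted probability that the observation is 1
y_hat = p_hat >= 0.5 ? 1 : 0   # predicted discrete value of y via the 0.5 cutoff
```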

# **Example**:
