Upgrade Docusaurus to v3 #1

Merged · 13 commits · Nov 26, 2024
2 changes: 1 addition & 1 deletion .gitignore
@@ -94,6 +94,7 @@ sphinx/build/
website/build/
website/i18n/
website/node_modules/
+website/.docusaurus/

## Generated for tutorials
website/_tutorials/
@@ -104,6 +105,5 @@ website/pages/tutorials/*
## Generated for Sphinx
website/pages/api/
website/static/js/*
!website/static/js/mathjax.js
!website/static/js/code_block_buttons.js
website/static/_sphinx-sources/
3 changes: 3 additions & 0 deletions docs/README.md
@@ -1,2 +1,5 @@
+---
+
+---
This directory contains the source files for BoTorch's Docusaurus documentation.
See the website's [README](../website/README.md) for additional information.
28 changes: 17 additions & 11 deletions docs/acquisition.md
@@ -36,25 +36,27 @@ functions that consider multiple design points jointly (i.e. $q > 1$).
An alternative is to use Monte-Carlo (MC) sampling to approximate the integrals.
An MC approximation of $\alpha$ at $X$ using $N$ MC samples is

-$$ \alpha(X) \approx \frac{1}{N} \sum_{i=1}^N a(\xi_{i}) $$
+$$
+\alpha(X) \approx \frac{1}{N} \sum_{i=1}^N a(\xi_{i})
+$$

where $\xi_i \sim \mathbb{P}(f(X) \mid \mathcal{D})$.

For instance, for q-Expected Improvement (qEI), we have:

$$
\text{qEI}(X) \approx \frac{1}{N} \sum_{i=1}^N \max_{j=1,..., q}
-\bigl\\{ \max(\xi_{ij} - f^\*, 0) \bigr\\},
+\bigl\{ \max(\xi_{ij} - f^*, 0) \bigr\},
\qquad \xi_{i} \sim \mathbb{P}(f(X) \mid \mathcal{D})
$$

-where $f^\*$ is the best function value observed so far (assuming noiseless
+where $f^*$ is the best function value observed so far (assuming noiseless
observations). Using the reparameterization trick ([^KingmaWelling2014],
[^Rezende2014]),

$$
\text{qEI}(X) \approx \frac{1}{N} \sum_{i=1}^N \max_{j=1,..., q}
-\bigl\\{ \max\bigl( \mu(X)\_j + (L(X) \epsilon_i)\_j - f^\*, 0 \bigr) \bigr\\},
+\bigl\{ \max\bigl( \mu(X)\_j + (L(X) \epsilon_i)\_j - f^*, 0 \bigr) \bigr\},
\qquad \epsilon_{i} \sim \mathcal{N}(0, I)
$$
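As a concrete illustration, the reparameterized estimator is a few lines of PyTorch. The sketch below uses placeholder values for the posterior mean `mu`, a Cholesky factor `L` of the posterior covariance, and the incumbent `best_f`; it is a minimal illustration, not BoTorch's internal implementation.

```python
import torch

# Placeholder posterior quantities for q = 3 candidate points (assumed values).
q, N = 3, 512
mu = torch.zeros(q)       # posterior mean mu(X)
L = torch.eye(q)          # Cholesky factor L(X) of the posterior covariance
best_f = 0.1              # incumbent value f*

eps = torch.randn(N, q)                                # epsilon_i ~ N(0, I)
samples = mu + eps @ L.T                               # xi_i = mu(X) + L(X) epsilon_i
improvement = (samples - best_f).clamp_min(0.0)        # max(xi_ij - f*, 0)
qei_estimate = improvement.max(dim=-1).values.mean()   # mean over i of max over j
```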

@@ -65,10 +67,10 @@ All MC-based acquisition functions in BoTorch are derived from
[`MCAcquisitionFunction`](../api/acquisition.html#mcacquisitionfunction).

Acquisition functions expect input tensors $X$ of shape
-$\textit{batch_shape} \times q \times d$, where $d$ is the dimension of the
+$\textit{batch\_shape} \times q \times d$, where $d$ is the dimension of the
feature space, $q$ is the number of points considered jointly, and
-$\textit{batch_shape}$ is the batch-shape of the input tensor. The output
-$\alpha(X)$ will have shape $\textit{batch_shape}$, with each element
+$\textit{batch\_shape}$ is the batch-shape of the input tensor. The output
+$\alpha(X)$ will have shape $\textit{batch\_shape}$, with each element
corresponding to the respective $q \times d$ batch tensor in the input $X$.
Note that for analytic acquisition functions, it must be that $q=1$.
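To make the shape convention concrete, the following sketch fits a toy model and evaluates qEI on a `batch_shape x q x d` input; the training data and values here are invented purely for illustration.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.monte_carlo import qExpectedImprovement

# Toy training data: 8 points in d = 3 dimensions (assumed for illustration).
train_X = torch.rand(8, 3, dtype=torch.double)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

qEI = qExpectedImprovement(model, best_f=train_Y.max())

X = torch.rand(5, 2, 3, dtype=torch.double)  # batch_shape = 5, q = 2, d = 3
print(qEI(X).shape)  # torch.Size([5]): one value per q x d batch
```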

@@ -135,15 +137,19 @@ summary statistics of the posterior distribution at the evaluated point(s).
A popular acquisition function is Expected Improvement of a single point
for a Gaussian posterior, given by

-$$ \text{EI}(x) = \mathbb{E}\bigl[
+$$
+\text{EI}(x) = \mathbb{E}\bigl[
\max(y - f^\*, 0) \mid y\sim \mathcal{N}(\mu(x), \sigma^2(x))
-\bigr] $$
+\bigr]
+$$

where $\mu(x)$ and $\sigma(x)$ are the posterior mean and standard deviation of $f$ at the
-point $x$, and $f^\*$ is again the best function value observed so far (assuming
+point $x$, and $f^*$ is again the best function value observed so far (assuming
noiseless observations). It can be shown that

-$$ \text{EI}(x) = \sigma(x) \bigl( z \Phi(z) + \varphi(z) \bigr)$$
+$$
+\text{EI}(x) = \sigma(x) \bigl( z \Phi(z) + \varphi(z) \bigr)
+$$

where $z = \frac{\mu(x) - f_{\max}}{\sigma(x)}$ and $\Phi$ and $\varphi$ are
the cdf and pdf of the standard normal distribution, respectively.
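Given these quantities, the closed form is straightforward to evaluate directly; a minimal sketch with placeholder values for $\mu(x)$, $\sigma(x)$, and $f^*$:

```python
import torch
from torch.distributions import Normal

# Placeholder posterior mean, standard deviation, and incumbent (assumed values).
mu, sigma, best_f = torch.tensor(0.5), torch.tensor(0.2), torch.tensor(0.3)

std_normal = Normal(0.0, 1.0)
z = (mu - best_f) / sigma
# EI(x) = sigma(x) * (z * Phi(z) + phi(z))
ei = sigma * (z * std_normal.cdf(z) + std_normal.log_prob(z).exp())
```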
17 changes: 13 additions & 4 deletions docs/getting_started.md → docs/getting_started.mdx
@@ -3,6 +3,9 @@ id: getting_started
title: Getting Started
---

+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
This section shows you how to get your feet wet with BoTorch.

Before jumping the gun, we recommend you start with the high-level
@@ -17,16 +20,22 @@ BoTorch is easily installed via `pip` (recommended). It is also possible to
use the (unofficial) [Anaconda](https://www.anaconda.com/distribution/#download-section)
package from the `-c conda-forge` channel.

-<!--DOCUSAURUS_CODE_TABS-->
-<!--pip-->
+<Tabs>
+<TabItem value="pip" label="pip" default>

```bash
pip install botorch
```
-<!--Conda-->

+</TabItem>
+<TabItem value="conda" label="Conda">

```bash
conda install botorch -c gpytorch -c conda-forge
```
-<!--END_DOCUSAURUS_CODE_TABS-->

+</TabItem>
+</Tabs>

For more installation options and detailed instructions, please see the
[Project Readme](https://github.com/pytorch/botorch/blob/main/README.md)
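Either way, a quick import confirms the installation succeeded:

```python
import botorch

print(botorch.__version__)
```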
2 changes: 1 addition & 1 deletion docs/objectives.md
@@ -43,7 +43,7 @@ inputs to a `sample_shape x batch_shape x q`-dimensional tensor of sampled
objective values.

For instance, say you have a multi-output model with $o=2$ outputs, and you want
-to optimize a $obj(y) = 1 - \\|y - y_0\\|_2$, where $y_0 \in \mathbb{R}^2$.
+to optimize a $obj(y) = 1 - \|y - y_0\|_2$, where $y_0 \in \mathbb{R}^2$.
For this you would use the following custom objective (here we can ignore the
inputs $X$ as the objective does not depend on it):
```python
import torch
from botorch.acquisition.objective import GenericMCObjective

# A minimal sketch: `y_0` is a placeholder reference point in R^2.
y_0 = torch.tensor([0.5, 0.5])
# `samples` has shape sample_shape x batch_shape x q x o (with o = 2);
# the norm reduces the o = 2 output dimension to one objective value per sample.
obj = GenericMCObjective(
    lambda samples, X=None: 1 - torch.linalg.norm(samples - y_0, dim=-1)
)
```
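An objective built this way can then be passed to an MC acquisition function via its `objective` argument; for instance (assuming a fitted `model` and an incumbent value `best_f` from the surrounding workflow):

```python
from botorch.acquisition.monte_carlo import qExpectedImprovement

# `model`, `best_f`, and `obj` are assumed from the surrounding workflow.
qEI = qExpectedImprovement(model, best_f=best_f, objective=obj)
```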
File renamed without changes.
File renamed without changes.