
Commit 7c436da

ryanrussell and hrzn authored

docs: readability improvements r2 (#1046)

* docs: readability improvements r2

  Signed-off-by: Ryan Russell <[email protected]>

* Update docs/userguide/forecasting_overview.md

Co-authored-by: Julien Herzen <[email protected]>
Co-authored-by: Julien Herzen <[email protected]>

1 parent e4608d1 · commit 7c436da

4 files changed: +9 -9 lines changed

docs/userguide/covariates.md (+1 -1)

@@ -184,7 +184,7 @@ all_past_covariates = [past_covariates1, past_covariates2, ...]
 model = NBEATSModel(input_chunk_length=1, output_chunk_length=1)
 
 model.fit(all_targets,
-          past_covariates=all_past_covarites)
+          past_covariates=all_past_covariates)
 
 pred = model.predict(n=1,
                      series=all_targets[0],

docs/userguide/forecasting_overview.md (+4 -4)

@@ -7,7 +7,7 @@ Below, we give an overview of what these features mean.
 
 ## Generalities
 
-All forecasting models work in the same way: first they are built (taking some hyper-paramers in argument), then they are fit on one or several series
+All forecasting models work in the same way: first they are built (taking some hyper-parameters in argument), then they are fit on one or several series
 by calling the `fit()` function, and finally they are used to obtain one or several forecasts by calling the `predict()` function.
 
 **Example:**
@@ -35,7 +35,7 @@ model.fit([series1, series2]) # fit on two series
 forecast = model.predict(series=[series3, series4], n=36) # predict potentially different series
 ```
 
-Furthermore, we define the following types of time series consummed by the models:
+Furthermore, we define the following types of time series consumed by the models:
 
 * **Target series:** the series that we are interested in forecasting.
 * **Covariate series:** some other series that we are not interested in forecasting, but that can provide valuable inputs to the forecasting model.
@@ -74,7 +74,7 @@ These models are shown with a "✅" under the `Multivariate` column on the [mode
 
 Some models support being fit on multiple time series. To do this, it is enough to simply provide a Python `Sequence` of `TimeSeries` (for instance a list of `TimeSeries`) to `fit()`. When a model is fit this way, the `predict()` function will expect the argument `series` to be set, containing
 one or several `TimeSeries` (i.e., a single or a `Sequence` of `TimeSeries`) that need to be forecasted.
-The advantage of training on multiple series is that a single model can be exposed to more patterns occuring across all series in the training dataset. That can often be beneficial, especially for more expre based models.
+The advantage of training on multiple series is that a single model can be exposed to more patterns occurring across all series in the training dataset. That can often be beneficial, especially for larger models with more capacity.
 
 In turn, the advantage of having `predict()` providing forecasts for potentially several series at once is that the computation can often be batched and vectorized across the multiple series, which is computationally faster than calling `predict()` multiple times on isolated series.
 
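For context on this hunk, a minimal sketch of fitting one model on several series and then getting the forecasts in a single batched `predict()` call could look as follows (the generated series and hyper-parameter values are illustrative, not taken from the docs):

```python
from darts.models import NBEATSModel
from darts.utils.timeseries_generation import linear_timeseries, sine_timeseries

# two illustrative training series (any list of TimeSeries works)
series1 = sine_timeseries(length=100)
series2 = linear_timeseries(length=100)

model = NBEATSModel(input_chunk_length=24, output_chunk_length=12, n_epochs=10)
model.fit([series1, series2])  # fit on two series at once

# batched forecast: one call returns a list with one forecast per input series
forecasts = model.predict(series=[series1, series2], n=36)
```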
@@ -167,7 +167,7 @@ pred.plot(label='forecast')
 ![TCN Laplace regression](./images/probabilistic/example_tcn_laplace.png)
 
 
-It is also possible to perform quantile regression (using arbitrary quantiles) with neural networks, by using [darts.utils.likelihood_models.QuantileRegression](https://unit8co.github.io/darts/generated_api/darts.utils.likelihood_models.html#darts.utils.likelihood_models.QuantileRegression), in which case the network will be trained with the pinball loss. This produces an empirical non-parametric distrution, and it can often be a good option in practice, when one is not sure of the "real" distribution, or when fitting parametric likelihoods give poor results.
+It is also possible to perform quantile regression (using arbitrary quantiles) with neural networks, by using [darts.utils.likelihood_models.QuantileRegression](https://unit8co.github.io/darts/generated_api/darts.utils.likelihood_models.html#darts.utils.likelihood_models.QuantileRegression), in which case the network will be trained with the pinball loss. This produces an empirical non-parametric distribution, and it can often be a good option in practice, when one is not sure of the "real" distribution, or when fitting parametric likelihoods give poor results.
 For example, the code snippet below is almost exactly the same as the preceding snippet; the only difference is that it now uses a `QuantileRegression` likelihood, which means that the neural network will be trained with a pinball loss, and its number of outputs will be dynamically configured to match the number of quantiles.
 
 ```python
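The snippet announced above is cut off at the hunk boundary; a hedged sketch of quantile regression with a darts neural model (the model choice, the `train_series` variable, and the quantile values are assumptions, not the snippet from the docs) might be:

```python
from darts.models import TCNModel
from darts.utils.likelihood_models import QuantileRegression

# train with the pinball loss at a few illustrative quantiles
model = TCNModel(
    input_chunk_length=24,
    output_chunk_length=12,
    n_epochs=100,
    likelihood=QuantileRegression(quantiles=[0.05, 0.25, 0.5, 0.75, 0.95]),
)
model.fit(train_series)  # `train_series` is an assumed TimeSeries

# draw samples to obtain a probabilistic forecast
pred = model.predict(n=36, num_samples=500)
```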

docs/userguide/gpu_and_tpu_usage.md (+1 -1)

@@ -82,7 +82,7 @@ Epoch 299: 100% 8/8 [00:00<00:00, 42.49it/s, loss=0.00285, v_num=logs]
 Now the model is ready to start predicting, which won't be shown here since it's included in the example linked in the start of this guide.
 
 ## Use a GPU
-GPUs can dramatically improve the performance of your model in terms of processing time. By using an Accelarator in the [Pytorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#accelerator), we can enjoy the benefits of a GPU. We only need to instruct our model to use our machine's GPU through PyTorch Lightning Trainer parameters, which are expressed as the `pl_trainer_kwargs` dictionary, like this:
+GPUs can dramatically improve the performance of your model in terms of processing time. By using an Accelerator in the [Pytorch Lightning Trainer](https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#accelerator), we can enjoy the benefits of a GPU. We only need to instruct our model to use our machine's GPU through PyTorch Lightning Trainer parameters, which are expressed as the `pl_trainer_kwargs` dictionary, like this:
 ```python
 my_model = RNNModel(
     model="RNN",

docs/userguide/torch_forecasting_models.md (+3 -3)

@@ -130,7 +130,7 @@ We also noticed that our ice-cream sales depend on the day of the week so we wan
 - past covariates: measured average daily temperatures in the past `temperature`
 - future covariates: day of the week for past and future `weekday`
 
-Checking Table 1, a model that would accomodate this kind of covariates would be a
+Checking Table 1, a model that would accommodate this kind of covariates would be a
 `SplitCovariatesModel` (if we don't use historic values of future covariates), or
 `MixedCovariatesModel` (if we do). We choose a `MixedCovariatesModel` - the `TFTModel`.
 
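As a sketch of the setup this hunk describes (series names follow the guide's ice-cream example; the hyper-parameter values are illustrative):

```python
from darts.models import TFTModel

# `sales` is the target, `temperature` a past covariate,
# `weekday` a future covariate (all assumed to be TimeSeries)
model = TFTModel(input_chunk_length=30, output_chunk_length=1)
model.fit(sales, past_covariates=temperature, future_covariates=weekday)
```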
@@ -233,7 +233,7 @@ want to predict - the forecast horizon `n` - we distinguish between two cases:
 - If `n > output_chunk_length`: we must predict `n` by calling the internal model multiple times. Each call outputs `output_chunk_length` prediction points. We go through as many calls as needed until we get to the final `n` prediction points, in an auto-regressive fashion.
   - in our example: predict ice-cream sales for the next 3 days at once (`n = 3`)
 
-To do this we have to supply additional `past_covariates` for the next `n - output_chunk_length = 2` time steps (days) after the end of our 365 days training data. Unfortunately, we do not have measured `temperture` for the future. But let's assume we have access to temperature forecasts for the next 2 days. We can just append them to `temperature` and the prediction will work!
+To do this we have to supply additional `past_covariates` for the next `n - output_chunk_length = 2` time steps (days) after the end of our 365 days training data. Unfortunately, we do not have measured `temperature` for the future. But let's assume we have access to temperature forecasts for the next 2 days. We can just append them to `temperature` and the prediction will work!
 
 ```python
 temperature = temperature.concatenate(temperature_forecast, axis=0)
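The code block is cut off after the concatenation; continuing the sketch, the auto-regressive call could be (assuming the model and covariate series from the guide's example):

```python
# with `temperature` extended by the 2-day forecast, the model can
# auto-regress past its output_chunk_length
pred = model.predict(n=3,
                     past_covariates=temperature,
                     future_covariates=weekday)
```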
@@ -271,7 +271,7 @@ In many cases using a GPU will provide a drastic speedup compared to CPU.
 It can also incur some overheads (for transferring data to/from the GPU),
 so some testing and tuning is often necessary.
 We refer to our [GPU/TPU guide](https://unit8co.github.io/darts/userguide/gpu_and_tpu_usage.html)
-for more informations on how to setup a GPU (or a TPU) via PyTorch Lightning.
+for more information on how to setup a GPU (or a TPU) via PyTorch Lightning.
 
 ### Tune the batch size
 A larger batch size tends to speed up the training because it reduces the number
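The batch-size paragraph is truncated at the hunk boundary; the knob it refers to is the `batch_size` constructor argument of the torch-based models. A sketch (the value is illustrative):

```python
from darts.models import TCNModel

# a larger batch size means fewer weight updates per epoch;
# tune it against available (GPU) memory
model = TCNModel(input_chunk_length=24, output_chunk_length=12, batch_size=1024)
```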
