
Commit b48a8e6

Merge branch 'master' into feat/gridsearch-returns-backtest-score
2 parents: e732572 + 7ca7801

8 files changed: +120 -107 lines

darts/models/forecasting/block_rnn_model.py (+17 -15)

@@ -181,22 +181,23 @@ def __init__(
             This parameter will be ignored for probabilistic models if the ``likelihood`` parameter is specified.
             Default: ``torch.nn.MSELoss()``.
         likelihood
-            The likelihood model to be used for probabilistic forecasts.
+            One of Darts' :meth:`Likelihood <darts.utils.likelihood_models.Likelihood>` models to be used for
+            probabilistic forecasts. Default: ``None``.
         optimizer_cls
-            The PyTorch optimizer class to be used (default: ``torch.optim.Adam``).
+            The PyTorch optimizer class to be used. Default: ``torch.optim.Adam``.
         optimizer_kwargs
             Optionally, some keyword arguments for the PyTorch optimizer (e.g., ``{'lr': 1e-3}``
             for specifying a learning rate). Otherwise the default values of the selected ``optimizer_cls``
-            will be used.
+            will be used. Default: ``None``.
         lr_scheduler_cls
             Optionally, the PyTorch learning rate scheduler class to be used. Specifying ``None`` corresponds
-            to using a constant learning rate.
+            to using a constant learning rate. Default: ``None``.
         lr_scheduler_kwargs
-            Optionally, some keyword arguments for the PyTorch learning rate scheduler.
+            Optionally, some keyword arguments for the PyTorch learning rate scheduler. Default: ``None``.
         batch_size
-            Number of time series (input and output sequences) used in each training pass.
+            Number of time series (input and output sequences) used in each training pass. Default: ``32``.
         n_epochs
-            Number of epochs over which to train the model.
+            Number of epochs over which to train the model. Default: ``100``.
         model_name
             Name of the model. Used for creating checkpoints and saving tensorboard data. If not specified,
             defaults to the following string ``"YYYY-mm-dd_HH:MM:SS_torch_model_run_PID"``, where the initial part

@@ -205,13 +206,13 @@ def __init__(
             ``"2021-06-14_09:53:32_torch_model_run_44607"``.
         work_dir
             Path of the working directory, where to save checkpoints and Tensorboard summaries.
-            (default: current working directory).
+            Default: current working directory.
         log_tensorboard
             If set, use Tensorboard to log the different parameters. The logs will be located in:
-            ``"{work_dir}/darts_logs/{model_name}/logs/"``.
+            ``"{work_dir}/darts_logs/{model_name}/logs/"``. Default: ``False``.
         nr_epochs_val_period
             Number of epochs to wait before evaluating the validation loss (if a validation
-            ``TimeSeries`` is passed to the :func:`fit()` method).
+            ``TimeSeries`` is passed to the :func:`fit()` method). Default: ``1``.
         torch_device_str
             Optionally, a string indicating the torch device to use. By default, ``torch_device_str`` is ``None``
             which will run on CPU. Set it to ``"cuda"`` to use all available GPUs or ``"cuda:i"`` to only use

@@ -232,21 +233,21 @@ def __init__(
             https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
         force_reset
             If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
-            be discarded).
+            be discarded). Default: ``False``.
         save_checkpoints
             Whether or not to automatically save the untrained model and checkpoints from training.
             To load the model from checkpoint, call :func:`MyModelClass.load_from_checkpoint()`, where
             :class:`MyModelClass` is the :class:`TorchForecastingModel` class that was used (such as :class:`TFTModel`,
             :class:`NBEATSModel`, etc.). If set to ``False``, the model can still be manually saved using
-            :func:`save_model()` and loaded using :func:`load_model()`.
+            :func:`save_model()` and loaded using :func:`load_model()`. Default: ``False``.
         add_encoders
             A large number of past and future covariates can be automatically generated with `add_encoders`.
             This can be done by adding multiple pre-defined index encoders and/or custom user-made functions that
             will be used as index encoders. Additionally, a transformer such as Darts' :class:`Scaler` can be added to
             transform the generated covariates. This happens all under one hood and only needs to be specified at
             model creation.
             Read :meth:`SequentialEncoder <darts.utils.data.encoders.SequentialEncoder>` to find out more about
-            ``add_encoders``. An example showing some of ``add_encoders`` features:
+            ``add_encoders``. Default: ``None``. An example showing some of ``add_encoders`` features:

             .. highlight:: python
             .. code-block:: python

@@ -262,14 +263,15 @@ def __init__(
         random_state
             Control the randomness of the weights initialization. Check this
             `link <https://scikit-learn.org/stable/glossary.html#term-random_state>`_ for more details.
+            Default: ``None``.
         pl_trainer_kwargs
             By default :class:`TorchForecastingModel` creates a PyTorch Lightning Trainer with several useful presets
             that performs the training, validation and prediction processes. These presets include automatic
             checkpointing, tensorboard logging, setting the torch device and more.
             With ``pl_trainer_kwargs`` you can add additional kwargs to instantiate the PyTorch Lightning trainer
             object. Check the `PL Trainer documentation
             <https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html>`_ for more information about the
-            supported kwargs.
+            supported kwargs. Default: ``None``.
             With parameter ``"callbacks"`` you can add custom or PyTorch-Lightning built-in callbacks to Darts'
             :class:`TorchForecastingModel`. Below is an example for adding EarlyStopping to the training process.
             The model will stop training early if the validation loss `val_loss` does not improve beyond

@@ -298,7 +300,7 @@ def __init__(
             parameter ``trainer`` in :func:`fit()` and :func:`predict()`.
         show_warnings
             whether to show warnings raised from PyTorch Lightning. Useful to detect potential issues of
-            your forecasting use case.
+            your forecasting use case. Default: ``False``.
         """
         super().__init__(**self._extract_torch_model_params(**self.model_params))

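For orientation, here is a minimal sketch of how the parameters documented above surface when creating a ``BlockRNNModel``. The toy series and the hyperparameter values are illustrative, not taken from the commit:

    import numpy as np

    from darts import TimeSeries
    from darts.models import BlockRNNModel
    from darts.utils.likelihood_models import GaussianLikelihood

    # Toy sine-wave series; any univariate TimeSeries works here.
    series = TimeSeries.from_values(np.sin(np.linspace(0, 20, 200)))

    model = BlockRNNModel(
        input_chunk_length=24,
        output_chunk_length=12,
        likelihood=GaussianLikelihood(),  # Default: None (deterministic forecasts)
        optimizer_kwargs={"lr": 1e-3},    # Default: None (optimizer_cls defaults)
        batch_size=32,                    # Default: 32
        n_epochs=5,                       # Default: 100; kept small for the demo
    )
    model.fit(series)
    # num_samples > 1 draws Monte Carlo samples and requires a likelihood.
    pred = model.predict(n=12, num_samples=100)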
darts/models/forecasting/nbeats.py (+17 -15)

@@ -514,22 +514,23 @@ def __init__(
             This parameter will be ignored for probabilistic models if the ``likelihood`` parameter is specified.
             Default: ``torch.nn.MSELoss()``.
         likelihood
-            The likelihood model to be used for probabilistic forecasts.
+            One of Darts' :meth:`Likelihood <darts.utils.likelihood_models.Likelihood>` models to be used for
+            probabilistic forecasts. Default: ``None``.
         optimizer_cls
-            The PyTorch optimizer class to be used (default: ``torch.optim.Adam``).
+            The PyTorch optimizer class to be used. Default: ``torch.optim.Adam``.
         optimizer_kwargs
             Optionally, some keyword arguments for the PyTorch optimizer (e.g., ``{'lr': 1e-3}``
             for specifying a learning rate). Otherwise the default values of the selected ``optimizer_cls``
-            will be used.
+            will be used. Default: ``None``.
         lr_scheduler_cls
             Optionally, the PyTorch learning rate scheduler class to be used. Specifying ``None`` corresponds
-            to using a constant learning rate.
+            to using a constant learning rate. Default: ``None``.
         lr_scheduler_kwargs
-            Optionally, some keyword arguments for the PyTorch learning rate scheduler.
+            Optionally, some keyword arguments for the PyTorch learning rate scheduler. Default: ``None``.
         batch_size
-            Number of time series (input and output sequences) used in each training pass.
+            Number of time series (input and output sequences) used in each training pass. Default: ``32``.
         n_epochs
-            Number of epochs over which to train the model.
+            Number of epochs over which to train the model. Default: ``100``.
         model_name
             Name of the model. Used for creating checkpoints and saving tensorboard data. If not specified,
             defaults to the following string ``"YYYY-mm-dd_HH:MM:SS_torch_model_run_PID"``, where the initial part

@@ -538,13 +539,13 @@ def __init__(
             ``"2021-06-14_09:53:32_torch_model_run_44607"``.
         work_dir
             Path of the working directory, where to save checkpoints and Tensorboard summaries.
-            (default: current working directory).
+            Default: current working directory.
         log_tensorboard
             If set, use Tensorboard to log the different parameters. The logs will be located in:
-            ``"{work_dir}/darts_logs/{model_name}/logs/"``.
+            ``"{work_dir}/darts_logs/{model_name}/logs/"``. Default: ``False``.
         nr_epochs_val_period
             Number of epochs to wait before evaluating the validation loss (if a validation
-            ``TimeSeries`` is passed to the :func:`fit()` method).
+            ``TimeSeries`` is passed to the :func:`fit()` method). Default: ``1``.
         torch_device_str
             Optionally, a string indicating the torch device to use. By default, ``torch_device_str`` is ``None``
             which will run on CPU. Set it to ``"cuda"`` to use all available GPUs or ``"cuda:i"`` to only use

@@ -565,21 +566,21 @@ def __init__(
             https://pytorch-lightning.readthedocs.io/en/stable/advanced/multi_gpu.html#select-gpu-devices
         force_reset
             If set to ``True``, any previously-existing model with the same name will be reset (all checkpoints will
-            be discarded).
+            be discarded). Default: ``False``.
         save_checkpoints
             Whether or not to automatically save the untrained model and checkpoints from training.
             To load the model from checkpoint, call :func:`MyModelClass.load_from_checkpoint()`, where
             :class:`MyModelClass` is the :class:`TorchForecastingModel` class that was used (such as :class:`TFTModel`,
             :class:`NBEATSModel`, etc.). If set to ``False``, the model can still be manually saved using
-            :func:`save_model()` and loaded using :func:`load_model()`.
+            :func:`save_model()` and loaded using :func:`load_model()`. Default: ``False``.
         add_encoders
             A large number of past and future covariates can be automatically generated with `add_encoders`.
             This can be done by adding multiple pre-defined index encoders and/or custom user-made functions that
             will be used as index encoders. Additionally, a transformer such as Darts' :class:`Scaler` can be added to
             transform the generated covariates. This happens all under one hood and only needs to be specified at
             model creation.
             Read :meth:`SequentialEncoder <darts.utils.data.encoders.SequentialEncoder>` to find out more about
-            ``add_encoders``. An example showing some of ``add_encoders`` features:
+            ``add_encoders``. Default: ``None``. An example showing some of ``add_encoders`` features:

             .. highlight:: python
             .. code-block:: python

@@ -595,14 +596,15 @@ def __init__(
         random_state
             Control the randomness of the weights initialization. Check this
             `link <https://scikit-learn.org/stable/glossary.html#term-random_state>`_ for more details.
+            Default: ``None``.
         pl_trainer_kwargs
             By default :class:`TorchForecastingModel` creates a PyTorch Lightning Trainer with several useful presets
             that performs the training, validation and prediction processes. These presets include automatic
             checkpointing, tensorboard logging, setting the torch device and more.
             With ``pl_trainer_kwargs`` you can add additional kwargs to instantiate the PyTorch Lightning trainer
             object. Check the `PL Trainer documentation
             <https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html>`_ for more information about the
-            supported kwargs.
+            supported kwargs. Default: ``None``.
             With parameter ``"callbacks"`` you can add custom or PyTorch-Lightning built-in callbacks to Darts'
             :class:`TorchForecastingModel`. Below is an example for adding EarlyStopping to the training process.
             The model will stop training early if the validation loss `val_loss` does not improve beyond

@@ -631,7 +633,7 @@ def __init__(
             parameter ``trainer`` in :func:`fit()` and :func:`predict()`.
         show_warnings
             whether to show warnings raised from PyTorch Lightning. Useful to detect potential issues of
-            your forecasting use case.
+            your forecasting use case. Default: ``False``.

         References
         ----------

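The ``pl_trainer_kwargs`` entry above mentions attaching an EarlyStopping callback; a hedged sketch of that pattern follows (the patience and min_delta values are illustrative, and the series names are placeholders):

    from pytorch_lightning.callbacks.early_stopping import EarlyStopping

    from darts.models import NBEATSModel

    # Stop training once val_loss stops improving. This only takes effect if a
    # validation series is passed to fit(), so that val_loss is actually logged.
    stopper = EarlyStopping(monitor="val_loss", patience=5, min_delta=1e-4, mode="min")

    model = NBEATSModel(
        input_chunk_length=24,
        output_chunk_length=12,
        nr_epochs_val_period=1,                      # Default: 1 (validate every epoch)
        pl_trainer_kwargs={"callbacks": [stopper]},  # Default: None
    )
    # model.fit(train_series, val_series=val_series)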
darts/models/forecasting/pl_forecasting_module.py (+8 -7)

@@ -55,18 +55,19 @@ def __init__(
             This parameter will be ignored for probabilistic models if the ``likelihood`` parameter is specified.
             Default: ``torch.nn.MSELoss()``.
         likelihood
-            The likelihood model to be used for probabilistic forecasts.
+            One of Darts' :meth:`Likelihood <darts.utils.likelihood_models.Likelihood>` models to be used for
+            probabilistic forecasts. Default: ``None``.
         optimizer_cls
-            The PyTorch optimizer class to be used (default: ``torch.optim.Adam``).
+            The PyTorch optimizer class to be used. Default: ``torch.optim.Adam``.
         optimizer_kwargs
             Optionally, some keyword arguments for the PyTorch optimizer (e.g., ``{'lr': 1e-3}``
             for specifying a learning rate). Otherwise the default values of the selected ``optimizer_cls``
-            will be used.
+            will be used. Default: ``None``.
         lr_scheduler_cls
             Optionally, the PyTorch learning rate scheduler class to be used. Specifying ``None`` corresponds
-            to using a constant learning rate.
+            to using a constant learning rate. Default: ``None``.
         lr_scheduler_kwargs
-            Optionally, some keyword arguments for the PyTorch learning rate scheduler.
+            Optionally, some keyword arguments for the PyTorch learning rate scheduler. Default: ``None``.
         """
         super().__init__()

@@ -117,15 +118,15 @@ def training_step(self, train_batch, batch_idx) -> torch.Tensor:
             -1
         ]  # By convention target is always the last element returned by datasets
         loss = self._compute_loss(output, target)
-        self.log("train_loss", loss, batch_size=train_batch[0].shape[0])
+        self.log("train_loss", loss, batch_size=train_batch[0].shape[0], prog_bar=True)
         return loss

     def validation_step(self, val_batch, batch_idx) -> torch.Tensor:
         """performs the validation step"""
         output = self._produce_train_output(val_batch[:-1])
         target = val_batch[-1]
         loss = self._compute_loss(output, target)
-        self.log("val_loss", loss, batch_size=val_batch[0].shape[0])
+        self.log("val_loss", loss, batch_size=val_batch[0].shape[0], prog_bar=True)
         return loss

     def predict_step(

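The only non-docstring change in this commit is the ``prog_bar=True`` flag on the two ``self.log()`` calls. A small self-contained sketch of what that flag does in PyTorch Lightning (the module below is hypothetical, not Darts code):

    import torch
    import pytorch_lightning as pl

    class TinyModule(pl.LightningModule):
        """Hypothetical module illustrating prog_bar=True on self.log()."""

        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(8, 1)

        def forward(self, x):
            return self.layer(x)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self(x), y)
            # prog_bar=True mirrors the change above: besides being sent to the
            # configured loggers, the metric is rendered live on the Lightning
            # progress bar during training.
            self.log("train_loss", loss, batch_size=x.shape[0], prog_bar=True)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

With the flag set, a Trainer running this module shows train_loss updating on the progress bar each step instead of appearing only in the logs.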