Releases · GRAAL-Research/poutyne
v1.11
v1.10.1
v1.10
- Add a WandB logger.
- Epoch and batch metrics are now unified. Their only difference is whether the metric is computed for each batch. The main interface is now the `Metric` class. It is compatible with TorchMetrics; thus, TorchMetrics metrics can now be passed as either batch or epoch metrics (see the sketch after this list). Metrics with the interface `metric(y_pred, y_true)` are internally wrapped into a `Metric` object and are still fully supported. The `torch_metrics` keyword argument and the `EpochMetric` class are now deprecated and will be removed in future versions.
- `Model.get_batch_size` is replaced by `poutyne.get_batch_size()`.
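Below is a minimal sketch of the unified metric interface, assuming a 10-class classification setup; the exact TorchMetrics constructor arguments depend on your torchmetrics version.

```python
import torch.nn as nn
import torchmetrics
from poutyne import Model

network = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model = Model(
    network,
    'sgd',
    'cross_entropy',
    batch_metrics=['accuracy'],  # classic interface, wrapped into a Metric internally
    # A TorchMetrics metric passed directly as an epoch metric.
    epoch_metrics=[torchmetrics.F1Score(task='multiclass', num_classes=10)],
)
```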
v1.9
- Add support for TorchMetrics metrics.
- `Experiment` is now an alias for `ModelBundle`, a class quite similar to `Experiment` except that it allows instantiating an "Experiment" from a Poutyne Model or a network (a hedged sketch follows this list).
- Add support for PackedSequence.
- Add a flag to `TensorBoardLogger` to allow putting training and validation metrics in different graphs. This allows a behavior closer to Keras.
- Add support for F-score on binary classification.
- Add a `convert_to_numpy` flag to be able to obtain tensors instead of NumPy arrays in `evaluate*` and `predict*`.
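A hedged sketch of building a `ModelBundle` directly from a network; the `from_network` class method and its arguments are inferred from these notes, so check the Poutyne documentation for the exact signature.

```python
import torch.nn as nn
from poutyne import ModelBundle

network = nn.Linear(784, 10)
# Creates the bundle and its working directory for checkpoints and logs.
bundle = ModelBundle.from_network(
    './my_experiment',
    network,
    optimizer='adam',
    task='classif',  # picks a classification loss and default metrics
)
```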
v1.8
Breaking changes:
- When using the epoch metrics `'f1'`, `'precision'`, `'recall'` and associated classes, the default average has been changed to `'macro'` instead of `'micro'`. This changes the names of the metrics that are displayed and that appear in the logs dictionary in callbacks. This change also applies to `Experiment` when using `task='classif'` (a sketch of pinning the average follows this section).
- Exceptions when loading checkpoints in `Experiment` are now propagated instead of being silenced.
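A minimal sketch of pinning the averaging mode explicitly to keep results stable across this change; the `F1` class and its `average` argument are assumptions based on these notes, so verify them against the Poutyne docs.

```python
import torch.nn as nn
from poutyne import Model, F1

network = nn.Linear(10, 3)
model = Model(
    network, 'sgd', 'cross_entropy',
    # The string 'f1' now defaults to macro averaging; constructing the class
    # explicitly restores the old pre-1.8 micro-averaged behavior.
    epoch_metrics=[F1(average='micro')],
)
```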
v1.7
- Add `plot_history` and `plot_metric` functions to easily plot the history returned by Poutyne (a sketch follows this list). `Experiment` also saves the figures at the end of the training.
- All text files (e.g. CSVs in `CSVLogger`) are now saved using UTF-8 on all platforms.
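A minimal, self-contained sketch of `plot_history` on a toy problem; `plot_history`'s optional arguments are omitted here and may vary by version.

```python
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
from poutyne import Model, plot_history

# Tiny synthetic regression task so the example runs on its own.
x, y = torch.randn(128, 4), torch.randn(128, 1)
model = Model(nn.Linear(4, 1), 'sgd', nn.MSELoss())
history = model.fit(x, y, epochs=5, batch_size=32, verbose=False)

plot_history(history)  # one figure per metric found in the history
plt.show()
```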
v1.6
- `PeriodicSaveCallback` and all its subclasses now have the `restore_best` argument.
- `Experiment` now contains a `monitoring` argument that can be set to `False` to avoid monitoring any metric and saving unneeded checkpoints.
- The format of the ETA time and total time now contains days, hours and minutes when appropriate.
- Add `predict` methods to `Callback` to allow callbacks to be called during the prediction phase.
- Add `infer` methods to `Experiment` to more easily make inferences (predictions) with an experiment.
- Add a progress bar callback during predictions of a model.
- Add a method to compare the results of two experiments.
- Add `return_ground_truth` and `has_ground_truth` arguments to `predict_dataset` and `predict_generator` (a sketch follows this list).
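A hedged sketch of the new prediction arguments; the returned layout (predictions alongside ground truths) is an assumption based on these notes.

```python
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset
from poutyne import Model

dataset = TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
model = Model(nn.Linear(4, 1), 'sgd', nn.MSELoss())
# has_ground_truth tells Poutyne the dataset yields (x, y) pairs;
# return_ground_truth also returns the y's alongside the predictions.
preds, ground_truths = model.predict_dataset(
    dataset, batch_size=32, has_ground_truth=True, return_ground_truth=True,
)
```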
v1.5
- Add `LambdaCallback` to more easily define a callback from lambdas or functions (see the sketch after this list).
- In Jupyter Notebooks, when coloring is enabled, the print rate of the progress output is limited to one output every 0.1 seconds. This solves the slowness problem (and the memory problem on Firefox) when there is a great number of steps per epoch.
- Add a `return_dict_format` argument to `train_on_batch` and `evaluate_on_batch`, and allow returning predictions and ground truths in `evaluate_*` even when `return_dict_format=True`. Furthermore, `Experiment.test*` now supports `return_pred=True` and `return_ground_truth=True`.
- Split the Tips and Tricks example into two examples: Tips and Tricks, and Sequence Tagging With an RNN.
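A hedged sketch of `LambdaCallback`; the assumption here is that each keyword argument mirrors the corresponding `Callback` method signature (for instance, `on_epoch_end` receiving the epoch number and the logs dict).

```python
from poutyne import LambdaCallback

# Prints the logs dict at the end of every epoch.
log_epochs = LambdaCallback(
    on_epoch_end=lambda epoch_number, logs: print(f"epoch {epoch_number}: {logs}")
)
# Used like any other callback:
# model.fit(x, y, epochs=5, callbacks=[log_epochs])
```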
v1.4
- Add examples for image reconstruction and semantic segmentation with Poutyne.
- Add the following flags in `ProgressionCallback`: `show_every_n_train_steps`, `show_every_n_valid_steps`, `show_every_n_test_steps`. They allow showing only certain steps instead of all steps (a sketch follows this list).
- Fix a bug where all warnings were silenced.
- Add a `strict` flag when loading checkpoints. In `Model`, a NamedTuple is returned as in PyTorch's `load_state_dict`. In `Experiment`, a warning is raised when there are missing or unexpected keys in the checkpoint.
- In `CSVLogger`, when multiple learning rates are used, the column names `lr_group_0`, `lr_group_1`, etc. are used instead of `lr`.
- Fix a bug where `EarlyStopping` would be one epoch late and would disregard the monitored metric at the last epoch anyway.