Releases: GRAAL-Research/poutyne

v1.11

28 Apr 20:55

Small release.

  • Remove support for Python 3.6, following PyTorch.
  • Add a Dockerfile.

v1.10.1

15 Mar 21:42
  • Major bug fix (bug introduced in v1.10): the state of the loss function was not reset after each epoch or evaluate call, so the
    returned values were averages over the whole lifecycle of the Model class.

v1.10

08 Mar 18:17
  • Add a WandB logger.
  • Epoch and batch metrics are now unified; their only difference is whether the metric is computed on each batch. The main interface is now the Metric class, which is compatible with TorchMetrics, so TorchMetrics metrics can now be passed as either batch or epoch metrics (see the sketch after this list). Metrics with the interface metric(y_pred, y_true) are internally wrapped into a Metric object and remain fully supported. The torch_metrics keyword argument and the EpochMetric class are now deprecated and will be removed in future versions.
  • Model.get_batch_size is replaced by poutyne.get_batch_size().
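
A minimal sketch of the unified metric interface, assuming torchmetrics is installed (the F1Score constructor arguments depend on the torchmetrics version; the toy network and hyperparameters are illustrative):

```python
import torch.nn as nn
import torchmetrics
from poutyne import Model

network = nn.Linear(10, 3)  # toy 3-class classifier on 10 features

# A TorchMetrics metric can be passed as either a batch metric or an epoch metric.
model = Model(
    network,
    'sgd',
    'cross_entropy',
    batch_metrics=['accuracy'],
    epoch_metrics=[torchmetrics.F1Score(task='multiclass', num_classes=3)],
)
```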

v1.9

18 Feb 22:48
  • Add support for TorchMetrics metrics.
  • Experiment is now an alias for ModelBundle, a class quite similar to Experiment except that it allows instantiating an "Experiment" from a Poutyne Model or a network (see the sketch after this list).
  • Add support for PackedSequence.
  • Add a flag to TensorBoardLogger to allow putting training and validation metrics in separate graphs. This allows behavior closer to Keras.
  • Add support for fscore on binary classification.
  • Add a convert_to_numpy flag to allow obtaining tensors instead of NumPy arrays in evaluate* and predict*.
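
A hedged sketch of creating a ModelBundle from a network (the exact from_network signature may differ; the directory and hyperparameters below are illustrative):

```python
import torch.nn as nn
from poutyne import ModelBundle

network = nn.Linear(10, 3)

# Experiment is now an alias for ModelBundle; a bundle can be created directly
# from a network (or, alternatively, from an existing Poutyne Model).
bundle = ModelBundle.from_network(
    './classification_bundle',  # working directory for checkpoints and logs (illustrative)
    network,
    optimizer='sgd',
    loss_function='cross_entropy',
    batch_metrics=['accuracy'],
)
```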

v1.8

17 Dec 18:16

Breaking changes:

  • When using the epoch metrics 'f1', 'precision', 'recall' and the associated classes, the default average has been changed from 'micro' to 'macro'. This changes the names of the metrics displayed and stored in the log dictionary in callbacks. The change also applies to Experiment when using task='classif'. A sketch of how to restore the previous average follows this list.
  • Exceptions when loading checkpoints in Experiment are now propagated instead of being silenced.
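
If the former behavior is needed, the averaging can presumably be selected explicitly on the metric classes; a minimal sketch, assuming the F1 epoch-metric class accepts an average argument:

```python
import torch.nn as nn
from poutyne import Model, F1

network = nn.Linear(10, 3)

# Explicitly request the former 'micro' averaging instead of the new 'macro' default.
model = Model(
    network,
    'sgd',
    'cross_entropy',
    epoch_metrics=[F1(average='micro')],
)
```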

v1.7

30 Oct 18:41
  • Add plot_history and plot_metric functions to easily plot the history returned by Poutyne (see the sketch after this list). Experiment also saves the figures at the end of training.
  • All text files (e.g. CSVs in CSVLogger) are now saved using UTF-8 on all platforms.
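
A minimal sketch of plotting a training history, assuming matplotlib is installed (the toy network and data are illustrative):

```python
import torch
from torch import nn
from poutyne import Model, plot_history

network = nn.Linear(10, 3)
model = Model(network, 'sgd', 'cross_entropy', batch_metrics=['accuracy'])

# Toy data just to produce a training history.
x = torch.randn(64, 10)
y = torch.randint(3, (64,))
history = model.fit(x, y, epochs=5, batch_size=8)

# Plot the loss and each metric over the epochs.
plot_history(history)
```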

v1.6

27 Aug 21:31
  • PeriodicSaveCallback and all its subclasses now have the restore_best argument.
  • Experiment now has a monitoring argument that can be set to False to avoid monitoring any metric and saving unneeded checkpoints.
  • The format of the ETA time and total time now contains days, hours, minutes when appropriate.
  • Add predict methods to Callback to allow callbacks to be called during the prediction phase.
  • Add infer methods to Experiment to more easily make inference (predictions) with an experiment.
  • Add a progress bar callback during predictions of a model.
  • Add a method to compare the results of two experiments.
  • Add return_ground_truth and has_ground_truth arguments to predict_dataset and predict_generator (see the sketch after this list).
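
A hedged sketch of the new prediction arguments (the dataset is illustrative; the tuple return value when return_ground_truth=True is assumed):

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset
from poutyne import Model

network = nn.Linear(10, 3)
model = Model(network, 'sgd', 'cross_entropy')

dataset = TensorDataset(torch.randn(32, 10), torch.randint(3, (32,)))

# The dataset yields (x, y) pairs, so ground truths can be returned with the predictions.
predictions, ground_truths = model.predict_dataset(
    dataset,
    batch_size=8,
    has_ground_truth=True,
    return_ground_truth=True,
)
```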

v1.5

22 May 14:42
  • Add LambdaCallback to more easily define a callback from lambdas or functions (see the sketch after this list).
  • In Jupyter Notebooks, when coloring is enabled, the print rate of the progress output is limited to one output every 0.1 seconds. This solves the slowness problem (and the memory problem on Firefox) when there is a large number of steps per epoch.
  • Add a return_dict_format argument to train_on_batch and evaluate_on_batch, and allow returning predictions and ground truths in evaluate_* even when return_dict_format=True. Furthermore, Experiment.test* now supports return_pred=True and return_ground_truth=True.
  • Split Tips and Tricks example into two examples: Tips and Tricks and Sequence Tagging With an RNN.
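
A minimal sketch of LambdaCallback; the keyword name and function signature are assumed to mirror the corresponding Callback method:

```python
from poutyne import LambdaCallback

# Build a callback from a plain function instead of subclassing Callback.
print_logs_callback = LambdaCallback(
    on_epoch_end=lambda epoch_number, logs: print(f"Epoch {epoch_number}: {logs}"),
)

# It can then be passed like any other callback, e.g.:
# model.fit(x, y, epochs=5, callbacks=[print_logs_callback])
```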

v1.4

15 Apr 15:29
  • Add examples for image reconstruction and semantic segmentation with Poutyne.
  • Add the following flags to ProgressionCallback: show_every_n_train_steps, show_every_n_valid_steps, show_every_n_test_steps. They allow showing only certain steps instead of all of them (see the sketch after this list).
  • Fix bug where all warnings were silenced.
  • Add strict flag when loading checkpoints. In Model, a NamedTuple is returned as in PyTorch's load_state_dict. In
    Experiment, a warning is raised when there are missing or unexpected keys in the checkpoint.
  • In CSVLogger, when multiple learning rates are used, we use the column names lr_group_0, lr_group_1, etc. instead
    of lr.
  • Fix bug where EarlyStopping was one epoch late and, in any case, disregarded the monitored metric at the last epoch.
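
A hedged sketch of the ProgressionCallback flags (the flag names come from the note above; the values and usage are illustrative):

```python
from poutyne import ProgressionCallback

# Show the progress output only every 100 training steps and every 10 validation
# steps, instead of at every step.
progression = ProgressionCallback(
    show_every_n_train_steps=100,
    show_every_n_valid_steps=10,
)

# The callback would then typically be passed through the callbacks argument of
# Model.fit* (usage assumed).
```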

v1.3.1

05 Mar 18:27
  • Bug fix: changing the GPU device twice with an optimizer having a state would crash.