
TQDMProgressBar not working in TF-2.2.0rc1 #1495

Closed
bthorsted opened this issue Mar 30, 2020 · 3 comments · Fixed by #1595
Labels
bug Something isn't working
Comments

@bthorsted

System information

  • OS Platform and Distribution: Linux Ubuntu 18.04
  • TensorFlow version and how it was installed (source or binary): TF-2.2.0rc1 (wheel compiled from source)
  • TensorFlow-Addons version and how it was installed (source or binary): 0.8.3 installed via pip
  • Python version: 3.7.6
  • Is GPU used? (yes/no): Yes

Describe the bug

Executing model.fit() with the TQDMProgressBar() callback results in KeyError: 'metrics', because of a change in TF-2.2 that moves initialization of model.metrics (and model.metrics_names) from the compile stage to the train stage.
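Until a fix lands in Addons, the missing key can be sidestepped with a defensive lookup. The sketch below is not the Addons implementation, just a hypothetical helper (`resolve_metrics` is an assumed name) illustrating the fallback: read `params["metrics"]` when present (TF < 2.2 behavior), otherwise fall back to the model's `metrics_names`, which TF 2.2 populates only once training has started.

```python
def resolve_metrics(params, model=None):
    """Return the metric names a progress-bar callback should track.

    TF < 2.2 populated params["metrics"] at compile time; TF 2.2 moved
    that initialization to the train stage, so the key can be missing
    when on_train_begin fires. Fall back to model.metrics_names (set
    once training starts) or an empty list.
    """
    if "metrics" in params:
        return params["metrics"]
    if model is not None and getattr(model, "metrics_names", None):
        return list(model.metrics_names)
    return []
```

Inside a callback this would be called as `resolve_metrics(self.params, self.model)` instead of indexing `self.params["metrics"]` directly.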

Code to reproduce the issue

import numpy as np
import tensorflow as tf
import tensorflow_addons as tfa

# Dummy data shaped to match the model: 5 samples, 3 features in, 2 out.
x = np.random.random((5, 3))
y = np.random.random((5, 2))

inputs = tf.keras.layers.Input(shape=(3,))
outputs = tf.keras.layers.Dense(2, name="out_1")(inputs)
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="Adam", loss="mse", metrics=["acc"])

pg = tfa.callbacks.TQDMProgressBar()
model_callbacks = [pg]
VERBOSE = 0  # silence the built-in progress bar; TQDMProgressBar replaces it
history = model.fit(
    x,
    y,
    epochs=100,
    verbose=VERBOSE,
    callbacks=model_callbacks,
)

Other info / logs

---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-23-fdbb03f574a1> in <module>
     48 #   class_weight=class_weights,
     49     verbose=VERBOSE,
---> 50     callbacks=model_callbacks,
     51 )

~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
     63   def _method_wrapper(self, *args, **kwargs):
     64     if not self._in_multi_worker_mode():  # pylint: disable=protected-access
---> 65       return method(self, *args, **kwargs)
     66 
     67     # Running inside `run_distribute_coordinator` already.

~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
    763       self.stop_training = False
    764       train_function = self.make_train_function()
--> 765       callbacks.on_train_begin()
    766       # Handle fault-tolerance for multi-worker.
    767       # TODO(omalleyt): Fix the ordering issues that mean this has to

~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow/python/keras/callbacks.py in on_train_begin(self, logs)
    445     logs = self._process_logs(logs)
    446     for callback in self.callbacks:
--> 447       callback.on_train_begin(logs)
    448 
    449   def on_train_end(self, logs=None):

~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/tensorflow_addons/callbacks/tqdm_progress_bar.py in on_train_begin(self, logs)
    100     def on_train_begin(self, logs=None):
    101         self.num_epochs = self.params["epochs"]
--> 102         self.metrics = self.params["metrics"]
    103 
    104         if self.show_overall_progress:

KeyError: 'metrics'
@gabrieldemarmiesse
Member

I believe that it's going to be fixed with #1365. We lack tests for these callbacks, which explains why they break unexpectedly.

@bthorsted
Author

bthorsted commented Mar 30, 2020

@gabrieldemarmiesse I can confirm that the patch you mentioned fixed my issue 👍 It does, however, introduce a new issue: validation metrics are no longer printed at the end of an epoch.

@gabrieldemarmiesse
Member

Yeah, we definitely lack proper testing for the callbacks in Addons.
