
[Bug]: TensorFlow RunInference signature dtype issue #23756

Closed
yeandy opened this issue Oct 20, 2022 · 5 comments
Labels: bug, done & done, ml, P2, python, run-inference

Comments

yeandy (Contributor) commented Oct 20, 2022

What happened?

I am trying to use the TensorFlow MobileNet v2 320x320 model in RunInference, using the API from tfx_bsl.

But I'm running into the error below (see also #23754). It looks like a signature dtype issue: the expected input type is string (enum value 7), but the dtype of the MobileNet model's input is uint8 (enum value 4).

2022-10-20 11:40:58.577845: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Traceback (most recent call last):
  File "/Users/yeandy/.pyenv/versions/3.8.9/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/yeandy/.pyenv/versions/3.8.9/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/yeandy/projects/beam/sdks/python/apache_beam/examples/inference/tensorflow_object_detection.py", line 260, in <module>
    run()
  File "/Users/yeandy/projects/beam/sdks/python/apache_beam/examples/inference/tensorflow_object_detection.py", line 233, in run
    tf_model_handler = CreateModelHandler(inference_spec_type)
  File "/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/public/beam/run_inference.py", line 281, in CreateModelHandler
    return run_inference.create_model_handler(inference_spec_type, None, None)
  File "/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py", line 125, in create_model_handler
    return _get_saved_model_handler(inference_spec_type, load_override_fn)
  File "/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py", line 243, in _get_saved_model_handler
    return _PredictModelHandler(inference_spec_type, load_override_fn)
  File "/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py", line 586, in __init__
    self._io_tensor_spec = self._make_io_tensor_spec()
  File "/Users/yeandy/.pyenv/versions/mypy-pytorch3.8/lib/python3.8/site-packages/tfx_bsl/beam/run_inference.py", line 626, in _make_io_tensor_spec
    raise ValueError(
ValueError: Input dtype is expected to be 7, got 4

Original command:

python -m apache_beam.examples.inference.tensorflow_object_detection \
--input gs://apache-beam-ml/testing/inputs/tensorrt_image_file_names.txt \
--output tf_predictions_val2017.txt \
--model_path gs://apache-beam-testing-yeandy/tfx-inference/model/ssd_mobilenet_v2_320x320_coco17_tpu-8/
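
For reference, the enum values in the error are TensorFlow DataType values: 7 is DT_STRING and 4 is DT_UINT8. A quick way to confirm the serving-signature dtype (a minimal sketch, assuming a local copy of the SavedModel at an illustrative path):

import tensorflow as tf

# Print the input name and dtype of the SavedModel's serving signature.
model = tf.saved_model.load('ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model')
signature = model.signatures['serving_default']
for spec in signature.structured_input_signature[1].values():
    print(spec.name, spec.dtype)  # for this model, expect a uint8 input tensor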

Issue Priority

Priority: 2

Issue Component

Component: run-inference

damccorm (Contributor) commented

This is related to #23584, which is caused by https://github.com/tensorflow/tfx-bsl/blob/49baf317527cc53d2dfb7d6c37b59757cd6d83a8/tfx_bsl/beam/run_inference.py#L626. Basically, tfx-bsl's implementation of their model handler requires a string input.

You can work around it by adding a tf.function, like we do in our notebook, to convert from string records to uint8 records, and then converting your input types to string tf.Examples: https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_tensorflow.ipynb. It's ugly, but it should be workable.
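
Roughly, that workaround looks like the sketch below: wrap the model in a tf.function whose signature accepts serialized tf.Example strings, parse and decode to uint8 inside the function, and re-export. This is only a sketch; the feature key 'image/encoded', the JPEG decode, and the fixed 320x320 resize are illustrative assumptions, not taken verbatim from the notebook.

import tensorflow as tf

# Load the original detection model (uint8 image input). Path is illustrative.
model = tf.saved_model.load('ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model')

@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def serve_examples(serialized_examples):
  # Parse serialized tf.Example records and pull out the encoded image bytes.
  features = tf.io.parse_example(
      serialized_examples,
      {'image/encoded': tf.io.FixedLenFeature([], tf.string)})
  # Decode each image to a fixed-size uint8 tensor so the batch stacks cleanly.
  def decode(raw):
    image = tf.io.decode_jpeg(raw, channels=3)
    return tf.cast(tf.image.resize(image, [320, 320]), tf.uint8)
  images = tf.map_fn(decode, features['image/encoded'],
                     fn_output_signature=tf.uint8)
  # The loaded detection model is callable on a uint8 image batch.
  return model(images)

# Re-export with the string-typed signature that tfx_bsl's handler expects.
tf.saved_model.save(model, 'model_with_string_signature',
                    signatures={'serving_default': serve_examples})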

yeandy (Contributor, Author) commented Oct 20, 2022

Thanks for the info. This confirms what I was seeing. @AnandInguva also pointed me to his PR #23456 that introduces a helper function to assist with this conversion.

yeandy (Contributor, Author) commented Oct 24, 2022

To fix this, use the function at https://github.com/apache/beam/blob/8f85a6cdf8d0d71218137fbec149d13ae6fc595b/sdks/python/apache_beam/examples/inference/tfx_bsl/build_tensorflow_model.py#L71. You can pull the PR, or just copy the code directly.

Make sure you have the saved_model directory of the TF model, then run the following in your Python shell:

import tensorflow as tf
from apache_beam.examples.inference.tfx_bsl.build_tensorflow_model import save_tf_model_with_signature

# Load the original SavedModel (uint8 input signature).
path = '/my_path/ssd_mobilenet_v2_320x320_coco17_tpu-8/saved_model'
ssd_mobilenet = tf.saved_model.load(path)

# Re-export it with a string serving signature for tfx_bsl.
save_tf_model_with_signature('/my_path/ssd_mobilenet_v2_320x320_coco17_tpu-8_string_signature', ssd_mobilenet, input_dtype=tf.uint8)
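
After re-exporting, point the pipeline at the new model directory when re-running the original command (the local path below is illustrative; use wherever you saved or uploaded the re-exported model):

python -m apache_beam.examples.inference.tensorflow_object_detection \
--input gs://apache-beam-ml/testing/inputs/tensorrt_image_file_names.txt \
--output tf_predictions_val2017.txt \
--model_path /my_path/ssd_mobilenet_v2_320x320_coco17_tpu-8_string_signature/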

AnandInguva (Contributor) commented

I think we can close this issue. Please reopen it if there is anything else.

AnandInguva (Contributor) commented

.close-issue

damccorm added the done & done label on Nov 15, 2022