Enable pytest and bazel tests. #1243
Conversation
@rahulunair @pkan2 @RaphaelMeudec You are owners of some files modified in this pull request.
Thanks very much! Excited to unify the test setup
--test_output=errors --local_test_jobs=8 \
  --crosstool_top=//build_deps/toolchains/gcc7_manylinux2010-nvcc-cuda10.1:toolchain \
  //tensorflow_addons/...
bash tools/ci_testing/addons_cpu.sh
I would think this is an issue. `addons_cpu` will compile the custom-ops without any particular toolchain:
https://github.com/tensorflow/addons/blob/master/tools/ci_testing/addons_cpu.sh#L60-L64
I thought the compilation in the subsequent build would be skipped since the artifacts are already created, but they won't be compatible with TF from PyPI. However, the test-release-wheel job is passing, so I wanted to verify with you that this isn't reverting to the python_op fallback?
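For context, here is a minimal sketch (not part of this PR) of how one could check whether a compiled custom op actually loads instead of silently falling back to the Python implementation; the `.so` path below is an assumption and may differ in the repository:

```python
import tensorflow as tf

# Assumed path for illustration; the real shared object lives wherever the
# build places it inside the installed package.
SO_PATH = "tensorflow_addons/custom_ops/activations/_activation_ops.so"

try:
    tf.load_op_library(SO_PATH)
    print("Custom op library loaded; no Python fallback needed.")
except (tf.errors.NotFoundError, OSError) as e:
    print("Custom op library failed to load; code would fall back:", e)
```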
Could we add a `bazel clean --expunge` after this, just for sanity?
Sure thing. I'll make sure the right toolchain is used.
LGTM, thanks for all your work!
@gabrieldemarmiesse So this is failing only for py37 since the push:
The problem seems to come from the coverage module having problems with the custom_ops docker image. You can reproduce it in that image, but in a normal Python docker image the error doesn't happen. So there is a problem with SQLite in the custom_ops docker image. The quickest fix right now is to remove code coverage from the CI; people can still use it locally. That's until we find out how to fix this.
* Enable pytest AND bazel tests.
* Added an example of plain test function.
* Use tf-cpu with bazel.
* Used OLDEST_PY_VERSION.
* Fix oldest py version.
* Added a comment for python3.
* Add bazel clean.
Does the pytest support allow declaring tests with `pytest.mark.parametrize`? A lot of our tests use nested loops for parametrization and could probably read cleaner if converted to parametrized tests.
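For reference, a minimal sketch of what such a conversion could look like with `pytest.mark.parametrize`; the test and parameter values below are made up for illustration:

```python
import pytest
import tensorflow as tf


# Hypothetical example: instead of nesting loops over dtypes and shapes inside
# one test, each combination becomes its own test case.
@pytest.mark.parametrize("dtype", [tf.float32, tf.float64])
@pytest.mark.parametrize("shape", [(2, 3), (4, 1)])
def test_zeros_have_expected_shape(dtype, shape):
    x = tf.zeros(shape, dtype=dtype)
    assert x.shape == shape
    assert x.dtype == dtype
```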
Used Jason's and @Squadrick's idea to make bazel run pytest.
It's possible to launch the tests both with the pytest command and with the bazel command. In the build, tests are all started by pytest, but I made a single new build (py35, linux) to test with bazel and ensure we're still compatible.
Some goodies:

* The 25 slowest tests are printed in the CI logs. It's just a command-line option: `--durations=25`.
* The code coverage is printed in the CI logs, thanks to the `--cov=tensorflow_addons` command-line option (needs `pip install pytest-cov`). Because we're now people who like fancy things, we can even have a beautiful HTML report by just adding another command-line option: `--cov-report html`.
* We can use `assert` all the time. No need to remember specific assertion methods. Note that this is only available if `sys.exit(pytest.main([__file__]))` is used at the bottom of the file instead of `tf.test.main()`. This is possible in all files, and we should enable it progressively by changing the last line. One example is in this pull request: see `tensorflow_addons/seq2seq/basic_decoder_test.py`, and the sketch after this list.
* We don't have to subclass `tf.test.TestCase`, but we can if we need `self`. See `tensorflow_addons/register_test.py`, where simple functions are used.
* We can run individual methods instead of the whole file: `pytest ./tensorflow_addons/layers/wrappers_test.py::WeightNormalizationTest::test_removal_Conv2D`
* Automatic post-mortem debugging: `--pdb` starts the interactive Python debugger on errors or KeyboardInterrupt (useful when tests are stuck).
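For illustration, a minimal sketch of a test file using this boilerplate; the test function and its contents are made up for the example, and only the last two lines are the pattern described above:

```python
import sys

import pytest
import tensorflow as tf


# Hypothetical test: a plain function with a bare `assert`, no TestCase needed.
def test_addition_is_commutative():
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    assert tf.reduce_all(a + b == b + a)


if __name__ == "__main__":
    # Lets both `pytest` and `bazel test` (which runs the file as a script)
    # execute the tests through pytest.
    sys.exit(pytest.main([__file__]))
```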
Some baddies:

* We still need to remember to declare the test in the BUILD file. Otherwise, `pytest ./tensorflow_addons` will run it, but not `bazel test` (a sketch of such a declaration follows this list).
* We still need to remember to use the boilerplate at the bottom of the test file. Otherwise, `pytest ./tensorflow_addons` will run it, but not `bazel test`. We should use `sys.exit(pytest.main([__file__]))` from now on (when this is written, bazel will run the file with pytest instead of unittest).
* Pytest prevents us from using certain functions. On Windows, for some unknown reason, `tf.compat.v1.test.get_temp_dir()` crashes with pytest.
* Black magic 😨
If this pull request is too big, I can extract some smaller pull requests from it to reduce the size.
And thanks everyone for the great feedback in the meeting and on GitHub!