
Use pytest as our test runner #1177

Closed
gabrieldemarmiesse opened this issue Feb 27, 2020 · 2 comments · Fixed by #1263
Labels
test-cases Related to Addons tests

Comments

@gabrieldemarmiesse (Member)

See #1174.

We have multiple approaches to this change and I believe it would be beneficial to discuss them before implementing anything.

@Squadrick, when you have the time, would you be able to make a summary of how you see the migration? I may not have fully understood your plan. Afterwards I'll make a summary of how I see a possible pytest adoption, so I can highlight the differences and the points where we're going in the same direction.
Thank you in advance.

@seanpmorgan We might need to discuss it in the monthly meeting.

@seanpmorgan seanpmorgan added the test-cases Related to Addons tests label Feb 28, 2020

@gabrieldemarmiesse (Member Author) commented Mar 1, 2020

I have a bit of time, so I'll make a proposal here. Currently, we use bazel for 3 things:

  • Compiling custom ops
  • Testing
  • Building the wheels

My plan would be to do:

  • Compiling custom ops -> bazel
  • Testing -> pytest, without any bazel involvement
  • Building the wheels -> bazel

To do that, we need to have TF Addons working in editable mode. This is required by Python test runners (not only pytest; unittest too, when not working with bazel). This has another benefit: it's a more flexible development environment. IDEs will work, and we can run Python code in scripts using our development version of Addons. See #1186.

To sum it up, here would be the steps:

  • Compiling custom ops:
bazel build //tensorflow_addons/...

(same as before)

  • Testing:

See #1186

pip install -e ./                 # install the python project in editable mode
python configure.py
bash tools/install_so_files.sh    # compiling custom ops + copy in the right directory
pytest ./

When running the tests again:

bash tools/install_so_files.sh    # should be very fast because of the bazel cache
pytest ./

Or, if no C++/CUDA files have changed, it's possible to skip the install_so_files.sh step.

  • Building the wheels:

Same as before. See the GitHub workflow files; those don't change.

Benefits

  • --durations=x in pytest shows the x slowest tests in the test suite. See Function that print each function's execution time  #1174 (comment)
  • We can enable code coverage very easily: Setup Codecoverage CI #993 (comment) tutorial
  • We can select individual methods/functions to run. No need to run the whole test file. This is done with the :: syntax
  • BUILD files are not used at test time. No more headaches about files not being present when running with bazel. No more forgetting to add the test in the build file.
  • Pytest doesn't need the if __name__ == "__main__": tf.test.main() at the bottom of the file. It means that the tests will run even if the contributor and reviewer forget about this boilerplate. We'll be sure that all the tests are running.
  • In the codebase:
  1. We can use plain assert instead of self.assertEqual, self.assertNotIn, self.assertIn, etc. This works for many things, and the error messages are better. For example, self.assertTrue(a > 15) won't tell you the value of a in unittest, but pytest, with assert a > 15, will.
  2. We don't have to use subclassing when we don't use tf.test.TestCase.
  3. We can easily write testing functions that can be called in many situations, because we don't need to call self. It's going to become much easier to fix Unified Sub-package Testing #992. Fixtures are a great help.
  4. We can stop declaring the tests in BUILD files.

And many more!
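To make points 1 and 3 above concrete, here is a minimal sketch of what a pytest-style test file could look like. The function and fixture names are hypothetical, not taken from the Addons codebase:

```python
import pytest


@pytest.fixture
def sample_values():
    # A fixture replaces setUp/self attributes: any test function
    # that names it as an argument receives this return value.
    return [3, 7, 11]


def test_values_are_positive(sample_values):
    # Plain asserts: on failure, pytest's assertion introspection
    # reports the actual value of each operand, unlike
    # unittest's bare self.assertTrue(...).
    for a in sample_values:
        assert a > 0


def test_sum(sample_values):
    assert sum(sample_values) == 21
```

With this layout, a single test can be selected with the `::` syntax, e.g. `pytest test_example.py::test_sum`, and the slowest tests listed with `pytest --durations=5 ./`.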

Note that the codebase changes are not mandatory. Pytest will happily run our current codebase, provided the .so files are in the right places (merging #1186 is enough).
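As a sanity check on that claim: pytest collects plain unittest-style test classes out of the box, so a file like the sketch below (names are illustrative) runs unchanged under both runners, without needing any `main()` boilerplate at the bottom:

```python
import unittest


class ExampleTest(unittest.TestCase):
    # pytest discovers unittest.TestCase subclasses automatically,
    # so existing test files keep working without modification.
    def test_addition(self):
        self.assertEqual(1 + 1, 2)
```

Running `pytest path/to/this_file.py` collects and runs `ExampleTest.test_addition` directly.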

Drawbacks

  • No more cache for testing. I believe this is minor because the dependencies are really easy to understand in TF Addons. If you need to run a specific file or directory, you run it from the command line: pytest ./tensorflow_addons/optimizers, for example, or pytest ./tensorflow_addons/optimizers/adagrad_test.py.
  • Two steps are needed when testing C++/CUDA code: bash tools/install_so_files.sh && pytest ./ instead of bazel test //tensorflow_addons/.... The first step uses bazel's cache, so speed isn't an issue; it's more having to write the two commands every time.

Changes required

#1186

And after that, only some minor fixes here and there. Running pytest is already possible on the vast majority of test files in TF Addons, provided the .so files are in the right places.

Some interesting links

Running unittest with pytest
Minimal tutorial
Command lines options (nice to see what is possible out of the box)

Proof of concept with pytest running in the CI

#1207

@gabrieldemarmiesse (Member Author)

Final and clean pull request: #1243
