pytest dying on travis with a simple single test when collecting tests #3527
GitMate.io thinks possibly related issues are #1123 (does pytest support to this way to collect test cases?), #1978 (pytest not finding any tests under module), #1563 (All pytest tests skipped), #2654 (Terminal writing with a single test produces "collected 1 item s"), and #239 (not collect deselected tests).
@jonathanunderwood At first glance this looks like the process gets killed directly at collection; this might be a memory constraint. Please check with the @travis-ci people.
https://docs.travis-ci.com/user/common-build-problems/#My-build-script-is-killed-without-any-error 3 GB is the limit; closing as user error.
But why is it being killed at the collection stage? The body of the test isn't run at that point, is it? I would understand if this died when the tests were actually being run, but this is a failure during collection, and this is the only test being collected, so it's unclear to me why there would be much memory consumption, unless I am hitting some kind of bug in pytest.
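For context: pytest imports each test module during collection, and importing a .py file byte-compiles it before any test body runs. A simplified sketch of that step (not pytest's actual internals; the module name is hypothetical):

```python
# Rough picture of collection: import the module, then look for test functions.
# Byte compilation happens as part of the import, before anything is called.
import importlib

module = importlib.import_module("test_example")  # hypothetical test module
collected = [name for name in dir(module) if name.startswith("test")]
# No test function has been executed yet, but compile-time work has happened.
```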
Reopening since I missed the point.
Further testing revealed Python on Travis crashing on the file as well.
OK, so a quick summary in case anyone else hits this problem. Confirmed by having a simple file like this:

```python
def test():
    try:
        huge = b'\0' * 0x100000000  # warning: this allocates 4 GB of memory!
    except MemoryError:
        print('OOM')
```

Running this file locally shows that during byte compilation, 4 GB of memory is used, even though the function isn't actually called at any point. So, I'll close this, as this is a Python problem, not a pytest bug.
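A minimal way to see this locally (a sketch only, not the exact command from the thread; it assumes a Unix-like system where `ru_maxrss` is reported in kilobytes, and a file named `test_oom.py` containing the snippet above):

```python
# Byte-compile the file without running it and report peak memory usage.
# On affected CPython versions the allocation happens here, most likely because
# the compiler's optimizer constant-folds b'\0' * 0x100000000 at compile time.
import py_compile
import resource

py_compile.compile("test_oom.py")  # hypothetical filename for the snippet above
peak_kib = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"peak RSS: {peak_kib / 1024:.0f} MiB")
```

Newer CPython releases limit how large a constant the optimizer will fold, so recent versions may not reproduce the 4 GB spike.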
@jonathanunderwood Thanks for the follow-up, as well as for calling me out on my mistake earlier 👍
No worries at all, thank you for patiently working with me on IRC this afternoon to get to the bottom of it. I learnt a lot more about the Python byte-compilation process than I thought I'd ever need to know!
For reasons not fully understood, using TensorFlow Probability causes Travis CI to use too much memory and a core dump occurs. So until this is understood all use of tfp should be reverted to a combination of tf.distributions and tf.contrib.distributions (for the Poisson). This is really annoying, but at least a temporary fix that doesn't break all functionality and gains. c.f. - pytest-dev/pytest#3527 - travis-ci/travis-ci#9153
* Use TensorFlow Probability for distributions. The only real change is swapping out tf.distributions for tfp.distributions; however, this is a large improvement on the backend. In addition, tfp is able to better approximate the continuous approximation to the Poisson distribution's p.m.f. while still providing smooth gradients. Also add some docstrings.
* Don't use the Normal approximation for the NumPy backend in testing. With improvements to the ML backends it isn't needed.
* Use torch.distributions.Poisson for Poisson p.m.f. approximation. Provides a better approximation than the Normal approximation.
* Use PyTorch's Poisson log_prob approximation for MXNet's Poisson. This is just implementing the code that exists at the moment in PyTorch's library.
* Add docstrings to NumPy backend poisson and normal.
* Move examples to before Args and Returns, for style consistency in the docs.
* Remove use of poisson_from_normal=True from everywhere. It is no longer needed for comparisons and so is not needed anywhere. Though as a result of this, loosen the tolerance for the standard deviation of results in test_tensor to 5e-5.
* Wrap json.load in a with clause to safely load and close. Resolves the issue of ResourceWarning: unclosed file <_io.TextIOWrapper name='<your path here>/pyhf/pyhf/data/spec.json' mode='r' encoding='UTF-8'>. Additionally apply autopep8.
* Wrap click.open_file in a with clause to safely load and close. Resolves an unclosed-file ResourceWarning.
* Split pytest into two runs given Travis memory constraints (see the sketch after this list). If pytest runs all tests at once, it exceeds the memory allotted to it by Travis CI. To deal with this, tests/test_notebooks is run by itself in a second run of pytest. This is a bit strange, but seems to work. As a result of this and the order in which pytest runs the tests, the docstring tests finish just before the import tests run. This means that the last backend tested is still in memory, and so the NumPy backend needs to be set, as that is what test_import.py needs.
* Revert back to using tf.distributions instead of tfp. For reasons not fully understood, using TensorFlow Probability causes Travis CI to use too much memory and a core dump occurs. So until this is understood, all use of tfp should be reverted to a combination of tf.distributions and tf.contrib.distributions (for the Poisson). This is really annoying, but is at least a temporary fix that doesn't break all functionality and gains. c.f. pytest-dev/pytest#3527, travis-ci/travis-ci#9153
* Add names to notebook tests for visual clarity.
* Add a universal fix to the starting backend. Calling the set default backend step beforehand ensures that the backend gets reset even if it just came off of a docstring test outside of the tests dir. As a result, remove the explicit calls to set the NumPy backend in test_import.py.
* Improve docstring with Gamma function comment. Mention that the continuous approximation is done using n! = Gamma(n+1). Additionally use :code: and :math: qualifiers on different terms to give specific highlighting in the rendered docs.
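A rough illustration of the "split pytest into two runs" change described above (a sketch only; the paths come from the commit message, and in CI this would typically be two separate command lines rather than a wrapper script):

```python
# Run the suite as two separate pytest processes so that neither invocation
# exceeds the CI memory limit; tests/test_notebooks gets its own run.
import subprocess
import sys

for args in (["--ignore=tests/test_notebooks"], ["tests/test_notebooks"]):
    rc = subprocess.call([sys.executable, "-m", "pytest", *args])
    if rc != 0:
        sys.exit(rc)
```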
I am seeing failures for a really simple test when running on Travis: pytest seems to be killed during the test collection phase. I have isolated it down to running a single test file containing only one test:
On Travis I see this:
Unfortunately I can't reproduce this locally, so I am at a bit of a loss as to what's happening. Happy to provide more info if needed.