WIP: Partial fix for Python 3.12 #158
Conversation
Python 3.12 doesn't call `startTest()` for skipped tests, which caused AttributeErrors for things initialized in our `TestResult.startTest()` (and `OutputFormatter.start_test()`). This is not a complete fix: there are still test failures because they expected "ran 1 tests, 1 skipped" and now get "ran 0 tests, 0 skipped". I don't know if we want to keep the old method of counting; it kind of makes sense to me not to count skipped tests as running ones. I'm not sure how to fix that without adding `sys.version_info >= (3, 12)` checks everywhere.
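To make the behavior change concrete, here is a small probe (my own sketch, not part of this PR) that records which `TestResult` callbacks fire for a decorator-skipped test. On interpreters with the change described above, `startTest` is missing from the log, while `addSkip` still fires on every version:

```python
import unittest

class RecordingResult(unittest.TestResult):
    """Record which result callbacks fire for each test."""

    def __init__(self):
        super().__init__()
        self.calls = []

    def startTest(self, test):
        self.calls.append("startTest")
        super().startTest(test)

    def addSkip(self, test, reason):
        self.calls.append("addSkip")
        super().addSkip(test, reason)

class Sample(unittest.TestCase):
    @unittest.skip("demo")
    def test_skipped(self):
        pass

suite = unittest.defaultTestLoader.loadTestsFromTestCase(Sample)
result = RecordingResult()
suite.run(result)
# On older Pythons the log contains "startTest"; on the affected
# 3.12 releases it does not -- but "addSkip" is recorded either way.
print(result.calls)
```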
```diff
@@ -52,7 +52,7 @@ class OutputFormatter:

     def __init__(self, options):
         self.options = options
-        self.last_width = 0
+        self.test_width = self.last_width = 0
         self.compute_max_width()
```
TBH I don't know if it's enough to initialize `test_width` here, or if it should also be reset to 0 in every `test_success()`/`test_error()`/`test_failure()`.
Does it really no longer count skipped tests at all (or only skipped tests which started before it was decided they should get skipped)?
I assume that failing doctests are the main problem. In this case, a normalizer might be used as a central point to handle the version differences.
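As a rough illustration of that normalizer idea (a hypothetical sketch of mine, not the runner's actual checker), a `doctest.OutputChecker` subclass could canonicalize the version-dependent counts before comparing expected and actual output:

```python
import doctest
import re

class CountNormalizer(doctest.OutputChecker):
    """Hypothetical sketch: collapse version-dependent test counts
    ("ran 1 tests, 1 skipped" vs. "ran 0 tests, 0 skipped") to a
    placeholder, so one expected doctest output matches on both
    sides of the Python 3.12 change."""

    _COUNTS = re.compile(r"[Rr]an \d+ tests?(, \d+ skipped)?")

    def check_output(self, want, got, optionflags):
        want = self._COUNTS.sub("ran <N> tests", want)
        got = self._COUNTS.sub("ran <N> tests", got)
        return super().check_output(want, got, optionflags)

checker = CountNormalizer()
```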
There's also a risk that by skipping `startTest()` there's a combination of `-v`/`-p` options and a sequence of test runs/failures/skips that could mess up the output because I got the `test_width`/`last_width` logic wrong in this PR. (Which is probably still better than the status quo, where the test runner crashes the first time it encounters a skipped test.)
Marius Gedminas wrote at 2023-12-19 03:23 -0800:
> There's also a risk that by skipping `startTest()` there's a combination of `-v`/`-p` options and a sequence of test runs/failures/skips that could mess up the output because I got the `test_width`/`last_width` logic wrong in this PR.
I've got an alternative idea: apparently, the original error happened in our `stopTest()`. This means we already have a working hook of our own there, and we can likely hook `startTest()` in a similar way. Our `startTest()` could set a marker "startTest ran" on the test instance, and `stopTest()` could look for it; if the marker isn't there, `stopTest()` could call `startTest()` before doing its real work.
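A minimal sketch of that marker idea (my own names and structure, not zope.testrunner's actual code):

```python
import unittest

class SafeResult(unittest.TestResult):
    """Sketch: stopTest() lazily runs startTest() if the runner
    never called it, as can happen for skipped tests on 3.12."""

    _MARKER = "_start_test_ran"  # hypothetical attribute name

    def startTest(self, test):
        setattr(test, self._MARKER, True)
        super().startTest(test)
        # Per-test state that later result code relies on
        # (test_width is the attribute from this PR's diff).
        self.test_width = 0

    def stopTest(self, test):
        if not getattr(test, self._MARKER, False):
            # The runner skipped startTest(); run it now so the
            # real work below finds its state initialized.
            self.startTest(test)
        super().stopTest(test)

class Sample(unittest.TestCase):
    @unittest.skip("demo")
    def test_skipped(self):
        pass

result = SafeResult()
test = Sample("test_skipped")
result.stopTest(test)  # startTest() was never called by a runner
```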
#159 is a simpler and more complete fix for this, closing.
See #157 and python/cpython#106584.