
Clearer error for failed tests which returned a Result Err() #69517

Open
dbr opened this issue Feb 27, 2020 · 7 comments
Labels
A-libtest Area: `#[test]` / the `test` library C-enhancement Category: An issue proposing an enhancement or a PR with one. T-compiler Relevant to the compiler team, which will review and decide on the PR/issue.

Comments

dbr commented Feb 27, 2020

Problem
I have a test function which returns a `Result<...>` as a convenient way of testing fallible methods. For example:

#[test]
fn exampletest() -> Result<(), std::io::Error> {
    let f = std::fs::File::open("/root/nonexistantfile")?;
    assert_eq!(example_method(&f), 0); // example_method is the function under test
    Ok(())
}

The test started failing for whatever reason, and the cargo test output looked like this:

$ cargo test
   Compiling confusing v0.1.0 (/Users/dbr/confusing)
    Finished test [unoptimized + debuginfo] target(s) in 0.42s
     Running target/debug/deps/confusing-3ccdc020a15819b0

running 1 test
test exampletest ... FAILED

failures:

---- exampletest stdout ----
Error: Os { code: 2, kind: NotFound, message: "No such file or directory" }
thread 'exampletest' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', src/libtest/lib.rs:196:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.


failures:
    exampletest

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out

error: test failed, to rerun pass '--bin confusing'

Not reading the output carefully, I interpreted the `panicked at 'assertion failed: ...'` as being a failure from my `assert_eq!`, and started trying to work out why my `example_method` was returning 0.

I also tried running the tests under a debugger, but the backtrace only pointed at the test closure rather than the problematic line:

...
  13: std::panicking::begin_panic_fmt
             at src/libstd/panicking.rs:328
  14: test::assert_test_result
             at ./<::std::macros::panic macros>:9
  15: confusing::exampletest::{{closure}}
             at src/main.rs:6
  16: core::ops::function::FnOnce::call_once
             at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libcore/ops/function.rs:227

...where `src/main.rs:6` is the line `fn exampletest() -> Result<(), std::io::Error> {`

Steps

  1. Write a test case returning a Result<...>
  2. Have the test case fail by returning Err(...)
  3. Misread the output!

Possible Solution(s)
I think there are two aspects to this:

First, the panic message from the assert talks about "internal" details, namely the exit code of the test. The fact that the test (I assume) runs as a process, with the exit code used to determine failure, isn't something I was thinking about while writing/debugging the test.

For this, I think it would help a lot to just simplify the panic message from saying

thread 'exampletest' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', src/libtest/lib.rs:196:5

to something more concise like:

thread 'exampletest' panicked at 'the test exited with non-zero exit code (1) which indicates a failure', src/libtest/lib.rs:196:5

The second part I'm not so clear about, but it seems the `Result` isn't being handled specially: the `Err` is just turned into an exit code, which is asserted later. Since the panic comes from that assertion, the backtrace doesn't reference the line that actually failed.

This is different from what happens if I had, say, used `.unwrap()` instead of `?`:

...
  16: core::result::Result<T,E>::unwrap
             at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libcore/result.rs:933
  17: confusing::exampletest
             at src/main.rs:7
  18: confusing::exampletest::{{closure}}
             at src/main.rs:6
  19: core::ops::function::FnOnce::call_once
             at /rustc/73528e339aae0f17a15ffa49a8ac608f50c6cf14/src/libcore/ops/function.rs:227
...

...where line 7 is the problem-causing line `let _f = std::fs::File::open("/root/nonexistantfile").unwrap();`. It would be nice if using `?` resulted in a similar backtrace.
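As a stopgap under the current behaviour, the location can be attached by hand at each `?` site. Below is a minimal sketch of that idea; the `Located` type and `here!` macro are made-up names for illustration, not part of libtest or any crate:

```rust
use std::fmt;

// Hypothetical wrapper: carries an error together with the file/line
// where it was propagated.
#[derive(Debug)]
struct Located<E> {
    error: E,
    file: &'static str,
    line: u32,
}

impl<E: fmt::Display> fmt::Display for Located<E> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}:{}: {}", self.file, self.line, self.error)
    }
}

// Annotates a Result with the current file/line before applying `?`.
macro_rules! here {
    ($e:expr) => {
        $e.map_err(|err| Located { error: err, file: file!(), line: line!() })
    };
}

fn exampletest() -> Result<(), Located<std::io::Error>> {
    let _f = here!(std::fs::File::open("/root/nonexistantfile"))?;
    Ok(())
}

fn main() {
    if let Err(e) = exampletest() {
        // Prints something like "src/main.rs:27: No such file or directory ..."
        println!("Error: {e}");
    }
}
```

This keeps `?` ergonomics while making the failing line part of the error's `Display` output, at the cost of wrapping each fallible call.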

Notes

Output of cargo version:

cargo 1.40.0 (bc8e4c8be 2019-11-22)
$ rustup show
Default host: x86_64-apple-darwin
rustup home:  /Users/dbr/.rustup

installed toolchains
--------------------

stable-x86_64-apple-darwin
nightly-x86_64-apple-darwin

installed targets for active toolchain
--------------------------------------

arm-unknown-linux-musleabihf
wasm32-unknown-unknown
x86_64-apple-darwin

active toolchain
----------------

stable-x86_64-apple-darwin (default)
rustc 1.40.0 (73528e339 2019-12-16)
@alexcrichton alexcrichton transferred this issue from rust-lang/cargo Feb 27, 2020
alexcrichton (Member) commented:
Thanks for the report! I've transferred this to rust-lang/rust instead of rust-lang/cargo since I think it's more about libtest which lives in the rust repo.

@Centril Centril added T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. A-libtest Area: `#[test]` / the `test` library labels Feb 27, 2020
ehuss (Contributor) commented Feb 27, 2020

There was a PR to improve this (#52453), but perhaps it didn't go far enough. I can see how it can be confusing, but I'm not sure what a better output would look like.

@JohnTitor JohnTitor added the C-enhancement Category: An issue proposing an enhancement or a PR with one. label Apr 19, 2020
nikclayton-dfinity commented Feb 22, 2021

I'm not sure what a better output would look like.

Here's some example output when a test panics by using unwrap() on a result:

---- node::node_configuration::into_proto_http stdout ----
thread 'node::node_configuration::into_proto_http' panicked at 'called `Result::unwrap()` on an `Err` value: ConnectionEndpointFailed { source: InvalidScheme { scheme: "bad-scheme" } }', prep/src/node.rs:559:19

Good:

  • Tells me exactly where the error happened

Bad:

  • Error location is at the end of the line, so I can't look for it in a specific place on the screen (or it may require scrolling to see, as it does in this example)
  • Contains a lot of boilerplate text ("panicked at ... an Err")

Here's the same test, where .unwrap() was replaced by ?:

---- node::node_configuration::into_proto_http stdout ----
Error: could not parse connection endpoint: invalid scheme for endpoint: bad-scheme

Caused by:
    invalid scheme for endpoint: bad-scheme
thread 'node::node_configuration::into_proto_http' panicked at 'assertion failed: `(left == right)`
  left: `1`,
 right: `0`: the test returned a termination value with a non-zero status code (1) which indicates a failure', /private/tmp/nix-build-rustc-1.45.2.drv-0/rustc-1.45.2-src/src/libstd/macros.rs:16:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

Good:

  • Shows me the error message string (which might end up in log files)

Bad:

  • Contains even more boilerplate
  • Doesn't tell me where the error happened
  • The reference to panicking is confusing -- nothing in my code panicked
  • Uses many more lines of precious terminal space
  • The assertion failure is unhelpful
  • Running with RUST_BACKTRACE=1 doesn't show anything helpful (the point where the error occurs is not in the backtrace)

I think something that shows

  1. The location of the line that caused the error
  2. The error message
  3. The error type

and no extraneous panic information would be ideal. Something like:

---- node::node_configuration::into_proto_http stdout ----
prep/src/node.rs:559:19: Error: could not parse connection endpoint: invalid scheme for endpoint: bad-scheme
prep/src/node.rs:559:19: ConnectionEndpointFailed { source: InvalidScheme { scheme: "bad-scheme" } }

The location of the error is always in the same place, so I don't have to scan the screen for it.

The second line gives me the human-readable error, and the third line gives me the error type, in case I need to go poking through the code to see where this error can be generated.
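One way to get output in roughly this shape today is a hand-rolled error wrapper whose `From` impl is marked `#[track_caller]`, so `Location::caller()` reports the line containing the `?` operator. This is a sketch under that assumption; `LocatedError` and `failing` are hypothetical names, not an existing API:

```rust
use std::error::Error;
use std::fmt;
use std::panic::Location;

// Hypothetical wrapper: stores the source error plus the call site
// of the `?` that propagated it.
#[derive(Debug)]
struct LocatedError {
    inner: Box<dyn Error>,
    location: &'static Location<'static>,
}

impl fmt::Display for LocatedError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // file:line:col: message, matching the format proposed above.
        write!(
            f,
            "{}:{}:{}: {}",
            self.location.file(),
            self.location.line(),
            self.location.column(),
            self.inner
        )
    }
}

impl<E: Error + 'static> From<E> for LocatedError {
    #[track_caller]
    fn from(inner: E) -> Self {
        // Because of #[track_caller], caller() is the `?` expression's line.
        LocatedError { inner: Box::new(inner), location: Location::caller() }
    }
}

fn failing() -> Result<(), LocatedError> {
    let _f = std::fs::File::open("/root/nonexistantfile")?; // location recorded here
    Ok(())
}

fn main() {
    // Prints something like "src/main.rs:32:14: No such file or directory ..."
    println!("{}", failing().unwrap_err());
}
```

A test returning `Result<(), LocatedError>` would then print the failing line up front, without relying on a panic backtrace.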

karlmdavis added a commit to karlmdavis/fhir-benchmarks that referenced this issue Nov 23, 2021
It's kinda' sorta' explained here:
<rust-lang/rust#69517>. Basically, though,
tests that return a `Result<_>` will have less helpful backtraces than
tests that just use asserts and `unwrap()/expect(_)` all over the place.

I recently ran into an undiagnosable test failure that this commit makes
diagnosable, so this is now "the way things are done" for this project,
going forwards.
Zalathar (Contributor) commented:
I ran into this myself when starting to write some tests, and it was baffling.

Since I had specifically written a test that doesn’t panic, and the failed test result prominently showed a complicated panic in the test runner internals, I ruled out the possibility of this being intended behaviour. I could only conclude that I must be triggering some kind of obscure bug in the runner, and spent a lot of time trying to track it down.

dimo414 (Contributor) commented Dec 4, 2022

This surprised me as well; the "returned a termination value with a non-zero status code" error message is outright unhelpful. Adding boilerplate like:

#[test]
fn wrapped() {
    fn wrapper() -> Result<(), std::io::Error> {
        let f = std::fs::File::open("/root/nonexistantfile")?;
        assert_eq!(example_method(&f), 0);
        Ok(())
    }
    wrapper().unwrap();
}

results (IMO) in a clearly better failure message than the default behavior. Playground example.
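For illustration, here is a self-contained version of that wrapper pattern; the `example_method` stub is hypothetical, standing in for whatever function the test exercises:

```rust
use std::fs::File;
use std::io;

// Hypothetical stub for the function under test.
fn example_method(_f: &File) -> i32 {
    0
}

// Inner wrapper keeps `?` ergonomics for fallible setup...
fn wrapper() -> Result<(), io::Error> {
    let f = File::open("/root/nonexistantfile")?;
    assert_eq!(example_method(&f), 0);
    Ok(())
}

fn main() {
    // ...while the outer unwrap() turns the Err into a panic whose message
    // includes the Err value and whose location is the unwrap() line.
    // In a #[test] you would simply write `wrapper().unwrap();`.
    let caught = std::panic::catch_unwind(|| wrapper().unwrap());
    assert!(caught.is_err());
}
```

The trade-off is an extra nesting level per test; the gain is that the failure message names the actual error instead of the exit-code assertion.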

dimo414 added a commit to dimo414/parameterized_test that referenced this issue Dec 4, 2022
…t 2018's support for Result-returning tests.

This also adds a dependency on anyhow, to simplify result propagation and avoid requiring users to specify a particular Result-type.

Due to rust-lang/rust#69517 we manually unwrap() the user-provided Result for the time being. When that issue is resolved the implementation can be simplified slightly.
cole-abbeduto-particle commented Mar 17, 2023

Since Rust 1.66 this has improved slightly, in that we now show the error message. But we've also lost the line number and the rest of the trace that `unwrap` provides.

unwrap:

---- test::will_this_fail stdout ----
thread 'test::will_this_fail' panicked at 'called `Result::unwrap()` on an `Err` value: Is this a failure', /src/bin/file.rs:179:65
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace

return Err or ?:

---- test::will_this_fail stdout ----
Error: Is this a failure?

dbr (Author) commented May 31, 2023

Sidenote: there is a crate, testresult, which does this (including line numbers); I found it via a blog post.

9 participants