Does CI fit as a standard xtask? #1

Open · epage opened this issue Oct 16, 2019 · 10 comments


epage commented Oct 16, 2019

I'm concerned that xtask ci as presented will lead to suboptimal practices:

  • Some CI steps you want to run in every environment (test), while others you run only once for performance (rustfmt, clippy), but this couples the CI tasks together.
  • Some CI steps you only want to run in special environments (RUSTFLAGS=-D warnings cargo check for consistent warnings), while others you need to run in every environment.
  • Care is needed so that neither users nor the CI itself thinks the build is hung.
  • Can't leverage CI UX features to call out which step in the process failed ("CI failed" vs "clippy failed").

I say this unsure whether to drop this xtask or to find ways to mitigate these concerns.


epage commented Oct 16, 2019

There are cases where it can be useful to wrap the standard Rust tools:

  • Gathering code coverage
  • Translating test output to junit for better CI integration
  • I'd love to see a standard process created between build tools, CI vendors, and review vendors for opening issues in a review for CI failures. Not necessarily test failures, since those can't point to a root cause, but compiler errors/warnings, static analysis errors, code style violations, etc.

We can take the approach of providing community tools that wrap them, or of creating libs for the xtasks. I guess one advantage of community tools is CI time, since the CI could download a pre-built binary.


matklad commented Oct 16, 2019

These are all good questions! I feel that they all have good answers in the xtask world, but, at the same time, I don't think those answers are obvious, so I agree that maybe it makes sense to remove ci from the set of recommended tasks until someone comes up with precise guidelines (or even an xtask-ci crate).

So, let's remove xtask ci for now, but let me document the solutions to your questions:

> Some CI steps you want to run in every environment (test), while others you run only once for performance (rustfmt, clippy), but this couples the CI tasks together.

It's important that cargo xtask ci runs everything there is, but you can add cargo xtask ci --check / --tests / --full arguments to narrow down the set of specific checks being run.
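For illustration, a minimal sketch of what that dispatch could look like in the xtask binary. The helper names and the bare-bones std::env argument handling here are my own invention, not part of any published xtask-ci crate:

```rust
use std::env;
use std::process::Command;

fn main() {
    // Assume the binary was invoked as `cargo xtask ci [--check|--tests|--full]`,
    // so argv is: <binary> ci [flag].
    match env::args().nth(2).as_deref() {
        Some("--check") => checks(),
        Some("--tests") => tests(),
        // Plain `cargo xtask ci` (or `--full`) runs everything there is.
        _ => {
            checks();
            tests();
        }
    }
}

fn checks() {
    run("cargo", &["fmt", "--", "--check"]);
    run("cargo", &["clippy", "--", "-D", "warnings"]);
}

fn tests() {
    run("cargo", &["test"]);
}

// Spawn a subcommand and propagate its failure as our own exit code.
fn run(cmd: &str, args: &[&str]) {
    let status = Command::new(cmd).args(args).status().expect("failed to spawn");
    if !status.success() {
        std::process::exit(status.code().unwrap_or(1));
    }
}
```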

> Some CI steps you only want to run in special environments (RUSTFLAGS=-D warnings cargo check for consistent warnings), while others you need to run in every environment.

Actually, being able to set RUSTFLAGS=-D warnings from within xtask ci is one of the main reasons why I think this is better than specifying it in .travis.yml.
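As a sketch of that, the variable can be scoped to the single invocation that needs it using plain std::process::Command (the exact step shown is illustrative):

```rust
use std::process::Command;

fn main() {
    // Deny warnings only for this one check step; the policy lives in the
    // repository instead of in a per-provider CI config, and developers can
    // reproduce it locally by running the same xtask.
    let status = Command::new("cargo")
        .arg("check")
        .env("RUSTFLAGS", "-D warnings")
        .status()
        .expect("failed to run cargo check");
    if !status.success() {
        std::process::exit(status.code().unwrap_or(1));
    }
}
```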

> Care is needed so that neither users nor the CI itself thinks the build is hung.

This I think is orthogonal to xtasks?

> Can't leverage CI UX features to call out which step in the process failed

I think CI providers let you delimit steps by printing start/end "tags" to stdout using some special syntax?
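For what it's worth, Travis recognizes travis_fold:start:<name> / travis_fold:end:<name> markers on stdout, and GitHub Actions recognizes ::group::<name> / ::endgroup::, so a small wrapper along these lines is conceivable (the folded helper is made up for illustration):

```rust
// Wrap a CI step in Travis fold markers so the provider's UI can collapse
// and name it; other providers use their own magic strings.
fn folded<T>(name: &str, step: impl FnOnce() -> T) -> T {
    println!("travis_fold:start:{name}");
    let result = step();
    println!("travis_fold:end:{name}");
    result
}

fn main() {
    folded("clippy", || {
        // The real xtask would spawn `cargo clippy` here.
        println!("running clippy...");
    });
}
```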


matklad commented Oct 16, 2019

Hm, I've decided to just add some cautionary warnings and a link to this issue instead: it still seems to me that for complex multi-provider CI setups the xtask approach might be better, and it makes sense to bless the ci name: 33bb762


epage commented Oct 16, 2019

> Care is needed so that neither users nor the CI itself thinks the build is hung.

> This I think is orthogonal to xtasks?

Not quite. I'm referring to there being no user-visible output (which users and some CIs use to determine whether something is hung). I've run into cases where Travis thought my non-hung build was hung, and I had to take extra steps to make Travis not cancel my builds.


epage commented Oct 16, 2019

> Can't leverage CI UX features to call out which step in the process failed

> I think CI providers let you delimit steps by printing start/end "tags" to stdout using some special syntax?

I think AzDO (Azure DevOps) supports this kind of thing; I've not noticed it from the others. It is also CI-specific, which means helper crates either need to know about your CI, or we need a special log-like mechanism, or all reporting must be implemented by the user. One possible shape is sketched below.
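One conceivable shape for that log-like mechanism: detect the provider from a well-known environment variable (Travis sets TRAVIS, Azure Pipelines sets TF_BUILD) and emit its marker syntax. The Ci enum and its method names below are hypothetical, invented for illustration:

```rust
use std::env;

// Hypothetical provider-detection layer; helper crates could print through
// this instead of hard-coding one CI's syntax.
enum Ci {
    Travis,         // sets TRAVIS=true
    AzurePipelines, // sets TF_BUILD=True
    Unknown,
}

impl Ci {
    fn detect() -> Ci {
        if env::var_os("TRAVIS").is_some() {
            Ci::Travis
        } else if env::var_os("TF_BUILD").is_some() {
            Ci::AzurePipelines
        } else {
            Ci::Unknown
        }
    }

    fn start_step(&self, name: &str) {
        match self {
            Ci::Travis => println!("travis_fold:start:{name}"),
            Ci::AzurePipelines => println!("##[group]{name}"),
            Ci::Unknown => println!("=== {name} ==="),
        }
    }

    fn end_step(&self, name: &str) {
        match self {
            Ci::Travis => println!("travis_fold:end:{name}"),
            Ci::AzurePipelines => println!("##[endgroup]"),
            Ci::Unknown => {}
        }
    }
}

fn main() {
    let ci = Ci::detect();
    ci.start_step("clippy");
    // ... run the step ...
    ci.end_step("clippy");
}
```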


epage commented Oct 16, 2019

> Actually, being able to set RUSTFLAGS=-D warnings from within xtask ci is one of the main reasons why I think this is better than specifying it in .travis.yml.

Are you referring to the fact that the user can more easily reproduce what the CI does? That doesn't address the concern that checking for warnings in CI should only happen in specific contexts. We could make this a flag as well.

As for xtask ci blocking the user when there are warnings: the developer will still be running check and test as part of their cycle, since warnings (like unused variables) are likely to pop up during code changes but shouldn't block the user from testing. However, if a user then runs xtask ci at the end, nothing new will need to be built, so the warnings won't pop up and the benefit feels limited.


epage commented Oct 16, 2019

> Some CI steps you want to run in every environment (test), while others you run only once for performance (rustfmt, clippy), but this couples the CI tasks together.

> It's important that cargo xtask ci runs everything there is, but you can add cargo xtask ci --check / --tests / --full arguments to narrow down the set of specific checks being run.

I assume it is important for the sake of a user being able to test the standard they are being held to without having to dig into contributing docs or build pipelines?

I had concerns about the flag names you mentioned (they don't quite communicate the policy), but looking at your commit, I think leaving the flags implementation-defined is the right approach.


matklad commented Oct 16, 2019

> Not quite. I'm referring to there being no user-visible output

I would think that xtask ci would pass its stdio down to all the subprocesses, so you'd just see the usual output?
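For reference, that is std::process::Command's default behavior when a child is spawned via status(), e.g.:

```rust
use std::process::Command;

fn main() {
    // `status()` inherits stdin/stdout/stderr by default, so the user sees
    // cargo's own progress output live, and neither the user nor the CI
    // should conclude the build is hung.
    let status = Command::new("cargo")
        .arg("test")
        .status()
        .expect("failed to run cargo test");
    std::process::exit(status.code().unwrap_or(1));
}
```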

> However, if a user then runs xtask ci at the end, nothing new will need to be built, so the warnings won't pop up

If xtask ci sets RUSTFLAGS, then everything will be rebuilt, which sucks, but is also cool, because you wouldn't miss the warnings. I personally constantly miss warnings because not everything is rebuilt (but I think this is actually being fixed in Cargo these days?).


epage commented Oct 16, 2019

> I would think that xtask ci would pass its stdio down to all the subprocesses, so you'd just see the usual output?

So you are assuming that a call to cargo test would inherit stdout and just write its output straight through?

  • Getting the verbosity right so there is enough output is a concern.
  • If we do any programmatic processing of the output (like generating junit), then we need to ensure we still print to the screen; see the sketch below.
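A rough sketch of that second point, assuming a line-oriented tee over the child's stdout (it ignores stderr and cargo's JSON output mode, so it is illustrative only):

```rust
use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

fn main() {
    let mut child = Command::new("cargo")
        .arg("test")
        .stdout(Stdio::piped())
        .spawn()
        .expect("failed to run cargo test");

    let mut captured = Vec::new();
    let reader = BufReader::new(child.stdout.take().expect("stdout was piped"));
    for line in reader.lines() {
        let line = line.expect("failed to read output line");
        println!("{line}");  // still show the user everything...
        captured.push(line); // ...while keeping a copy for junit generation etc.
    }

    let status = child.wait().expect("failed to wait on cargo test");
    // `captured` could now be post-processed; omitted here.
    std::process::exit(status.code().unwrap_or(1));
}
```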


epage commented Oct 16, 2019

> If xtask ci sets RUSTFLAGS, then everything will be rebuilt, which sucks, but is also cool, because you wouldn't miss the warnings. I personally constantly miss warnings because not everything is rebuilt (but I think this is actually being fixed in Cargo these days?).

Oh, I didn't know that RUSTFLAGS factors into Cargo's decision whether to rebuild or not!

And yeah, I remember there being talk of fixing the warnings issue; I just wasn't sure whether it was far enough along to include in my analysis.
