We may want some benchmark support. Some ideas:

- benchmarks that produce HTML/JSON/whatever: think criterion outputs
- it could just be a haskell_binary with some Bazel tag we pick that identifies it as a benchmark: at the very least the user can then ask Bazel to list/run all rules with that tag.

I think we should definitely just start with a haskell_binary alias with a rule tag.
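For concreteness, a minimal sketch of what such a target could look like. The load path, the @stackage external repository, and the "benchmark" tag name are assumptions here, not an agreed-upon convention:

```python
# BUILD.bazel — a hypothetical benchmark target: an ordinary haskell_binary
# carrying a tag that marks it as a benchmark.
load("@rules_haskell//haskell:defs.bzl", "haskell_binary")

haskell_binary(
    name = "fib-bench",
    srcs = ["FibBench.hs"],
    # Assumes criterion is provided by a stack_snapshot repo named @stackage.
    deps = ["@stackage//:criterion"],
    # The tag is the only thing distinguishing this from a regular binary.
    tags = ["benchmark"],
)
```

With such a tag in place, bazel query 'attr(tags, benchmark, //...)' lists all benchmark targets, and bazel run executes any single one.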
At novadiscovery, we use a haskell_test for that, with a special benchmark tag which is excluded by default, so that bazel test //... won't run the benchmarks and bazel test //... --test_tag_filters=benchmark will run all of them.
To keep the outputs, we use a custom criterion runner (i.e. a wrapper around Criterion.defaultMain) which puts all the outputs under $TEST_UNDECLARED_OUTPUTS_DIR, so that they are available afterwards under bazel-testlogs/my/target/test.outputs/result.{html,csv,json}.
That works quite well for us, though it requires some specific setup inside the repository (a test --test_tag_filters=-benchmark line in .bazelrc and a library exporting the custom runner).
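A minimal sketch of such a runner, assuming criterion's Config record fields for report output; the module name Criterion.Bazel and the function name bazelMain are made up for illustration:

```haskell
module Criterion.Bazel (bazelMain) where

import Criterion.Main (Benchmark, defaultConfig, defaultMainWith)
import Criterion.Types (Config (..))
import System.Environment (lookupEnv)
import System.FilePath ((</>))

-- | Like 'Criterion.Main.defaultMain', but when Bazel sets
-- TEST_UNDECLARED_OUTPUTS_DIR, write the HTML/CSV/JSON reports there so
-- they survive under bazel-testlogs/<target>/test.outputs/ after the run.
bazelMain :: [Benchmark] -> IO ()
bazelMain benchmarks = do
  outDir <- lookupEnv "TEST_UNDECLARED_OUTPUTS_DIR"
  let config = case outDir of
        Nothing  -> defaultConfig
        Just dir -> defaultConfig
          { reportFile = Just (dir </> "result.html")
          , csvFile    = Just (dir </> "result.csv")
          , jsonFile   = Just (dir </> "result.json")
          }
  defaultMainWith config benchmarks
```

Benchmark suites then call bazelMain in place of defaultMain and are declared as haskell_test targets carrying the benchmark tag.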
The current rules_haskell setup is heavily Nix-based (including the Bazel binary) and its usability relies on having Cachix or some other custom Nix cache. To address this:

- Switch to an Ubuntu-based container.
- Add and use cc_configure.bzl.
- Update the versions of nixpkgs for consistency.
- Use the FormationAI/rules_haskell fork, temporarily.