
Add benchmark tests to CI #243

Merged
2 commits merged into main from benchmark-testing on Aug 13, 2024

Conversation

@KitHat (Member) commented Jul 10, 2024

Fixes #242

  • Modified the script that runs benchmarks so that it also builds the binary for them.
  • Added the benchmark run to the CI (a sketch of the resulting job is below).
  • Added a checkbox to verify pallet inclusion in the benchmarking run.
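Since the diff itself is not shown in this conversation, here is a minimal sketch of what such a job could look like; the workflow layout, step names, and script path are assumptions, not the PR's actual contents:

```yaml
# Minimal sketch of a benchmark CI job (hypothetical names and paths).
# The idea from this PR: the benchmark script also builds the binary,
# and the CI simply runs it on every pull request.
name: benchmarks

on:
  pull_request:

jobs:
  run-benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build binary and run benchmarks
        run: ./scripts/benchmarks.sh  # hypothetical script path
```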

PR Checklist

  • Tests
  • Documentation
  • v2 branch is backmerged to master; it contains a fix for the pallet_treasury benchmark run in the treasury pallet

@KitHat requested review from 4meta5 and ozgunozerk on July 10, 2024 13:51
@KitHat self-assigned this on Jul 10, 2024

netlify bot commented Jul 10, 2024

Deploy Preview for docs-oz-polkadot canceled.

🔨 Latest commit: 35ba8f3
🔍 Latest deploy log: https://app.netlify.com/sites/docs-oz-polkadot/deploys/66ba338709978a0008fdfb2b

@ggonzalez94 (Collaborator) left a comment


This is great to have as part of the CI! I left a few small comments, but have one bigger question:
Are the benchmark results stored anywhere? I don't think they are. If not, maybe we should consider outputting them to the console, or finding a way for them to be consumed on each run of the action. What do you think?

@4meta5 (Contributor) left a comment


On board with this change! A few questions to confirm understanding:

  1. Does this change run the benchmarks to generate weights on each PR, instead of just checking compilation as before?
  2. Are the generated weights written to the PR, or does this job simply verify that they run without errors?

Additional Qs related to how this impacts our developer experience:
3. How much time do you think this will add to each CI run?
4. Will the job auto-cancel if another commit is pushed to the PR?

For (4), I think an auto-cancel on new commits would be ideal so that the CI is not blocked by slow runs on old commits. Happy to do this in a follow-up if we all agree it would be helpful.

@KitHat (Member, Author) commented Jul 10, 2024

This is great to have as part of the CI! I left a few small comments, but have one bigger question: Are the benchmark results stored anywhere? I don't think they are. If not, maybe we should consider outputting them to the console, or finding a way for them to be consumed on each run of the action. What do you think?

Benchmarks are not stored, and storing them would not make much sense. Their contents are not really representative: for a release they should be run on a reference machine, not on a CI runner. So the main goal of this job is to verify that the benchmark run completes without errors.

@KitHat (Member, Author) commented Jul 10, 2024

On board with this change! A few questions to confirm understanding:

  1. Does this change run the benchmarks to generate weights on each PR, instead of just checking compilation as before?
  2. Are the generated weights written to the PR, or does this job simply verify that they run without errors?

Additional Qs related to how this impacts our developer experience:
3. How much time do you think this will add to each CI run?
4. Will the job auto-cancel if another commit is pushed to the PR?

For (4), I think an auto-cancel on new commits would be ideal so that the CI is not blocked by slow runs on old commits. Happy to do this in a follow-up if we all agree it would be helpful.

  1. Correct.
  2. They only verify that there are no errors; there is no point in writing the weights to the PR, because the CI runner is not a reference machine.
  3. On my machine it takes no more than 10 minutes, but it may take much longer on a CI machine. If it does, we can lower the number of iterations for the CI run.
  4. That is a question about the GitHub CI setup. @ozgunozerk, could you please help us with that?

@KitHat (Member, Author) commented Jul 11, 2024

@4meta5 I have updated the steps and repeats for the benchmarks; the whole job, including compilation, now takes 20 minutes for the EVM template.
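For context, Substrate's `benchmark pallet` subcommand exposes `--steps` and `--repeat` flags, which are the knobs being tuned here. A hedged sketch of what a reduced CI run could look like; the binary name, pallet filter, and exact values are illustrative, not the PR's actual settings:

```yaml
# Illustrative step: run benchmarks with reduced steps/repeats so CI stays fast.
# Release-quality weights would use much larger values on a reference machine.
- name: Run benchmarks (reduced fidelity for CI)
  run: |
    # Binary and pallet names are placeholders.
    ./target/release/parachain-template-node benchmark pallet \
      --pallet "*" \
      --extrinsic "*" \
      --steps 2 \
      --repeat 1
```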

@ggonzalez94 (Collaborator) commented Jul 11, 2024

Will the job auto-cancel if another commit is pushed to the PR?

@KitHat @4meta5 I just added this setting to the pipeline. It is just a concurrency group setting, so we shouldn't be wasting CI time anymore after this.
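For reference, the standard GitHub Actions concurrency setting looks like this; the group key below is one common choice (any expression unique per branch/PR works):

```yaml
# Cancel an in-progress run of this workflow when a new commit is pushed
# to the same branch/PR.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```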

They only verify that there are no errors; there is no point in writing the weights to the PR, because the CI runner is not a reference machine.

Makes sense. But shouldn't we use these benchmarks (even if not fully accurate) to detect whether a change really impacted performance? We can leave this discussion for later, but I suggest we check what other projects do with their benchmark results, and whether there is a way to fail the CI if we detect that a change made things much worse.
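One possible direction for consuming the results on each run, as floated earlier in this thread (a sketch only, not something this PR implements; the output path is hypothetical):

```yaml
# Sketch: persist benchmark output as a workflow artifact so a later run
# (or a reviewer) can download and compare it. Not part of this PR.
- name: Upload benchmark results
  uses: actions/upload-artifact@v4
  with:
    name: benchmark-results-${{ github.sha }}
    path: benchmark-output.txt  # hypothetical output path
```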

@ggonzalez94 (Collaborator) left a comment


Great job, thanks! 🚀

@KitHat force-pushed the benchmark-testing branch from e193093 to 35ba8f3 on August 12, 2024 16:08
@KitHat merged commit bac8e41 into main on Aug 13, 2024
5 checks passed
@KitHat deleted the benchmark-testing branch on August 13, 2024 11:00
Labels: none yet
Projects: Status: ✅ Done
Linked issues: 🏗️ [Core Feature]: Add benchmarking runs to the CI pipeline
4 participants