Write how-to doc on dataflow cost benchmarking #33702
Conversation
assign set of reviewers

Assigning reviewers. If you would like to opt out of this review, comment `assign to next reviewer`.

R: @tvalentyn for label python.

The PR bot will only process comments in the main thread (not review comments).
The doc excerpt under review:

### Choosing a Pipeline

Pipelines that are worth benchmarking in terms of performance and cost have a few straightforward requirements.

1. The transforms used in the pipeline should be native to Beam *or* be lightweight and readily available in the given pipeline
1. The pipeline itself should run on a consistent data set and have consistent internals (such as model versions for `RunInference` workloads.)
> lightweight and readily available

How do we know if this requirement is met?
In this case I mean "short and simple code that is contained in the source code of the pipeline if it isn't a native Beam transform." This is a somewhat subjective criterion, but the idea is that we want to minimize the performance impact of code that isn't Beam-provided, since custom code is more variable (and generally outside our control).
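For illustration, here is a minimal Python sketch (not taken from the doc or the benchmark suite; the bucket path, field layout, and transform names are made up) of what this looks like in practice: the only non-native piece is a short `DoFn` that lives in the pipeline's own source file, and everything else is a built-in Beam transform.

```python
# A minimal sketch of "lightweight and readily available" custom code:
# one short DoFn defined in the pipeline's own source, surrounded by
# built-in Beam transforms. Paths and field layout are placeholders.
import apache_beam as beam


class ParseCsvLine(beam.DoFn):
    """Short, self-contained parsing logic kept in the pipeline source."""

    def process(self, line):
        fields = line.split(",")
        yield {"id": fields[0], "value": float(fields[1])}


with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input.csv")  # native Beam I/O
        | "Parse" >> beam.ParDo(ParseCsvLine())                            # lightweight custom code
        | "Values" >> beam.Map(lambda row: row["value"])                   # native Beam transform
        | "Sum" >> beam.CombineGlobally(sum)                               # native Beam transform
    )
```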
> such as model versions for `RunInference` workloads

Do you mean: such as example benchmarks of RunInference workloads? Or what is "model versions"? Should this include a link?
This is referring to keeping the same version of a model in a RunInference pipeline rather than doing something like automatically updating to the latest version. A fully specified benchmark should run on an identical configuration every time, from details like the model version and framework all the way up to the GCP region the job runs in. I'll see if I can nail down better wording.
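As a concrete, hypothetical example of what pinning looks like in code, a RunInference benchmark might point its model handler at an immutable, versioned artifact path rather than a "latest" alias. The handler choice and GCS paths below are placeholders, not the actual benchmark configuration.

```python
# Hypothetical sketch: pin the exact model artifact a RunInference benchmark
# loads so the pipeline's internals are identical on every run. The sklearn
# handler and GCS paths are placeholders, not the real benchmark setup.
import numpy as np

import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

model_handler = SklearnModelHandlerNumpy(
    # A dated, immutable path: new model releases get a new path instead of
    # overwriting this one, so the benchmark never silently drifts to "latest".
    model_uri="gs://example-bucket/models/classifier/2024-06-01/model.pkl",
)

with beam.Pipeline() as pipeline:
    _ = (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/data/eval.csv")
        | "ToVectors" >> beam.Map(
            lambda line: np.array([float(x) for x in line.split(",")], dtype=np.float32))
        | "Infer" >> RunInference(model_handler)
    )
```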
> have consistent internals

How do we know if this requirement is met?
Same sentiment as above: every part of the environment that can be specified needs to be identical from run to run, or clearly marked if something changes (most commonly this would be Beam incrementing a dependency version that impacts the benchmark).
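To illustrate what "identical from run to run" means for the environment, the benchmark's pipeline options can spell out every knob that affects cost instead of relying on defaults. This is a sketch only; the project, bucket, and sizing values are placeholders.

```python
# Sketch of pinning the runtime environment for a repeatable Dataflow benchmark.
# Every option that can move the cost or runtime is set explicitly so defaults
# (or autoscaling decisions) don't change between runs. Values are placeholders.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="my-benchmark-project",           # placeholder GCP project
    region="us-central1",                     # fixed region
    temp_location="gs://example-bucket/tmp",  # placeholder bucket
    machine_type="n1-standard-2",             # fixed worker shape
    num_workers=5,                            # fixed fleet size
    autoscaling_algorithm="NONE",             # disable autoscaling for repeatability
)
```

With options like these checked into the repository, a change in measured cost can be attributed to Beam itself rather than to a drifting default.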
Another excerpt under review:

```yaml
- name: Run wordcount on Dataflow
  uses: ./.github/actions/gradle-command-self-hosted-action
  timeout-minutes: 30
```
What happens to test runs that time out? Are the collected metrics ignored (because they are likely incorrect)? Will the failure surface somewhere?
If the run times out, the workflow itself will fail, so we'd get a surfaced error in GitHub. The metrics would likely never surface in that situation, since the workflow is more likely stuck in the pipeline step rather than the metrics gathering/writing step.
Co-authored-by: tvalentyn <[email protected]>
Updated the doc with some more elaborate wording.
Creates a quick overview on how to write cost benchmarks within the current framework.
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

- Mention the appropriate issue in your description (for example: `addresses #123`), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment `fixes #<ISSUE NUMBER>` instead.
- Update `CHANGES.md` with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.