Write how-to doc on dataflow cost benchmarking #33702
How do we know if this requirement is met?
In this case I mean short, simple code that lives in the pipeline's own source file when it isn't a native Beam transform. This is a somewhat subjective criterion, but the idea is to minimize the performance impact of code that isn't Beam-provided, since custom code is more variable (and generally outside our control).
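As a rough illustration (the paths and names below are hypothetical, not from the doc), a minimal sketch of what "short and simple custom code" means here: the only non-native logic is a tiny helper defined next to the pipeline, and everything else is a native Beam transform.

```python
import apache_beam as beam


def parse_record(line: str):
    # Small, self-contained helper kept in the pipeline's own source file.
    key, value = line.split(",", 1)
    return key, int(value)


with beam.Pipeline() as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input.csv")  # hypothetical path
        | "Parse" >> beam.Map(parse_record)          # the only custom code
        | "SumPerKey" >> beam.CombinePerKey(sum)     # native Beam transform
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output")  # hypothetical path
    )
```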
Do you mean something like example benchmarks of RunInference workloads? And what are "model versions"? Should this include a link?
This refers to keeping the same version of a model in a RunInference pipeline rather than doing something like automatically updating to the latest version. A fully specified benchmark should run on an identical configuration every time, from details like model version and framework all the way up to the GCP region the job runs in. I'll see if I can nail down better wording.
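A hedged sketch of what pinning the model version looks like in a RunInference benchmark (the bucket, model path, and parsing are hypothetical): the model handler points at one explicitly versioned artifact rather than a "latest" alias, so every run loads the same weights.

```python
import apache_beam as beam
import numpy as np
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Pinned to an explicit version so every benchmark run is identical.
MODEL_URI = "gs://example-bucket/models/my_model/v1.2.0/model.pkl"  # hypothetical

model_handler = SklearnModelHandlerNumpy(model_uri=MODEL_URI)

with beam.Pipeline() as p:
    (
        p
        | "ReadExamples" >> beam.io.ReadFromText("gs://example-bucket/examples.csv")  # hypothetical
        | "ToNumpy" >> beam.Map(lambda line: np.array([float(x) for x in line.split(",")]))
        | "Inference" >> RunInference(model_handler)
    )
```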
How do we know if this requirement is met?
Same sentiment as above: every part of the environment that can be specified needs to be identical from run to run, or clearly marked when something changes (most commonly, Beam incrementing a dependency version that impacts the benchmark).
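For illustration, a sketch (all values hypothetical) of pinning the parts of the Dataflow environment a benchmark can control, so each run uses an identical configuration:

```python
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(
    runner="DataflowRunner",
    project="example-project",                  # hypothetical project
    region="us-central1",                       # fixed region
    temp_location="gs://example-bucket/tmp",    # hypothetical bucket
    machine_type="n1-standard-2",               # fixed worker shape
    num_workers=5,
    max_num_workers=5,
    autoscaling_algorithm="NONE",               # remove autoscaling variance
    sdk_container_image="gcr.io/example-project/beam-benchmark:2.60.0",  # pinned image/deps
)
```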
What happens to test runs that time out? Are the collected metrics ignored (because they are likely incorrect)? Will the failure surface somewhere?
If the run times out, the workflow itself will fail, so we'd get a surfaced error in GitHub. The metrics would likely never surface in that situation, since the workflow is more likely stuck in the pipeline step than in the metrics gathering/writing step.
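A rough sketch of why that is (not the actual workflow code; the timeout value and pipeline are hypothetical): the metrics step only runs after the pipeline reaches a successful terminal state, so a hung or timed-out job fails the step before anything is gathered or written.

```python
import sys

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.runners.runner import PipelineState

TIMEOUT_MS = 2 * 60 * 60 * 1000  # hypothetical two-hour budget

# Build the benchmark pipeline (elided here; see the earlier sketches).
pipeline = beam.Pipeline(options=PipelineOptions())

result = pipeline.run()
state = result.wait_until_finish(duration=TIMEOUT_MS)

if state != PipelineState.DONE:
    # Exiting non-zero fails the workflow step, which surfaces in GitHub;
    # the metrics-writing code below never runs.
    sys.exit(f"Benchmark did not finish within the timeout: {state}")

# Only reached on success: gather and publish cost metrics here.
```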