
Write how-to doc on dataflow cost benchmarking #33702

Merged
8 commits merged into apache:master on Feb 4, 2025

Conversation

@jrmccluskey (Contributor) commented Jan 21, 2025:

Creates a quick overview on how to write cost benchmarks within the current framework.


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make the review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch): Build python source distribution and wheels · Python tests · Java tests · Go tests
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@jrmccluskey jrmccluskey marked this pull request as ready for review January 28, 2025 16:47
@jrmccluskey jrmccluskey changed the title [WIP] Write how-to doc on dataflow cost benchmarking Write how-to doc on dataflow cost benchmarking Jan 28, 2025
@jrmccluskey (Contributor Author) commented:

assign set of reviewers

Contributor (PR bot) replied:

Assigning reviewers. If you would like to opt out of this review, comment assign to next reviewer:

R: @tvalentyn for label python.

Available commands:

  • stop reviewer notifications - opt out of the automated review tooling
  • remind me after tests pass - tag the comment author after tests pass
  • waiting on author - shift the attention set back to the author (any comment or push by the author will return the attention set to the reviewers)

The PR bot will only process comments in the main thread (not review comments).

(Outdated review thread on sdks/python/apache_beam/testing/benchmarks/README.md, resolved)
### Choosing a Pipeline
Pipelines that are worth benchmarking in terms of performance and cost have a few straightforward requirements.

1. The transforms used in the pipeline should be native to Beam *or* be lightweight and readily available in the given pipeline
Reviewer (Contributor) commented:

> lightweight and readily available

How do we know if this requirement is met?

@jrmccluskey (Contributor Author) replied:

In this case I mean "short and simple code that is contained in the source code of the pipeline if it isn't a native Beam transform." This is a somewhat subjective criterion, but the idea is that we want to minimize the performance impact of code that isn't Beam-provided, since custom code is more variable (and generally outside our control).
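
As a rough illustration of this criterion, a minimal sketch of a pipeline built only from native Beam transforms (the GCS paths are placeholders):

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Every transform here is Beam-provided, so the measured cost reflects
# Beam itself rather than custom user code. Paths are hypothetical.
with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/benchmark-input.txt")
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.MapTuple(lambda word, count: f"{word}: {count}")
        | "Write" >> beam.io.WriteToText("gs://my-bucket/benchmark-output")
    )
```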

Pipelines that are worth benchmarking in terms of performance and cost have a few straightforward requirements.

1. The transforms used in the pipeline should be native to Beam *or* be lightweight and readily available in the given pipeline
1. The pipeline itself should run on a consistent data set and have consistent internals (such as model versions for `RunInference` workloads.)
Reviewer (Contributor) commented:

> such as model versions for `RunInference` workloads

Do you mean something like example benchmarks of RunInference workloads? Or, what are "model versions" here? Should this include a link?

@jrmccluskey (Contributor Author) replied:

This is referring to keeping the same version of a model in a RunInference pipeline rather than doing something like automatically updating to the latest version. A fully specified benchmark should run on an identical configuration every time, from details like model version and framework all the way up to the GCP region the job runs in. I'll see if I can nail down better wording.
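
To make that concrete, a minimal sketch (with a hypothetical model path) of a `RunInference` step whose model artifact is pinned to a fixed, versioned location, so every benchmark run loads exactly the same model:

```python
import numpy
import apache_beam as beam
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.sklearn_inference import SklearnModelHandlerNumpy

# Pin the model artifact to a fixed, versioned path rather than a
# "latest" alias so every benchmark run scores with the same model.
model_handler = SklearnModelHandlerNumpy(
    model_uri="gs://my-bucket/models/my-model/v1.2.0/model.pkl")

with beam.Pipeline() as p:
    (
        p
        | "ReadExamples" >> beam.io.ReadFromText("gs://my-bucket/benchmark-input.csv")
        | "ToArrays" >> beam.Map(
            lambda line: numpy.array([float(x) for x in line.split(",")]))
        | "RunInference" >> RunInference(model_handler)
    )
```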

Pipelines that are worth benchmarking in terms of performance and cost have a few straightforward requirements.

1. The transforms used in the pipeline should be native to Beam *or* be lightweight and readily available in the given pipeline
1. The pipeline itself should run on a consistent data set and have consistent internals (such as model versions for `RunInference` workloads.)
Reviewer (Contributor) commented:

> have consistent internals

How do we know if this requirement is met?

@jrmccluskey (Contributor Author) replied:

Same sentiment as above: every part of the environment that can be specified needs to be identical from run to run, or clearly marked if something changes (most commonly, this would be Beam incrementing a dependency version that impacts the benchmark).
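
One hypothetical way to hold the rest of the environment fixed is to spell out every Dataflow option that affects cost instead of relying on defaults (the project, bucket, and values below are illustrative):

```python
from apache_beam.options.pipeline_options import PipelineOptions

# Fix every tunable that influences cost so runs stay comparable; any
# change to these values should be recorded alongside the results.
benchmark_options = PipelineOptions([
    "--runner=DataflowRunner",
    "--project=my-gcp-project",
    "--region=us-central1",
    "--temp_location=gs://my-bucket/tmp",
    "--machine_type=n1-standard-2",
    "--num_workers=5",
    "--autoscaling_algorithm=NONE",   # keep the worker count constant
])
```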

(Two further outdated review threads on sdks/python/apache_beam/testing/benchmarks/README.md, resolved)
```yaml
- name: Run wordcount on Dataflow
  uses: ./.github/actions/gradle-command-self-hosted-action
  timeout-minutes: 30
```
Reviewer (Contributor) commented:

What happens to test runs that time out? Are the collected metrics ignored (because they are likely incorrect)? Will the failure surface somewhere?

@jrmccluskey (Contributor Author) replied:

If the run times out, the workflow itself will fail, so we'd get a surfaced error in GitHub. The metrics would likely never surface in that situation, since the workflow is more likely stuck in the pipeline step than in the metrics gathering/writing step.

@jrmccluskey (Contributor Author) commented:

Updated the doc with more detailed wording.

@jrmccluskey jrmccluskey merged commit 14df78c into apache:master Feb 4, 2025
93 checks passed