
release-22.1: spanconfig/job: improve retry behaviour under failures #78220

Merged 1 commit into release-22.1 from blathers/backport-release-22.1-78117 on Mar 22, 2022

Conversation

@blathers-crl (bot) commented on Mar 22, 2022

Backport 1/1 commits from #78117 on behalf of @irfansharif.

/cc @cockroachdb/release


Previously, if the reconciliation job failed (say, with retryable buffer
overflow errors from the sqlwatcher[1]), we relied on the jobs
subsystem's backoff mechanism to re-kick the reconciliation job. The
retry loop there, however, is far too coarse: its backoff maxes out at
24h, which is far too long for the span config reconciliation job.
Instead, we can control the retry behavior directly within the
reconciliation job, which is what this PR does. We still want to bound
the number of internal retries, possibly bouncing the job elsewhere in
the cluster afterwards. To do so, we now lean on the spanconfig.Manager's
periodic checks (every 10m per node) -- we avoid the jobs subsystem's
retry loop by marking every error as a permanent one.

Release justification: low risk, high benefit change
Release note: None



Footnotes

[1]: In future PRs we'll introduce tests adding 100k-1M tables in large
     batches; when sufficiently large, it's possible to blow past the
     sqlwatcher's rangefeed buffer limits on incremental updates. In
     these scenarios we want to gracefully fail and recover by restarting
     the reconciler and re-running the initial scan.
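As a rough illustration of the retry pattern described above -- short, bounded retries inside the job itself, with the final error surfaced as a permanent one so the outer jobs machinery doesn't apply its 24h-max backoff -- here is a minimal, self-contained Go sketch. The names (`runReconciliationJob`, `permanentError`) and the specific retry counts and backoffs are illustrative assumptions, not the actual CockroachDB implementation.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// permanentError is a hypothetical stand-in for however the real job code
// marks an error as permanent so the outer jobs machinery stops retrying.
type permanentError struct{ err error }

func (p *permanentError) Error() string { return "permanent: " + p.err.Error() }
func (p *permanentError) Unwrap() error { return p.err }

// runReconciliationJob retries reconcile() internally with a short, capped
// backoff instead of deferring to a coarse outer retry loop. Once the retry
// budget is exhausted, the last error is wrapped as permanent so the outer
// loop gives up; a periodic check elsewhere (say, every 10 minutes per node)
// would then restart the job, possibly on another node.
func runReconciliationJob(ctx context.Context, reconcile func(context.Context) error) error {
	const maxRetries = 5
	const maxBackoff = 30 * time.Second
	backoff := 500 * time.Millisecond

	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		if lastErr = reconcile(ctx); lastErr == nil {
			return nil
		}
		fmt.Printf("reconciliation attempt %d failed: %v; retrying in %s\n",
			attempt, lastErr, backoff)

		select {
		case <-time.After(backoff):
		case <-ctx.Done():
			return ctx.Err()
		}
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	// Mark the error as permanent so the outer job system does not apply its
	// own (much longer, up to 24h) backoff schedule.
	return &permanentError{err: lastErr}
}

func main() {
	// Simulate a reconciler that hits retryable buffer-overflow errors a few
	// times before succeeding.
	failures := 3
	reconcile := func(context.Context) error {
		if failures > 0 {
			failures--
			return errors.New("rangefeed buffer overflow")
		}
		return nil
	}

	if err := runReconciliationJob(context.Background(), reconcile); err != nil {
		var perm *permanentError
		if errors.As(err, &perm) {
			fmt.Println("giving up; waiting for the periodic manager check to restart the job:", err)
		}
		return
	}
	fmt.Println("reconciliation succeeded")
}
```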

@blathers-crl (bot) requested a review from a team as a code owner on March 22, 2022 04:04
@blathers-crl (bot) force-pushed the blathers/backport-release-22.1-78117 branch from 10dc1f1 to 5e9d0a5 on March 22, 2022 04:04
@blathers-crl (bot) requested a review from ajwerner on March 22, 2022 04:04
@blathers-crl (bot, author) commented on Mar 22, 2022

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the following exceptional criteria are satisfied:
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn't know or care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@blathers-crl (bot) added the labels blathers-backport (This is a backport that Blathers created automatically.) and O-robot (Originated from a bot.) on Mar 22, 2022
@cockroach-teamcity (Member) commented: This change is Reviewable

@irfansharif merged commit 168e1a1 into release-22.1 on Mar 22, 2022
@irfansharif deleted the blathers/backport-release-22.1-78117 branch on March 22, 2022 15:58