
release-21.2: kvserver: increase Migrate application timeout to 1 minute #73061

Merged

Conversation

erikgrinaker
Contributor

@erikgrinaker erikgrinaker commented Nov 22, 2021

Backport 2/2 commits from #72987.

/cc @cockroachdb/release

Release justification: fixes bug that may prevent upgrade migrations from succeeding.


kvserver: increase Migrate application timeout to 1 minute

This increases the timeout when waiting for application of a `Migrate`
command on all range replicas to 1 minute, up from 5 seconds. It also
adds a cluster setting `kv.migration.migrate_application.timeout` to
control this.

When encountering a range that is, for example, undergoing rebalancing, it can
take a long time for a learner replica to receive a snapshot and respond
to this request, which would cause the timeout to trigger. This is
especially likely in clusters with many ranges and frequent rebalancing
activity.

Touches #72931.

Release note (bug fix): The timeout when checking for Raft application
of upgrade migrations has been increased from 5 seconds to 1 minute, and
is now controllable via the cluster setting
`kv.migration.migrate_application.timeout`. This makes migrations much
less likely to fail in clusters with ongoing rebalancing activity during
upgrade migrations.
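For operators who still hit this timeout on clusters with heavy rebalancing, the new setting can be inspected and adjusted from a SQL session. A minimal sketch (the `2m` value below is only an illustration, not a recommendation from this PR):

```sql
-- Inspect the current value (defaults to 1m after this change).
SHOW CLUSTER SETTING kv.migration.migrate_application.timeout;

-- Raise the timeout further if Migrate commands keep timing out
-- during an upgrade; '2m' here is an arbitrary example value.
SET CLUSTER SETTING kv.migration.migrate_application.timeout = '2m';
```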

migration: add informative log message for sep intents migrate failure

The separated intents migration has been seen to go into failure loops
in the wild, with a generic "context deadline exceeded" error. This adds
a more informative log entry with additional hints on how to resolve the
problem.

Release note: None

@erikgrinaker erikgrinaker self-assigned this Nov 22, 2021
@erikgrinaker erikgrinaker requested a review from a team as a code owner November 22, 2021 18:49
@erikgrinaker erikgrinaker requested a review from a team November 22, 2021 18:49
@blathers-crl

blathers-crl bot commented Nov 22, 2021

Thanks for opening a backport.

Please check the backport criteria before merging:

  • Patches should only be created for serious issues or test-only changes.
  • Patches should not break backwards-compatibility.
  • Patches should change as little code as possible.
  • Patches should not change on-disk formats or node communication protocols.
  • Patches should not add new functionality.
  • Patches must not add, edit, or otherwise modify cluster versions; or add version gates.
If some of the basic criteria cannot be satisfied, ensure that the exceptional criteria are satisfied within.
  • There is a high priority need for the functionality that cannot wait until the next release and is difficult to address in another way.
  • The new functionality is additive-only and only runs for clusters which have specifically “opted in” to it (e.g. by a cluster setting).
  • New code is protected by a conditional check that is trivial to verify and ensures that it only runs for opt-in clusters.
  • The PM and TL on the team that owns the changed code have signed off that the change obeys the above rules.

Add a brief release justification to the body of your PR to justify this backport.

Some other things to consider:

  • What did we do to ensure that a user who doesn’t know or care about this backport has no idea that it happened?
  • Will this work in a cluster of mixed patch versions? Did we test that?
  • If a user upgrades a patch version, uses this feature, and then downgrades, what happens?

@cockroach-teamcity
Member

This change is Reviewable

@erikgrinaker erikgrinaker removed the request for review from a team November 22, 2021 19:01
Contributor

@ajwerner ajwerner left a comment


:lgtm:

Reviewed 2 of 2 files at r1, 1 of 1 files at r2, all commit messages.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @miretskiy and @tbg)

@erikgrinaker erikgrinaker merged commit a8e1118 into cockroachdb:release-21.2 Nov 22, 2021
@erikgrinaker erikgrinaker deleted the backport21.2-72987 branch November 25, 2021 19:27