Implement core scripts and logic for running upgrades #211
Conversation
AFAIK,
It looks like it makes sense to join this PR with #199
/hold cancel
/lgtm
LGTM label has been added. Git tree hash: 13d2cb7982c33cd2d140e109e3aec9bace1f6bc2
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: kron4eg. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
As this PR is already reviewed, this can be a follow-up if anything needs to be changed.
return err
}

func upgradeKubeadmDebian(ctx *util.Context) error {
This function could actually be a single function like upgradeKubeadmExecutor(*util.Context, string). Actually, it could be reused for the kubelet upgrade as well, as they have the same variables and basically do the same thing.
We have a similar situation in the installer package as well.
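For illustration, a minimal sketch of that consolidation, assuming a stand-in Context and script runner rather than the real util.Context machinery (package name, apt commands, and helper names here are hypothetical, not kubeone's actual code):

```go
// Package upgrade: illustrative layout only, not the actual kubeone package.
package upgrade

import "fmt"

// Context stands in for util.Context; only the fields the sketch needs.
type Context struct {
	KubernetesVersion string
	RunScript         func(script string) error // stands in for the SSH/script runner
}

// upgradeBinaryDebian parameterizes the component name, replacing the two
// near-identical upgradeKubeadmDebian/upgradeKubeletDebian functions.
func upgradeBinaryDebian(ctx *Context, binary string) error {
	// The apt commands below are illustrative, not the PR's actual script.
	script := fmt.Sprintf(`sudo apt-get update
sudo apt-get install -y --allow-change-held-packages %[1]s=%[2]s-00
sudo apt-mark hold %[1]s
`, binary, ctx.KubernetesVersion)
	return ctx.RunScript(script)
}

// The old entry points become thin wrappers.
func upgradeKubeadmDebian(ctx *Context) error { return upgradeBinaryDebian(ctx, "kubeadm") }
func upgradeKubeletDebian(ctx *Context) error { return upgradeBinaryDebian(ctx, "kubelet") }
```

The same pattern would carry over to the duplicated functions in the installer package mentioned above.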
return err
}

func upgradeKubeletDebian(ctx *util.Context) error {
As mentioned in a comment above, this could be a single function for both kubeadm and kubelet.
@@ -7,18 +7,30 @@ import (
)

const (
-	labelUpgradeLock = "kubeone.io/upgrading-in-process"
+	labelUpgradeLock = "kubeone.io/upgrade-in-progress"
If we decide to go with this change, the proposal should be updated as well (can be done in this PR)
{fn: determineHostname, errMsg: "unable to determine hostname"},
{fn: determineOS, errMsg: "unable to determine operating system"},
{fn: runPreflightChecks, errMsg: "preflight checks failed"},
{fn: upgradeLeader, errMsg: "unable to upgrade leader control plane"},
I was thinking a lot about how to handle this, but I'm still unsure whether this is the right way.
I have two big functions in upgrade_leader.go and upgrade_follower.go that contain all tasks for the leader and the followers. The reason for creating one big function instead of just adding task by task here is that I was not sure how to control the process correctly.
In the case of the leader it would be quite easy: we have only one leader, so just listing the tasks would do the job. This might not be the case for followers. I believe that once we start upgrading one follower, we should fully finish the upgrade there before proceeding to the next follower. If I added task by task here instead of using one big function, it might not be easy to ensure that multiple tasks finish on the same node before doing the same on another.
Of course, it's possible to use a different approach, but I wanted to be consistent and not expand on this too much.
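Roughly, the control flow described here would look like the following sketch (not the actual kubeone code; the host/task types and the task names are made up for illustration):

```go
package main

import "fmt"

type host struct{ publicAddress string }

type task struct {
	name string
	fn   func(h host) error
}

// upgradeFollowers runs every task to completion on one follower before
// moving on to the next, so no node is left half-upgraded while another
// upgrade begins.
func upgradeFollowers(followers []host, tasks []task) error {
	for _, f := range followers {
		for _, t := range tasks {
			fmt.Printf("[%s] %s\n", f.publicAddress, t.name)
			if err := t.fn(f); err != nil {
				return fmt.Errorf("%s failed on %s: %v", t.name, f.publicAddress, err)
			}
		}
	}
	return nil
}

func main() {
	noop := func(host) error { return nil }
	tasks := []task{
		{"label node", noop},
		{"upgrade kubeadm", noop},
		{"run kubeadm upgrade", noop},
		{"upgrade kubelet", noop},
		{"remove label", noop},
	}
	if err := upgradeFollowers([]host{{"10.0.0.2"}, {"10.0.0.3"}}, tasks); err != nil {
		fmt.Println(err)
	}
}
```

Flattening this into the shared task list would interleave steps across nodes, which is exactly what the single big function avoids.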
}

// mergeStringMap merges two string maps into destination string map
func mergeStringMap(modified *bool, destination *map[string]string, required map[string]string) {
We already implement this function in the template package. There are two questions: should we reuse the function from another, non-upgrade-related package? Or should we instead create a util package (e.g. pkg/util) and put such functions there? Although I'm against pkg/util because it's an anti-pattern in Go, having packages under pkg/util, like pkg/util/merge, would work.
No more utils please. And those that we already have should be dismissed and refactored into conscientiously named packages.
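As a concrete sketch of the alternative being discussed, the helper could live in a small, purpose-named package instead of being duplicated. The package and function names below are hypothetical, and the exact overwrite semantics of the existing template-package version may differ:

```go
// Package merge holds small, focused map-merging helpers.
package merge

// StringMap copies every key/value pair from required into *destination,
// creating the destination map if necessary, and sets *modified to true
// whenever anything actually changed.
func StringMap(modified *bool, destination *map[string]string, required map[string]string) {
	if *destination == nil {
		*destination = map[string]string{}
	}
	for k, v := range required {
		if current, ok := (*destination)[k]; !ok || current != v {
			(*destination)[k] = v
			*modified = true
		}
	}
}
```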
corev1types "k8s.io/client-go/kubernetes/typed/core/v1"
)

func determineHostname(ctx *util.Context) error {
determineHostname and determineOS are implemented in the installer package in a similar form. We should consider moving those functions to a utility package instead of reimplementing them everywhere.
}

func upgradeLeaderExecutor(ctx *util.Context, node *config.HostConfig, conn ssh.Connection) error {
	logger := ctx.Logger.WithField("node", node.PublicAddress)
Not sure if this is needed, but I thought it would be nice to see which node the task is executed on.
func upgradeLeaderExecutor(ctx *util.Context, node *config.HostConfig, conn ssh.Connection) error {
	logger := ctx.Logger.WithField("node", node.PublicAddress)

	logger.Infoln("Labeling leader control plane…")
Do we need leader in the logs, or just go with control plane?
}

func upgradeFollowerExecutor(ctx *util.Context, node *config.HostConfig, conn ssh.Connection) error {
	ctx.Logger.Infoln("Labeling follower control plane…")
RunTaskOnFollowers automatically modifies the logger to include the IP address, so we don't need to create a new logger like for upgradeLeaderExecutor.
return errors.Wrap(err, "failed to label leader control plane node")
}

ctx.Logger.Infoln("Upgrading kubeadm on follower control plane…")
Same question as for the leader: do we need follower in the logs?
What this PR does / why we need it:
- upgrade scripts for kubeadm and kubelet (cluster upgrade scripts #198),
- kubeadm upgrade on both leader and follower nodes (cluster upgrade scripts #198),
- applying the kubeone.io/upgrade-in-progress label before starting the upgrade process (a sketch of the labeling follows below),
- removing the kubeone.io/upgrade-in-progress label after successfully finishing the upgrade process.
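For the label handling, a hedged sketch of what applying and removing the kubeone.io/upgrade-in-progress label could look like with the context-less client-go API of that era; the actual implementation in this PR may differ (for example, it may use Get/Update instead of a patch):

```go
package upgrade

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
	corev1types "k8s.io/client-go/kubernetes/typed/core/v1"
)

const labelUpgradeLock = "kubeone.io/upgrade-in-progress"

// setUpgradeLock adds or removes the upgrade lock label on a node using a
// strategic merge patch; a null value in the patch deletes the key.
func setUpgradeLock(nodes corev1types.NodeInterface, nodeName string, locked bool) error {
	value := `"true"`
	if !locked {
		value = "null"
	}
	patch := []byte(fmt.Sprintf(`{"metadata":{"labels":{"%s":%s}}}`, labelUpgradeLock, value))
	_, err := nodes.Patch(nodeName, types.StrategicMergePatchType, patch)
	return err
}
```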
Breaking changes: I propose renaming the kubeone.io/upgrading-in-process label to kubeone.io/upgrade-in-progress because that seems more grammatically correct.
This PR is supposed to be merged once we have the scripts for upgrading packages in place; until then
/hold
This PR is based on the Kubernetes documentation: Upgrading kubeadm HA clusters from v1.12 to v1.13.
Questions for reviewers (check review comments for other questions):
Do we need to run kubeadm plan (see #211 (comment))? To my understanding, it basically does nothing besides showing what will be changed. This can be useful with verbose output, but not beyond that. I did some testing and it doesn't even error if you pass a non-existing version or something similar.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #198
Release note:
/assign @kron4eg