Issues during upgrade 4.5 to 4.6 #397
Confirmed: 4.5 to 4.6 is a problem with rpm-ostree, so the workaround is required. But I guess once your cluster is at 4.6 the Fedora repos should be disabled, so this won't be a problem going forward.
Which timeout is it? We're disabling all available repos before proceeding with the update, as all necessary RPMs are already included in machine-os-content. Please collect a must-gather.
My 4.5.0-0.okd-2020-10-15-235428 to 4.6.0-0.okd-2020-11-27-200126 upgrade failed with the same issue. I added the drop-in to set the proxy environment variables, which allowed my upgrade to proceed. Just poked at one of my masters post-upgrade; these repos are still enabled:
As per our discussion in #389, this didn't seem to be an issue on a fresh 4.6 UPI install. Same check on my 4.6 cluster that was built from scratch:
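For reference, a minimal sketch of such a repo check (the file layout under /etc/yum.repos.d is an assumption based on stock Fedora defaults); it is demonstrated against a scratch directory here so it is safe to run anywhere:

```shell
# Sketch: find repo files that still have enabled=1.
# Demonstrated against a scratch directory; on a real node, point
# repo_dir at /etc/yum.repos.d instead (layout assumed, verify locally).
repo_dir="$(mktemp -d)"
printf '[fedora]\nname=Fedora\nenabled=1\n' > "$repo_dir/fedora.repo"
printf '[updates]\nname=Fedora Updates\nenabled=0\n' > "$repo_dir/updates.repo"
grep -l '^enabled=1' "$repo_dir"/*.repo   # lists only fedora.repo
```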
Is the 4.5 to 4.6 upgrade really supported?
Right, that seems to be a bug - OKD doesn't need enabled repos when updating, as it ships all RPMs with it. This is resolved during fresh install (we disable all Fedora repos by default), but it is still open on update - and the proxy env is not set there. @thurcombe mind filing a separate bug for that?
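Pending a proper fix, the repos can be switched off in place; a workaround sketch (my assumption, not an official procedure), shown against a scratch copy so the block is runnable as-is - on a node you would target /etc/yum.repos.d/*.repo with sudo:

```shell
# Workaround sketch: flip enabled=1 to enabled=0 in every repo file.
# Scratch copy for illustration; on a node the equivalent would be:
#   sudo sed -i 's/^enabled=1/enabled=0/' /etc/yum.repos.d/*.repo
repo_dir="$(mktemp -d)"
printf '[fedora]\nname=Fedora\nenabled=1\n' > "$repo_dir/fedora.repo"
sed -i 's/^enabled=1/enabled=0/' "$repo_dir"/*.repo
grep '^enabled=' "$repo_dir/fedora.repo"   # prints: enabled=0
```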
That's expected - CVO checks image signatures stored on GCS. Seems to be a docs bug.
Is the cluster installed on GCP, or is this service being run mistakenly (#396)?
Just for info: after our previous discussion regarding a fresh 4.6 install, I noted that the gcp hostname service was listed as a failed unit in my UPI cluster. I figured it was a non-issue but am mentioning it here in case it helps. I'll raise a new defect for the repo problem.
Another issue: apparently the tuned pods cannot mount their ConfigMap volume after an upgrade. The solution is to delete all of those pods; once recreated, they run fine.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hello,
I tried to update 4.5.0-0.okd-2020-10-15-235428 to 4.6.0-0.okd-2020-11-27-200126.
I had multiple issues due to being behind a proxy.
First I allowed storage.googleapis.com through the proxy so the 4.6 image could be validated. The error was:
Unable to apply 4.6.0-0.okd-2020-11-27-200126: the image may not be safe to use
Then I had an issue with rpm-ostree timing out: it was not using the proxy.
I followed coreos/rpm-ostree#762 (comment) to solve it.
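The fix from that comment amounts to a systemd drop-in that gives rpm-ostreed the proxy environment. A sketch of the drop-in follows (proxy.example.com:3128 is a placeholder); it is written to a scratch path here so the block runs anywhere, while on a node the file belongs under /etc/systemd/system/rpm-ostreed.service.d/:

```shell
# Sketch of the rpm-ostreed proxy drop-in (per coreos/rpm-ostree#762).
# proxy.example.com:3128 is a placeholder. On a node, write this to
# /etc/systemd/system/rpm-ostreed.service.d/proxy.conf, then:
#   sudo systemctl daemon-reload && sudo systemctl restart rpm-ostreed
dropin="$(mktemp -d)/proxy.conf"
cat > "$dropin" <<'EOF'
[Service]
Environment=http_proxy=http://proxy.example.com:3128
Environment=https_proxy=http://proxy.example.com:3128
EOF
grep -c '^Environment=' "$dropin"   # prints: 2
```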
Is it possible to manage it via ignition files?
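One plausible way (an assumption on my part, not a confirmed answer) would be to deliver the rpm-ostreed proxy drop-in through a MachineConfig, which the MCO renders into Ignition for the nodes; the object name, role, and URL-encoded proxy address below are placeholders:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-master-rpm-ostreed-proxy   # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.1.0
    storage:
      files:
        - path: /etc/systemd/system/rpm-ostreed.service.d/proxy.conf
          mode: 420
          contents:
            # URL-encoded: [Service] plus Environment=http(s)_proxy lines
            source: data:,%5BService%5D%0AEnvironment=http_proxy=http://proxy.example.com:3128%0AEnvironment=https_proxy=http://proxy.example.com:3128%0A
```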
Can we specify a mirror to use instead of trying them all twice (80/443)?
The last issue was GCP rewriting the hostname.
Fix: sudo hostnamectl set-hostname okdmaster1a.example.com