machineController.provider field is not respected when deploying machine-controller #765
This actually makes things a little complicated: if you manually provide these secrets afterwards and patch the `machine-controller` and `machine-controller-webhook` deployments, everything is overwritten on each upgrade. So as a workaround, you need to provide the secret and redo the patching manually after every upgrade.
A quick fix for this problem is to create a secret with the following specs:

```yaml
# machine-controller-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: machine-controller-openstack
  namespace: kube-system
type: Opaque
data:
  OS_AUTH_URL: XXX
  OS_USER_NAME: XXX
  OS_PASSWORD: XXX
  OS_DOMAIN_NAME: XXX
  OS_TENANT_NAME: XXX
  OS_TENANT_ID: XXX
  OS_REGION_NAME: XXX
```

The secret can be created directly, and you only have to provide it once, because it does not get overwritten. Additionally, you need two patch files:

```yaml
# envfrom-controller-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: machine-controller
          envFrom:
            - secretRef:
                name: machine-controller-openstack
```

```yaml
# envfrom-webhook-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: machine-controller-webhook
          envFrom:
            - secretRef:
                name: machine-controller-openstack
```

After this you can patch the deployments by running:

```sh
kubectl -n kube-system patch deploy machine-controller --patch "$(cat envfrom-controller-patch.yaml)"
kubectl -n kube-system patch deploy machine-controller-webhook --patch "$(cat envfrom-webhook-patch.yaml)"
```

You will have to do this after every update.
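One caveat with the secret above: Kubernetes expects the values under `data:` to be base64-encoded. If plain-text values are easier to manage, the same secret can be written with `stringData`, which the API server encodes into `data` on creation; a minimal sketch of that variant:

```yaml
# machine-controller-secret.yaml (stringData variant)
apiVersion: v1
kind: Secret
metadata:
  name: machine-controller-openstack
  namespace: kube-system
type: Opaque
stringData: # plain-text values; encoded to data by the API server
  OS_AUTH_URL: XXX
  OS_USER_NAME: XXX
  OS_PASSWORD: XXX
  OS_DOMAIN_NAME: XXX
  OS_TENANT_NAME: XXX
  OS_TENANT_ID: XXX
  OS_REGION_NAME: XXX
```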
It was a mistake to provide two fields for the cloud provider. I suggest that in v1beta1 we drop machineController.provider.
@mkjoerg is it necessary to patch the deployments while KubeOne is provisioning? That seems a bit like a chicken-and-egg problem: if I run KubeOne, which then fails (as expected), patch, and rerun KubeOne, the deployments are reverted by KubeOne itself. I would need to patch "while" KubeOne is running, as soon as the deployments are created.

Another solution was to rebuild the machine-controller image and put it on the control-plane nodes before starting KubeOne provisioning. KubeOne will then use the pre-existing machine-controller images, which have the cert built in (workaround).
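As a minimal sketch of that rebuild, assuming the webhook CA certificate is available locally as `ca.crt`; the image tag, cert path, and node address are illustrative, not the exact ones used:

```sh
# Build a machine-controller image with the CA certificate baked in.
# Go binaries on Linux typically read CA certs from /etc/ssl/certs.
cat > Dockerfile <<'EOF'
FROM kubermatic/machine-controller:v1.1.0
COPY ca.crt /etc/ssl/certs/webhook-ca.pem
EOF
docker build -t kubermatic/machine-controller:v1.1.0 .

# Pre-load the image on each control-plane node so KubeOne finds it already present.
docker save kubermatic/machine-controller:v1.1.0 | ssh root@<control-plane-node> docker load
```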
@ivomarino I just disabled the machine-deployment while installing (removed the Terraform output so it is not in the tf.json) and then let KubeOne run successfully. After this, I patched the controllers, and when everything was finished I applied a prepared machine-deployment, as sketched below.
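For illustration only (not the exact manifest used), a prepared machine-deployment is a MachineDeployment manifest applied once the cluster is up; names, replica count, and kubelet version below are placeholders:

```yaml
# machine-deployment.yaml -- illustrative skeleton for the OpenStack provider
apiVersion: cluster.k8s.io/v1alpha1
kind: MachineDeployment
metadata:
  name: example-workers
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      workerset: example-workers
  template:
    metadata:
      labels:
        workerset: example-workers
    spec:
      providerSpec:
        value:
          cloudProvider: openstack
          cloudProviderSpec: {} # flavor, image, network, ... (provider-specific)
          operatingSystem: ubuntu
          operatingSystemSpec: {}
      versions:
        kubelet: 1.14.2
```

Applied with `kubectl -n kube-system apply -f machine-deployment.yaml`.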
@mkjoerg thanks for this info. How did you deploy the prepared machine-deployment afterwards? Thanks
We are discussing how that field should be handled in #828, so I'm going to close this issue.
What happened:

Setting a different provider for machine-controller using the `.machineController.provider` key doesn't seem to work at all. Even if the key is set, credentials are deployed based on `.cloudProvider.name`.

What is the expected behavior:

The secret and environment variable bindings for machine-controller are created respecting `.machineController.provider`.

How to reproduce the issue:

The common use case is using the `none` provider, such as:
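A sketch of one such configuration (the original snippet is not preserved here; this assumes the v1alpha1-era KubeOneCluster manifest, with `cloudProvider` set to `none` while machine-controller targets OpenStack, and illustrative values throughout):

```yaml
# config.yaml -- cloudProvider "none", worker nodes managed against OpenStack
apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
versions:
  kubernetes: '1.14.2'
cloudProvider:
  name: 'none'
machineController:
  deploy: true
  provider: 'openstack'
```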
Information about the environment:

KubeOne version (`kubeone version`): master @63afc8f