
machineController.provider field is not respected when deploying machine-controller #765

Closed
xmudrii opened this issue Dec 18, 2019 · 8 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@xmudrii
Member

xmudrii commented Dec 18, 2019

What happened:

Setting a different provider for machine-controller using the .machineController.provider key doesn't work at all. Even if the key is set, the credentials secret and environment variable bindings are deployed based on .cloudProvider.name.

What is the expected behavior:

The expected behavior is that the secret and environment variable bindings for machine-controller are created respecting .machineController.provider.

How to reproduce the issue:

A common use case is setting cloudProvider.name to none while machine-controller targets another provider, for example:

apiVersion: kubeone.io/v1alpha1
kind: KubeOneCluster
name: demo-cluster
versions:
  kubernetes: 1.16.1
cloudProvider:
  name: none
  external: true
machineController:
  deploy: true
  provider: openstack

Information about the environment:
KubeOne version (kubeone version): master@63afc8f

@xmudrii xmudrii added the kind/bug Categorizes issue or PR as related to a bug. label Dec 18, 2019
@xmudrii xmudrii changed the title machineController.provider field is not respected by KubeOne machineController.provider field is not respected when deploying machine-controller Dec 18, 2019
@containerpope

This actually makes things a bit complicated: if you manually provide these secrets afterwards and patch the machine-controller and machine-controller-webhook deployments, everything is overwritten every time you upgrade. So as a workaround, you need to provide the secret and apply the patches manually after each upgrade.

@containerpope

A quick fix for this problem is to create a Secret with the following spec:

# machine-controller-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: machine-controller-openstack
  namespace: kube-system
type: Opaque
data:
  OS_AUTH_URL: XXX
  OS_USER_NAME: XXX
  OS_PASSWORD: XXX
  OS_DOMAIN_NAME: XXX
  OS_TENANT_NAME: XXX
  OS_TENANT_ID: XXX
  OS_REGION_NAME: XXX

The secret can be created directly, and you only have to provide it once, because it does not get overwritten by KubeOne.
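If you prefer not to base64-encode the values by hand (the data: fields above must hold base64-encoded strings), the same secret can also be created imperatively. This is only a sketch; the XXX placeholders stand for your actual OpenStack credentials:

kubectl -n kube-system create secret generic machine-controller-openstack \
  --from-literal=OS_AUTH_URL=XXX \
  --from-literal=OS_USER_NAME=XXX \
  --from-literal=OS_PASSWORD=XXX \
  --from-literal=OS_DOMAIN_NAME=XXX \
  --from-literal=OS_TENANT_NAME=XXX \
  --from-literal=OS_TENANT_ID=XXX \
  --from-literal=OS_REGION_NAME=XXX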

Additionally you need two patch files:

# envfrom-controller-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: machine-controller
        envFrom:
          - secretRef:
              name: machine-controller-openstack
# envfrom-webhook-patch.yaml
spec:
  template:
    spec:
      containers:
      - name: machine-controller-webhook
        envFrom:
          - secretRef:
              name: machine-controller-openstack

After this you can patch the deployments by running:

kubectl -n kube-system patch deploy machine-controller --patch "$(cat envfrom-controller-patch.yaml)"
kubectl -n kube-system patch deploy machine-controller-webhook --patch "$(cat envfrom-webhook-patch.yaml)"

You will have to do this after every update.
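Since the patches have to be re-applied after every upgrade, one option is to bundle the steps into a small script and run it once KubeOne has finished. A minimal sketch, assuming the file names from above (re-applying the secret is harmless because kubectl apply is idempotent):

#!/bin/sh
# reapply-machine-controller-patches.sh (the name is just an example)
set -e
kubectl -n kube-system apply -f machine-controller-secret.yaml
kubectl -n kube-system patch deploy machine-controller --patch "$(cat envfrom-controller-patch.yaml)"
kubectl -n kube-system patch deploy machine-controller-webhook --patch "$(cat envfrom-webhook-patch.yaml)"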

@kron4eg
Member

kron4eg commented Feb 18, 2020

It was a mistake to provide two fields for the cloud provider; I suggest we drop machineController.provider in v1beta1.

@ivomarino

@mkjoerg is it necessary to patch the deployments while KubeOne is provisioning? It seems a bit like a chicken-and-egg problem. If I run KubeOne and it fails (which is expected), then patch and rerun KubeOne, the deployments are reverted by KubeOne itself. I would need to patch "while" KubeOne is running, as soon as the kubeconfig is available.

Another solution was to rebuild the machine-controller image like this:

FROM docker.io/kubermatic/machine-controller:v1.8.0
COPY ./ca-certificates/SubCA02_2.crt /usr/local/share/ca-certificates/
USER root
RUN update-ca-certificates
USER nobody

and put it on the control-plane nodes before starting KubeOne provisioning. KubeOne will then use the pre-existing machine-controller image, which has the cert built in (workaround).
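For anyone trying the same workaround, a rough sketch of building the patched image and copying it to a control-plane node (the host name is a placeholder, and this assumes the nodes run Docker):

# run from the directory containing the Dockerfile above
docker build -t docker.io/kubermatic/machine-controller:v1.8.0 .
# repeat for each control-plane node
docker save docker.io/kubermatic/machine-controller:v1.8.0 | ssh root@<control-plane-node> 'docker load'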

@containerpope

@ivomarino I just disabled the machine deployment while installing (removed the Terraform output so it is not in the tf.json) and then let KubeOne run successfully. After this, I patch the controllers, and when everything is finished I apply a prepared MachineDeployment.

@ivomarino

@mkjoerg thanks for this info, how did you deploy the prepared machine-deployment afterwards? Thanks

@xmudrii
Member Author

xmudrii commented Apr 3, 2020

We are discussing how that field should be handled in #828, so I'm going to close this issue.
/close

@kubermatic-bot
Contributor

@xmudrii: Closing this issue.

In response to this:

We are discussing how that field should be handled in #828, so I'm going to close this issue.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
