Default deployment of rke2-ingress-nginx has load balancer service enabled in RKE2 1.21.2 #1446
Environmental Info:
RKE2 Version: v1.21.2+rke2r1

Cluster Configuration:
cis-1.6

Describe the bug:
kubectl get services -n kube-system shows a LoadBalancer service for the ingress controller.

Steps To Reproduce:
Install RKE2 1.21.2 and check the services in kube-system.

Expected behavior:
The LoadBalancer service for nginx-ingress should not be there (it was not created in pre-1.21 versions).

Actual behavior:
The LoadBalancer service is there, stuck in Pending.

Comments
Can confirm, the ingress is now successfully wasting one of my IP addresses. Any workarounds?
This comes from the upstream ingress-nginx chart, which ships with service.enabled=true:
https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/values.yaml#L397
We previously shipped an old (no longer supported) version of the chart which had this defaulted to false. If you want the old behavior back, you can provide a rke2-ingress-nginx HelmChartConfig manifest that sets the value to false.
I know how to change it, but I was assuming that RKE2 would want a different default, and maybe also document the change in the release notes? Changing a default behavior should, IMO, be documented.
For those who don't know: you need to add a HelmChartConfig manifest like the one sketched below to /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml.
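The manifest itself was not captured in this thread; a minimal sketch, assuming the rke2-ingress-nginx chart nests the toggle under controller.service.enabled the same way the upstream chart does:

```yaml
# Sketch of /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
# Assumption: the rke2-ingress-nginx chart follows the upstream ingress-nginx
# value layout, where the Service toggle lives at controller.service.enabled.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      service:
        enabled: false
```

RKE2 applies manifests placed in that directory automatically, so nothing further should be needed beyond writing the file.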
Before I did this, not only did I have the extra LoadBalancer service showing up, but my other ingresses showed as pending or incomplete.
Just out of curiosity, why do you have 'extra' ingresses? Are you deploying your own ingress controller alongside the built-in one?
I don't know if this is relevant, but I have just deployed v1.21.3+rke2r1 (fresh install, CentOS 8; previously deployed with rancherd, but I cleaned that off and made a separate VM for Rancher to run in). The default built-in NGINX ingress controller is sitting in a Pending state and none of my L7 Ingress definitions will initialize. Not sure what is going on here.
@brandond sorry, I'm new to the space, so my language may not be precise here. :) What I was trying to describe is the same thing as the OP and as pictured by @JDB1976; I am not using any ingress controller other than the out-of-the-box NGINX one. Modifying the Helm chart settings with service.enabled: false gets rid of the new rke2-ingress-nginx-controller LoadBalancer service (which otherwise stays Pending), but then the ingress pod crashes. Adding publishService.enabled: false stops the crashing. Together, those two settings seem to get us back to the previous behavior (see the values sketch below).
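In HelmChartConfig terms, the combination described above would look something like this as the valuesContent (a sketch; the key paths assume the upstream chart layout):

```yaml
# Hypothetical valuesContent for the rke2-ingress-nginx HelmChartConfig:
# drop the LoadBalancer Service entirely, and stop the controller from
# trying to publish its (now missing) Service address, which is what
# appeared to cause the pod crashes described above.
controller:
  service:
    enabled: false
  publishService:
    enabled: false
```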
Might it be because the cluster deploys the NGINX service as an L4 external load balancer by default? I've just replicated this behavior on my fresh CentOS 8 K8s install, where I deployed a fresh NGINX. Once I reinstalled the chart as a daemonset with Helm and moved the service type to ClusterIP (internal only), all was right in the world again. :) Along the lines of the values sketched below:
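The snippet the commenter pasted did not survive in this thread; a hypothetical reconstruction of the values for the upstream ingress-nginx chart might be:

```yaml
# Hypothetical Helm values for the upstream ingress-nginx chart: run the
# controller as a DaemonSet and expose it on a cluster-internal Service
# instead of an external L4 load balancer.
controller:
  kind: DaemonSet
  service:
    type: ClusterIP
```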
I have not yet looked at this for the RKE2 install, however. CONFIRMED: the default RKE2 deployment of the NGINX ingress controller is a daemonset, but the service is defined as an L4 external load balancer. When I changed it to ClusterIP (internal only), the Pending state disappeared and all my defined ingresses deployed and function now! So maybe there should be a note somewhere to remind us newbies that NGINX deploys its service definition by default as an external L4 load balancer (and, in the upstream chart, as a replica set), and that you need to change it to something conducive to your local environment if you don't have such a load balancer and/or want to run it as a daemonset. I know, RTFM, but there are SO many FMs in this area it's hard to read them all. :)
@erikwilson can you take a look at this, since you worked on the nginx Helm chart most recently? It sounds like the current version of the chart doesn't work without an external LB controller.
Would this be a good time to investigate deploying as a daemonset by default as well? Then extra steps like this won't be necessary: https://rancher.com/docs/rancher/v2.5/en/installation/resources/k8s-tutorials/ha-rke2/#5-configure-nginx-to-be-a-daemonset
It is already a daemonset by default.
We need to understand why this was not caught in our testing/automation. We also need QA to verify what happens on upgrade: will this break functional setups upgrading from 1.20.x? We may need to modify our 1.21.2 and 1.21.3 release notes to call out this regression.
@cjellick QA was only testing with the AWS cloud provider configured; we've asked them to also test without a cloud provider to duplicate standalone environments. We also cleared up some confusion about how to test Ingress resources. rancher/rke2-charts#123 modifies the defaults to match the previously shipped configuration.
Ironically, we just hit this in Rancher provisioning v2: rancher/rancher#33775. EDIT: we first hit it for RKE1 not too long ago: rancher/rancher#30356.
/forwardport v1.22.0+rke2r1 |
Validated on v1.21.3-rc4+rke2r2