backend-protocol https does not work with tlsv1.3 #8257
@bh-tt: This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I am reading https://www.cloudflare.com/learning/ssl/why-use-tls-1.3/ and, if that means that using TLS 1.3 disables the use of TLS 1.2, then backward compatibility breaks. I am not a developer, so a developer needs to comment.
As far as I remember, TLS v1.3 has a lot of compatibility changes, which means it cannot co-exist with TLS v1.2. I might be wrong and need to re-read about it. Have you tried a setting like the one sketched below in your specific Ingress?
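For reference, a minimal sketch of the kind of per-Ingress setting being suggested, assuming the proxy-ssl-protocols annotation (host, service, and resource names are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-tls13-backend    # hypothetical name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # ask nginx to speak only TLSv1.3 to the upstream
    nginx.ingress.kubernetes.io/proxy-ssl-protocols: "TLSv1.3"
spec:
  ingressClassName: nginx
  rules:
  - host: example.local          # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc    # placeholder service
            port:
              number: 443

Note that the proxy-ssl-* annotations are, as far as the docs indicate, only rendered into nginx.conf when proxy-ssl-secret is also set, which may itself be the reason the annotation appears to do nothing.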
The same error happens (nginx returns 502).
I have confirmed I can access the application itself from the pod (using kubectl exec) and via the service. 8081 is the TLS port of the application.
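For completeness, a sketch of the kind of in-pod check described above (the pod name is a placeholder; -k accepts the self-signed certificate):

# directly against the application's TLS port
kubectl exec -it <app-pod> -n intranet -- curl -vk https://localhost:8081/
# via the service (port 443, per the Ingress backend below)
kubectl exec -it <app-pod> -n intranet -- curl -vk https://intranetws.intranet.svc.cluster.local/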
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
I'm also hitting this same issue with a GRPCS backend and …
It might be that the ciphers are incorrect.
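One nginx/OpenSSL detail worth keeping in mind when checking ciphers: proxy_ssl_ciphers (and hence the proxy-ssl-ciphers annotation) sets the cipher list for TLSv1.2 and below, while TLSv1.3 cipher suites are taken from a separate OpenSSL setting, so a TLSv1.3-only backend is unaffected by that directive. A sketch of the distinction at the nginx level:

# applies to TLSv1.2 and below only
proxy_ssl_ciphers HIGH:!aNULL:!MD5;
# TLSv1.3 must be allowed explicitly for the upstream connection;
# its cipher suites are not controlled by proxy_ssl_ciphers
proxy_ssl_protocols TLSv1.2 TLSv1.3;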
#7084 might also be related. It seems that even though I set proxy-ssl-ciphers, it's probably not doing anything, since it only works when …
For gRPC it might also be that …
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
Mark this issue or PR as fresh with /remove-lifecycle stale
Close this issue or PR with /close
Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This works.
So I am closing this issue for now. The original creator of the issue can re-open it if data is posted here that shows a problem in the controller or implies an action item for the project. Thanks. /close
@longwuyuan: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
That's fine; we migrated to Istio quite a while ago, and it supports all of this just fine.
Could this be re-opened? Yes, it works with the configuration-snippet, but that has recently been disabled by default (#10393), and enabling it is not advised due to CVE-2021-25742. This all means that in order to adhere to good security practices (use TLSv1.3), you need to ignore the CVE mitigation by enabling, and using, configuration snippets. It would be ideal if this could be configured without snippets. This issue was mentioned briefly here, but then someone mentioned that there "are open issues" relating to this annotation. As best as I can figure out, this issue is the only one dealing with it, and it has been closed.
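For context, the configuration-snippet workaround being discussed is, as best it can be reconstructed here (a sketch; it requires allow-snippet-annotations: true on the controller, which is exactly the CVE-2021-25742 trade-off mentioned above):

nginx.ingress.kubernetes.io/configuration-snippet: |
  # force the upstream connection to TLSv1.3 for this Ingress's locations
  proxy_ssl_protocols TLSv1.3;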
@Tommyf, can you paste every small detail, using kubectl commands, curl commands, and other commands, on a kind/minikube cluster with backend-protocol set to HTTPS and the proxy-ssl-protocols annotation set to TLSv1.3?
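A rough sketch of the kind of minimal reproduction being asked for (cluster, host, and file names are placeholders):

kind create cluster --name tls13-repro
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
# deploy a backend that only accepts TLSv1.3 (see the server block sketched
# near the end of this issue), then create an Ingress carrying:
#   nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
#   nginx.ingress.kubernetes.io/proxy-ssl-protocols: "TLSv1.3"
curl -vk --resolve example.local:443:127.0.0.1 https://example.local/
# a 502 here, while the backend is reachable directly, reproduces the report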
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
NGINX Ingress controller
Release: v1.1.1
Build: a17181e
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:25:17Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:19:12Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release): "Debian GNU/Linux 11 (bullseye)"
Kernel (e.g. uname -a): 5.10.0-11-amd64 #1 SMP Debian 5.10.92-1 (2022-01-18) x86_64 GNU/Linux
Install tools: kubeadm
Basic cluster related info:
NAME                   STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION    CONTAINER-RUNTIME
k8s-intra-rd-master0   Ready    control-plane,master   174d   v1.23.3   10.247.9.68   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
k8s-intra-rd-master1   Ready    control-plane,master   174d   v1.23.3   10.247.9.69   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
k8s-intra-rd-master2   Ready    control-plane,master   174d   v1.23.3   10.247.9.70   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
k8s-intra-rd-node0     Ready    <none>                 174d   v1.23.3   10.247.9.72   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
k8s-intra-rd-node1     Ready    <none>                 174d   v1.23.3   10.247.9.73   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
k8s-intra-rd-node2     Ready    <none>                 174d   v1.23.3   10.247.9.74   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
k8s-intra-rd-node3     Ready    <none>                 174d   v1.23.3   10.247.9.75   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
k8s-intra-rd-node4     Ready    <none>                 174d   v1.23.3   10.247.9.76   <none>        Debian GNU/Linux 11 (bullseye)   5.10.0-11-amd64   containerd://1.4.12
How was the ingress-nginx-controller installed (helm ls output):
NAME            NAMESPACE       REVISION   UPDATED                                   STATUS     CHART                  APP VERSION
ingress-nginx   ingress-nginx   14         2022-02-07 16:21:08.741712029 +0100 CET   deployed   ingress-nginx-4.0.17   1.1.1
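For reference, a Helm invocation of the kind that would produce the release above (the values file is hypothetical):

helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --version 4.0.17 \
  -f values.yaml   # hypothetical site-specific values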
Current State of the controller:
kubectl describe ingressclasses
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.1.1
helm.sh/chart=ingress-nginx-4.0.17
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Controller: k8s.io/ingress-nginx
Events:
kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAME                                          READY   STATUS    RESTARTS   AGE   IP            NODE
pod/ingress-nginx-controller-7445b7d6dc-z4mvd 1/1     Running   0          10d   10.244.3.49   k8s-intra-rd-node0
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx-controller LoadBalancer 10.106.136.24 10.247.9.80 80:30308/TCP,443:30065/TCP 174d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 10.106.134.166 443/TCP 174d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/ingress-nginx-controller 1/1 1 1 174d controller k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/ingress-nginx-controller-54bfb9bb 0 0 0 84d controller k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=54bfb9bb
replicaset.apps/ingress-nginx-controller-568764d844 0 0 0 35d controller k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=568764d844
replicaset.apps/ingress-nginx-controller-5c8d66c76d 0 0 0 112d controller k8s.gcr.io/ingress-nginx/controller:v1.0.4@sha256:545cff00370f28363dad31e3b59a94ba377854d3a11f18988f5f9e56841ef9ef app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=5c8d66c76d
replicaset.apps/ingress-nginx-controller-7445b7d6dc 1 1 1 10d controller k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7445b7d6dc
replicaset.apps/ingress-nginx-controller-77f4468d76 0 0 0 86d controller k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=77f4468d76
replicaset.apps/ingress-nginx-controller-fd7bb8d66 0 0 0 174d controller k8s.gcr.io/ingress-nginx/controller:v1.0.0@sha256:0851b34f69f69352bf168e6ccf30e1e20714a264ab1ecd1933e4d8c0fc3215c6 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=fd7bb8d66
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Namespace: ingress-nginx
Priority: 0
Node: k8s-intra-rd-node0/10.247.9.72
Start Time: Mon, 07 Feb 2022 17:01:53 +0100
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=7445b7d6dc
Annotations:
Status: Running
IP: 10.244.3.49
IPs:
IP: 10.244.3.49
Controlled By: ReplicaSet/ingress-nginx-controller-7445b7d6dc
Containers:
controller:
Container ID: containerd://a84c1fbcde1a35054a45a2df900a806dbc763aaa9967f21a967c67474a5b03eb
Image: k8s.gcr.io/ingress-nginx/controller:v1.1.1@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
Image ID: k8s.gcr.io/ingress-nginx/controller@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Mon, 07 Feb 2022 17:02:04 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-7445b7d6dc-z4mvd (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9g28q (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-9g28q:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
Normal RELOAD 37m (x9 over 10d) nginx-ingress-controller NGINX reload triggered due to a change in configuration
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.1.1
helm.sh/chart=ingress-nginx-4.0.17
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.106.136.24
IPs: 10.106.136.24
IP: 10.247.9.80
LoadBalancer Ingress: 10.247.9.80
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30308/TCP
Endpoints: 10.244.3.49:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30065/TCP
Endpoints: 10.244.3.49:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Current state of ingress object, if applicable:
kubectl -n <appnnamespace> get all,ing -o wide
kubectl -n <appnamespace> describe ing <ingressname>
Name: intranetws
Labels: app.kubernetes.io/instance=intranetws
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=intranetws
helm.sh/chart=webservice-1.0.31
Namespace: intranet
Address: 10.247.9.80
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" is forbidden: User "bh" cannot get resource "endpoints" in API group "" in the namespace "kube-system">)
TLS:
intra-rd-ingress terminates intranetws.k8s-intra-rd.local
Rules:
Host Path Backends
intranetws.k8s-intra-rd.local
/ intranetws:443 (10.244.2.250:8081,10.244.3.4:8081)
Annotations: meta.helm.sh/release-name: intranetws
meta.helm.sh/release-namespace: intranet
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
Events:
Type Reason Age From Message
Normal Sync 51m (x4 over 102m) nginx-ingress-controller Scheduled for sync
< HTTP/2 502
< date: Fri, 18 Feb 2022 10:33:26 GMT
< content-type: text/html
< content-length: 150
< strict-transport-security: max-age=31536000
< cache-control: no-cache, no-store, must-revalidate
< pragma: no-cache
< referrer-policy: no-referrer
< x-content-type-options: nosniff
< x-frame-options: sameorigin
< x-xss-protection: 1; mode=block
<
502 Bad Gateway
nginx
* Connection #0 to host intranetws.k8s-intra-rd.local left intact
kubectl describe ... of any custom configmap(s) created and in use:
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.1.1
helm.sh/chart=ingress-nginx-4.0.17
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Data
add-headers:
ingress-nginx/custom-headers
allow-snippet-annotations:
true
client-body-buffer-size:
64k
disable-access-log:
true
ssl-protocols:
TLSv1.3
BinaryData
Events:
Type Reason Age From Message
Normal UPDATE 42m nginx-ingress-controller ConfigMap ingress-nginx/ingress-nginx-controller
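A point worth spelling out about this configmap: the ssl-protocols key maps to nginx's ssl_protocols directive, which governs TLS between external clients and the controller; the controller-to-backend connection is governed by proxy_ssl_protocols, which this key does not change. A sketch of the resulting rendered nginx.conf fragments, assuming nginx 1.19.x defaults:

# client <-> controller: set by the ssl-protocols configmap key
ssl_protocols TLSv1.3;

# controller <-> backend: unaffected by ssl-protocols; the nginx 1.19.x
# default omits TLSv1.3, so a TLSv1.3-only backend handshake fails (502)
proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;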
What happened:
We have webservices with self-signed certificates which we would like to access using ingress/nginx. When we enable TLSv1.2 on the webservices everything works, however when it is disabled the following error message occurs:
This happens even when ssl-protocols is set to TLSv1.3 only in the ingress controller config. It appears that ingress uses TLSv1.2 no matter the settings.
What you expected to happen:
We expected ingress/nginx to use TLSv1.3 if the application supports it and when configured to only use TLSv1.3.
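A quick way to check this expectation is to probe the backend endpoint directly; a sketch, assuming openssl 1.1.1+ is available somewhere with reach to the pod network (the endpoint address is taken from the Ingress description above):

# should succeed if the backend really accepts TLSv1.3
openssl s_client -connect 10.244.2.250:8081 -tls1_3 </dev/null
# should fail against a TLSv1.3-only backend; this mirrors what the
# controller attempts when proxy_ssl_protocols excludes TLSv1.3
openssl s_client -connect 10.244.2.250:8081 -tls1_2 </dev/null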
How to reproduce it:
I'm assuming this can be reproduced using any application which supports only TLSv1.3, but I have not yet tried it.
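One way to build such an application for a reproduction attempt is a plain nginx server restricted to TLSv1.3 (a sketch; certificate paths and names are hypothetical):

server {
    listen 8081 ssl;
    server_name example.local;              # placeholder
    ssl_certificate     /etc/tls/tls.crt;   # self-signed, mounted from a Secret
    ssl_certificate_key /etc/tls/tls.key;
    ssl_protocols       TLSv1.3;            # reject TLSv1.2 and below
    location / {
        return 200 "tls13-only backend\n";
    }
}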
Anything else we need to know: