Repeatedly found and lost service endpoints. #3060
Comments
@JerryChaox does the spring-boot-terminal-manager have probes? Please check the pod logs for failures.
@aledbf I have not set any probes. Is that the root cause?
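For context, probes are declared on the pod spec, not on the Service. A minimal readinessProbe sketch; the container name, port, and health path here are assumptions for illustration, not taken from this thread:

```yaml
# Hypothetical deployment fragment. Without a readinessProbe the pod is
# marked Ready as soon as the container starts; with one, the pod's address
# is only added to the Service endpoints once the probe succeeds.
containers:
- name: spring-boot-terminal-manager   # assumed container name
  image: example/terminal-manager:latest
  ports:
  - containerPort: 8080
  readinessProbe:
    httpGet:
      path: /healthz                   # assumed health endpoint
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
```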
I have the same problem; my web server returns an HTTP 503 error code. Is this a bug?
@aledbf I have added probes to my service and the pods didn't log any failures, but nginx-controller still logs "Service **** does not have any active Endpoint.".
The behaviour is still present in the rook-ceph-mgr-dashboard service.
```
W0925 01:07:25.478161 7 controller.go:359] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
W0925 00:45:53.408222 7 controller.go:359] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
```
Ingress version: 0.17.1
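As a quick triage step (not something suggested in the thread itself), warnings like those above can be tallied per service to see which endpoints are flapping. A minimal self-contained sketch; the sample lines are inlined here, whereas against a real cluster you would feed in the controller logs instead:

```shell
# Tally "does not have any active Endpoint" warnings per service from
# nginx-ingress-controller log output.
cat <<'EOF' > /tmp/controller.log
W0925 01:07:25.478161 7 controller.go:359] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
W0925 00:45:53.408222 7 controller.go:359] Service "ingress-nginx/default-http-backend" does not have any active Endpoint
EOF
# Field 6 of each warning line is the quoted service name.
grep 'does not have any active Endpoint' /tmp/controller.log \
  | awk '{print $6}' | sort | uniq -c
# → 2 "ingress-nginx/default-http-backend"
```

A service that appears with a high count is the one whose endpoints keep disappearing and is worth investigating first.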
same problem. My YAML:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: seafile
---
apiVersion: v1
kind: Service
metadata:
  name: seafile
  namespace: seafile
spec:
  type: ClusterIP
  clusterIP: None
  ports:
  - port: 80
    targetPort: 8000
    protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: seafile
  namespace: seafile
subsets:
- addresses:
  - ip: 119.27.169.191
  ports:
  - port: 8000
    protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: seafile
  namespace: seafile
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: seafile.lomo.cc
    http:
      paths:
      - path: /
        backend:
          serviceName: seafile
          servicePort: 80
```
How do I fix this?
@zhuleiandy888 in your case, please update to 0.19.0. There was an error in 0.15 that triggered reloads (#2636)
@aledbf Thanks! Now it looks like nothing is wrong with version 0.19.0. I'll keep testing it for a while.
This issue still occurs in my case with 0.19.0 |
@JerryChaox check the status of your k8s cluster's nodes and the nodes' most recent events. Make sure the kube-controller-manager service has no error logs and that the nodes are reporting their status to the master normally. To see whether heartbeat timeouts are being reported for a node, check that the "--node-monitor-grace-period" and "--node-monitor-period" settings are reasonable. I hope that helps.
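For reference, those flags live on the kube-controller-manager. A hypothetical static-pod excerpt showing the upstream default values (illustrative, not taken from this cluster):

```yaml
# Hypothetical kube-controller-manager static pod fragment. If nodes flap
# between Ready and NotReady because of heartbeat timeouts, their pods'
# endpoints are removed and re-added, which matches the symptom here.
spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --node-monitor-period=5s          # how often node status is checked
    - --node-monitor-grace-period=40s   # grace before a node is marked unhealthy
```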
@zhuleiandy888 @aledbf I found that the nginx controller lost endpoint info when the kube-controller-manager printed errors as follows:
I have no idea how to debug from this information. Could anyone give me some help?
This helps a lot in understanding the internal mechanism. Recently our cluster has also encountered this problem. During these 503 periods there were no deployment updates, and the health checks were OK.
Our cluster version: 1.6.6. Thanks for the explanation; I will try a newer ingress-controller version. @aledbf One more guess: could the problem be related to an internal endpoints cache, if one exists? Just a guess.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):
What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
NGINX Ingress controller version: 0.19.0
Kubernetes version (use `kubectl version`): 1.11.0

Environment:
- Kernel (e.g. `uname -a`): 3.10.0-693.2.2.el7.x86_64 #1 SMP Tue Sep 12 22:26:13 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

What happened:
The nginx controller repeatedly finds and then loses a service endpoint, so nginx reloads the configuration frequently.
What you expected to happen:
The service endpoints should remain stable.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
My resources (YAML config):