Kubeadm multi-master cluster: DNS resolution not working #1415
Reported by @abizake.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version): 1.12.1
Environment:
- Kubernetes version (use kubectl version): 1.12.1
- Kernel (uname -a): 3.10.0-957.5.1.el7.x86_64
- Docker version: 18.6.2
What happened?
When testing whether DNS resolution works on a newly set up Kubernetes cluster, I run:
kubectl exec -ti busybox -- nslookup kubernetes.default
It gives the following output:
Server:    10.0.0.10
Address 1: 10.0.0.10
nslookup: can't resolve 'kubernetes.default'
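For context, a test pod like the busybox one above can be created along these lines (a sketch, not necessarily the reporter's exact steps; the busybox:1.28 tag is an assumption, commonly used for DNS debugging because nslookup in newer busybox images gives misleading results):

```sh
# Start a long-running busybox pod to run DNS lookups from
# (image tag is an assumption; 1.28 is commonly recommended for this check)
kubectl run busybox --image=busybox:1.28 --restart=Never -- sleep 3600

# Run the lookup from inside the pod
kubectl exec -ti busybox -- nslookup kubernetes.default
```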
What you expected to happen?
kubectl exec -ti busybox -- nslookup kubernetes.default
should resolve kubernetes.default to the cluster IP of the kubernetes service.
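For reference, a successful lookup would look roughly like the following; the 10.0.0.1 address is an assumption inferred from the 10.0.0.10 cluster DNS IP seen above, not output from this cluster:

```text
Server:    10.0.0.10
Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.0.0.1 kubernetes.default.svc.cluster.local
```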
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?
Kube-proxy logs (also on the master nodes):
I0219 10:56:34.395729 1 server_others.go:189] Using ipvs Proxier.
W0219 10:56:34.396299 1 proxier.go:343] IPVS scheduler not specified, use rr by default
I0219 10:56:34.396803 1 server_others.go:216] Tearing down inactive rules.
E0219 10:56:34.514331 1 proxier.go:430] Failed to execute iptables-restore for nat: exit status 1 (iptables-restore: line 7 failed)
I0219 10:56:34.523628 1 server.go:447] Version: v1.12.1
I0219 10:56:34.546089 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0219 10:56:34.546246 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0219 10:56:34.546396 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0219 10:56:34.546478 1 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0219 10:56:34.547102 1 config.go:102] Starting endpoints config controller
I0219 10:56:34.547123 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0219 10:56:34.547165 1 config.go:202] Starting service config controller
I0219 10:56:34.547179 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0219 10:56:34.647319 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0219 10:56:34.647324 1 controller_utils.go:1034] Caches are synced for service config controller
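Since the log shows the IPVS proxier together with an iptables-restore failure during teardown of inactive rules, the following diagnostics (not part of the original report; they assume ipvsadm and ipset are installed on the node) would show whether kube-proxy actually programmed the DNS service:

```sh
# List IPVS virtual servers; the cluster DNS service (10.0.0.10:53) should appear
ipvsadm -Ln

# Confirm the kernel modules the IPVS proxier needs are loaded
lsmod | grep -e ip_vs -e nf_conntrack

# kube-proxy's IPVS mode also relies on ipset-based match sets
ipset list | head -20
```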
Kubeadm config file:
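The config file itself is missing above. Purely as an illustration of where the IPVS mode seen in the logs would be set, a minimal kubeadm v1alpha3 config might look like the sketch below; the endpoint, subnet, and version values are placeholders and assumptions, not the reporter's real settings:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.1
controlPlaneEndpoint: "LOAD_BALANCER_ENDPOINT:6443"  # placeholder for the HA endpoint
networking:
  serviceSubnet: "10.0.0.0/16"  # assumed from the 10.0.0.10 cluster DNS IP
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
```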
Kube-dns logs:
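The kube-dns logs are likewise missing above. They can be collected with the commands below, assuming the default k8s-app=kube-dns label, which both kube-dns and the CoreDNS deployment (the default in v1.12) carry:

```sh
# Locate the DNS pods
kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide

# Fetch their logs
kubectl -n kube-system logs -l k8s-app=kube-dns
```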
Please advise. It seems I am missing some important configuration while setting up the cluster.