Conversation
If you want to test it with Vagrant, you can use this project: https://github.com/FujitsuEnablingSoftwareTechnologyGmbH/kube-deploy-vagrant. It creates a cluster with a master and a node.
I tested it and also checked the network settings. Everything seems to work and be set up properly. Finally, we have a starting point. Yippee. The first step is to upgrade CNI to the latest version.
Nice. I have a PR up for CNI that makes it compile on arm64/ppc64le too. @zreigz I checked out your patches and noticed that instead of downloading CNI at runtime on the host, we should package the latest binaries in hyperkube. @mikedanese already sent a patch for this, but it seems like the version now is too old to be useful. Generally, I think everything that hyperkube needs should be packed inside the container. Good that we've taken steps forward with this :)

We should also package a default configuration for flannel into hyperkube, but probably not at the default location.
```diff
@@ -18,7 +18,7 @@
 kube::multinode::main(){
   LATEST_STABLE_K8S_VERSION=$(kube::helpers::curl "https://storage.googleapis.com/kubernetes-release/release/stable.txt")
-  K8S_VERSION=${K8S_VERSION:-${LATEST_STABLE_K8S_VERSION}}
+  K8S_VERSION=v1.3.0-beta.1
```
Please leave as-is.
Instead, we should provide two "routes" of configuration as long as k8s supports v1.2, i.e. we may remove the current docker-bootstrap solution when v1.4 is released.
Yes, I will continue with this. Something new should be done by the end of the week.
We are blocked until CNI is updated in hyperkube, right?
Yes, but I can make some changes to support both solutions (split the code into cni and bootstrap scripts, extract common code, some code refactoring, etc.).
By default, docker-bootstrap will be enabled for now.
PTAL |
I suggest switching between bootstrap and cni based on the Kubernetes version. In other words: if the user is running Kubernetes 1.3, then they must use cni. This way we get rid of docker-bootstrap with the next version. I would also name the files so that it indicates how they belong together (cni is just an implementation detail).
@luxas PTAL |
```shell
  kube::multinode::restart_docker_systemd
else
  DOCKER_CONF="/etc/default/docker"
  sed -i.bak 's/^\(MountFlags=\).*/\1shared/' $DOCKER_CONF
```
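As a side note, the sed rewrite above can be exercised safely against a throwaway file instead of the live docker config. A minimal sketch (the file contents below are made up for the demo):

```shell
# Demo only: run the same MountFlags rewrite against a temp file,
# not the real /etc/default/docker or a systemd unit.
DEMO_CONF=$(mktemp)
printf 'LimitNOFILE=1048576\nMountFlags=slave\n' > "${DEMO_CONF}"
sed -i.bak 's/^\(MountFlags=\).*/\1shared/' "${DEMO_CONF}"
RESULT=$(grep '^MountFlags=' "${DEMO_CONF}")
echo "${RESULT}"
rm -f "${DEMO_CONF}" "${DEMO_CONF}.bak"
```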
MountFlags is a systemd option, not a docker option.
We could implement it like this:
Or maybe we should just remove the flag...
I ran manual mount tests and hyperkube. Everything seems to work without the mount flag. Also, I found a somewhat related bug: docker/machine#3029
I think it is better to remove/comment out the line in the systemd unit file.
Setting `shared` for this flag shows clear intention and purpose and doesn't require any comments. If we get rid of this flag, it can be unclear to others and we would have to add additional comments.
Ok, as long as it is set by the docker installer, we should leave it. But you will remove the following code, right?

```shell
DOCKER_CONF="/etc/default/docker"
sed -i.bak 's/^\(MountFlags=\).*/\1shared/' $DOCKER_CONF
```
Yes, for this one it doesn't make sense because it is not a service file.
@zreigz First off, thanks for writing this! I'm excited to begin using CNI now, but there are a few blockers. I guess kubernetes/kubernetes#28673 is one of them. I suppose you've run this with your own, patched hyperkube images when testing this? Sorry, but there's too much duplicated code here. I made a first pass at the script and ended up with something like:

```shell
kube::cni::restart_docker(){
  if kube::helpers::command_exists systemctl; then
    DOCKER_CONF=$(systemctl cat docker | head -1 | awk '{print $2}')
    # If we can find MountFlags but not MountFlags=shared, set MountFlags to shared
    if [[ ! -z $(grep "MountFlags" ${DOCKER_CONF}) && -z $(grep "MountFlags=shared" ${DOCKER_CONF}) ]]; then
      sed -i.bak 's/^\(MountFlags=\).*/\1shared/' ${DOCKER_CONF}
      systemctl daemon-reload
      systemctl restart docker
      kube::log::status "Restarted docker with the new flannel settings"
    fi
  fi
}
```

No more. Again, we should keep the code here to an absolute minimum. I'll upload my branch as soon as I have CNI working; I haven't got it fully working yet. Logs:
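The guard in that function (rewrite only when a MountFlags line is present but not already `shared`) can be checked in isolation. A minimal POSIX sketch, using `grep -q` instead of the `[[ ]]` test and temp files instead of the real unit file (helper name is made up):

```shell
# Returns success only when the file has a MountFlags line that is not
# already 'shared' (same condition as the function above, POSIX form).
needs_rewrite() {
  grep -q "MountFlags" "$1" && ! grep -q "MountFlags=shared" "$1"
}

A=$(mktemp); echo 'MountFlags=slave'  > "${A}"
B=$(mktemp); echo 'MountFlags=shared' > "${B}"
C=$(mktemp); echo 'TimeoutSec=0'      > "${C}"
needs_rewrite "${A}" && R1=yes || R1=no
needs_rewrite "${B}" && R2=yes || R2=no
needs_rewrite "${C}" && R3=yes || R3=no
echo "${R1} ${R2} ${R3}"   # slave file: yes, already shared: no, no flag: no
rm -f "${A}" "${B}" "${C}"
```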
I think this might be related to an upstream issue (kubernetes/kubernetes#28178). However, how is your docker daemon configured? I might have to turn off …
Thanks for the review. I will check my code again and try to get rid of the duplicates. I hope we make it better during the review process. I use the default settings for docker daemon startup. Officially we still don't have a hyperkube with CNI support, but unofficially we use an image we prepared for this purpose: So for the next iteration I will make the code as simple as possible and rebase it.
@zreigz Have you come across the errors I got above? What options are you passing to docker, and what's FLANNEL_IPMASQ when you're running it?
@luxas do you mean the duplicated code between docker-bootstrap.sh and cni-plugin.sh? My idea was that we do not touch docker-bootstrap.sh in the future (only important bugfixes). The whole file will be dropped with the 1.4 release, so the duplicated code will not hurt that much. (Under this assumption it might simplify the code.) But I am fine with refactoring, too.
```shell
# Utility functions for Kubernetes in docker setup and for cni network plugin.

kube::cni::restart_docker(){
```
Maybe we could rename this to `ensure_shared_mount`.
Got it working, but a few gotchas:
DNS: we launch kubelet with the --cluster-dns flag. This value is added to … I didn't test it, but skydns has a Restart … BTW: @zreigz is on holiday this week.
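For background on the --cluster-dns flow: kubelet writes that value as a `nameserver` entry into each pod's /etc/resolv.conf. A minimal sketch with a made-up DNS IP, using a temp file standing in for a pod's resolv.conf:

```shell
# Hypothetical values; a temp file stands in for a pod's /etc/resolv.conf.
CLUSTER_DNS="10.0.0.10"
RESOLV=$(mktemp)
printf 'nameserver %s\nsearch default.svc.cluster.local\n' "${CLUSTER_DNS}" > "${RESOLV}"
# Extract the nameserver the pod would use, the same way one would verify it.
FOUND=$(awk '$1 == "nameserver" {print $2; exit}' "${RESOLV}")
echo "${FOUND}"
rm -f "${RESOLV}"
```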
I am back. PTAL
@zreigz could you verify that pods cannot resolve public internet names without CNI?
```shell
kube::helpers::parse_version ${K8S_VERSION}

if [[ ${USE_CNI} == "true" && \
```
Just put `${USE_CNI} == true` here.
You mean without checking the version and architecture?
Yes, I'll add that later. CNI should be automatically chosen with version v1.4.0-alpha.2 and above when we consider it stable.
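A sketch of that version switch, assuming semver-like tags such as v1.3.0 and v1.4.0-alpha.2 (the helper name is hypothetical, not the PR's actual parser):

```shell
# Hypothetical helper: print 'true' when the version is v1.4.0 or newer,
# ignoring pre-release suffixes like '-alpha.2'.
use_cni_for() {
  ver=${1#v}                 # strip leading 'v'
  major=${ver%%.*}
  rest=${ver#*.}
  minor=${rest%%.*}
  minor=${minor%%-*}         # drop any '-alpha'/'-beta' suffix
  if [ "${major}" -gt 1 ] || { [ "${major}" -eq 1 ] && [ "${minor}" -ge 4 ]; }; then
    echo true
  else
    echo false
  fi
}

OLD=$(use_cni_for v1.3.0)
NEW=$(use_cni_for v1.4.0-alpha.2)
echo "${OLD} ${NEW}"   # v1.3.0: false, v1.4.0-alpha.2: true
```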
Today I will run some extra tests to verify/fix the mentioned issues.
```shell
systemctl daemon-reload
systemctl restart docker

kube::log::status "Restarted docker with the new flannel settings"
```
This should be `Restarted docker with MountFlags=shared`.
Great! I'll try to get this through today!
@cheld @luxas Regarding DNS: I've started the cluster in docker-bootstrap mode. I've created some pods and entered one of them: … In the console I've executed …
Regarding:
I've added a check for it in cni-plugin.sh.
Regarding this:
I can confirm the problem. I can take this on today.
I've pushed new code with a workaround for the cni0 bridge, but I think the problem should be resolved in the CNI plugin code itself. I will try to take a look at that code; maybe I will be able to propose something. Right now the script uses …
```shell
}

# Install network utils: ifconfig, brctl
kube::multinode::install_network_utils() {
  case "${lsb_dist}" in
```
Please just check for yum or apt-get and install the packages if one of those package managers exists. I'd like to get rid of the OS-specific code.
Good point, I will do it.
Yes, it clearly should be solved in the CNI code itself.
Yes, sure.
Thanks! LGTM. We can iterate more on this in new PRs.
@luxas You mentioned getting rid of OS distribution dependencies. If you don't mind, I can take this task. What do you think?
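The package-manager detection luxas suggests could be as small as probing for the binary. A sketch (package names are assumptions; the install line is shown but not executed here):

```shell
# Detect which package manager is available instead of branching on lsb_dist.
detect_pkg_manager() {
  if command -v apt-get >/dev/null 2>&1; then
    echo apt-get
  elif command -v yum >/dev/null 2>&1; then
    echo yum
  else
    echo none
  fi
}

PKG=$(detect_pkg_manager)
echo "Detected: ${PKG}"
# The install step would then be, e.g.:
#   ${PKG} install -y bridge-utils net-tools
# (bridge-utils provides brctl, net-tools provides ifconfig)
```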
Sounds good. We like to keep them as few as possible.
Ok, thanks. I will create a PR for it after the weekend.
This solution gets rid of the docker-bootstrap service and uses the CNI plugin instead.
The newest hyperkube must be used for this solution.
Test results: