- If Docker is not installed on the target host, pai will install the latest docker-ce.
- Change the hosts file at `/etc/hosts` (see the example after this list)
    - The Script Link
    - Because the hadoop services are deployed in a host-networking environment, certain entries in the hosts file can break hadoop's name resolution, so the deployment adjusts `/etc/hosts` before starting the services.
- Prepare the kubelet environment and start kubelet through Docker
- Prepare the kubelet environment according to the node role
- The Python Module To Deploy PAI
- The Environment Preparation Script Link
- The Starting Script Link
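
For the hosts-file change above, one common problem case looks like the fragment below. This is only an illustrative example (the hostname and IP are hypothetical), not the exact edit pai's script performs: a default entry that maps the machine's hostname to a loopback address can make the hadoop daemons, which run with host networking, resolve and advertise a loopback address to their peers, so the hostname should resolve to the node's real IP instead.

```
# Illustrative /etc/hosts fragment; node-001 and 10.0.0.1 are hypothetical.
#
# A distribution default such as the following maps the hostname to a
# loopback address and can break hadoop's name resolution:
#     127.0.1.1   node-001
#
# Resolve the hostname to the node's real address instead:
127.0.0.1   localhost
10.0.0.1    node-001
```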
- Why does this issue happen on your cluster?
    - Because the disk usage has reached the limit that we set through kubelet.
    - To change the threshold according to your environment, modify the line `--eviction-hard="memory.available<5%,nodefs.available<5%,imagefs.available<5%,nodefs.inodesFree<5%,imagefs.inodesFree<5%"` in kubelet.sh.template, as sketched below.
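
As a rough sketch of where that flag lives: kubelet.sh.template launches kubelet inside a Docker container and passes the eviction thresholds on the command line. Everything below except the `--eviction-hard` values is an assumption for illustration (image tag, flags, mounts), not the actual contents of pai's template.

```sh
# Hypothetical sketch; only the --eviction-hard values come from the text above.
docker run -d \
  --name kubelet \
  --net=host \
  --pid=host \
  --privileged \
  --restart=always \
  -v /etc/kubernetes:/etc/kubernetes \
  -v /var/lib/kubelet:/var/lib/kubelet:shared \
  gcr.io/google_containers/hyperkube:v1.9.9 \
  /hyperkube kubelet \
    --eviction-hard="memory.available<5%,nodefs.available<5%,imagefs.available<5%,nodefs.inodesFree<5%,imagefs.inodesFree<5%"

# Raising the 5% thresholds makes eviction trigger earlier; lowering them
# lets pods keep running closer to a full disk.
```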
- Some useful references:
- Please check whether you can access `gcr.io` or not. If you cannot access `gcr.io`, you will fail to pull the kubernetes images.
- In kubernetes-configuration.yaml, you can find a field named `docker-registry`. This field sets the docker registry used in the k8s deployment. To use the official k8s Docker images, set this field to `gcr.io/google_containers`; the deployment process will then pull the Kubernetes components' image from `gcr.io/google_containers/hyperkube`. But if you cannot access it, you can also set the docker registry to `docker.io/openpai`, which is maintained by pai. A sketch of this setting is shown below.
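
As a rough illustration of that setting, the relevant part of kubernetes-configuration.yaml might look like the snippet below. Only the `docker-registry` key and the two registry values are taken from the text above; the surrounding `kubernetes:` section name is an assumption.

```yaml
# Hypothetical excerpt of kubernetes-configuration.yaml; only docker-registry
# and the two registry values come from the documentation above.
kubernetes:
  # Registry used to pull the Kubernetes component images (hyperkube etc.).
  docker-registry: gcr.io/google_containers
  # If gcr.io is not reachable from your cluster, switch to the mirror
  # maintained by pai instead:
  # docker-registry: docker.io/openpai
```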