This repository was archived by the owner on Nov 3, 2023. It is now read-only.

Add docs showing how to validate images are loaded #132

Merged · 3 commits · Jul 1, 2022
10 changes: 9 additions & 1 deletion .github/workflows/pull_request.yaml
@@ -8,6 +8,8 @@ on:

env:
GO_VERSION: "1.16"
CRI_DOCKERD_VERSION: "0.2.3"
CRI_DOCKERD_DEB_VERSION: "0.2.3.3-0"

jobs:
test-unit:
@@ -58,6 +60,7 @@ jobs:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $USER $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl wait --for=condition=ready --timeout=30s node --all
kubectl get nodes -o wide
@@ -87,18 +90,23 @@ jobs:
sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo swapoff -a
# Install cri-dockerd
curl -fsSLo /tmp/cri-dockerd.amd64.deb https://github.com/Mirantis/cri-dockerd/releases/download/v${CRI_DOCKERD_VERSION}/cri-dockerd_${CRI_DOCKERD_DEB_VERSION}.ubuntu-focal_amd64.deb
# Note: Default docker setup (cgroupfs) is incompatible with default kubelet (systemd) so one has to be changed
# since k8s recommends against cgroupfs, we'll use systemd
sudo sh -c "echo '{\"exec-opts\": [\"native.cgroupdriver=systemd\"]}' > /etc/docker/daemon.json"
sudo systemctl restart docker
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
# Note, package deps in cri-dockerd missing moby-containerd as option
sudo dpkg --force-depends -i /tmp/cri-dockerd.amd64.deb
docker info
- sudo kubeadm init -v 5 || (sudo journalctl -u kubelet; exit 1)
+ sudo kubeadm init -v 5 --cri-socket unix:///var/run/cri-dockerd.sock || (sudo journalctl -u kubelet; exit 1)
mkdir -p $HOME/.kube/
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $USER $HOME/.kube/config
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl wait --for=condition=ready --timeout=30s node --all
kubectl get nodes -o wide
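The workflow change above switches `kubeadm init` to the cri-dockerd socket. As an optional sanity check (a sketch using only standard `kubectl`; no assumptions beyond a working kubeconfig), you can confirm each node registered with the expected runtime:

```shell
# Print each node's name and the container runtime it reports to the kubelet;
# with cri-dockerd in place this should show a docker:// runtime version.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
```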
74 changes: 74 additions & 0 deletions docs/validation.md
@@ -0,0 +1,74 @@
# Validating BuildKit CLI for kubectl

While this CLI can be used for CI use cases where the images are always pushed to a registry, the primary
use case is developer scenarios where the image is immediately available within the Kubernetes cluster.
The example here can be used to validate that everything is set up properly and that you're able to build images
and run pods with those images.

## Remove any prior default builder
```
kubectl buildkit rm buildkit
```
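If you want to confirm the old builder is actually gone before rebuilding, a quick check is to look for leftover builder pods (the `app=buildkit` label selector here is an assumption about how the default builder is labeled; adjust it to match your cluster):

```shell
# List any pods that still belong to the previous builder; an empty result
# means the removal completed. The label selector is an assumption, not a guarantee.
kubectl get pods -l app=buildkit
```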

## Build a simple image
We'll first build a simple image that prints "Success!" when it runs. We won't tag it `latest`, since that would cause Kubernetes to try to pull it from a registry.
```
cat << EOF | kubectl build -t acme.com/some/image:1.0 -f - .
FROM busybox
ENTRYPOINT ["echo", "Success!"]
EOF
```
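If you want to verify that the image actually landed in the node's container runtime, a sketch (assuming a single-node cluster backed by dockerd, run on the node itself):

```shell
# Ask the local dockerd whether the freshly built image is present;
# on containerd-backed nodes use `crictl images` instead.
docker images acme.com/some/image
```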

## Run a Job with the image we just built
```
cat << EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
name: testjob1
spec:
template:
spec:
restartPolicy: Never
containers:
- name: jobc
image: acme.com/some/image:1.0
imagePullPolicy: Never
EOF
```
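Rather than polling `kubectl get job`, you can block until the Job finishes:

```shell
# Wait for the Job to reach the Complete condition, failing after 60s.
kubectl wait --for=condition=complete --timeout=60s job/testjob1
```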

## Confirm Success
```
kubectl get job testjob1
```
You want to see `COMPLETIONS` showing `1/1`.

If not, troubleshoot using the steps below.

## Troubleshooting

You can try to look for the expected "Success!" message with the following:
```
kubectl logs -l job-name=testjob1
```

> **Contributor:** What would you look for in the logs here?

If no logs are available, you can inspect the status and events related to the pod with the following:
```
kubectl describe pod -l job-name=testjob1
```

If the images are not being loaded into the container runtime, you'll see an Event that looks like this:
```
Warning  ErrImageNeverPull  4s (x11 over 99s)  kubelet  Container image "acme.com/some/image:1.0" is not present with pull policy of Never
```

> **Contributor:** This was a little confusing because there are the two commands above. I'm not sure if the warning will also show up in the logs, or only when you describe the job.

In that case, take a closer look at the output from the build command and/or get logs for the builder pod(s)
to see if there are any error messages. You may be able to use the `kubectl buildkit create` options to
solve or work around the problem. If you aren't able to find a working configuration, please file a bug
and include the details of your k8s environment and relevant logs so we can assist.

> **Contributor:** How would you fix it if this was the case? Are there any specific failures that the user should be looking out for?

> **Contributor Author:** I don't have a good set of common/known failure modes, but I expanded this section a bit with some more guidance.
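A sketch of pulling the builder pod logs directly, assuming the default builder and an `app=buildkit` label (both assumptions; verify the actual labels with `kubectl get pods --show-labels`):

```shell
# Tail recent logs from the builder pods; the selector is an assumption
# about the default builder's labels in your cluster.
kubectl logs -l app=buildkit --tail=100
```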

## Cleanup
```
kubectl delete job testjob1
```