Caveat: referencing images by tag #136

Closed
DazWilkin opened this issue Nov 19, 2020 · 2 comments · Fixed by #234
Labels
bug Something isn't working

Comments

@DazWilkin
Contributor

DazWilkin commented Nov 19, 2020

Describe the bug

Some minor feedback on using image tags rather than digests.

If the developer rebuilds and repushes an image (e.g. the agent image), a Kubernetes cluster may not repull it even when the specs are reapplied. Although the image content has changed, its tag has not: the Helm chart references images by tag, and with the default imagePullPolicy of IfNotPresent, Kubernetes won't repull an image whose tag it already has cached.

helm install akri ... \
...
--set agent.image.repository="${REPO}/agent" \
--set agent.image.tag="v0.0.XX-amd64" \
--set controller.image.repository="${REPO}/controller" \
--set controller.image.tag="v0.0.XX-amd64"

This causes problems if the developer assumes that repushing an image will make Kubernetes repull it when the spec is reapplied.

The preferred mechanism would be to always (even in Helm) reference images by digest (hash), since the digest is derived from the image content and changes whenever the image does.
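
For illustration (the digest value below is hypothetical), a digest reference replaces the mutable tag with a content-derived identifier:

# Tag reference: mutable; a repush can silently replace the content
${REPO}/agent:v0.0.XX-amd64

# Digest reference: immutable; any rebuild produces a new digest (value hypothetical)
${REPO}/agent@sha256:2b4f1d0c9a8e7f6d5c4b3a2918e7d6c5b4a3928f1e0d9c8b7a6f5e4d3c2b1a09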

An alternative is to check the image digests after each change, to ensure that the images cached by Kubernetes match the digests of the images in the repository.
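
A sketch of checking the pushed digest from the build machine (this assumes the images were built and pushed with Docker; ${REPO} as in the Helm command above):

# Show the repo digest(s) recorded for the image
docker images --digests "${REPO}/agent"

# Or extract just the digest for a specific tag
docker inspect --format '{{index .RepoDigests 0}}' "${REPO}/agent:v0.0.XX-amd64"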

In the case of MicroK8s, it's possible to enumerate the cluster's cached images using crictl and to remove stale versions:

sudo crictl --runtime-endpoint=unix:///var/snap/microk8s/common/run/containerd.sock images

sudo crictl --runtime-endpoint=unix:///var/snap/microk8s/common/run/containerd.sock rmi ...
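
For example (a sketch; the name filter and image ID are illustrative), stale Akri images can be located and removed:

# List cached images with their digests, filtered to Akri's
sudo crictl --runtime-endpoint=unix:///var/snap/microk8s/common/run/containerd.sock images --digests | grep akri

# Remove a stale image by the IMAGE ID shown in the listing
sudo crictl --runtime-endpoint=unix:///var/snap/microk8s/common/run/containerd.sock rmi <IMAGE-ID>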

Output of kubectl get pods,akrii,akric -o wide

N/A

Kubernetes Version: [e.g. Native Kubernetes 1.19, MicroK8s 1.19, Minikube 1.19, K3s]

N/A

To Reproduce
Steps to reproduce the behavior:

  1. Create cluster
  2. Install Akri with the Helm command
  3. Enumerate cluster images using crictl
  4. Delete Akri
  5. Change the agent or controller and repush (same tag)
  6. Reinstall Akri
  7. Use crictl to confirm that the most recent image digest is not the one in use

Expected behavior

Any image change (e.g. agent, controller, brokers) should cause Kubernetes to repull the image when the resources are recreated.

Logs (please share snips of applicable logs)

N/A

Additional context

N/A

@DazWilkin added the bug label Nov 19, 2020
@bfjelds
Collaborator

bfjelds commented Jan 11, 2021

I'm not super sure what to do about this. I see the problem for sure.

It seems like no matter what we do, people will have to change their Helm install command to include a specific image (via tag or digest).

Perhaps we could make things more apparent by changing the Makefile so that development builds (the official builds and Helm chart defaults would remain unchanged) end up with a timestamped label (or some equivalent source of uniqueness).

PREFIX ?= $(REGISTRY)/$(UNIQUE_ID)
VERSION=$(shell cat version.txt)
# New: suffix the version with a build timestamp so every build is unique
TIMESTAMP=$(shell date +"%Y%m%d_%H%M%S")
VERSION_LABEL=v$(VERSION)-$(TIMESTAMP)
LABEL_PREFIX ?= $(VERSION_LABEL)

With this, we'd generate images with labels like v0.1.5-20210111_120445-amd64.

Then the development documentation could tell users to specify the tag/digest that aligns with what they want to test?
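
Concretely (a sketch reusing the Helm flags from the report above, with the example label), the install would pin the freshly built images:

helm install akri ... \
...
--set agent.image.repository="${REPO}/agent" \
--set agent.image.tag="v0.1.5-20210111_120445-amd64" \
--set controller.image.repository="${REPO}/controller" \
--set controller.image.tag="v0.1.5-20210111_120445-amd64"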

Does that sound like an approach that would improve things?

@DazWilkin
Contributor Author

I was thinking about this too recently.

I was going to propose more consistent versioning; I've been lazily reusing e.g. v0.0.44-amd64 incessantly.

I think your approach is elegant.
