multistage build in same container fails because cross-stage deps are not cleaned up (symlinks) #1406
Comments
I had the same problem; you need to run kaniko with the --cleanup flag: https://github.com/GoogleContainerTools/kaniko#--cleanup |
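For concreteness, a minimal sketch of that suggestion, assuming two build targets ("app" and "worker" are placeholder names) built back to back in the same container; note that several later comments in this thread report that --cleanup alone was not enough for them:

```sh
# First build: --cleanup wipes the container filesystem after the build finishes
/kaniko/executor \
  --context "${CI_PROJECT_DIR}" \
  --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
  --target app \
  --destination "${IMAGE_TAG}-app" \
  --cleanup

# Second build in the same container, relying on the cleanup above
/kaniko/executor \
  --context "${CI_PROJECT_DIR}" \
  --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
  --target worker \
  --destination "${IMAGE_TAG}-worker" \
  --cleanup
```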
Thanks for the suggestion @alanhughes. I've tested with the image from the original report (repo digest
but the result is the same. Then I updated the Kaniko image:
which is |
@RoSk0 Have you found a solution? I'm running out of ideas. |
Unfortunately no. I've split my build job into three :(
|
I just ran into the same problem ;( I'm using
|
I also have encountered this issue and have split my job into three separate jobs. The --cleanup, --no-push combo did not resolve this for me. |
Unfortunately it is still an issue with the latest release :(
Kaniko executor version: v1.6.0 |
This is basically a duplicate of my (currently closed) issue #1217 and I provided a minimal reproduction there, which I just updated to v1.6.0: |
I am also seeing this issue with the latest release. It only occurs when building multiple images in the same container with the |
I can replicate this with npm and a two-stage build, but it happens in the first stage:
Running on AKS with GitLab CI.

```sh
time /kaniko/executor \
  --context "${CI_PROJECT_DIR}" \
  --dockerfile "${CI_PROJECT_DIR}/Dockerfile" \
  --cache=true \
  --destination "${IMAGE_TAG}" \
  --build-arg NODE_IMAGE="${NODE_IMAGE}" \
  --build-arg VERSION="${VERSION}" \
  --build-arg VERSION_SEMVER="${VERSION_SEMVER}"
```

```dockerfile
ARG NODE_IMAGE
ARG VERSION="not_set"
FROM $NODE_IMAGE as build
ARG VERSION
ENV APP_VERSION=${VERSION}
ARG VERSION_SEMVER="not_set"
WORKDIR /app
COPY package.json package-lock.json .npmrc ./
RUN npm version "${VERSION_SEMVER}" \
    && npm ci
COPY . .
RUN npm run build:ci
# -----------------------------------------------------
FROM $NODE_IMAGE
# ...
```
|
The following workaround works for me. After each execution I remove the leftover /kaniko/0 directory. For example:

```sh
execute() {
  /kaniko/executor --context . --build-arg=MYARG=$1 --cleanup --destination myregistry.com/repo:tag-$1
  # remove the per-stage directory kaniko leaves behind after the build
  rm -rf /kaniko/0
}

while read -r line; do
  execute $line
done < my_file
```
|
I had the same issue and on top of that, if you have more than two stages, Kaniko will also create |
Having the same problem here, up to and including 1.9.1. |
Is there any plan to fix this? |
I can still reproduce it with version gcr.io/kaniko-project/executor:v1.16.0-debug, using the Dockerfile @AndreKR mentioned in #1406 (comment). Also, I don't know exactly what's going on under the hood in kaniko, but is it intended that while building an image we are working on the / filesystem of the kaniko container itself? With such a Dockerfile
I am able to remove the kaniko executor from the kaniko container.
|
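To illustrate that point with a hypothetical Dockerfile (not the exact one from the comment above): because kaniko runs RUN steps directly against its own container's root filesystem, a build step can delete kaniko's own files, e.g.:

```dockerfile
FROM alpine
# This runs on the kaniko container's root filesystem, so it can remove the
# executor binary itself (the in-flight build keeps running from memory).
RUN rm -f /kaniko/executor
```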
So I dug into this for a while and found that
After reconsidering the topic, I think the best solution for this issue would be to distinguish between cleaning up the filesystem after each stage and cleaning the whole filesystem with the flag
|
I hope this gets merged. Thanks. However, I wanted to post this in case someone else runs into the problem that I managed to solve.

After finding this issue and trying a lot of things to get a generic fix for my case, I found something that I think solves most of my error cases. It fixed all the failing builds across the different Dockerfiles of around 15 projects (not all of them were failing, but the ones whose Dockerfiles had more stages were more prone to fail).

My use case for kaniko is inside a Jenkins pipeline that uses the Kubernetes plugin to run jobs inside Kubernetes agent pods. Those agents define a single kaniko container, and I needed to build the image twice with that one container: once as a tar to scan it with Trivy (a container scanning tool), and then, after some quality checks pass, use the same kaniko container to build the image again and upload it to ECR.

My solution was to append this to my first call that builds the image as a tar: && rm -rf /kaniko/*[0-9]* && rm -rf /kaniko/Dockerfile && mkdir -p /workspace

Call ending like this.
I'm not a huge kaniko user myself, but I found that the /kaniko directory was filled with some files after the first execution, as some people in this thread mentioned, and those files were messing up the next execution. Those commands after the first build remove the problematic files, and the second execution works like a charm. Hope this helps other people who find this issue. Thanks. |
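A rough sketch of that two-pass flow under the assumptions above (image names, registry URL, and tar path are placeholders; --no-push and --tarPath are standard kaniko flags, and the cleanup commands are the ones quoted in the comment):

```sh
# Pass 1: build to a tarball for scanning, then clear kaniko's leftover state
/kaniko/executor --context . --dockerfile Dockerfile \
  --no-push --tarPath /workspace/image.tar --destination myapp:scan \
  && rm -rf /kaniko/*[0-9]* \
  && rm -rf /kaniko/Dockerfile \
  && mkdir -p /workspace

# Scan the tarball with Trivy before deciding whether to publish
trivy image --input /workspace/image.tar

# Pass 2: rebuild and push to the registry once the quality checks pass
/kaniko/executor --context . --dockerfile Dockerfile \
  --destination <aws_account>.dkr.ecr.<region>.amazonaws.com/myapp:latest
```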
@ricardllop you can also use crane to upload a container tar, so there is no need to rebuild the image. I don't know your use case in detail, but it sounds like it would fit. https://github.com/google/go-containerregistry/blob/main/cmd/crane/doc/crane.md |
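For reference, a minimal sketch of that approach (the tarball path and image reference are placeholders carried over from the previous example):

```sh
# Push the tarball produced by the first kaniko run without rebuilding the image
crane push /workspace/image.tar <aws_account>.dkr.ecr.<region>.amazonaws.com/myapp:latest
```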
Is it possible for files in these locations to be overwritten rather than rewritten? |
It's concerning that the issue persists despite its age. I faced a similar problem using GitLab CI to build images in a Kubernetes cluster. To avoid splitting stages into different jobs, consider creating a cleanup command and aliasing it as follows:

```yaml
before_script:
  - alias kaniko-cleanup='ls /kaniko | grep -v "docker-credential-acr-env\|docker-credential-gcr\|docker-credential-ecr-login\|executor\|warmer\|ssl" | xargs -I {} rm -rf /kaniko/{}'
```

Then, in your script:

```yaml
script:
  - /kaniko/executor ...
  - kaniko-cleanup
  - /kaniko/executor ...
  - kaniko-cleanup
  - /kaniko/executor ...
  - kaniko-cleanup
```

This approach resolved the issue for me. Ideally, though, kaniko should handle symlinks the same way Docker does and do a proper cleanup after each build, especially considering its role in producing images from Dockerfiles. Hope this helps. |
Actual behavior
I want to set up image building for our project as part of the CI pipeline using GitLab CI capabilities.
Following https://docs.gitlab.com/13.2/ee/ci/docker/using_kaniko.html#building-a-docker-image-with-kaniko I set up the CI configuration, and it works perfectly if you build one image per job. This is not a GitLab issue, just bear with me.
We have a multi-stage Dockerfile to build our images. If you try to build multiple targets inside the same (and this is crucial) container, it fails with:
Expected behavior
Two (in my case) images built.
To Reproduce
Output of commands that ran successfully is omitted:
I've tried raising the verbosity level to debug - nothing useful. With trace, it shows way too much to digest.
Directory content of vendor/bin is:
Additional Information
Included in the steps to reproduce
Included in the steps to reproduce