docker assets: cannot build docker images outside the source tree (i.e. against a cdk.out directory) #5807
As part of our work on [continuous delivery for CDK apps], we decided to store all docker assets in a single ECR repository per environment. This is consistent with file assets, which are stored in a single S3 bucket that is part of the bootstrap stack. This environment-specific ECR repository uses the well-known physical name `aws-cdk/assets` and will be created automatically if needed. In the future, this repository will be created as part of the environment bootstrapping process.

The primary change is that the location of docker image assets is now fully determined by the framework: the ECR repository name is hard-coded to `aws-cdk/assets`, and the image tag is based on the source hash of the docker asset. This means we can get rid of the CloudFormation parameter that the CLI used to assign the image name, which helps reduce the number of asset parameters (#3463).

Since the asset ECR repository now contains different types of images (rather than versions of the same image), there is no longer a concept of a "latest" image, and the optimization triggered by the `--ci` flag in the CLI (pulling the "latest" image) is no longer relevant. Luckily, CodeBuild now supports docker image layer caching, so that should be the preferred way to optimize docker build times. The `--ci` flag of the CLI no longer does anything.

Furthermore, before this change, a custom resource called `AdoptedRepository` was automatically added to the stack for each asset in order to clean up ECR repositories. Its purpose was to remove the asset's ECR repository if the asset was no longer referenced by the stack. To address this need with the centralized repository, we plan to introduce a garbage collection capability that users will be able to invoke to clean up unused assets from both ECR and S3.

We will introduce a way to customize asset repository names as part of the CI/CD project. In the meantime, if you need to override the default "aws-cdk/assets" name, you can specify a repository name through the context key `assets-ecr-repository-name` (`--context` in the CLI, `cdk.json`, `new App({ context })` or `stack.setContext`).

BACKWARDS COMPATIBILITY

As per our guidelines for backwards compatibility, the CLI must remain compatible with apps from before this change. However, apps that use CDK >= 1.21.0 will require an upgrade of the CLI. Therefore, to introduce this change, we have made the following non-breaking changes in cx-api:

1. Make `imageNameParameter` optional. If it is specified, the CLI will continue to use it as before.
2. Add an optional `imageTag` that instructs the CLI which tag to use for the image. If omitted (by previous versions), `latest` will be used as before.

To make it easy to reason about the behavior for old apps, the CLI now has a new implementation of `prepareContainerAsset` called `prepareContainerImageAssetNew`. This new code path is triggered when the asset metadata *does not include* `imageNameParameter`. The new implementation requires that both `repositoryName` and `imageTag` be defined. The old code path was only modified to support the new optional `imageTag` property (although it is unlikely to be exercised).

Additional backwards compatibility concerns:

- New ECR repositories the CLI creates will not have the lifecycle policy that retains only the last 5 docker images. This should not have a functional impact on users, and ties back to the upcoming garbage collection project.
- The removal of the `AdoptedRepository` resource from all stacks will result in the deletion of all previously created ECR repositories (this is what the `AdoptedRepository` resource is designed to do). This can be harmful since these repositories are referenced by the stack. To address this, we invalidate the image ID by salting the source hash. This means that after this change, all container images will have a new ID, which is not maintained by the removed adopted repository resource.

TESTING

- Unit tests for `prepareContainerImage` were duplicated and extended to exercise the new code path while preserving tests for the old path.
- All CLI integration tests were executed successfully against the new version.
- Manually tested that the new CLI works with old apps.

This change also fixes #5807 so that custom docker file names are relative and not absolute paths.

[continuous delivery for CDK apps]: #3437

BREAKING CHANGE: all docker image assets are now pushed to a single ECR repository named `aws-cdk/assets` with an image tag based on the hash of the docker build source directory (the directory where your `Dockerfile` resides). See PR #5733 for details and discussion.
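The "salting the source hash" idea mentioned above can be sketched as follows. This is only an illustration of the mechanism; the function name and the exact salting scheme are assumptions for this example, not the CDK's actual implementation:

```python
import hashlib

def salted_source_hash(source_hash: str, salt: str) -> str:
    """Derive a new asset ID by mixing a salt into the original source hash.

    Changing the salt changes every derived ID, which forces all container
    images to be re-published under new tags. (Illustrative only; the real
    CDK derivation may differ.)
    """
    return hashlib.sha256((salt + ":" + source_hash).encode("utf-8")).hexdigest()

h = "79452e39b3d4fc2df04cf69f52a637fe5822ba5ef5b86733e471b5089017057d"
# Different salts yield different IDs, invalidating previously published tags:
print(salted_source_hash(h, "v1") != salted_source_hash(h, "v2"))  # True
```

Because every asset ID changes, the old repositories deleted by `AdoptedRepository` are no longer referenced by any image location the new stacks point at.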
I am not convinced this has been completely resolved. Please see this repo. It is a sample repo I built that is structured exactly the same as a real project I am working on, where all projects share the same docker context but each project has its own docker file (or several). Note that the cdk.out folder does not contain any docker files, yet running the AwsStack project does not throw any errors. Also note these lines of code from the project: AwsCdkDockerSample/AwsStack/DevStack.cs, lines 54-60. Thanks, any insight would be much appreciated.
I am also having a problem with this after upgrading. Below is the snippet of what I'm trying to do:

Trying to deploy the stack results in an error.
Hi @MrJonBirch. You wrote:
I see that you call `CloudAssembly appAssembly = app.Synth();`. Did you expect the docker file to be staged into `cdk.out` at that point? Following that, a good addition would be to validate the existence of the docker file during synthesis. The reason the docker file is not copied into `cdk.out` appears to be your `.dockerignore` file. Is there a specific reason you explicitly ignore docker files there? Finally, if you remove that entry from `.dockerignore`, staging should work as expected.
Hi @ric-sapasap. You wrote:
This behavior implies that you are synthesizing with an older version of the framework. If so, to apply the fix, you need to re-synthesize your app using a version that includes it. If not, can you perhaps share a snippet of the asset metadata from your cloud assembly? It should look something like this:

```json
{
  "type": "aws:cdk:asset",
  "data": {
    "id": "79452e39b3d4fc2df04cf69f52a637fe5822ba5ef5b86733e471b5089017057d",
    "packaging": "container-image",
    "path": "asset.79452e39b3d4fc2df04cf69f52a637fe5822ba5ef5b86733e471b5089017057d",
    "sourceHash": "79452e39b3d4fc2df04cf69f52a637fe5822ba5ef5b86733e471b5089017057d",
    "imageNameParameter": "AssetParameters79452e39b3d4fc2df04cf69f52a637fe5822ba5ef5b86733e471b5089017057dImageName3509FBEE",
    "repositoryName": "cdk/{repo-name}",
    "file": "{some-path-to-a-docker-file}"
  }
}
```
Hi @iliapolo. Yes, the error I was getting was at the deploy phase. I did not think it worth mentioning, as I was able to trace it back to synth not putting the docker files in the asset folders.

The reason I expected an error (or thought it might be a bug) is that if you point to a file that does not exist, it fails; so it seemed like the validation check was working, but the copy or asset creation stage was pointing to the wrong directory. As for the .dockerignore, it is the default one provided by Visual Studio when you either create a project with docker support or add docker support to an existing project. You are right: removing line 18 (`**/Dockerfile*`) fixed my issue. However, I am a little confused how. Does ContainerImage.FromAsset process the .dockerignore file? I thought that would be done at deploy time, when the container is actually built. Is there somewhere in the documentation that expands on this? Thank you!
@ric-sapasap Excellent 👍 Glad the issue is resolved; we will be sure to make this upgrade process clearer going forward. Happy to help :)
Hi @MrJonBirch. Let me clarify what exactly is happening. Like you mentioned, during synthesis the docker build context is staged (copied) into `cdk.out`, and that copy honors the exclusion patterns in your `.dockerignore` file. During deployment, `docker build` then runs against the staged copy inside `cdk.out`, not against your original source directory. You are right that this is actually an undocumented feature/behavior. I can point you to the following issues for more context, and we will make sure to add this to the documentation.

Hope this helps, and thanks!
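For readers hitting the same problem: a Visual Studio-generated `.dockerignore` typically contains a `**/Dockerfile*` entry (confirmed earlier in this thread), which also keeps the Dockerfile out of the staged `cdk.out` context. A hedged sketch of such a file (illustrative, not the exact generated content):

```
# Excerpt of a Visual Studio-style .dockerignore (illustrative)
**/bin
**/obj
# The following entry also prevents synth from staging any Dockerfile into cdk.out:
**/Dockerfile*
```

Removing or narrowing that last pattern lets synthesis copy the Dockerfile into the cloud assembly.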
Hi @iliapolo. That clears things up beautifully. Thank you for all your help on this.

@MrJonBirch Perfect, my pleasure 😊
@iliapolo I've been stuck on this .dockerignore issue for 3 days.
@iliapolo Thanks for the explanation above. I haven't seen anything like it in the documentation. It helped me focus on cdk.out as the source of the problem, but I wasn't able to solve mine with just that information. My .dockerignore file looks like this:

I don't want to copy anything else into the docker context besides the jar, but the jar is actually not copied over to cdk.out. I have tried different variations to get it copied, but nothing worked besides moving the jar to the root of the project instead of keeping it in a subdirectory, which is not realistic. Because of this, the COPY command in the Dockerfile fails. Any ideas how I could get the jar copied over so the COPY command can find it?
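One pattern worth trying (a sketch under assumptions: the jar path below is hypothetical, and it assumes the staging step honors dockerignore-style `!` negation rules) is to ignore everything and then re-include only the artifact, since later rules override earlier ones:

```
# Ignore the entire build context...
*
# ...then re-include only the application jar (hypothetical path)
!target/app.jar
```

Inspecting the `asset.<hash>` directory under `cdk.out` after running `cdk synth` is a quick way to verify which files actually got staged.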
The feature to support customizing the docker file name, introduced in #5652, passes the full absolute path of the docker file name to `docker build`. This happens both when a custom docker file name is specified and when it is not (it defaults to `/full/path/to/Dockerfile`). However, since we are staging the build directory into `cdk.out` (which can potentially be ported to a different system, e.g. as part of a CI/CD process), the name of the custom docker file needs to be relative, not absolute.

See https://github.com/aws/aws-cdk/pull/5652/files#r366863153
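The fix described here amounts to storing the docker file path relative to the staged build context, so the cloud assembly stays portable. A minimal sketch of that idea (a hypothetical helper, not the actual CDK code):

```python
import os

def normalize_dockerfile_path(context_dir: str, dockerfile: str) -> str:
    """Return the Dockerfile path relative to the build context.

    A relative path keeps the staged cdk.out directory portable: an absolute
    path like /home/user/app/Dockerfile would break when the cloud assembly
    is built on a different machine (e.g. in CI/CD).
    (Hypothetical helper for illustration; not part of the CDK code base.)
    """
    rel = os.path.relpath(os.path.abspath(dockerfile), os.path.abspath(context_dir))
    # Reject files that escape the context; they cannot be staged with it.
    if rel.startswith(os.pardir):
        raise ValueError(f"Dockerfile {dockerfile!r} is outside the build context {context_dir!r}")
    return rel

# A Dockerfile inside the context is stored relative to it:
print(normalize_dockerfile_path("/app", "/app/docker/Dockerfile.web"))  # docker/Dockerfile.web
```

With the relative name stored in the asset metadata, `docker build -f <relative-name> <staged-context>` works identically on the machine that synthesized and on the machine that deploys.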
This is a 🐛 Bug Report.