Multiple docker builds causing slow build #3510

Closed
srbartlett opened this issue Apr 28, 2022 · 8 comments
Labels
guidance Issue requesting guidance or information about usage

Comments

@srbartlett

Hi, I am looking for guidance on the build pipeline step. Firstly, thanks for providing such a great toolchain. Life is better with Copilot 👍

After upgrading from 1.5 to 1.16, I rebuilt the pipeline and noticed my build step takes significantly longer to complete. I have two environments and two services (LBS and Backend Service).

The build step runs docker build four times. Is it necessary to build an image per environment per service? I would have thought a single docker build for each service, tagged for the different environments, would suffice. Am I missing something?

@iamhopaul123
Contributor

Hello @srbartlett. Thank you for your love for Copilot! As for the build time, that's a great question. I think the reason is that previously, for each service we used the same image for all the environments it deploys to; after v1.16, however, we build a different image per environment. To speed up image building, you could modify your buildspec so that each service uses the same image for all the environments it deploys to (see here).
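For illustration only, here's a hedged sketch of the kind of buildspec change meant here (the repository URI, environment names, and tag scheme are hypothetical placeholders, not Copilot's generated values): build each service image once, then re-tag that same image for every environment instead of re-running docker build.

```yaml
# Hypothetical buildspec fragment, NOT Copilot's generated file verbatim:
# build the image once per service, then tag the one image for each environment.
phases:
  build:
    commands:
      - docker build -t "$REPO_URI:lbs" -f ./Dockerfile .
      - for env in test prod; do docker tag "$REPO_URI:lbs" "$REPO_URI:lbs-$env"; done
```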

That said, regarding the second build of the same image (in your case, the LBS image for the second environment): although it requires another docker build, it should be really quick because it will come from the cache, like the following

Step 1/4 : FROM public.ecr.aws/nginx/nginx:latest
 ---> fa5269854a5e
Step 2/4 : COPY nginx.conf /etc/nginx/nginx.conf
 ---> Using cache
 ---> c5e3c195a161
Step 3/4 : RUN mkdir -p /www/data/frontend
 ---> Using cache
 ---> a7c736f8961b
Step 4/4 : COPY index.html /www/data/frontend
 ---> Using cache
 ---> 39bd699b2b7d

Could you help me to verify if it is the same for your pipeline?

@srbartlett
Author

Hi @iamhopaul123 , thanks for your comment.

Regarding your first point about modifying the buildspec, am I correct to assume that if I remove ${env} from the tag it will only build the image once? If so, I'll give it a go.

Regarding the cache: yes, subsequent builds use the cache. In fact, I added cache_from to my manifest, which helps the initial build.

However, my Dockerfile includes a RUN bundle exec rake assets:precompile step which is slow and, for some reason, is not cached. To counter this I created a multi-stage Dockerfile and added target: sidekiq to my Backend Service manifest (assets are not needed by sidekiq). Unfortunately, the buildspec doesn't use the specified target. It would be good if the build echoed the docker command to help with debugging. Below is the image section from my manifest:

image:
  build:
    dockerfile: ./Dockerfile
    target: sidekiq
    cache_from:
      - xxx.amazonaws.com/yyy/sidekiq:latest
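For context, a minimal sketch of the kind of multi-stage Dockerfile described above (base image, stage names, and commands are illustrative assumptions, not the actual file): the sidekiq target stops before the slow asset precompile, so building with --target sidekiq would skip that step entirely.

```dockerfile
# Hypothetical multi-stage Dockerfile sketch; base image and commands are assumptions.
FROM ruby:3.1 AS base
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .

# `docker build --target sidekiq` builds only up to here: no asset precompile.
FROM base AS sidekiq
CMD ["bundle", "exec", "sidekiq"]

# The web target pays the slow precompile cost.
FROM base AS web
RUN bundle exec rake assets:precompile
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```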

@iamhopaul123
Contributor

Am I correct to assume that if I remove ${env} from the tag it will only build the image once? If so, I'll give it a go.

Yeah feel free to do so if you want to use the same image.

It would be good if the build echoed the docker command to help with debugging.

We don't do any fancy docker command translation, so the docker build command would be something like

docker build -f ./Dockerfile --target sidekiq --cache-from xxx.amazonaws.com/yyy/sidekiq:latest .

Do you think that'll help your debugging 🤔

@srbartlett
Author

Thanks @iamhopaul123, I'm yet to experiment with the tag change as suggested, but I'll update you once I do.

For whatever reason, the image target param has no effect when run from my CodeBuild project. It works as expected when I run copilot svc package locally. Same version of Copilot, but a different version of Docker: CodeBuild is running Docker 18, which supports multi-stage builds. I'll bump the version to see what happens.

@chrisflatley

This may not help you @srbartlett but I had the same issue (except with 20+ services).

The main problem, I think, is that:

./copilot-linux svc package -n $svc -e $env --output-dir './infrastructure' --tag $tag --upload-assets;

writes its output to the ./infrastructure folder, which means that in my Dockerfile the COPY . . layer always changes for every service and every environment, so nothing after it is cached.

I've added ./infrastructure to .dockerignore to stop this.
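As a minimal sketch of that workaround (assuming the build runs from the repo root), appending the generated output directory to .dockerignore keeps it out of the Docker build context, so the COPY . . layer stays cache-stable across services and environments:

```shell
# Exclude Copilot's generated CloudFormation output from the Docker build context.
echo "infrastructure/" >> .dockerignore
```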

Also, the buildspec.yml references docker: 19, but the Docker version in Amazon Linux 2 is now 20. My Docker targets seem to work now.

@dannyrandall dannyrandall added the guidance Issue requesting guidance or information about usage label May 26, 2022
@srbartlett
Author

Thanks @chrisflatley - that was really helpful 👍

@Drakula2k

I think this was solved by #1999: it's now possible to specify an image already built for another service (via location) and override command and/or entrypoint.

@dannyrandall
Contributor

I'm going to close this issue as it looks like it's been resolved thanks to the workaround shared by @chrisflatley! Feel free to reopen if there are any questions remaining.
