
docker in docker not working after upgrading nodes to 1.14 #334

Closed
yrsurya opened this issue Sep 13, 2019 · 4 comments
@yrsurya

yrsurya commented Sep 13, 2019

What happened:
We use Jenkins to build images and push them to ECR, which relies on the underlying Docker daemon, so we mount /var/run/docker.sock in the Kubernetes plugin configuration in Jenkins. This worked fine until now, but it broke after upgrading the nodes to 1.14, which affects our CI builds.
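
For context, a bind mount like the one described above is typically configured as a hostPath volume in the agent pod template. A minimal sketch (the container and volume names and the agent image are illustrative, not taken from this issue):

```yaml
# Sketch: mounting the node's Docker socket into a Jenkins agent pod.
spec:
  containers:
    - name: jnlp
      image: jenkins/inbound-agent   # typical Jenkins agent image; illustrative
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
        type: Socket
```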
What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • AWS Region:
  • Instance Type(s):
  • EKS Platform version (use aws eks describe-cluster --name <name> --query cluster.platformVersion): "eks.1"
  • Kubernetes version (use aws eks describe-cluster --name <name> --query cluster.version): "1.14"
  • AMI Version: amazon-eks-node-1.14-v20190906 (ami-08739803f18dcc019)
  • Kernel (e.g. uname -a): Linux 4.14.138-114.102.amzn2.x86_64 #1 SMP Thu Aug 15 15:29:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Release information (run cat /etc/eks/release on a node):
    NAME="Amazon Linux"
    VERSION="2"
    ID="amzn"
    ID_LIKE="centos rhel fedora"
    VERSION_ID="2"
    PRETTY_NAME="Amazon Linux 2"
    ANSI_COLOR="0;33"
    CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
    HOME_URL="https://amazonlinux.com/"
@M00nF1sh
Member

M00nF1sh commented Sep 14, 2019

Hi yaramada,

Did you update an existing old cluster to 1.14?
There should be a PodSecurityPolicy governing which pods may run privileged in 1.14 (https://kubernetes.io/docs/concepts/policy/pod-security-policy/#privileged); I guess this might be the cause.

Reference: https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy.html
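
For reference, the permissive policy described in the AWS doc above looks roughly like the following. This is a sketch based on that documentation, not a copy of any cluster's actual policy; the metadata name is illustrative (EKS ships one named eks.privileged):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged-example   # illustrative; EKS's default is eks.privileged
spec:
  privileged: true
  allowPrivilegeEscalation: true
  volumes: ['*']
  hostNetwork: true
  hostPorts:
    - min: 0
      max: 65535
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```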

@yrsurya
Author

yrsurya commented Sep 15, 2019

We updated existing clusters to 1.14, and after adding this:

```yaml
spec:
  securityContext:
    fsGroup: dockergid
```

it started working again.
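
A note on that fix: fsGroup takes a numeric GID, so "dockergid" above stands in for the docker group's actual GID on the node (often discoverable on the host with `getent group docker`). A sketch with a concrete placeholder value:

```yaml
# Sketch: fsGroup must be numeric; 994 is an example GID for the node's
# docker group, not a value confirmed in this issue.
spec:
  securityContext:
    fsGroup: 994   # replace with your AMI's docker group GID
```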

@bonifaido

I would rather use Kaniko or something similar on Kubernetes instead of Docker for building OCI images; we use Kaniko in our CI on Kubernetes happily, without any issues: https://banzaicloud.com/blog/unprivileged-builds/
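
The Kaniko approach mentioned above runs the image build as an ordinary pod with no Docker socket mount at all. A minimal sketch (the pod name, Git context URL, and ECR destination are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build-example   # illustrative name
spec:
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/repo.git                            # placeholder repo
        - --destination=<account>.dkr.ecr.<region>.amazonaws.com/example:latest  # placeholder ECR repo
  restartPolicy: Never
```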

@M00nF1sh
Member

M00nF1sh commented Sep 16, 2019

@yrsurya
I'll close this issue since it's a 1.14 behavior change rather than a worker AMI problem.
Feel free to reopen if that isn't correct :D
