fsgroup is not set on volume provided by Vmware CSI #370
@RaunakShah @divyenpatel Any insight on the above issue?
@RaunakShah On analysis we could see an issue with the image quay.io/k8scsi/csi-provisioner:v2.0.0-rc2. This image resolved our volume getting stuck in the Released state, but it is not honoring fsGroup.
@Anil-YadavK8s can you try doing this without CSI involved? Basically exec into the Pod and try mounting manually.
@RaunakShah Yes, with emptyDir/hostPath volumes fsGroup is working.
@Anil-YadavK8s Can you share all the YAML files (maybe as a GitHub gist)? We can try to reproduce this issue on a local setup and get back to you.
I also encountered this issue. Here's the statefulset I used, and I ran a shell with kubectl exec -it sh. Provisioner used: csi.vsphere.vmware.com. (Note that I am not getting this issue with Portworx, Ceph, EBS, etc. when I apply the exact same statefulset YAML.) The expectation is that an unprivileged Pod running as a non-root UID can access/delete/create files and directories in the mounted PVCs when fsGroup is specified in the pod's security context. Yet with csi.vsphere.vmware.com this is not the case.
@dickeyf thanks for the YAML. Will get back to you shortly.
Seeing this too - sample code here:
Since 1.19, Kubernetes does a check here to verify whether a CSI driver supports fsGroup. The field it checks ultimately comes from the CSIDriver object's spec (https://kubernetes-csi.github.io/docs/csi-driver-object.html, see the fsGroupPolicy field). However, the default value seems OK and, according to the source, retains the old behavior. Could this be related to it? We were using Kubernetes 1.19 when testing this. @Anil-YadavK8s do you remember what version you used?
This is the CSIDriver object we had when testing:
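The object itself did not survive this export. As a sketch, a CSIDriver spec for the vSphere driver carrying the default policy (field names from the CSI docs linked above; the attachRequired/podInfoOnMount values here are assumptions, not the original values) would look like:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  attachRequired: true
  podInfoOnMount: false
  # Default policy: fsGroup is applied only to ReadWriteOnce volumes
  # that declare an fsType -- which is why an empty fsType skips it.
  # (On 1.19 this field is alpha, behind the CSIVolumeFSGroupPolicy gate.)
  fsGroupPolicy: ReadWriteOnceWithFSType
```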
I did confirm these versions would have the proper defaulting:
@dickeyf @RobbieJVMW @Anil-YadavK8s thanks for all the updates. I was able to reproduce this issue locally. I used the sts YAML provided in #370 (comment) with a default storage class on my setup. This is the original storage class spec I used:
As you can see, the parameters section is empty. However, per Kubelet's internal spec, no fsType on the volume means the fsGroup ownership change is skipped.
And here's the reference code - https://github.com/kubernetes/kubernetes/blob/v1.18.9/pkg/volume/csi/csi_mounter.go#L383. I fixed the issue locally by specifying the fsType in the storage class. Once I do that, the volume is mounted with the expected group ownership, and I can touch files at the mount point.
Can you try applying the same fsType change on your setup?
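As a sketch of that change (the class name matches the storageClassName in the StatefulSet below; ext4 is an assumed choice of filesystem), the fixed StorageClass carries the fsType in its parameters:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test
provisioner: csi.vsphere.vmware.com
parameters:
  # Without this key the provisioned volume has no fsType,
  # and kubelet skips the fsGroup ownership change.
  csi.storage.k8s.io/fstype: ext4
```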
Here's another link that's informative - https://github.com/kubernetes-csi/external-provisioner/blob/8b0707649212d770624008edbd127f312121aff9/cmd/csi-provisioner/csi-provisioner.go#L77. If the external-provisioner's default fsType isn't set, and the StorageClass fsType isn't set, then none is assumed.
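Alternatively, the fallback can be set once on the external-provisioner sidecar via its --default-fstype flag. A sketch of the relevant container fragment from the controller Deployment (image tag and the ADDRESS variable are assumptions; adjust to your own manifests):

```yaml
# Fragment of the CSI controller Deployment's sidecar list (illustrative)
- name: csi-provisioner
  image: quay.io/k8scsi/csi-provisioner:v2.0.0
  args:
    - --csi-address=$(ADDRESS)
    # Used only when the StorageClass does not set an fsType of its own
    - --default-fstype=ext4
```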
Will try that, thanks a lot! I believe this was our issue all along. So if either one is set, it would fix the issue?
Yes it would. Setting it on the external-provisioner helps avoid setting it on each individual storage class. Setting it on a storage class supersedes whatever is set on the external-provisioner. Note that you need to be using an external-provisioner version that supports this flag. I've verified both options; they work. Can you give it a try on your setup too and let me know how it goes?
@dickeyf @Anil-YadavK8s can you also tell me what external-provisioner version you are using?
I've tested this against my vSAN lab on TKG 1.2 and it's working successfully when you provide fsType. @dickeyf this should fix the issue we have been having.
If you don't provide this param in the storageclass definition, should that generate a 'permissions' error from inside the container, or should the issue be surfaced before we reach that point? Maybe simply highlight it in the docs?
/bug |
Updated the YAMLs with a default fs type for now, which is the short-term fix suggested by the community.
@RaunakShah: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
I am having the same issue, with no luck applying the workaround. Feedback appreciated.
@true64gurus can you paste the sts YAML that you are attempting to deploy?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten |
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close |
@k8s-triage-robot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Is this a BUG REPORT or FEATURE REQUEST?:
When deploying a Pod with an fsGroup security context and a non-root user to access a VMware volume (PV/PVC), fsGroup fails to apply setgid group ownership to the files on the volume.
What happened:
A Pod presented with a VMware CSI PV/PVC is unable to apply fsGroup to the data volume.
What you expected to happen:
VMware CSI PV/PVCs should support fsGroup for less-privileged Pods.
How to reproduce it (as minimally and precisely as possible):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: alpine-privileged
  labels:
    app: alpine-privileged
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine-privileged
  template:
    metadata:
      labels:
        app: alpine-privileged
    spec:
      serviceAccountName: test-sa-psp
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
      containers:
      - name: alpine-privileged
        image: alpine:3.9
        command: ["sleep", "1800"]
        volumeMounts:
        - name: data
          mountPath: /data
        securityContext:
          readOnlyRootFilesystem: false
  volumeClaimTemplates:
  - kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: test
      volumeMode: Filesystem
ccdadmin@seliicbl01481-ns2-testbed2-m1:~> kubectl -n=test-1 exec alpine-privileged-0 -it -- /bin/sh
/ $ ls -lrth
total 8
drwxr-xr-x 11 root root 125 Apr 23 13:10 var
drwxr-xr-x 7 root root 66 Apr 23 13:10 usr
drwxrwxrwt 2 root root 6 Apr 23 13:10 tmp
drwxr-xr-x 2 root root 6 Apr 23 13:10 srv
drwx------ 2 root root 6 Apr 23 13:10 root
drwxr-xr-x 2 root root 6 Apr 23 13:10 opt
drwxr-xr-x 2 root root 6 Apr 23 13:10 mnt
drwxr-xr-x 5 root root 44 Apr 23 13:10 media
drwxr-xr-x 5 root root 185 Apr 23 13:10 lib
drwxr-xr-x 2 root root 6 Apr 23 13:10 home
drwxr-xr-x 2 root root 4.0K Apr 23 13:10 sbin
drwxr-xr-x 2 root root 4.0K Apr 23 13:10 bin
dr-xr-xr-x 13 root root 0 Sep 14 11:21 sys
drwxr-xr-x 1 root root 21 Sep 18 09:21 run
dr-xr-xr-x 587 root root 0 Sep 18 09:21 proc
drwxr-xr-x 1 root root 66 Sep 18 09:21 etc
drwxr-xr-x 5 root root 360 Sep 18 09:21 dev
drwxr-xr-x 3 root root 18 Sep 18 09:21 data
/ $ cd data/
/data $ ls
demo
/data $ ls -lrth
total 4
drwxr-xr-x 3 root root 4.0K Sep 18 09:21 demo
/data $
/data $
/data $ mkdir test
mkdir: can't create directory 'test': Permission denied
/data $
/data $
Anything else we need to know?:
Environment:
csi-vsphere version: vmware/vsphere-block-csi-driver:v2.0.0
vsphere-cloud-controller-manager version: gcr.io/cloud-provider-vsphere/cpi/release/manager:latest
Kubernetes version: 1.17.3
vSphere version: 6.7U3
OS (e.g. from /etc/os-release): SUSE Linux Enterprise Server 15 SP1
Kernel (e.g. uname -a): Linux master-node 4.12.14-197.45-default #1 SMP Thu Jun 4 11:06:04 UTC 2020 (2b6c749) x86_64 x86_64 x86_64 GNU/Linux
Install tools:
Others: