
Access denied occurs when delete a dynamically created PV (DeleteVolumeRequest does not have VolumeCapabilities) #260

Closed
huanghantao opened this issue Dec 31, 2021 · 13 comments · Fixed by #262

Comments

@huanghantao

huanghantao commented Dec 31, 2021

What happened:
I dynamically created a PV through the NFS StorageClass, but when I deleted the PVC, the NFS provisioner reported the following error:

I1231 05:17:34.125960       1 controller.go:1472] delete "pvc-3bb08883-6f23-4016-ad15-9607971b68cd": started
E1231 05:17:34.165645       1 controller.go:1482] delete "pvc-3bb08883-6f23-4016-ad15-9607971b68cd": volume deletion failed: rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.1.188:/home/codinghuang/nfs/pipeline-volume /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd
Output: mount.nfs: access denied by server while mounting 192.168.1.188:/home/codinghuang/nfs/pipeline-volume
W1231 05:17:34.165687       1 controller.go:1013] Retrying syncing volume "pvc-3bb08883-6f23-4016-ad15-9607971b68cd", failure 7
E1231 05:17:34.165708       1 controller.go:1031] error syncing volume "pvc-3bb08883-6f23-4016-ad15-9607971b68cd": rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.1.188:/home/codinghuang/nfs/pipeline-volume /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd
Output: mount.nfs: access denied by server while mounting 192.168.1.188:/home/codinghuang/nfs/pipeline-volume

Here is my StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-cicd-nfs-csi
provisioner: nfs.csi.k8s.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  server: 192.168.1.188
  share: /home/codinghuang/nfs/pipeline-volume
mountOptions:
  - hard
  - nfsvers=3

What you expected to happen:

The PV is successfully deleted when the PVC is deleted.

How to reproduce it:

Install the NFS server:

sudo yum -y install nfs-utils
sudo vim /etc/sysconfig/nfs

LOCKD_TCPPORT=30001
LOCKD_UDPPORT=30002
MOUNTD_PORT=30003
STATD_PORT=30004
sudo systemctl restart rpcbind.service
sudo systemctl restart nfs-server.service
sudo systemctl enable rpcbind.service
sudo systemctl enable nfs-server.service
sudo vim /etc/exports

/home/codinghuang/nfs/pipeline-volume   *(rw,no_root_squash,async)
sudo systemctl restart nfs-server.service
showmount -e localhost

Install the CSI driver

install csi from here

Create the StorageClass

# storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cloud-cicd-nfs-csi
provisioner: nfs.csi.k8s.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  server: 192.168.1.188
  share: /home/codinghuang/nfs/pipeline-volume
mountOptions:
  - hard
  - nfsvers=3

Create the PVC

# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-dynamic
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Mi
  storageClassName: cloud-cicd-nfs-csi

When I delete the PVC:

kubectl get pv

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                       STORAGECLASS          REASON   AGE
pvc-df8989e5-e378-465a-bad2-0731005c95d6   50Mi       RWX            Delete           Released    default/pvc-nfs-dynamic                                     cloud-cicd-nfs-csi             18s

And the log:

I1231 05:49:42.491366       1 controller.go:1332] provision "default/pvc-nfs-dynamic" class "cloud-cicd-nfs-csi": started
I1231 05:49:42.491944       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pvc-nfs-dynamic", UID:"0092891d-eb84-4e6f-bbdc-a8ad176cdc91", APIVersion:"v1", ResourceVersion:"42656897", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/pvc-nfs-dynamic"
I1231 05:49:42.528760       1 controller.go:839] successfully created PV pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91 for PVC pvc-nfs-dynamic and csi volume name 192.168.1.188/home/codinghuang/nfs/pipeline-volume/pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91
I1231 05:49:42.528790       1 controller.go:1439] provision "default/pvc-nfs-dynamic" class "cloud-cicd-nfs-csi": volume "pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91" provisioned
I1231 05:49:42.528804       1 controller.go:1456] provision "default/pvc-nfs-dynamic" class "cloud-cicd-nfs-csi": succeeded
I1231 05:49:42.531545       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"pvc-nfs-dynamic", UID:"0092891d-eb84-4e6f-bbdc-a8ad176cdc91", APIVersion:"v1", ResourceVersion:"42656897", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91
I1231 05:49:47.803549       1 controller.go:1472] delete "pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91": started
E1231 05:49:47.861179       1 controller.go:1482] delete "pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91": volume deletion failed: rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32
Mounting command: mount
Mounting arguments: -t nfs 192.168.1.188:/home/codinghuang/nfs/pipeline-volume /tmp/pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91
Output: mount.nfs: access denied by server while mounting 192.168.1.188:/home/codinghuang/nfs/pipeline-volume
W1231 05:49:47.861235       1 controller.go:1013] Retrying syncing volume "pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91", failure 0
E1231 05:49:47.861263       1 controller.go:1031] error syncing volume "pvc-0092891d-eb84-4e6f-bbdc-a8ad176cdc91": rpc error: code = Internal desc = failed to mount nfs server: rpc error: code = Internal desc = mount failed: exit status 32

Anything else we need to know?:

Environment:

  • CSI Driver version:

  • Kubernetes version (use kubectl version): 1.20.2

  • OS (e.g. from /etc/os-release):

 cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel (e.g. uname -a):
uname -a
Linux k8s-worker-node-1 3.10.0-1160.el7.x86_64 #1 SMP Mon Oct 19 16:18:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:
@andyzhangx
Member

Does a manual mount on that agent node work? It reports a permission denied error:

mount -v -t nfs 192.168.1.188:/home/codinghuang/nfs/pipeline-volume /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd 

@huanghantao
Author

huanghantao commented Dec 31, 2021


at the master node:

sudo mount -v -t nfs 192.168.1.188:/home/codinghuang/nfs/pipeline-volume /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd
[sudo] password for codinghuang:
mount.nfs: mount point /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd does not exist
[codinghuang@k8s-master-node ~]$ mkdir /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd
[codinghuang@k8s-master-node ~]$ sudo mount -v -t nfs 192.168.1.188:/home/codinghuang/nfs/pipeline-volume /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd
mount.nfs: timeout set for Fri Dec 31 01:20:09 2021
mount.nfs: trying text-based options 'vers=4.1,addr=192.168.1.188,clientaddr=192.168.1.107'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'vers=4.0,addr=192.168.1.188,clientaddr=192.168.1.107'
mount.nfs: mount(2): Permission denied
mount.nfs: trying text-based options 'addr=192.168.1.188'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.188 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.188 prog 100005 vers 3 prot UDP port 30003

@andyzhangx
Member

So it's permission denied; an NFS server config issue?

@huanghantao
Author

huanghantao commented Dec 31, 2021

Mounting with nfsvers=3 works:

sudo umount /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd

sudo mount -v -t nfs -o nfsvers=3 192.168.1.188:/home/codinghuang/nfs/pipeline-volume /tmp/pvc-3bb08883-6f23-4016-ad15-9607971b68cd
mount.nfs: timeout set for Fri Dec 31 01:26:35 2021
mount.nfs: trying text-based options 'nfsvers=3,addr=192.168.1.188'
mount.nfs: prog 100003, trying vers=3, prot=6
mount.nfs: trying 192.168.1.188 prog 100003 vers 3 prot TCP port 2049
mount.nfs: prog 100005, trying vers=3, prot=17
mount.nfs: trying 192.168.1.188 prog 100005 vers 3 prot UDP port 30003

@andyzhangx
Member

Then add nfsvers=3 to mountOptions in the StorageClass?

@huanghantao
Author

huanghantao commented Dec 31, 2021

Then add nfsvers=3 to mountOptions in the StorageClass?

I had already added nfsvers=3 in the StorageClass, so I can create the PVC successfully.

But why didn't the mount use the nfsvers=3 option I specified when deleting the PV?

@andyzhangx
Member

Then add nfsvers=3 to mountOptions in the StorageClass?

I had already added nfsvers=3 in the StorageClass, so I can create the PVC successfully.

But why didn't the mount use the nfsvers=3 option I specified when deleting the PV?

@msau42 @jsafrane that's because DeleteVolumeRequest does not have a VolumeCapabilities field, which contains mountOptions (CreateVolumeRequest has this field), and in this driver we need mountOptions to mount and delete the sub folders. Do you think we could add a VolumeCapabilities field to DeleteVolumeRequest? Is there any other workaround?
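To make the asymmetry concrete, here is a small illustrative sketch using simplified stand-in structs (not the generated CSI protobuf types): the create request carries VolumeCapabilities, which include mount flags, while the delete request carries only the volume ID and secrets.

```go
package main

import "fmt"

// Simplified stand-ins for the CSI spec messages (not the generated
// protobuf code), illustrating the asymmetry discussed above.
type MountVolume struct {
	FsType     string
	MountFlags []string // e.g. mountOptions from the StorageClass
}

type CreateVolumeRequest struct {
	Name               string
	VolumeCapabilities []MountVolume // mount flags are available at provision time
}

type DeleteVolumeRequest struct {
	VolumeId string            // no VolumeCapabilities: mount flags are gone at delete time
	Secrets  map[string]string
}

// buildRequests pairs a create request with its matching delete request,
// using illustrative example values.
func buildRequests() (CreateVolumeRequest, DeleteVolumeRequest) {
	create := CreateVolumeRequest{
		Name:               "pvc-example",
		VolumeCapabilities: []MountVolume{{MountFlags: []string{"nfsvers=3", "hard"}}},
	}
	del := DeleteVolumeRequest{VolumeId: "192.168.1.188/share/pvc-example"}
	return create, del
}

func main() {
	create, del := buildRequests()
	fmt.Println(create.VolumeCapabilities[0].MountFlags) // [nfsvers=3 hard]
	fmt.Println(del.VolumeId)
}
```

So whatever options the provisioner mounted with at create time, DeleteVolume has no standard way to receive them back.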

@andyzhangx andyzhangx changed the title Access denied occurs when delete a dynamically created PV Access denied occurs when delete a dynamically created PV (DeleteVolumeRequest does not have VolumeCapabilities) Dec 31, 2021
@andyzhangx
Member

related code:

// Mount nfs base share so we can delete the subdirectory
if err = cs.internalMount(ctx, nfsVol, nil); err != nil {
	return nil, status.Errorf(codes.Internal, "failed to mount nfs server: %v", err.Error())
}

@andyzhangx
Member

Adding a new VolumeCapabilities field to DeleteVolumeRequest would be a time-consuming effort, so I would suggest adding a default mountOptions setting to this driver: if mountOptions is not set, just use the default mountOptions. That should work around the issue.
It looks like we only need mountOptions in DeleteVolume for now.
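A rough sketch of that fallback logic, assuming a hypothetical driver-level `defaultMountOptions` setting (the actual flag name and wiring in the driver may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// defaultMountOptions is a hypothetical driver-level default, standing in
// for the suggested setting; not the driver's real configuration name.
var defaultMountOptions = "nfsvers=3,hard"

// resolveDeleteMountOptions sketches the proposed workaround: since
// DeleteVolumeRequest carries no VolumeCapabilities, fall back to the
// driver's default options when none are available for the volume.
func resolveDeleteMountOptions(recorded []string) []string {
	if len(recorded) > 0 {
		return recorded // options were recorded elsewhere; use them
	}
	if defaultMountOptions == "" {
		return nil // no default configured: mount with no extra options
	}
	return strings.Split(defaultMountOptions, ",")
}

func main() {
	// On delete, nothing was recorded, so the default applies.
	fmt.Println(resolveDeleteMountOptions(nil)) // [nfsvers=3 hard]
}
```

With such a default in place, the base-share mount performed in DeleteVolume would use nfsvers=3 and succeed against servers that reject v4 mounts.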

@madhosoi

madhosoi commented May 10, 2022

Hi @andyzhangx,
I was setting mountOptions as proposed in the documentation, but I could only make it work by defining it with "nfsvers=3" and no other options.

Is that what is expected?
If it is, please update the example. If not, could you tell me how to define any other option? I cannot manage to make it work.

Thanks!
Miguel

@andyzhangx
Member

Hi @andyzhangx, I was setting mountOptions as proposed in the documentation, but I could only make it work by defining it with "nfsvers=3" and no other options.

Is that what is expected? If it is, please update the example. If not, could you tell me how to define any other option? I cannot manage to make it work.

Thanks! Miguel

@madhosoi do you know which mountOptions are not working, and what the error message is? I think that depends on the NFS server.

@madhosoi

Hi @andyzhangx, thanks for the quick answer!
I've tried with nfsvers=3, hard as it is defined in the documentation, then with nfsvers=3, hard, noatime, tcp, rw as it is done in the StorageClass definition.

kubectl create secret generic mount-options --from-literal mountOptions="nfsvers=3,hard"

But I saw in the code a condition that compares the mountOptions string with nfsvers=3 or nfsvers=4.
For me it is fine, as it is working, but I don't know whether any other option that the NFS server setup makes mandatory would affect it working properly.

BTW, thanks for the update; without your involvement, we could not use it!

Cheers,
Miguel

@andyzhangx
Member

It seems it's better to use a soft mount, according to https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_are_the_differences_between_hard_mount_and_soft_mount?msclkid=08a281b5d0fa11ec91c1711b4a0c890d

soft
Generates a soft mount of the NFS file system. If an error occurs, the stat() function returns with an error. If the option hard is used, stat() does not return until the file system is available.
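For example, switching the StorageClass above to a soft mount would only change its mountOptions (illustrative; whether soft is appropriate depends on how your workload should behave when the server becomes unreachable):

```yaml
mountOptions:
  - soft
  - nfsvers=3
```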
