Stale NFS file handle #419
Comments
The error shows:
Yes, this is correct. It's perplexing why it's showing stale NFS when it is SMB. I am definitely using the SMB CSI driver and not NFS.
I have the same issue using the SMB CSI driver. For some reason the "stale NFS file handle" error only affects Linux pods trying to mount the SMB share. Windows pods show no error and continue to mount the share successfully after the SMB server restarts. Another issue: restarting the csi-smb-node pods didn't seem to fix the problem.
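For anyone trying to confirm which mounts on a node have actually gone stale, a minimal sketch is below; it assumes shell access to the affected Linux node and only uses standard tooling.

```shell
# List CIFS (SMB) mount targets on the node and probe each one.
# A mount that went stale after an SMB server restart typically makes
# a simple stat call fail with "Stale file handle".
findmnt -t cifs -n -o TARGET | while read -r target; do
  if ! stat "$target" >/dev/null 2>&1; then
    echo "possibly stale: $target"
  fi
done
```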
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its lifecycle rules. Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/lifecycle stale
/remove-lifecycle stale
Restarting csi-smb-node pods didn't recover the stale mount for us either. Is there another way to recover from this issue without rebooting the node?
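Not an official workaround, but one recovery sketch that avoids a full node reboot would be to lazy-unmount the stale CIFS mounts under the kubelet directory and then recreate the node plugin pod on that node. The paths, label, and namespace below are assumptions based on a default csi-driver-smb install; adjust for your cluster.

```shell
# On the affected node: find stale CIFS mounts under the kubelet directory.
findmnt -t cifs -n -o TARGET | grep '^/var/lib/kubelet' > /tmp/stale-smb-mounts

# Force a lazy unmount of each stale path so new mount attempts are not blocked.
while read -r m; do umount -f -l "$m"; done < /tmp/stale-smb-mounts

# Recreate the csi-smb-node pod running on this node (assumes the default
# app=csi-smb-node label, the kube-system namespace, and that the node name
# matches the hostname).
kubectl -n kube-system delete pod -l app=csi-smb-node \
  --field-selector spec.nodeName="$(hostname)"
```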
From time to time we have the same issue.
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/lifecycle rotten
/remove-lifecycle rotten
@andyzhangx |
@DarkFM I think that's related.
This issue should be fixed in the latest version.
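For anyone checking whether they are already on a release that includes the fix, something like this shows the driver image the node plugin is running (the daemonset name and namespace assume the default install):

```shell
# Print the container images used by the csi-smb-node daemonset.
kubectl -n kube-system get ds csi-smb-node \
  -o jsonpath='{.spec.template.spec.containers[*].image}{"\n"}'
```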
What happened:
When the SMB server is terminated while its share is still mounted into pods, the CSI driver node plugin does not unmount the stale mount and create a new one. New pods that use the share then fail to start even when the server is back online; the mount attempt fails with a "Stale NFS file handle" error instead. This can only be resolved manually by restarting all the csi-smb-node pods (see the command sketch below).
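For reference, restarting all of the node plugin pods can be done with something like the following (the daemonset name and namespace assume the default install and may differ in your cluster):

```shell
# Restart every csi-smb-node pod by rolling the daemonset.
kubectl -n kube-system rollout restart daemonset csi-smb-node
```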
What you expected to happen:
Not having to restart the csi-smb-node pods manually for the share to mount again.

How to reproduce it:
Anything else we need to know?:
This may be related to #164, but it is showing a different error message.
Environment:
- Kubernetes version (use `kubectl version`): v1.20
- Kernel (e.g. `uname -a`):