Output: mount: unknown filesystem type 'glusterfs' #1709
Comments
We don't have glusterfs installed in the image that we use. However, we can add it.
Thanks, that'd be great. Is there a way to install it manually as a workaround in the meantime? Is it a bug that these are said to be supported by kubernetes but not minikube? Thanks again.
You may be able to install it manually inside the minikube VM by working in the […]. As for whether this is a feature gap or a bug: as far as I know, we can't offer any guarantees about features that require special support on the nodes themselves (GPUs, certain cloud features, etc.).
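For anyone attempting that manual route, the shape of it is roughly this (a sketch only; it assumes the stock minikube ISO, where the glusterfs mount helper is missing, which is exactly the failure reported here):

```sh
# SSH into the minikube VM
minikube ssh

# "mount -t glusterfs" delegates to a userspace helper;
# on the stock minikube ISO this binary does not exist
which mount.glusterfs

# so a manual mount fails the same way the kubelet's attempt does
sudo mount -t glusterfs 10.0.0.111:/certificates-volume /mnt
# mount: unknown filesystem type 'glusterfs'
```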
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I also need glusterfs to be available in minikube. I've been trying to find a way to install it, but I couldn't find any package manager in the VM. Do I need to mess with Buildroot to install glusterfs?
/remove-lifecycle stale
Is there any known workaround for this issue? It seems like a real pain to try to install glusterfs-client in the minikube TinyLinux ISO. The only solution I see would be to install glusterfs in the pod and then mount the volume manually with a postStart hook, but it feels like that's not the right point in the Pod lifecycle to do it… Any suggestions?
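For what it's worth, a minimal sketch of that postStart idea, under heavy assumptions: the image (a placeholder name here) already ships the glusterfs client, and the pod runs privileged so it is allowed to call mount at all:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gluster-poststart-example          # hypothetical name
spec:
  containers:
    - name: app
      image: my-app-with-glusterfs-client  # placeholder: must contain mount.glusterfs
      securityContext:
        privileged: true                   # mounting inside a pod requires privilege
      lifecycle:
        postStart:
          exec:
            command:
              - sh
              - -c
              - mkdir -p /mnt/gluster && mount -t glusterfs 10.0.0.111:/certificates-volume /mnt/gluster
```

The lifecycle concern is real, though: postStart runs asynchronously with the container's entrypoint, so the application can start before the mount has happened.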
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I would really like to see this happen.
I think this was fixed in #2925.
**Is this a BUG REPORT or FEATURE REQUEST?** (choose one): BUG REPORT
**Minikube version** (use `minikube version`): v0.20.0

**Environment:**
- VM driver (`cat ~/.minikube/machines/minikube/config.json | grep DriverName`): virtualbox
- ISO version (`cat ~/.minikube/machines/minikube/config.json | grep -i ISO` or `minikube ssh cat /etc/VERSION`): minikube-v1.0.6.iso

**What happened:**
Received the following errors on StatefulSet start-up:
```
SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "certificates-storage-backend-development-0", which is unexpected.

MountVolume.SetUp failed for volume "kubernetes.io/glusterfs/0fb66a0a-6aae-11e7-999d-080027a863a3-certificates-storage" (spec.Name: "certificates-storage") pod "0fb66a0a-6aae-11e7-999d-080027a863a3" (UID: "0fb66a0a-6aae-11e7-999d-080027a863a3") with: glusterfs: mount failed: mount failed: exit status 32
Mounting command: mount
Mounting arguments: 10.0.0.111:/certificates-volume /var/lib/kubelet/pods/0fb66a0a-6aae-11e7-999d-080027a863a3/volumes/kubernetes.io~glusterfs/certificates-storage glusterfs [log-level=ERROR log-file=/var/lib/kubelet/plugins/kubernetes.io/glusterfs/certificates-storage/backend-development-0-glusterfs.log]
Output: mount: unknown filesystem type 'glusterfs'
the following error information was pulled from the glusterfs log to help diagnose this issue:
glusterfs: could not open log file for pod: backend-development-0
```
**What you expected to happen:**
The volume to be mounted within the pod.
**How to reproduce it** (as minimally and precisely as possible):
- 01-gluster-storage-class.yml
- 02-gluster-endpoint.yml
- 03-persistent-volume.yml
- 04-persistent-volume-claim.yml
- 05-statefulset.yml
- glusterfs-client-install.sh
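The attached manifests themselves weren't captured in this thread. As a rough sketch of the static GlusterFS pieces they would contain (resource names, sizes, and access modes are assumptions; the server IP and Gluster volume path come from the error output above):

```yaml
# 02-gluster-endpoint.yml -- points Kubernetes at the external Gluster server(s)
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.0.0.111        # Gluster server IP from the mount error
    ports:
      - port: 1               # a port is required by the API but unused by glusterfs
---
# 03-persistent-volume.yml -- static PV backed by the Gluster volume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: certificates-storage
spec:
  capacity:
    storage: 1Gi              # assumed size
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: certificates-volume # Gluster volume name from the mount error
---
# 04-persistent-volume-claim.yml -- claim consumed by the StatefulSet
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certificates-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```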
Dockerfile:
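The Dockerfile contents also weren't captured; a plausible minimal sketch, assuming a Debian-based image and that glusterfs-client-install.sh wraps the distro package install:

```dockerfile
# Hypothetical reconstruction -- base image and script contents are assumptions
FROM debian:stretch
COPY glusterfs-client-install.sh /usr/local/bin/glusterfs-client-install.sh
RUN /usr/local/bin/glusterfs-client-install.sh   # e.g. apt-get update && apt-get install -y glusterfs-client
# keep the container alive so the StatefulSet pod can be exec'd into
CMD ["sleep", "infinity"]
```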
```sh
mkdir gluster-test-case
# add all of the above files to the gluster-test-case directory,
# changing IP addresses to your Gluster service as appropriate
cd gluster-test-case
chmod +x glusterfs-client-install.sh
docker build -t mount-test-gluster .
kubectl create -f 01-gluster-storage-class.yml
kubectl create -f 02-gluster-endpoint.yml
kubectl create -f 03-persistent-volume.yml
kubectl create -f 04-persistent-volume-claim.yml
kubectl create -f 05-statefulset.yml
```
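To observe the failure after the steps above, the standard kubectl views are enough (the pod and claim names are taken from the error output above):

```sh
kubectl get pvc                               # the claim stays unbound
kubectl describe pod backend-development-0    # events show "unknown filesystem type 'glusterfs'"
```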
**Anything else we need to know:**
If I comment out the volume section of the StatefulSet and then get the pods up and running, I can `docker exec` into the container and mount manually as expected.
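Concretely, that manual mount (given the client baked into the image, and assuming the container runs privileged) looks roughly like:

```sh
# inside `minikube ssh`, find the container and exec into it
docker ps | grep backend-development-0
docker exec -it <container-id> sh

# then, inside the container:
mkdir -p /mnt/certificates
mount -t glusterfs 10.0.0.111:/certificates-volume /mnt/certificates
```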