[1.12.0] docker volume (nfs) is not being mounted in all service replicas [swarm-mode] #25202
Comments
Ping @stevvooe!
Here is the syntax you should use:
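The code block from this comment did not survive scraping. Based on the documented `docker service create --mount` syntax for the local driver, the suggested command was presumably along these lines (the server address, export path, and names here are placeholders, not the originals):

```shell
# Auto-create an NFS-backed volume on whichever node each task lands on.
# The volume-driver/volume-opt keys tell swarm how to create the volume
# on that node the first time the mount is needed.
docker service create \
  --name my-service \
  --replicas 3 \
  --mount type=volume,source=nfsvol,target=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/export/path,volume-opt=o=addr=192.168.1.100 \
  alpine sleep 1d
```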
To clarify: optionally, you can use the syntax provided by @cpuguy83 to auto-create the volume before the task runs on the target node. Since the volume did not exist on the second and third nodes, I'm surprised this didn't fail; there may be a bug there.
@cpuguy83 How should I use the "source" option?
@stevvooe The volume is being created on every node that gets a replica, but the NFS share is only being mounted on the node-1 host.
@stevvooe After
And I can re-create them with the same name, over and over (is this the right behavior?).
But even so, I still have no luck listing files on node-2 or node-3.
I will create a new volume with a different name on every node, and after that I will create the service with 3 replicas.
@stevvooe Good news! It works when I create the volume on each node first and then create the service with 3 replicas.
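Reconstructing the working sequence described here (with a placeholder NFS server address and export path), the per-node approach looks roughly like:

```shell
# Run on EVERY node, so a volume named "nfsshare" with the correct NFS
# options already exists wherever a replica may be scheduled:
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.100,rw \
  --opt device=:/export/path \
  nfsshare

# Then, on a manager, create the service; each replica mounts the
# pre-created volume on its own node:
docker service create --replicas 3 --name web \
  --mount type=volume,source=nfsshare,target=/data \
  alpine sleep 1d
```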
I believe the behavior for services with volumes should be one of the following:

Thank you for your patience and help.
This works if you use the syntax I gave you. Source is the volume name.
Thank you @cpuguy83, I got it now: "source" and "target" are mandatory options when using the volume-* keys, and the volume will be created on each node under the "source" name.
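To illustrate the difference (hypothetical names and addresses): with a bare source/target mount, swarm auto-creates a plain local volume on any node where none exists yet, whereas adding the volume-driver/volume-opt keys makes the auto-created volume NFS-backed everywhere:

```shell
# Bare mount spec: if "data" does not exist on a node, swarm creates it
# there as an EMPTY local volume, not an NFS mount.
docker service create --name svc-plain \
  --mount type=volume,source=data,target=/data \
  alpine sleep 1d

# Mount spec with creation options: any auto-created "data" volume is
# NFS-backed (placeholder address and export path).
docker service create --name svc-nfs \
  --mount type=volume,source=data,target=/data,volume-driver=local,volume-opt=type=nfs,volume-opt=device=:/export/path,volume-opt=o=addr=192.168.1.100 \
  alpine sleep 1d
```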
@cpuguy83 Is your proposed solution a workaround, or the expected way of working?
@manast Why would it defeat the purpose of the volume? You still have to tell the service to go get the volume; this just also lets you tell it how to create the volume. There is no built-in cluster-wide driver for volumes like there is for networking.

@tiagoalves83 I'm going to go ahead and close this as it is resolved.
@cpuguy83 Because, as I understand it, one of the purposes of volumes is to allow separation of concerns: I define my volumes elsewhere, and then I just use them in my services. I should not need to know anything about how a volume is created in order to use it from a service. Maybe I'm missing something, because this seems quite important to me.
@manast The service object is an orchestrator, and I think it's quite important here to define how the service is to operate. Also, to be specific: services have mounts, not volumes. Mounts can be provided by a volume, and mounts can be updated/changed by the operator after a service is created.
@cpuguy83 I really don't get it. I am probably missing something very fundamental here, but I see many others with the same issues and concerns, so the current behaviour is probably not that intuitive.

For instance: if I create an NFS volume using the netshare plugin and then create a service using that volume, all the nodes where the service starts containers also get a volume with the same name, but it does not use the netshare driver; it uses the local driver. That means the container starts and runs, but against the wrong storage. Either give an error, or make it work, because this silent-failing, almost-working-but-not-really kind of behaviour makes it very difficult to understand the correct way of doing things.
@manast Feel free to make a proposal in a separate issue/PR. I still think you should be defining your volumes in the mount spec if it is important to the service. |
@cpuguy83 OK, I will, no problem :). It's just that until very recently (today?) I did not know that volumes behave this way, even though I have been setting up a swarm cluster for several weeks now. That suggests to me that either the documentation needs some completion, or the functionality is not as intuitive as it could be (or I am stupid, but I need to rule that out for my own sake).
@tiagoalves83 When you did your test, was the NFS share mounted on the host? I have a NAS at 192.168.12.52 which is mounted on all three nodes, and I want the swarm containers to be able to access it from wherever they are deployed. So far I cannot get it to work by replicating your steps. I run this on all three nodes:

```shell
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.12.52,rw \
  --opt device=:/media/dataShare/BAS \
  --name nfsshare
```

and then:

```shell
docker service create --mount type=volume,source=nfsshare,target=/media \
  --replicas 3 --name nfstest-3 \
  alpine /bin/sh -c "while true; do echo 'OK'; sleep 3; done"
```

where /media/dataShare/BAS is the mapped drive on the host. Does this look correct? Also, we have a username and password set up; do I need to enter those as well in the driver opts?
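On the username/password question: plain NFS authentication is host-based (export lists plus squash options), so there is no credential option for type=nfs. If the NAS share actually requires a username and password, it is more likely an SMB/CIFS share, which the local driver can also mount; a sketch with hypothetical credentials:

```shell
# Hypothetical CIFS mount via the local driver; the username, password,
# and vers values are placeholders, not taken from the thread.
docker volume create --driver local \
  --opt type=cifs \
  --opt device=//192.168.12.52/dataShare/BAS \
  --opt o=addr=192.168.12.52,username=myuser,password=mypass,vers=3.0 \
  cifsshare
```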
In my case, the NFS server's shared folder permission is
On the client side, I have some docker-machine nodes and services.
The service creates the NFS volume on each node.
Then I get these errors:
This is the volume folder:
I change the NFS server's folder to
So, what should I do with swarm mode + NFS?
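Permission errors like the ones described here are commonly caused by root squashing: processes in containers often run as uid 0, and the NFS server maps root to nobody, so writes fail until the export (or its ownership) allows them. A sketch of a server-side fix, assuming a stock Linux NFS server and a placeholder export path and subnet:

```shell
# On the NFS server: allow root in clients (and thus in containers) to
# write to the export. no_root_squash is a security trade-off; an
# alternative is to chown the export to the uid your containers use.
echo '/export/path 192.168.1.0/24(rw,sync,no_root_squash)' >> /etc/exports
exportfs -ra   # reload the export table
```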
You need to use
I'm locking the conversation on this issue. Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests. For other types of questions, consider using one of:
Output of docker version:

Output of docker info:

Additional environment details (AWS, VirtualBox, physical, etc.):
docker-machine v 0.8.0
virtualbox
Mac OSX
Steps to reproduce the issue:
Describe the results you received:
Describe the results you expected:
On the second node (worker) and the third node (worker), I expected to be able to list the NFS volume files, as on the first node (manager).
Additional information you deem important (e.g. issue happens only occasionally):
The same issue happens in 1.12.0-rc2, 1.12.0-rc4 and 1.12.0-rc5.