Decide on NFS Provisioner chart direction #4142
There are several competing chart PRs for this, which take different approaches. Let's discuss and choose the direction.

Comments
@kingdonb #2559 (comment) seems to support favoring #3870.
I'll try to summarise some differences. I'm going to refer to each PR by its number and the author's name, in an attempt to make it easier to follow.

- Automated vs Manual Provisioning
- Based on $what container: honestly, I have no idea what the differences between each are
- Helm "Standards Compliance" / "Best Practice"
- Deployment vs StatefulSet
- Creation of a matching StorageClass object to consume the NFS volumes (see the sketch below this comment)

I'm sure there are more differences, but this is probably a reasonable start.
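On the StorageClass point, a minimal sketch of what such an object looks like is below; the `provisioner` string is a placeholder, since each chart's provisioner registers under its own name.

```yaml
# Minimal sketch only -- the provisioner name is a placeholder; check the
# chosen chart's templates/README for the value its provisioner registers.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: example.com/nfs   # placeholder provisioner name
reclaimPolicy: Delete
```

PVCs that set `storageClassName: nfs` are then handed to whichever provisioner the chart deploys.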
Hi @kiall, thanks for starting this thread to unify the existing proposals. Some thoughts from my side regarding the PRs, covering the NFS server, the Deployment, and the StorageClass.

So what would be the way forward? I would currently prefer #3870 as it is more feature-complete and uses the "official" image, but currently my main use case is not covered. Maybe we can join forces. An idea would be to make the NFS server optional; in this case the user could provide some external NFS storage themselves and only use the provisioning part.
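Purely as a hypothetical sketch of that idea (these values.yaml keys are invented for illustration and do not come from any of the PRs), the toggle could look something like:

```yaml
# Hypothetical values.yaml layout -- key names are invented for illustration.
nfsServer:
  enabled: true              # false = skip deploying the in-cluster NFS server
external:
  server: nfs.example.com    # used only when nfsServer.enabled is false
  path: /exported/path
```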
Is it the official image? I wasn't aware of the other 2 images when I wrote it, so, no idea :)
If the general sentiment is to keep #3870 (@kiall), then let's see if we can make the NFS server optional so it covers your use case. I can look at that late next week, but if others get to it first, great. We can either just sync here until we get everything in order, or a PR could be sent to the branch #3870 is sourced from (https://github.com/kiall/kubernetes-charts/tree/add-nfs-provisioner-chart). If the general sentiment is NOT to keep #3870 (@kiall), then I'm happy to collaborate anywhere that works.
I went digging to see what the differences are between the different images.

So, I actually think the differences between #3870 (@kiall - quay.io/kubernetes_incubator/nfs-provisioner) and #2779 (@nbyl - quay.io/external_storage/nfs-client-provisioner) warrant two different charts. Ideally, we'd try to make sure the distinction is made clear and keep things like naming as close and clear as possible - e.g. call the charts "nfs-server-provisioner" and "nfs-client-provisioner".
This discussion is awesome, I wish I had got here sooner! Can I offer to review both charts, or is there a clear winner? Someone mentioned that this might warrant two separate charts; I think I figured that out as well during my own attempt at this. It looks like one of your implementations is only a client (it expects an outside provider of the NFS resource), whereas the other is both client and server (@kiall actually provisions an NFS server using a StatefulSet). Is that a valid characterization? The amount of write-up here and discussion of the differences between all three PRs is really fantastic. I am so glad there are options now, because this has been an itch I've needed to scratch!
Kinda :) #2779 (@nbyl) expects an external NFS server to exist, yes. And #3870 (@kiall) expects to manage its own NFS server inside the cluster, but it doesn't offer a clean way to tie into an existing NFS server[1].

In my opinion, #3870 (@kiall) and #2779 (@nbyl) serve different purposes, use different (and supported) external-storage subprojects, and both should be reviewed and merged. In other words, I don't think there is as much overlap between the charts as there might seem at face value.

[1]: It looks like #3870 (@kiall) can tie into another server at face value, but in reality it can only tie into a server that's running in the same pod (or on the same machine if run outside K8s), which isn't all that useful when running inside K8s.
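Whichever chart is used, the consumer side ends up looking roughly the same: a PVC that requests ReadWriteMany from the StorageClass the chart sets up. A sketch, reusing the placeholder class name `nfs` from earlier:

```yaml
# Illustrative only -- storageClassName must match whatever class the
# installed chart actually creates or documents.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
```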
Huge +1, thanks for all the discussion @kiall, @nbyl, @kingdonb & @scottrigby. There's been some interest in an NFS provisioner to make the WordPress chart scalable in #5009 and #5049, so I'm interested in making a decision here. I suggest closing out #2559 (@kingdonb) and #5049 (@vtuson) in favour of the StorageClass/provisioner approach. I agree that both #3870 (@kiall) and #2779 (@nbyl) should exist as different options, so I suggest we work on getting these reviewed and merged.
@prydonius - makes sense to me!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
It looks like PR #3870 was merged, and #2779 was closed in favor of #6433, which has also been merged now. I'm not sure there's any reason to keep this issue open anymore! It looks like we have a chart that can be used to make PVCs against a backing ReadWriteOnce volume and make them available via NFS as ReadWriteMany, ... then another chart that can take an existing NFS service and access it as a ReadWriteMany PV, via its own PVC. Can anyone who has used both charts comment on whether there are any gaps remaining between them?
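To make the shared-access part concrete, here is a sketch of two pods mounting the same ReadWriteMany claim (reusing the hypothetical `shared-data` claim from the earlier sketch); either chart's provisioner could be backing it.

```yaml
# Sketch only: two pods sharing one NFS-backed ReadWriteMany claim.
apiVersion: v1
kind: Pod
metadata:
  name: writer
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /data/hello; sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-data   # hypothetical claim from the earlier sketch
---
apiVersion: v1
kind: Pod
metadata:
  name: reader
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "cat /data/hello; sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-data
```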
One uses https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client; the other is https://github.com/helm/charts/tree/master/stable/nfs-server-provisioner. Yes, I think this issue should be closed.
I'm reading the READMEs now, trying to decide if it's possible to use them together... it seems like it should be! The original point of #2559 was to provide a way for ReadWriteMany PVCs to consume a single PV block device over NFS, and it vaguely looks like stable/nfs-client-provisioner does that pretty handily, as long as you already have that block device exposed via NFS. I haven't really been following these closely, but now that they are both merged I may have some issues with the documentation of at least the client chart. If so, that is a separate issue and I'll open it separately. So I'll go ahead and test some more, but... I vote in favor of closing #4142.
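For anyone testing the same combination: pointing the client chart at an existing export is roughly a values override like the one below. I'm assuming an `nfs.server`/`nfs.path` values layout here; verify the actual keys against the chart's README.

```yaml
# Assumed values layout for stable/nfs-client-provisioner -- verify in its README.
nfs:
  server: nfs.example.com    # address of the existing NFS server
  path: /exported/share      # export under which volumes are provisioned
```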