rbd node service: flatten image when it has references to parent #1543
Comments
Currently, as part of the node service, we add an rbd flatten task for new PVC creates. Ideally, we should add a flatten task only for snapshot/cloned PVCs, as required. Fixes: ceph#1543 Signed-off-by: Prasanna Kumar Kalever <[email protected]>
@pkalever I tried to reproduce it on Ceph Octopus but was not able to.
This was with a cephcsi canary image. Let me know if you are still able to reproduce it; I would like to check a few things.
Is there any update on this? I discussed seeing flattening happening in #1800, but it was mentioned that none of the operations there would cause flattening. As it stands right now, if I create a PVC, then snapshot that PVC, then create a clone of the snapshot and try to mount it, I'm getting this error from a
It is eventually able to enter a running state, but this is due to the flatten operation completing. I don't want any flattening to occur, as it defeats the point of me using cloning altogether 😞 I did see this in the output of my
Is this flattening happening because I'm running a kernel that doesn't support deep flatten? 🤔
Ah, it seems this comment suggests you must have kernel 5.1+ to avoid a full flatten: #693 (comment). Presumably this is the problem, as minikube is using 4.19?
@cjheppell Kernels older than 5.1 do not support mapping rbd images that have the deep-flatten image feature; for those, we need to flatten the image first and then map it on the node.
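As a rough illustration of that kernel cutoff (krbd gained the ability to map deep-flatten images in 5.1), here is a minimal POSIX-sh sketch; the function name and the version-parsing approach are my own for illustration, not part of ceph-csi:

```shell
#!/bin/sh
# Hypothetical helper: report whether a kernel version string is new enough
# (>= 5.1) to map rbd images that carry the deep-flatten feature.
kernel_supports_deep_flatten() {
    ver="${1:-$(uname -r)}"     # e.g. "4.19.0-minikube" or "5.4.0-42-generic"
    major="${ver%%.*}"          # text before the first dot
    rest="${ver#*.}"
    minor="${rest%%.*}"
    minor="${minor%%[!0-9]*}"   # strip trailing non-digits, e.g. "1-rc1" -> "1"
    [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 1 ]; }
}

for v in 4.19.0-minikube 5.1.0 5.10.3; do
    if kernel_supports_deep_flatten "$v"; then
        echo "$v: can map deep-flatten images"
    else
        echo "$v: must flatten before mapping"
    fi
done
```

Running it against a minikube-style `4.19.0` string reports that flattening is required, while `5.1.0` and later pass.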
Was this a change between v2.1.x and v3? As described in #1800, when I performed the same actions on v2.1.2 I didn't see this flattening behaviour.
Yes, this is a change in v3.x, as we reworked the rbd snapshot and clone implementation.
Presumably that's what "Snapshot Alpha is no longer supported" in the v3.0.0 release notes is referring to? https://github.com/ceph/ceph-csi/releases/tag/v3.0.0

I must admit, this is very surprising and completely unexpected behaviour as a user. It seems that unless I'm on kernel 5.1+, cloning from snapshots is fundamentally not performing the copy-on-write behaviour that Ceph claims to offer. Even more so, that's very hidden from me: glancing at the behaviour in Kubernetes, it appears that cloning is working, but it's only when I mount the clone that the flatten is revealed. If that snapshot contains hundreds of gigabytes of data, the operation is likely to take a very long time.

On top of that, the only way I was able to determine that I needed a 5.1+ kernel was by digging through issues and pull request comments. Could this perhaps be documented more clearly somewhere? It would have saved me an awful lot of time spent digging through the lines of code and various pull requests associated with this behaviour.
In Kubernetes, snapshots and PVCs are independent objects, and this is the new design (v3.x+) to handle that: an rbd clone is created when a user requests a Kubernetes snapshot.
Yes: because clones are created with the deep-flatten feature, if the kernel version is less than 5.1 the nodeplugin tries to flatten the image and then map it. You also have an option to flatten the image during the snapshot create operation itself.
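To see whether a given cloned image would hit this flatten-on-map path, you can check whether it still references a parent. A minimal sketch; the `has_parent` helper and the sample JSON are illustrative (a crude grep, not a JSON parser) — on a live cluster you would pipe in `rbd info <pool>/<image> --format json` instead:

```shell
#!/bin/sh
# Sketch: given `rbd info --format json` output on stdin, report whether the
# image still references a parent snapshot.
has_parent() {
    grep -q '"parent"'
}

# Illustrative output for a fresh clone; real cluster JSON has more fields.
sample='{"name":"restored-clone","features":["layering","deep-flatten"],"parent":{"pool":"replicapool","image":"base","snapshot":"snap1"}}'

if printf '%s\n' "$sample" | has_parent; then
    echo "image has a parent: a pre-5.1 kernel would flatten it before mapping"
else
    echo "image is already flat"
fi
```

Once the parent reference is gone (after `rbd flatten`, or after ceph-csi flattens it), mapping proceeds without the long-running flatten step.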
Yes, I will update the documentation with the minimum required kernel version to support snapshot and clone.
Quite right, but given I'm using a Ceph driver to fulfil the operations of the k8s concepts of snapshot/clone, I'd still expect the behaviour to match that documented in Ceph's own snapshot/clone semantics. It appears this is true for kernels 5.1+ on v3.x.x, and it was true for kernels <5.1 on v2.1.x releases, but is no longer the case for kernels <5.1 on v3.x.x releases. My point is that, as a user, one of the important features Ceph offers is unavailable to me unless some prerequisites are met, and those prerequisites aren't clear. Perhaps this behaviour could also be opt-in? I'm aware that Kubernetes presents snapshots and PVCs as independent, but if I consciously acknowledge that the hidden relationship is present, then we could avoid the need to flatten for kernels <5.1 on v3.x.x releases?
Many thanks. That will be very helpful.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
Describe the bug
Currently, as part of node service, we add rbd flatten task for new PVC creates. Ideally, we should add a flatten task only for snapshots/cloned PVCs as required.
Environment details
Steps to reproduce
Actual results
A flatten task is added for new PVCs
Expected behavior
No flatten task