
Add support for smart PVC cloning #695

Closed
wants to merge 17 commits into from

Conversation

Madhu-1 (Collaborator) commented Oct 22, 2019

This PR adds support for RBD smart cloning from a PVC. To create a new image from an existing image, we follow the steps below.

Create a PVC from a PVC

  • Create a temporary snapshot from the parent volume
  • Clone a new image from the temporary snapshot with the options --rbd-default-clone-format 2 --image-feature layering,deep-flatten
  • Delete the temporary snapshot
  • Create a snapshot with the requested name from the cloned volume
  • Create a clone from that snapshot with the user-provided image features

Delete a PVC

  • Move the image to the trash
  • Add a task to remove the image from the trash
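The two deletion steps can be sketched in Go as the CLI invocations they correspond to. This is illustrative only: the pool, image name, and image ID below are hypothetical placeholders, and ceph-csi itself uses the go-ceph bindings rather than shelling out.

```go
package main

import "fmt"

// trashDeleteCommands sketches the delete flow described above: move the
// image to the RBD trash, then queue a ceph-mgr task to purge it
// asynchronously instead of a blocking `rbd rm`. All names are placeholders.
func trashDeleteCommands(pool, image, imageID string) [][]string {
	return [][]string{
		// Unlink the image immediately; its data stays in the trash
		// until the purge task runs.
		{"rbd", "trash", "mv", pool + "/" + image},
		// Asynchronous removal handled by the ceph-mgr rbd_support module
		// (trash removal is addressed by the image ID).
		{"ceph", "rbd", "task", "add", "trash", "remove", pool + "/" + imageID},
	}
}

func main() {
	for _, c := range trashDeleteCommands("replicapool", "csi-vol-0001", "5e47d5a0b6c2") {
		fmt.Println(c)
	}
}
```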

Example commands:

```
1) rbd snap create <RBD image for src k8s volume>@<random snap name>
2) rbd clone --rbd-default-clone-format 2 --image-feature
layering,deep-flatten <RBD image for src k8s volume>@<random snap name> <RBD image for temporary snap image>
3) rbd snap rm <RBD image for src k8s volume>@<random snap name>
4) rbd snap create <RBD image for temporary snap image>@<random snap name> <k8s snap name>
5) rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
<RBD image for temporary snap image>@<random snap name> <RBD image for k8s dst vol>
6) rbd snap rm <RBD image for temporary snap image>@<random snap name>
```
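The command sequence above can be sketched in Go as a builder that produces the rbd invocations in order. This is a sketch, not ceph-csi's implementation: all image and snapshot names are placeholders, the driver uses librbd bindings rather than the CLI, and for simplicity the user-requested snapshot name is used uniformly in steps 4–6.

```go
package main

import (
	"fmt"
	"strings"
)

// buildCloneCommands returns the rbd CLI invocations for the PVC-from-PVC
// flow: temporary snapshot, intermediate clone, cleanup, user snapshot,
// final clone, and removal of the intermediate snapshot. All parameters
// are hypothetical placeholders.
func buildCloneCommands(srcImage, tmpSnap, tmpImage, k8sSnap, dstImage, features string) [][]string {
	return [][]string{
		{"rbd", "snap", "create", srcImage + "@" + tmpSnap},
		{"rbd", "clone", "--rbd-default-clone-format", "2",
			"--image-feature", "layering,deep-flatten",
			srcImage + "@" + tmpSnap, tmpImage},
		{"rbd", "snap", "rm", srcImage + "@" + tmpSnap},
		{"rbd", "snap", "create", tmpImage + "@" + k8sSnap},
		{"rbd", "clone", "--rbd-default-clone-format", "2",
			"--image-feature", features,
			tmpImage + "@" + k8sSnap, dstImage},
		{"rbd", "snap", "rm", tmpImage + "@" + k8sSnap},
	}
}

func main() {
	cmds := buildCloneCommands("pool/src-img", "tmp-snap-1234", "pool/tmp-img",
		"k8s-snap", "pool/dst-img", "layering")
	for _, c := range cmds {
		fmt.Println(strings.Join(c, " "))
	}
}
```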

Fixes: #675

Note: This PR is built on top of #693, will squash/remove commits once #693 is merged

Signed-off-by: Madhu Rajanna <[email protected]>
Refactor E2E for CephFS and RBD

Signed-off-by: Madhu Rajanna <[email protected]>
With the current implementation in ceph-csi, it is not possible to
delete a cloned volume while its snapshot is present,
due to the parent-child linking. To remove this dependency,
we had a discussion and came up with an idea to separate
the clone and the snapshot, so that we can delete the
snapshot and the cloned image in any order.

The steps followed to create an independent snapshot are as follows:

* Create a temporary snapshot from the parent volume
* Clone a new image from the temporary snapshot with the options
`--rbd-default-clone-format 2 --image-feature layering,deep-flatten`
* Delete the temporary snapshot
* Create a snapshot with the requested name

* Clone a new image from the snapshot with user-provided options
* Check the depth of the image, as the number of nested volume
clones is limited (16 by default, changeable via configuration);
if the depth limit is reached, flatten the newly cloned image

* Delete the cloned image (earlier we removed the image with the `rbd rm`
command; with the new design we move the images to the trash).
The same applies to normal volume deletion.

* Delete the temporary cloned image which was created for a snapshot
* Delete the snapshot
Example commands:
```
1) rbd snap create <RBD image for src k8s volume>@<random snap name>
2) rbd clone --rbd-default-clone-format 2 --image-feature
layering,deep-flatten <RBD image for src k8s volume>@<random snap name> <RBD image for temporary snap image>
3) rbd snap rm <RBD image for src k8s volume>@<random snap name>
4) rbd snap create <RBD image for temporary snap image>@<random snap name> <k8s snap name>
5) rbd clone --rbd-default-clone-format 2 --image-feature <k8s dst vol config>
<RBD image for temporary snap image>@<random snap name> <RBD image for k8s dst vol>
```
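The depth check mentioned above can be sketched as a small predicate. The limit of 16 and the `>=` comparison are assumptions taken from the description ("max 16, can be changed based on the configuration"); ceph-csi's real implementation differs in detail.

```go
package main

import "fmt"

// maxCloneDepth mirrors the limit described above: RBD supports only a
// bounded chain of nested clones (16 here, adjustable via configuration).
const maxCloneDepth = 16

// needsFlatten reports whether a newly cloned image whose parent chain has
// the given depth must be flattened before use.
func needsFlatten(depth int) bool {
	return depth >= maxCloneDepth
}

func main() {
	for _, d := range []int{2, 15, 16} {
		fmt.Printf("depth %d: flatten=%v\n", d, needsFlatten(d))
	}
}
```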

Signed-off-by: Madhu Rajanna <[email protected]>
nixpanic added a commit to nixpanic/ceph-csi that referenced this pull request Oct 30, 2019
STEP: creating mount and staging directories
STEP: creating a volume from source snapshot
• Failure [1.001 seconds]
Controller Service [Controller Server]
/home/vagrant/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/tests.go:44
  CreateVolume
  /home/vagrant/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:390
    should fail when the volume source volume is not found [It]
    /home/vagrant/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:827

    Expected
        <codes.Code>: 3
    to equal
        <codes.Code>: 5

    /home/vagrant/go/src/github.com/kubernetes-csi/csi-test/pkg/sanity/controller.go:846

See-also: ceph#695

```
volumeID := volumeSource.GetVolumeId()
if volumeID == "" {
    return status.Error(codes.InvalidArgument, "volume content source volume ID cannot be empty")
}
```
A reviewer (Member) commented:

This should return status.NotFound in case cloning from a Volume is intended, but the reference to the SourceVolume is incorrect.

Madhu-1 (Collaborator, Author) replied:

Will take care of it; the same applies to the snapshot source as well.
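The distinction raised in the review comment above can be sketched as follows. The constants are local stand-ins for the gRPC status codes (the real code returns `status.Error(codes.NotFound, ...)` from google.golang.org/grpc), and `lookupVolume` is a hypothetical helper, not ceph-csi's actual function:

```go
package main

import (
	"errors"
	"fmt"
)

// Local stand-ins for the gRPC status codes used by the CSI spec.
const (
	codeOK              = 0
	codeInvalidArgument = 3 // what the driver returned
	codeNotFound        = 5 // what the csi-sanity test expects
)

var errNotFound = errors.New("volume not found")

// lookupVolume is a hypothetical lookup; ceph-csi resolves the volume
// handle against its journal instead.
func lookupVolume(id string) error {
	if id == "missing" {
		return errNotFound
	}
	return nil
}

// validateVolumeSource distinguishes a malformed request (empty ID ->
// InvalidArgument) from a well-formed reference to a source volume that
// does not exist (-> NotFound), as the review comment suggests.
func validateVolumeSource(volumeID string) int {
	if volumeID == "" {
		return codeInvalidArgument
	}
	if errors.Is(lookupVolume(volumeID), errNotFound) {
		return codeNotFound
	}
	return codeOK
}

func main() {
	fmt.Println(validateVolumeSource(""))        // 3
	fmt.Println(validateVolumeSource("missing")) // 5
	fmt.Println(validateVolumeSource("vol-1"))   // 0
}
```

This is exactly the 3-versus-5 mismatch the csi-sanity failure log in this thread reports.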

humblec (Collaborator) commented Mar 3, 2020

@Madhu-1 can you revisit this PR ?

Madhu-1 (Collaborator, Author) commented Mar 3, 2020

This will be based on #693; once it is merged we will revisit this one.

humblec mentioned this pull request Mar 17, 2020
humblec (Collaborator) commented Mar 17, 2020

[status update]
Last week we had a meeting around the snapshot/clone design and implementation we introduced recently with this PR, to support further use cases like backup/DR, data replication, etc. The main reason behind this meeting was to avoid another revamp of the snapshot/clone implementation when we start supporting these use cases in the near future. At the moment, we are listing the requirements to support them, along with possible changes and enhancements needed in the current implementation/design. Further technical notes from those discussions will be posted here. Considering we are at the code freeze for v2.1.0 and the state of this feature, the decision is to defer it to next month's release (v3.0.0 #865) and release v2.1.0 as planned with other bug fixes and enhancements.

ShyamsundarR (Contributor) commented:

Details of the other use-cases are present here.

The decision is to continue with the current design and implementation, as it does not impede the future use cases as understood at present.

nixpanic added a commit to nixpanic/ceph-csi that referenced this pull request Mar 20, 2020
@nixpanic nixpanic added this to the release-3.0.0 milestone Apr 9, 2020
@nixpanic nixpanic added enhancement New feature or request component/rbd Issues related to RBD labels Apr 19, 2020
mergify bot commented May 21, 2020

This pull request now has conflicts with the target branch. Could you please resolve conflicts and force push the corrected changes? 🙏

Successfully merging this pull request may close these issues.

Add support for smart cloning in rbd