Add support for incremental backups in Ceph #6411
Comments
I was doing the same thing, and I was thinking of creating a driver for Backy or Benji in order to build a more professional tool, because the conversion to qcow2 gives better portability than the raw image, but dealing with the raw images can speed up the process! Backy/Benji simplify the whole process because they use the XML representation of the rbd diff!
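For context, rbd can emit the list of extents that changed between two snapshots in a machine-readable form, which is what tools like Backy2/Benji build on. A minimal example; the pool, image, and snapshot names are illustrative assumptions:

```bash
# Print the extents that changed between two snapshots of an RBD image.
# The --format flag accepts json (shown here) or xml in current Ceph
# releases; pool/image/snapshot names are illustrative.
rbd diff --from-snap backup-0 one/disk-0@backup-1 --format json
# Output is a JSON array of {offset, length, exists} extent records.
```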
Hi @rsmontero, I think a better approach for Ceph incremental backups could be achieved using known projects like Backy2. Store the increments in … This is just my humble opinion, which is aligned with @nachowork90's proposal.
Agreed; in fact the final implementation in 6.10.1 does not store deltas in qcow2 format but in the native rbd diff format. I don't know the internals of the other tools, but implementing these incremental backups directly on top of the OpenNebula backup framework aims to provide the best experience for our user base. Thanks for your comments!
Implementation overview:

- Incremental points are saved as dedicated rbd snapshots under the "one_backup_<increment_id>" namespace. These snapshots are used to generate delta files in rbdiff format.
- The rbdiff files are stored on the backup server and are used to restore the rbd volumes.
- The restore process is performed directly on the Ceph cluster, importing the base image (the first full backup in the chain, rbd import) and then applying the increments (rbd import-diff) up to the target increment; see the sketch after this message.
- Two new pseudo-protocols have been implemented to adopt the restore pattern above (restic+rbd, rsync+rbd). These protocols bundle the rbdiff files in a tarball for transfer from the backup server.

Note: the reconstruct process uses the Ceph BRIDGE_LIST and not the backup server (as opposed to qcow2 backups).

Other bug fixes:

- This commit also fixes #6741, resetting the backup chain after a restore.
- The original Ceph drivers did not receive the full action information; this has been fixed by including the VM information in the STDIN string sent to the driver.

Compatibility note:

- Backup actions should now return the backup format used (raw, rbd, ...). If it is not provided, oned (6.10.x) will use raw as a default to accommodate any third-party driver implementation. It is recommended to include this third argument.

Signed-off-by: Guillermo Ramos <[email protected]>
Co-authored-by: Guillermo Ramos <[email protected]>
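A minimal sketch of that restore pattern, assuming the rbdiff tarball has already been unpacked on a host from the Ceph BRIDGE_LIST. All pool, image, snapshot, and file names here are illustrative assumptions, not the driver's actual conventions:

```bash
#!/usr/bin/env bash
# Rebuild an RBD volume from a full backup plus a chain of rbdiff increments.
# CHAIN_DIR, pool, image, and snapshot names are illustrative.
set -e

CHAIN_DIR=/var/tmp/backup.chain
POOL=one
IMAGE=one-42-disk-0

# Import the base image (the first full backup in the chain).
rbd import "$CHAIN_DIR/disk.0.full" "$POOL/$IMAGE"

# import-diff checks that each increment's start snapshot exists on the
# destination, so recreate the snapshot the first increment was exported
# from (the snapshot naming is an assumption here).
rbd snap create "$POOL/$IMAGE@one_backup_0"

# Replay the increments in order up to the target increment. Each rbdiff
# records its end snapshot, which import-diff creates, so the next diff
# in the chain always finds its start point. The glob assumes increment
# ids that sort lexicographically (e.g. zero-padded).
for diff in "$CHAIN_DIR"/disk.0.*.rbdiff; do
    rbd import-diff "$diff" "$POOL/$IMAGE"
done
```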
Description
This feature is to implement incremental backups in Ceph. The overall process would be to take an rbd snapshot at each backup point, export the delta from the previous snapshot with rbd export-diff, and ship the resulting rbdiff file to the backup server, as sketched below.
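Based on the implementation overview above, a minimal sketch of capturing one increment; the snapshot names follow the illustrative "one_backup_<increment_id>" convention, and the pool, image, and file names are assumptions:

```bash
#!/usr/bin/env bash
# Capture one incremental backup of an RBD image as an rbdiff file.
# Pool, image, snapshot, and file names are illustrative.
set -e

POOL=one
IMAGE=one-42-disk-0
PREV=one_backup_0   # snapshot taken by the previous backup in the chain
CURR=one_backup_1   # snapshot marking this increment

# Freeze the current state of the image in a new snapshot.
rbd snap create "$POOL/$IMAGE@$CURR"

# Export only the blocks that changed since the previous snapshot.
rbd export-diff --from-snap "$PREV" "$POOL/$IMAGE@$CURR" disk.0.1.rbdiff

# disk.0.1.rbdiff is then shipped to the backup server (e.g. by the
# restic or rsync drivers).
```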
Interface Changes
None
Progress Status