Replication doc updates for CRD version update and module upgrade (#460)
santhoshatdell authored and rajkumar-palani committed Feb 27, 2023
1 parent e6c7ed6 commit 5319a35
Showing 4 changed files with 17 additions and 6 deletions.
2 changes: 1 addition & 1 deletion content/docs/replication/architecture/_index.md
````diff
@@ -27,7 +27,7 @@ Any replication related operation is always carried out on all the volumes prese
 ```yaml
 kind: DellCSIReplicationGroup
-apiVersion: replication.storage.dell.com/v1alpha1
+apiVersion: replication.storage.dell.com/v1
 metadata:
   name: rg-e6be24c0-145d-4b62-8674-639282ebdd13
 spec:
````
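After upgrading, the stored version of an existing replication group can be confirmed on a live cluster. A minimal sketch, assuming `kubectl` access and reusing the illustrative group name from the example above:

```shell
# Print the apiVersion of an existing DellCSIReplicationGroup
# (the group name below is the illustrative one from the example)
kubectl get dellcsireplicationgroups.replication.storage.dell.com \
  rg-e6be24c0-145d-4b62-8674-639282ebdd13 \
  -o jsonpath='{.apiVersion}'
# After the upgrade this should print replication.storage.dell.com/v1
```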
2 changes: 1 addition & 1 deletion content/docs/replication/deployment/installation.md
````diff
@@ -44,7 +44,7 @@ git clone github.com/dell/csm-replication
 cd csm-replication
 kubectl create ns dell-replication-controller
 # Copy and modify values.yaml file if you wish to customize your deployment in any way
-cp ../helm/csm-replication/values.yaml ./myvalues.yaml
+cp ./helm/csm-replication/values.yaml ./myvalues.yaml
 bash scripts/install.sh --values ./myvalues.yaml
 ```
 >Note: Current installation method allows you to specify custom `<FQDN>:<IP>` entries to be appended to controller's `/etc/hosts` file. It can be useful if controller is being deployed in private environment where DNS is not set up properly, but kubernetes clusters use FQDN as API server's address.
````
5 changes: 5 additions & 0 deletions content/docs/replication/uninstall.md
````diff
@@ -30,6 +30,11 @@ If you used `controller.yaml` manifest with either `kubectl` or `repctl` use thi
 kubectl delete -f deploy/controller.yaml
 ```
 
+To delete the replication CRD you can run the command:
+```shell
+kubectl delete crd dellcsireplicationgroups.replication.storage.dell.com
+```
+
 > NOTE: Be sure to run chosen command on all clusters where you want to uninstall replication controller.
 ## Uninstalling the replication sidecar
````
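Before deleting the CRD, it is prudent to confirm that no `DellCSIReplicationGroup` objects remain, since removing the CRD also removes any remaining custom resources. A sketch, assuming `kubectl` access to the cluster:

```shell
# List any remaining DellCSIReplicationGroup objects before removing their CRD
kubectl get dellcsireplicationgroups.replication.storage.dell.com
```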
14 changes: 10 additions & 4 deletions content/docs/replication/upgrade.md
````diff
@@ -27,11 +27,11 @@ To upgrade the CSM Replication sidecar that is installed along with the driver,
 
 ### PowerScale
 
-On PowerScale systems, an additional step is needed when upgrading from CSM v1.5 to CSM v1.6. Because the SyncIQ policy created on the target-side storage array is no longer used, it must be deleted for any existing replication groups after performing the upgrade to the CSM Replication sidecar and PowerScale CSI driver. These steps should be performed before the replication groups are used with the new version of the CSI driver. Until this step is performed, Replication Groups created on CSM v1.5 will display an UNKNOWN link state in CSM v1.6.
+On PowerScale systems, an additional step is needed when upgrading to CSM Replication v1.4.0 or later. Because the SyncIQ policy created on the target-side storage array is no longer used, it must be deleted for any existing `DellCSIReplicationGroup` objects after performing the upgrade to the CSM Replication sidecar and PowerScale CSI driver. These steps should be performed before the `DellCSIReplicationGroup` objects are used with the new version of the CSI driver. Until this step is performed, existing `DellCSIReplicationGroup` objects will display an UNKNOWN link state.
 
 1. Log in to the target PowerScale array.
 2. Navigate to the `Data Protection > SyncIQ` page and select the `Policies` tab.
-3. Delete all disabled, target-side SyncIQ policies that are used for CSM Replication. Such policies will be distinguished by their names, of the format `<prefix>-<kubernetes namespace>-<IP of replication destination>-<RPO duration>`.
+3. Delete disabled, target-side SyncIQ policies that are used for CSM Replication. Such policies will be distinguished by their names, of the format `<prefix>-<kubernetes namespace>-<IP of replication destination>-<RPO duration>`.
````
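The policy-name format in step 3 can be made concrete with a quick sketch; all four values below are hypothetical, not taken from a real array:

```shell
# Illustrative reconstruction of the SyncIQ policy naming scheme:
#   <prefix>-<kubernetes namespace>-<IP of replication destination>-<RPO duration>
prefix="csi-prov"             # hypothetical replication prefix
namespace="replication-test"  # hypothetical Kubernetes namespace
dest_ip="10.0.0.2"            # hypothetical destination array IP
rpo="5m"                      # hypothetical RPO duration
policy_name="${prefix}-${namespace}-${dest_ip}-${rpo}"
echo "${policy_name}"
# → csi-prov-replication-test-10.0.0.2-5m
```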

## Updating CSM Replication controller

````diff
@@ -40,7 +40,7 @@ On PowerScale systems, an additional step is needed when upgrading from CSM v1.5
 This option will only work if you have previously installed replication with helm chart available since version 1.1. If you used simple manifest or `repctl` please use [upgrading with repctl](#upgrading-with-repctl)
 
 **Steps**
-1. Update the `image` value in the values files to reference the new CSM Replication sidecar image or use a new version of the csm-replication helm chart
+1. Update the `image` value in the values files to reference the new CSM Replication controller image or use a new version of the csm-replication helm chart
 2. Run the install script with the option `--upgrade` by running: `cd ./scripts && ./install.sh --values ./myvalues.yaml --upgrade`
 3. Run the same command on the second Kubernetes cluster if you use multi-cluster replication topology
````

````diff
@@ -54,5 +54,11 @@ This option will only work if you have previously installed replicat
 **Steps**
 1. Find a new version of deployment manifest that can be found in `deploy/controller.yaml`, with newer `image` pointing to the version of CSM Replication controller you want to upgrade to
 2. Apply said manifest using the usual `repctl create` command like so
-`./repctl create -f ./deploy/controller.yaml`. The output should have this line `Successfully updated existing deployment: dell-replication-controller-manager`
+`./repctl create -f ../deploy/controller.yaml`. The output should have this line `Successfully updated existing deployment: dell-replication-controller-manager`
 3. Check if everything is OK by querying your Kubernetes clusters using `kubectl` like this `kubectl get pods -n dell-replication-controller`, your pods should be READY and RUNNING
+
+### Replication CRD version update
+
+CRD `dellcsireplicationgroups.replication.storage.dell.com` has been updated to version `v1` in CSM Replication v1.4.0. To facilitate the continued use of existing `DellCSIReplicationGroup` CR objects after upgrading to CSM Replication v1.4.0 or later, an `init container` will be deployed during upgrade. The `init container` updates the existing CRs with necessary steps for their continued use.
+
+> Note: Do not update the CRD as part of upgrade. The `init container` takes care of updating existing CRD and CR versions.
````
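Whether the version migration has completed can be checked by inspecting which versions the CRD currently stores; a sketch, assuming `kubectl` access to the cluster:

```shell
# Show the versions currently recorded as stored for the replication CRD
kubectl get crd dellcsireplicationgroups.replication.storage.dell.com \
  -o jsonpath='{.status.storedVersions}'
# After a successful upgrade this list should include "v1"
```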
