The list below covers the cluster-configuration-level changes necessary to connect a cluster to zadara (or perhaps other) external storage.
- Run `./bin/rake cluster:new` and choose one of the zadara variants. If you don't know the path to the volume you're exporting or the IP of the zadara NFS server, that's fine: enter anything that looks like a path or IP address and fix it later with `./bin/rake cluster:edit`.
- Create your VPC via `./bin/rake vpc:init`.
- Now create your zadara storage volumes (see below). Come back and continue with the next step when that's done.
- Provision the rest of your cluster with `./bin/rake admin:cluster:init`. You should not see a "Storage" layer.
- Start your instances via `./bin/rake stack:instances:start`. You should now be using zadara-provisioned storage (a quick check is sketched after this list). Be sure to implement monitoring and alerts for your external storage.
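Once the instances are up, it's worth confirming that the zadara export is actually mounted before relying on it. A minimal sanity check, assuming a hypothetical mount point of `/mnt/shared` (substitute whatever your cluster configuration actually uses):

```
#!/usr/bin/env bash
# Quick check that the zadara NFS export is mounted on an instance.
# MOUNT_POINT is a placeholder; use the mount point from your cluster configuration.
MOUNT_POINT="/mnt/shared"

if mount -t nfs,nfs4 | grep -q " ${MOUNT_POINT} "; then
    echo "NFS export is mounted at ${MOUNT_POINT}"
    df -h "${MOUNT_POINT}"   # show capacity, a useful baseline for monitoring/alerts
else
    echo "No NFS mount found at ${MOUNT_POINT}" >&2
    exit 1
fi
```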
Zadara VPSA creation is discussed in more detail here.
- Create the VPSA in the main zadara web console with a controller and some drives
- Send an email to zadara with your AWS account name and account number; both can be found under the "My Account" menu option in the AWS web console.
- While you're waiting for the VPSA, create a virtual private gateway (a CLI sketch of the gateway steps appears after this list).
- Accept the virtual interface zadara created under "Direct Connect" in the AWS console and link it to the virtual private gateway you created above.
- Attach the virtual private gateway to the VPC you created for your cluster.
- Allow the routes provided by the virtual private gateway to propagate in all the route tables of your VPC, for both private and public subnets. This is under "Route Tables", on the "Route Propagation" tab; it probably makes sense to filter by your VPC to make things easier. There's a UI bug that can make it look like routes are propagating when they aren't, so switch to each route table and refresh the page to ensure you've actually made a change and that it's taken.
- Log in to the remote VPSA through an SSH tunnel over your VPC, something like `ssh -L 8080:<zadara hostname>:80 <external IP in your cluster>`. The VPSA GUI should now be available at http://localhost:8080. The easiest way to do this is to add a throwaway custom layer that contains a single instance with a public IP and the default chef recipes. Start up this instance and it will allow you to access the VPSA GUI from the correct VPC. After you've successfully connected your cluster, you can remove the layer and the throwaway instance.
- Create a RAID group from your drives that will be used to populate a pool.
- Create NAS users with username/UID mappings: matterhorn with UID 2122 and custom_metrics with UID 997.
- Create NAS groups with group name/GID mappings: matterhorn with GID 2122 and custom_metrics with GID 997.
- Carve a NAS volume from the pool you previously created. The export name is set by the volume, since an NFS server can have multiple exports. Use a name that makes sense for your cluster.
- Create a server with a CIDR block that matches your VPC and/or relevant subnets. Ensure that "root squash" is enabled.
- Attach the volume you created above to this server (a quick mount check from inside the VPC is sketched after this list).
- You should now have the information you need to update your cluster configuration for external storage. Return to the previous section.
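If you prefer the AWS CLI to the console for the gateway steps above, the sequence looks roughly like this. All IDs below are placeholders, and this is only a sketch of the console workflow described above, not something the rake tasks perform for you:

```
# Create the virtual private gateway and attach it to your cluster's VPC.
aws ec2 create-vpn-gateway --type ipsec.1
aws ec2 attach-vpn-gateway --vpn-gateway-id vgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Accept the virtual interface zadara created and link it to that gateway.
aws directconnect confirm-private-virtual-interface \
    --virtual-interface-id dxvif-0123456789abcdef0 \
    --virtual-gateway-id vgw-0123456789abcdef0

# Enable route propagation on each route table in the VPC (public and private subnets).
aws ec2 enable-vgw-route-propagation \
    --route-table-id rtb-0123456789abcdef0 \
    --gateway-id vgw-0123456789abcdef0
```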
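Before returning to the cluster configuration, you can sanity-check the export from the throwaway instance (or any instance inside the VPC). A minimal check, assuming `showmount` is installed and using placeholder values for the VPSA address and export name:

```
# List the exports the VPSA is offering to this client (placeholder VPSA address).
showmount -e 10.0.0.50

# Mount the export at a temporary location and confirm it looks sane (placeholder export name).
sudo mkdir -p /mnt/zadara-test
sudo mount -t nfs 10.0.0.50:/export/my-cluster /mnt/zadara-test
df -h /mnt/zadara-test
sudo umount /mnt/zadara-test
```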
Removing a zadara cluster is almost the same process as removing a normal cluster: run `./bin/rake admin:cluster:delete`.
The VPC will probably not delete cleanly; if so, you should (the equivalent AWS CLI calls are sketched after this list):
- manually detach the virtual private gateway,
- manually delete the VPC,
- remove the cloudformation stack, and then
- run `./bin/rake admin:cluster:delete` again.
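If you do end up cleaning up by hand, the equivalent AWS CLI calls are roughly the following (placeholder IDs and stack name; the order matters, since the VPC can't be deleted while the gateway is attached):

```
# Detach the virtual private gateway from the VPC first.
aws ec2 detach-vpn-gateway --vpn-gateway-id vgw-0123456789abcdef0 --vpc-id vpc-0123456789abcdef0

# Delete the VPC itself.
aws ec2 delete-vpc --vpc-id vpc-0123456789abcdef0

# Remove the cloudformation stack, then re-run ./bin/rake admin:cluster:delete.
aws cloudformation delete-stack --stack-name my-cluster-stack
```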
You might want to remove and/or reformat the volume you've exported to free up space.