M #-: Add mention of thin provisioning in NFS/NAS Datastore page
Signed-off-by: Pedro Ielpi <[email protected]>
pedroielpi3 committed Dec 9, 2024
1 parent 3776869 commit c0bbbec
Showing 1 changed file with 7 additions and 5 deletions.
12 changes: 7 additions & 5 deletions source/open_cluster_deployment/storage_setup/nas_ds.rst
@@ -4,22 +4,24 @@
NFS/NAS Datastores
================================================================================

-This storage configuration assumes that your Hosts can access and mount a shared volume located on a NAS (Network Attached Storage) server. You will use this shared volumes to store VM disk images files. The Virtual Machines will boot also from the shared volume.
+This storage configuration assumes that your Hosts can access and mount a shared volume located on a NAS (Network Attached Storage) server. You will use this shared volume to store VM disk image files. The Virtual Machines will also boot from the shared volume.

-The scalability of this solution is bounded to the performance of your NAS server. However you can use multiple NAS server simultaneously to improve the scalability of your OpenNebula cloud. The use of multiple NFS/NAS datastores will let you:
+The scalability of this solution will be bound by the performance of your NAS server. However, you can use multiple NAS servers simultaneously to improve the scalability of your OpenNebula cloud. The use of multiple NFS/NAS datastores will allow you to:

* Balance I/O operations between storage servers.
-* Apply different SLA policies (e.g., backup) to different VM types or users.
+* Apply different SLA policies (e.g. backup) to different VM types or users.
* Easily add new storage.

+Using an NFS/NAS Datastore provides a straightforward solution for implementing thin provisioning for VMs, which is enabled by default when using the **qcow2** image format.
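The thin-provisioning behavior described above can be illustrated with ``qemu-img`` (a sketch, assuming the ``qemu-utils`` package is installed; the file name is hypothetical):

```shell
# Create a qcow2 image with a 10 GiB *virtual* size; the file on the
# NAS share only grows as the guest actually writes data.
qemu-img create -f qcow2 disk.0.qcow2 10G

# The space consumed on the backing store starts out tiny.
du -h disk.0.qcow2
```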

Front-end Setup
================================================================================

Simply mount the **Image** Datastore directory in the Front-end in ``/var/lib/one/datastores/<datastore_id>``. Note that if all the Datastores are of the same type you can mount the whole ``/var/lib/one/datastores`` directory.

.. note:: The Front-end only needs to mount the Image Datastores and **not** the System Datastores.

-.. note:: **NFS volumes mount tips**. The following options are recommended to mount NFS shares:``soft, intr, rsize=32768, wsize=32768``. With the documented configuration of libvirt/kvm the image files are accessed as ``oneadmin`` user. If the files must be read by ``root``, the option ``no_root_squash`` must be added.
+.. note:: **NFS volume mount tips**. The following options are recommended for mounting NFS shares: ``soft, intr, rsize=32768, wsize=32768``. With the documented configuration of libvirt/kvm, the image files can be accessed as the ``oneadmin`` user. If the files must be read by ``root``, the option ``no_root_squash`` must be added.
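As a sketch, the corresponding ``/etc/fstab`` entry on the Front-end might look like the following (the server name ``nas-server`` and the export path ``/export/one`` are hypothetical; this assumes all Datastores are of the same type, so the whole directory is mounted):

```text
# NFS export for all OpenNebula Datastores (hypothetical server and path)
nas-server:/export/one  /var/lib/one/datastores  nfs  soft,intr,rsize=32768,wsize=32768  0  0
```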

Host Setup
================================================================================
@@ -30,7 +32,7 @@ The configuration is the same as for the Front-end above: simply mount in each H

OpenNebula Configuration
================================================================================
-Once the Host and Front-end storage is setup, the OpenNebula configuration comprises the creation of an Image and System Datastores.
+Once the Host and Front-end storage have been set up, the OpenNebula configuration comprises the creation of an Image Datastore and a System Datastore.
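As a sketch, the two Datastores could be created with the ``onedatastore create`` command from template files such as the following (the Datastore names are hypothetical; ``TM_MAD = shared`` and ``DS_MAD = fs`` are the drivers used for shared-filesystem storage):

```text
# system_ds.conf -- System Datastore on the NFS share
NAME    = nfs_system
TM_MAD  = shared
TYPE    = SYSTEM_DS

# image_ds.conf -- Image Datastore on the NFS share
NAME    = nfs_images
DS_MAD  = fs
TM_MAD  = shared
```

Each template would then be registered with ``onedatastore create system_ds.conf`` and ``onedatastore create image_ds.conf`` respectively.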

Create System Datastore
--------------------------------------------------------------------------------
