Using controller node for mv method #19

Open
krakazyabra opened this issue Apr 24, 2024 · 6 comments

@krakazyabra

krakazyabra commented Apr 24, 2024

Hi!
When the STOP action is called for a VM, OpenNebula uses the controller (front-end) as the destination (DST):

Wed Apr 24 13:59:12 2024 [Z0][VM][I]: New LCM state is SAVE_STOP
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: Successfully execute virtualization driver operation: save.
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Wed Apr 24 13:59:14 2024 [Z0][VM][I]: New LCM state is EPILOG_STOP
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: Command execution failed (exit code: 1): /var/lib/one/remotes/tm/linstor/mv onenode-prg-01:/var/lib/one//datastores/109/22/disk.0 hosting-cntl:/var/lib/one//datastores/109/22/disk.0 22 108

Here hosting-cntl is just the OpenNebula controller (oned, Sunstone, etc.). It is a virtual machine without any configured LVM pools; moreover, this host is not even in OpenNebula's host list.
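For reference, this is the failing call with the usual OpenNebula TM driver argument layout (my reading of it, for illustration only, not the actual driver code):

# Illustrative parsing of the failing call above:
src, dst, vm_id, ds_id = (
    "onenode-prg-01:/var/lib/one//datastores/109/22/disk.0",
    "hosting-cntl:/var/lib/one//datastores/109/22/disk.0",
    "22",   # VM ID
    "108",  # ID of the image datastore holding the disk
)
src_host, src_path = src.split(":", 1)  # onenode-prg-01 -> the hypervisor
dst_host, dst_path = dst.split(":", 1)  # hosting-cntl   -> the OpenNebula front-end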

Some details:

linstor -m n l
root@linstor-cntl:~# linstor -m n l
[
  [
    {
      "name": "onenode-prg-01",
      "type": "SATELLITE",
      "props": {
        "CurStltConnName": "default",
        "NodeUname": "onenode-prg-01"
      },
      "net_interfaces": [
        {
          "name": "default",
          "address": "10.249.252.10",
          "satellite_port": 3366,
          "satellite_encryption_type": "PLAIN",
          "is_active": true,
          "uuid": "0e8a0453-b155-4e23-92ce-bbaa9ea23e2a"
        }
      ],
      "connection_status": "ONLINE",
      "uuid": "39c10b95-7335-4315-9e41-cc0e028f8748",
      "storage_providers": [
        "DISKLESS",
        "LVM",
        "LVM_THIN",
        "FILE",
        "FILE_THIN",
        "REMOTE_SPDK",
        "EBS_INIT",
        "EBS_TARGET"
      ],
      "resource_layers": [
        "DRBD",
        "WRITECACHE",
        "CACHE",
        "STORAGE"
      ],
      "unsupported_providers": {
        "SPDK": [
          "IO exception occured when running 'rpc.py spdk_get_version': Cannot run program \"rpc.py\": error=2, No such file or directory"
        ],
        "STORAGE_SPACES": [
          "This tool does not exist on the Linux platform."
        ],
        "ZFS_THIN": [
          "'cat /sys/module/zfs/version' returned with exit code 1",
          "IO exception occured when running 'zfs '-?'': Cannot run program \"zfs\": error=2, No such file or directory"
        ],
        "ZFS": [
          "'cat /sys/module/zfs/version' returned with exit code 1",
          "IO exception occured when running 'zfs '-?'': Cannot run program \"zfs\": error=2, No such file or directory"
        ],
        "EXOS": [
          "IO exception occured when running 'lsscsi --version': Cannot run program \"lsscsi\": error=2, No such file or directory"
        ],
        "STORAGE_SPACES_THIN": [
          "This tool does not exist on the Linux platform."
        ]
      },
      "unsupported_layers": {
        "LUKS": [
          "IO exception occured when running 'cryptsetup --version': Cannot run program \"cryptsetup\": error=2, No such file or directory"
        ],
        "NVME": [
          "IO exception occured when running 'nvme version': Cannot run program \"nvme\": error=2, No such file or directory"
        ],
        "BCACHE": [
          "IO exception occured when running 'make-bcache -h': Cannot run program \"make-bcache\": error=2, No such file or directory"
        ]
      }
    },
    {
      "name": "onenode-prg-02",
      "type": "SATELLITE",
      "props": {
        "CurStltConnName": "default",
        "NodeUname": "onenode-prg-02"
      },
      "net_interfaces": [
        {
          "name": "default",
          "address": "10.249.252.11",
          "satellite_port": 3366,
          "satellite_encryption_type": "PLAIN",
          "is_active": true,
          "uuid": "31b19016-95ad-4a8a-ae3e-6d04e9d73a9a"
        }
      ],
      "connection_status": "ONLINE",
      "uuid": "777348da-5329-4a5b-8e79-8be0c36813a1",
      "storage_providers": [
        "DISKLESS",
        "LVM",
        "LVM_THIN",
        "FILE",
        "FILE_THIN",
        "REMOTE_SPDK",
        "EBS_INIT",
        "EBS_TARGET"
      ],
      "resource_layers": [
        "DRBD",
        "WRITECACHE",
        "CACHE",
        "STORAGE"
      ],
      "unsupported_providers": {
        "SPDK": [
          "IO exception occured when running 'rpc.py spdk_get_version': Cannot run program \"rpc.py\": error=2, No such file or directory"
        ],
        "STORAGE_SPACES": [
          "This tool does not exist on the Linux platform."
        ],
        "ZFS_THIN": [
          "'cat /sys/module/zfs/version' returned with exit code 1",
          "IO exception occured when running 'zfs '-?'': Cannot run program \"zfs\": error=2, No such file or directory"
        ],
        "ZFS": [
          "'cat /sys/module/zfs/version' returned with exit code 1",
          "IO exception occured when running 'zfs '-?'': Cannot run program \"zfs\": error=2, No such file or directory"
        ],
        "EXOS": [
          "IO exception occured when running 'lsscsi --version': Cannot run program \"lsscsi\": error=2, No such file or directory"
        ],
        "STORAGE_SPACES_THIN": [
          "This tool does not exist on the Linux platform."
        ]
      },
      "unsupported_layers": {
        "LUKS": [
          "IO exception occured when running 'cryptsetup --version': Cannot run program \"cryptsetup\": error=2, No such file or directory"
        ],
        "NVME": [
          "IO exception occured when running 'nvme version': Cannot run program \"nvme\": error=2, No such file or directory"
        ],
        "BCACHE": [
          "IO exception occured when running 'make-bcache -h': Cannot run program \"make-bcache\": error=2, No such file or directory"
        ]
      }
    },
    {
      "name": "onenode-prg-03",
      "type": "SATELLITE",
      "props": {
        "CurStltConnName": "default",
        "NodeUname": "onenode-prg-03"
      },
      "net_interfaces": [
        {
          "name": "default",
          "address": "10.249.252.12",
          "satellite_port": 3366,
          "satellite_encryption_type": "PLAIN",
          "is_active": true,
          "uuid": "5f2954be-e062-4dbb-90c7-95eadd88c911"
        }
      ],
      "connection_status": "ONLINE",
      "uuid": "6433ff3c-6e61-443a-9788-fc1cbd55b29f",
      "storage_providers": [
        "DISKLESS",
        "LVM",
        "LVM_THIN",
        "FILE",
        "FILE_THIN",
        "REMOTE_SPDK",
        "EBS_INIT",
        "EBS_TARGET"
      ],
      "resource_layers": [
        "DRBD",
        "NVME",
        "WRITECACHE",
        "CACHE",
        "STORAGE"
      ],
      "unsupported_providers": {
        "SPDK": [
          "IO exception occured when running 'rpc.py spdk_get_version': Cannot run program \"rpc.py\": error=2, No such file or directory"
        ],
        "STORAGE_SPACES": [
          "This tool does not exist on the Linux platform."
        ],
        "ZFS_THIN": [
          "'cat /sys/module/zfs/version' returned with exit code 1",
          "IO exception occured when running 'zfs '-?'': Cannot run program \"zfs\": error=2, No such file or directory"
        ],
        "ZFS": [
          "'cat /sys/module/zfs/version' returned with exit code 1",
          "IO exception occured when running 'zfs '-?'': Cannot run program \"zfs\": error=2, No such file or directory"
        ],
        "EXOS": [
          "IO exception occured when running 'lsscsi --version': Cannot run program \"lsscsi\": error=2, No such file or directory"
        ],
        "STORAGE_SPACES_THIN": [
          "This tool does not exist on the Linux platform."
        ]
      },
      "unsupported_layers": {
        "LUKS": [
          "IO exception occured when running 'cryptsetup --version': Cannot run program \"cryptsetup\": error=2, No such file or directory"
        ],
        "BCACHE": [
          "IO exception occured when running 'make-bcache -h': Cannot run program \"make-bcache\": error=2, No such file or directory"
        ]
      }
    }
  ]
]
onedatastore show 108
oneadmin@hosting-cntl:/root$ onedatastore show 108
DATASTORE 108 INFORMATION
ID             : 108
NAME           : linstor-images
USER           : oneadmin
GROUP          : oneadmin
CLUSTERS       : 0
TYPE           : IMAGE
DS_MAD         : linstor
TM_MAD         : linstor
BASE PATH      : /var/lib/one//datastores/108
DISK_TYPE      : BLOCK
STATE          : READY

DATASTORE CAPACITY
TOTAL:         : 5.2T
FREE:          : 5.2T
USED:          : 6.6G
LIMIT:         : -

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="yes"
BRIDGE_LIST="onenode-prg-01 onenode-prg-02 onenode-prg-03"
CLONE_TARGET="SELF"
CLONE_TARGET_SHARED="SELF"
CLONE_TARGET_SSH="SELF"
COMPATIBLE_SYS_DS="109"
DISK_TYPE="BLOCK"
DISK_TYPE_SHARED="BLOCK"
DISK_TYPE_SSH="BLOCK"
DS_MAD="linstor"
LINSTOR_CONTROLLERS="10.249.252.2:3370"
LINSTOR_RESOURCE_GROUP="on_image_rg"
LN_TARGET="NONE"
LN_TARGET_SHARED="NONE"
LN_TARGET_SSH="NONE"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
TM_MAD="linstor"
TM_MAD_SYSTEM="ssh,shared"
TYPE="IMAGE_DS"

IMAGES
16
onedatastore show 109
oneadmin@hosting-cntl:/root$ onedatastore show 109
DATASTORE 109 INFORMATION
ID             : 109
NAME           : linstor-system
USER           : oneadmin
GROUP          : oneadmin
CLUSTERS       : 0
TYPE           : SYSTEM
DS_MAD         : -
TM_MAD         : linstor
BASE PATH      : /var/lib/one//datastores/109
DISK_TYPE      : BLOCK
STATE          : READY

DATASTORE CAPACITY
TOTAL:         : 5.2T
FREE:          : 5.2T
USED:          : 6.6G
LIMIT:         : -

PERMISSIONS
OWNER          : um-
GROUP          : u--
OTHER          : ---

DATASTORE TEMPLATE
ALLOW_ORPHANS="yes"
BRIDGE_LIST="onenode-prg-01 onenode-prg-02 onenode-prg-03"
DISK_TYPE="BLOCK"
DS_MIGRATE="YES"
LINSTOR_CONTROLLERS="10.249.252.2:3370"
LINSTOR_RESOURCE_GROUP="on_system_rg"
RESTRICTED_DIRS="/"
SAFE_DIRS="/var/tmp"
SHARED="YES"
TM_MAD="linstor"
TYPE="SYSTEM_DS"

IMAGES
full log
oneadmin@hosting-cntl:/root$ cat /var/log/one/22.log
Wed Apr 24 13:58:52 2024 [Z0][VM][I]: New state is ACTIVE
Wed Apr 24 13:58:52 2024 [Z0][VM][I]: New LCM state is PROLOG
Wed Apr 24 13:58:59 2024 [Z0][VM][I]: New LCM state is BOOT
Wed Apr 24 13:58:59 2024 [Z0][VMM][I]: Generating deployment file: /var/lib/one/vms/22/deployment.0
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: Successfully execute transfer manager driver operation: tm_context.
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: Successfully execute network driver operation: pre.
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/mkdir -p.
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/cat - >/var/lib/one//datastores/109/22/vm.xml.
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:00 2024 [Z0][VMM][I]: Successfully execute virtualization driver operation: /bin/cat - >/var/lib/one//datastores/109/22/ds.xml.
Wed Apr 24 13:59:01 2024 [Z0][VMM][I]: XPath set is empty
Wed Apr 24 13:59:01 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:01 2024 [Z0][VMM][I]: Successfully execute virtualization driver operation: deploy.
Wed Apr 24 13:59:01 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:01 2024 [Z0][VMM][I]: Successfully execute network driver operation: post.
Wed Apr 24 13:59:01 2024 [Z0][VM][I]: New LCM state is RUNNING
Wed Apr 24 13:59:12 2024 [Z0][VM][I]: New LCM state is SAVE_STOP
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: Successfully execute virtualization driver operation: save.
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: ExitCode: 0
Wed Apr 24 13:59:14 2024 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Wed Apr 24 13:59:14 2024 [Z0][VM][I]: New LCM state is EPILOG_STOP
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: Command execution failed (exit code: 1): /var/lib/one/remotes/tm/linstor/mv onenode-prg-01:/var/lib/one//datastores/109/22/disk.0 hosting-cntl:/var/lib/one//datastores/109/22/disk.0 22 108
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: Traceback (most recent call last):
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: File "/var/lib/one/.local/lib/python3.9/site-packages/one/util.py", line 392, in run_main
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: main_func()
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: File "/var/lib/one/remotes/tm/linstor/mv", line 206, in main
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: move_none_linstor_host(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name)
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: File "/var/lib/one/remotes/tm/linstor/mv", line 91, in move_none_linstor_host
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: raise RuntimeError("Unable to copy linstor resource from {from_host} to new host {host}: {err}"
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]: RuntimeError: Unable to copy linstor resource from onenode-prg-01 to new host hosting-cntl: ERROR: bash: Command "dd" failed:
Wed Apr 24 14:00:11 2024 [Z0][TrM][I]:
Wed Apr 24 14:00:11 2024 [Z0][TrM][E]: Unable to copy linstor resource from onenode-prg-01 to new host hosting-cntl: ERROR: bash: Command "dd" failed:
Wed Apr 24 14:00:11 2024 [Z0][TrM][E]: Error executing image transfer script: Traceback (most recent call last):   File "/var/lib/one/.local/lib/python3.9/site-packages/one/util.py", line 392, in run_main     main_func()   File "/var/lib/one/remotes/tm/linstor/mv", line 206, in main     move_none_linstor_host(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name)   File "/var/lib/one/remotes/tm/linstor/mv", line 91, in move_none_linstor_host     raise RuntimeError("Unable to copy linstor resource from {from_host} to new host {host}: {err}" RuntimeError: Unable to copy linstor resource from onenode-prg-01 to new host hosting-cntl: ERROR: bash: Command "dd" failed:   ERROR: Unable to copy linstor resource from onenode-prg-01 to new host hosting-cntl: ERROR: bash: Command "dd" failed:
Wed Apr 24 14:00:11 2024 [Z0][VM][I]: New LCM state is EPILOG_STOP_FAILURE
journalctl -u opennebula
Apr 24 13:58:52 hosting-cntl clone[1547739]: INFO Entering tm clone src:hosting-cntl:OpenNebula-Image-16 dst:onenode-prg-01:/var/lib/one//datastores/109/22/disk.0
Apr 24 13:58:52 hosting-cntl clone[1547739]: INFO running shell command: onedatastore show --xml 108
Apr 24 13:58:52 hosting-cntl clone[1547739]: INFO running shell command: onevm show -x 22
Apr 24 13:58:53 hosting-cntl clone[1547739]: INFO Cloning from resource 'OpenNebula-Image-16' to 'OpenNebula-Image-16-vm22-disk0'.
Apr 24 13:58:59 hosting-cntl clone[1547739]: INFO ssh 'onenode-prg-01' cmd: mkdir -p /var/lib/one/datastores/109/22 && ln -fs /dev/drbd1005 /var/lib/one/datastores/109/22/disk.0
Apr 24 13:58:59 hosting-cntl clone[1547739]: INFO Exiting tm clone
Apr 24 13:58:59 hosting-cntl context[1547779]: INFO Entering tm/context on onenode-prg-01:/var/lib/one//datastores/109/22/disk.1.
Apr 24 13:58:59 hosting-cntl context[1547779]: INFO running shell command: onedatastore show --xml 109
Apr 24 13:59:00 hosting-cntl context[1547779]: INFO running shell command: onevm show -x 22
Apr 24 13:59:00 hosting-cntl context[1547779]: INFO tm/ssh/context finished with 0.
Apr 24 13:59:14 hosting-cntl mv[1547871]: INFO Entering tm mv, from='onenode-prg-01:/var/lib/one//datastores/109/22/disk.0' to='hosting-cntl:/var/lib/one//datastores/109/22/disk.0'
Apr 24 13:59:14 hosting-cntl mv[1547871]: INFO running shell command: onedatastore show --xml 108
Apr 24 13:59:14 hosting-cntl mv[1547871]: INFO running shell command: onevm show -x 22
Apr 24 13:59:14 hosting-cntl mv[1547871]: INFO OpenNebula-Image-16-vm22-disk0 is a non-persistent OS or DATABLOCK image
Apr 24 13:59:14 hosting-cntl mv[1547871]: INFO None Linstor dst node: hosting-cntl
Apr 24 13:59:14 hosting-cntl mv[1547871]: INFO running shell command: bash -c source /var/lib/one/remotes//scripts_common.sh && ssh_make_path hosting-cntl /var/lib/one/datastores/109/22
Apr 24 13:59:15 hosting-cntl mv[1547871]: INFO ssh 'onenode-prg-01' cmd: set -e -o pipefail && dd if=/dev/drbd1005 bs=4M iflag=direct conv=sparse | ssh hosting-cntl -- 'dd of=/var/lib/one/datastores/109/22/disk.0 bs=4M conv=sparse'
Apr 24 14:00:11 hosting-cntl mv[1547871]: ERROR Traceback (most recent call last):
                                            File "/var/lib/one/.local/lib/python3.9/site-packages/one/util.py", line 392, in run_main
                                              main_func()
                                            File "/var/lib/one/remotes/tm/linstor/mv", line 206, in main
                                              move_none_linstor_host(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name)
                                            File "/var/lib/one/remotes/tm/linstor/mv", line 91, in move_none_linstor_host
                                              raise RuntimeError("Unable to copy linstor resource from {from_host} to new host {host}: {err}"
                                          RuntimeError: Unable to copy linstor resource from onenode-prg-01 to new host hosting-cntl: ERROR: bash: Command "dd" failed:
@rp-
Collaborator

rp- commented Apr 26, 2024

Well yes, this is the call the plugin gets from OpenNebula:
Apr 24 13:59:14 hosting-cntl mv[1547871]: INFO Entering tm mv, from='onenode-prg-01:/var/lib/one//datastores/109/22/disk.0' to='hosting-cntl:/var/lib/one//datastores/109/22/disk.0'
So it just tells me it wants to move this disk to this host.

And if this management node is part of the LINSTOR cluster, then AFAIK it would at most create a diskless resource and keep everything else as it is.
If not, I try to copy the disk to that host, and I was convinced that this is how things are supposed to work in OpenNebula.
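Roughly, the decision in tm/linstor/mv looks like this (a simplified sketch, not the actual driver code; only the two function names are taken from your traceback, everything else is illustrative):

# Simplified sketch of the branching in /var/lib/one/remotes/tm/linstor/mv.
def move_linstor_resource(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name):
    # dst_host is a LINSTOR node: at most a diskless assignment is created
    # there; the data itself stays on the storage nodes.
    ...

def move_none_linstor_host(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name):
    # dst_host is NOT a LINSTOR node: copy the device contents over ssh,
    # roughly "dd if=/dev/drbdXXXX ... | ssh dst_host 'dd of=... conv=sparse'",
    # which is the "dd" command that failed in your log.
    ...

def mv(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name, linstor_nodes):
    # linstor_nodes: node names known to the LINSTOR controller (hypothetical parameter)
    if dst_host in linstor_nodes:
        move_linstor_resource(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name)
    else:
        move_none_linstor_host(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name)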

@krakazyabra
Author

Hello!
Am I right that I need to create and configure a thin pool on the controller and add it to LINSTOR? And then OpenNebula will be able to perform the mv?

@rp-
Collaborator

rp- commented Apr 26, 2024

I think it should be enough if it is just a diskless node, so no extra storage pool is needed.
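For illustration, once the front-end is registered as a satellite, that code path essentially just asks LINSTOR to make the resource available on the node, roughly like this (a sketch using the python-linstor Resource API; the resource name and controller address are taken from your logs and datastore template):

import linstor

# Make the resource available on the front-end; if the node has no suitable
# local storage pool, this should end up as a diskless assignment.
res = linstor.Resource("OpenNebula-Image-16-vm22-disk0",
                       uri="linstor://10.249.252.2:3370")
res.activate("hosting-cntl")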

@krakazyabra
Author

Got it, thank you.

@krakazyabra
Author

Hello again :)
This approach doesn't work.

On the LINSTOR controller:

linstor node create hosting-cntl 10.249.252.4
Output
SUCCESS:
Description:
    New node 'hosting-cntl' registered.
Details:
    Node 'hosting-cntl' UUID is: 3a2f900c-4bf5-4f00-a28e-f8f47a0da2ed
SUCCESS:
Description:
    Node 'hosting-cntl' authenticated
Details:
    Supported storage providers: [diskless, lvm, lvm_thin, file, file_thin, remote_spdk, ebs_init, ebs_target]
    Supported resource layers  : [writecache, cache, storage]
    Unsupported storage providers:
        ZFS: 'cat /sys/module/zfs/version' returned with exit code 1
             IO exception occured when running 'zfs '-?'': Cannot run program "zfs": error=2, No such file or directory
        ZFS_THIN: 'cat /sys/module/zfs/version' returned with exit code 1
                  IO exception occured when running 'zfs '-?'': Cannot run program "zfs": error=2, No such file or directory
        SPDK: IO exception occured when running 'rpc.py spdk_get_version': Cannot run program "rpc.py": error=2, No such file or directory
        EXOS: IO exception occured when running 'lsscsi --version': Cannot run program "lsscsi": error=2, No such file or directory
              '/bin/bash -c 'cat /sys/class/sas_phy/*/sas_address'' returned with exit code 1
              '/bin/bash -c 'cat /sys/class/sas_device/end_device-*/sas_address'' returned with exit code 1
        STORAGE_SPACES: This tool does not exist on the Linux platform.
        STORAGE_SPACES_THIN: This tool does not exist on the Linux platform.

    Unsupported resource layers:
        DRBD: IOException occurred when checking the 'drbdadm --version'
              IOException occurred when checking the 'drbdadm --version'
        LUKS: IO exception occured when running 'cryptsetup --version': Cannot run program "cryptsetup": error=2, No such file or directory
        NVME: IO exception occured when running 'nvme version': Cannot run program "nvme": error=2, No such file or directory
        BCACHE: IO exception occured when running 'make-bcache -h': Cannot run program "make-bcache": error=2, No such file or directory

Then:

linstor storage-pool create lvmthin hosting-cntl data drbdpool/thinpool
Output
SUCCESS:
    Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
    New storage pool 'data' on node 'hosting-cntl' registered.
Details:
    Storage pool 'data' on node 'hosting-cntl' UUID is: 9705ff5d-b1e6-410e-8e38-d9a1b9d59b82
SUCCESS:
    (hosting-cntl) Changes applied to storage pool 'data'
root@linstor-cntl:~# linstor n l
╭──────────────────────────────────────────────────────────────────╮
┊ Node           ┊ NodeType  ┊ Addresses                  ┊ State  ┊
╞══════════════════════════════════════════════════════════════════╡
┊ hosting-cntl   ┊ SATELLITE ┊ 10.249.252.4:3366 (PLAIN)  ┊ Online ┊
┊ onenode-prg-01 ┊ SATELLITE ┊ 10.249.252.10:3366 (PLAIN) ┊ Online ┊
┊ onenode-prg-02 ┊ SATELLITE ┊ 10.249.252.11:3366 (PLAIN) ┊ Online ┊
┊ onenode-prg-03 ┊ SATELLITE ┊ 10.249.252.12:3366 (PLAIN) ┊ Online ┊
╰──────────────────────────────────────────────────────────────────╯

root@linstor-cntl:~# linstor  sp l
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node           ┊ Driver   ┊ PoolName          ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName                          ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ hosting-cntl   ┊ DISKLESS ┊                   ┊              ┊               ┊ False        ┊ Ok    ┊ hosting-cntl;DfltDisklessStorPool   ┊
┊ DfltDisklessStorPool ┊ onenode-prg-01 ┊ DISKLESS ┊                   ┊              ┊               ┊ False        ┊ Ok    ┊ onenode-prg-01;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ onenode-prg-02 ┊ DISKLESS ┊                   ┊              ┊               ┊ False        ┊ Ok    ┊ onenode-prg-02;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ onenode-prg-03 ┊ DISKLESS ┊                   ┊              ┊               ┊ False        ┊ Ok    ┊ onenode-prg-03;DfltDisklessStorPool ┊
┊ data                 ┊ hosting-cntl   ┊ LVM_THIN ┊ drbdpool/thinpool ┊    29.93 GiB ┊     29.93 GiB ┊ True         ┊ Ok    ┊ hosting-cntl;data                   ┊
┊ data                 ┊ onenode-prg-01 ┊ LVM_THIN ┊ drbdpool/thinpool ┊     3.49 TiB ┊      3.49 TiB ┊ True         ┊ Ok    ┊ onenode-prg-01;data                 ┊
┊ data                 ┊ onenode-prg-02 ┊ LVM_THIN ┊ drbdpool/thinpool ┊     3.49 TiB ┊      3.49 TiB ┊ True         ┊ Ok    ┊ onenode-prg-02;data                 ┊
┊ data                 ┊ onenode-prg-03 ┊ LVM_THIN ┊ drbdpool/thinpool ┊     3.49 TiB ┊      3.49 TiB ┊ True         ┊ Ok    ┊ onenode-prg-03;data                 ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

But STOP (the "save" path) still fails:

Fri Apr 26 13:18:13 2024 [Z0][VM][I]: New LCM state is SAVE_STOP
Fri Apr 26 13:18:15 2024 [Z0][VMM][I]: ExitCode: 0
Fri Apr 26 13:18:15 2024 [Z0][VMM][I]: Successfully execute virtualization driver operation: save.
Fri Apr 26 13:18:15 2024 [Z0][VMM][I]: ExitCode: 0
Fri Apr 26 13:18:15 2024 [Z0][VMM][I]: Successfully execute network driver operation: clean.
Fri Apr 26 13:18:15 2024 [Z0][VM][I]: New LCM state is EPILOG_STOP
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: Command execution failed (exit code: 1): /var/lib/one/remotes/tm/linstor/mv onenode-prg-01:/var/lib/one//datastores/109/36/disk.0 hosting-cntl:/var/lib/one//datastores/109/36/disk.0 36 108
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: Traceback (most recent call last):
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: File "/var/lib/one/.local/lib/python3.9/site-packages/one/util.py", line 392, in run_main
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: main_func()
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: File "/var/lib/one/remotes/tm/linstor/mv", line 203, in main
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: move_linstor_resource(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name)
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: File "/var/lib/one/remotes/tm/linstor/mv", line 51, in move_linstor_resource
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: res.activate(dst_host)
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: File "/usr/lib/python3.9/dist-packages/linstor/resource.py", line 212, in wrapper
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: ret = f(self, *args, **kwargs)
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: File "/usr/lib/python3.9/dist-packages/linstor/resource.py", line 521, in activate
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: raise linstor.LinstorError('Could not make resource {} available on node {}: {}'
Fri Apr 26 13:18:16 2024 [Z0][TrM][I]: linstor.errors.LinstorError: Error: Could not make resource OpenNebula-Image-16-vm36-disk0(OpenNebula-Image-16-vm36-disk0) available on node hosting-cntl: ERRO:Autoplacer could not find diskless stor pool on node hosting-cntl matching resource-groups autoplace-settings
Fri Apr 26 13:18:16 2024 [Z0][TrM][E]: Error: Could not make resource OpenNebula-Image-16-vm36-disk0(OpenNebula-Image-16-vm36-disk0) available on node hosting-cntl: ERRO:Autoplacer could not find diskless stor pool on node hosting-cntl matching resource-groups autoplace-settings
Fri Apr 26 13:18:16 2024 [Z0][TrM][E]: Error executing image transfer script: Traceback (most recent call last):   File "/var/lib/one/.local/lib/python3.9/site-packages/one/util.py", line 392, in run_main     main_func()   File "/var/lib/one/remotes/tm/linstor/mv", line 203, in main     move_linstor_resource(lin, src_host, src_path, dst_host, dst_path, dst_dir, res_name)   File "/var/lib/one/remotes/tm/linstor/mv", line 51, in move_linstor_resource     res.activate(dst_host)   File "/usr/lib/python3.9/dist-packages/linstor/resource.py", line 212, in wrapper     ret = f(self, *args, **kwargs)   File "/usr/lib/python3.9/dist-packages/linstor/resource.py", line 521, in activate     raise linstor.LinstorError('Could not make resource {} available on node {}: {}' linstor.errors.LinstorError: Error: Could not make resource OpenNebula-Image-16-vm36-disk0(OpenNebula-Image-16-vm36-disk0) available on node hosting-cntl: ERRO:Autoplacer could not find diskless stor pool on node hosting-cntl matching resource-groups autoplace-settings ERROR: Error: Could not make resource OpenNebula-Image-16-vm36-disk0(OpenNebula-Image-16-vm36-disk0) available on node hosting-cntl: ERRO:Autoplacer could not find diskless stor pool on node hosting-cntl matching resource-groups autoplace-settings
Fri Apr 26 13:18:16 2024 [Z0][VM][I]: New LCM state is EPILOG_STOP_FAILURE

@rp-
Collaborator

rp- commented Apr 28, 2024

linstor rg l ?
