
Un special-case system drive btrfs-in-partition treatment #2824

Closed
4 tasks done
phillxnet opened this issue Mar 29, 2024 · 2 comments

phillxnet commented Mar 29, 2024

In the olden days, pre:

we managed only whole-disk btrfs, and special-cased our system drive as it was partitioned. This is no longer necessary and represents an inconsistency that adds complexity, bringing with it a maintenance burden.

It is proposed that we leverage the work done in the interim years on our data drives and apply it similarly to our system drive (drives, in the future), with the aim of removing legacy special treatment that was originally intended solely to present a partitioned system pool member as if it were a whole disk: the only entity we understood at that time.

I.e. our low-level internal representation, from system.osi.scan_disks(), of the system disk is presented below.
The following is older real-system test data from when our OS pool was labelled rockstor_rockstor, but is otherwise similar.
Test reference:

def test_scan_disks_btrfs_in_partition(self):
    """
    Test btrfs in partition on an otherwise generic install. System disk sda;
    data disk (for btrfs in partition) virtio with serial "serial-1",
    prepared as follows with regard to partitioning/formatting:

                Disk(
                    name="/dev/sda3",
                    model="QEMU HARDDISK",
                    serial="QM00005",
                    size=7025459,
                    transport="sata",
                    vendor="ATA",
                    hctl="2:0:0:0",
                    type="part",
                    fstype="btrfs",
                    label="rockstor_rockstor",
                    uuid="355f53a4-24e1-465e-95f3-7c422898f542",
                    parted=True,
                    root=True,
                    partitions={},
                )

Whereas the equivalent btrfs-in-partition data pool member would be represented as follows:

                # Note partitions entry within vda, consistent with cli prep.
                Disk(
                    name="/dev/vda",
                    model=None,
                    serial="serial-1",
                    size=4194304,
                    transport=None,
                    vendor="0x1af4",
                    hctl=None,
                    type="disk",
                    fstype="btrfs",
                    label="btrfs-in-partition",
                    uuid="55284332-af66-4ca0-9647-99d9afbe0ec5",
                    parted=True,
                    root=False,
                    partitions={"/dev/vda1": "vfat", "/dev/vda2": "btrfs"},
                )

I.e. normalise on our btrfs-in-partition internal representation and remove our now-inconsistent special treatment of the system drive, so that all drives are referenced, at the scan_disks() level, by their base canonical name rather than a partition reference (sda, not sda3, in the above), and so that the system disk, as per all data disks now, carries a partitions reference: which in turn would also normalise the parted=True reference as pertaining to the canonically named "Disk" entry.

We still have the potentially inconsistent fstype=... element when dealing with partitioned devices, but this is consistent with our single-btrfs-partition-per-device specification/limit.
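Under that proposal, the system disk's scan_disks() entry would mirror the data-disk example above. A minimal sketch follows; the trimmed Disk definition and the exact field values are illustrative assumptions, not output from the actual code:

```python
from collections import namedtuple

# Trimmed, illustrative Disk namedtuple (the real one has more fields).
Disk = namedtuple(
    "Disk",
    ["name", "serial", "type", "fstype", "label", "parted", "root", "partitions"],
)

# Hypothetical normalised system-disk entry: canonical whole-disk name
# (sda, not sda3), type "disk", with its btrfs partition recorded in the
# partitions mapping, just like a btrfs-in-partition data-pool member.
system_disk = Disk(
    name="/dev/sda",
    serial="QM00005",
    type="disk",
    fstype="btrfs",
    label="rockstor_rockstor",
    parted=True,
    root=True,
    partitions={"/dev/sda3": "btrfs"},
)
```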

The primary aim here is to simplify our scan_disks() and normalise its output across both system and data drives. This in turn should ease ongoing maintenance (scan_disks() has excessive cyclomatic complexity) and help accommodate future enhancements, such as btrfs multi-device "ROOT" pool capability. It should also help normalise our pool management at all higher levels in the code: given the OS pool member(s) will be akin to our current data pool members.

Checklist of proposed approach:

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 29, 2024
Includes:
- Modernisation re type hints in scan_disks().
- Programmatically establish `lsblk` field derived Disk namedtuple fields.
- Refactoring for readability.
- Black reformatting.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 30, 2024
Replace by-hand lsblk output line parser with dictionary comprehension.
Successive work towards modernisation/simplification of scan_disks().

Employ defaults in Disk named tuple to ease creation using lsblk info,
as these defaults pertain to our extra flag info.
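The two changes above (dictionary-comprehension parsing of lsblk output, and Disk namedtuple defaults for our extra flag fields) can be sketched as follows; the field set and regex here are illustrative assumptions, not the actual rockstor-core implementation:

```python
import re
from collections import namedtuple

# Illustrative, trimmed Disk namedtuple: the trailing "extra flag" fields
# carry defaults (applied right-to-left), so a Disk can be built directly
# from the lsblk-derived fields alone.
Disk = namedtuple(
    "Disk",
    ["name", "fstype", "label", "parted", "root", "partitions"],
    defaults=[False, False, None],  # parted, root, partitions
)

def parse_lsblk_line(line: str) -> dict:
    """Parse one `lsblk -P` line of KEY="value" pairs into a dict via a
    dictionary comprehension, lowercasing keys to match Disk fields."""
    return {
        key.lower(): value
        for key, value in re.findall(r'(\w+)="([^"]*)"', line)
    }

line = 'NAME="/dev/sda" FSTYPE="btrfs" LABEL="rockstor_rockstor"'
disk = Disk(**parse_lsblk_line(line))
```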
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 30, 2024
By normalising on lowercase we can ease the transition to using
our Disk namedtuple as the working copy: this helps readability
and highlights where we write (via Disk._replace(root=True)). It
should also simplify the build of our final return value.

We also have a large number of tests that use the existing lowercase
Disk fields.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 30, 2024
Explicit, rather than programmatic, namedtuple definition works
far better for PyCharm editor assistance. Otherwise it falls
back to generic tuple hints/fencing.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Mar 30, 2024
By working directly with the Disk namedtuple we gain readability
and the advantages of an immutable type: i.e. it is more explicit
where writes/changes happen, due to the requirement to use
._replace(). We also require fewer type transitions.
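As a minimal illustration of that immutability point (Disk fields trimmed here for brevity):

```python
from collections import namedtuple

# Trimmed, illustrative Disk namedtuple.
Disk = namedtuple("Disk", ["name", "root"])

d = Disk(name="/dev/sda", root=False)
# namedtuples are immutable, so every change is an explicit, grep-able
# ._replace() call that returns a new instance rather than mutating in place.
d = d._replace(root=True)
```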

phillxnet commented Mar 31, 2024

A 'special case' element of how we currently process the root drive within scan_disks(), one that significantly complicates our processing, is as follows.

  • For current btrfs-in-partition data (non-system, root=False) drives we hold the parent drive, e.g. sdg, as canonical; it already has, by way of lsblk, most of the info we require: model and serial, for example. Upon processing any of its partitions we back-port each partition's fstype and name to the parent block device.

  • For the current system btrfs-in-partition we do almost the opposite. We hold the partition (the "/" mount), i.e. sda3, as canonical, and have to forward-port (parent devices are listed first in lsblk output) some of the info we require to this partition.

There are caveats here, however:

  • Data pool members default to, and are strongly encouraged to be, whole-disk: unpartitioned.
  • The currently singular system pool member is almost always partitioned, as this is a requirement of current boot arrangements, courtesy of both legacy and UEFI boot hardware.

Note that fstype, uuid info, etc. are held by the direct container (the partition, if btrfs-in-partition), not the parent. However, we still gain simplification if we treat all drives similarly.
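The single migration direction proposed above, back-porting a partition's info to its canonical parent, can be sketched as follows; the helper name and trimmed Disk fields are hypothetical, not the actual rockstor-core API:

```python
from collections import namedtuple

# Trimmed, illustrative Disk namedtuple.
Disk = namedtuple("Disk", ["name", "type", "fstype", "label", "partitions"])

def backport_partition_info(parent: Disk, partition: Disk) -> Disk:
    """Sketch: fold a btrfs partition's fstype/label into its canonical
    parent Disk and record the partition in the parent's partitions map.
    Under the proposal this one direction (partition -> parent) applies
    to system and data drives alike."""
    partitions = dict(parent.partitions)  # copy: Disk is immutable
    partitions[partition.name] = partition.fstype
    return parent._replace(
        fstype=partition.fstype,
        label=partition.label,
        partitions=partitions,
    )

sda = Disk(name="/dev/sda", type="disk", fstype=None, label=None, partitions={})
sda3 = Disk(name="/dev/sda3", type="part", fstype="btrfs",
            label="rockstor_rockstor", partitions={})
sda = backport_partition_info(sda, sda3)
```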

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 3, 2024
@phillxnet phillxnet added this to the 5.1.X-X Stable release milestone Apr 3, 2024
@phillxnet phillxnet self-assigned this Apr 4, 2024
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 5, 2024
Removing legacy system drive special treatment that was originally
intended solely to present a partitioned system pool member as if
it was a whole disk: the only entity we understood prior to our data
pool btrfs-in-partition capability. This normalises/simplifies our
lowest level scan_disks() procedure.

Includes:
Update all scan_disks() unittests to reflect:
- The proposed new behaviour from un-special-casing the system drive.
- Our more recent 5 GB default (was 1 GB): where existing test data allows.
- Fix recent test data/results re lsblk -p option (dev path).
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 6, 2024
…ockstor#2824

Here we drop the auto-creation of our prior 'special case' OS Pool.
Instead, by adding an auto redirect role (if "/" is in a partition), we
can manage this pool, including its import, by our regular data pool
means.

## Includes
- Update disks_table.jst to enable info and import icons for OS drive.
- Simplify/speed-up _update_disk_state()
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 10, 2024
## Includes
- Initial introduction to Pool as (btrfs volume) &
Share (btrfs subvolume) as portion of Pool.
- Web-UI contextual link to Pool import via Disk member.
- Explicit use of 'Managed' re unimported pools.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 10, 2024
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 15, 2024
…ckstor#2824

Clarifies our use of DB Disk.btrfs_uuid field.
Minor refactoring to improve readability re variable names.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 16, 2024
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 16, 2024
…ockstor#2832

This role is used internally by the Pool model, and btrfs.py
shares_info() to account for boot-to-snapshot system Pool mount
accommodations.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 17, 2024
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 17, 2024
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 18, 2024
Minor refactoring re fstrings, and removing a transient variable.
phillxnet added a commit to phillxnet/rockstor-core that referenced this issue Apr 18, 2024
Removing legacy system drive special treatment that was originally
intended solely to present a partitioned system pool member as if
it was a whole disk: the only entity we understood prior to our data
pool btrfs-in-partition capability. This normalises/simplifies our
lowest level scan_disks() procedure. Predominantly as we now only
migrate info in one direction: from partition to parent, rather than
the prior system drive only special-case of from parent to partition.

Includes:
Updating all scan_disks() unittests to reflect:
- The proposed new behaviour from un special-casing of the system drive.
- Our more recent 5 GB default (was 1 GB): where existing test data allows.
- Fix recent test data/results re lsblk -p option (dev path).
And:
- Major refactoring to improve scan_disks() and _update_disk_state()
readability.
- Adapt _update_disk_state() re un-special-case OS btrfs-in-partition.
Here we drop the auto-creation of our prior 'special case' OS Pool.
Instead, by adding an auto redirect role (if "/" is in a partition), we
can manage this pool, including its import, by our regular data pool
means.
- Update disks_table.jst to enable info and import icons for OS drive.
- Simplify/speed-up _update_disk_state()
- Adapt Pools & Shares overview pages to the case of no managed Pools.
- Initial introduction (no Pools) now explains Pool as btrfs volume, &
Share as btrfs subvolume. Where Share = portion of a Pool.
- Web-UI contextual link to Pool import via Disk member.
- Explicit Web-UI use of 'Un Managed' to reference an unimported pool.
- Comments to clarify our use of DB Disk.btrfs_uuid field.
- Add TODO re attached.uuid use for Pool member attribution.
- Ensure OS Pool import sets in-progress Pool.role="root"
This role is used internally by the Pool model, and btrfs.py
shares_info() to account for boot-to-snapshot system Pool mount
accommodations.

Incidental functional change:
- Enable NVMe SMART, as now supported by smartmontools.
phillxnet added a commit that referenced this issue Apr 18, 2024
…rive-btrfs-in-partition-treatment

Un special-case system drive btrfs-in-partition treatment #2824
phillxnet commented:
Closing as:
Fixed by #2835
