overflow of disk.role db field #1709

Closed
phillxnet opened this issue May 6, 2017 · 6 comments

phillxnet (Member) commented May 6, 2017

Thanks to forum members wellman and scanlom for reporting and confirming this issue. In the linked forum post there is evidence of a field in our Disk model being insufficiently long:

DataError: value too long for type character varying(256)

Given that the only field in our Disk model with a 256 character limit is Disk.role:

    role = models.CharField(max_length=256, null=True)

it is the most obvious candidate.

Currently the suspicion is a combination of many partitions on a device with a long by-id name, resulting in an overly lengthy 'partitions' role value.

Linking to the forum thread for context:
https://forum.rockstor.com/t/drive-discovery/3053
(ongoing discussion in thread to confirm the cause of the db field overflow)

phillxnet (Member Author) commented:

From the above referenced forum thread we have confirmation of 8 partitions on a single drive; upon removal of those partitions, normal function returned. Thanks to scanlom for confirming this.
The output from the requested command:

ls -la /dev/disk/by-id/

results in a mock-up role field, containing just the obligatory 'partitions' role, of:

{"partitions": {"ata-TOSHIBA_MK6461GSY_31DBT7P0T-part1": "ext4", "ata-TOSHIBA_MK6461GSY_31DBT7P0T-part2": "ext4", "ata-TOSHIBA_MK6461GSY_31DBT7P0T-part3": "ext4", "ata-TOSHIBA_MK6461GSY_31DBT7P0T-part4": "ext4", "ata-TOSHIBA_MK6461GSY_31DBT7P0T-part5": "ext4", "ata-TOSHIBA_MK6461GSY_31DBT7P0T-part6": "ext4", "ata-TOSHIBA_MK6461GSY_31DBT7P0T-part7": "ext4", "ata-TOSHIBA_MK6461GSY_31DBT7P0T-part8": "ext4"}}

just including the partitions role gives 408 characters: almost double the current 256 char limit. That assumes a 37 char dev name, 4 chars per fstype, and, in this case, 8 partitions.
The failure point is the 5th partition, as the first 4 partitions in the above example already consume around 213 chars.
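
These figures are straightforward to confirm. A minimal sketch, assuming the stored role is serialized with json.dumps' default separators (which match the format quoted above); the 212 here vs the 213 hand count differs only by a trailing ', ':

    import json

    # Hypothetical reconstruction of the reported device: a 37 char by-id
    # name (the "-partN" suffix included) with an "ext4" fstype per partition.
    name = "ata-TOSHIBA_MK6461GSY_31DBT7P0T"

    for count in range(1, 9):
        parts = {"%s-part%d" % (name, n): "ext4"
                 for n in range(1, count + 1)}
        length = len(json.dumps({"partitions": parts}))
        print(count, length, "overflows 256" if length > 256 else "fits")

    # 4 partitions serialize to 212 chars; the 5th takes the total to 261,
    # past the 256 char limit; all 8 give the 408 chars quoted above.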

I have recorded a 50 char name (including the partition suffix), i.e.:
"usb-SanDisk_Extreme_AA011219150433452149-0:0-part3"

So per disk we have, e.g.:
'"partitions": ' = 14 chars.
'{{}}' = 4 chars for the outer and inner JSON delimiters.
And per partition we have, e.g.:
'"by-id name"' = 52 chars, from the above long example and including the quotes.
': "ext4", ' = 10 chars (mostly), e.g. the 4 char "ext4" fstype value against each partition.

Total chars = 18 + part-count * (52 + 10)
(for part counts < 10)
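
Expressed as a quick helper (a sketch of the above accounting; the per-partition ', ' means the estimate also leaves room for a following role):

    def role_chars(part_count, name_len=50, fstype_len=4):
        # 18 chars of fixed overhead ('"partitions": ' plus the four braces),
        # then per partition a quoted by-id name (name_len + 2) followed by
        # ': "<fstype>", ' (fstype_len + 6).
        return 18 + part_count * (name_len + 2 + fstype_len + 6)

    print(role_chars(8))        # 514 chars for 8 partitions at this name length
    print(role_chars(15) + 66)  # 1014: 15 partitions plus a 66 char redirect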

As such I propose that we quadruple the length of this field to 1024 characters. Directly after install we may see an 8 partition device with a long by-id name, which would give around 514 chars (so merely doubling is not enough), and a user might then also wish to add a redirect role:

'"redirect": "usb-SanDisk_Extreme_AA011219150433452149-0:0-part3", ' = 66 characters more.

Hence the suggestion that we should be good at 1024 characters.

This would mean we are good for up to 15 partitions plus a redirect role, given the above assumptions on dev name length (50 chars plus quotes) and an ext4 fstype: 18 + 15 * 62 + 66 = 1014 chars. This is an unlikely configuration, and such a drive would likely shortly thereafter be subject to a complete wipe, but it would at least facilitate such external drive use were it to be required. Although that may very well require an additional (as yet unimplemented) role.
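
For concreteness, a minimal sketch of the proposed change and the Django migration it would entail (the app label and migration dependency below are placeholders, not the merged diff):

    # In the Disk model, quadruple the role field's max_length:
    #     role = models.CharField(max_length=1024, null=True)
    #
    # Hypothetical migration, as makemigrations would generate it:
    from django.db import migrations, models

    class Migration(migrations.Migration):
        dependencies = [("storageadmin", "00XX_previous_migration")]
        operations = [
            migrations.AlterField(
                model_name="disk",
                name="role",
                field=models.CharField(max_length=1024, null=True),
            ),
        ]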

Our current default of 256 chars, with the given long dev name, is good for only 2 partitions with a redirect role, and 3 partitions without one. This is clearly insufficient and an oversight on my part in pr #1622, specifically commit:
25396a6
and more specifically its extension:
bf0580f

We have however had only 2 reports in over 5 months, so this is not a common overflow, but the field would be well worth extending given the above exposition.

The user workaround, until we extend this model field, is to wipe the problematic drives (via the command line) and refresh the page, thereby removing the cause of the overflow. It is unlikely in normal Rockstor use (given that no partitions is the preferred configuration) that this role will overflow thereafter.

phillxnet (Member Author) commented:

Another, as yet theoretical, instance of db role field overflow would be that of an "encrypt my data" install, which results in an encrypted system disk: hence an open LUKS volume that is also partitioned. This arrangement is proposed as a supported configuration (for the system disk only) in the currently open issue #550, to account for default encrypted installs.

An example of the resulting role field in this case, for a system disk with the longest so far observed device name, would be:

{"LUKS": {"crypttab": "none", "uuid": "3ecc3899-9563-4b35-9f59-49371aa00112", "keyfileExists": false, "unlocked": true}, "partitions": {"usb-SanDisk_Extreme_AA011219150433452149-0:0-part3": "crypto_LUKS"}}

This results in a 206 character role field, leaving 256 - 206 = 50 characters of headroom. However, a non-standard partition arrangement or a custom keyfile arrangement could result in a disk.role field overflow.
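
A quick check of that headroom figure (a sketch, again assuming json.dumps-style serialization of the role dict):

    import json

    # The LUKS system disk role quoted above, rebuilt as a dict.
    luks_role = {
        "LUKS": {"crypttab": "none",
                 "uuid": "3ecc3899-9563-4b35-9f59-49371aa00112",
                 "keyfileExists": False, "unlocked": True},
        "partitions": {
            "usb-SanDisk_Extreme_AA011219150433452149-0:0-part3":
                "crypto_LUKS"},
    }
    print(len(json.dumps(luks_role)))  # ~206 of the 256 available chars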

A more likely scenario for disk.role overflow is that of encountering an existing partitioned open LUKS volume (not supported), which would immediately exceed the 256 character limit. So although this configuration is not supported, it is not an unlikely encounter on a fresh install.

phillxnet (Member Author) commented May 23, 2017

In the above linked issue #1715 (now closed as a duplicate), GitHub user @tomvancutsem also experienced this db field overflow on prior-use (partitioned) disks. As per the originally referenced forum post, wiping these prior-use partitions resolved the issue. Noting here as confirmation of the current workaround; and thanks to @tomvancutsem for their confirmation and feedback.

phillxnet added a commit to phillxnet/rockstor-core that referenced this issue May 23, 2017
As this field's use has expanded it has now become
insufficient in some reported instances so an increase
in the max_length is required.
phillxnet (Member Author) commented:

Please also update the following forum thread, and forum user HBDK, with this issue's resolution:
https://forum.rockstor.com/t/storageadmin-util-44-error/3289

phillxnet (Member Author) commented:

Thanks to forum member HBDK, in the following forum thread, for initially reporting and then assisting with diagnosing an additional trigger for this db field overflow, and for supplying the longest so far seen device name, at 51 characters: one char longer than the previously referenced 'worst case' device:
"ata-Corsair_Force_LS_SSD_16088024000104781293-part3"

With mdraid system disk installs, such as in:
http://rockstor.com/docs/mdraid-mirror/boot_drive_howto.html
each backing device has the usual 3 partitions; however, as they are all identified as FSTYPE="linux_raid_member", the swap partition is not excluded. In my prior tests I had assumed the exclusion was due to the swap fstype, but in fact it was down to a < 1GB size. Forum member HBDK's supplied info brought attention to this oversight. In short, the raid partitions on the backing device are all of type "linux_raid_member" and so are all included, along with the additional role of mdraid.

The resulting role field would then be (using HBDK's info):

{"mdraid": "linux_raid_member", "partitions": {"ata-Corsair_Force_LS_SSD_16088024000104781293-part1": "linux_raid_member", "ata-Corsair_Force_LS_SSD_16088024000104781293-part2": "linux_raid_member", "ata-Corsair_Force_LS_SSD_16088024000104781293-part3": "linux_raid_member"}}

representing a 275 char requirement.
Hence the reported breakage.
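
The same json.dumps check applied to HBDK's reported layout confirms the overflow (a sketch):

    import json

    name = "ata-Corsair_Force_LS_SSD_16088024000104781293"
    role = {"mdraid": "linux_raid_member",
            "partitions": {"%s-part%d" % (name, n): "linux_raid_member"
                           for n in range(1, 4)}}
    print(len(json.dumps(role)))  # 275 chars: over the 256 limit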

Referenced forum thread:
https://forum.rockstor.com/t/storageadmin-util-44-error/3289

phillxnet (Member Author) commented:

@schakrava Given that we have now had a few reports of breakage due to this field length overflow, I suggest that the associated pr be considered for merge at the earliest convenience, especially given the additional pressure that #1716 places on this field. It is still the case that all reports are semi-exceptional, i.e. many partitions on prior-use data drives, but the mdraid install just detailed above is a far more difficult problem to work around on client installs.
Thanks.

schakrava added a commit that referenced this issue May 30, 2017
…_field

increase max_length of disk.role db field. Fixes #1709
@schakrava schakrava added the bug label May 31, 2017
@schakrava schakrava added this to the Point Bonita milestone May 31, 2017