
Add virtio_scsi disk controller for QEMU virtual machines #3045

Closed
tommie opened this issue Apr 22, 2023 · 7 comments · Fixed by #3047
Assignees
Labels
area/metal Bare metal support status/in-progress This issue is currently being worked on type/enhancement New feature or request

Comments

@tommie

tommie commented Apr 22, 2023

What I'd like: To run Bottlerocket OS on Hetzner Cloud.

Device type (e.g. network interface, disk controller): Disk controller

Device vendor: QEMU

Device model: N/A

Driver used on other Linux distributions: virtio_scsi / CONFIG_SCSI_VIRTIO

Any alternatives you've considered:

Looked at #842 (comment)

I can reproduce the error in VirtualBox by using an LsiLogic, BusLogic, or virtio-scsi controller instead of the default IDE/ATA. I haven't tested a patched Bottlerocket with virtio_scsi, so I can't say whether it's enough, but given the list in the linked comment, it's the only piece missing.
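For reference, attaching a disk behind a virtio-scsi controller in QEMU looks roughly like this (a sketch; the image path `bottlerocket.img` is hypothetical, the device flags are standard QEMU options):

```shell
# Sketch: boot an image behind a virtio-scsi controller instead of IDE/ATA.
# bottlerocket.img is a hypothetical disk image path.
qemu_cmd="qemu-system-x86_64 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive if=none,id=drive0,file=bottlerocket.img,format=raw \
  -device scsi-hd,drive=drive0,bus=scsi0.0"
# Print the controller type to confirm the command wires up virtio-scsi:
echo "$qemu_cmd" | grep -o 'virtio-scsi-pci'
```

Without CONFIG_SCSI_VIRTIO available at mount time, a guest booted this way cannot see its root disk.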

@tommie tommie added area/metal Bare metal support status/needs-triage Pending triage or re-evaluation type/enhancement New feature or request labels Apr 22, 2023
@tommie
Author

tommie commented Apr 24, 2023

Having had a second look...

The driver is actually being built, but only as a module. Since there is no initrd, I don't think there is a way for me to use the module before verity has brought up the root filesystem. Would it be possible to extend bootconfig into a simple initrd that loads modules?
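The built-in vs. module distinction can be checked directly against a kernel config; a minimal sketch (the here-doc content stands in for the real metal kernel config):

```shell
# Sketch: classify CONFIG_SCSI_VIRTIO as built-in (=y), module (=m), or disabled.
# The here-doc stands in for the real kernel config file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
CONFIG_SCSI_VIRTIO=m
EOF
case "$(sed -n 's/^CONFIG_SCSI_VIRTIO=//p' "$cfg")" in
  y) echo "built-in: usable before the root filesystem is mounted" ;;
  m) echo "module: needs an initrd or a running system to load it" ;;
  *) echo "disabled" ;;
esac
rm -f "$cfg"
```

Only the `=y` case helps here, since the module would otherwise have to be loaded from the very filesystem it is needed to mount.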

Another option: if the kernel were to use multiboot, GRUB could be made to preload modules from the private partition and load them that way. AFAIK, this is not possible with the linux loader. This would require optionally reading a GRUB config snippet from the private partition, similar to the hook that bootconfig.data provides for the kernel.

@stmcginnis
Contributor

Hi @tommie. Thanks for digging into this.

This is typically handled by a bootstrap container. From there you would be able to load additional kernel modules.

An example of loading something similar is discussed here: #2409 (comment)

Hopefully that gives enough context to see what needs to be done.
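A bootstrap container along those lines would do little more than iterate over a module list and modprobe each one. A dry-run sketch (the module list and file layout are assumptions; a real script would call `modprobe` where the echo is, and needs CAP_SYS_MODULE):

```shell
# Sketch of a bootstrap-container entry point that loads extra kernel modules.
# The module list is illustrative; a real run would replace the echo with:
#   modprobe "$mod"
modlist=$(mktemp)
printf 'virtio_scsi\n' > "$modlist"
while read -r mod; do
  echo "would load: $mod"
done < "$modlist"
rm -f "$modlist"
```

As the next comment points out, though, this only works once the root filesystem is up, which is exactly what fails here.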

@tommie
Author

tommie commented Apr 24, 2023

The problem is that the root file system can't be mounted (/dev/mapper/dm-0 never becomes ready), so a bootstrap container won't help.

I see the same kernel messages as #842 (comment)

@bcressey
Contributor

Are you using one of the metal-* variants? We'd need to add this to config-bottlerocket-metal there if so.

@foersleo what do you think?

When Secure Boot support arrives after #2501, we wouldn't want to extend the GRUB config, since that could be used to bypass the intended protections. #2729 already disabled any unexpected use of an initrd. So I'd prefer that the kernel just have the required modules built in.

@tommie
Author

tommie commented Apr 25, 2023

@bcressey Yes, sorry, I'm using the metal version. I had a look at "vmware" too, but it doesn't have any kernel config overrides, so I ignored that.

Including virtio_scsi in the kernel makes sense to me (but I might be biased in this thread ;).

Since I like modularity, I was also experimenting with an initramfs (up until I realized that CONFIG_INITRAMFS_FORCE was in effect...). I made a firstdog init that simply loads modules from a directory and does switch_root. Combining Secure Boot with a forced initramfs and the modularity of modules seems tricky, even if module signing is mandatory.

  • We can't lock a GRUB include file to only module statements.
  • We can't force a specific kind of initramfs to be present in PRIVATE.

However, we could

  • Load signed kernel modules from a well-known directory on the GRUB partition, since we know that's accessible through EFI/BIOS, provided that GRUB supports wildcard loading of modules. This requires multiboot and a tool to copy modules from the root FS to the boot FS.
  • Sign an initramfs built in a bootstrap container, using a locked-down tool with a MOK or similar. It could then reside in the same place; i.e. this tool would only take a list of modules to be populated from the root FS. This requires removing the initramfs forcing, but the signature should help. I'm guessing SELinux could be used to not even grant access to the MOK private key from the admin container? For high-security situations, the signing could of course be done on a separate build host.

Both of these require boot FS to be writeable, which isn't very nice.

@foersleo
Contributor

I think the case for building in the module is solid, considering the alternatives seem to involve a lot more headache for comparatively little advantage.

I will prepare a PR and check whether that pulls in any unforeseen dependencies. Going by the config definition, I would not expect anything big to be pulled in, though.
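If the change lands as a kernel config override (as bcressey suggested), it would presumably be a one-line addition along these lines; the exact file within config-bottlerocket-metal is an assumption:

```
# Build the virtio SCSI driver into the metal kernel rather than as a module,
# so a root filesystem on a virtio-scsi disk can be mounted without an initrd.
CONFIG_SCSI_VIRTIO=y
```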

@foersleo foersleo self-assigned this Apr 25, 2023
@foersleo foersleo added status/in-progress This issue is currently being worked on and removed status/needs-triage Pending triage or re-evaluation labels Apr 25, 2023
@tommie
Author

tommie commented Apr 26, 2023

Thank you!
