Host local storage support #1076
Conversation
Force-pushed from a42a608 to 0c9ef1d
Force-pushed from bf1340d to dedcf22
Force-pushed from 11818f9 to 5e5c898
Force-pushed from 5e5c898 to 0592d66
This pull request is now in conflict. Could you fix it @torchiaf? 🙏
Force-pushed from c3f64da to c53b2ff
Force-pushed from 9273c79 to 28d3fab
I've done some initial testing of this, and provisioning and unprovisioning LHv1 and LHv2 disks seems to work, but I've found two things so far that need tweaking:
...but it should be:
Force-pushed from 28d3fab to bcef064
Thanks @tserong, the code is now updated to handle this.
This pull request is now in conflict. Could you fix it @torchiaf? 🙏
Thanks @torchiaf. The StorageClass is not quite right yet, though.

I also have a question: previously there was a "force formatted" checkbox for Longhorn V1, but it isn't present anymore, although there's still a tip about it. Is this correct?

I also have a problem provisioning Longhorn V1 disks. They seem to add successfully, but shortly thereafter they aren't mounted and are unschedulable. I suspect this is actually a problem in node-disk-manager; I'll report back once I've figured out what's going on with this.
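For context, a StorageClass targeting the Longhorn v2 data engine generally looks something like the sketch below; the name and parameter values are illustrative and may not match exactly what the dashboard generates in this PR.

```yaml
# Illustrative sketch of a Longhorn v2 StorageClass; the name and values are
# examples, not necessarily what the dashboard produces in this PR.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-v2-example        # hypothetical name
provisioner: driver.longhorn.io    # same CSI provisioner as v1
allowVolumeExpansion: false        # v2 volumes reportedly don't support resize yet
reclaimPolicy: Delete
volumeBindingMode: Immediate
parameters:
  dataEngine: "v2"                 # selects the v2 (SPDK-based) data engine
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
```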
...oh also, one more thing: after a disk is provisioned, do we need to show which provisioner is used when viewing the disks on that node? There's no indication of that currently - you just see the disk listed, with nothing to say which provisioner it uses. @Vicente-Cheng, WDYT? Also, I haven't tested what's shown here in the LVM case.
Force-pushed from bcef064 to 0cecf90
@tserong I've fixed that issue. For the other points:
I checked with the latest changes. It looks like @torchiaf already added the provisioner fields. The current display looks good to me. WDYT, @tserong?
I thought we could keep that, but only for the v1 engine.
BTW, I checked the whole LVM flow, and everything looks fine. Thanks!
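For reference, an LVM-backed StorageClass created through the addon would look roughly like the following sketch; the provisioner name and parameter keys follow upstream csi-driver-lvm conventions and are assumptions here, not verified against this PR.

```yaml
# Rough sketch of an LVM StorageClass. The provisioner name and parameter
# keys are assumptions based on upstream csi-driver-lvm conventions; the
# Harvester addon may use different identifiers.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: lvm-striped-example          # hypothetical name
provisioner: lvm.csi.metal-stack.io  # assumed provisioner; Harvester's fork may differ
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: "striped"                    # assumed key: linear | striped | mirror
```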
Thanks @Vicente-Cheng, @torchiaf! Will re-test this morning.
Signed-off-by: Francesco Torchia <[email protected]>
Force-pushed from f2b29a9 to ba16fd2
Signed-off-by: Francesco Torchia <[email protected]>
Force-pushed from ba16fd2 to 21cbfa6
@tserong @Vicente-Cheng I've fixed this as well.
The LHv2 changes are all behaving correctly in my testing. Thanks for all your work on this @torchiaf!
Tested the LVM part (addon enable, VG creation, volume creation).
Thanks!
Deleting an unsaved disk and then adding a disk back restores the previous settings. Do we want to fix this behavior?
Does a Longhorn V2 volume support resize? cc @tserong @Vicente-Cheng
Does LHv2 volume support …
Maybe we should disable LHv2 volume to …
Hi @a110605,
We already have a webhook in v0.1.3. I thought you were using v0.1.2 in the lab environment.
As I mentioned before, this is related to UI behavior.
We can remove and re-add it to handle this user error, but I will also create a webhook to check it (a rough sketch of such a webhook registration is included after this comment).
I think v2 volumes do not support resize. (REF: longhorn/longhorn#8022)
+1, cc @tserong
Looks reasonable to me if we cannot restore the v2 volume. cc @tserong
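For anyone following along, a validating webhook for blockdevices would be registered roughly like the sketch below; the webhook, service, namespace, and path names are placeholders, not the actual node-disk-manager configuration.

```yaml
# Generic sketch of a validating webhook registration for blockdevices.
# Webhook, service, namespace, and path names are placeholders; the real
# node-disk-manager webhook may be wired up differently.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: node-disk-manager-validator              # placeholder name
webhooks:
  - name: blockdevices.example.harvesterhci.io   # placeholder webhook name
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail
    rules:
      - apiGroups: ["harvesterhci.io"]
        apiVersions: ["v1beta1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["blockdevices"]
    clientConfig:
      service:
        name: node-disk-manager-webhook          # placeholder service
        namespace: harvester-system              # placeholder namespace
        path: /v1/webhook/validation             # placeholder path
        port: 443
```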
Agree with @Vicente-Cheng to disable expand and export to image. As for snapshots, I have created a LHv2 volume snapshot and (seem to have) successfully restored it to a new volume, so IMO we should leave this enabled.

My one question is: can we do this extra little bit of work separately later, or should we do it as part of this PR, bearing in mind we really want to get this PR merged as soon as possible for RC1? (My preference is to merge as-is if there are no other issues.)
Yeah, I agree! |
Let's open a new Issue/PR to change the remaining functionalities. |
@mergify backport release-harvester-v1.4 |
✅ Backports have been created
Summary
PR Checklist
Related Issue harvester/harvester#6057
Related Issue harvester/harvester#5953
Occurred changes and/or fixed issues
Technical notes summary
Areas or cases that should be tested
Areas which could experience regressions
Screenshot/Video