Merge pull request #46 from jillian-maroket/add-community-updates6
Add text from community doc PRs (Dec 2024 - Jan 2025)
jillian-maroket authored Jan 9, 2025
2 parents 608822d + 5cefecd commit fb5d590
Showing 14 changed files with 252 additions and 2 deletions.
15 changes: 14 additions & 1 deletion versions/v1.3/modules/en/pages/add-ons/vm-dhcp-controller.adoc
@@ -1,12 +1,25 @@
= VM DHCP Controller (Managed DHCP)

-Beginning with v1.3.0, you can configure IP pool information and serve IP addresses to VMs running on SUSE® Virtualization clusters using the embedded Managed DHCP feature. This feature, which is an alternative to the standalone DHCP server, leverages the vm-dhcp-controller add-on to simplify guest cluster deployment.
+You can configure IP pool information and serve IP addresses to VMs running on SUSE® Virtualization clusters using the embedded Managed DHCP feature. This feature, which is an alternative to the standalone DHCP server, leverages the vm-dhcp-controller add-on to simplify guest cluster deployment.

[NOTE]
====
SUSE® Virtualization uses the planned infrastructure network, so you must ensure that network connectivity is available and plan the IP pools in advance.
====
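
To plan a pool concretely, it can help to see the shape of the configuration. The following is a minimal sketch only: the API group, version, and field names are assumptions based on the add-on's CRD, and all addresses are placeholders, so consult the add-on documentation for the authoritative schema.

[,shell]
----
# Illustrative IPPool for a VM network named "default/vlan1"
# (API group/version and field names are assumptions; addresses are placeholders)
kubectl apply -f - <<EOF
apiVersion: network.harvesterhci.io/v1alpha1
kind: IPPool
metadata:
  name: vlan1-pool
  namespace: default
spec:
  ipv4Config:
    serverIP: 192.168.48.2
    cidr: 192.168.48.0/24
    pool:
      start: 192.168.48.101
      end: 192.168.48.200
  networkName: default/vlan1
EOF
----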

== Unique Features

* DHCP leases are stored in etcd as the single source of truth across the entire cluster.
* Leases are static by nature and work well with your existing network infrastructure.
* The Managed DHCP agents can still serve DHCP requests for existing entities even if the cluster's control plane stops working, ensuring that your virtual machine workload's network remains available.

== Limitations

* The Managed DHCP feature only works with the network interfaces specified in the VirtualMachine CRs. Network interfaces created inside the guest operating system are not supported.
* IP addresses are not allocated or deallocated when you add or remove network interfaces after the virtual machine is created. The actual MAC addresses are recorded in the VirtualMachineNetworkConfig CRs.
* The DHCP RELEASE operation is currently not supported.
* IPPool configuration updates take effect only after you manually restart the relevant agent pods, as in the sketch after this list.
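
A minimal sketch of that restart step, assuming the agent pods run in the `harvester-system` namespace and carry an identifying label (both names are assumptions; verify them in your cluster first):

[,shell]
----
# Find the vm-dhcp-controller agent pods (namespace and label are assumptions)
kubectl get pods -n harvester-system -l app.kubernetes.io/name=harvester-vm-dhcp-controller

# Delete an agent pod; it is recreated with the updated IPPool configuration
kubectl delete pod <agent-pod-name> -n harvester-system
----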

== Install and Enable the vm-dhcp-controller Add-On

The vm-dhcp-controller add-on is not packaged into the SUSE® Virtualization ISO, but you can download it from the https://github.com/harvester/experimental-addons[experimental-addons repository]. You can install the add-on by applying its manifest with `kubectl`, as in the sketch below.
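
A minimal sketch of the installation command, assuming the add-on manifest sits at its usual path in the experimental-addons repository (the exact path is an assumption):

[,shell]
----
# Apply the add-on manifest directly from the repository (path is an assumption)
kubectl apply -f https://raw.githubusercontent.com/harvester/experimental-addons/main/harvester-vm-dhcp-controller/harvester-vm-dhcp-controller.yaml
----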
14 changes: 14 additions & 0 deletions versions/v1.3/modules/en/pages/add-ons/vm-import-controller.adoc
@@ -131,6 +131,7 @@ metadata:
  namespace: default
spec:
  virtualMachineName: "alpine-export-test"
  folder: "Discovered VM" # Optional: folder name, if the source virtual machine is placed in a folder
  networkMapping:
  - sourceNetwork: "dvSwitch 1"
    destinationNetwork: "default/vlan1"
@@ -189,3 +190,16 @@ spec:
OpenStack allows users to have multiple instances with the same name. In such a scenario, users are advised to use the Instance ID. The reconciliation logic tries to perform a name-to-ID lookup when a name is used.
====

==== Known Issues

* *Source virtual machine name is not RFC1123-compliant*: When creating a virtual machine object, the vm-import-controller add-on uses the name of the source virtual machine, which may not meet the Kubernetes object https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names[naming criteria]. You may need to rename the source virtual machine to allow successful completion of the import.
+
* *Virtual machine image name is too long*: The vm-import-controller add-on labels each imported disk using the format `vm-import-$VMname-$DiskName`. If a label exceeds 63 characters, you will see the following error message in the vm-import-controller logs:
+
[,shell]
----
harvester-vm-import-controller-5698cd57c4-zw9l5 time="2024-08-30T19:20:34Z" level=error msg="error syncing 'default/mike-mr-tumbleweed-test': handler virtualmachine-import-job-change: error creating vmi: VirtualMachineImage.harvesterhci.io \"image-znqsp\" is invalid: metadata.labels: Invalid value: \"vm-import-mike-mr-tumbleweed-test-mike-mr-tumbleweed-test-default-disk-0.img\": must be no more than 63 characters, requeuing"
----
+
You may need to modify the assigned labels to allow successful completion of the import.
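
As a quick check before importing, you can verify that a generated label fits within the limit. This is a minimal sketch; the variable values are illustrative and copied from the log output above:

[,shell]
----
# Hypothetical source VM and disk names, taken from the example error message
VM_NAME="mike-mr-tumbleweed-test"
DISK_NAME="mike-mr-tumbleweed-test-default-disk-0.img"

# Kubernetes label values must be no more than 63 characters
echo -n "vm-import-${VM_NAME}-${DISK_NAME}" | wc -c
----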
21 changes: 21 additions & 0 deletions versions/v1.3/modules/en/pages/troubleshooting/cluster.adoc
@@ -188,6 +188,27 @@ Example:
kubectl cp harvester-system/supportbundle-manager-bundle-dtl2k-69dcc69b59-w64vl:/tmp/support-bundle-kit/supportbundle_db25ccb6-b52a-4f9d-97dd-db2df2b004d4_2024-02-02T11-18-10Z.zip bundle.zip
----

=== Manually Collect Data for the Support Bundle

Harvester cannot collect data and generate a support bundle when a node is inaccessible or not ready. As a workaround, you can run a collection script on the node and compress the generated files.

. Prepare the environment.
+
[,sh]
----
mkdir -p /tmp/support-bundle # ensure /tmp/support-bundle exists
echo 'JOURNALCTL="/usr/bin/journalctl -o short-precise"' > /tmp/common # single quotes preserve the inner double quotes in /tmp/common
export SUPPORT_BUNDLE_NODE_NAME=$(hostname)
----
+
. Run the following commands:
+
* Download the script: `curl -o collector-harvester https://raw.githubusercontent.com/rancher/support-bundle-kit/refs/heads/master/hack/collector-harvester`
* Add executable permissions: `chmod +x collector-harvester`
* Run the script: `./collector-harvester / /tmp/support-bundle`
+
. Compress the files in `/tmp/support-bundle`, and then attach the archive to the related issue, as in the sketch below.
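
A minimal sketch of the compression step, assuming `tar` is available on the node (the archive name is illustrative):

[,sh]
----
# Bundle the collected files into a single archive for upload
tar -zcvf /tmp/support-bundle-$(hostname).tar.gz -C /tmp support-bundle
----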

=== Known Limitations

* Replacing the backing pod prevents the support bundle file from being downloaded.
32 changes: 32 additions & 0 deletions versions/v1.3/modules/en/pages/upgrades/v1-3-2-to-v1-4-0.adoc
@@ -65,4 +65,36 @@ done
echo "removing longhorn services"
kubectl delete svc longhorn-engine-manager -n longhorn-system --ignore-not-found=true
kubectl delete svc longhorn-replica-manager -n longhorn-system --ignore-not-found=true
----

=== 3. Upgrade stuck on waiting for Fleet

When upgrading from v1.3.2 to v1.4.0, the upgrade process may become stuck on waiting for Fleet to become ready. This issue is caused by a race condition when Rancher is redeployed.

Check the Harvester upgrade logs and the Fleet Helm history for the following indicators:

* The manifest pod is stuck waiting for the Fleet release to reach the `deployed` status.
* A Helm upgrade is pending for a chart version that has already been deployed.

Example:

[,shell]
----
> kubectl logs -n harvester-system -l harvesterhci.io/upgradeComponent=manifest
wait helm release cattle-fleet-system fleet fleet-104.0.2+up0.10.2 0.10.2 deployed
> helm history -n cattle-fleet-system fleet
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
26 Tue Dec 10 03:09:13 2024 superseded fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
27 Sun Dec 15 09:26:54 2024 superseded fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
28 Sun Dec 15 09:27:03 2024 superseded fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
29 Mon Dec 16 05:57:03 2024 deployed fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
30 Mon Dec 16 05:57:13 2024 pending-upgrade fleet-103.1.5+up0.9.5 0.9.5 Preparing upgrade
----

To fix the issue, roll back Fleet to the last successfully deployed revision:

[,shell]
----
helm rollback fleet -n cattle-fleet-system <last-deployed-revision>
----
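
Using the example history above, revision 29 is the last revision in the `deployed` status, so the rollback command becomes:

[,shell]
----
helm rollback fleet -n cattle-fleet-system 29
----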
15 changes: 14 additions & 1 deletion versions/v1.4/modules/en/pages/add-ons/vm-dhcp-controller.adoc
@@ -1,12 +1,25 @@
= VM DHCP Controller (Managed DHCP)

-Beginning with v1.3.0, you can configure IP pool information and serve IP addresses to VMs running on SUSE® Virtualization clusters using the embedded Managed DHCP feature. This feature, which is an alternative to the standalone DHCP server, leverages the vm-dhcp-controller add-on to simplify guest cluster deployment.
+You can configure IP pool information and serve IP addresses to VMs running on SUSE® Virtualization clusters using the embedded Managed DHCP feature. This feature, which is an alternative to the standalone DHCP server, leverages the vm-dhcp-controller add-on to simplify guest cluster deployment.

[NOTE]
====
SUSE® Virtualization uses the planned infrastructure network, so you must ensure that network connectivity is available and plan the IP pools in advance.
====

== Unique Features

* DHCP leases are stored in etcd as the single source of truth across the entire cluster.
* Leases are static by nature and work well with your existing network infrastructure.
* The Managed DHCP agents can still serve DHCP requests for existing entities even if the cluster's control plane stops working, ensuring that your virtual machine workload's network remains available.

== Limitations

* The Managed DHCP feature only works with the network interfaces specified in the VirtualMachine CRs. Network interfaces created inside the guest operating system are not supported.
* IP addresses are not allocated or deallocated when you add or remove network interfaces after the virtual machine is created. The actual MAC addresses are recorded in the VirtualMachineNetworkConfig CRs.
* The DHCP RELEASE operation is currently not supported.
* IPPool configuration updates take effect only after you manually restart the relevant agent pods.

== Install and Enable the vm-dhcp-controller Add-On

The vm-dhcp-controller add-on is not packaged into the SUSE® Virtualization ISO, but you can download it from the https://github.com/harvester/experimental-addons[experimental-addons repository]. You can install the add-on by running the following command:
14 changes: 14 additions & 0 deletions versions/v1.4/modules/en/pages/add-ons/vm-import-controller.adoc
@@ -131,6 +131,7 @@ metadata:
  namespace: default
spec:
  virtualMachineName: "alpine-export-test"
  folder: "Discovered VM" # Optional: folder name, if the source virtual machine is placed in a folder
  networkMapping:
  - sourceNetwork: "dvSwitch 1"
    destinationNetwork: "default/vlan1"
@@ -189,3 +190,16 @@ spec:
OpenStack allows users to have multiple instances with the same name. In such a scenario, users are advised to use the Instance ID. The reconciliation logic tries to perform a name-to-ID lookup when a name is used.
====

==== Known Issues

* *Source virtual machine name is not RFC1123-compliant*: When creating a virtual machine object, the vm-import-controller add-on uses the name of the source virtual machine, which may not meet the Kubernetes object https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names[naming criteria]. You may need to rename the source virtual machine to allow successful completion of the import.
+
* *Virtual machine image name is too long*: The vm-import-controller add-on labels each imported disk using the format `vm-import-$VMname-$DiskName`. If a label exceeds 63 characters, you will see the following error message in the vm-import-controller logs:
+
[,shell]
----
harvester-vm-import-controller-5698cd57c4-zw9l5 time="2024-08-30T19:20:34Z" level=error msg="error syncing 'default/mike-mr-tumbleweed-test': handler virtualmachine-import-job-change: error creating vmi: VirtualMachineImage.harvesterhci.io \"image-znqsp\" is invalid: metadata.labels: Invalid value: \"vm-import-mike-mr-tumbleweed-test-mike-mr-tumbleweed-test-default-disk-0.img\": must be no more than 63 characters, requeuing"
----
+
You may need to modify the assigned labels to allow successful completion of the import.
21 changes: 21 additions & 0 deletions versions/v1.4/modules/en/pages/troubleshooting/cluster.adoc
@@ -188,6 +188,27 @@ Example:
kubectl cp harvester-system/supportbundle-manager-bundle-dtl2k-69dcc69b59-w64vl:/tmp/support-bundle-kit/supportbundle_db25ccb6-b52a-4f9d-97dd-db2df2b004d4_2024-02-02T11-18-10Z.zip bundle.zip
----

=== Manually Collect Data for the Support Bundle

Harvester cannot collect data and generate a support bundle when a node is inaccessible or not ready. As a workaround, you can run a collection script on the node and compress the generated files.

. Prepare the environment.
+
[,sh]
----
mkdir -p /tmp/support-bundle # ensure /tmp/support-bundle exists
echo 'JOURNALCTL="/usr/bin/journalctl -o short-precise"' > /tmp/common # single quotes preserve the inner double quotes in /tmp/common
export SUPPORT_BUNDLE_NODE_NAME=$(hostname)
----
+
. Run the following commands:
+
* Download the script: `curl -o collector-harvester https://raw.githubusercontent.com/rancher/support-bundle-kit/refs/heads/master/hack/collector-harvester`
* Add executable permissions: `chmod +x collector-harvester`
* Run the script: `./collector-harvester / /tmp/support-bundle`
+
. Compress the files in `/tmp/support-bundle`, and then attach the archive to the related issue.

=== Known Limitations

* Replacing the backing pod prevents the support bundle file from being downloaded.
32 changes: 32 additions & 0 deletions versions/v1.4/modules/en/pages/upgrades/v1-3-2-to-v1-4-0.adoc
@@ -65,4 +65,36 @@ done
echo "removing longhorn services"
kubectl delete svc longhorn-engine-manager -n longhorn-system --ignore-not-found=true
kubectl delete svc longhorn-replica-manager -n longhorn-system --ignore-not-found=true
----

=== 3. Upgrade stuck on waiting for Fleet

When upgrading from v1.3.2 to v1.4.0, the upgrade process may become stuck on waiting for Fleet to become ready. This issue is caused by a race condition when Rancher is redeployed.

Check the Harvester upgrade logs and the Fleet Helm history for the following indicators:

* The manifest pod is stuck waiting for the Fleet release to reach the `deployed` status.
* A Helm upgrade is pending for a chart version that has already been deployed.

Example:

[,shell]
----
> kubectl logs -n harvester-system -l harvesterhci.io/upgradeComponent=manifest
wait helm release cattle-fleet-system fleet fleet-104.0.2+up0.10.2 0.10.2 deployed
> helm history -n cattle-fleet-system fleet
REVISION UPDATED STATUS CHART APP VERSION DESCRIPTION
26 Tue Dec 10 03:09:13 2024 superseded fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
27 Sun Dec 15 09:26:54 2024 superseded fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
28 Sun Dec 15 09:27:03 2024 superseded fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
29 Mon Dec 16 05:57:03 2024 deployed fleet-103.1.5+up0.9.5 0.9.5 Upgrade complete
30 Mon Dec 16 05:57:13 2024 pending-upgrade fleet-103.1.5+up0.9.5 0.9.5 Preparing upgrade
----

To fix the issue, roll back Fleet to the last successfully deployed revision:

[,shell]
----
helm rollback fleet -n cattle-fleet-system <last-deployed-revision>
----
@@ -306,6 +306,11 @@ The schedule is automatically suspended when the number of consecutive failed ba
+
{harvester-product-name} does not allow you to resume a suspended schedule for backup creation if the backup target is not reachable.

[NOTE]
====
If a schedule was automatically suspended because the **Max Failure** value was exceeded, you must explicitly resume that schedule after verifying that the backup or snapshot can be created successfully. For example, when the backup target becomes reachable again after a period of disconnection, you can first create a backup manually and check the result.
====

=== Virtual Machine Operations and {harvester-product-name} Upgrades

Before you upgrade {harvester-product-name}, ensure that no virtual machine backups or snapshots are in use, and that all virtual machine schedules are suspended. The {harvester-product-name} UI displays the following error messages when upgrade attempts are rejected:
13 changes: 13 additions & 0 deletions versions/v1.5/modules/en/pages/add-ons/vm-dhcp-controller.adoc
@@ -7,6 +7,19 @@ Beginning with v1.3.0, you can configure IP pool addres
SUSE® Virtualization uses the planned infrastructure network, so you must ensure that network connectivity is available and plan the IP pools in advance.
====

== Unique Features

* DHCP leases are stored in etcd as the single source of truth across the entire cluster.
* Leases are static by nature and work well with your existing network infrastructure.
* The Managed DHCP agents can still serve DHCP requests for existing entities even if the cluster's control plane stops working, ensuring that your virtual machine workload's network remains available.

== Limitations

* The Managed DHCP feature only works with the network interfaces specified in the VirtualMachine CRs. Network interfaces created inside the guest operating system are not supported.
* IP addresses are not allocated or deallocated when you add or remove network interfaces after the virtual machine is created. The actual MAC addresses are recorded in the VirtualMachineNetworkConfig CRs.
* The DHCP RELEASE operation is currently not supported.
* IPPool configuration updates take effect only after you manually restart the relevant agent pods.

== Install and Enable the vm-dhcp-controller Add-On

The vm-dhcp-controller add-on is not packaged into the SUSE® Virtualization ISO, but you can download it from the https://github.com/harvester/experimental-addons[experimental-addons repository]. You can install the add-on by running the following command:
14 changes: 14 additions & 0 deletions versions/v1.5/modules/en/pages/add-ons/vm-import-controller.adoc
@@ -131,6 +131,7 @@ metadata:
  namespace: default
spec:
  virtualMachineName: "alpine-export-test"
  folder: "Discovered VM" # Optional: folder name, if the source virtual machine is placed in a folder
  networkMapping:
  - sourceNetwork: "dvSwitch 1"
    destinationNetwork: "default/vlan1"
@@ -189,3 +190,16 @@ spec:
OpenStack allows users to have multiple instances with the same name. In such a scenario, users are advised to use the Instance ID. The reconciliation logic tries to perform a name-to-ID lookup when a name is used.
====

==== Known Issues

* *Source virtual machine name is not RFC1123-compliant*: When creating a virtual machine object, the vm-import-controller add-on uses the name of the source virtual machine, which may not meet the Kubernetes object https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#dns-subdomain-names[naming criteria]. You may need to rename the source virtual machine to allow successful completion of the import.
+
* *Virtual machine image name is too long*: The vm-import-controller add-on labels each imported disk using the format `vm-import-$VMname-$DiskName`. If a label exceeds 63 characters, you will see the following error message in the vm-import-controller logs:
+
[,shell]
----
harvester-vm-import-controller-5698cd57c4-zw9l5 time="2024-08-30T19:20:34Z" level=error msg="error syncing 'default/mike-mr-tumbleweed-test': handler virtualmachine-import-job-change: error creating vmi: VirtualMachineImage.harvesterhci.io \"image-znqsp\" is invalid: metadata.labels: Invalid value: \"vm-import-mike-mr-tumbleweed-test-mike-mr-tumbleweed-test-default-disk-0.img\": must be no more than 63 characters, requeuing"
----
+
You may need to modify the assigned labels to allow successful completion of the import.
21 changes: 21 additions & 0 deletions versions/v1.5/modules/en/pages/troubleshooting/cluster.adoc
@@ -188,6 +188,27 @@ Example:
kubectl cp harvester-system/supportbundle-manager-bundle-dtl2k-69dcc69b59-w64vl:/tmp/support-bundle-kit/supportbundle_db25ccb6-b52a-4f9d-97dd-db2df2b004d4_2024-02-02T11-18-10Z.zip bundle.zip
----

=== Manually Collect Data for the Support Bundle

Harvester cannot collect data and generate a support bundle when a node is inaccessible or not ready. As a workaround, you can run a collection script on the node and compress the generated files.

. Prepare the environment.
+
[,sh]
----
mkdir -p /tmp/support-bundle # ensure /tmp/support-bundle exists
echo 'JOURNALCTL="/usr/bin/journalctl -o short-precise"' > /tmp/common # single quotes preserve the inner double quotes in /tmp/common
export SUPPORT_BUNDLE_NODE_NAME=$(hostname)
----
+
. Run the following commands:
+
* Download the script: `curl -o collector-harvester https://raw.githubusercontent.com/rancher/support-bundle-kit/refs/heads/master/hack/collector-harvester`
* Add executable permissions: `chmod +x collector-harvester`
* Run the script: `./collector-harvester / /tmp/support-bundle`
+
. Compress the files in `/tmp/support-bundle`, and then attach the archive to the related issue.

=== Known Limitations

* Replacing the backing pod prevents the support bundle file from being downloaded.
