Big-bang bang update (#333) broke most of my LXC updates #404
Comments
Under-provisioning LXC containers is generally not a recommended practice. Containers only use the resources they actually need, regardless of what is allocated to them. For example, even if you assign 64 GB of RAM, a container requiring only 250 MB will only use that amount.

Setting resource limits too low can lead to avoidable issues, particularly during resource-intensive tasks like build or update processes. We introduced these checks to ensure stability and avoid scenarios where builds fail or installations break due to insufficient resources. This was a frequent issue in the past and caused significant frustration for users.

While the old approach of temporarily over-provisioning resources might have worked in some cases, it also introduced its own challenges, especially for users who were unaware of these requirements. The current method ensures that containers meet the minimum resource requirements upfront, providing a more reliable experience overall.

If you prefer to maintain custom resource limits, you could consider temporarily increasing resources before running updates and then reverting them afterward. However, consistently meeting the minimum requirements for updates will likely save you time and effort in the long run.
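The increase-then-revert workaround suggested above could be scripted from the Proxmox host. A minimal sketch, assuming the real Proxmox `pct` CLI; the container ID, the required memory value, and the `update` entry point inside the container are assumptions to adapt:

```shell
#!/bin/sh
# Hypothetical sketch: bump an LXC container's memory before an update and
# restore the fine-tuned value afterward. Run on the Proxmox host, not
# inside the container.

# Pure helper: the memory (MB) to set for the update -- never shrink.
bump_target() {
  cur=$1; req=$2
  if [ "$cur" -ge "$req" ]; then echo "$cur"; else echo "$req"; fi
}

update_with_bump() {
  ctid=$1; req_mb=$2
  cur_mb=$(pct config "$ctid" | awk '/^memory:/ {print $2}')
  pct set "$ctid" -memory "$(bump_target "$cur_mb" "$req_mb")"
  pct exec "$ctid" -- update            # run the container's update command
  pct set "$ctid" -memory "$cur_mb"     # revert to the fine-tuned value
}
```

Looped over a list of container IDs, this would keep the custom limits in place except for the duration of the update itself.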
That is all true and valid, and I totally understand why it was done. Another reason for setting lower resource limits, specifically memory, is garbage collection. While it is true that containers only consume the resources they need, many (if not all) garbage-collected languages obey cgroup limits, which has a direct impact on garbage collection and the resources consumed. Running with lower resource limits can therefore reduce the amount of memory a container thinks it needs.

I am running 40 LXC containers on my Proxmox VE. Having to temporarily update the resource settings to run an update turns a simple, automatable task into a full day of work (see steps involved above). So, yeah, I basically wanted to gauge the interest in changing the current approach.
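The cgroup point can be seen directly: a containerized runtime reads its memory ceiling from the cgroup interface and sizes its heap from it. A small sketch, assuming cgroup v2 (`memory.max`); the path argument exists only so the function can be exercised outside a container:

```shell
#!/bin/sh
# Sketch: read the memory limit a containerized runtime would see under
# cgroup v2. GC'd runtimes (JVM, Go, .NET) derive heap sizes from this
# value, which is why tight limits keep a container's memory use down.
cgroup_mem_limit() {
  f=${1:-/sys/fs/cgroup/memory.max}
  if [ -r "$f" ]; then
    limit=$(cat "$f")
    # "max" means no limit is set on this cgroup
    if [ "$limit" = "max" ]; then echo "unlimited"; else echo "$limit"; fi
  else
    echo "unknown"
  fi
}
```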
The problem is: if I add the Y/N question, and we update e.g. a resource-intensive script like Vaultwarden, and the user has only 4 GB RAM, the installer can crash during the update process, so we end up doing nothing but bugfixing. That is, if fixing is still possible at all, because the process may have deleted the folder beforehand. In other words, we are talking about potentially major data loss, which would come back to us in the end. One idea would have been to allow a 10-25% deviation from the recommended resources, but even then we can't ensure that everything runs -> so we get bug reports again. The right way would rather be to size the LXCs properly: e.g. if LXC "Test" has 2 cores and 2 GB RAM according to our specification, but only needs 786 MB RAM even when updating, you could scale the specification down.
I understand.
Yeah, I don't think that a 10-25% deviation is going to cut it.
That is kind of the problem.
Adding a data loss warning and requiring a full
Can we agree that, if this causes many issues in the near future, we revert?
I can also imagine a rollback mechanism that would not necessarily incur data loss: keep the old release next to the new one and switch symlinks between them.
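The symlink idea could be sketched like this, keeping releases side by side and atomically repointing a `current` symlink via rename. The directory layout is hypothetical; `mv -T` is GNU coreutils:

```shell
#!/bin/sh
# Sketch of symlink-switch deploys: releases live side by side under
# releases/, and `current` is repointed atomically by renaming a freshly
# created symlink over it.
switch_release() {
  app_dir=$1; release=$2
  ln -sfn "$app_dir/releases/$release" "$app_dir/current.tmp"
  mv -T "$app_dir/current.tmp" "$app_dir/current"   # atomic repoint
}
# Updating = switch_release to the new version; rollback = switch_release
# back to the old one. User data must live outside releases/ to be safe.
```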
Do an example with 2-3 CT.sh scripts and then we'll take a look.
I think I will look at these for the implementation, for reasons mentioned below:
If you have other suggestions for which ones to look at, please throw them my way.

EDIT: while looking through a few more update scripts I noticed that in many cases, zigbee2mqtt for example, the dreaded data loss can easily be avoided by properly configuring the runtime so that user data is not stored inside the git checkout, i.e. setting

EDIT2:
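The separation described in the EDIT could be enforced with a small guard before an update wipes or re-clones a checkout. A sketch under hypothetical paths; the function and its name are illustrative, not from the scripts themselves:

```shell
#!/bin/sh
# Sketch of a guard: user data must live outside the git checkout so that
# deleting/re-cloning the checkout during an update cannot destroy it.
data_outside_checkout() {
  checkout=$1; data=$2
  case "$data" in
    "$checkout"|"$checkout"/*) return 1 ;;  # unsafe: data inside checkout
    *) return 0 ;;                          # safe to replace the checkout
  esac
}
```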
Hi,
Yes, you are right, LXC resources can be adjusted as desired during active operation. This is not possible in the update script, however, because we are not on the Proxmox main node. In other words, you cannot manipulate the RAM and CPU values from inside the container; that is only possible from the main node. To realize that, all >220 scripts would have to be completely rebuilt and a completely new logic created. That's not going to happen, also from a security point of view.
Thanks, makes perfect sense. Sorry for butting in!
Closed for now; it is included in my PR.
Hi @MickLesk, which PR is this? (just curious :) ) And @dsiebel, it would still be nice to explore the update/backup symlink logic, maybe we can start a discussion for that? There are several scripts where in an update the
Not pushed yet, I think tomorrow or this evening (GMT).
@burgerga sure, feel free to start one and mention me for tracking. |
Please verify that you have read and understood the guidelines.
yes
A clear and concise description of the issue.
In #333 the resource checks were changed in a way that now breaks almost all my LXC updates, e.g.:
I fine-tuned the resources of my LXC containers to what they actually need, the new resource checks however would require me to:
Am I holding this wrong? Is there a better way to do the updates?
This only happened after running
sed -i 's/tteck\/Proxmox/community-scripts\/ProxmoxVE/g' /usr/bin/update
to switch to the new repo (😢). The old update scripts still work fine, but will be outdated soon enough.
To be honest, I think the previous approach of briefly over-provisioning the container for resource-intensive build processes was perfectly fine.
Especially since the resources were reverted to the chosen/previous values after install or update, which made this effortless for the user.
What settings are you currently utilizing?
Which Linux distribution are you employing?
Debian 12
If relevant, including screenshots or a code block can be helpful in clarifying the issue.
No response
Please provide detailed steps to reproduce the issue.
sed -i 's/tteck\/Proxmox/community-scripts\/ProxmoxVE/g' /usr/bin/update
update