Wrong disk size in metrics for btrfs backend instances #15265
Comments
Yes, I was thinking #8468 sounded similar.
I think I have identified the cause of this issue. The problem stems from how different filesystems expose quota information to the kernel's VFS layer. While ZFS integrates quota limits directly into its filesystem statistics (so statfs calls correctly report the quota-limited size), BTRFS reports the entire pool's statistics regardless of any quotas applied to specific subvolumes. Currently, our metrics code relies on the standard filesystem statistics, which works correctly for ZFS but not for BTRFS. I'm working on a fix that will specifically handle the BTRFS case by directly querying the BTRFS quota (qgroup) information, for instance by parsing the output of the btrfs qgroup tooling.
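As a rough illustration of what such a fix could look like (a sketch only, not the actual LXD code), here is a minimal Go example that shells out to `btrfs qgroup show` and reads the referenced-size limit for a subvolume. The exact flags, the column layout, the helper name, and the example path are assumptions based on typical btrfs-progs output.

```go
// Minimal sketch, not the LXD implementation: read a subvolume's qgroup
// limit by parsing `btrfs qgroup show` instead of trusting statfs().
// Flags and column order (qgroupid, rfer, excl, max_rfer) are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// btrfsSubvolumeQuotaLimit (hypothetical helper) returns the "max_rfer"
// limit in bytes for the qgroup backing the given subvolume path, or 0 if
// no limit is configured.
func btrfsSubvolumeQuotaLimit(path string) (uint64, error) {
	out, err := exec.Command("btrfs", "qgroup", "show", "-r", "-f", "--raw", path).CombinedOutput()
	if err != nil {
		return 0, fmt.Errorf("btrfs qgroup show failed: %w: %s", err, out)
	}

	for _, line := range strings.Split(string(out), "\n") {
		fields := strings.Fields(line)
		// Skip header and separator lines; data rows start with "0/<subvol id>".
		if len(fields) < 4 || !strings.HasPrefix(fields[0], "0/") {
			continue
		}
		if fields[3] == "none" {
			return 0, nil // no max_rfer limit configured for this qgroup
		}
		return strconv.ParseUint(fields[3], 10, 64)
	}

	return 0, fmt.Errorf("no qgroup entry found for %q", path)
}

func main() {
	// Hypothetical subvolume path for illustration only.
	limit, err := btrfsSubvolumeQuotaLimit("/var/lib/lxd/storage-pools/default/containers/c1")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("qgroup max_rfer limit: %d bytes\n", limit)
}
```

The idea is simply that the metrics path could fall back to a value like this when the instance lives on a btrfs subvolume with quotas enabled, instead of the pool-wide numbers statfs returns.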
@gabrielmougard which code path is the problem in currently? Link please :)
In the …
Obviously, we have the same issue in Line 285 in c3e921c.
@gabrielmougard can you give me a …
Sure! Also, I want to stress that this idea of mine is a guess for now, as I need to log the output of statfs with the reproducer scenario. But this guess seems to be corroborated by https://lore.kernel.org/linux-btrfs/[email protected]/T/ If I understand correctly, statfs() only allows the kernel to report two numbers to describe space usage (total blocks and free blocks), and the only space tracking we have at a subvolume level is qgroups, hence the idea of using the qgroup information instead.
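For context, here is a minimal Go sketch of the kind of statfs() call the metrics path effectively relies on today. On btrfs the totals below describe the whole pool, not the subvolume's qgroup limit. The path is a made-up example and this is not the actual LXD code.

```go
// Minimal sketch, assuming a Linux host: statfs() only exposes total and
// free blocks, so on btrfs it reflects the pool, not a qgroup limit.
package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	var st unix.Statfs_t
	// Hypothetical instance rootfs path; on btrfs this is a subvolume.
	if err := unix.Statfs("/var/lib/lxd/storage-pools/default/containers/c1/rootfs", &st); err != nil {
		panic(err)
	}

	total := st.Blocks * uint64(st.Bsize) // whole pool size on btrfs
	free := st.Bfree * uint64(st.Bsize)
	fmt.Printf("size=%d bytes free=%d bytes\n", total, free)
}
```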
Distribution
snap
Distribution version
6.3
Output of "snap list --all lxd core20 core22 core24 snapd"
Issue description
For an instance on a btrfs pool with a limited main disk size, the reported lxd_filesystem_size_bytes in the GET /1.0/metrics endpoint wrongly contains the total storage pool size. With other storage pool drivers, like zfs or directory, the size in the metrics result correctly returns the instance limit, not the total pool size.
I suspect this to be an issue with the btrfs integration.
See also canonical/lxd-ui#1155
Might be related to #8468
Steps to reproduce
1. Create an instance on a btrfs storage pool and set a size limit on its main disk.
2. Query the GET /1.0/metrics endpoint and check lxd_filesystem_size_bytes for that instance: it reports the total pool size instead of the configured limit.