Cgroup: align with upstream 6.11 CONFIG_MEMCG_V1 CONFIG_CPUSETS_V1

What happened?

Tests on bpf-next are failing (https://github.com/cilium/tetragon/actions/runs/11500178416/job/32010347721?pr=2615) due to the memory controller being split upstream, where the v1 code is now under its own config option, CONFIG_MEMCG_V1.
Our cgroup logic works by detecting the available controllers from /proc/cgroups and using memory, then pids, since both should always be available. With this change the memory v1 controller is no longer available there, so we switch to pids. I suspect the failing test is hitting a race condition, since we run the test quickly; the in-container load tests pass, which also points to a race.
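As a rough illustration of the detection flow described above, here is a minimal Go sketch; detectController is a hypothetical helper name, not Tetragon's actual code:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// detectController is a hypothetical helper illustrating the flow described
// above: parse /proc/cgroups and prefer the memory controller, falling back
// to pids. It is a sketch, not Tetragon's actual implementation.
func detectController() (string, error) {
	f, err := os.Open("/proc/cgroups")
	if err != nil {
		return "", err
	}
	defer f.Close()

	available := map[string]bool{}
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		line := scanner.Text()
		// Skip the "#subsys_name hierarchy num_cgroups enabled" header.
		if strings.HasPrefix(line, "#") {
			continue
		}
		fields := strings.Fields(line)
		// Record controllers whose "enabled" column is 1.
		if len(fields) >= 4 && fields[3] == "1" {
			available[fields[0]] = true
		}
	}
	if err := scanner.Err(); err != nil {
		return "", err
	}

	// Prefer memory, then pids. Per the report above, on a 6.11 kernel
	// without CONFIG_MEMCG_V1 the memory v1 controller is not usable,
	// which is what triggers the pids fallback.
	for _, name := range []string{"memory", "pids"} {
		if available[name] {
			return name, nil
		}
	}
	return "", fmt.Errorf("no usable controller found in /proc/cgroups")
}

func main() {
	ctrl, err := detectController()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("selected controller:", ctrl)
}
```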
Anyway, since the memory v1 logic has now been split out, we have to check the following:

- If memory v1 is not mounted, adapt the userspace detection logic and ensure that we continue to read cgroup names correctly, so that we work transparently with both cgroup v1 and cgroup v2 (see the sketch after this list).
  Added more checks to ensure that cgroup v1 is properly set up, and ensured that cgroup v2 also works.
- Fix the failing test and possibly cover new cases.
  We have tests covering all cases; some already detected this change, and we added new ones.
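A minimal sketch of one way to distinguish the two setups for the first checklist item, assuming Linux and the golang.org/x/sys/unix package; cgroupMode and the paths used are illustrative, not Tetragon's actual API:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// cgroupMode reports whether /sys/fs/cgroup is the unified (v2) hierarchy
// and whether a v1 memory controller mount is present. Names and paths are
// illustrative only; this is not Tetragon's detection code.
func cgroupMode() (unified bool, memoryV1 bool, err error) {
	var st unix.Statfs_t
	if err = unix.Statfs("/sys/fs/cgroup", &st); err != nil {
		return false, false, err
	}
	// CGROUP2_SUPER_MAGIC means the unified v2 hierarchy is mounted here.
	unified = st.Type == unix.CGROUP2_SUPER_MAGIC

	// On v1 or hybrid setups, the memory controller (when built with
	// CONFIG_MEMCG_V1) is conventionally mounted at /sys/fs/cgroup/memory.
	if fi, statErr := os.Stat("/sys/fs/cgroup/memory"); statErr == nil && fi.IsDir() {
		memoryV1 = true
	}
	return unified, memoryV1, nil
}

func main() {
	unified, memV1, err := cgroupMode()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("cgroup v2 unified: %v, memory v1 mounted: %v\n", unified, memV1)
}
```

A check along these lines would let the cgroup-name reading path pick the unified hierarchy when no v1 memory mount exists, which is the transparency requirement described in the first item.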
Tetragon Version
head
Kernel Version
6.11
Kubernetes Version
No response
Bugtool
No response
Relevant log output
No response
Anything else?
No response