1. Deploy a Fleet-managed Elastic Agent DaemonSet to Kubernetes (with "Agent monitoring" enabled).
2. Observe memory utilization.
3. Disable "Agent monitoring".
4. Observe memory utilization again.
5. Note that memory utilization decreases by ~200-250 MB (~25% of the agent pod/container's total memory utilization).
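For reference, on a standalone agent the equivalent toggle lives in `elastic-agent.yml`; Fleet-managed agents (as in this report) get the setting from the agent policy in Kibana instead. A minimal sketch:

```yaml
# elastic-agent.yml (standalone agent only) — a sketch of the self-monitoring
# toggle; Fleet-managed agents receive this from the agent policy instead.
agent.monitoring:
  enabled: false   # disable self-monitoring entirely
  logs: false      # do not collect the agent's own logs
  metrics: false   # do not collect the agent's own metrics
```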
The drop in the graph above marks the point when "Agent monitoring" was disabled. The right-hand legend is the total (sum) of memory utilization across all Elastic Agents in the environment. The total goes from ~39.5 GB (with "Agent monitoring" enabled) to ~30 GB (with it disabled).
On a per-pod basis, average memory utilization goes from ~980 MB to ~740 MB.
I believe this is a bug: simply enabling monitoring of the Elastic Agent itself should not increase memory utilization by ~200-250 MB, roughly 25% of the agent's total memory usage.
When you turn on monitoring, the agent starts three new Beat sub-processes, each of which adds roughly 75 MB of memory usage just to exist.
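Those figures are consistent with the reported drop; a quick back-of-the-envelope check (the ~75 MB per-Beat number is the estimate above, not a measured constant):

```python
# Rough check: three monitoring Beat sub-processes, each with an
# estimated ~75 MB baseline memory footprint just to exist.
beat_subprocesses = 3
mb_per_beat = 75  # approximate baseline per Beat process

monitoring_overhead_mb = beat_subprocesses * mb_per_beat
print(monitoring_overhead_mb)  # 225 — inside the observed ~200-250 MB drop
```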
We have a big architecture change in progress to move away from sub-processes where we can, which should reduce steady-state memory usage well beyond just the monitoring components once it's all done.
We observe this same problem internally, since we use Elastic Agent for observability in our own cloud. We are just starting the work to make and deploy the fix for the monitoring components there first, to prove it out before turning it on for external users.