Data tiers usage is 0 for partially mounted indices #86055
Labels: >bug · :Data Management/Stats (Statistics tracking and retrieval APIs) · Team:Data Management (Meta label for data/management team)

Comments
Pinging @elastic/es-data-management (Team:Data Management)
dakrone added a commit to dakrone/elasticsearch that referenced this issue on May 9, 2022:

The telemetry for data tiers was using the size in bytes; however, for the frozen tier using searchable snapshots, this was the disk usage rather than the size of the actual data. This commit changes the telemetry to use `total_data_set_size` as introduced in elastic#70625 so that the telemetry is correct. Resolves elastic#86055
elasticsearchmachine pushed a commit that referenced this issue on May 12, 2022:

The telemetry for data tiers was using the size in bytes; however, for the frozen tier using searchable snapshots, this was the disk usage rather than the size of the actual data. This commit changes the telemetry to use `total_data_set_size` as introduced in #70625 so that the telemetry is correct. Resolves #86055
dakrone added a commit to dakrone/elasticsearch that referenced this issue on May 12, 2022:

…c#86580) The telemetry for data tiers was using the size in bytes; however, for the frozen tier using searchable snapshots, this was the disk usage rather than the size of the actual data. This commit changes the telemetry to use `total_data_set_size` as introduced in elastic#70625 so that the telemetry is correct. Resolves elastic#86055
elasticsearchmachine pushed a commit that referenced this issue on May 12, 2022:

#86749) The telemetry for data tiers was using the size in bytes; however, for the frozen tier using searchable snapshots, this was the disk usage rather than the size of the actual data. This commit changes the telemetry to use `total_data_set_size` as introduced in #70625 so that the telemetry is correct. Resolves #86055
elasticsearchmachine pushed a commit that referenced this issue on May 12, 2022:

#86748) The telemetry for data tiers was using the size in bytes; however, for the frozen tier using searchable snapshots, this was the disk usage rather than the size of the actual data. This commit changes the telemetry to use `total_data_set_size` as introduced in #70625 so that the telemetry is correct. Resolves #86055
Elasticsearch Version
8.1.2
Installed Plugins
No response
Java Version
bundled
OS Version
ESS on Azure
Problem Description

The _xpack/usage API reports the size of the data set per data tier. However, it uses the local disk size as the metric, and this is 0 (zero) for partially mounted indices, which hides the size of the data set mounted on the frozen tier. We should switch to the total_data_set_size metric that was introduced when we added the frozen tier and partially mounted indices.

Steps to Reproduce

Mount a partially mounted index on a dedicated frozen tier, then call the _xpack/usage API and observe that no (or very little) data is reported for the frozen tier.

Logs (if relevant)
No response
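The bug and fix can be sketched in a few lines. This is a hypothetical illustration in Python, not Elasticsearch's actual Java telemetry code; the dictionary keys mirror the `store size` vs. `total_data_set_size` stats discussed in this issue:

```python
def tier_usage(indices):
    """Sum per-tier data set sizes from a list of index stats.

    Hypothetical sketch of the telemetry aggregation. The original bug
    was summing the local store size, which is 0 for partially mounted
    (frozen) indices because their data stays in the snapshot repository.
    The fix sums total_data_set_size instead, which counts the full
    data set regardless of how much is cached locally.
    """
    totals = {}
    for idx in indices:
        tier = idx["tier"]
        # Fixed: use total_data_set_size, not idx["store_size"]
        totals[tier] = totals.get(tier, 0) + idx["total_data_set_size"]
    return totals


indices = [
    # A regular hot-tier index: store size equals data set size.
    {"tier": "hot", "store_size": 100, "total_data_set_size": 100},
    # A partially mounted frozen index: local store size is 0,
    # but the mounted data set is 500.
    {"tier": "frozen", "store_size": 0, "total_data_set_size": 500},
]
print(tier_usage(indices))  # frozen reports 500 instead of 0
```

Summing `store_size` over the same input would report `frozen: 0`, which is exactly the symptom described above.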