Compactor should ignore dirty data or automatically delete corrupted data #4046
Comments
I think we can/should do a better sanity check on these blocks, i.e. check whether meta.json, the index, and the chunks are present. Looking at the errors, the compactor simply assumes the index file is there. I'm making some assumptions about the logic here, but the compactor sees the ULID, downloads the entire folder, and does its thing. In your case it fails because the index file is missing. But what if the index file is in the object store and the download of the contents simply failed partway? What if someone made an 'oopsie' and set the wrong permissions on a file, or what if the disk is full? This could even happen when, for example, the sidecar ships its data but fails somewhere and misses the index. In that case the block is not broken at all and could be fixed. IMO the compactor's job is pretty much irreversible. 'Dirty data' or even 'corrupted data' should be the exception rather than the standard. So my initial thought is that we should not make assumptions and, unfortunately, require manual action as a sanity check before proceeding. Do you perhaps have a proposal for when we could define data as corrupted and 100% not fixable? Or WDYT?
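For illustration, a minimal Go sketch of the kind of per-block sanity check described above could look like the following. BlockChecker and sanityCheckBlock are hypothetical stand-ins, not actual Thanos code; the real objstore.Bucket exposes equivalent Exists/Iter calls.

```go
package blockcheck

import (
	"context"
	"fmt"
	"path"
)

// BlockChecker is a stand-in for the parts of an object-store client this
// sketch needs; the real Thanos objstore.Bucket offers equivalent calls.
type BlockChecker interface {
	Exists(ctx context.Context, name string) (bool, error)
	Iter(ctx context.Context, dir string, f func(name string) error) error
}

// sanityCheckBlock verifies that a block directory in object storage contains
// the pieces compaction relies on: meta.json, index, and at least one chunk
// segment. It returns an error naming what is missing, so an operator can
// decide whether the block is repairable (e.g. re-upload a missed index) or
// truly corrupted.
func sanityCheckBlock(ctx context.Context, bkt BlockChecker, blockID string) error {
	for _, required := range []string{"meta.json", "index"} {
		ok, err := bkt.Exists(ctx, path.Join(blockID, required))
		if err != nil {
			return fmt.Errorf("checking %s/%s: %w", blockID, required, err)
		}
		if !ok {
			return fmt.Errorf("block %s is missing %s", blockID, required)
		}
	}

	// Chunk segments live under <ULID>/chunks/; require at least one file.
	hasChunks := false
	err := bkt.Iter(ctx, blockID+"/chunks/", func(string) error {
		hasChunks = true
		return nil
	})
	if err != nil {
		return fmt.Errorf("listing chunks of block %s: %w", blockID, err)
	}
	if !hasChunks {
		return fmt.Errorf("block %s has no chunk files", blockID)
	}
	return nil
}
```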
I get your meaning and agree that 'dirty data' or even 'corrupted data' shouldn't be treated as the standard case. I think we can raise an error when we meet corrupted data, but without keeping the container in a restart loop. We should ensure the application does not crash because of such errors. WDYT? @wiardvanrij
Sorry, I have been busy recently.
Thanks a lot for this report; we sometimes hit a similar issue when we have a broken, manually uploaded block - the compactor is halted and awaits a manual decision. Unfortunately skipping is not easy right now, as we don't have a mechanism for it, and continuing as in the PR does not help either (compaction will pick up the same block again in the next iteration). We are discussing Compact v2 today, and I will bring up this issue.
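One way such a skip mechanism could work (purely an illustration; the marker file name, the MarkerBucket interface, and the functions below are made up and were not part of Thanos at the time of this issue) is to persist a small marker object next to the broken block, so the planner excludes it on subsequent iterations instead of halting on it again:

```go
package compactskip

import (
	"context"
	"fmt"
	"io"
	"path"
	"strings"
)

// MarkerBucket is a stand-in for the parts of an object-store client this
// sketch needs; the real Thanos objstore.Bucket has equivalent operations.
type MarkerBucket interface {
	Exists(ctx context.Context, name string) (bool, error)
	Upload(ctx context.Context, name string, r io.Reader) error
}

// skipMarker is a made-up file name used only for this illustration.
const skipMarker = "skip-compaction.json"

// MarkBlockSkipped writes a marker object next to the block so that later
// compaction iterations can recognise it and leave it out of the plan.
func MarkBlockSkipped(ctx context.Context, bkt MarkerBucket, blockID, reason string) error {
	body := fmt.Sprintf("{\"id\": %q, \"reason\": %q}", blockID, reason)
	return bkt.Upload(ctx, path.Join(blockID, skipMarker), strings.NewReader(body))
}

// ShouldSkip lets the planner filter marked blocks out of the next iteration.
func ShouldSkip(ctx context.Context, bkt MarkerBucket, blockID string) (bool, error) {
	return bkt.Exists(ctx, path.Join(blockID, skipMarker))
}
```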
OK. I hope we can find a reasonable solution and optimize this, and I am willing to help complete it.
Hello 👋 Looks like there was no activity on this issue for the last two months.
Closing for now as promised, let us know if you need this to be reopened! 🤗
Thanos, Prometheus and Golang version used:
Thanos version: v0.17.2
Prometheus: v2.11.0
go version go1.14.4
Object Storage Provider:
Ceph S3.
What happened:
Thanos compact is stuck in a continuous restart loop when a block's index file is missing.
Error log:
Listing of the block's data/chunk folder in Ceph:
I think we can actively skip the damaged block and continue merging.
https://github.com/thanos-io/thanos/blob/4bf956deb64b2a12c8c26d28f03a6795d75ada4f/pkg/compact/compact.go#L722-739
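As a rough illustration of what "skip and continue" could look like around the compaction loop linked above (all identifiers here - compactGroup, verifyBlock, compactBlocks - are hypothetical placeholders, not the actual code at that link):

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

// verifyBlock and compactBlocks are placeholders for the real checks and
// compaction done by Thanos; they exist only so this sketch builds on its own.
func verifyBlock(ctx context.Context, id string) error {
	if id == "broken" {
		return errors.New("index file missing")
	}
	return nil
}

func compactBlocks(ctx context.Context, ids []string) error {
	fmt.Println("compacting:", ids)
	return nil
}

// compactGroup shows the proposed behaviour: skip blocks that fail
// verification and continue with the rest, instead of returning an error that
// sends the whole compactor into a crash/restart loop.
func compactGroup(ctx context.Context, blockIDs []string) error {
	healthy := make([]string, 0, len(blockIDs))
	for _, id := range blockIDs {
		if err := verifyBlock(ctx, id); err != nil {
			// Leave the block in the bucket for manual inspection; just log it.
			fmt.Printf("skipping block %s: %v\n", id, err)
			continue
		}
		healthy = append(healthy, id)
	}
	if len(healthy) < 2 {
		return nil // not enough healthy blocks left to merge this iteration
	}
	return compactBlocks(ctx, healthy)
}

func main() {
	_ = compactGroup(context.Background(), []string{"01ABC", "broken", "01DEF"})
}
```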
What you expected to happen:
The compactor should not keep restarting.
How to reproduce it (as minimally and precisely as possible):
When chunk data is lost (e.g. a block's index file is missing from object storage).