Monstache did not back off writing data when ElasticSearch disk was full (http code 429), causing log spam #702
Hi, colleague of Manuel here. The specific error message we got was:
It was repeated 24,500,000 times over a span of 10 minutes, totaling roughly 4 GiB of logs. The steps to reproduce are as follows (though we have not yet investigated whether they can be minimized):
Additionally, here is a redacted copy of the config file with which we observed the issue:
I'm curious and am investigating possible causes in the source code right now. A brief look tells me that the ElasticSearch library just indiscriminately calls the error handler for everything thrown at it via Add(), so as long as the ingress side keeps providing data we end up with one error per ingested item. It's unclear to me, however, at which point throttling should best take place.
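For what it's worth, here is a minimal sketch of how per-item 429 failures could be collapsed into a single back-off signal instead of one error log per ingested item. It assumes the shape of the olivere/elastic BulkProcessor after-callback that Monstache builds on; the throttle channel, the 5-second pause, and the client settings are hypothetical illustrations, not Monstache's actual code.

```go
package main

import (
	"context"
	"log"
	"net/http"
	"time"

	elastic "github.com/olivere/elastic/v7"
)

// throttle is a hypothetical channel the ingestion loop could listen on to pause reads.
var throttle = make(chan time.Duration, 1)

// afterBulk inspects a finished bulk request. Instead of logging every rejected
// document, it counts 429 rejections and emits a single back-off signal.
func afterBulk(executionID int64, requests []elastic.BulkableRequest, response *elastic.BulkResponse, err error) {
	if err != nil {
		log.Printf("bulk %d failed: %v", executionID, err)
		return
	}
	if response == nil || !response.Errors {
		return
	}
	rejected := 0
	for _, item := range response.Failed() {
		if item.Status == http.StatusTooManyRequests { // 429: the cluster is pushing back
			rejected++
		}
	}
	if rejected > 0 {
		// One log line per bulk request, not one per rejected document.
		log.Printf("bulk %d: %d documents rejected with 429, backing off", executionID, rejected)
		select {
		case throttle <- 5 * time.Second: // hypothetical pause signal for the reader loop
		default:
		}
	}
}

func main() {
	client, err := elastic.NewClient(elastic.SetURL("http://localhost:9200"))
	if err != nil {
		log.Fatal(err)
	}
	// Wire the callback into a bulk processor (worker count chosen arbitrarily here).
	_, err = client.BulkProcessor().After(afterBulk).Workers(1).Do(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```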
Hi, I pushed a new release that backs off when indexing errors happen, to mitigate the log flooding.
Problem: Monstache doesn't back off writing data to Elasticsearch, even when it receives HTTP status code 429.
Details:
What Should Happen:
Monstache should stop and wait before retrying when it receives a 429 error from Elasticsearch. This would prevent excessive logging and avoid flooding internal monitoring systems.
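As an illustration of the expected behaviour, here is a minimal, self-contained sketch of retrying a bulk write with exponential back-off while Elasticsearch keeps answering 429. The indexBatch function and the timing values are hypothetical placeholders, not part of Monstache or its configuration.

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

var errTooManyRequests = errors.New("elasticsearch kept returning 429")

// indexBatch is a hypothetical stand-in for one bulk write to Elasticsearch.
// It returns the HTTP status code of the bulk request and a transport error, if any.
func indexBatch() (int, error) {
	return http.StatusOK, nil
}

// writeWithBackoff retries indexBatch with exponential back-off while the
// cluster answers 429, logging once per back-off step rather than per document.
func writeWithBackoff(maxRetries int) error {
	wait := 500 * time.Millisecond
	for attempt := 0; attempt <= maxRetries; attempt++ {
		status, err := indexBatch()
		if err != nil {
			return err // transport-level failure, not a throttling response
		}
		if status != http.StatusTooManyRequests {
			return nil // success (or at least not a 429)
		}
		fmt.Printf("attempt %d throttled (429), sleeping %s\n", attempt, wait)
		time.Sleep(wait)
		if wait < 30*time.Second {
			wait *= 2 // exponential growth, capped at roughly 30 seconds
		}
	}
	return errTooManyRequests
}

func main() {
	if err := writeWithBackoff(5); err != nil {
		fmt.Println("giving up:", err)
	}
}
```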