When one or more Items fail ingestion into Elasticsearch during a bulk write, the returned error message is long and unhelpful: it is one big string with info from all the 'docs' that were in the batch.
In that string, each record starts with "_index" and ends with "status". Records for documents that were ingested successfully include "failed=0".
However, a failed record actually looks different:
_index=items, _type=doc, _id=LC80231212013343LGN00, status=400, type=mapper_parsing_exception, reason=failed to parse [geometry], type=parse_exception, reason=invalid number of points in LinearRing (found [1] - must be >= [4]),
It doesn't contain "successful", "failed", or similar terms; instead it has status=400 and a "reason" field, which is the actual error.
We don't want to log the entire string; instead we want to log each item in the batch that failed, if any.
For each failure we only need to log the "_id" field and the "reason" field of the error.
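A minimal sketch of the kind of change we want, assuming the bulk response is available as the parsed JSON body returned by the Elasticsearch bulk API (the `log_bulk_failures` helper name is hypothetical): iterate the per-item results and log only the "_id" and the error "reason" of items that failed.

```python
import logging

logger = logging.getLogger(__name__)


def log_bulk_failures(bulk_response: dict) -> None:
    """Log only the "_id" and error "reason" of items that failed in a bulk write."""
    if not bulk_response.get("errors"):
        # Every document in the batch was ingested successfully; nothing to log.
        return
    for wrapper in bulk_response.get("items", []):
        # Each entry is keyed by the action that was performed (index, create, update, delete).
        for action, item in wrapper.items():
            error = item.get("error")
            if error:
                logger.error(
                    "Failed to ingest %s: %s",
                    item.get("_id"),
                    error.get("reason"),
                )
```

The helper would be called with whatever the client's bulk call returns (e.g. `log_bulk_failures(response)`), replacing the current behavior of logging the whole concatenated string.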