Compaction job failed #7655
-
For some reason the compaction job is failing, and the error it reports is "request entity too large".
-
Hello, it seems that the compactor has a problem uploading a new block back to object storage. The file being uploaded is typically about 500 MB. AWS S3 can handle files of that size without any problem, but perhaps there's a proxy in between, or you use a different storage backend (not S3)?
Nginx has an option, `client_max_body_size`, to disable this check (or at least increase the limit). It defaults to "1m" (1 MB). Files in blocks are often bigger: ~500 MB for chunks files, and the index file can be gigabytes, depending on how much data your Mimir is ingesting. See https://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size
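If nginx is the proxy in front of your object storage, a minimal config sketch might look like the following (the server name and upstream are hypothetical placeholders, not from this thread):

```nginx
http {
    server {
        listen 443 ssl;
        server_name s3-proxy.example.com;   # hypothetical proxy host

        location / {
            # Allow large block uploads from the compactor.
            # "0" disables the body-size check entirely; alternatively,
            # set an explicit cap such as "4g" to cover large index files.
            client_max_body_size 0;

            proxy_pass http://object-storage-backend;  # hypothetical upstream
        }
    }
}
```

Note that `client_max_body_size` can also be set per-`server` or per-`location`, so you can raise the limit only for the object-storage path instead of globally.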