When the OpenSearch service is unavailable and the buffers are full, we see BufferOverflowError errors, and messages are sent to the @ERROR label when shutting down Fluentd.
Example errors:
2022-03-19 18:29:59 +0000 [warn]: #2 send an error event stream to @ERROR: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/usr/local/bundle/gems/fluentd-1.14.5/lib/fluent/plugin/buffer.rb:327:in `write'" tag=""
2022-03-19 18:29:59 +0000 [warn]: #3 send an error event stream to @ERROR: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/usr/local/bundle/gems/fluentd-1.14.5/lib/fluent/plugin/buffer.rb:327:in `write'" tag=""
2022-03-19 18:29:59 +0000 [warn]: #0 send an error event stream to @ERROR: error_class=Fluent::Plugin::Buffer::BufferOverflowError error="buffer space has too many data" location="/usr/local/bundle/gems/fluentd-1.14.5/lib/fluent/plugin/buffer.rb:327:in `write'" tag=""
I am not sure this issue is limited to the OpenSearch output plugin.
OS version: Docker image built off ghcr.io/calyptia/fluentd:v1.14.5-debian-1.0
Bare Metal or within Docker or Kubernetes or others?: Docker image built off ghcr.io/calyptia/fluentd:v1.14.5-debian-1.0
Fluentd v1.0 or later: 1.14.5
OpenSearch plugin version: fluent-plugin-opensearch version 1.0.2
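For context, a minimal buffer configuration of the kind described (an illustrative sketch; the plugin options are real, but the match pattern and sizes are assumptions, since the actual config was not included in this report):

```
<match app.**>
  @type opensearch
  # host, port, index settings omitted
  <buffer>
    @type memory
    overflow_action block   # wait when the buffer is full (the behavior under discussion)
    total_limit_size 64MB   # illustrative size
    flush_interval 5s
  </buffer>
</match>
```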
This parameter is not intended to improve throughput. The block overflow_action should be used for batch operations, such as Embulk-like workloads.
For ordinary cases, you should use the throw_exception or drop_oldest_chunk actions when the buffer is full.
FYI, the official Fluentd documentation says:
block: wait until buffer can store more data.
After buffer is ready for storing more data, writing buffer is retried.
Because of such behavior, block is suitable for processing batch execution,
so do not use for improving processing throughput or performance.
@cosmo0920 the aforementioned configuration is not intended to improve throughput.
It is intended to serve as an 'at-least-once' style aggregator.
With the noted behavior, messages are arguably lost, in that they do not stay in the standard pipeline and instead get sent off to @ERROR.
Again - I am not sure this issue is limited to the OpenSearch Output plugin
This occurs when the wrong option is selected. You should select the throw_exception or drop_oldest_chunk actions. This is the Fluentd specification.
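Following that recommendation, the buffer section would instead look something like this (a sketch only; the path and sizes are illustrative assumptions):

```
<buffer>
  @type file
  path /var/log/fluent/opensearch-buffer    # illustrative path
  overflow_action drop_oldest_chunk         # or throw_exception
  total_limit_size 512MB                    # illustrative size
  retry_forever true
</buffer>
```

Note that drop_oldest_chunk discards data by design when the buffer fills, so it trades the at-least-once goal described above for availability; throw_exception instead pushes the error back to the emitting input.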
Note: when using monitor_agent, we see buffer_available_buffer_space_ratios actually go negative.
https://docs.fluentd.org/input/monitor_agent
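For reference, monitor_agent is enabled with a source block like the following (bind and port shown are the documented defaults), after which buffer metrics such as the ratio above can be read over HTTP:

```
<source>
  @type monitor_agent
  bind 0.0.0.0
  port 24220
</source>
```

Metrics are then available from the documented endpoint, e.g. `curl http://localhost:24220/api/plugins.json`.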