Releases: aws/aws-for-fluent-bit
AWS for Fluent Bit 2.31.1
This release includes:
- Fluent Bit 1.9.10
- Amazon CloudWatch Logs for Fluent Bit 1.9.1
- Amazon Kinesis Streams for Fluent Bit 1.10.1
- Amazon Kinesis Firehose for Fluent Bit 1.7.1
Compared to 2.31.0, this release adds the following fixes and features that we are working on getting accepted upstream:
- Feature - Support `Retry_Limit` option in S3 plugin to set retries fluent-bit:6475
- Bug - Resolve a rare Datadog segfault that occurs when remapping tags aws-for-fluent-bit:491
Same as 2.31.0, this release includes the following fixes and features that we are working on getting accepted upstream:
- Feature - Add `kinesis_firehose` and `kinesis_streams` support for `time_key_format` millisecond precision with the `%3N` option, and nanosecond precision with the `%9N` and `%L` options fluent-bit:2831
- Feature - Support OpenSearch Serverless data ingestion via OpenSearch plugin fluent-bit:6448
- Enhancement - Transition S3 to fully synchronous file uploads to improve plugin stability fluent-bit:6573
- Bug - Mitigate Datadog output plugin issue by reverting recent PR aws-for-fluent-bit:491
- Bug - Format S3 filename with the timestamp from the first log in the uploaded file, rather than the time the first log was buffered by the S3 output aws-for-fluent-bit:459
- Bug - Resolve S3 logic to display the `log_key` missing warning message when the configured `log_key` field is missing from the log payload fluent-bit:6557
- Bug - ECS Metadata filter: gracefully handle task metadata query errors and cache metadata processing state to improve performance aws-for-fluent-bit:505
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | 0%(847) |
| | | Log Duplication | ✅ | 0%(23517) | 0%(51518) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | stdstream | Log Loss | 0%(20000) | 16%(2525305) | 29%(5276105) |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | 16%(3000958) |
| | | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
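The cell format used in these tables can be reproduced with a short helper (a sketch; `pct` is a hypothetical name, not part of the load testing framework, and the ~18M total is taken from the note above):

```python
def pct(affected: int, total: int) -> str:
    """Format a table cell as percentage(affected-record-count)."""
    return f"{round(100 * affected / total)}%({affected})"

# 1064 duplicates out of 18M input records rounds to 0%
print(pct(1064, 18_000_000))       # 0%(1064)
print(pct(5_276_105, 18_000_000))  # 29%(5276105)
```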
AWS for Fluent Bit 2.31.0
This release includes:
- Fluent Bit 1.9.10
- Amazon CloudWatch Logs for Fluent Bit 1.9.1
- Amazon Kinesis Streams for Fluent Bit 1.10.1
- Amazon Kinesis Firehose for Fluent Bit 1.7.1
Compared to 2.30.0, this release adds the following fixes and features that we are working on getting accepted upstream:
- Feature - Add `kinesis_firehose` and `kinesis_streams` support for `time_key_format` millisecond precision with the `%3N` option, and nanosecond precision with the `%9N` and `%L` options fluent-bit:2831
- Bug - Format S3 filename with the timestamp from the first log in the uploaded file, rather than the time the first log was buffered by the S3 output aws-for-fluent-bit:459
- Enhancement - Transition S3 to fully synchronous file uploads to improve plugin stability fluent-bit:6573
- Bug - Resolve S3 logic to display the `log_key` missing warning message when the configured `log_key` field is missing from the log payload fluent-bit:6557
- Bug - ECS Metadata filter: gracefully handle task metadata query errors and cache metadata processing state to improve performance aws-for-fluent-bit:505
Same as 2.30.0, this release includes the following fixes and features that we are working on getting accepted upstream:
- Feature - Support OpenSearch Serverless data ingestion via OpenSearch plugin fluent-bit:6448
- Bug - Mitigate Datadog output plugin issue by reverting recent PR aws-for-fluent-bit:491
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(946) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | 0%(83093) |
| | | Log Duplication | ✅ | 0%(13294) | 2%(495013) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(26270) |
| s3 | stdstream | Log Loss | ✅ | 3%(541520) | 29%(5305665) |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | 9%(1657116) |
| | | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | 16%(198844) | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | 7%(43686) | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.30.0
This release includes:
- Fluent Bit 1.9.10
- Amazon CloudWatch Logs for Fluent Bit 1.9.1
- Amazon Kinesis Streams for Fluent Bit 1.10.1
- Amazon Kinesis Firehose for Fluent Bit 1.7.1
Compared to 2.29.1, this release adds the following feature that we are working on getting accepted upstream:
- Feature - Support OpenSearch Serverless data ingestion via OpenSearch plugin fluent-bit:6448
Same as 2.29.1, this release includes the following fix that we are working on getting accepted upstream:
- Bug - Mitigate Datadog output plugin issue by reverting recent PR aws-for-fluent-bit:491
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(3239) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | 0%(3239) |
| | | Log Duplication | ✅ | 0%(18526) | 0%(37223) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | 0%(2969) | ✅ |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.29.1
This release includes:
- Fluent Bit 1.9.10
- Amazon CloudWatch Logs for Fluent Bit 1.9.1
- Amazon Kinesis Streams for Fluent Bit 1.10.1
- Amazon Kinesis Firehose for Fluent Bit 1.7.1
This release includes the following fixes for AWS customers that we are working on getting accepted upstream:
- Bug - Mitigate Datadog output plugin issue by reverting recent PR aws-for-fluent-bit:491
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | 0%(500) | ✅ |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | 0%(10932) |
| | | Log Duplication | 0%(500) | 0%(23114) | 0%(117546) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(29711) |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.29.0
This release includes:
- Fluent Bit 1.9.10
- Amazon CloudWatch Logs for Fluent Bit 1.9.1
- Amazon Kinesis Streams for Fluent Bit 1.10.1
- Amazon Kinesis Firehose for Fluent Bit 1.7.1
Compared to 2.28.4, this release adds:
- Feature - Add `store_dir_limit_size` option fluentbit-docs:971
- Feature - New filter for AWS ECS Metadata fluentbit:5898
- Enhancement - Different user agent on Windows vs Linux fluentbit:6325
- Bug - Resolve Fluent Bit networking hangs affecting CloudWatch plugin by migrating to async networking + sync core scheduler fluentbit:6339
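The `store_dir_limit_size` option caps the local buffer directory used by the S3 output; a minimal sketch (the bucket name, paths, and limit are placeholders, so check the plugin docs for the exact semantics):

```ini
[OUTPUT]
    Name                  s3
    Match                 *
    bucket                my-log-bucket       # placeholder
    region                us-west-2
    store_dir             /tmp/fluent-bit/s3
    store_dir_limit_size  500M                # new: cap how large the buffer dir may grow
```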
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | 0%(5870) | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(18460) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.28.4
This release includes:
- Fluent Bit 1.9.9
- Amazon CloudWatch Logs for Fluent Bit 1.9.1
- Amazon Kinesis Streams for Fluent Bit 1.10.1
- Amazon Kinesis Firehose for Fluent Bit 1.7.1
Important Note:
- Two security vulnerabilities were found in Amazon Linux, which we use as our base image: ALAS-40674 and ALAS-32207. This new image is based on an updated version of Amazon Linux that resolves these CVEs.
Compared to 2.28.3, this release adds the following enhancement that we are working on getting accepted upstream:
- Enhancement - Separate AWS user agents for Windows and Linux fluentbit:6325
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | 0%(5490) | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | 0%(17696) |
| | | Log Duplication | ✅ | ✅ | 0%(21306) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | 0%(53500) | 0%(1000) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | stdstream | Log Loss | 16%(99000) | 28%(339777) | 25%(458092) |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | 14%(84144) | 18%(222962) | 43%(779277) |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.28.3
This release includes:
- Fluent Bit 1.9.9
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
Important Note:
- A security vulnerability was found in Golang, which we use to build our Go plugins. This new image builds the Go plugins with the latest Golang, resolving the CVE.
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | 0%(2695) | ✅ |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(20582) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(1000) |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | 0%(500) |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | 11%(200028) |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | 5%(66516) | 33%(599933) |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.28.2
This release includes:
- Fluent Bit 1.9.9
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
Compared to 2.28.1, this release adds:
- Bug - Stop `trace_error` from truncating the OpenSearch API call response fluentbit:5788
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | source | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|---|
| kinesis_firehose | stdstream | Log Loss | 0%(339) | ✅ | 0%(10173) |
| | | Log Duplication | ✅ | 0%(5210) | 1%(358871) |
| kinesis_firehose | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | 0%(964) | 0%(16734) |
| kinesis_streams | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| kinesis_streams | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | 0%(32586) | 0%(37918) | 0%(25494) |
| s3 | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| s3 | tcp | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| plugin | source | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|---|
| cloudwatch_logs | stdstream | Log Loss | ✅ | ✅ | ✅ |
| | | Log Duplication | ✅ | ✅ | ✅ |
| cloudwatch_logs | tcp | Log Loss | ✅ | ✅ | 0%(1370) |
| | | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.28.1
This release includes:
- Fluent Bit 1.9.8
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
Compared to 2.28.0, this release adds the following fix that we are working on getting accepted upstream:
- Bug - Resolve long-tag segfault issue; without this patch, Fluent Bit may segfault when it encounters tags over 256 characters long fluentbit:5753
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|
| kinesis_firehose | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | 0%(500) | ✅ |
| kinesis_streams | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | 0%(23000) | ✅ | 0%(30996) |
| s3 | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | ✅ |
| plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|
| cloudwatch_logs | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | ✅ |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.
AWS for Fluent Bit 2.28.0
This release includes:
- Fluent Bit 1.9.7
- Amazon CloudWatch Logs for Fluent Bit 1.9.0
- Amazon Kinesis Streams for Fluent Bit 1.10.0
- Amazon Kinesis Firehose for Fluent Bit 1.7.0
AWS For Fluent Bit New Feature Announcement:
- New Image Tags - Added `init`-tagged images with an init process that downloads multiple config and parser files from S3 and sets ECS metadata as environment variables. Check out the docs for the new Fluent Bit ECS init image tags.
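With the `init` tags, extra config files are typically referenced through container environment variables in the ECS task definition. A hedged sketch of such a task-definition fragment (the variable-name convention, bucket ARN, and file path here are illustrative; see the init image docs for the exact contract):

```json
"environment": [
    { "name": "aws_fluent_bit_init_s3_1",
      "value": "arn:aws:s3:::my-config-bucket/extra.conf" },
    { "name": "aws_fluent_bit_init_file_1",
      "value": "/ecs/extra-parser.conf" }
]
```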
Compared to 2.27.0, this release adds:
- Feature - Add gzip compression support for multipart uploads in S3 output plugin
- Bug - Fix inconsistent rendering of `$TAG[n]` in S3 output key formatting aws-for-fluent-bit:376
- Bug - Fix concurrency issue in S3 key formatting
- Bug - `cloudwatch_logs` plugin: skip counting empty events
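A minimal sketch of the new gzip support on the S3 multipart upload path (the bucket name and region are placeholders; previously gzip compression was tied to the PutObject path):

```ini
[OUTPUT]
    Name            s3
    Match           *
    bucket          my-log-bucket   # placeholder
    region          us-west-2
    use_put_object  Off             # multipart upload path
    compression     gzip            # now supported for multipart uploads too
```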
We ran the newly released image through our ECS load testing framework; the results below benchmark aws-for-fluent-bit under different input loads. Learn more about the load test.
| plugin | metric | 20 MB/s | 25 MB/s | 30 MB/s |
|---|---|---|---|---|
| kinesis_firehose | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | 0%(920) | ✅ | ✅ |
| kinesis_streams | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | 0%(500) | 0%(1000) | 0%(500) |
| s3 | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | ✅ |
| plugin | metric | 1 MB/s | 2 MB/s | 3 MB/s |
|---|---|---|---|---|
| cloudwatch_logs | Log Loss | ✅ | ✅ | ✅ |
| | Log Duplication | ✅ | ✅ | 2%(53893) |
Note:
- A green check ✅ means no log loss or no log duplication was observed.
- A number in parentheses is the count of affected records. For example, 0%(1064) at 30 MB/s throughput means 1064 duplicate records out of roughly 18M input records, so the duplication percentage rounds to 0%.
- For CloudWatch output, the only throughput at which we consistently see no log loss is 1 MB/s. At 2 MB/s and beyond, we occasionally see some log loss and throttling.
- Log loss is the percentage of data lost; log duplication is the percentage of duplicate logs received at the destination. Your results may differ, since they are influenced by many factors such as configuration and environment settings. Log duplication is caused exclusively by partially succeeded batches that were retried, so it is random.