Background
We noticed that some log messages were never delivered to SumoLogic. Those same log messages did end up in StackDriver, despite a similar log collection setup (k8s -> fluentd daemonset -> log sink).
The core issue was that our sumologic-fluentd pods had a memory leak, which caused the pods to be SIGKILLed by Kubernetes when they exceeded their memory limits. When that happened, all messages currently in the memory-backed output buffer were lost during the restart and were never delivered.
Feature Request
It would be nice if the default settings set up a file-based output buffer, so that messages sitting in the output buffer at the time of a crash could still be delivered once the pod is restarted.
For example, the GKE-provided fluentd pod sets up a file-based buffer like this.
This would help not only in the case of a crash, but also in any other circumstance where k8s might kill or restart the pod.
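For illustration, here is a minimal sketch of what such a default could look like, assuming the fluent-plugin-sumologic_output plugin and fluentd v1 buffer syntax; the buffer path, limits, and the SUMO_ENDPOINT variable below are placeholder assumptions, not the chart's actual values:

```
# Hypothetical output section with a file-backed buffer instead of the
# default memory buffer, so buffered chunks survive a container crash.
<match **>
  @type sumologic                      # assumes fluent-plugin-sumologic_output
  endpoint "#{ENV['SUMO_ENDPOINT']}"   # placeholder endpoint source

  <buffer>
    @type file                         # persist chunks to disk rather than RAM
    path /var/log/fluentd-buffers/sumologic.buffer   # placeholder path
    flush_interval 5s
    retry_forever true
    chunk_limit_size 2M
    total_limit_size 512M
  </buffer>
</match>
```

For the buffer to survive the pod being rescheduled (not just the container restarting), the buffer path would also need to sit on a hostPath or other persistent volume mounted into the daemonset pods; older fluentd v0.12-style configs express the same idea with the flat `buffer_type file` / `buffer_path` parameters.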