Improve sql logging by adding additional context. #200
Comments
I'm a little confused about the MDC, but before we go there, I want to make sure you're aware of two existing hooks here.
So the first question is, should we somehow improve the snooper?
Oh, I'm fine with the markers as long as the default is blank.
The more I read this, the more I'm convinced I misread the intent. Is this meant to be some sort of in-VM dashboard, or is it an external process reading logs and you just want to make the parsing easier?
Thanks for the quick reply. I didn't know about those hooks. Here is the kind of marker and MDC usage I have in mind:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

private static final Logger LOGGER = LoggerFactory.getLogger(ExampleApplication.class);
private static final Marker MARKER = MarkerFactory.getMarker("Example marker");

MDC.put("example metric", String.valueOf(123));
try
{
    LOGGER.info(MARKER, "Example metric: " + 123);
}
finally
{
    MDC.remove("example metric");
}
```

I set up a logback console appender with this pattern:
And I get output like:
If I remove the marker and mdc from my pattern:
Then they simply aren't logged:
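(The pattern and the console output referred to above were not captured in this transcript. As an illustration only, not the author's actual configuration, a logback console appender that prints both the marker and the MDC might be configured along these lines:

```xml
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
  <encoder>
    <!-- %marker prints the SLF4J marker, %mdc prints the MDC map -->
    <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} %marker %mdc - %msg%n</pattern>
  </encoder>
</appender>
```

Removing %marker and %mdc from the pattern drops them from the output, which matches the behavior described above.)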
Dropwizard comes with a JSON log format. The same log message gets output like this:

```json
{"level":"INFO","logger":"com.example.ExampleApplication","thread":"main","message":"Example metric: 123","mdc":{"example metric":"123"},"timestamp":1567042492086}
```

I just now realized that the marker is missing. I'll try to fix that upstream in Dropwizard. Another problem is that all MDC values are strings.

Downsides aside, the JSON format, as well as other tools like logstash-logback-encoder or ELK appenders, helps me get this data into Kibana/Grafana, where I can create dashboards. For example, a histogram that shows the sum of all time spent in database queries for each 5-minute window. For this data to be the most useful, I need it to be per query, and to let Kibana/Grafana perform the aggregations.
Exactly. I should have started with that. These additions would make it easier to use the logs in Kibana/Grafana, and probably in other tools too.
Ok, I understand the intent: an external log crawler that prefers a standard, parsable format (JSON). Please submit the code 😄
Will do, thanks 👍
Reladomo's sql logging is so useful that I sometimes leave it turned on in production.
I have some ideas on how to make this logging even more useful.
It's currently hard to log only sql, or only timings, without the other. That's because both log statements use the same logger,
com.gs.fw.common.mithra.sqllogs.MyClass
in this example. I propose adding a marker for each, and adding the markers to the relevant log statements. For example, change this:
to
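(The original before/after snippets are not reproduced above. Purely as an illustrative sketch, where the class, method, marker name, and message text are all assumptions rather than actual Reladomo code, the kind of change being proposed might look like this:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class SqlMarkerSketch
{
    // Hypothetical marker name; not part of Reladomo today.
    private static final Marker SQL_MARKER = MarkerFactory.getMarker("RELADOMO_SQL");

    // Same per-class logger name used in the example above.
    private static final Logger SQL_LOGGER =
            LoggerFactory.getLogger("com.gs.fw.common.mithra.sqllogs.MyClass");

    void logSql(String sql)
    {
        // before (illustrative): no marker, so sql and timing lines are indistinguishable by category
        // SQL_LOGGER.debug("executing: " + sql);

        // after (illustrative): the marker tags the category; the message text is unchanged
        SQL_LOGGER.debug(SQL_MARKER, "executing: " + sql);
    }
}
```

Once a marker is attached, a marker-based filter in the logging backend can keep or drop just one category without touching the message format that existing users rely on.)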
In addition, the timing information could be even more useful if I didn't have to parse it out of the logged message. I propose adding the timing information to MDC. That way I can more easily create dashboards of the amount of time spent on queries over time.
For example, change this:
to
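(The actual before/after snippets are likewise not shown above. As a hypothetical sketch, where the MDC key reladomo.sql.totalTimeMs, the marker name, the message text, and the surrounding method are all assumptions rather than actual Reladomo code, wrapping the existing timing log call with MDC could look like this:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class SqlTimingMdcSketch
{
    // Hypothetical marker and MDC key names; not part of Reladomo today.
    private static final Marker TIMING_MARKER = MarkerFactory.getMarker("RELADOMO_SQL_TIMING");
    private static final Logger SQL_LOGGER =
            LoggerFactory.getLogger("com.gs.fw.common.mithra.sqllogs.MyClass");

    void logTiming(int rowCount, long totalTimeMs)
    {
        // Expose the timing as structured data so appenders can emit it as a separate field.
        MDC.put("reladomo.sql.totalTimeMs", String.valueOf(totalTimeMs));
        try
        {
            // The human-readable message stays as it is today; only the marker and MDC are added.
            SQL_LOGGER.debug(TIMING_MARKER, "retrieved " + rowCount + " rows in " + totalTimeMs + " ms");
        }
        finally
        {
            // Clean up so the key does not leak into unrelated log statements on this thread.
            MDC.remove("reladomo.sql.totalTimeMs");
        }
    }
}
```

With a JSON encoder, the timing then arrives in Kibana/Grafana as its own field, albeit as a string as noted earlier in the thread, so per-query durations can be aggregated without parsing the message text.)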
I'm happy to contribute these changes, but I wanted to discuss them here before working on the whole thing.
I'm starting with the assumption that changing the existing logging is undesirable because users already rely on the current behavior. So this proposal adds markers and MDC but doesn't change, remove, or combine any messages. What do you think?