I have a load test scenario (with k6) which puts some load on an application.
At a certain point, unfortunately, the Prometheus metrics endpoint stops responding and I no longer receive any JVM metrics.
However, the application and the JVM keep responding; only the exporter hangs.
In a thread dump of the JVM taken at this point you can see that the exporter thread is waiting for a JDBC connection.
We know that the JDBC connection pool is under load, but it still works for the application we are testing.
I tried to exclude the pool's MBeans via config.yml, but the Prometheus exporter still tries to access a JDBC connection.
Is this a bug, or am I doing something wrong in the config?
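For context, an exclusion of the pool MBeans in the exporter's config.yml would look roughly like the sketch below. The ObjectName patterns are assumptions (the actual names depend on how the pool registers its MBeans and should be verified via JMX), and older exporter releases name the key blacklistObjectNames instead of excludeObjectNames:

```yaml
# Sketch only: keep the exporter from scraping the JDBC pool MBeans.
# The ObjectName patterns below are assumptions -- verify the real
# names (e.g. with jconsole) before relying on them.
excludeObjectNames:   # named blacklistObjectNames on older exporter releases
  - "tomcat.jdbc:*"
  - "Catalina:type=DataSource,*"
rules:
  - pattern: ".*"
```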
This is one Prometheus thread from the thread dump:
"prometheus-http-1-4" - Thread t@196
java.lang.Thread.State: WAITING
at jdk.internal.misc.Unsafe.park(Native Method)
- waiting to lock <6682e21a> (a java.util.concurrent.locks.ReentrantLock$NonfairSync) owned by "http-nio2-10132-exec-3" t@25
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:938)
at java.util.concurrent.locks.ReentrantLock$Sync.lock(ReentrantLock.java:153)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:322)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:4052)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:272)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:246)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQueryInternal(SQLServerStatement.java:743)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.getSchema(SQLServerConnection.java:6990)
at org.apache.tomcat.jdbc.pool.PooledConnection.getSchema(PooledConnection.java:906)
at jdk.internal.reflect.GeneratedMethodAccessor127.invoke(Unknown Source)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:72)
at jdk.internal.reflect.GeneratedMethodAccessor49.invoke(Unknown Source)
at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:568)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:262)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112)
at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttributes(MBeanSupport.java:213)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttributes(DefaultMBeanServerInterceptor.java:705)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttributes(JmxMBeanServer.java:706)
at io.prometheus.jmx.JmxScraper.scrapeBean(JmxScraper.java:197)
at io.prometheus.jmx.JmxScraper.doScrape(JmxScraper.java:149)
at io.prometheus.jmx.JmxCollector.collect(JmxCollector.java:771)
at io.prometheus.jmx.shaded.io.prometheus.client.Collector.collect(Collector.java:45)
at io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.findNextElement(CollectorRegistry.java:204)
at io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.nextElement(CollectorRegistry.java:219)
at io.prometheus.jmx.shaded.io.prometheus.client.CollectorRegistry$MetricFamilySamplesEnumeration.nextElement(CollectorRegistry.java:152)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.common.TextFormat.writeOpenMetrics100(TextFormat.java:202)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.common.TextFormat.writeFormat(TextFormat.java:57)
at io.prometheus.jmx.shaded.io.prometheus.client.exporter.HTTPServer$HTTPMetricHandler.handle(HTTPServer.java:100)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:95)
at sun.net.httpserver.AuthFilter.doFilter(AuthFilter.java:82)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:98)
at sun.net.httpserver.ServerImpl$Exchange$LinkHandler.handle(ServerImpl.java:851)
at com.sun.net.httpserver.Filter$Chain.doFilter(Filter.java:95)
at sun.net.httpserver.ServerImpl$Exchange.run(ServerImpl.java:818)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.lang.Thread.run(Thread.java:833)
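From the trace, the scrape blocks because reading an MBean attribute delegates to PooledConnection.getSchema, which issues a live SQLServerConnection.getSchema call and has to wait for the connection's lock. To find the exact ObjectName to exclude, the registered MBeans can be enumerated from inside the JVM; a minimal sketch (jconsole works just as well):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class ListMBeans {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // Print every registered ObjectName; the DataSource/PooledConnection
        // bean seen in the stack trace should show up here and can then be
        // matched by an exclude pattern in the exporter config.
        for (ObjectName name : server.queryNames(null, null)) {
            System.out.println(name.getCanonicalName());
        }
    }
}
```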
Thanks,
Torsten
reschkey changed the title to "prometheus metrics endpoint is no longer available when jdbc connection pool is under load." on Sep 15, 2023