java.lang.OutOfMemoryError: Java heap space #61
Settings used by default:
I tuned it via setting the JAVA_OPTS env var in the DC for now:
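The actual value was not captured in this export. As an illustration only, setting JAVA_OPTS on an OpenShift DeploymentConfig can look like the sketch below; the DC name `keycloak` and the heap sizes are placeholders, not the values from this comment:

```sh
# Placeholder DC name and heap values -- not the actual settings from this issue.
oc set env dc/keycloak \
  JAVA_OPTS="-Xms512m -Xmx1024m -XX:MetaspaceSize=96m -XX:MaxMetaspaceSize=256m"

# Confirm the variable is set on the deployment config.
oc set env dc/keycloak --list | grep JAVA_OPTS
```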
That should postpone the error. I would suggest doing heap dumps for further analysis. There is an option to take a heap dump when such an error occurs (-XX:+HeapDumpOnOutOfMemoryError), but also on demand using JVM tools such as jmap. Then we can use VisualVM, for example, to analyze further. I was also thinking during the flight that when doing performance tests using vegeta we are quite blind. We should attach some sort of profiler to really understand what the bottleneck is. I would also suggest talking to the KC guys about our findings.
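A sketch of both approaches; the dump path and the PID lookup are illustrative assumptions:

```sh
# Automatic dump on OOM: append these flags to JAVA_OPTS
# (/tmp/dumps is an assumed path, not one from the issue).
JAVA_OPTS="$JAVA_OPTS -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps"

# On-demand dump with jmap: locate the JVM PID with jps
# (a WildFly server shows up as jboss-modules.jar), then dump live objects.
PID=$(jps | grep jboss-modules | awk '{print $1}')
jmap -dump:live,format=b,file=/tmp/dumps/heap.hprof "$PID"
# The resulting .hprof file can then be opened in VisualVM.
```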
A third option is to get the dump using the JMX console available in WildFly, but that is probably the last resort, as our images would have to have an admin user configured to get there.
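For reference, a WildFly management user is created with the bundled add-user.sh script; the credentials below are placeholders:

```sh
# Placeholder credentials; adds a ManagementRealm user so the
# JMX/admin console becomes reachable. $JBOSS_HOME is the WildFly install dir.
$JBOSS_HOME/bin/add-user.sh -u admin -p 'changeme-placeholder' --silent
```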
We've been collecting data from pmcd since this Thursday about the performance of the containers in our test cluster. I think the bottleneck is CPU-bound rather than memory-bound. But yeah, let's test it with this configuration and ping the KC guys.
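To eyeball the CPU-vs-memory split from pmcd, a couple of stock PCP metrics can be sampled as below; the hostname is a placeholder:

```sh
# Sample load average and memory usage every 5 seconds from a pmcd host
# ("osd-test-node" is a placeholder hostname).
pmval -h osd-test-node -t 5 kernel.all.load
pmval -h osd-test-node -t 5 mem.util.used
```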
For your entertainment: https://osd-monitor-keycloak-cluster-test.b6ff.rh-idev.openshiftapps.com/ . You can ask me for the credentials.
Reopening bc we need to investigate it further and tune accordingly. |
PCP gives you hardware-level monitoring, but we still have no idea what is really consuming that much memory from the JVM perspective. That's why a heap dump would help here; worst case, there is some memory leaking somewhere. We should take care of storing the dumps on the PV instead of in the container itself.
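A sketch of mounting a PVC for the dumps; the claim name, mount path, and DC name are assumptions:

```sh
# Mount an existing PVC (placeholder claim name) into the container.
oc set volume dc/keycloak --add --name=heap-dumps \
  --type=persistentVolumeClaim --claim-name=heap-dumps-pvc \
  --mount-path=/dumps

# Point OOM heap dumps at the mounted volume so they survive pod restarts.
oc set env dc/keycloak \
  JAVA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/dumps"
```

Note that setting JAVA_OPTS this way replaces any previous value, so the existing heap flags would need to be repeated.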
We might also think about using
Today our sso.prod-preview died bc of:
```
16:00:36,097 ERROR [io.undertow.request] (default task-53) UT005023: Exception handling request to /auth/realms/fabric8/.well-known/openid-configuration: java.lang.OutOfMemoryError: Java heap space
```
The deployment was using 1GB of RAM. As far as I remember, the limit is 2GB.
So we need to check the JVM settings in our WildFly/KC and tune them accordingly.
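One way to check what the JVM actually resolves for its heap inside the running container; the pod name is a placeholder:

```sh
# Print the effective max heap size the JVM computes in the pod
# ("keycloak-1-abcde" is a placeholder pod name).
oc exec keycloak-1-abcde -- java -XX:+PrintFlagsFinal -version | grep -i maxheapsize

# Or inspect the flags of the already-running server process
# (assumes the java process is PID 1 in the container).
oc exec keycloak-1-abcde -- jcmd 1 VM.flags
```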
cc: @hectorj2f @bartoszmajsak