I am working on migrating an application from WAS ND to OpenShift (using Docker images from the WAS Base repo). The migration has been straightforward for every application I have worked on so far, but one specific application runs fine with a single WAS pod and fails to start when I bring up a second pod. The application reads the ${WAS_SERVER_NAME} WebSphere variable and stores it in a DB table, and for its internal load balancing it refuses to start two JVMs with the same name. Since I am using the WAS traditional image, every JVM is named "server1", so I am currently limited to one instance of this application. I would like to generate a random UUID for the JVM name instead of "server1", so that when I start, say, 5 pods of this application, each one gets a unique name and I do not run into this issue again.
I have tried using /work/update_config.sh and /work/update_config.py, generating random UUIDs and passing them to the Python script, to see whether I can fix this by only updating the WebSphere variable without having to rename the actual JVM.
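Roughly what I have been attempting, as a minimal sketch run at container start (the install path, profile, cell/node/server names, and the -scope string format are assumptions based on the defaults in the WAS traditional image, and I have not confirmed that a server-scoped variable definition actually shadows the built-in ${WAS_SERVER_NAME}):

```sh
#!/bin/sh
# Sketch only: define WAS_SERVER_NAME at server scope with a random value
# before the server starts. The JVM itself would still be "server1"; this
# only helps if the application reads the variable rather than the real name.
RANDOM_NAME="server-$(cat /proc/sys/kernel/random/uuid)"

# Generate a small Jython script for wsadmin to run in local (NONE) mode.
cat > /tmp/set_server_name.py <<EOF
# Set (or override) the WAS_SERVER_NAME variable at server scope.
AdminTask.setVariable(['-variableName', 'WAS_SERVER_NAME',
                       '-variableValue', '${RANDOM_NAME}',
                       '-scope', 'Cell=DefaultCell01,Node=DefaultNode01,Server=server1'])
AdminConfig.save()
EOF

# Path is an assumption; adjust to the profile's bin directory in your image.
/opt/IBM/WebSphere/AppServer/bin/wsadmin.sh -conntype NONE -lang jython -f /tmp/set_server_name.py
```

If overriding the built-in variable turns out not to work, the fallback seems to be changing the server name itself at profile creation, which is what the create_profile.sh idea below is about.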
I am also looking at whether I can implement this by updating create_profile.sh, as sketched below.
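This is the kind of change I had in mind there, as a sketch only (I am assuming create_profile.sh ultimately calls manageprofiles.sh with the standalone "default" template, which accepts -serverName; the install path and profile name are guesses and would need to match what the image actually uses):

```sh
#!/bin/sh
# Sketch only: create the profile with a randomized server name instead of
# the default "server1", so each profile (and therefore each JVM) is unique.
SERVER_NAME="server-$(cat /proc/sys/kernel/random/uuid | cut -c1-8)"

/opt/IBM/WebSphere/AppServer/bin/manageprofiles.sh -create \
  -profileName AppSrv01 \
  -templatePath /opt/IBM/WebSphere/AppServer/profileTemplates/default \
  -serverName "${SERVER_NAME}"
```

The catch I see is that if the profile is created at image build time, every pod started from that image would still share the same randomized name, so this only helps if profile creation happens at container startup (which would slow down pod start) or per image build.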
Any suggestions or input here would be helpful, thanks in advance.