Waiting for persistence pod to build database in Kubernetes #213

Open · askull100 opened this issue Feb 25, 2022 · 3 comments

askull100 commented Feb 25, 2022

I'm trying to run TeaStore in a Central/Edge Kubernetes environment: two separate servers, each running an Ubuntu VM, connected with KubeEdge. For TeaStore specifically, I'm using the Ribbon load balancer installation on the Central side, with this page as a reference: https://github.com/DescartesResearch/TeaStore/blob/master/GET_STARTED.md#13-run-the-teastore-on-a-kubernetes-cluster
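
Concretely, the deploy step was just the one-liner from that guide, something along these lines (the manifest path is the one I'd expect from the repo's examples directory; double-check against the guide):

    $ kubectl create -f https://raw.githubusercontent.com/DescartesResearch/TeaStore/master/examples/kubernetes/teastore-ribbon.yaml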

The kubectl command executes fine, and the pods, services, and deployments are all up and running. There are no external IP addresses set up yet, however.
[screenshot: kubectl get output showing all pods, services, and deployments running]
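
For reference, that screenshot came from checks along these lines (resource names as created by ribbon.yaml):

    $ kubectl get pods -o wide       # pod status plus pod IPs
    $ kubectl get svc                # services; EXTERNAL-IP column still empty
    $ kubectl get deployments        # desired vs. available replicas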

But when I log into the Web UI, TeaStore's services are stuck returning Error 500, and the Status page is still waiting for Persistence to finish building the database.

500: Internal Exception: null
java.lang.NullPointerException
    at tools.descartes.teastore.registryclient.rest.RestUtil.throwCommonExceptions(RestUtil.java:38)
    at tools.descartes.teastore.registryclient.rest.RestUtil.readThrowAndOrClose(RestUtil.java:73)
    at tools.descartes.teastore.registryclient.rest.LoadBalancedStoreOperations.isLoggedIn_aroundBody6(LoadBalancedStoreOperations.java:136)
    at tools.descartes.teastore.registryclient.rest.LoadBalancedStoreOperations$AjcClosure7.run(LoadBalancedStoreOperations.java:1)
    at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:167)
    at tools.descartes.teastore.kieker.probes.AbstractOperationExecutionWithParameterAspect.operation(AbstractOperationExecutionWithParameterAspect.java:57)
    at tools.descartes.teastore.registryclient.rest.LoadBalancedStoreOperations.isLoggedIn(LoadBalancedStoreOperations.java:131)
    at tools.descartes.teastore.webui.servlet.IndexServlet.handleGETRequest_aroundBody0(IndexServlet.java:58)
    at tools.descartes.teastore.webui.servlet.IndexServlet$AjcClosure1.run(IndexServlet.java:1)
    at org.aspectj.runtime.reflect.JoinPointImpl.proceed(JoinPointImpl.java:167)
    at tools.descartes.teastore.kieker.probes.AbstractOperationExecutionWithParameterAspect.operation(AbstractOperationExecutionWithParameterAspect.java:57)
    at tools.descartes.teastore.webui.servlet.IndexServlet.handleGETRequest(IndexServlet.java:54)
    at tools.descartes.teastore.webui.servlet.AbstractUIServlet.doGet(AbstractUIServlet.java:224)
    at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:665)
    at jakarta.servlet.http.HttpServlet.service(HttpServlet.java:774)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:224)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:159)
    at tools.descartes.teastore.registryclient.rest.TrackingFilter.doFilter(TrackingFilter.java:64)
    at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:186)
    at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:159)
    at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:202)
    at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:97)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:543)
    at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
    at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:92)
    at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:78)
    at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:353)
    at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:382)
    at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:65)
    at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:870)
    at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1696)
    at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49)
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
    at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
    at java.base/java.lang.Thread.run(Thread.java:829)

Digging into the pod logs shows this:
persistence_logs.txt

From what I can tell, this is a general-purpose error implying that the persistence pod is having trouble reaching the MySQL database, but I don't know what is causing it specifically. The central VM has no other active Kubernetes pods, nor any services that should be interfering.
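
For completeness, the attached log was pulled with something like this (assuming the deployment is named teastore-persistence, as in the default template):

    $ kubectl logs deployment/teastore-persistence --tail=200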

SimonEismann (Contributor) commented

Hi @askull100, this is most likely a networking issue between the pods, or the persistence service is configured with the wrong IP/port.

Can you go through the steps in this article to test whether the persistence pod can reach the database port:
https://projectcalico.docs.tigera.io/getting-started/kubernetes/hardway/test-networking
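
Roughly, something along these lines from a throwaway pod should tell us (teastore-db and 3306 are the defaults I'd expect from ribbon.yaml; substitute your actual service name and port):

    $ kubectl run nettest --rm -it --image=busybox --restart=Never -- sh
    / # nc -zv -w 5 teastore-db 3306     # database port via the service DNS name
    / # nc -zv -w 5 <db-pod-ip> 3306     # same port via the raw pod IP, bypassing DNS
    / # ping -c 3 <db-pod-ip>            # basic ICMP reachability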

For the persistence configuration, could you check which IP/DNS/service name it is configured to connect to in the template you are using?
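
For example (the variable names below are just what I'd expect the template to set; adjust to whatever your manifest actually defines):

    $ kubectl get deployment teastore-persistence -o yaml | grep -i -B 1 -A 1 -E 'host|port'
    $ kubectl exec deploy/teastore-persistence -- env | grep -i -E 'host|port|db'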

SimonEismann self-assigned this Feb 27, 2022
askull100 (Author) commented

Hi @SimonEismann, I went through the article, and pod-to-pod communication is currently not possible, even between pods unrelated to TeaStore. The persistence pod is configured with only the default values from the ribbon.yaml template, so I can only assume there is a networking issue. The container port is set to 8080, and the database port is set to 3306. I can at least ping the gateway. Trying to traceroute to any IP other than the gateway fails as well, so I can't even say the pod knows to use the gateway.

After some more digging, I also noticed that the node IP is on 10.43.0.0/24, while the pods are all networked on 10.50.0.0/24. I assume this is probably the issue? If so, I'm not sure whether I should change the network per pod by hand, or whether there's a way to change which network the pods use collectively.
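
For what it's worth, this is how I looked those ranges up (output details will vary with the CNI and KubeEdge setup):

    $ kubectl get nodes -o wide                                     # node INTERNAL-IP
    $ kubectl get pods -o wide -A                                   # pod IPs across namespaces
    $ kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'      # pod CIDR assigned per node
    $ kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'    # hints at the service CIDR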

SimonEismann (Contributor) commented

Sorry for the late reply. Generally, I am not particularly well versed in Kubernetes networking, so I'm not sure what might be the issue here.

Some ideas:
