support consul/k8s service discovery #359
Comments
This would be add-on business logic on top of the existing framework. Internally we have systems on top of Pingora working that way. Do you have a reference to a standard way of doing this, or are you looking for guidance on how to implement such custom logic?
thx, I'm looking for guidance on how to implement such custom logic.
Take a look at this: https://gist.github.com/Object905/6cafd5e8e56dd60670149296411a407f
Since publishing this gist I've updated the code to be more self-contained, removed the dependency on crossbeam, and made it work with kube-rs >0.92.0, because it changed store internals and the old version was deadlocking / not discovering right away. It works in my production and also achieves zero-downtime upgrades of services.
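For readers following along, the heart of this pattern is a shared endpoint store that a k8s watcher task keeps updated and that the load balancer reads from. A minimal std-only sketch (the names here are illustrative, not the gist's actual API; in the gist, kube-rs's reflector `Store` plays the store role, and in Pingora the snapshot would feed a `ServiceDiscovery` implementation):

```rust
use std::collections::BTreeSet;
use std::net::SocketAddr;
use std::sync::{Arc, RwLock};

// Hypothetical in-memory store: the watcher writes, discovery reads.
#[derive(Clone, Default)]
struct EndpointStore {
    backends: Arc<RwLock<BTreeSet<SocketAddr>>>,
}

impl EndpointStore {
    // Called by the watcher whenever the service's endpoints change.
    fn replace(&self, addrs: impl IntoIterator<Item = SocketAddr>) {
        *self.backends.write().unwrap() = addrs.into_iter().collect();
    }

    // Called at discovery time: a consistent snapshot of ready backends.
    fn snapshot(&self) -> BTreeSet<SocketAddr> {
        self.backends.read().unwrap().clone()
    }
}
```

Because the watcher only swaps the set under a short write lock, in-flight requests always see a complete snapshot, which is what makes the zero-downtime rollover possible.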
Interesting! One question: doesn't the DNS discovery need the updater background service?
Correct, my use case for DNS discovery doesn't account for short-lived DNS entries (like CoreDNS in Kubernetes). And it would be hard to achieve zero downtime with DNS anyway. That may be remedied by retrying when handling upstream errors, but that seems a bit flaky anyway.
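The retry-on-upstream-error idea mentioned above boils down to bounded retries around the connect attempt. A hedged, std-only sketch (names are illustrative; in Pingora this logic would live in the proxy's error-handling path, not in a free function):

```rust
// Retry a fallible attempt up to `max_attempts` times, returning the
// first success or the last error. With stale DNS, a later attempt may
// pick a backend that is still alive.
fn with_retries<T, E>(
    max_attempts: u32,
    mut attempt: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut result = attempt();
    for _ in 1..max_attempts {
        if result.is_ok() {
            break;
        }
        result = attempt();
    }
    result
}
```

The flakiness the commenter mentions is real: retries only help if the resolver happens to return at least one live address, so they complement rather than replace a proper updater.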
@Object905 Thanks, this gist helped me a lot!
Interesting approach. From k8s, docker, or whatever, I generate a config file that pingora reads once, at startup. The config file has the resolved DNS names (for example). Any drawbacks to this approach?
Are you referring to pod or service IPs? Pod IPs are not stable.
If the pingora upgrade is seamless, then unstable pod IPs are not a problem: just upgrade every few minutes. But the upgrade may not be as seamless as one would hope (e.g. the HTTP cache doesn't get upgraded, I think, so effectively a cache flush at every pod IP change). That's why I asked.
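For illustration, a generated file in this approach might look like the fragment below. This is a hypothetical schema: pingora has no built-in upstream config format, so the keys and the service name here are invented.

```yaml
# upstreams.yaml - regenerated from k8s (or docker, etc.) before each
# pingora restart/upgrade; read once at startup by app-specific code.
upstreams:
  my-service:                 # hypothetical service name
    - 10.0.0.11:8080          # pod IPs resolved at generation time
    - 10.0.0.12:8080
```

The drawback raised in the thread is visible here: every pod IP change requires regenerating the file and cycling the proxy, so any state that doesn't survive the upgrade (like the HTTP cache) is lost on each change.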
What is the problem your feature solves, or the need it fulfills?
A clear and concise description of why this feature should be added. What is the problem? Who is
this for?
Our microservices run on k8s and are dynamically scaled up and down during daily operation and maintenance. Static configuration is very inconvenient for this, so we hope Pingora can support k8s or Consul service discovery.
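Pending first-class support, the usual shape of such an integration is a periodic refresher that re-resolves the service name and swaps a shared backend list. A std-only sketch under stated assumptions (function names are invented; a real integration would query the Consul or k8s API instead of DNS, and would run inside pingora's background-service machinery rather than a raw thread):

```rust
use std::net::{SocketAddr, ToSocketAddrs};
use std::sync::{Arc, RwLock};
use std::thread;
use std::time::Duration;

// Re-resolve `host` once; on success, replace the shared backend list.
fn refresh_once(host: &str, backends: &RwLock<Vec<SocketAddr>>) -> bool {
    match host.to_socket_addrs() {
        Ok(addrs) => {
            *backends.write().unwrap() = addrs.collect();
            true
        }
        // Keep the previous list on resolver failure.
        Err(_) => false,
    }
}

// Background updater: refresh the list every `every` interval.
fn spawn_refresher(host: String, backends: Arc<RwLock<Vec<SocketAddr>>>, every: Duration) {
    thread::spawn(move || loop {
        refresh_once(&host, &backends);
        thread::sleep(every);
    });
}
```

Keeping the stale list on resolver failure matters for availability: a transient discovery outage should not empty the upstream pool.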
Describe the solution you'd like
What do you propose to resolve the problem or fulfill the need above? How would you like it to
work?
Describe alternatives you've considered
What other solutions, features, or workarounds have you considered that might also solve the issue?
What are the tradeoffs for these alternatives compared to what you're proposing?
Additional context
This could include references to documentation or papers, prior art, screenshots, or benchmark
results.