Welcome to the NGINX Loadbalancer for Kubernetes Wiki
You are a modern app developer: you use a collection of open source and perhaps some commercial tools to write, test, deploy, and manage new apps and containers. You have chosen Kubernetes (K8s) to run these containers and pods in dev, test, staging, and production environments. You've bought into the architectures and concepts of microservices, the Cloud Native Computing Foundation (CNCF), and other modern industry standards.
Kubernetes is indeed powerful. However, you might be surprised at just how difficult, inflexible, and frustrating it can be to implement and coordinate changes and updates to routers, firewalls, load balancers, and other network devices - especially in your own datacenter! It's enough to bring a developer to tears at times. Kubernetes doesn't taste so sweet now; it leaves a hint of bitterness behind.
If you run Kubernetes on a public cloud provider, much of that tedious networking work is handled for you. The Service Type LoadBalancer gives you a public IP, a DNS record, and a TCP load balancer with just one command: kubectl apply -f loadbalancer.yaml. I bet you take it for granted. But this is not the case for on-premises clusters! You, or your networking peers, must provide these yourselves.
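For readers who haven't seen one, a minimal loadbalancer.yaml might look like the sketch below. This is illustrative only; the Service name, namespace, selector, and ports are assumptions, not values taken from any particular NGINX Ingress Controller deployment.

```yaml
# loadbalancer.yaml - a minimal Service of type LoadBalancer (illustrative).
# Apply it with: kubectl apply -f loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress        # hypothetical name
  namespace: nginx-ingress   # hypothetical namespace
spec:
  type: LoadBalancer
  selector:
    app: nginx-ingress       # must match your Ingress Controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```

On a public cloud, applying a manifest like this is enough to get an external IP and a cloud load balancer. On premises, the EXTERNAL-IP column typically just sits at pending, because nothing is there to provide one.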
Are you shocked to find out that the Service Type LoadBalancer, the front door to your cluster, does not exist at all?
In order to expose your apps and services outside the cluster, your network team likely requires tickets, approvals, procedures, perhaps even security reviews - before they reconfigure their equipment. Or you might need to do it all yourself, slowing the pace of application development to a crawl. Even worse, you dare not make changes to any K8s services, for if the NodePort changes, the traffic could get blocked! And we all know how much users like getting 500 errors. Your boss probably likes it even less.
Here at NGINX, we built a new Kubernetes controller, NGINX Loadbalancer for Kubernetes (NLK), which watches the NGINX Ingress Controller and automatically updates an external NGINX server used for load balancing. Its design is straightforward, so it is simple to install and operate. Here is how it works:
- The new NLK Controller registers with Kubernetes to watch the nginx-ingress Service.
- When NLK is notified of changes, it collects the Worker Node IPs and the NodePort TCP port numbers.
- NLK then sends these IP:port pairs to the NGINX Plus LoadBalancer Server via the NGINX Plus API, updating the NGINX Upstream servers (no reload required!). A sketch of this API call follows the list.
- The NGINX LoadBalancer Server can now load balance traffic to the correct upstream servers, that is, the correct Kubernetes NodePorts.
- You can add additional NGINX LoadBalancer Servers for high availability (see the docs).
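To make the flow concrete, here is a rough sketch of the calls involved. The namespace, NGINX Plus API port (9000), API version (9), and upstream name (nginx-ingress-http) are assumptions for illustration; they are not necessarily the exact values NLK or your deployment uses.

```bash
# Look at the nginx-ingress Service to see the NodePort mappings NLK watches
# (namespace is an assumption; adjust for your cluster)
kubectl get service nginx-ingress -n nginx-ingress -o wide

# Conceptually, NLK adds each Worker-IP:NodePort pair to an NGINX Plus upstream
# through the NGINX Plus REST API - a dynamic change, so no reload is needed.
curl -s -X POST http://<nginx-plus-lb>:9000/api/9/http/upstreams/nginx-ingress-http/servers \
     -H "Content-Type: application/json" \
     -d '{"server": "10.1.1.8:32111"}'

# List the upstream's current servers to confirm the change
curl -s http://<nginx-plus-lb>:9000/api/9/http/upstreams/nginx-ingress-http/servers
```

NLK does this for every Worker Node and NodePort it discovers, and it can remove entries the same way when nodes or ports change.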
In the demo screenshot:
- Orange is the Service Type LoadBalancer for nginx-ingress
- Red is the EXTERNAL-IP info, the NGINX Plus LoadBalancer Servers
- Blue is the NodePort mapping 80:32111, with matching NGINX Upstream Servers
- Indigo is the NodePort mapping 443:32632, with matching NGINX Upstream Servers
- Green is the log messages from NLK Controller
- Kubernetes Worker Nodes are 10.1.1.8 and 10.1.1.10
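On the NGINX Plus LoadBalancer Server itself, the upstream that NLK populates could look something like the sketch below. The upstream name, zone size, and state file path are assumptions; the actual configuration shipped with NLK may differ.

```nginx
# Hypothetical upstream block on the NGINX Plus LoadBalancer Server.
# No server entries are written here: NLK adds them at runtime through the
# NGINX Plus API (in the demo, 10.1.1.8:32111 and 10.1.1.10:32111), and the
# state file preserves them across restarts.
upstream nginx-ingress-http {
    zone nginx-ingress-http 64k;
    state /var/lib/nginx/state/nginx-ingress-http.state;
}

server {
    listen 80;
    location / {
        proxy_pass http://nginx-ingress-http;
    }
}
```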
The NLK Controller's Source Code is available as open source (under the Apache 2.0 license). All the installation instructions are available in this GitHub repo. We tried to provide all the details you need to deploy and test NLK.
So, if you are frustrated with networking challenges at the edge of your Kubernetes cluster, take NLK for a spin. Please let us know what you think. We'd really like to know how it worked for you and what could be better. Drop us a comment in the repo, or chat with us on the NGINX Community Slack.
- NGINX NLK GitHub Repo: https://github.com/nginxinc/nginx-loadbalancer-kubernetes
- NGINX Community Slack: https://community.nginx.org/joinslack
- NGINX.org: https://nginx.org/
- NLK Introduction
- Chris Akker, Steve Wagner, June 2023
Copyright 2023 F5 NGINX