Deploy microservices with load balancing, access policies, telemetry and reporting leveraging Istio Service Mesh on Kubernetes
Istio is an open platform that provides a uniform way to connect, manage, and secure microservices. Istio is the result of a collaboration between IBM, Google, and Lyft to support traffic flow management, access policy enforcement, and telemetry data aggregation between microservices, all without requiring changes to the code of your microservices. Istio provides an easy way to create this service mesh by deploying a control plane and injecting sidecars (an extended version of the Envoy proxy) in the same Pod as your microservice.
BookInfo is a simple application composed of four microservices, each written in a different language: Python, Java, Ruby, and Node.js.
- Istio
- Kubernetes Clusters
- Grafana
- Zipkin
- Prometheus
- Bluemix Container Service
- Bluemix DevOps Toolchain Service
Part A: Deploy Istio service mesh and sample application on Kubernetes
Part B: Configure and use Istio's features for sample application
Create a Kubernetes cluster with either Minikube for local testing, or with IBM Bluemix Container Service to deploy in the cloud. The code here is regularly tested against a Kubernetes cluster from Bluemix Container Service using Travis.
If you want to deploy the BookInfo app directly to Bluemix, click the 'Deploy to Bluemix' button below to create a Bluemix DevOps service toolchain and pipeline for deploying the sample; otherwise, jump to Steps.
You will need to create your Kubernetes cluster first and make sure it is fully deployed in your Bluemix account.
Please follow the Toolchain instructions to complete your toolchain and pipeline.
- Install Istio on Kubernetes
- Deploy sample BookInfo application on Kubernetes
- Inject Istio envoys on the application
- Access your application running on Istio
- Traffic flow management - Modify service routes
- Access policy enforcement - Configure access control
- Telemetry data aggregation - Collect metrics, logs and trace spans
- Download the latest Istio release for your OS: Istio releases
- Extract it and go to the root directory.
- Copy the `istioctl` binary to your local bin directory:
```bash
$ cp bin/istioctl /usr/local/bin  ## example for macOS
```
- Run the following command to check if your cluster has RBAC enabled:
```bash
$ kubectl api-versions | grep rbac
```
- Grant permissions based on the version of your RBAC:
* If you have an **alpha** version, run:
```bash
$ kubectl apply -f install/kubernetes/istio-rbac-alpha.yaml
```
* If you have a **beta** version, run:
```bash
$ kubectl apply -f install/kubernetes/istio-rbac-beta.yaml
```
* If **your cluster has no RBAC** enabled, proceed to installing the **Control Plane**.
1.3 Install the Istio Control Plane in your cluster
- Run the following command to install Istio:
```bash
$ kubectl apply -f install/kubernetes/istio.yaml
# or: kubectl apply -f install/kubernetes/istio-auth.yaml
```
`istio-auth.yaml` enables Istio with its Auth feature, which turns on mutual TLS (mTLS) between the services.
- You should now have the Istio Control Plane running in Pods of your cluster:
```bash
$ kubectl get pods
NAME                             READY     STATUS    RESTARTS
istio-egress-3850639395-30d1v    1/1       Running   0
istio-ingress-4068702052-2st6r   1/1       Running   0
istio-manager-251184572-x9dd4    2/2       Running   0
istio-mixer-2499357295-kn4vq     1/1       Running   0
```
This step assumes that you already have your own application configured to run in a Kubernetes cluster.
In this journey, you will use the BookInfo application, which can already run on a Kubernetes cluster. You can deploy the BookInfo application without using Istio by not injecting the required Envoys.
- Deploy the BookInfo application in your cluster:
```bash
$ kubectl apply -f samples/apps/bookinfo/bookinfo.yaml
```
- If you don't have access to external load balancers, you need to use a NodePort on the `productpage` service. Run the following command to use a NodePort:
```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: productpage
  labels:
    app: productpage
spec:
  type: NodePort
  ports:
  - port: 9080
    name: http
  selector:
    app: productpage
EOF
```
- Output your cluster's IP address and the NodePort of your `productpage` service in your terminal (if you have a load balancer, you can access it through the IP found via `kubectl get ingress`):
```bash
$ echo $(kubectl get po -l app=productpage -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc productpage -o jsonpath={.spec.ports[0].nodePort})
184.xxx.yyy.zzz:30XYZ
```
At this point, you can point your browser to http://184.xxx.yyy.zzz:30XYZ/productpage and see the BookInfo Application. The sample BookInfo Application is configured to run on a Kubernetes Cluster.
The next step would be deploying this sample application with Istio Envoys injected. By using Istio, you will have access to Istio's features such as traffic flow management, access policy enforcement and telemetry data aggregation between microservices. You will not have to modify the BookInfo's source code.
You should now delete the sample application to proceed to the next step:
```bash
$ kubectl delete -f samples/apps/bookinfo/bookinfo.yaml
```
Envoys are deployed as sidecars on each microservice. Injecting Envoy into your microservice means that the Envoy sidecar manages the incoming and outgoing calls for the service. To inject an Envoy sidecar into an existing microservice configuration, run:
```bash
$ kubectl apply -f <(istioctl kube-inject -f samples/apps/bookinfo/bookinfo.yaml)
```
`istioctl kube-inject` modifies the YAML file passed to `-f`, injecting the Envoy sidecar into your Kubernetes resource configurations. The only resources updated are Job, DaemonSet, ReplicaSet, and Deployment; other resources in the YAML file are left unmodified.
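To see what injection adds, you can diff the original `bookinfo.yaml` against the output of `istioctl kube-inject`. As a rough sketch only (image names, tags, and proxy arguments vary by Istio release, and the exact fields below are assumptions), each Deployment's Pod template gains an Envoy proxy container and an init container that redirects the Pod's traffic through the proxy:

```yaml
# Hypothetical fragment of an injected Deployment (values are illustrative;
# the real output depends on your Istio release).
spec:
  template:
    spec:
      containers:
      - name: productpage                 # original application container
        image: istio/examples-bookinfo-productpage-v1
        ports:
        - containerPort: 9080
      - name: proxy                       # injected Envoy sidecar
        image: docker.io/istio/proxy_debug:0.1
        args: ["proxy", "sidecar", "-v", "2"]
      initContainers:
      - name: init                        # injected: sets up iptables rules so
        image: docker.io/istio/init:0.1   # pod traffic flows through the sidecar
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
```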
After a few minutes, you should have your Kubernetes Pods running, each with an Envoy sidecar alongside its microservice container. The microservices are productpage, details, ratings, and reviews. Note that you'll have three versions of the reviews microservice.
```bash
$ kubectl get pods
NAME                              READY     STATUS    RESTARTS
details-v1-969129648-lwgr3        2/2       Running   0
istio-egress-3850639395-30d1v     1/1       Running   0
istio-ingress-4068702052-2st6r    1/1       Running   0
istio-manager-251184572-x9dd4     2/2       Running   0
istio-mixer-2499357295-kn4vq      1/1       Running   0
productpage-v1-1629799384-00f11   2/2       Running   0
ratings-v1-1194835686-dzf2f       2/2       Running   0
reviews-v1-2065415949-3gdz5       2/2       Running   0
reviews-v2-2593570575-92657       2/2       Running   0
reviews-v3-3121725201-cn371       2/2       Running   0
```
To access your application, you can check the public IP address of your cluster through `kubectl get nodes` and get the NodePort of the `istio-ingress` service for port 80 through `kubectl get svc | grep istio-ingress`. Or you can run the following command to output both the IP address and NodePort:
```bash
$ echo $(kubectl get po -l istio=ingress -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc istio-ingress -o jsonpath={.spec.ports[0].nodePort})
184.xxx.yyy.zzz:30XYZ
```
Point your browser to `http://184.xxx.yyy.zzz:30XYZ/productpage`, replacing the address with your own IP and NodePort.
If you refresh the page multiple times, you'll see that the reviews section of the page changes. That's because there are three versions of the reviews deployment (reviews-v1, reviews-v2, reviews-v3) behind our reviews service.
This step shows you how to route requests to specific versions of your services based on weights and HTTP headers.
- Set default routes to `reviews-v1` for all microservices.
This sets all incoming routes on the services (indicated in the line `destination: <service>`) to the deployment with the tag `version: v1`. To set the default routes, run:
```bash
$ istioctl create -f samples/apps/bookinfo/route-rule-all-v1.yaml
```
- Set the route to `reviews-v2` of the reviews microservice for a specific user.
This routes the user `jason` (you can log in as jason with any password in the deployed web application) to `version: v2` of the reviews microservice. Run:
```bash
$ istioctl create -f samples/apps/bookinfo/route-rule-reviews-test-v2.yaml
```
- Route 50% of traffic on the reviews microservice to `reviews-v1` and 50% to `reviews-v3`.
This is indicated by the `weight: 50` entries in the YAML file. Run:
```bash
$ istioctl replace -f samples/apps/bookinfo/route-rule-reviews-50-v3.yaml
```
Using `replace` lets you edit existing route rules.
- Route 100% of the traffic to `version: v3` of the reviews microservice.
This sends all incoming traffic to version v3 of the reviews microservice. Run:
```bash
$ istioctl replace -f samples/apps/bookinfo/route-rule-reviews-v3.yaml
```
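For reference, the 50/50 split in `route-rule-reviews-50-v3.yaml` looks roughly like this (a sketch in the early Istio route-rule format; treat the exact field names as assumptions, since they changed in later releases):

```yaml
# Sketch of a weighted route rule: traffic to the reviews service is
# split 50/50 between the v1 and v3 deployments.
type: route-rule
name: reviews-50-v3
spec:
  destination: reviews.default.svc.cluster.local
  precedence: 1
  route:
  - tags:
      version: v1
    weight: 50
  - tags:
      version: v3
    weight: 50
```

Weights are percentages across the listed routes; adjusting them (e.g. 90/10) lets you do gradual canary rollouts of a new version.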
This step shows you how to control access to your services. If you have done the step above, you'll see that your `productpage` now shows only red stars in the reviews section, and if you are logged in as jason, you'll see black stars. The `ratings` service is accessed from `reviews-v2` if you're logged in as jason, or from `reviews-v3` if you are not.
- To deny access to the ratings service for traffic coming from `reviews-v3`, use `istioctl mixer rule create`:
```bash
$ istioctl mixer rule create global ratings.default.svc.cluster.local -f samples/apps/bookinfo/mixer-rule-ratings-denial.yaml
```
The `mixer-rule-ratings-denial.yaml` file creates a denial rule (`kind: denials`) that blocks requests coming from the reviews service with the label v3 (`selector: source.labels["app"]=="reviews" && source.labels["version"] == "v3"`).
You can verify that the mixer rule has been created with `istioctl mixer rule get global ratings.default.svc.cluster.local`:
```bash
$ istioctl mixer rule get global ratings.default.svc.cluster.local
rules:
- aspects:
  - kind: denials
  selector: source.labels["app"]=="reviews" && source.labels["version"] == "v3"
```
- To verify that your rule has been enforced, point your browser to your BookInfo application. You won't see star ratings in the reviews section anymore, unless you are logged in as jason, in which case you will still see black stars (because as jason you are routed to reviews-v2, as set in Step 4).
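For reference, based on the `istioctl mixer rule get` output shown above, the contents of `mixer-rule-ratings-denial.yaml` are roughly as follows (the `revision` wrapper is an assumption, mirroring the metrics rule format used later in this journey):

```yaml
revision: "1"
rules:
- aspects:
  - kind: denials      # reject matching requests instead of forwarding them
  selector: source.labels["app"]=="reviews" && source.labels["version"] == "v3"
```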
This step shows you how to configure Istio Mixer to gather telemetry for services in your cluster.
- Install the required Istio addons on your cluster: Prometheus and Grafana
```bash
$ kubectl apply -f install/kubernetes/addons/prometheus.yaml
$ kubectl apply -f install/kubernetes/addons/grafana.yaml
```
- Verify that your Grafana dashboard is ready. Get the IP of your cluster with `kubectl get nodes` and then the NodePort of your Grafana service with `kubectl get svc | grep grafana`, or run the following command to output both:
```bash
$ echo $(kubectl get po -l app=grafana -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc grafana -o jsonpath={.spec.ports[0].nodePort})
184.xxx.yyy.zzz:30XYZ
```
Point your browser to `184.xxx.yyy.zzz:30XYZ/dashboard/db/istio-dashboard` to go directly to your dashboard.
Your dashboard should look like this:
- To collect new telemetry data, you will use `istioctl mixer rule create`. For this sample, you will generate logs for the response size of the reviews service. The configuration YAML file is provided within the BookInfo sample folder. First, validate that your reviews service has no service-specific rules already applied:
```bash
$ istioctl mixer rule get reviews.default.svc.cluster.local reviews.default.svc.cluster.local
Error: the server could not find the requested resource
```
- Create a configuration YAML file named `new_rule.yaml`:
```yaml
revision: "1"
rules:
- aspects:
  - adapter: prometheus
    kind: metrics
    params:
      metrics:
      - descriptor_name: response_size
        value: response.size | 0
        labels:
          source: source.labels["app"] | "unknown"
          target: target.service | "unknown"
          service: target.labels["app"] | "unknown"
          version: target.labels["version"] | "unknown"
          method: request.path | "unknown"
          response_code: response.code | 200
  - adapter: default
    kind: access-logs
    params:
      logName: combined_log
      log:
        descriptor_name: accesslog.combined
        template_expressions:
          originIp: origin.ip
          sourceUser: origin.user
          timestamp: request.time
          method: request.method
          url: request.path
          protocol: request.scheme
          responseCode: response.code
          responseSize: response.size
          referer: request.referer
          userAgent: request.headers["user-agent"]
        labels:
          originIp: origin.ip
          sourceUser: origin.user
          timestamp: request.time
          method: request.method
          url: request.path
          protocol: request.scheme
          responseCode: response.code
          responseSize: response.size
          referer: request.referer
          userAgent: request.headers["user-agent"]
```
- Create the configuration on Istio Mixer:
```bash
$ istioctl mixer rule create reviews.default.svc.cluster.local reviews.default.svc.cluster.local -f new_rule.yaml
```
- Send traffic to that service by refreshing your browser at `http://184.xxx.yyy.zzz:30XYZ/productpage` multiple times. You can also run `curl` against that URL in a while loop from your terminal.
- Verify that the new metric is being collected by going to your Grafana dashboard again. The rightmost graph should now be populated.
- Verify that the logs stream has been created and is being populated for requests:
```bash
$ kubectl logs $(kubectl get pods -l istio=mixer -o jsonpath='{.items[0].metadata.name}') | grep \"combined_log\"
{"logName":"combined_log","labels":{"referer":"","responseSize":871,"timestamp":"2017-04-29T02:11:54.989466058Z","url":"/reviews","userAgent":"python-requests/2.11.1"},"textPayload":"- - - [29/Apr/2017:02:11:54 +0000] \"- /reviews -\" - 871 - python-requests/2.11.1"}
...
```
Collecting Metrics and Logs on Istio
This step shows you how to collect trace spans using Zipkin.
- Install the required Istio addon: Zipkin
```bash
$ kubectl apply -f install/kubernetes/addons/zipkin.yaml
```
- Access your Zipkin dashboard. Get the IP of your cluster with `kubectl get nodes` and then the NodePort of your Zipkin service with `kubectl get svc | grep zipkin`, or run the following command to output both:
```bash
$ echo $(kubectl get po -l app=zipkin -o jsonpath={.items[0].status.hostIP}):$(kubectl get svc zipkin -o jsonpath={.spec.ports[0].nodePort})
184.xxx.yyy.zzz:30XYZ
```
- Send traffic to that service by refreshing your browser at `http://184.xxx.yyy.zzz:30XYZ/productpage` multiple times. You can also run `curl` against that URL in a while loop from your terminal.
- Go to your Zipkin dashboard again and you will see a number of traces.
- Click on one of those traces and you will see the details of the traffic you sent to your BookInfo app. It shows how much time the request on `productpage` took, as well as how much time the requests on the `details`, `reviews`, and `ratings` services took.
- To delete Istio from your cluster:
```bash
$ kubectl delete -f install/kubernetes/istio-rbac-alpha.yaml # or istio-rbac-beta.yaml
$ kubectl delete -f install/kubernetes/istio.yaml
$ kubectl delete istioconfigs --all
$ kubectl delete thirdpartyresource istio-config.istio.io
```
- To delete all addons:
```bash
$ kubectl delete -f install/kubernetes/addons
```
- To delete the BookInfo app and its route rules:
```bash
$ ./samples/apps/bookinfo/cleanup.sh
```