Merge pull request #61 from priyaaakansha/day_1
Fixes of Day 1 to Day 6
shivam-sharma7 authored Oct 11, 2023
2 parents aea1acd + 860726b commit 647d2b6
Showing 6 changed files with 39 additions and 39 deletions.
10 changes: 5 additions & 5 deletions 70DaysExercises/01-Let's break down a Service Mesh.md
@@ -3,14 +3,14 @@
> *An introduction to Service Mesh, the use-cases, and the problems it aims to solve.*
### What is a Service Mesh?
- In modern distributed environments, applications are broken up into small chunks of code that run inside of a container. These containers need to be able to communicate with each other, and while they normally can, in a Kubernetes environment, there is a higher order of control, visiblity, and security that's required. Each of these containers, or services interact with other services, but must do so in an encrypted an authorized manner. There are other challenges with having coordinate service to service communication. What happens when one particular service is unavailable to provide a response? How would you troubleshoot this, and fix it so it doesn't happen again? How can we tune our applications to respond in an appropriate amount of time?
+ In modern distributed environments, applications are broken up into small chunks of code that run inside of a container. These containers need to be able to communicate with each other, and while they normally can, in a Kubernetes environment, there is a higher order of control, visibility, and security that's required. Each of these containers, or services, interacts with other services, but must do so in an encrypted and authorized manner. There are other challenges with having to coordinate service-to-service communication. What happens when one particular service is unavailable to provide a response? How would you troubleshoot this, and fix it so it doesn't happen again? How can we tune our applications to respond in an appropriate amount of time?

These are a small subset of challenges when it comes to running and managing applications, or microservices, on a network. The unpredictability of the network means we shouldn't rely too much on it being there. We also can't keep changing our code to adapt to changing network conditions, so what do we do?

Enter a Service Mesh. A Service Mesh is an application networking layer that handles service-to-service communication by providing granular traffic control, AuthN, AuthZ, and observability.
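To make the AuthN/AuthZ point concrete, here is a minimal sketch of the kind of policy a mesh lets you declare. It uses Istio's PeerAuthentication resource to require mutual TLS for every workload in a namespace; the namespace name is hypothetical, and nothing here needs to be applied on Day 1:

```
# Minimal sketch: require mTLS for all workloads in one namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default      # conventional name for a namespace-wide policy
  namespace: my-app  # hypothetical namespace; substitute your own
spec:
  mtls:
    mode: STRICT     # reject any plaintext (non-mTLS) traffic
```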


- ### What are the challenges a Service Mesh aims to solve?m
+ ### What are the challenges a Service Mesh aims to solve?
1. Unreliable and changing networks that are complex and must adapt as your microservices scale
2. Ensuring a near zero-trust-like environment where RBAC, AuthZ, and AuthN are critical
3. Ensuring a data-loss prevention approach using encryption and traffic filtration techniques
@@ -37,7 +37,7 @@ A service mesh usually has a few key components:
- A data plane implemented in both the sidecar and gateways
- The Kubernetes cluster it resides on

- To describe how a service mesh behaves, an operator will apply a traffic routing or security policy, and the service mesh control plane will push any configuritions or policy to either the gateways or sidecar proxies. The gateway and sidecars will enforce any traffic rule. In the diagram above, the ingress gateway is the first to receive the external inbound request. It will forward it along to the first service in the request path, service A. Service A has a sidecar to process this request, and send back any telemetry data to the control plane. There's more to this but we'll explore in depth in the following days.
+ To describe how a service mesh behaves, an operator will apply a traffic routing or security policy, and the service mesh control plane will push any configurations or policy to either the gateways or sidecar proxies. The gateway and sidecars will enforce any traffic rule. In the diagram above, the ingress gateway is the first to receive the external inbound request. It will forward it along to the first service in the request path, service A. Service A has a sidecar to process this request, and send back any telemetry data to the control plane. There's more to this but we'll explore in depth in the following days.
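To picture what applying a traffic routing policy looks like in practice, here is a hedged sketch of an Istio VirtualService that splits traffic between two versions of a service; the names service-a, v1, and v2 are invented for illustration, and the subsets would come from a matching DestinationRule:

```
# Sketch: send 90% of requests to v1 and canary 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: service-a          # hypothetical service
spec:
  hosts:
  - service-a
  http:
  - route:
    - destination:
        host: service-a
        subset: v1         # subset defined in a DestinationRule
      weight: 90
    - destination:
        host: service-a
        subset: v2
      weight: 10
```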

### Relationship to Kubernetes
Kubernetes has some challenges in how it handles things like multi-cluster and cross-cluster communication, and identity stewardship. What a Service Mesh does is take on the responsibilities for things like:
@@ -57,7 +57,7 @@ Istio is an open-source service mesh built by Google, IBM, and Lyft, and current
AppMesh is a service mesh implementation that is proprietary to AWS but primarily focuses on applications deployed to various AWS services such as ECS, EKS, and EC2. Its tight-knit integration into the AWS ecosystem allows for quick onboarding of services into the mesh.

#### Consul
- Consul is a serivce mesh offering from Hashicorp that also provides traffic routing, observability, and sercurity much like Istio does.
+ Consul is a service mesh offering from Hashicorp that also provides traffic routing, observability, and security much like Istio does.

#### Linkerd
Linkerd is an open-source service mesh offering that is lightweight. Similar to Istio, it provides traffic management, observability, and security, using a similar architecture. Linkerd adopts a sidecar-pattern using a Rust-based proxy.
@@ -66,5 +66,5 @@ Linkerd is an open-source service mesh offering that is lightweight. Similar to
Cilium is a Container Networking Interface that leverages eBPF to optimize packet processing using the Linux kernel. It offers some Service Mesh capabilities and doesn't use the sidecar model; instead, it deploys a per-node instance of Envoy for any Layer 7 processing of requests.

### Conclusion
- A serivce mesh is a power application networking layer that provides traffic management, observability, and security. We will explore more in #70DaysofServiceMesh
+ A service mesh is a powerful application networking layer that provides traffic management, observability, and security. We will explore more in #70DaysofServiceMesh

26 changes: 13 additions & 13 deletions 70DaysExercises/02-Install and Test a Service Mesh.md
@@ -13,35 +13,35 @@ I spent enough time on some theory on Day 1. Let's dig right into getting a serv
### Installation + Prerequisites
I highly advise using something like minikube or a cloud-based K8s cluster that allows you to have load-balancer functionality.

- - A Kubernetes cluster running 1.22, 1.23, 1.24, 1.25
- - KinD
- - Minikube
- - Civo K8s
- - EKS
+ - A Kubernetes cluster running 1.22, 1.23, 1.24, 1.25, 1.26, 1.27, 1.28
+ - [kinD](https://kind.sigs.k8s.io/)
+ - [Minikube](https://minikube.sigs.k8s.io/docs/start/)
+ - [Civo K8s](https://www.civo.com/)
+ - [EKS](https://docs.aws.amazon.com/eks/latest/userguide/what-is-eks.html)
- Access to a Loadbalancer service
- Metallb
- port-forwarding (not preferred)
- Cloud Load-balancer
- Inlets
- - Linux or macOS to run istoctl
+ - Linux or macOS to run istioctl

**Environment setup**
In my case, I spun up a Civo K3s cluster with 3 nodes, 2 CPUs per node, and 4GB of RAM per node.
- This is important because you will need enough resources to run the service mesh control plane, which, is Istiod in our case. If you need a cluster in a pinch register for free credit @ civo.com.
+ This is important because you will need enough resources to run the service mesh control plane, which is Istiod in our case. If you need a cluster in a pinch, register for free credit @civo.com.

#### Install Istio
1. Verify your cluster is up and operational and make sure there aren't any errors. The commands below will output the nodes (with their IPs and OS info) and the running pods in all namespaces, respectively.
```
kubectl get nodes -o wide
kubectl get pods -A
```
- 2. Download Istio, which will pick up the latest version (at the time of writing its 1.16.1)
+ 2. Download Istio, which will pick up the latest version, i.e., 1.19.1
```
curl -L https://istio.io/downloadIstio | sh -
```
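If the latest release has moved on from the version this walkthrough assumes, you can pin the download instead; the ISTIO_VERSION variable is a documented option of the download script:

```
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.19.1 sh -
```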
3. Change to the Istio directory
```
- cd istio-1.16.1
+ cd istio-1.19.1
```
4. Add the istioctl binary to your path
```
export PATH=$PWD/bin:$PATH
```
@@ -72,7 +72,7 @@ This is important because you will need enough resources to run the service mesh
```
kubectl get all -n istio-system
```
- Your output should look similar in that all components are working. I changed my External-IP to *bring.your.LB.IP*, whcih means your IP will be different. Why do you need mine :P
+ Your output should look similar in that all components are working. I changed my External-IP to *bring.your.LB.IP*, which means your IP will be different. Why do you need mine :P
```
NAME READY STATUS RESTARTS AGE
pod/istiod-885df7bc9-f9k7c 1/1 Running 0 31m
@@ -95,7 +95,7 @@ This is important because you will need enough resources to run the service mesh
replicaset.apps/istio-egressgateway-7475c75b68 1 1 1 31m
```
#### Sidecar injection and Bookinfo deployment.
- 8. While everything looks good, we also want to deploy an application and simulataneously add it to the Service Mesh.
+ 8. While everything looks good, we also want to deploy an application and simultaneously add it to the Service Mesh.
Let's label our default namespace with the *istio-injection=enabled* label. This tells Istiod to push the sidecar to any new microservice deployed to the namespace.
```
kubectl label namespace default istio-injection=enabled
```
@@ -104,7 +104,7 @@ Let's label our default namespace with the *istio-injection=enabled* label. This
```
kubectl get ns --show-labels
```
- 9. Let's deploy our app. Make sure you are the in the same directory as before. If not, change to *istio-1.16.1*
+ 9. Let's deploy our app. Make sure you are in the same directory as before. If not, change to *istio-1.19.1*
```
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```
Expand Down Expand Up @@ -148,7 +148,7 @@ Let's label our default namespace with the *istio-injection=enabled* label. This
The one thing to notice here is that all pods have *2/2* containers ready, meaning the sidecar is now present.
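If you want to see those two containers by name, one quick check (a sketch that assumes the app=productpage label from the bookinfo sample) is:

```
kubectl get pod -l app=productpage -o jsonpath='{.items[0].spec.containers[*].name}'
```

You should see the application container listed alongside the istio-proxy sidecar.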

#### Testing functionality
- 10. One test I'll run is to verify that I can connect to any one of these pods and get a response. Let's deploy a sleep pod. If you were in the same *istio-1.16.1* directory, then you can run this command.
+ 10. One test I'll run is to verify that I can connect to any one of these pods and get a response. Let's deploy a sleep pod. If you were in the same *istio-1.19.1* directory, then you can run this command.
```
kubectl apply -f samples/sleep/sleep.yaml
```
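Once the sleep pod is ready, the connectivity test itself could look like the following; this is a sketch that assumes the deployment and container are both named sleep, as they are in the Istio samples:

```
# Curl the productpage service from inside the mesh and show the page title.
kubectl exec deploy/sleep -c sleep -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
```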
14 changes: 7 additions & 7 deletions 70DaysExercises/03-Comparing Different Service Meshes.md
@@ -10,17 +10,17 @@ Service Mesh | Open Source or Proprietary | Notes |
---|---|---|
Istio | Open Source | Widely adopted and abstracted
Linkerd | Open Source | Built by Buoyant
- Consul | Open Source | Owned by Hashcorp, Cloud offering available
+ Consul | Open Source | Owned by Hashicorp, Cloud offering available
Kuma | Open Source | Maintained by Kong
Traefik Mesh | Open Source | Specialized Proxy
Open Service Mesh | Open Source | By Microsoft
Gloo Mesh | Proprietary | Built by Solo.io on top of Istio
AWS App Mesh | Proprietary | AWS specific services
- OpenShift Service Mesh | Proprietary | Built by Redhad, based on Istio
+ OpenShift Service Mesh | Proprietary | Built by Redhat, based on Istio
Tanzu Service Mesh | Proprietary | SaaS based on Istio, built by VMware
Anthos Service Mesh | Proprietary | SaaS based on Istio, built by Google
Buoyant Cloud | Proprietary | SaaS based on Linkerd
- Cilium Service Mesh | Open Source | Orginally a CNI
+ Cilium Service Mesh | Open Source | Originally a CNI


I'll quickly recap some of the key options I'll compare. This was taken from Day 1.
@@ -32,7 +32,7 @@ Istio is an open-source service mesh built by Google, IBM, and Lyft, and current
AppMesh is a service mesh implementation that is proprietary to AWS but primarily focuses on applications deployed to various AWS services such as ECS, EKS, and EC2. Its tight-knit integration into the AWS ecosystem allows for quick onboarding of services into the mesh.

#### Consul
- Consul is a serivce mesh offering from Hashicorp that also provides traffic routing, observability, and sercurity much like Istio does.
+ Consul is a service mesh offering from Hashicorp that also provides traffic routing, observability, and security much like Istio does.

#### Linkerd
Linkerd is an open-source service mesh offering that is lightweight. Similar to Istio, it provides traffic management, observability, and security, using a similar architecture. Linkerd adopts a sidecar-pattern using a Rust-based proxy.
@@ -44,7 +44,7 @@ Cilium is a Container Networking Interface that leverages eBPF to optimize packe

Feature | Istio | Linkerd | AppMesh | Consul | Cilium |
---|---|---|---|---|---|
- Current Version | 1.16.1 | 2.12 | N/A (it's AWS :D ) | 1.14.3 | 1.12
+ Current Version | 1.19.1 | 2.14 | N/A (it's AWS :D ) | 1.16.2 | 1.15
Project Creators | Google, Lyft, IBM, Solo | Buoyant | AWS | Hashicorp | Isovalent
Service Proxy | Envoy, Rust-Proxy (experimental) | Linkerd2-proxy | Envoy | Interchangeable, Envoy default | Per-node Envoy
Ingress Capabilities | Yes via the Istio Ingress-Gateway | No; BYO | Yes via AWS | Envoy | Cilium-Based Ingress
@@ -61,9 +61,9 @@ Sidecar Modes | Sidecar and Sidecar-less | Sidecar | Sidecar | Sidecar | No side
CNI Redirection | Istio CNI Plugin | linkerd-cni | ProxyConfiguration Required | Consul CNI | eBPF Kernel processing
Platform Support | K8s and VMs | K8s | EC2, EKS, ECS, Fargate, K8s on EC2 | K8s, Nomad, ECS, Lambda, VMs | K8s, VMs, Nomad
Multi-cluster Mesh | Yes | Yes | Yes, only AWS | Yes | Yes
- Governance and Oversight | Istio Community | Linkered Community | AWS | Hashicorp | Cilium Community
+ Governance and Oversight | Istio Community | Linkerd Community | AWS | Hashicorp | Cilium Community

### Conclusion
- Service Meshes have come a long way in terms of capabilities and the environments they support. Istio appears to be the most feature-complete service mesh, providing a balance of platform support, customizability, extensibility, and is most production ready. Linkered trails right behind with a lighter-weight approach, and is mostly complete as a service mesh. AppMesh is mostly feature-filled but specific to the AWS Ecosystem. Consul is a great contender to Istio and Linkered. The Cilium CNI is taking the approach of using eBPF and climbing up the networking stack to address Service Mesh capabilities, but it has a lot of catching up to do.
+ Service Meshes have come a long way in terms of capabilities and the environments they support. Istio appears to be the most feature-complete service mesh, providing a balance of platform support, customizability, and extensibility, and is the most production-ready. Linkerd trails right behind with a lighter-weight approach, and is mostly complete as a service mesh. AppMesh is mostly feature-filled but specific to the AWS ecosystem. Consul is a great contender to Istio and Linkerd. The Cilium CNI is taking the approach of using eBPF and climbing up the networking stack to address Service Mesh capabilities, but it has a lot of catching up to do.

See you on Day 4 of #70DaysOfServiceMesh!
8 changes: 4 additions & 4 deletions 70DaysExercises/04-Traffic Engineering Basics.md
@@ -7,7 +7,7 @@ Welcome to Day 4 :smile:!!!

In future days of #70DaysofServiceMesh, we'll dig into specific traffic routing resources and how they're used. I'm going to review some of these concepts very briefly and come back and revisit these in detail later.

- Traffic management is an important topic in the world of microservices communication because, you have not one or two, you have thousands of services making requests to each other. In the world of physical networking, network devices can be used for flow control and packet routing, but because the size of our networks have grown to accomodate microservices communications, manually creating the path way for each to connect does not scale well.
+ Traffic management is an important topic in the world of microservices communication because you have not one or two, but thousands of services making requests to each other. In the world of physical networking, network devices can be used for flow control and packet routing, but because the size of our networks has grown to accommodate microservices communications, manually creating the pathway for each to connect does not scale well.

Kubernetes has done quite a lot to simplify networking for microservices through technologies like CNI, Ingress, and (more recently) the Gateway API. There are other challenges around traffic routing that can be solved with custom-tailored solutions.

@@ -23,7 +23,7 @@ Traffic, or requests, will always enter the Service Mesh through some Ingress, s

Client ---> Bookinfo ----> | ProductPage ---> Reviews ---> Ratings |

- In the flow above, the client makes a request to Bookinfo (via a DNS name) which is then translated into request towards the first service in the path, ProductPage, which then needs to illicit a respect from Reviews, and Reviews from Ratings.
+ In the flow above, the client makes a request to Bookinfo (via a DNS name), which is then translated into a request towards the first service in the path, ProductPage, which then needs to elicit a response from Reviews, and Reviews from Ratings.
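For reference, the front door in this flow is an Istio Gateway resource. The sketch below mirrors the shape of the bookinfo-gateway that ships with Istio's samples (see samples/bookinfo/networking/bookinfo-gateway.yaml for the canonical version):

```
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the default ingress gateway deployment
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                   # accept requests for any host header
```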

Let's explore the components that make this happen, briefly, and revisit these in the future.

@@ -176,7 +176,7 @@ Let's apply them:

#### Destination Rule (make sure you are in the right directory)
```
- cd istio-1.16.1
+ cd istio-1.19.1
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
```
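To give a sense of what that manifest contains, here's the shape of one of the DestinationRules it applies. The reviews service gets three named subsets keyed on the version pod label (an abridged sketch; the sample file also defines rules for the other services):

```
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1   # selects pods labeled version=v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3
```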

@@ -192,7 +192,7 @@ kubectl get vs && kubectl get dr

AND THE RESULT
```
- marinow@mwm1mbp istio-1.16.1 % kubectl get vs && kubectl get dr
+ marinow@mwm1mbp istio-1.19.1 % kubectl get vs && kubectl get dr
NAME GATEWAYS HOSTS AGE
bookinfo ["bookinfo-gateway"] ["*"] 4d15h
productpage ["productpage"] 14h
```