- Helm is a package manager for Kubernetes applications.
- Kubernetes resources and application artifacts can be packaged or bundled using Helm.
- It streamlines the process of installing, upgrading, and managing applications deployed on Kubernetes clusters.
- Helm uses charts, which are packages of pre-configured Kubernetes resources, to simplify the deployment and management of complex applications.
- A chart is a package of pre-configured Kubernetes resources that can be easily deployed.
- It includes YAML manifests describing Kubernetes resources (such as deployments, services, and ingress) and customizable templates for these resources.
- Charts can be versioned and shared, making it easy to distribute and reuse configurations for applications.
- Helm provides a command-line interface (CLI) for interacting with charts and managing Kubernetes applications.
- Developers and operators use the Helm CLI to create, package, install, upgrade, and uninstall charts.
- Helm charts can be stored in a repository, making it easy to share and distribute charts across teams and organizations.
- Helm supports both public and private chart repositories.
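A chart is just a directory with a known layout. As a minimal sketch (the chart name `mychart` and the manifest contents are illustrative, not from any real chart), the same shape that `helm create` scaffolds can be built by hand:

```shell
#!/bin/sh
# Sketch of a minimal Helm chart layout; names and values are illustrative.
set -e
mkdir -p mychart/templates

# Chart.yaml: chart metadata (name, version)
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v2
name: mychart
description: A minimal example chart
version: 0.1.0
EOF

# values.yaml: user-overridable defaults referenced by the templates
cat > mychart/values.yaml <<'EOF'
replicaCount: 1
image:
  repository: nginx
  tag: "1.25"
EOF

# templates/deployment.yaml: a templated Kubernetes manifest
cat > mychart/templates/deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
EOF

ls -R mychart
```

At install time, Helm renders everything under `templates/` with the values from `values.yaml` (or `--set` / `-f` overrides) and applies the result to the cluster.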
- Tiller was the server-side component of Helm in Helm 2. It interacted with the Kubernetes API server to manage releases.
- In Helm 3, Tiller has been removed, and Helm now interacts directly with the Kubernetes API server. This improves security and simplifies Helm's architecture.
- Install the Helm CLI on your local machine. Helm is available for Linux, macOS, and Windows.
- Create a Helm chart to define the structure and configuration of your application.
- Package the chart into a compressed archive (.tgz file).
- Install the chart on a Kubernetes cluster using the Helm CLI. Helm will create a release, which is an instance of a chart running on the cluster.
- Use Helm to upgrade or roll back releases as needed. This allows you to make changes to your application's configuration or deploy new versions.
- Explore public or private Helm repositories to discover and use charts created by others.
Helm 3, the latest version of Helm, introduced several improvements and changes, including the removal of Tiller. In Helm 3, Helm directly interacts with the Kubernetes API server, enhancing security and simplifying Helm's architecture.
To get started with Helm 3, you can use the following commands:
# Note: `helm init` was a Helm 2 command (it set up Tiller); it is neither needed nor available in Helm 3
# Create a new Helm chart
helm create mychart
# Install a chart
helm install my-release ./mychart
# Upgrade a release
helm upgrade my-release ./mychart
# Uninstall a release
helm uninstall my-release
- Helm is widely used in the Kubernetes ecosystem to manage the deployment and lifecycle of applications, making it easier to package, version, and share Kubernetes configurations.
https://helm.sh/docs/intro/install/
# Create a Namespace "monitoring"
kubectl create namespace monitoring
# Add the Helm repository for Prometheus
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# Update the repositories
helm repo update
# List releases in all namespaces
helm ls -A
# Install the kube-prometheus-stack chart as a release named "prometheus"
helm install prometheus prometheus-community/kube-prometheus-stack -n monitoring
## Gives us the output like
NAME: prometheus
LAST DEPLOYED: Fri Dec 22 13:55:57 2023
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=prometheus"
Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
## Switch the namespace to `monitoring` [Refer to kubens installation below]
kubens monitoring
## Check out the pods in the monitoring namespace
kubectl get pods
## Gives out the response as
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 9m57s
prometheus-grafana-ff7876654-rqxrs 3/3 Running 0 10m
prometheus-kube-prometheus-operator-5f84b5dc75-2qkjh 1/1 Running 0 10m
prometheus-kube-state-metrics-6bbff75769-shznd 1/1 Running 0 10m
prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 0 9m57s
prometheus-prometheus-node-exporter-q5xd8 1/1 Running 0 10m
## It not only creates the Pods but also other resource types such as Deployments, ReplicaSets, DaemonSets, etc.
## We will expose our services for the Prometheus and Grafana
kubectl expose service prometheus-grafana --type=NodePort --name=grafana-lb --port=3000 --target-port=3000 -n monitoring
kubectl expose service prometheus-kube-prometheus-prometheus --type=NodePort --name=prometheus-lb -n monitoring
## Observe the services
kubectl get svc
## Output as
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana-lb NodePort 10.108.102.55 <none> 3000:32352/TCP 18s
prometheus-lb NodePort 10.110.87.71 <none> 9090:31993/TCP,8080:31641/TCP 18s
- Prometheus is running on NodePort `31993`; since we have not specified a nodePort, Kubernetes randomly assigns one from the range 30000 - 32767.
- Access it in the browser using the Minikube IP.
# To get the Minikube IP
minikube ip
# Gives out the IP
192.168.49.2
# MINIKUBE IP: NODE_PORT
192.168.49.2:31993
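Instead of reading the NodePort off the table, it can be extracted programmatically. A sketch using the sample `kubectl get svc` output from above (on a live cluster you would pipe `kubectl get svc prometheus-lb -n monitoring --no-headers` into the same pipeline):

```shell
#!/bin/sh
# Build the Prometheus URL from the Minikube IP and the service's NodePort.
# The heredoc-style sample line mirrors the `kubectl get svc` output above.
svc_output='prometheus-lb   NodePort   10.110.87.71   <none>   9090:31993/TCP,8080:31641/TCP   18s'

# PORT(S) is the 5th column; the NodePort is the number after the colon in "9090:31993/TCP"
node_port=$(printf '%s\n' "$svc_output" | awk '{print $5}' | sed 's/^[0-9]*:\([0-9]*\)\/.*/\1/')
minikube_ip=192.168.49.2   # value from `minikube ip` above

echo "http://$minikube_ip:$node_port"   # → http://192.168.49.2:31993
```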
- It is a third-party product built by other engineers that we use and deploy; such software is called OPEN SOURCE.
- Prometheus does NOT monitor the services/pods directly.
- Prometheus only monitors the Endpoints; individual pods may come and go, but Prometheus watches the endpoint entries.
- For an application to be monitored by Prometheus, the application must follow certain rules (for example, exposing metrics in a format Prometheus can scrape).
# Let's look at the HPA; its pods are tracked through the Endpoints
kubectl get hpa -n default
# Whenever the deployment scales, pods are added and their entries are maintained in the Endpoints; since Prometheus monitors the endpoints, it automatically picks up the updated pod entries, which we can see in Prometheus as well
kubectl get ep -n default
NAME ENDPOINTS AGE
myapp-production-service 10.244.0.66:9000 9d
- Accessing the Grafana dashboard follows the same steps as for Prometheus.
- Get the NODEPORT from the service
kubectl get svc grafana-lb -n monitoring
- We will be getting it as
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana-lb NodePort 10.108.102.55 <none> 3000:32352/TCP 29m
- The Grafana dashboard will run on NodePort `32352` and can be accessed using the Minikube IP.
- Access it in the browser using the Minikube IP and NodePort
# minikubeip:NODEPORT
192.168.49.2:32352
- It will pop the Grafana Dashboard
- Login with the credentials like
username: admin
password: prom-operator / admin
- Grafana acts as another layer on top of Prometheus.
- The flow works as: an application running on the K8s cluster => Prometheus monitoring the cluster [only an application that adheres to the rules can be monitored by Prometheus] => Grafana works on top of Prometheus.
- For alerting we have other layers, such as OpsGenie => responsible for automating the calls, so DevOps folks can do on-call changes.
- Remember the flow from automated calls to K8s endpoints:
- Automated calls [on-call] => Grafana dashboard => built using Prometheus metrics => gets data from Endpoints => the Endpoints are monitored by Prometheus.
- On the dashboard we can have monitoring tabs for PODS, DEPLOYMENTS, and various other resources.
- The K8s infrastructure and the underlying application require some integrations so as to get the application metrics into Prometheus.
- The required integrations are the `Prometheus Object` and the `Service Monitoring Object [SMO]`.
- The `Prometheus Object` is taken care of by Helm (Helm ensures that the Prometheus object is created when installing the chart).
- `Prometheus Object` creation is a one-time task.
- The `Service Monitoring Object` needs to be created by us for the underlying application.
- In the `SMO` object we need to define the POD and the ENDPOINT (PORT).
- SMO objects can keep increasing for the UI, app layer, DB, Nginx, and so on.
- We have it present under the file named smon.yaml, the contents of which are as follows:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: myapp-production
  namespace: default
  labels:
    name: myapp-production
spec:
  endpoints:
  - interval: 30s
    port: web
  selector:
    matchLabels:
      app: myapp-production
- In the context of Kubernetes monitoring, a `ServiceMonitor` is an object used by Prometheus to define how it should monitor a specific service.
- This YAML file contains configuration details for Prometheus to scrape metrics from a service named `myapp-production` in the `default` namespace.
- `spec`: contains the specification or configuration for the ServiceMonitor.
- `endpoints`: defines the endpoints that Prometheus should scrape metrics from.
- `interval`: specifies the interval at which Prometheus should scrape metrics from the specified endpoint. In this case, it's set to 30s.
- `port`: specifies the service port from which Prometheus should scrape metrics. In this example, it's named web.
- `selector`: specifies the selector used to identify the target service.
- `matchLabels`: specifies the labels that Prometheus should use to identify the target service. In this example, it's looking for a service with the label app: myapp-production.
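The key point is that `matchLabels` must match labels actually present on the target Service. A quick pure-shell sketch of that matching logic; the embedded label snippet is an illustrative stand-in for what `kubectl get svc myapp-production-service -o yaml` would show:

```shell
#!/bin/sh
# Illustrative check: does the Service carry the label the ServiceMonitor selects on?
# Label values mirror the smon.yaml example; the extra "tier" label is made up.

# Label the ServiceMonitor selects on (from spec.selector.matchLabels)
want='app: myapp-production'

# Labels on the target Service (stand-in for: kubectl get svc ... -o yaml)
service_labels='
  labels:
    app: myapp-production
    tier: backend
'

if printf '%s\n' "$service_labels" | grep -q "^ *${want}\$"; then
  echo "match: Prometheus will scrape this service"
else
  echo "no match: fix the labels or the selector"
fi
```

If the labels and selector do not line up, the ServiceMonitor silently selects nothing and no targets appear in Prometheus.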
Apply the smon.yaml file
kubectl apply -f smon.yaml
## Get the SMO object
kubectl get smon -n default
- At this point the new ServiceMonitor is not picked up by Prometheus yet; the Prometheus object's selectors need to be updated first.
- Switch to the monitoring namespace
kubens monitoring
- Get the Prometheus Object and Edit the Details
kubectl edit prometheus prometheus-kube-prometheus-prometheus
- Replace the following snippet
### Replace the following code
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector:
    matchLabels:
      release: prometheus
### with the following code
  serviceMonitorNamespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: default
  serviceMonitorSelector: {}
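As an alternative to interactive `kubectl edit`, the same selector change can be sketched as a non-interactive merge patch (resource and namespace names match the install above; the patch-file path is arbitrary, and the script only calls kubectl if it is present):

```shell
#!/bin/sh
# Apply the selector change from the notes above as a merge patch.
set -e
cat > /tmp/prom-selector-patch.yaml <<'EOF'
spec:
  serviceMonitorNamespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: default
  serviceMonitorSelector: {}
EOF

# Guarded: only patch if kubectl (and a cluster) is actually available
if command -v kubectl >/dev/null 2>&1; then
  kubectl patch prometheus prometheus-kube-prometheus-prometheus \
    -n monitoring --type merge --patch-file /tmp/prom-selector-patch.yaml
else
  echo "kubectl not found; patch written to /tmp/prom-selector-patch.yaml"
fi
```

A merge patch only overwrites the fields listed in the patch file, leaving the rest of the Prometheus object untouched.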
### Delete the operator pod so it restarts and reconciles the change
kubectl get pods
kubectl delete pods prometheus-kube-prometheus-operator-<hash>
### new pods should come up in some time
kubectl get pods
### Verify it in the Prometheus Dashboard
- K8s provides logging, but a monitoring dashboard and a default monitoring setup are things it does not ship with.
- A monitoring setup is essential in production; it provides automated alerts and tooling that can call us based on pre-filled automation rules. Prometheus and Grafana provide the complete setup/picture that is missing from Kubernetes.
- So we can do data analysis on the logs and can monitor logs even from previous years.
- Kubernetes itself does not directly manage or store node-level logs; it delegates this responsibility to the underlying container runtime. Therefore, the exact location and method for accessing logs depend on the container runtime in use in your Kubernetes cluster.
- In case the container runtime is "Docker", logs are stored under /var/lib/docker/containers/<container-id>/<container-id>-json.log
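With Docker's json-file driver, each line of that file is a JSON object with `log`, `stream`, and `time` fields. A small sketch of pulling the fields back out; the sample line is fabricated for illustration, and on a real node you would read the `<container-id>-json.log` file instead:

```shell
#!/bin/sh
# Parse one line of Docker's json-file log format (sample line is made up;
# real entries also carry a trailing "\n" inside the log value).
line='{"log":"GET /healthz 200","stream":"stdout","time":"2023-12-22T13:55:57Z"}'

# Naive sed-based field extraction (fine for a sketch; use jq for real parsing)
msg=$(printf '%s\n' "$line" | sed 's/.*"log":"\([^"]*\)".*/\1/')
stream=$(printf '%s\n' "$line" | sed 's/.*"stream":"\([^"]*\)".*/\1/')

printf '%s (%s)\n' "$msg" "$stream"   # → GET /healthz 200 (stdout)
```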
- kubens is a command-line utility that helps you switch between Kubernetes namespaces quickly. It is part of the kubectx project, which provides enhancements for working with Kubernetes contexts and namespaces. The kubectx project includes two main tools: kubectx and kubens.
- kubens (Kube Namespace Switcher):
  - The kubens tool simplifies the process of switching between Kubernetes namespaces.
  - It provides an easy-to-use command to list available namespaces and switch to a different namespace.
  - The primary goal is to streamline namespace-related operations, making it more convenient for users who work with multiple namespaces in Kubernetes clusters.
- Clone the kubectx repo from GitHub:
git clone https://github.com/ahmetb/kubectx.git ~/.kubectx
- Add the following lines to your shell profile file (e.g., ~/.bashrc, ~/.zshrc, etc.):
export PATH=~/.kubectx:$PATH
alias kubectx='kubectx'
alias kubens='kubens'
- Source the updated profile:
source ~/.bashrc