diff --git a/docs/guides/kafka/autoscaler/_index.md b/docs/guides/kafka/autoscaler/_index.md
new file mode 100644
index 000000000..22a1e3830
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/_index.md
@@ -0,0 +1,10 @@
+---
+title: Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-auto-scaling
+    name: Autoscaling
+    parent: kf-kafka-guides
+    weight: 46
+menu_name: docs_{{ .version }}
+---
\ No newline at end of file
diff --git a/docs/guides/kafka/autoscaler/compute/_index.md b/docs/guides/kafka/autoscaler/compute/_index.md
new file mode 100644
index 000000000..78729bab8
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/compute/_index.md
@@ -0,0 +1,10 @@
+---
+title: Compute Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-compute-auto-scaling
+    name: Compute Autoscaling
+    parent: kf-auto-scaling
+    weight: 46
+menu_name: docs_{{ .version }}
+---
diff --git a/docs/guides/kafka/autoscaler/compute/combined.md b/docs/guides/kafka/autoscaler/compute/combined.md
new file mode 100644
index 000000000..6e8f80e8d
--- /dev/null
+++ b/docs/guides/kafka/autoscaler/compute/combined.md
@@ -0,0 +1,535 @@
+---
+title: Kafka Combined Autoscaling
+menu:
+  docs_{{ .version }}:
+    identifier: kf-auto-scaling-combined
+    name: Combined Cluster
+    parent: kf-compute-auto-scaling
+    weight: 20
+menu_name: docs_{{ .version }}
+section_menu_id: guides
+---
+
+> New to KubeDB? Please start [here](/docs/README.md).
+
+# Autoscaling the Compute Resource of a Kafka Combined Cluster
+
+This guide will show you how to use `KubeDB` to autoscale the compute resources, i.e. cpu and memory, of a Kafka combined cluster.
+
+## Before You Begin
+
+- At first, you need to have a Kubernetes cluster, and the `kubectl` command-line tool must be configured to communicate with your cluster.
+
+- Install `KubeDB` Provisioner, Ops-manager and Autoscaler operator in your cluster following the steps [here](/docs/setup/README.md).
+
+- Install `Metrics Server` from [here](https://github.com/kubernetes-sigs/metrics-server#installation).
+
+- You should be familiar with the following `KubeDB` concepts:
+  - [Kafka](/docs/guides/kafka/concepts/kafka.md)
+  - [KafkaAutoscaler](/docs/guides/kafka/concepts/kafkaautoscaler.md)
+  - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md)
+  - [Compute Resource Autoscaling Overview](/docs/guides/kafka/autoscaler/compute/overview.md)
+
+To keep everything isolated, we are going to use a separate namespace called `demo` throughout this tutorial.
+
+```bash
+$ kubectl create ns demo
+namespace/demo created
+```
+
+> **Note:** YAML files used in this tutorial are stored in the [docs/examples/kafka](/docs/examples/kafka) directory of the [kubedb/docs](https://github.com/kubedb/docs) repository.
+
+## Autoscaling of Combined Cluster
+
+Here, we are going to deploy a `Kafka` combined cluster using a version supported by the `KubeDB` operator. Then we are going to apply a `KafkaAutoscaler` to set up autoscaling.
+
+#### Deploy Kafka Combined Cluster
+
+In this section, we are going to deploy a Kafka combined cluster with version `3.6.1`. Then, in the next section, we will set up autoscaling for this database using the `KafkaAutoscaler` CRD.
Below is the YAML of the `Kafka` CR that we are going to create, + +```yaml +apiVersion: kubedb.com/v1 +kind: Kafka +metadata: + name: mg-rs + namespace: demo +spec: + version: "4.4.26" + replicaSet: + name: "replicaset" + replicas: 3 + storageType: Durable + storage: + resources: + requests: + storage: 1Gi + podTemplate: + spec: + containers: + - name: mongo + resources: + requests: + cpu: "200m" + memory: "300Mi" + limits: + cpu: "200m" + memory: "300Mi" + deletionPolicy: WipeOut + +``` + +Let's create the `Kafka` CRO we have shown above, + +```bash +$ kubectl create -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-rs.yaml +mongodb.kubedb.com/mg-rs created +``` + +Now, wait until `mg-rs` has status `Ready`. i.e, + +```bash +$ kubectl get mg -n demo +NAME VERSION STATUS AGE +mg-rs 4.4.26 Ready 2m53s +``` + +Let's check the Pod containers resources, + +```bash +$ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +Let's check the Kafka resources, +```bash +$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "200m", + "memory": "300Mi" + }, + "requests": { + "cpu": "200m", + "memory": "300Mi" + } +} +``` + +You can see from the above outputs that the resources are same as the one we have assigned while deploying the mongodb. + +We are now ready to apply the `KafkaAutoscaler` CRO to set up autoscaling for this database. + +### Compute Resource Autoscaling + +Here, we are going to set up compute resource autoscaling using a KafkaAutoscaler Object. + +#### Create KafkaAutoscaler Object + +In order to set up compute resource autoscaling for this replicaset database, we have to create a `KafkaAutoscaler` CRO with our desired configuration. Below is the YAML of the `KafkaAutoscaler` object that we are going to create, + +```yaml +apiVersion: autoscaling.kubedb.com/v1alpha1 +kind: KafkaAutoscaler +metadata: + name: mg-as-rs + namespace: demo +spec: + databaseRef: + name: mg-rs + opsRequestOptions: + timeout: 3m + apply: IfReady + compute: + replicaSet: + trigger: "On" + podLifeTimeThreshold: 5m + resourceDiffPercentage: 20 + minAllowed: + cpu: 400m + memory: 400Mi + maxAllowed: + cpu: 1 + memory: 1Gi + controlledResources: ["cpu", "memory"] + containerControlledValues: "RequestsAndLimits" +``` + +Here, + +- `spec.databaseRef.name` specifies that we are performing compute resource scaling operation on `mg-rs` database. +- `spec.compute.replicaSet.trigger` specifies that compute autoscaling is enabled for this database. +- `spec.compute.replicaSet.podLifeTimeThreshold` specifies the minimum lifetime for at least one of the pod to initiate a vertical scaling. +- `spec.compute.replicaset.resourceDiffPercentage` specifies the minimum resource difference in percentage. The default is 10%. + If the difference between current & recommended resource is less than ResourceDiffPercentage, Autoscaler Operator will ignore the updating. +- `spec.compute.replicaSet.minAllowed` specifies the minimum allowed resources for the database. +- `spec.compute.replicaSet.maxAllowed` specifies the maximum allowed resources for the database. +- `spec.compute.replicaSet.controlledResources` specifies the resources that are controlled by the autoscaler. +- `spec.compute.replicaSet.containerControlledValues` specifies which resource values should be controlled. 
The default is "RequestsAndLimits". +- `spec.opsRequestOptions` contains the options to pass to the created OpsRequest. It has 3 fields. Know more about them here : [readinessCriteria](/docs/guides/mongodb/concepts/opsrequest.md#specreadinesscriteria), [timeout](/docs/guides/mongodb/concepts/opsrequest.md#spectimeout), [apply](/docs/guides/mongodb/concepts/opsrequest.md#specapply). + +If it was an `InMemory database`, we could also autoscaler the inMemory resources using Kafka compute autoscaler, like below. + +#### Autoscale inMemory database +To autoscale inMemory databases, you need to specify the `spec.compute.replicaSet.inMemoryStorage` section. + +```yaml + ... + inMemoryStorage: + usageThresholdPercentage: 80 + scalingFactorPercentage: 30 + ... +``` +It has two fields inside it. +- `usageThresholdPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased. Default usage threshold is 70%. +- `scalingFactorPercentage`. If db uses more than usageThresholdPercentage of the total memory, memoryStorage should be increased by this given scaling percentage. Default scaling percentage is 50%. + +> Note: To inform you, We use `db.serverStatus().inMemory.cache["bytes currently in the cache"]` & `db.serverStatus().inMemory.cache["maximum bytes configured"]` to calculate the used & maximum inMemory storage respectively. + +Let's create the `KafkaAutoscaler` CR we have shown above, + +```bash +$ kubectl apply -f https://github.com/kubedb/docs/raw/{{< param "info.version" >}}/docs/examples/mongodb/autoscaling/compute/mg-as-rs.yaml +mongodbautoscaler.autoscaling.kubedb.com/mg-as-rs created +``` + +#### Verify Autoscaling is set up successfully + +Let's check that the `mongodbautoscaler` resource is created successfully, + +```bash +$ kubectl get mongodbautoscaler -n demo +NAME AGE +mg-as-rs 102s + +$ kubectl describe mongodbautoscaler mg-as-rs -n demo +Name: mg-as-rs +Namespace: demo +Labels: +Annotations: +API Version: autoscaling.kubedb.com/v1alpha1 +Kind: KafkaAutoscaler +Metadata: + Creation Timestamp: 2022-10-27T06:56:34Z + Generation: 1 + Managed Fields: + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:annotations: + .: + f:kubectl.kubernetes.io/last-applied-configuration: + f:spec: + .: + f:compute: + .: + f:replicaSet: + .: + f:containerControlledValues: + f:controlledResources: + f:maxAllowed: + .: + f:cpu: + f:memory: + f:minAllowed: + .: + f:cpu: + f:memory: + f:podLifeTimeThreshold: + f:resourceDiffPercentage: + f:trigger: + f:databaseRef: + f:opsRequestOptions: + .: + f:apply: + f:timeout: + Manager: kubectl-client-side-apply + Operation: Update + Time: 2022-10-27T06:56:34Z + API Version: autoscaling.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:checkpoints: + f:conditions: + f:vpas: + Manager: kubedb-autoscaler + Operation: Update + Subresource: status + Time: 2022-10-27T07:01:05Z + Resource Version: 640314 + UID: ab03414a-67a2-4da4-8960-6e67ae56b503 +Spec: + Compute: + Replica Set: + Container Controlled Values: RequestsAndLimits + Controlled Resources: + cpu + memory + Max Allowed: + Cpu: 1 + Memory: 1Gi + Min Allowed: + Cpu: 400m + Memory: 400Mi + Pod Life Time Threshold: 5m0s + Resource Diff Percentage: 20 + Trigger: On + Database Ref: + Name: mg-rs + Ops Request Options: + Apply: IfReady + Timeout: 3m0s +Status: + Checkpoints: + Cpu Histogram: + Bucket Weights: + Index: 2 + Weight: 10000 + Index: 3 + Weight: 5000 + Reference Timestamp: 
2022-10-27T00:00:00Z + Total Weight: 0.3673624107285783 + First Sample Start: 2022-10-27T07:00:42Z + Last Sample Start: 2022-10-27T07:00:55Z + Last Update Time: 2022-10-27T07:01:00Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: mongodb + Vpa Object Name: mg-rs + Total Samples Count: 3 + Version: v3 + Cpu Histogram: + Bucket Weights: + Index: 0 + Weight: 10000 + Reference Timestamp: 2022-10-27T00:00:00Z + Total Weight: 0.3673624107285783 + First Sample Start: 2022-10-27T07:00:42Z + Last Sample Start: 2022-10-27T07:00:55Z + Last Update Time: 2022-10-27T07:01:00Z + Memory Histogram: + Reference Timestamp: 2022-10-28T00:00:00Z + Ref: + Container Name: replication-mode-detector + Vpa Object Name: mg-rs + Total Samples Count: 3 + Version: v3 + Conditions: + Last Transition Time: 2022-10-27T07:01:05Z + Message: Successfully created mongoDBOpsRequest demo/mops-mg-rs-cxhsy1 + Observed Generation: 1 + Reason: CreateOpsRequest + Status: True + Type: CreateOpsRequest + Vpas: + Conditions: + Last Transition Time: 2022-10-27T07:01:00Z + Status: True + Type: RecommendationProvided + Recommendation: + Container Recommendations: + Container Name: mongodb + Lower Bound: + Cpu: 400m + Memory: 400Mi + Target: + Cpu: 400m + Memory: 400Mi + Uncapped Target: + Cpu: 49m + Memory: 262144k + Upper Bound: + Cpu: 1 + Memory: 1Gi + Vpa Name: mg-rs +Events: +``` +So, the `mongodbautoscaler` resource is created successfully. + +you can see in the `Status.VPAs.Recommendation` section, that recommendation has been generated for our database. Our autoscaler operator continuously watches the recommendation generated and creates an `mongodbopsrequest` based on the recommendations, if the database pods are needed to scaled up or down. + +Let's watch the `mongodbopsrequest` in the demo namespace to see if any `mongodbopsrequest` object is created. After some time you'll see that a `mongodbopsrequest` will be created based on the recommendation. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-cxhsy1 VerticalScaling Progressing 10s +``` + +Let's wait for the ops request to become successful. + +```bash +$ watch kubectl get mongodbopsrequest -n demo +Every 2.0s: kubectl get mongodbopsrequest -n demo +NAME TYPE STATUS AGE +mops-mg-rs-cxhsy1 VerticalScaling Successful 68s +``` + +We can see from the above output that the `KafkaOpsRequest` has succeeded. If we describe the `KafkaOpsRequest` we will get an overview of the steps that were followed to scale the database. 
+ +```bash +$ kubectl describe mongodbopsrequest -n demo mops-mg-rs-cxhsy1 +Name: mops-mg-rs-cxhsy1 +Namespace: demo +Labels: +Annotations: +API Version: ops.kubedb.com/v1alpha1 +Kind: KafkaOpsRequest +Metadata: + Creation Timestamp: 2022-10-27T07:01:05Z + Generation: 1 + Managed Fields: + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:metadata: + f:ownerReferences: + .: + k:{"uid":"ab03414a-67a2-4da4-8960-6e67ae56b503"}: + f:spec: + .: + f:apply: + f:databaseRef: + f:timeout: + f:type: + f:verticalScaling: + .: + f:replicaSet: + .: + f:limits: + .: + f:cpu: + f:memory: + f:requests: + .: + f:cpu: + f:memory: + Manager: kubedb-autoscaler + Operation: Update + Time: 2022-10-27T07:01:05Z + API Version: ops.kubedb.com/v1alpha1 + Fields Type: FieldsV1 + fieldsV1: + f:status: + .: + f:conditions: + f:observedGeneration: + f:phase: + Manager: kubedb-ops-manager + Operation: Update + Subresource: status + Time: 2022-10-27T07:02:31Z + Owner References: + API Version: autoscaling.kubedb.com/v1alpha1 + Block Owner Deletion: true + Controller: true + Kind: KafkaAutoscaler + Name: mg-as-rs + UID: ab03414a-67a2-4da4-8960-6e67ae56b503 + Resource Version: 640598 + UID: f7c6db00-dd0e-4850-8bad-5f0855ce3850 +Spec: + Apply: IfReady + Database Ref: + Name: mg-rs + Timeout: 3m0s + Type: VerticalScaling + Vertical Scaling: + Replica Set: + Limits: + Cpu: 400m + Memory: 400Mi + Requests: + Cpu: 400m + Memory: 400Mi +Status: + Conditions: + Last Transition Time: 2022-10-27T07:01:05Z + Message: Kafka ops request is vertically scaling database + Observed Generation: 1 + Reason: VerticalScaling + Status: True + Type: VerticalScaling + Last Transition Time: 2022-10-27T07:02:30Z + Message: Successfully Vertically Scaled Replicaset Resources + Observed Generation: 1 + Reason: UpdateReplicaSetResources + Status: True + Type: UpdateReplicaSetResources + Last Transition Time: 2022-10-27T07:02:31Z + Message: Successfully Vertically Scaled Database + Observed Generation: 1 + Reason: Successful + Status: True + Type: Successful + Observed Generation: 1 + Phase: Successful +Events: + Type Reason Age From Message + ---- ------ ---- ---- ------- + Normal PauseDatabase 4m9s KubeDB Ops-manager Operator Pausing Kafka demo/mg-rs + Normal PauseDatabase 4m9s KubeDB Ops-manager Operator Successfully paused Kafka demo/mg-rs + Normal Starting 4m9s KubeDB Ops-manager Operator Updating Resources of PetSet: mg-rs + Normal UpdateReplicaSetResources 4m9s KubeDB Ops-manager Operator Successfully updated replicaset Resources + Normal Starting 4m9s KubeDB Ops-manager Operator Updating Resources of PetSet: mg-rs + Normal UpdateReplicaSetResources 4m9s KubeDB Ops-manager Operator Successfully updated replicaset Resources + Normal UpdateReplicaSetResources 2m44s KubeDB Ops-manager Operator Successfully Vertically Scaled Replicaset Resources + Normal ResumeDatabase 2m43s KubeDB Ops-manager Operator Resuming Kafka demo/mg-rs + Normal ResumeDatabase 2m43s KubeDB Ops-manager Operator Successfully resumed Kafka demo/mg-rs + Normal Successful 2m43s KubeDB Ops-manager Operator Successfully Vertically Scaled Database + Normal UpdateReplicaSetResources 2m43s KubeDB Ops-manager Operator Successfully Vertically Scaled Replicaset Resources + +``` + +Now, we are going to verify from the Pod, and the Kafka yaml whether the resources of the replicaset database has updated to meet up the desired state, Let's check, + +```bash +$ kubectl get pod -n demo mg-rs-0 -o json | jq '.spec.containers[].resources' +{ + "limits": { + "cpu": 
"400m", + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} + +$ kubectl get mongodb -n demo mg-rs -o json | jq '.spec.podTemplate.spec.resources' +{ + "limits": { + "cpu": "400m", + "memory": "400Mi" + }, + "requests": { + "cpu": "400m", + "memory": "400Mi" + } +} +``` + + +The above output verifies that we have successfully auto scaled the resources of the Kafka replicaset database. + +## Cleaning Up + +To clean up the Kubernetes resources created by this tutorial, run: + +```bash +kubectl delete mg -n demo mg-rs +kubectl delete mongodbautoscaler -n demo mg-as-rs +``` \ No newline at end of file diff --git a/docs/guides/kafka/autoscaler/compute/overview.md b/docs/guides/kafka/autoscaler/compute/overview.md new file mode 100644 index 000000000..529d7d4cc --- /dev/null +++ b/docs/guides/kafka/autoscaler/compute/overview.md @@ -0,0 +1,55 @@ +--- +title: Kafka Compute Autoscaling Overview +menu: + docs_{{ .version }}: + identifier: mg-auto-scaling-overview + name: Overview + parent: mg-compute-auto-scaling + weight: 10 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# Kafka Compute Resource Autoscaling + +This guide will give an overview on how KubeDB Autoscaler operator autoscales the database compute resources i.e. cpu and memory using `kafkaautoscaler` crd. + +## Before You Begin + +- You should be familiar with the following `KubeDB` concepts: + - [Kafka](/docs/guides/mongodb/concepts/mongodb.md) + - [KafkaAutoscaler](/docs/guides/kafka/concepts/autoscaler.md) + - [KafkaOpsRequest](/docs/guides/kafka/concepts/kafkaopsrequest.md) + +## How Compute Autoscaling Works + +The following diagram shows how KubeDB Autoscaler operator autoscales the resources of `Kafka` database components. Open the image in a new tab to see the enlarged version. + +
+  Compute Auto Scaling process of Kafka +
Fig: Compute Auto Scaling process of Kafka
+
+ +The Auto Scaling process consists of the following steps: + +1. At first, a user creates a `Kafka` Custom Resource Object (CRO). + +2. `KubeDB` Provisioner operator watches the `Kafka` CRO. + +3. When the operator finds a `Kafka` CRO, it creates required number of `PetSets` and related necessary stuff like secrets, services, etc. + +4. Then, in order to set up autoscaling of the various components (ie. Combined, Broker, Controller) of the `Kafka` cluster the user creates a `KafkaAutoscaler` CRO with desired configuration. + +5. `KubeDB` Autoscaler operator watches the `KafkaAutoscaler` CRO. + +6. `KubeDB` Autoscaler operator generates recommendation using the modified version of kubernetes [official recommender](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler/pkg/recommender) for different components of the database, as specified in the `KafkaAutoscaler` CRO. + +7. If the generated recommendation doesn't match the current resources of the database, then `KubeDB` Autoscaler operator creates a `KafkaOpsRequest` CRO to scale the database to match the recommendation generated. + +8. `KubeDB` Ops-manager operator watches the `KafkaOpsRequest` CRO. + +9. Then the `KubeDB` Ops-manager operator will scale the database component vertically as specified on the `KafkaOpsRequest` CRO. + +In the next docs, we are going to show a step by step guide on Autoscaling of various Kafka database components using `KafkaAutoscaler` CRD. diff --git a/docs/guides/kafka/concepts/kafkaautoscaler.md b/docs/guides/kafka/concepts/kafkaautoscaler.md new file mode 100644 index 000000000..38448e187 --- /dev/null +++ b/docs/guides/kafka/concepts/kafkaautoscaler.md @@ -0,0 +1,164 @@ +--- +title: KafkaAutoscaler CRD +menu: + docs_{{ .version }}: + identifier: kf-autoscaler-concepts + name: KafkaAutoscaler + parent: kf-concepts-kafka + weight: 26 +menu_name: docs_{{ .version }} +section_menu_id: guides +--- + +> New to KubeDB? Please start [here](/docs/README.md). + +# KafkaAutoscaler + +## What is KafkaAutoscaler + +`KafkaAutoscaler` is a Kubernetes `Custom Resource Definitions` (CRD). It provides a declarative configuration for autoscaling [Kafka](https://www.mongodb.com/) compute resources and storage of database components in a Kubernetes native way. + +## KafkaAutoscaler CRD Specifications + +Like any official Kubernetes resource, a `KafkaAutoscaler` has `TypeMeta`, `ObjectMeta`, `Spec` and `Status` sections. 
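+
+As a quick orientation, the sketch below shows how those sections map onto a `KafkaAutoscaler` manifest. It is only a structural outline, not a complete object: the `metadata` values are placeholders, and the empty `spec` stubs are filled in by the fields described in the rest of this document.
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1   # TypeMeta
+kind: KafkaAutoscaler                         # TypeMeta
+metadata:                   # ObjectMeta: name, namespace, labels, annotations, etc.
+  name: <autoscaler-name>
+  namespace: <namespace>
+spec:
+  databaseRef: {}           # reference to the Kafka object to autoscale
+  opsRequestOptions: {}     # options passed to the internally created KafkaOpsRequest
+  compute: {}               # compute (cpu/memory) autoscaling, per component
+  storage: {}               # storage autoscaling, per component
+# status is maintained by the KubeDB Autoscaler operator and is not set by the user
+```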
+
+Here, some sample `KafkaAutoscaler` CROs for autoscaling different components of the database are given below:
+
+**Sample `KafkaAutoscaler` for combined cluster:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-autoscaler-combined
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-dev
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    node:
+      trigger: "On"
+      podLifeTimeThreshold: 24h
+      minAllowed:
+        cpu: 250m
+        memory: 350Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+  storage:
+    node:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+**Sample `KafkaAutoscaler` for topology cluster:**
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-autoscaler-topology
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-prod
+  opsRequestOptions:
+    timeout: 3m
+    apply: IfReady
+  compute:
+    broker:
+      trigger: "On"
+      podLifeTimeThreshold: 24h
+      minAllowed:
+        cpu: 200m
+        memory: 300Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+    controller:
+      trigger: "On"
+      podLifeTimeThreshold: 24h
+      minAllowed:
+        cpu: 200m
+        memory: 300Mi
+      maxAllowed:
+        cpu: 1
+        memory: 1Gi
+      controlledResources: ["cpu", "memory"]
+      containerControlledValues: "RequestsAndLimits"
+      resourceDiffPercentage: 10
+  storage:
+    broker:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+    controller:
+      expansionMode: "Online"
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+```
+
+Here, we are going to describe the various sections of a `KafkaAutoscaler` CRD.
+
+A `KafkaAutoscaler` object has the following fields in the `spec` section.
+
+### spec.databaseRef
+
+`spec.databaseRef` is a required field that points to the [Kafka](/docs/guides/kafka/concepts/kafka.md) object for which the autoscaling will be performed. It consists of the following sub-field:
+
+- **spec.databaseRef.name:** specifies the name of the [Kafka](/docs/guides/kafka/concepts/kafka.md) object.
+
+### spec.opsRequestOptions
+
+These are the options to pass to the internally created OpsRequest CRO. `opsRequestOptions` has two fields: `timeout` and `apply`.
+
+### spec.compute
+
+`spec.compute` specifies the autoscaling configuration for the compute resources, i.e. cpu and memory, of the database components. It consists of the following sub-fields:
+
+- `spec.compute.node` indicates the desired compute autoscaling configuration for a combined Kafka cluster.
+- `spec.compute.broker` indicates the desired compute autoscaling configuration for the brokers of a topology Kafka cluster.
+- `spec.compute.controller` indicates the desired compute autoscaling configuration for the controllers of a topology Kafka cluster.
+
+All of them have the following sub-fields:
+
+- `trigger` indicates whether compute autoscaling is enabled for this component of the database. If "On", compute autoscaling is enabled; if "Off", it is disabled.
+- `minAllowed` specifies the minimum amount of resources that will be recommended; the default is no minimum.
+- `maxAllowed` specifies the maximum amount of resources that will be recommended; the default is no maximum.
+- `controlledResources` specifies which types of compute resources (cpu and memory) are allowed for autoscaling.
Allowed values are "cpu" and "memory".
+- `containerControlledValues` specifies which resource values should be controlled. Allowed values are "RequestsAndLimits" and "RequestsOnly".
+- `resourceDiffPercentage` specifies the minimum resource difference, in percentage, between the recommended value and the current value. If the difference is greater than this value, then autoscaling will be triggered.
+- `podLifeTimeThreshold` specifies the minimum lifetime of at least one of the pods before autoscaling is triggered.
+
+There are two more fields, which are only applicable to the Percona-variant inMemory databases:
+- `inMemoryStorage.usageThresholdPercentage`: if the database uses more than `usageThresholdPercentage` of the total memory, `memoryStorage` should be increased.
+- `inMemoryStorage.scalingFactorPercentage`: if the database uses more than `usageThresholdPercentage` of the total memory, `memoryStorage` should be increased by this scaling percentage.
+
+### spec.storage
+
+`spec.storage` specifies the autoscaling configuration for the storage resources of the database components. It consists of the following sub-fields:
+
+- `spec.storage.node` indicates the desired storage autoscaling configuration for a combined Kafka cluster.
+- `spec.storage.broker` indicates the desired storage autoscaling configuration for the brokers of a topology Kafka cluster.
+- `spec.storage.controller` indicates the desired storage autoscaling configuration for the controllers of a topology Kafka cluster.
+
+All of them have the following sub-fields:
+
+- `trigger` indicates whether storage autoscaling is enabled for this component of the database. If "On", storage autoscaling is enabled; if "Off", it is disabled.
+- `usageThreshold` indicates the usage percentage threshold; if the current storage usage exceeds this threshold, storage autoscaling will be triggered.
+- `scalingThreshold` indicates the percentage of the current storage by which the volume will be expanded when autoscaling is triggered.
+- `expansionMode` indicates the volume expansion mode.
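+
+Putting the storage fields together, a minimal storage-only `KafkaAutoscaler` for a topology cluster might look like the sketch below. This is an illustrative example rather than a manifest from the repository: the object name `kf-storage-autoscaler` is hypothetical, and `kafka-prod` stands for whichever Kafka object you want to autoscale.
+
+```yaml
+apiVersion: autoscaling.kubedb.com/v1alpha1
+kind: KafkaAutoscaler
+metadata:
+  name: kf-storage-autoscaler   # hypothetical name
+  namespace: demo
+spec:
+  databaseRef:
+    name: kafka-prod            # the Kafka object to autoscale
+  storage:
+    broker:
+      trigger: "On"             # enable storage autoscaling for the broker nodes
+      usageThreshold: 60        # trigger when storage usage exceeds 60%
+      scalingThreshold: 50      # expand by 50% of the current storage
+      expansionMode: "Online"   # volume expansion mode, as in the samples above
+    controller:
+      trigger: "On"
+      usageThreshold: 60
+      scalingThreshold: 50
+      expansionMode: "Online"
+```
+
+For a combined cluster, the same fields go under `spec.storage.node` instead of `broker` and `controller`.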