
Predictive Horizontal Pod Autoscaler

This is a Custom Pod Autoscaler that aims to be functionally identical to the Horizontal Pod Autoscaler, but with added predictive elements that use statistical models.

It builds extensively on the Horizontal Pod Autoscaler Custom Pod Autoscaler, which provides most of the standard Horizontal Pod Autoscaler functionality.

Why would I use it?

This autoscaler lets you choose statistical models and fine-tune them to predict how many replicas a resource should have, preempting events such as regular, repeated periods of high load.

Features

  • Functionally identical to Horizontal Pod Autoscaler for calculating replica counts without prediction.
  • Choice of statistical models to apply over the Horizontal Pod Autoscaler replica counting logic (see the sketch after this list).
    • Holt-Winters Smoothing
    • Linear Regression
  • Allows customisation of Kubernetes autoscaling options without master node access. Can therefore work on managed solutions such as EKS or GKE.
    • CPU Initialization Period.
    • Downscale Stabilization.
    • Sync Period.
    • Initial Readiness Delay.
  • Runs in Kubernetes as a standard Pod.
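
As a rough illustration of one of the models named above, here is a minimal, self-contained sketch of additive Holt-Winters (triple exponential) smoothing over a series of replica counts, written in Go. It is not the project's implementation; the actual models, their parameters and how they are configured are covered in the wiki.

// A minimal sketch of additive Holt-Winters smoothing. Everything here
// (function name, initialisation scheme, example data) is illustrative only.
package main

import "fmt"

// holtWintersForecast smooths the series y (one observation per sync period)
// with season length m and returns a forecast h periods past the end of the
// series. alpha, beta and gamma are the usual level, trend and seasonal
// smoothing factors in [0, 1]. The series must hold at least two full seasons.
func holtWintersForecast(y []float64, m, h int, alpha, beta, gamma float64) float64 {
    // Initialise level and trend from the means of the first two seasons.
    var first, second float64
    for i := 0; i < m; i++ {
        first += y[i]
        second += y[m+i]
    }
    first /= float64(m)
    second /= float64(m)
    level := first
    trend := (second - first) / float64(m)

    // Initial seasonal components: deviation of each point in the first
    // season from the first-season mean.
    seasonal := make([]float64, m)
    for i := 0; i < m; i++ {
        seasonal[i] = y[i] - first
    }

    // Standard Holt-Winters recursions over the rest of the series.
    for t := m; t < len(y); t++ {
        s := t % m
        prevLevel := level
        level = alpha*(y[t]-seasonal[s]) + (1-alpha)*(level+trend)
        trend = beta*(level-prevLevel) + (1-beta)*trend
        seasonal[s] = gamma*(y[t]-level) + (1-gamma)*seasonal[s]
    }

    return level + float64(h)*trend + seasonal[(len(y)-1+h)%m]
}

func main() {
    // Hypothetical replica history with a repeating pattern of length 4.
    history := []float64{2, 5, 8, 3, 2, 6, 9, 3, 2, 5, 8, 4}
    fmt.Printf("predicted replicas next period: %.1f\n",
        holtWintersForecast(history, 4, 1, 0.9, 0.1, 0.9))
}

A model like this can anticipate the next spike in a repeating load pattern and raise the replica count before the spike arrives, rather than reacting only after it.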

How does it work?

This project works by calculating the number of replicas a resource should have, then storing these values and applying statistical models to them to produce predictions for the future. These predictions are compared and can be used instead of the raw replica count calculated by the Horizontal Pod Autoscaler logic.
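
The sketch below illustrates that flow under some assumptions: it keeps a timestamped history of calculated replica counts, fits a linear regression (one of the supported models) over it, and takes the larger of the raw and predicted values. The names replicaSample, linearRegressionPredict and decideReplicas are made up for illustration, and the real decision strategy is configurable rather than hard-coded like this.

// Illustrative sketch only; the real evaluation logic lives in this
// repository and the Horizontal Pod Autoscaler CPA it builds on.
package main

import (
    "fmt"
    "time"
)

// replicaSample is one stored Horizontal Pod Autoscaler calculation.
type replicaSample struct {
    when     time.Time
    replicas float64
}

// linearRegressionPredict fits y = a + b*x over the stored samples, with x
// measured in seconds since the first sample, and extrapolates to the given
// future time. It assumes at least two samples at distinct times.
func linearRegressionPredict(history []replicaSample, at time.Time) float64 {
    origin := history[0].when
    n := float64(len(history))
    var sumX, sumY, sumXY, sumXX float64
    for _, s := range history {
        x := s.when.Sub(origin).Seconds()
        sumX += x
        sumY += s.replicas
        sumXY += x * s.replicas
        sumXX += x * x
    }
    b := (n*sumXY - sumX*sumY) / (n*sumXX - sumX*sumX)
    a := (sumY - b*sumX) / n
    return a + b*at.Sub(origin).Seconds()
}

// decideReplicas compares the raw HPA calculation with the model prediction.
// Taking the larger of the two is just one possible strategy.
func decideReplicas(raw float64, history []replicaSample, at time.Time) int32 {
    if predicted := linearRegressionPredict(history, at); predicted > raw {
        raw = predicted
    }
    return int32(raw + 0.5) // round to the nearest whole replica
}

func main() {
    now := time.Now()
    // Hypothetical history: load has been ramping up over the last few syncs.
    history := []replicaSample{
        {now.Add(-4 * time.Minute), 2},
        {now.Add(-3 * time.Minute), 3},
        {now.Add(-2 * time.Minute), 5},
        {now.Add(-1 * time.Minute), 6},
    }
    fmt.Println("replicas to apply:", decideReplicas(4, history, now))
}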

More information

See the wiki for more information, such as guides and references.

See the examples/ directory for working code samples.

Developing this project

Environment

Developing this project requires these dependencies:

Any Python dependencies must be installed by running:

pip install -r requirements-dev.txt

To view the docs locally you need some additional Python dependencies; install them by running:

pip install -r docs/requirements.txt

It is recommended to test changes against a local Kubernetes cluster, for example one created with k3d, which runs a small Kubernetes cluster locally using Docker.

Once you have a cluster available, you should install the Custom Pod Autoscaler Operator (CPAO) onto the cluster to let you install the PHPA.

With the CPAO installed, you can install your development builds of the PHPA onto the cluster by building the image locally and then pushing it into the cluster's registry (with k3d this can be done using the k3d image import command).

Finally you can deploy a PHPA example (see the examples/ directory for choices) to test your changes.

Note that the examples generally use ImagePullPolicy: Always; you may need to change this to ImagePullPolicy: IfNotPresent to use your local build.

Commands

  • make vendor_modules - generates a vendor folder.
  • make - builds the Predictive HPA binary.
  • make docker - builds the Predictive HPA image.
  • make lint - lints the code.
  • make beautify - beautifies the code; must be run to pass the CI.
  • make unittest - runs the unit tests.
  • make doc - hosts the documentation locally, at 127.0.0.1:8000.
  • make view_coverage - opens up any generated coverage reports in the browser.
