This repo is out of date with current versions of Kubernetes and will no longer be actively maintained. The CoreOS Kubernetes documentation has been moved to the tectonic-docs repo, where it will be published and updated.
For tested, maintained, and production-ready Kubernetes instructions, see our Tectonic Installer documentation. The Tectonic Installer provides a Terraform-based Kubernetes installation. It is open source, uses upstream Kubernetes and can be easily customized.
This guide walks a deployer through launching a multi-node Kubernetes cluster on bare metal servers running CoreOS. After completing this guide, a deployer will be able to interact with the Kubernetes API from their workstation using the kubectl CLI tool.
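As a quick illustration (not one of the guide's steps), once kubectl is configured to talk to the new cluster's API server, a first sanity check from the workstation might look like this:

```sh
# Verify that the API server responds and that all machines registered as nodes.
kubectl get nodes
# Confirm the cluster's system pods (e.g. kube-proxy, DNS) are running.
kubectl get pods --all-namespaces
```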
All Kubernetes controllers and nodes must use CoreOS version 962.0.0 or greater for the kubelet-wrapper script to be present in the image. If you wish to use an earlier version (e.g. from the 'stable' channel), see kubelet-wrapper for more information.
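For context, kubelet-wrapper is typically invoked from a systemd unit on each machine. The sketch below is illustrative only; the image tag, environment variable name, and kubelet flags vary across Container Linux and Kubernetes releases:

```sh
# Sketch: a minimal kubelet.service that starts the kubelet via the
# kubelet-wrapper script shipped in the Container Linux image.
# KUBELET_IMAGE_TAG and the kubelet flags below are placeholders.
sudo tee /etc/systemd/system/kubelet.service <<'EOF' >/dev/null
[Service]
Environment=KUBELET_IMAGE_TAG=v1.6.1_coreos.0
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --register-node=true
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now kubelet.service
```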
This configuration uses the flannel overlay network to manage the pod network. Many bare metal configurations may instead have an existing self-managed network. In this scenario, it is common to use Calico to manage pod network policy while omitting the overlay network, and interoperating with existing physical network gear over BGP.
See the Kubernetes networking documentation for more information on self-managed networking options.
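If flannel is used, its network configuration is published to etcd before the overlay starts. A minimal sketch, assuming the default /coreos.com/network/config key and the etcd v2 API (the CIDR and backend type are placeholders for your environment):

```sh
# Store the pod network range and backend for flannel to read at startup.
etcdctl set /coreos.com/network/config \
  '{"Network": "10.2.0.0/16", "Backend": {"Type": "vxlan"}}'
```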
The CoreOS Matchbox project can automate network booting and provisioning of Container Linux clusters. It provides:
- The Matchbox HTTP/gRPC service, which matches machines to configs by hardware attributes and can be installed as a binary, RPM, or container image, or deployed on Kubernetes itself.
- Guides for creating network boot environments with iPXE/GRUB.
- Support for Terraform to allow teams to manage and version bare metal resources.
- Example clusters including an etcd cluster and multi-node Kubernetes cluster.
Get started provisioning machines into clusters or read the docs.
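To give a feel for the Matchbox model, a group file matches a machine (here by MAC address) to a named profile that supplies its boot and provisioning config. The sketch below assumes the default /var/lib/matchbox data directory; the profile name, MAC address, and metadata values are placeholders:

```sh
# Sketch: match one machine to a hypothetical "etcd" profile.
cat <<'EOF' > /var/lib/matchbox/groups/node1.json
{
  "id": "node1",
  "name": "etcd node 1",
  "profile": "etcd",
  "selector": {
    "mac": "52:54:00:a1:9c:ae"
  },
  "metadata": {
    "domain_name": "node1.example.com"
  }
}
EOF
```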
Container Linux bare metal installation documents provide low-level background details about the boot mechanisms.
Mixing multiple methods is possible. For example, you might install to disk on the machines running the etcd cluster and Kubernetes master nodes, but PXE-boot the worker machines.
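For the install-to-disk case, the coreos-install script bundled with Container Linux writes the image to a block device. A rough sketch (the device, channel, and Ignition config path are placeholders for your environment):

```sh
# Install Container Linux from the stable channel to /dev/sda,
# embedding an Ignition config to be applied on first boot.
sudo coreos-install -d /dev/sda -C stable -i ignition.json
sudo reboot
```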
Did you install CoreOS on your machines? An SSH connection to each machine is all that's needed. We'll start the configuration next.
I'm ready to get started