Apply kube-flannel.yml

```sh
kubectl apply -f kube-flannel.yml
```

## Using Azure CNI V1

When following the [quick-start steps in the Cluster API book](https://cluster-api.sigs.k8s.io/user/quick-start.html#quick-start), Azure CNI v1 can be used in place of Calico as the [container networking interface solution](https://cluster-api.sigs.k8s.io/user/quick-start.html#deploy-a-cni-solution) for your workload cluster.

Artifacts required for Azure CNI:

- [azure-cni.yaml](https://raw.githubusercontent.com/Azure/azure-container-networking/v1.5.3/hack/manifests/cni-installer-v1.yaml)
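
Once the workload cluster's kubeconfig is available, this manifest can be applied directly. A minimal sketch, assuming `clusterctl` and the `${CLUSTER_NAME}` variable from the quick start:

```sh
# Fetch the workload cluster kubeconfig (cluster name assumed from the quick start)
clusterctl get kubeconfig ${CLUSTER_NAME} > ${CLUSTER_NAME}.kubeconfig

# Install Azure CNI v1 on the workload cluster
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig apply -f https://raw.githubusercontent.com/Azure/azure-container-networking/v1.5.3/hack/manifests/cni-installer-v1.yaml
```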

### Limitations

- Azure CNI v1 is only supported for Linux nodes. Refer to [CAPZ#3650](https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/3650).

- Only one subnet per control-plane node can be configured. Refer to [CAPZ#3506](https://github.com/kubernetes-sigs/cluster-api-provider-azure/issues/3506).

- Only one Network Interface per worker node can be configured. Refer to [Azure-container-networking#1945](https://github.com/Azure/azure-container-networking/issues/1945).

### Update Cluster Configuration

The following resources need to be updated when using `capi-quickstart.yaml` (the default cluster manifest generated while following the Cluster API quick start). A sketch of applying the updated manifest follows this list.

- `kind: AzureCluster`
  - update `spec.networkSpec.subnets` with the name and role of the subnets you want to use in your workload cluster.

- ```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
metadata:
name: ${CLUSTER_NAME}
namespace: default
spec:
.
.
networkSpec:
subnets:
- name: control-plane-subnet # update this as per your nomenclature
role: control-plane
- name: node-subnet # update this as per your nomenclature
role: node
.
.
```

- `kind: KubeadmControlPlane` of control plane nodes
- add `max-pods: "30"` to `spec.kubeadmConfigSpec.initConfiguration.nodeRegistration.kubeletExtraArgs`.
- add `max-pods: "30"` to `spec.kubeadmConfigSpec.joinConfiguration.nodeRegistration.kubeletExtraArgs`.

- ```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
name: ${CLUSTER_NAME}-control-plane
namespace: default
spec:
kubeadmConfigSpec:
.
.
initConfiguration:
nodeRegistration:
kubeletExtraArgs:
max-pods: "30"
.
.
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
max-pods: "30"
.
.
```

- `kind: AzureMachineTemplate` of the control plane
  - Add `networkInterfaces` to the control plane's `AzureMachineTemplate`. Keep `privateIPConfigs` at or above the `max-pods` value set earlier, since Azure CNI v1 assigns each pod a private IP from the node's subnet.

- ```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureMachineTemplate
metadata:
name: ${CLUSTER_NAME}-control-plane
namespace: default
spec:
template:
spec:
.
.
networkInterfaces:
- privateIPConfigs: 30
subnetName: control-plane-subnet
.
.
```

- `kind: AzureMachineTemplate` of the worker nodes
  - Add `networkInterfaces` to the worker nodes' `AzureMachineTemplate`, mirroring the control plane configuration.

- ```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureMachineTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
namespace: default
spec:
template:
spec:
networkInterfaces:
- privateIPConfigs: 30
subnetName: node-subnet
.
.
```

- `kind: KubeadmConfigTemplate` of the worker nodes
  - add `max-pods: "30"` to `spec.template.spec.joinConfiguration.nodeRegistration.kubeletExtraArgs`.

- ```yaml
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
name: ${CLUSTER_NAME}-md-0
namespace: default
spec:
template:
spec:
.
.
joinConfiguration:
nodeRegistration:
kubeletExtraArgs:
max-pods: "30"
.
.
```
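
Once all of the resources above are updated, apply the modified manifest to the management cluster and let the workload cluster come up. A sketch, assuming the default quick-start file name:

```sh
# Create the workload cluster from the updated manifest
kubectl apply -f capi-quickstart.yaml

# After installing Azure CNI v1 (see the kubectl apply step above), nodes should report Ready
kubectl --kubeconfig=./${CLUSTER_NAME}.kubeconfig get nodes -o wide
```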

# External Cloud Provider

The "external" or "out-of-tree" cloud provider for Azure is the recommended cloud provider for CAPZ clusters. The "in-tree" cloud provider has been deprecated since v1.20 and only bug fixes are allowed in its Kubernetes repository directory.