Update vpc dns #127

Merged (2 commits), Oct 8, 2023
191 changes: 191 additions & 0 deletions docs/advance/vpc-dns.en.md
@@ -0,0 +1,191 @@
# Custom VPC DNS

Because a user-defined VPC is isolated from the default VPC network, the coredns deployed in the default VPC cannot be reached from within a custom VPC. If you want to use the in-cluster domain name resolution provided by Kubernetes inside your custom VPC, you can follow this document and use the vpc-dns CRD.

This CRD ultimately deploys a coredns with two NICs, one in the user-defined VPC and the other in the default VPC, enabling connectivity between the two networks. It is exposed inside the custom VPC through the [custom VPC internal load balancer](./vpc-internal-lb.en.md).

## Deployment of vpc-dns dependent resources

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:vpc-dns
rules:
  - apiGroups:
      - ""
    resources:
      - endpoints
      - services
      - pods
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - discovery.k8s.io
    resources:
      - endpointslices
    verbs:
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: vpc-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:vpc-dns
subjects:
  - kind: ServiceAccount
    name: vpc-dns
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vpc-dns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: vpc-dns-corefile
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            prefer_udp
        }
        cache 30
        loop
        reload
        loadbalance
    }
```

In addition to the above resources, the feature relies on the nat-gw-pod image for routing configuration.

## Configuring Additional Network

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-nad
  namespace: default
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
      "provider": "ovn-nad.default.ovn"
    }'
```
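The `provider` value appears to follow the pattern `<nad name>.<nad namespace>.ovn` (an observation from the example above, not a documented guarantee). Under that assumption, a NAD created with a different, hypothetical name and namespace would be declared like:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: vpc-dns-nad        # hypothetical name
  namespace: kube-system   # hypothetical namespace
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "kube-ovn",
      "server_socket": "/run/openvswitch/kube-ovn-daemon.sock",
      "provider": "vpc-dns-nad.kube-system.ovn"
    }'
```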

## Configuring the Configmap for vpc-dns

Create a configmap in the kube-system namespace to set the vpc-dns parameters; it is used later when the vpc-dns function is started:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vpc-dns-config
  namespace: kube-system
data:
  coredns-vip: 10.96.0.3
  enable-vpc-dns: "true"
  nad-name: ovn-nad
  nad-provider: ovn-nad.default.ovn
```

* `enable-vpc-dns`: enables the vpc dns feature. Defaults to `true`.
* `coredns-image`: the image for the dns deployment. Defaults to the coredns version deployed in the cluster.
* `coredns-vip`: the vip that provides the lb service for coredns.
* `coredns-template`: the URL of the coredns deployment template. Defaults to `coredns-template.yaml` in the yamls directory of the current kube-ovn version, i.e. `https://raw.githubusercontent.com/kubeovn/kube-ovn/<kube-ovn version>/yamls/coredns-template.yaml`.
* `nad-name`: the name of the configured network-attachment-definitions resource.
* `nad-provider`: the name of the provider to use.
* `k8s-service-host`: the ip coredns uses to access the k8s apiserver. Defaults to the in-cluster apiserver address.
* `k8s-service-port`: the port coredns uses to access the k8s apiserver. Defaults to the in-cluster apiserver port.
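For reference, a fuller configmap that also sets some of the optional parameters above might look like this; the image tag and apiserver address are placeholders for your environment, not values prescribed by kube-ovn:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vpc-dns-config
  namespace: kube-system
data:
  enable-vpc-dns: "true"
  coredns-vip: 10.96.0.3
  nad-name: ovn-nad
  nad-provider: ovn-nad.default.ovn
  # Optional overrides below; placeholder example values.
  coredns-image: registry.k8s.io/coredns/coredns:v1.10.1
  k8s-service-host: "10.96.0.1"
  k8s-service-port: "443"
```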

## Deploying vpc-dns

Configure the vpc-dns yaml:

```yaml
kind: VpcDns
apiVersion: kubeovn.io/v1
metadata:
  name: test-cjh1
spec:
  vpc: cjh-vpc-1
  subnet: cjh-subnet-1
  replicas: 2
```

* `vpc`: the name of the vpc where the dns component is deployed.
* `subnet`: the name of the subnet where the dns component is deployed.
* `replicas`: the number of replicas of the vpc dns deployment.

View information about deployed resources:

```bash
# kubectl get vpc-dns
NAME ACTIVE VPC SUBNET
test-cjh1 false cjh-vpc-1 cjh-subnet-1
test-cjh2 true cjh-vpc-1 cjh-subnet-2
```

`ACTIVE`: `true` means the custom dns component has been deployed; `false` means it has not.

Restriction: only one custom dns component will be deployed per VPC.

* When multiple vpc-dns resources are configured under one VPC (i.e. different subnets of the same VPC), only one vpc-dns resource is in the `true` state; the others are `false`.
* When the `true` vpc-dns is deleted, another `false` vpc-dns is picked up and deployed.
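To illustrate the restriction, two vpc-dns resources under the same VPC but different subnets (reusing the example names from the `kubectl get vpc-dns` output above) could be declared together; only one of them becomes `ACTIVE: true`:

```yaml
kind: VpcDns
apiVersion: kubeovn.io/v1
metadata:
  name: test-cjh1
spec:
  vpc: cjh-vpc-1
  subnet: cjh-subnet-1
  replicas: 2
---
kind: VpcDns
apiVersion: kubeovn.io/v1
metadata:
  name: test-cjh2
spec:
  vpc: cjh-vpc-1
  subnet: cjh-subnet-2
  replicas: 2
```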

## Validate deployment results

Use the label `app=vpc-dns` to view the status of all vpc-dns pods:

```bash
# kubectl -n kube-system get pods -l app=vpc-dns
NAME READY STATUS RESTARTS AGE
vpc-dns-test-cjh1-7b878d96b4-g5979 1/1 Running 0 28s
vpc-dns-test-cjh1-7b878d96b4-ltmf9 1/1 Running 0 28s
```

View the status of the switch lb rule:

```bash
# kubectl -n kube-system get slr
NAME VIP PORT(S) SERVICE AGE
vpc-dns-test-cjh1 10.96.0.3 53/UDP,53/TCP,9153/TCP kube-system/slr-vpc-dns-test-cjh1 113s
```

Enter a Pod under this VPC and test dns resolution:

```bash
nslookup kubernetes.default.svc.cluster.local 10.96.0.3
```

Name resolution works both from the subnet where this VPC's switch lb rule resides and from pods in other subnets of the same VPC.
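As a side note, the `nslookup` check above simply sends a standard DNS query datagram to the coredns vip. A minimal sketch of that packet in pure Python follows; the function name and transaction id are illustrative, not part of kube-ovn:

```python
import struct

def build_dns_query(name: str, txn_id: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query datagram (RFC 1035)."""
    # Header: ID, flags (recursion-desired bit set), QDCOUNT=1,
    # ANCOUNT/NSCOUNT/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txn_id, 0x0100, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed, terminated by 0x00.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.rstrip(".").split(".")
    ) + b"\x00"
    # QTYPE=A (1), QCLASS=IN (1).
    question = qname + struct.pack(">HH", 1, 1)
    return header + question

query = build_dns_query("kubernetes.default.svc.cluster.local")
# Sending this datagram over UDP to the coredns vip is what nslookup does:
#   sock.sendto(query, ("10.96.0.3", 53))
```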
33 changes: 4 additions & 29 deletions docs/advance/vpc-dns.md
@@ -103,34 +103,6 @@ spec:
}'
```

## Modifying the provider of the ovn-default Subnet

Change the provider of ovn-default to `ovn-nad.default.ovn`, the provider configured for the nad above

```yaml
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: ovn-default
spec:
  cidrBlock: 10.16.0.0/16
  default: true
  disableGatewayCheck: false
  disableInterConnection: false
  enableDHCP: false
  enableIPv6RA: false
  excludeIps:
    - 10.16.0.1
  gateway: 10.16.0.1
  gatewayType: distributed
  logicalGateway: false
  natOutgoing: true
  private: false
  protocol: IPv4
  provider: ovn-nad.default.ovn  # only this field needs to be changed
  vpc: ovn-cluster
```

## Configuring the Configmap for vpc-dns

Create a configmap in the kube-system namespace to set the vpc-dns parameters; it is used later when the vpc-dns function is started:
@@ -169,10 +141,12 @@ metadata:
spec:
  vpc: cjh-vpc-1
  subnet: cjh-subnet-1
  replicas: 2
```

* `vpc`: the name of the vpc where the dns component is deployed.
* `subnet`: the name of the subnet where the dns component is deployed.
* `replicas`: the number of replicas of the vpc dns deployment.

View information about the deployed resources:

@@ -185,7 +159,8 @@ test-cjh2 true cjh-vpc-1 cjh-subnet-2

`ACTIVE`: `true` means the custom dns component has been deployed; `false` means it has not.

Restriction: only one custom dns component will be deployed per VPC.

* When multiple vpc-dns resources are configured under one VPC (i.e. different subnets of the same VPC), only one vpc-dns resource is in the `true` state; the others are `false`.
* When the `true` vpc-dns is deleted, another `false` vpc-dns is picked up and deployed.
