Implement scheduling/placement asserters #7

Open · jaypipes opened this issue May 21, 2024 · 0 comments
Labels: enhancement (New feature or request)

jaypipes commented May 21, 2024

I'd like an easy way of expressing scheduling assertions for Pods in Kubernetes.

I'm thinking a declarative assertion block that looks like this:

```yaml
tests:
 - kube.get: deployments/nginx
   assert:
     placement:
       spread:
        - kubernetes.io/hostname
        - topology.kubernetes.io/zone
```

would instruct GDT/kube to do the following:

  • fetch the nginx Deployment information, including the Node each Pod in the Deployment is bound to.
  • fetch all the Node information
  • assert that the Pods in the nginx Deployment are evenly spread across the topology domains defined by the kubernetes.io/hostname and topology.kubernetes.io/zone labels (a sketch of the data-gathering step follows this list)
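
A minimal sketch of that data-gathering step, assuming client-go; the `podsPerDomain` helper and its parameters are hypothetical, not gdt-kube's actual code:

```go
package placement

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsPerDomain counts, for each value of topologyKey (e.g. each zone),
// how many Pods matching labelSelector landed on a Node in that domain.
func podsPerDomain(
	ctx context.Context,
	c kubernetes.Interface,
	ns string,
	labelSelector string, // e.g. "app=nginx", the Deployment's selector
	topologyKey string, // e.g. "topology.kubernetes.io/zone"
) (map[string]int, error) {
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
		LabelSelector: labelSelector,
	})
	if err != nil {
		return nil, err
	}
	counts := map[string]int{}
	for _, pod := range pods.Items {
		if pod.Spec.NodeName == "" {
			continue // Pod not yet scheduled; nothing to count
		}
		node, err := c.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
		if err != nil {
			return nil, err
		}
		counts[node.Labels[topologyKey]]++
	}
	return counts, nil
}
```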
And a block that combines `spread` with `pack`, like this:

```yaml
tests:
 - kube.get: deployments/nginx
   assert:
     placement:
       spread: topology.kubernetes.io/zone
       pack: kubernetes.io/hostname
```

would instruct GDT/kube to do the following:

  • fetch the nginx Deployment information, including the Node each Pod in the Deployment is bound to.
  • fetch all the Node information
  • assert that the Pods in the nginx Deployment are evenly spread across Nodes with the same topology.kubernetes.io/zone label
  • assert that the Pods in the nginx Deployment that are placed within a given topology.kubernetes.io/zone are bin-packed onto the same kubernetes.io/hostname (see the pack-check sketch after this list)
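
The pack check within each spread domain might reduce to something like the following sketch; the `packedWithin` helper and its input shape are hypothetical, not gdt-kube's actual code:

```go
// packedWithin reports whether, for every outer topology domain (e.g. zone),
// all Pods in that domain share a single inner domain value (e.g. hostname).
// hostsByZone maps a zone name to the hostnames of the Pods placed in it.
func packedWithin(hostsByZone map[string][]string) bool {
	for _, hosts := range hostsByZone {
		for _, h := range hosts {
			if h != hosts[0] {
				return false // Pods in this zone landed on more than one host
			}
		}
	}
	return true
}
```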
jaypipes added the enhancement (New feature or request) label May 21, 2024
jaypipes added a commit that referenced this issue Jun 1, 2024
The `assert.placement` field of a `gdt-kube` test Spec allows a test author to
specify the expected scheduling outcome for a set of Pods returned by the
Kubernetes API server from the result of a `kube.get` call.

Suppose you have a Deployment resource with a `topologySpreadConstraints` stanza that specifies that the Pods in the Deployment must land on different hosts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
       - name: nginx
         image: nginx:latest
         ports:
          - containerPort: 80
      topologySpreadConstraints:
       - maxSkew: 1
         topologyKey: kubernetes.io/hostname
         whenUnsatisfiable: DoNotSchedule
         labelSelector:
           matchLabels:
             app: nginx
```

You can create a `gdt-kube` test case that verifies that your `nginx`
Deployment's Pods are evenly spread across all available hosts:

```yaml
tests:
 - kube:
     get: deployments/nginx
   assert:
     placement:
       spread: kubernetes.io/hostname
```

If there are more hosts than `spec.replicas` in the Deployment, `gdt-kube`
asserts that each Pod landed on a unique host. If there are fewer hosts than
`spec.replicas`, `gdt-kube` asserts that the Pods are spread evenly across
hosts, with no host having more than one more Pod than any other.
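
That last rule reduces to a max-minus-min skew check over the per-domain Pod counts. A minimal sketch (the `evenlySpread` helper is hypothetical; it assumes domains with zero Pods appear as explicit zero entries):

```go
// evenlySpread reports whether the per-domain Pod counts satisfy the rule
// above: the most-loaded domain has at most one more Pod than the
// least-loaded one (a maxSkew of 1, in topology-spread terms).
func evenlySpread(counts map[string]int) bool {
	if len(counts) == 0 {
		return true
	}
	min, max := -1, 0
	for _, n := range counts {
		if min == -1 || n < min {
			min = n
		}
		if n > max {
			max = n
		}
	}
	return max-min <= 1
}
```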

Debug/trace output includes information on how the placement spread looked
to the gdt-kube placement spread asserter:

```
jaypipes@lappie:~/src/github.com/gdt-dev/kube$ go test -v -run TestPlacementSpread ./eval_test.go
=== RUN   TestPlacementSpread
=== RUN   TestPlacementSpread/placement-spread
[gdt] [placement-spread] kube: create [ns: default]
[gdt] [placement-spread] create-deployment (try 1 after 1.254µs) ok: true
[gdt] [placement-spread] using timeout of 40s (expected: false)
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) ok: false
[gdt] [placement-spread] deployment-ready (try 1 after 2.482µs) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) ok: false
[gdt] [placement-spread] deployment-ready (try 2 after 307.618472ms) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) ok: false
[gdt] [placement-spread] deployment-ready (try 3 after 1.245091704s) failure: assertion failed: match field not equal: $.status.readyReplicas not present in subject
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) ok: false
[gdt] [placement-spread] deployment-ready (try 4 after 2.496969168s) failure: assertion failed: match field not equal: $.status.readyReplicas had different values. expected 6 but found 3
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread] deployment-ready (try 5 after 3.785007183s) ok: true
[gdt] [placement-spread] kube: get [ns: default]
[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, unique nodes: 3
[gdt] [placement-spread/assert-placement-spread] domain: topology.kubernetes.io/zone, pods per node: [2 2 2]
[gdt] [placement-spread] deployment-spread-evenly-across-hosts (try 1 after 3.369µs) ok: true
[gdt] [placement-spread] kube: delete [ns: default]
[gdt] [placement-spread] delete-deployment (try 1 after 1.185µs) ok: true

--- PASS: TestPlacementSpread (4.98s)
    --- PASS: TestPlacementSpread/placement-spread (4.96s)
PASS
ok  	command-line-arguments	4.993s
```

Issue #7

Signed-off-by: Jay Pipes <[email protected]>
jaypipes self-assigned this Jun 8, 2024