LeaderElect causes pods to be unschedulable in a high-availability control plane, leaving them stuck in a Pending state #801

Open
limylily opened this issue Sep 21, 2024 · 0 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@limylily

Area

  • Scheduler
  • Controller
  • Helm Chart
  • Documents

Other components

No response

What happened?

The k8s cluster is a highly available cluster, and Helm is used to deploy scheduler-plugins as a second scheduler. Setting the leaderElect parameter to true and the replicaCount parameter to 3 causes pod scheduling to stall: pods stay in a Pending state, nothing gets scheduled, no events are displayed, and the PodGroup resources remain in the Scheduling phase.

Afterwards, I set the leaderElect parameter to false and the replicaCount parameter to 1, and pod scheduling worked normally.
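
For reference, the working fallback corresponds to values along these lines (a minimal sketch of only the relevant scheduler keys):

```yaml
# Fallback that scheduled normally: single replica, leader election off
scheduler:
  replicaCount: 1
  leaderElect: false
```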

What did you expect to happen?

I want to deploy scheduler-plugins as a second scheduler using Helm in the highly available k8s cluster, with the leaderElect parameter set to true and the replicaCount parameter set to 3, and have pod scheduling work properly.

How can we reproduce it (as minimally and precisely as possible)?

A highly available k8s cluster, version 1.29.5
Scheduler-plugins version 1.29.7
Deploy scheduler-plugins as a second scheduler using Helm
The values.yaml configuration is as follows (an install command sketch follows the file):

# Default values for scheduler-plugins-as-a-second-scheduler.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

scheduler:
  name: scheduler-plugins-scheduler
  image: registry.k8s.io/scheduler-plugins/kube-scheduler:v0.29.7
  replicaCount: 3
  leaderElect: true
  nodeSelector: {}
  affinity: {}
  tolerations: []

controller:
  name: scheduler-plugins-controller
  image: registry.k8s.io/scheduler-plugins/controller:v0.29.7
  replicaCount: 3
  nodeSelector: {}
  affinity: {}
  tolerations: []

# LoadVariationRiskBalancing and TargetLoadPacking are not enabled by default
# as they need extra RBAC privileges on metrics.k8s.io.

plugins:
  enabled: ["NodeResourcesAllocatable"]
  disabled: [] # only in-tree plugins need to be defined here

# Customize the enabled plugins' config.
# Refer to the "pluginConfig" section of manifests/<plugin>/scheduler-config.yaml.
# For example, for Coscheduling plugin, you want to customize the permit waiting timeout to 10 seconds:
pluginConfig:
#- name: Coscheduling
#  args:
#    permitWaitingTimeSeconds: 10 # default is 60
# Or, customize the other plugins
# - name: NodeResourceTopologyMatch
#   args:
#     scoringStrategy:
#       type: MostAllocated # default is LeastAllocated
#- name: SySched
#  args:
#    defaultProfileNamespace: "default"
#    defaultProfileName: "full-seccomp"
- name: NodeResourcesAllocatable
  args:
    mode: Least
    resources:
    - name: hygon.com/dcu
      weight: 100
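
The chart was installed as a second scheduler roughly like the following (a sketch: the chart path follows the upstream repo layout and is an assumption, while the release name and namespace match the Helm annotations shown further below):

```sh
# Sketch, assuming a checkout of kubernetes-sigs/scheduler-plugins
cd manifests/install/charts
helm install scheduler-plugins ./as-a-second-scheduler \
  --create-namespace --namespace scheduler-plugins \
  -f values.yaml
```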

Anything else we need to know?

k8s cluster

I have removed the taints from the k8s master02 and master03 nodes

NAME            STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master01   Ready    control-plane   106d   v1.29.5   x.x.x.1     <none>        Ubuntu 20.04.1 LTS   5.4.0-182-generic   docker://26.1.3
master02   Ready    control-plane   105d   v1.29.5   x.x.x.2     <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic    docker://26.1.3
master03   Ready    control-plane   105d   v1.29.5   x.x.x.4     <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic    docker://26.1.3
node001    Ready    <none>          105d   v1.29.5   x.x.x.5     <none>        Ubuntu 20.04.1 LTS   5.4.0-42-generic    docker://26.1.3

podgroup info

sudo kubectl describe podgroup inference-model -n inference-model
Name:         inference-model
Namespace:    inference-model
Labels:       <none>
Annotations:  <none>
API Version:  scheduling.x-k8s.io/v1alpha1
Kind:         PodGroup
Metadata:
  Creation Timestamp:  2024-09-21T07:30:00Z
  Generation:          2
  Resource Version:    41571883
  UID:                 c54752ec-5058-4cb9-b70c-26a6cf1a9a3e
Spec:
  Min Member:                1
  Schedule Timeout Seconds:  60
Status:
  Occupied By:  inference-model/pause-56dccb8bd9
  Phase:        Scheduling
Events:         <none>

scheduler-plugins config

sudo kubectl describe configmap -n scheduler-plugins scheduler-config
Name:         scheduler-config
Namespace:    scheduler-plugins
Labels:       app.kubernetes.io/managed-by=Helm
Annotations:  meta.helm.sh/release-name: scheduler-plugins
              meta.helm.sh/release-namespace: scheduler-plugins

Data
====
scheduler-config.yaml:
----
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
profiles:
# Compose all plugins in one profile
- schedulerName: scheduler-plugins-scheduler
  plugins:
    multiPoint:
      enabled:
      - name: NodeResourcesAllocatable
      disabled:
  pluginConfig:
  - args:
      mode: Least
      resources:
      - name: hygon.com/dcu
        weight: 100
    name: NodeResourcesAllocatable


BinaryData
====

Events:  <none>
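
For context, the leaderElection stanza in KubeSchedulerConfiguration also accepts an explicit lock name and namespace; the rendered config above only sets leaderElect, so the component defaults apply. The block below is only an illustration of those fields (the resourceName/resourceNamespace values are assumptions, not what the chart renders):

```yaml
leaderElection:
  leaderElect: true
  resourceLock: leases
  # Illustrative values (assumed), shown to make the available fields explicit;
  # when unset, the scheduler defaults to the kube-scheduler lock in kube-system.
  resourceName: scheduler-plugins-scheduler
  resourceNamespace: scheduler-plugins
```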

node resources

```
kubectl get nodes -o json | jq '.items[].status.allocatable'
{
  "cpu": "128",
  "ephemeral-storage": "202598387377",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "memory": "263955952Ki",
  "pods": "110"
}
{
  "cpu": "64",
  "ephemeral-storage": "202655010481",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "hygon.com/dcu": "4",
  "memory": "131809692Ki",
  "pods": "110"
}
{
  "cpu": "64",
  "ephemeral-storage": "202655010481",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "hygon.com/dcu": "4",
  "memory": "131875604Ki",
  "pods": "110"
}
{
  "cpu": "64",
  "ephemeral-storage": "202655010481",
  "hugepages-1Gi": "0",
  "hugepages-2Mi": "0",
  "hygon.com/dcu": "4",
  "memory": "131875600Ki",
  "pods": "110"
}
```

scheduler-plugins-scheduler logs

sudo kubectl logs -n scheduler-plugins deployment/scheduler-plugins-scheduler
Found 3 pods, using pod/scheduler-plugins-scheduler-84d449c69-7vkl4
I0921 07:28:23.450286       1 serving.go:380] Generated self-signed cert in-memory
W0921 07:28:23.451259       1 client_config.go:618] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0921 07:28:23.839085       1 server.go:154] "Starting Kubernetes Scheduler" version="v0.29.7"
I0921 07:28:23.839119       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0921 07:28:23.846041       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0921 07:28:23.846040       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0921 07:28:23.846074       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0921 07:28:23.846083       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0921 07:28:23.846362       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0921 07:28:23.846394       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
I0921 07:28:23.846773       1 secure_serving.go:213] Serving securely on [::]:10259
I0921 07:28:23.846800       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0921 07:28:23.947242       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
I0921 07:28:23.947243       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0921 07:28:23.947421       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0921 07:28:23.947423       1 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-scheduler...
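
The last line above shows this replica trying to acquire the default kube-system/kube-scheduler lease. The current lease holders can be checked with standard kubectl commands (no chart-specific names assumed):

```sh
# Who currently holds the default scheduler lease?
kubectl -n kube-system get lease kube-scheduler -o yaml
# List all coordination leases in case a separate lock exists elsewhere
kubectl get lease -A
```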

podgroup manifest

cat podgroup.yaml
apiVersion: scheduling.x-k8s.io/v1alpha1
kind: PodGroup
metadata:
  name: inference-model
spec:
  scheduleTimeoutSeconds: 10
  minMember: 1
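
The workload occupying this PodGroup (inference-model/pause-56dccb8bd9 in the status above) is roughly a Deployment of the following shape; the image, the pod-group label key, and the exact resource request are assumptions for illustration, not copied from the cluster:

```yaml
# Sketch of the pause workload tied to the PodGroup (details assumed)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pause
  namespace: inference-model
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pause
  template:
    metadata:
      labels:
        app: pause
        # Associates the pod with the PodGroup above
        # (label key assumed for scheduler-plugins v0.29.x)
        scheduling.x-k8s.io/pod-group: inference-model
    spec:
      schedulerName: scheduler-plugins-scheduler
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9
        resources:
          limits:
            hygon.com/dcu: "1"
```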

Kubernetes version

$ sudo kubectl version

Client Version: v1.29.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.5

Scheduler Plugins version

1.29.7
@limylily limylily added the kind/bug Categorizes issue or PR as related to a bug. label Sep 21, 2024