
unknown phase feature #1417

Open

BiancaTofan opened this issue May 28, 2024 · 5 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


BiancaTofan commented May 28, 2024

Hello,

I am using the latest version of the descheduler chart, and I am trying to define a descheduler policy to evict pods that are in an Unknown state. Nothing seems to happen, even though there are already two pods with this status that have been in it for more than 86400 seconds. Is there a problem, or is my policy not defined correctly?

```yaml
apiVersion: v1
data:
  values.yaml: |
    ---
    cmdOptions:
      v: 7
    deschedulerPolicy:
      profiles:
        - name: ProfileName
          pluginConfig:
            - args:
                evictDaemonSetPods: true
                evictLocalStoragePods: true
                evictFailedBarePods: true
              name: "DefaultEvictor"
            - args:
                nodeAffinityType:
                - "requiredDuringSchedulingIgnoredDuringExecution"
              name: "RemovePodsViolatingNodeAffinity"
            - args:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: kured
                maxPodLifeTimeSeconds: 86400
                namespaces:
                  include:
                  - "kubernetes-reboot-daemon"
                states:
                - "Unknown"
                - "Pending"
              name: "PodLifeTime"
          plugins:
            deschedule:
              enabled:
                - "PodLifeTime"
                - "RemovePodsViolatingNodeAffinity"
      strategies: null
    deschedulerPolicyAPIVersion: "descheduler/v1alpha2"
    deschedulingInterval: 5m
    image:
      pullPolicy: Always
    kind: Deployment
    resources:
      limits:
        ephemeral-storage: 100Mi
        memory: 127Mi
      requests:
        cpu: 50m
        ephemeral-storage: 100Mi
        memory: 127Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 10001
      runAsNonRoot: true
      runAsUser: 10001
```

Thank you in advance!
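
For context on what the `states` option filters on: below is a minimal, illustrative sketch (assumed behavior, not descheduler's actual code) of a PodLifeTime-style phase filter that compares the configured strings against the pod's `.status.phase`.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// matchesStates reports whether a pod's .status.phase is one of the
// configured state strings. Illustrative only; the real plugin may also
// match other fields, such as container state reasons.
func matchesStates(pod *v1.Pod, states []string) bool {
	for _, s := range states {
		if string(pod.Status.Phase) == s {
			return true
		}
	}
	return false
}

func main() {
	running := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodRunning}}
	pending := &v1.Pod{Status: v1.PodStatus{Phase: v1.PodPending}}
	states := []string{"Unknown", "Pending"}
	fmt.Println(matchesStates(running, states)) // false: Running is not listed
	fmt.Println(matchesStates(pending, states)) // true: Pending is listed
}
```

A filter like this only matches pods whose API-reported phase is literally "Unknown" or "Pending", which matters for the discussion below.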

BiancaTofan (Author) commented

Furthermore, I also noticed that the phases aren't taken into consideration. Even though I configure only pods in the Unknown and Pending states to be evicted, the descheduler also evicts pods in the Running state. Do I need to add something else? I followed the documentation closely and used it accordingly.

googs1025 (Member) commented

Hi! Which k8s version are you using? In addition, how do you get a pod into the Unknown state? Kubernetes has effectively abandoned the Unknown pod phase:

```go
// These are the valid statuses of pods.
const (
	// PodPending means the pod has been accepted by the system, but one or more of the containers
	// has not been started. This includes time before being bound to a node, as well as time spent
	// pulling images onto the host.
	PodPending PodPhase = "Pending"
	// PodRunning means the pod has been bound to a node and all of the containers have been started.
	// At least one container is still running or is in the process of being restarted.
	PodRunning PodPhase = "Running"
	// PodSucceeded means that all containers in the pod have voluntarily terminated
	// with a container exit code of 0, and the system is not going to restart any of these containers.
	PodSucceeded PodPhase = "Succeeded"
	// PodFailed means that all containers in the pod have terminated, and at least one container has
	// terminated in a failure (exited with a non-zero exit code or was stopped by the system).
	PodFailed PodPhase = "Failed"
	// PodUnknown means that for some reason the state of the pod could not be obtained, typically due
	// to an error in communicating with the host of the pod.
	// Deprecated: It isn't being set since 2015 (74da3b14b0c0f658b3bb8d2def5094686d0e9095)
	PodUnknown PodPhase = "Unknown"
)
```

Refer to: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/api/core/v1/types.go#L3089

Please forgive me if my understanding is wrong.

BiancaTofan (Author) commented

Hello, I am using Kubernetes 1.28. The Unknown phase was added to the descheduler code recently. I am reaching this state via kured (Kubernetes Reboot Daemon), which reboots the node.
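
Note that kubectl's STATUS column is not the same as `.status.phase`: for pods on an unreachable node (for example, one rebooted by kured), kubectl can display Unknown based on `.status.reason` while the phase stored in the API object remains Running, in which case a `states: ["Unknown"]` filter would never match. One way to check what the API server actually stores is the following minimal client-go sketch (assumptions: a kubeconfig at the default location and the kubernetes-reboot-daemon namespace from the policy above):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Namespace taken from the policy in this issue.
	pods, err := clientset.CoreV1().Pods("kubernetes-reboot-daemon").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		// .status.phase is what a phase-based filter sees; .status.reason
		// (e.g. "NodeLost") is what kubectl may surface as STATUS instead.
		fmt.Printf("%s\tphase=%s\treason=%q\n", p.Name, p.Status.Phase, p.Status.Reason)
	}
}
```

The same fields are visible without writing code via `kubectl get pods -n kubernetes-reboot-daemon -o custom-columns=NAME:.metadata.name,PHASE:.status.phase,REASON:.status.reason`.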

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Sep 18, 2024
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Oct 18, 2024