
Descheduling framework wrap up #1187

Open
ingvagabund opened this issue Jul 11, 2023 · 12 comments

Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@ingvagabund
Contributor

ingvagabund commented Jul 11, 2023

Remaining bits:

@knelasevero
Contributor

As we discussed in the community meeting, I wanted to list a few efforts that we can start looking at again given the current state of the framework (now that we've unblocked a bunch of efforts that were paused before):

Let me know if I'm forgetting something. We should probably weight these by complexity and importance and then prioritize what we want as must-haves first.

@knelasevero
Contributor

@damemi @ingvagabund @a7i

@damemi
Contributor

damemi commented Jul 11, 2023

Like we talked about on the SIG call, we have two categories of things to work on next: tech debt we shelved in the interest of moving the framework release along, and feature development that was blocked because the framework refactoring touched too much of the code.

I would like to prioritize the tech debt, but at this point it shouldn't significantly block further feature development, so the two can happen in parallel.

For a long-term roadmap, the ideal "done" goal should be getting to a v1 API / v1.0 release. What that means right now isn't totally clear beyond making the descheduler more extensible and maintainable, so I think another good task would be to come up with criteria for when we're ready to make that cut and work toward them.

@pravarag
Contributor

@damemi @knelasevero @ingvagabund any idea whether any work has been done on #923, or what priority it has? I remember deciding to take it up last year but never getting started. I'm available again and would be happy to contribute if it's okay to proceed.

@ingvagabund
Contributor Author

We have some examples under https://github.com/kubernetes-sigs/descheduler/tree/master/examples, though there's still no comprehensive guide for creating and registering a plugin with the framework. A guide on how to create a new plugin would already be welcome; the registration part can be documented later once the WithPlugin mechanism is in place.
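In the meantime, here is a minimal sketch of what a plugin implementation can look like, loosely modeled on the in-tree plugins. The interface below is a simplified assumption for illustration only; the real types live in the descheduler's framework packages, so treat the in-tree plugins and the examples directory as the source of truth.

```go
// Illustrative sketch only: the interface below is a simplified assumption,
// not the descheduler's actual plugin API.
package removeidlepods

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// DeschedulePlugin is a simplified stand-in for the framework's
// descheduling extension point.
type DeschedulePlugin interface {
	Name() string
	Deschedule(ctx context.Context, nodes []*v1.Node) error
}

// RemoveIdlePods is a hypothetical plugin that only demonstrates the shape
// of an implementation.
type RemoveIdlePods struct{}

func (p *RemoveIdlePods) Name() string { return "RemoveIdlePods" }

func (p *RemoveIdlePods) Deschedule(ctx context.Context, nodes []*v1.Node) error {
	// A real plugin would list pods via the framework handle and request
	// evictions through the evictor; this sketch only walks the nodes.
	for _, node := range nodes {
		fmt.Printf("evaluating node %s\n", node.Name)
	}
	return nil
}
```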

@ingvagabund
Contributor Author

@knelasevero thanks for the comprehensive list of todos. Work on any of the mentioned items can be started or resumed. The telemetry part might still see the code shift until the WithPlugin mechanics and the removal of the cachedClient code are in place, though the work can always be delivered in steps.

@Dentrax
Contributor

Dentrax commented Jul 14, 2023

This is a really great list, thanks! I'd like to get into #696, if someone could point me in the right direction and show where to start on an "event-triggered strategy".
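Not speaking for the eventual design in #696, but one way to prototype this is a plain client-go informer that kicks the existing descheduling loop whenever an interesting pod event shows up. A rough sketch under that assumption follows; runDeschedulingCycle and the trigger condition are hypothetical placeholders for whatever entry point and events the framework ends up exposing.

```go
// Rough prototype only: watch pod events with a client-go informer and kick
// off a descheduling cycle when one fires.
package main

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

// runDeschedulingCycle stands in for invoking the descheduler's cycle.
func runDeschedulingCycle(ctx context.Context) {}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ctx := context.Background()
	trigger := make(chan struct{}, 1)

	factory := informers.NewSharedInformerFactory(client, 0)
	factory.Core().V1().Pods().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			// Example trigger condition: a pod transitioning to Failed.
			if pod, ok := newObj.(*v1.Pod); ok && pod.Status.Phase == v1.PodFailed {
				select {
				case trigger <- struct{}{}: // coalesce bursts of events
				default:
				}
			}
		},
	})
	factory.Start(ctx.Done())
	factory.WaitForCacheSync(ctx.Done())

	for {
		select {
		case <-trigger:
			runDeschedulingCycle(ctx)
		case <-time.After(10 * time.Minute): // keep a periodic fallback cycle
			runDeschedulingCycle(ctx)
		}
	}
}
```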

@damemi
Contributor

damemi commented Jul 18, 2023

Re: out-of-tree plugins, it would be great if we could pick up #1087 and #1092, unless we want to wait and solve #1089 first.

@knelasevero
Contributor

> Re: out-of-tree plugins, it would be great if we could pick up #1087 and #1092, unless we want to wait and solve #1089 first.

I think we need a proper mechanism for out-of-tree plugins first.
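For reference, such a mechanism would presumably mirror how out-of-tree kube-scheduler plugins are wired in today. The sketch below shows that scheduler-side pattern (the example.com/myplugin package is hypothetical); a descheduler equivalent of app.WithPlugin is roughly the kind of thing a proper out-of-tree mechanism would need to provide.

```go
// Kube-scheduler's out-of-tree plugin wiring, shown only as a model for what
// a descheduler equivalent might look like. app here is
// k8s.io/kubernetes/cmd/kube-scheduler/app; myplugin is hypothetical.
package main

import (
	"os"

	"k8s.io/component-base/cli"
	"k8s.io/kubernetes/cmd/kube-scheduler/app"

	"example.com/myplugin" // hypothetical out-of-tree plugin package
)

func main() {
	// WithPlugin registers the plugin's factory under its name so it can be
	// referenced from the scheduler configuration.
	command := app.NewSchedulerCommand(
		app.WithPlugin(myplugin.Name, myplugin.New),
	)
	os.Exit(cli.Run(command))
}
```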

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2024
@knelasevero
Contributor

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 24, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 10, 2024
@ingvagabund ingvagabund added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 10, 2024