[proposal] add a scheduling plugin that supports pod expansion and shrinking according to the order of a defined logical node set #475
Comments
My company also has a similar plugin. We can find a time to have a discussion.
Hi, we can collaborate on this proposal
@Huang-Wei @ffromani @seanmalloy @denkensk |
I'll have a look later this week (beginning April 3 2023) |
It will help us understand the motivation(s) if you can elaborate on the real-world use cases.
What do you mean by "node set order"? Is that a priority field of the NodeSet CR? How are a Deployment's replicas expected to be scheduled onto the matching NodeSets? And are the scheduling directives a hard or soft constraint?
Where is this max number defined?
Hi, thank you for your attention! We will define a CRD named ResourcePolicy; a CR instance looks as follows. Because ecs-pool is ranked before eci-pool, pods will be scheduled to ecs-pool first. Once the number of pods scheduled into ecs-pool reaches 100, further pods will be scheduled to eci-pool.
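A minimal sketch of what such a CR could look like, matching the thread's example of ecs-pool ranked before eci-pool with a cap of 100. The API group/version and field names (units, maxReplicas, nodeSelector) are assumptions for illustration, not the final spec:

```yaml
# Hypothetical ResourcePolicy CR; the ordering of `units` expresses priority.
apiVersion: scheduling.x-k8s.io/v1alpha1   # assumed group/version
kind: ResourcePolicy
metadata:
  name: example
  namespace: default
spec:
  selector:                # pods matched by this policy (assumed field)
    app: my-app
  units:                   # ordered: earlier units are filled first
  - name: ecs-pool
    maxReplicas: 100       # at most 100 matched pods land here
    nodeSelector:
      node-pool: ecs
  - name: eci-pool         # overflow beyond 100 goes here
    nodeSelector:
      node-pool: eci
```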
In our company's scenario, customers will deploy both spot instances and pay-as-you-go instances simultaneously. Customers want their business to run on spot instances first to save costs, and when spot instance resources are insufficient, they will run on pay-as-you-go instances. Moreover, during business peak periods, when neither type of instance has resources, the business Pod will be scheduled to ECI nodes.
It seems @KunWuLuan is talking about the Alibaba Cloud feature described here: https://www.alibabacloud.com/help/en/container-service-for-kubernetes/latest/configure-priority-based-resource-scheduling. And @fjding is talking about a similar in-house implementation? (The design of maxReplicas is a bit strange, though.) I'm open to hosting an abstracted version in scheduler-plugins. BTW, I'm not sure how you implement the node-pool-based preference in the scoring phase. My feeling is that to support it efficiently, we may need to bring some missing machinery into the scheduler framework; you can check my comment in one of the sig-meetings: https://youtu.be/UhZBkFamoAg?t=1694 cc @denkensk
Hmmm, I know it. Actually, I am the author of this feature in Alibaba Cloud 😄. It took me a long time to come up with this name.
Can you introduce your scenario for this? And why do you need to schedule 100 pods to ecs-pool first? @fjding
Your comment is very useful in a real production environment. And I also care about the efficiency and memory usage if we need to remember some history or status. @Huang-Wei
A cluster has multiple AZs (Availability Zones), and each AZ has a VK (virtual kubelet). Users expect a Deployment's Pods to be distributed across different AZs in a certain proportion.
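For illustration only: if a ResourcePolicy were used for this, one hypothetical shape (same assumed fields as the sketch above) could cap replicas per AZ to express such a proportion:

```yaml
# Hypothetical per-AZ units; the 3:1 split is an illustrative choice.
apiVersion: scheduling.x-k8s.io/v1alpha1   # assumed group/version
kind: ResourcePolicy
metadata:
  name: multi-az
spec:
  units:
  - name: az-a
    maxReplicas: 6                          # ~75% of an 8-replica Deployment
    nodeSelector:
      topology.kubernetes.io/zone: az-a
  - name: az-b
    maxReplicas: 2                          # ~25%
    nodeSelector:
      topology.kubernetes.io/zone: az-b
```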
@denkensk
As I gave an example above, multi-AZ deployment is a good case; OpenKruise also provides some cases (link).
Thanks for your explanation, @fjding. I'm also glad these ideas can be applied to your scenario, and that scheduler-plugins can be used in ByteDance's Volcano Engine.
And I think we also need to clarify the core requirements. If you want to deploy the pods across different AZs, why use a ResourcePolicy? @KunWuLuan, do you have feedback from other users or more needs for a "resource policy"? We can discuss it here and make a more generic design together.
@denkensk Users often use multi-AZ scenarios for disaster recovery purposes. In elastic container scenarios, such as ByteDance's VCI, users cannot accurately predict the upper limit of VCI capacity. Therefore, they cannot refuse to launch a pod just because resources in one AZ are unavailable.
BTW, the strategy field is required, and maxReplicas can meet the "Must" scenario you mentioned.
In my cloud scenario, our users use ResourcePolicy to run a fixed number of Pods on ECS nodes (like maxReplicas in this design) and schedule the Pods that are scaled out during peak periods to Spot instances or ECI.
@Huang-Wei @denkensk @ffromani
@Huang-Wei
Sure, please go ahead and raise a KEP; we can continue the discussion there. Just keep in mind this repo focuses more on the scheduling portion and may leave the discussion of CRD spec details outside.
Thanks, @KunWuLuan, we can do it together now.
@fjding Hi, I have submitted a draft for this feature.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This CRD is widely used in both my company and fjding's; in the proposal we have selected the features we both need for our customers. So we think the CRD described in the proposal is a stable version and will not be updated frequently.
/assign
Kubernetes supports pod-deletion-cost since v1.21. In my cloud scenario, users have demands like this:
1. Define multiple logical node sets; a Deployment workload can schedule pods according to this node-set order, and shrink in the opposite order (see the sketch after this comment).
2. It should also support a maximum number of schedulable pods per node set.
BTW, I have implemented this feature and want to contribute it to the community. I hope everyone can discuss it together.
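A minimal sketch of how the reverse-order shrink could lean on pod-deletion-cost: the controller.kubernetes.io/pod-deletion-cost annotation is the actual Kubernetes mechanism, but the pod names, unit mapping, and cost values below are illustrative assumptions (e.g. values a controller watching the ResourcePolicy might set):

```yaml
# Pods in earlier (higher-priority) units get a higher deletion cost,
# so the ReplicaSet controller removes them last when scaling down.
apiVersion: v1
kind: Pod
metadata:
  name: web-on-ecs
  labels:
    app: web
  annotations:
    # higher cost => deleted later; the value is an illustrative choice
    controller.kubernetes.io/pod-deletion-cost: "200"   # ecs-pool (first unit)
---
apiVersion: v1
kind: Pod
metadata:
  name: web-on-eci
  labels:
    app: web
  annotations:
    # lower cost => deleted first, so eci-pool shrinks before ecs-pool
    controller.kubernetes.io/pod-deletion-cost: "100"   # eci-pool (second unit)
```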