Graceful scaledown of deprecated MachineDeployment #812
you can remove the machineDeployment from the
it is a per-node annotation and doesn't tell the autoscaler anything about the node group, so scale-up for the node group would still happen
We have written MCM to work with CA. CA deals with unschedulable pods and directs MCM to scale a particular node group up or down. MCM only deals with machines in terms of their count, so making MCM smart enough would just complicate things.
This can be done using a script as well, where you issue the command
while keeping note of the available machines in the deployment. Too much configurability from our side is not required.
you can achieve this by adding a taint, which your pods don't tolerate, to all nodes of the machineDeployment. To do so add the taint in
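To illustrate why the taint suggestion works, here is a minimal model of Kubernetes `NoSchedule` taint matching. This is an illustrative sketch, not the real kube-scheduler logic, and the taint key `deprecated-md` is a hypothetical name chosen for this example: a pod may land on a node only if it tolerates every `NoSchedule` taint on that node, so pods without a matching toleration will avoid the deprecated MachineDeployment's nodes.

```python
# Simplified model of NoSchedule taint matching (illustrative only; the
# real logic lives in kube-scheduler). A pod may schedule on a node only
# if every NoSchedule taint on the node is matched by some toleration.

def tolerates(taint: dict, toleration: dict) -> bool:
    """True if a single toleration matches a single taint."""
    if toleration.get("operator", "Equal") == "Exists":
        # An Exists toleration matches any value; an empty key matches all keys.
        key_ok = toleration.get("key") in (None, taint["key"])
    else:
        key_ok = (toleration.get("key") == taint["key"]
                  and toleration.get("value") == taint.get("value"))
    effect_ok = toleration.get("effect") in (None, "", taint["effect"])
    return key_ok and effect_ok

def schedulable(node_taints: list, pod_tolerations: list) -> bool:
    """A pod schedules only if all NoSchedule taints are tolerated."""
    return all(
        any(tolerates(t, tol) for tol in pod_tolerations)
        for t in node_taints
        if t["effect"] == "NoSchedule"
    )

# Hypothetical taint marking nodes of the deprecated MachineDeployment:
deprecated = {"key": "deprecated-md", "value": "true", "effect": "NoSchedule"}

assert not schedulable([deprecated], [])  # ordinary pods stay away
assert schedulable([deprecated],
                   [{"key": "deprecated-md", "operator": "Exists"}])
```

New pods therefore stop landing on those nodes, which lets the node group drain naturally as existing workloads are rescheduled elsewhere.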
/ping @mattburgess
@mattburgess ℹ️ please take some time to help himanshu-kun or redirect to someone else if you can't.
Yeah, understood. Thanks for the detailed response. We were trying to avoid having to write our own scale-down utility but it looks like that might be unavoidable.
It's a shame that node group auto-discovery hasn't been plugged in yet as doing this obviously requires a code change + redeployment on our side. We may look at contributing auto-discovery if it isn't already being looked at? It'd be nice, then, if CA supported such node-group deprecation. That way our migration of MDs/node-groups would look like this:
Do you think that's a reasonable request that might be considered on the CA side? Either way, I'm happy for this to be closed, and we'll deal with the scale down on our side for now.
We also wanted to implement it, but because of low demand and having our hands full, we had iceboxed the issue gardener/autoscaler#29. Your contributions are welcome. Please comment on the issue about how you want to implement it, and then we can discuss there.
Thanks again for the feedback @himanshu-kun.
How to categorize this issue?
/area usability
/kind enhancement
/priority 3
What would you like to be added:
We'd like a way to be able to gracefully scale a MachineDeployment down to 0, specifically without assuming that PDBs will protect pod availability.
Why is this needed:
From time to time we have a need to completely remove a MachineDeployment from our clusters. Ideally we'd run something like
kubectl -n machine-controller-manager scale machinedeployment my-md --replicas 0
and just let MCM control things. However, this can lead to undesirable consequences.

In an ideal scenario I'd quite like the following workflow:
- The nodes of the deprecated MachineDeployment are marked so that CA leaves them alone (is the cluster-autoscaler.kubernetes.io/scale-down-disabled: true annotation sufficient? Does that also tell it to not scale up either?)
- The scaledown removes x (user configurable) nodes at a time. It waits for that scaledown to complete and for the number of unschedulable pods to be < y (user configurable) before proceeding with the next iteration of the scaledown loop.
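The iterative scaledown described above can be sketched as a simple control loop. This is an illustrative simulation, not MCM or CA code: `graceful_scaledown` and its `unschedulable` callback are hypothetical names, the callback stands in for querying the cluster (e.g. via kubectl or the Kubernetes API), and x and y are the user-configurable knobs from the workflow.

```python
def graceful_scaledown(replicas, x, y, unschedulable):
    """Simulate scaling a MachineDeployment down x nodes at a time.

    replicas:      current replica count of the MachineDeployment
    x:             max nodes removed per iteration (user configurable)
    y:             unschedulable-pod threshold (user configurable)
    unschedulable: stand-in for querying the cluster; given a target
                   replica count, returns the unschedulable pod count
                   once the scaledown step has completed

    Returns the sequence of replica counts the loop passes through.
    """
    steps = [replicas]
    while replicas > 0:
        target = max(0, replicas - x)
        # In a real controller, the scale request would be issued here,
        # then the loop would wait for the step to complete.
        if unschedulable(target) >= y:
            # Too much disruption: stop and let the operator intervene.
            break
        replicas = target
        steps.append(replicas)
    return steps

# With no pod pressure, 10 replicas drain 3 at a time down to 0:
assert graceful_scaledown(10, 3, 5, lambda r: 0) == [10, 7, 4, 1, 0]
# If pods pile up unschedulable, the loop halts before proceeding:
assert graceful_scaledown(10, 3, 5, lambda r: 99) == [10]
```

The design choice worth noting is that the threshold check happens between iterations, so a scaledown that strands too many pods pauses rather than racing to zero replicas.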