Kubeadm operator #2505
Comments
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Any plan for this feature?
We haven't dropped the idea. It would support useful features like cluster-wide upgrades and cert rotation.
As with other areas of kubeadm, someone has to have the time to work on it.
If someone starts work on it again, I would like us to discuss trimming down the heavy boilerplate that kubebuilder adds.
I have a prototype and many ideas around this and the kubeadm library...
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
/cc I will evaluate this feature and see if we can move this forward.
pacoxu/kubeadm-operator#53: I did some initial implementation, but I hit a problem implementing the kubeadm operator.
If kubelet is not restarted after an upgrade, the apiserver will be on the new version while kubelet stays on the n-1 version.
Some other thoughts: can we kill kubelet from inside a pod? Or can we add a flag file on a hostPath to let kubelet know it should be restarted? I don't know if there is a simple way to restart kubelet via a kubelet API call or some other method.
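As an illustration of the hostPath flag-file idea above, here is a minimal sketch (not taken from any existing implementation) of the pod side: the operator pod writes a marker file through a hostPath mount and leaves the actual restart to an agent on the host. The directory /var/run/kubeadm-operator and the file name are made-up examples.

```go
// Hypothetical sketch: signal a kubelet restart by dropping a flag file into a
// directory that is hostPath-mounted into the operator pod. A host-side watcher
// (a systemd path unit, a cron job, or a reloader like the one discussed later
// in this thread) is expected to pick it up and restart kubelet.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// signalKubeletRestart writes the flag file into the hostPath-mounted directory.
func signalKubeletRestart(mountDir string) error {
	flag := filepath.Join(mountDir, "restart-kubelet") // made-up file name
	contents := fmt.Sprintf("requested-at: %s\n", time.Now().Format(time.RFC3339))
	return os.WriteFile(flag, []byte(contents), 0o644)
}

func main() {
	// /var/run/kubeadm-operator is an assumed mount point, not a path used by
	// kubeadm or any existing operator.
	if err := signalKubeletRestart("/var/run/kubeadm-operator"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```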
@pacoxu I am also working on this; the work on my side is still maintained internally. It's okay for you to join in, but we'd better sync with each other to avoid duplicated effort. Our work is based on @fabriziopandini's POC, how about yours?
This is indeed an issue. So far it is fine, since kubelet 1.24 will continue to work with apiserver 1.25, but it must be taken care of because of the change in CRI support for Docker.
pacoxu/kubeadm-operator#2 is the same. The POC was removed from the kubeadm code base in kubernetes/kubeadm#2342.
cc @ruquanzhao for awareness, we need to figure out a solution for the upgrade of ...
Big +1 for multiple people collaborating on this. Happy to help with more ideas / review when needed.
I don't think there is an API to restart kubelet.
This sounds like one of the ways to do it. I don't think I have any significantly better ideas right now.
The operator will have to have super powers on the hosts, so it will be considered a trusted "actor"... there is no other way to manage component upgrades (kubeadm, kubelet), cert rotation, etc.
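To make the "super powers" point concrete, here is a sketch of how such a trusted agent could be deployed, assuming a privileged DaemonSet with hostPID and hostPath mounts. The names, image, and mounted paths are illustrative assumptions, not the actual kubeadm-operator manifests.

```go
// Illustrative sketch of a privileged node agent for a kubeadm operator.
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// operatorAgentDaemonSet builds a DaemonSet whose pods get host-level access
// (privileged, hostPID, hostPath mounts) so they can upgrade kubeadm/kubelet
// binaries and manage certificates on each node.
func operatorAgentDaemonSet() *appsv1.DaemonSet {
	privileged := true
	hostPathDir := corev1.HostPathDirectory
	labels := map[string]string{"app": "kubeadm-operator-agent"} // made-up name

	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "kubeadm-operator-agent", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					HostPID: true, // allows interacting with host processes such as kubelet
					Containers: []corev1.Container{{
						Name:            "agent",
						Image:           "example.com/kubeadm-operator-agent:latest", // placeholder image
						SecurityContext: &corev1.SecurityContext{Privileged: &privileged},
						VolumeMounts: []corev1.VolumeMount{
							{Name: "etc-kubernetes", MountPath: "/etc/kubernetes"},
							{Name: "usr-bin", MountPath: "/usr/bin"},
						},
					}},
					Volumes: []corev1.Volume{
						{Name: "etc-kubernetes", VolumeSource: corev1.VolumeSource{
							HostPath: &corev1.HostPathVolumeSource{Path: "/etc/kubernetes", Type: &hostPathDir},
						}},
						{Name: "usr-bin", VolumeSource: corev1.VolumeSource{
							HostPath: &corev1.HostPathVolumeSource{Path: "/usr/bin", Type: &hostPathDir},
						}},
					},
				},
			},
		},
	}
}

func main() { _ = operatorAgentDaemonSet() }
```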
I can build the workaround into a script.
If we restart kubelet from inside a pod, I tried to mount ...
I wrote a simple kubelet-reloader.
Currently kubeadm-operator v0.1.0 can support upgrades across versions, like v1.22 to v1.24.
See quick-start.
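For reference, a minimal sketch of what such a kubelet-reloader could look like; the real implementation lives in the linked repo, and the binary paths below are assumptions for illustration only.

```go
// Hypothetical kubelet-reloader sketch: wait for a new kubelet binary to show
// up next to the current one, swap it in, and restart the systemd-managed
// kubelet service. Other init systems would need a different restart step.
package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

const (
	currentKubelet = "/usr/bin/kubelet"     // assumed install path
	newKubelet     = "/usr/bin/kubelet-new" // assumed drop location written by the operator
)

func main() {
	for {
		if info, err := os.Stat(newKubelet); err == nil && info.Mode().IsRegular() {
			log.Printf("found %s, swapping kubelet binary", newKubelet)
			if err := os.Rename(newKubelet, currentKubelet); err != nil {
				log.Printf("swap failed: %v", err)
			} else if out, err := exec.Command("systemctl", "restart", "kubelet").CombinedOutput(); err != nil {
				log.Printf("kubelet restart failed: %v: %s", err, out)
			}
		}
		time.Sleep(5 * time.Second)
	}
}
```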
That's great. I think we should have our discussion on the k/kubeadm issue to have it in one place. Also cross-coordinate with @chendave to avoid duplicated work.
kubernetes/kubeadm#2317 may be the right place.
FYI - we have all of the initial scope of KEP #1239 implemented here: https://github.com/chendave/kubeadm-operator
But it is still just a POC.
Earlier this year we asked whether the kubeadm operator could become a SIG Cluster Lifecycle subproject. Some context can be found in the SIG Cluster Lifecycle weekly meeting notes (https://docs.google.com/document/d/1Gmc7LyCIL_148a9Tft7pdhdee0NBHdOfHS1SAF0duI4/edit#heading=h.xm2jvfwtcfuz) and in the cluster-api feedback-gathering issue kubernetes-sigs/cluster-api#7044.
Enhancement Description
https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/kubeadm/2505-Kubeadm-operator
Summary
The kubeadm operator aims to enable declarative control of kubeadm workflows, automating the execution and orchestration of such tasks across existing nodes in a cluster.
Motivation
The kubeadm binary can execute operations only on the machine where it is running; e.g. it is not possible to execute operations on other nodes or to copy files across nodes.
As a consequence, most kubeadm workflows, like kubeadm upgrade, consist of a complex sequence of tasks that must be manually executed and orchestrated across all the existing nodes in the cluster.
Such a user experience is not ideal due to the error-prone nature of humans running commands. The manual approach can be considered a blocker for implementing more complex workflows such as rotating certificate authorities, modifying the settings of an existing cluster, or any task that requires coordination across more than one Kubernetes node.
This KEP aims to address such problems by applying the operator pattern to kubeadm workflows.
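For a flavor of what "applying the operator pattern" could mean in API terms, here is an illustrative sketch of custom resource types that a controller might reconcile. The type and field names are assumptions made for illustration, not the KEP's actual API; see the POC repositories linked above for the real one.

```go
// Illustrative (not the KEP's) API types for a declarative kubeadm workflow.
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// UpgradeSpec describes a hypothetical cluster-wide upgrade request.
type UpgradeSpec struct {
	// KubernetesVersion is the version every node should be upgraded to.
	KubernetesVersion string `json:"kubernetesVersion"`
}

// OperationSpec is the desired state of a kubeadm workflow to run across nodes.
type OperationSpec struct {
	// Upgrade triggers a rolling "kubeadm upgrade" across the cluster.
	Upgrade *UpgradeSpec `json:"upgrade,omitempty"`
	// Paused can be used to hold the orchestration between nodes.
	Paused bool `json:"paused,omitempty"`
}

// OperationStatus reports progress of the per-node tasks.
type OperationStatus struct {
	Phase          string `json:"phase,omitempty"`
	CompletedNodes int32  `json:"completedNodes,omitempty"`
	FailedNodes    int32  `json:"failedNodes,omitempty"`
}

// Operation is a custom resource that a controller reconciles by creating
// per-node tasks and executing them in order across the cluster.
type Operation struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   OperationSpec   `json:"spec,omitempty"`
	Status OperationStatus `json:"status,omitempty"`
}
```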
- k/enhancements update PR(s):
- k/k update PR(s):
- k/website update PR(s):
Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.