Leaking kubeadmconfigtemplates, openstackmachinetemplates ... #105
13683 kubeadmconfigtemplates
15646 openstackmachinetemplates
/kind bug
What steps did you take and what happened:
A management cluster (kind) that had been running in an SCS-2V-4 VM for 3 months (mostly idle) became unusable.
After some debugging, it turned out that the kube-apiserver's memory usage had grown to > 2 GiB RSS.
This caused the machine to aggressively reclaim memory (kswapd0), only to hit major page faults that paged the memory right back in: system load > 50 (on a 2 vCPU server), >>10k major page faults/s and > 500 MB/s read from disk.
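For reference, a rough way to observe these symptoms on the management VM; a minimal sketch, assuming shell access to the VM and that the kube-apiserver process of the kind node container is visible from the host (none of these commands are taken from the original report):

```sh
# Resident set size (KiB) and accumulated major page faults of kube-apiserver
ps -C kube-apiserver -o pid,rss,maj_flt,cmd

# Load average and per-process CPU; kswapd0 shows up here when the box is thrashing
top -b -n 1 | head -n 20

# Ongoing swap-in/swap-out and block I/O rates
vmstat 1 5
```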
What did you expect to happen:
4 GiB of RAM should be sufficient for a not-too-busy management host.
Anything else you would like to add:
I assume that the CSO/CSPO are causing the excessive kube-apiserver memory usage by storing too many objects.
So far I have found kubeadmconfigtemplates and clusterclasses to exist in excessive numbers; see the sketch below for one way to check other resource types.
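To narrow down which resource types are leaking, a sweep over all CRDs along these lines could be used; a minimal sketch, assuming kubectl access to the management cluster (the 1000-object threshold is an arbitrary illustration, not from the issue):

```sh
# Print every custom resource type with a suspiciously high object count
for crd in $(kubectl get crd -o jsonpath='{.items[*].metadata.name}'); do
  count=$(kubectl get "$crd" -A --no-headers 2>/dev/null | wc -l)
  if [ "$count" -gt 1000 ]; then
    echo "$count $crd"
  fi
done
```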
Environment: