fix: allow restricted labels in pod affinity/nodeSelector #1608
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: jwcesign. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing their approval in a comment.
Hi @jwcesign. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with the appropriate command. Once the patch is verified, the new status will be reflected. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Signed-off-by: jwcesign <[email protected]>
Pull Request Test Coverage Report for Build 10615581376 (Details)
💛 - Coveralls
Please take a look:
In the PR description, consider explaining why these changes are the right way to fix it.
Thanks, I added
```go
labelDomain := GetLabelDomain(key)
for exceptionLabelDomain := range LabelDomainExceptions {
	if strings.HasSuffix(labelDomain, exceptionLabelDomain) {
		return fmt.Errorf("requirement label key %s is restricted; specify a well known label: %v, or a custom label that does not use a restricted domain: %v", key, sets.List(WellKnownLabels), sets.List(RestrictedLabelDomains))
	}
}
```
I'm not sure I understand this change. We're specifically allowing these sub-domains as an exception, understanding that some users use these labels. Are you saying all kops labels shouldn't be used?
For a nodepool's requirements, I don't see any reason to configure something like this.
It could be used as a nodepool's labels, not its requirements.
I checked the logic; this function is used to validate the nodepool's requirements.
cc @njtran
Fixes #1596
Description
In general, we should not filter out, by label key, the pending pods that could trigger node scale-out. If a pod can't be scheduled, it is simply ignored during the scheduling simulation:
karpenter/pkg/controllers/provisioning/scheduling/scheduler.go
Line 197 in 75fcd2a
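The behavior referenced above can be sketched in a few lines. This is a minimal illustration of "unschedulable pods are skipped, not fatal," not Karpenter's actual scheduler code; the `Pod` type and `simulate` function are assumed stand-ins.

```go
package main

import "fmt"

// Pod is an illustrative stand-in for the scheduler's pod representation.
type Pod struct {
	Name        string
	Schedulable bool
}

// simulate returns an error for pods that cannot be placed on any node.
func simulate(p Pod) error {
	if !p.Schedulable {
		return fmt.Errorf("no node satisfies requirements of pod %s", p.Name)
	}
	return nil
}

func main() {
	pods := []Pod{{"ok", true}, {"uses-restricted-label", false}}
	var scheduled []string
	for _, p := range pods {
		// Pods that fail the simulation are skipped rather than
		// aborting the provisioning loop (cf. scheduler.go line 197).
		if err := simulate(p); err != nil {
			continue
		}
		scheduled = append(scheduled, p.Name)
	}
	fmt.Println(scheduled)
}
```

Under this reading, rejecting restricted label keys up front is unnecessary: a pod whose requirements can never be satisfied already drops out of the simulation on its own.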
So, let's remove the restricted-label limitation. I also checked the git blame; the related PR is aws/karpenter-provider-aws#2051. That PR only aimed to implement support for the Gt and Lt requirement operators in pods; it did not intend to restrict label keys such as kubernetes.io/hostname. So let's drop the limitation.
How was this change tested?
make e2etests
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.