fix: fix occasional e2e failure #1614
base: main
Conversation
Signed-off-by: jwcesign <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: jwcesign

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
Hi @jwcesign. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
By the way, should we re-enqueue the item if an unexpected error happens, like a broken Internet connection? For example, the following code:
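(A minimal sketch of the pattern being asked about, assuming a controller-runtime-style reconciler; the import paths and the `NodeForNodeClaim` helper are assumptions for illustration, not verified repo code:)

```go
package lifecycle

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	v1 "sigs.k8s.io/karpenter/pkg/apis/v1"                    // assumed import path
	nodeclaimutil "sigs.k8s.io/karpenter/pkg/utils/nodeclaim" // assumed import path
)

// Registration is stubbed here to keep the sketch self-contained.
type Registration struct {
	kubeClient client.Client
}

// Sketch only: returning a non-nil error makes controller-runtime re-enqueue
// the item with rate-limited backoff, so transient failures (e.g. a broken
// network connection) get retried instead of silently dropped.
func (r *Registration) Reconcile(ctx context.Context, nodeClaim *v1.NodeClaim) (reconcile.Result, error) {
	node, err := nodeclaimutil.NodeForNodeClaim(ctx, r.kubeClient, nodeClaim) // assumed lookup helper
	if err != nil {
		if nodeclaimutil.IsNodeNotFoundError(err) {
			// Expected while the node is still registering: requeue explicitly.
			return reconcile.Result{Requeue: true}, nil
		}
		// Unexpected error: surface it so the workqueue retries with backoff.
		return reconcile.Result{}, err
	}
	_ = node // registration logic would continue here
	return reconcile.Result{}, nil
}
```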
I'd prefer to re-enqueue them, to make the logic more robust.
Pull Request Test Coverage Report for Build 10630261456

💛 - Coveralls
```diff
@@ -50,7 +50,7 @@ func (r *Registration) Reconcile(ctx context.Context, nodeClaim *v1.NodeClaim) (
 	if err != nil {
 		if nodeclaimutil.IsNodeNotFoundError(err) {
 			nodeClaim.StatusConditions().SetUnknownWithReason(v1.ConditionTypeRegistered, "NodeNotFound", "Node not registered with cluster")
-			return reconcile.Result{}, nil
+			return reconcile.Result{Requeue: true}, nil
```
This should be caught by our re-queue on nodes here: https://github.com/kubernetes-sigs/karpenter/blob/main/pkg/controllers/nodeclaim/lifecycle/controller.go#L136-L138
If we don't have a node when we first look at this, the node should eventually be created, and that event will re-trigger this reconciliation.
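(For reference, a hedged sketch of the kind of watch wiring the linked lines set up, assuming controller-runtime's builder API; the stub types and the name-based mapping are illustrative, not the actual controller.go:)

```go
package lifecycle

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	controllerruntime "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"

	v1 "sigs.k8s.io/karpenter/pkg/apis/v1" // assumed import path
)

// Controller is stubbed here to keep the sketch self-contained.
type Controller struct{}

func (c *Controller) Reconcile(ctx context.Context, req reconcile.Request) (reconcile.Result, error) {
	return reconcile.Result{}, nil // real registration logic elided
}

// Sketch: watching Node objects and mapping each Node event back to a
// NodeClaim request means the NodeClaim is reconciled again as soon as its
// Node appears, even without an explicit Requeue from the reconciler.
func (c *Controller) Register(_ context.Context, m manager.Manager) error {
	return controllerruntime.NewControllerManagedBy(m).
		Named("nodeclaim.lifecycle").
		For(&v1.NodeClaim{}).
		Watches(&corev1.Node{}, handler.EnqueueRequestsFromMapFunc(
			func(ctx context.Context, o client.Object) []reconcile.Request {
				// Illustrative mapping only: the real handler resolves the
				// owning NodeClaim (e.g. via provider ID), not by name.
				return []reconcile.Request{{NamespacedName: types.NamespacedName{Name: o.GetName()}}}
			})).
		Complete(c)
}
```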
That makes sense. Let me check on that.
Fixes #N/A
Description
When I run `make e2etests`, one failure is: https://github.com/kubernetes-sigs/karpenter/actions/runs/10590933040/job/29347492851

The context is as follows:
nodeclaim:
corresponding node:
So this PR tries to re-enqueue the item.
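(To spell out why the one-line change matters, here is a minimal sketch of how controller-runtime treats the common `reconcile.Result` return shapes; the function name is purely illustrative:)

```go
package lifecycle

import "sigs.k8s.io/controller-runtime/pkg/reconcile"

// requeueSemantics (illustrative name) contrasts the return shapes: with
// reconcile.Result{} and a nil error the item is simply forgotten, which is
// why the original code could stall until some unrelated watch event fired.
func requeueSemantics(retry bool, err error) (reconcile.Result, error) {
	switch {
	case err != nil:
		return reconcile.Result{}, err // logged as an error, re-enqueued with rate-limited backoff
	case retry:
		return reconcile.Result{Requeue: true}, nil // re-enqueued with backoff, no error logged
	default:
		return reconcile.Result{}, nil // dropped: nothing runs again until the next watch event
	}
}
```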
How was this change tested?
Without this change, the failure reliably shows up within about 8 attempts; with it, I ran the suite 20+ times and the following failure no longer appears:
```
[FAIL] Performance Provisioning [It] should do complex provisioning and complex drift
/home/runner/work/karpenter/karpenter/test/suites/perf/scheduling_test.go:134
```
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.