migrate some e2e testcases #1500
base: master
Conversation
Hi @fanhaouu. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
The current method of using …
Force-pushed from 6194ad2 to 16e4377.
@fanhaouu can you split the PR into multiple commits? Each commit with a reasonable change, e.g. one commit per test change. Also, @hsunwenfang has already started the migration of TestLeaderElection in #1497.
During the image build? Can you share the failure? I am aware of a parsing error when running the descheduler. Is this a new failure?
The image is built only for testing purposes. There's a different mechanism configured for building a new image after each PR merge.
Sorry, I can't split the PR into multiple commits; many of the changes were made together.
It's a good exercise to find a way to split the PR into smaller changes. Please keep in mind we need to help each other when reviewing any PR. The smaller the changes, the easier they are to review, even if it means refactoring the code in a different PR before you can make a PR that performs the actual migration.
Got it, I understand. I'll pay attention to it next time, but if I have to split this into many PRs, it will consume a lot of my effort and time. PR #1497 is not finished yet, and there are many issues; it seems like he didn't fully understand how to perform the migration.
@ingvagabund There are still some other strategies in the …
Hi @fanhaouu
Your submission doesn't seem to have been executed in a k8s cluster with …
/ok-to-test
I see a lot of good improvements in the code. Thank you for that. My initial review notes:
Ok, no problem. I will make some improvements.
… method to remove duplicated method
Force-pushed from 16e4377 to a4b52b8.
/retest
@@ -101,7 +101,6 @@ func TestTooManyRestarts(t *testing.T) {
 	if _, err := clientSet.CoreV1().Namespaces().Create(ctx, testNamespace, metav1.CreateOptions{}); err != nil {
 		t.Fatalf("Unable to create ns %v", testNamespace.Name)
 	}
-	defer clientSet.CoreV1().Namespaces().Delete(ctx, testNamespace.Name, metav1.DeleteOptions{})
This needs to stay here in case creating a deployment errors and t.Fatalf is invoked.
I didn't remove this logic. I moved it to:
clientSet.CoreV1().Namespaces().Delete(ctx, testNamespace.Name, metav1.DeleteOptions{})
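For context on the reviewer's point: a defer registered right after the namespace is created still runs when a later step calls t.Fatalf, because t.Fatalf stops the test goroutine via runtime.Goexit, which executes pending defers. A minimal sketch of that pattern, assuming the test's existing ctx, clientSet, and testNamespace setup:

```go
// Sketch: register cleanup immediately after creating the namespace, so the
// namespace is deleted even if a later step calls t.Fatalf.
if _, err := clientSet.CoreV1().Namespaces().Create(ctx, testNamespace, metav1.CreateOptions{}); err != nil {
	t.Fatalf("Unable to create ns %v", testNamespace.Name)
}
// Runs on normal return and on t.Fatalf (runtime.Goexit executes defers).
defer clientSet.CoreV1().Namespaces().Delete(ctx, testNamespace.Name, metav1.DeleteOptions{})

// Any failure below (e.g. deployment creation) still triggers the deferred delete.
```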
@@ -194,9 +196,6 @@ func TestTopologySpreadConstraint(t *testing.T) {
 		return
 	}
 
-	// Ensure recently evicted Pod are rescheduled and running before asserting for a balanced topology spread
-	test.WaitForDeploymentPodsRunning(ctx, t, clientSet, deployment)
This is needed for synchronization as the comment describes.
We are currently running the descheduler inside a container, so we just need to continuously wait for the results using wait.PollUntilContextTimeout. We don't actually know when the descheduler evicts the pods.
if err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 60*time.Second, true, func(ctx context.Context) (bool, error) {
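For illustration, a sketch of the polling approach being described. The helper countEvictedPods and the variable expectedEvictedCount are placeholders, not names from the test; the surrounding ctx, clientSet, and testNamespace come from the test's setup:

```go
// Sketch: poll until the expected number of pods has been evicted, or time out.
if err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 60*time.Second, true, func(ctx context.Context) (bool, error) {
	actual, err := countEvictedPods(ctx, clientSet, testNamespace.Name)
	if err != nil {
		// Return (false, nil) to keep retrying on transient errors; a non-nil
		// error would abort the poll immediately.
		return false, nil
	}
	return actual >= expectedEvictedCount, nil
}); err != nil {
	t.Fatalf("Expected %v evicted pods, polling timed out: %v", expectedEvictedCount, err)
}
```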
@@ -181,9 +186,6 @@ func TestTopologySpreadConstraint(t *testing.T) {
 	plugin.(frameworktypes.BalancePlugin).Balance(ctx, workerNodes)
 	t.Logf("Finished RemovePodsViolatingTopologySpreadConstraint strategy for %s", name)
 
-	t.Logf("Wait for terminating pods of %s to disappear", name)
-	waitForTerminatingPodsToDisappear(ctx, t, clientSet, deployment.Namespace)
This is needed for synchronization so new pods get a chance to be started.
We are currently running the descheduler inside a container, so we just need to continuously wait for the results using wait.PollUntilContextTimeout. We don't actually know when the descheduler evicts the pods.
if err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 60*time.Second, true, func(ctx context.Context) (bool, error) {
Regarding "We don't actually know when the descheduler evicts the pods": we do, as we expect the descheduler to evict certain pods within a certain time, based on the configuration the test sets.
@fanhaouu just a quick review. Let me know once the PR is ready for review and all the tests are green. Thank you.
…hen the descheduler running
Force-pushed from cd7b9ce to 115f816.
Force-pushed from 115f816 to 9ba1c55.
That's ok, I think you can start reviewing. It's really frustrating; I've been making adjustments for a long time.
 	}
-	deschedulerPolicyConfigMapObj, err := deschedulerPolicyConfigMap(podlifetimePolicy(podLifeTimeArgs, evictorArgs))
-	deschedulerPolicyConfigMapObj.Name = fmt.Sprintf("%s-%s", deschedulerPolicyConfigMapObj.Name, testName)
+	deschedulerPolicyConfigMapObj, err := deschedulerPolicyConfigMap(podlifetimePolicy(podLifeTimeArgs, evictorArgs), func(cm *v1.ConfigMap) {
What's the benefit of abstracting deschedulerPolicyConfigMap and deschedulerDeployment methods if the apply part is used only once?
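For reference, the pattern under discussion is roughly a variadic "apply" option on the helper, so callers can tweak the generated object (e.g. its name) without the helper growing parameters. A hypothetical sketch of what such a helper might look like; this is not the PR's actual implementation, and it assumes the usual corev1 (k8s.io/api/core/v1), metav1, and fmt imports:

```go
// Hypothetical helper: build a base policy ConfigMap, then let optional
// callbacks mutate it before it is returned.
func newPolicyConfigMap(policyYAML string, applies ...func(*corev1.ConfigMap)) *corev1.ConfigMap {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "descheduler-policy-configmap", Namespace: "kube-system"},
		Data:       map[string]string{"policy.yaml": policyYAML},
	}
	for _, apply := range applies {
		apply(cm)
	}
	return cm
}

// Usage mirrors the diff above: append the test name to the ConfigMap name.
func perTestConfigMap(policyYAML, testName string) *corev1.ConfigMap {
	return newPolicyConfigMap(policyYAML, func(cm *corev1.ConfigMap) {
		cm.Name = fmt.Sprintf("%s-%s", cm.Name, testName)
	})
}
```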
test/e2e/e2e_test.go
@@ -1392,6 +1392,24 @@ func waitForRCPodsRunning(ctx context.Context, t *testing.T, clientSet clientset
 	}
 }
 
+func waitForTerminatingPodsToDisappear(ctx context.Context, t *testing.T, clientSet clientset.Interface, namespace string) {
Here you are putting back waitForTerminatingPodsToDisappear, which was removed in 1e23dbc. Can you consolidate the changes into a single commit?
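The helper's body isn't shown in the hunk above; conceptually it is a wait loop for pods that are still terminating. A rough sketch of what such a helper could look like (illustrative only, not the repository's implementation; assumes the wait, metav1, and clientset imports already used by the e2e tests):

```go
// Rough sketch: wait until no pod in the namespace carries a deletion
// timestamp, i.e. all terminating pods are gone.
func waitForTerminatingPodsToDisappear(ctx context.Context, t *testing.T, clientSet clientset.Interface, namespace string) {
	if err := wait.PollUntilContextTimeout(ctx, 5*time.Second, 60*time.Second, true, func(ctx context.Context) (bool, error) {
		podList, err := clientSet.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, pod := range podList.Items {
			if pod.DeletionTimestamp != nil {
				t.Logf("Pod %v is still terminating", pod.Name)
				return false, nil
			}
		}
		return true, nil
	}); err != nil {
		t.Fatalf("Error waiting for terminating pods to disappear: %v", err)
	}
}
```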
kubernetes/base/rbac.yaml
 - apiGroups: ["coordination.k8s.io"]
   resources: ["leases"]
   resourceNames: ["descheduler"]
-  verbs: ["get", "patch", "delete"]
+  verbs: ["get", "patch", "delete", "create", "update"]
Is there a particular change or commit that resulted in the need to add these verbs here? If so, it needs to be part of the same commit/PR.
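As background only (an assumption about why the extra verbs could be needed, not a statement about this PR): client-go leader election backed by a Lease lock gets the Lease, creates it on first acquisition, and updates it on every renewal, which maps to the get/create/update verbs above. A minimal sketch:

```go
package e2eexample

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runLeaderElection illustrates why RBAC on coordination.k8s.io leases typically
// needs get (read the lock), create (first acquisition), and update (renewals).
func runLeaderElection(ctx context.Context, clientSet kubernetes.Interface, id string) {
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "descheduler", Namespace: "kube-system"},
		Client:     clientSet.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}
	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* do work while leader */ },
			OnStoppedLeading: func() { /* leadership lost */ },
		},
	})
}
```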
var meetsExpectations bool
var meetsEvictedExpectations bool
var actualEvictedPodCount int
t.Logf("Check whether the number of evicted pods meets the expectation")
The commit message "resolve the issue causing the e2e test failures" does not describe which issue and what e2e test failure was resolved. Also, this commit changes two e2es. Is the change in TestTopologySpreadConstraint related to the issue/e2e test failure? If not, can you move this change into the "migrate e2e_topologyspreadconstraint" commit?
@fanhaouu if you separate generic changes from this PR into separate PRs, it will be easier and faster to review the migration. There are still changes in this PR that modify changes made in earlier commits of the same PR.
I think submitting too many changes at once is too cumbersome and makes reviewing difficult. Therefore, I've decided to split this PR into smaller, individual PRs.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
migrate e2e_duplicatepods e2e_failedpods e2e_topologyspreadconstraint e2e_leaderelection e2e_clientconnection