
[release-1.10] Bump CAPI to v1.4.5 #3782

Conversation

k8s-infra-cherrypick-robot

This is an automated cherry-pick of #3756

/assign mboersma

Update CAPI to v1.4.5

@k8s-ci-robot k8s-ci-robot added the release-note Denotes a PR that will be considered when it comes time to generate release notes. label Jul 31, 2023
@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Jul 31, 2023
@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Jul 31, 2023
@codecov

codecov bot commented Jul 31, 2023

Codecov Report

Patch and project coverage have no change.

Comparison is base (fedc8b5) 54.05% compared to head (aa1bebb) 54.05%.
Report is 4 commits behind head on release-1.10.

Additional details and impacted files
@@              Coverage Diff              @@
##           release-1.10    #3782   +/-   ##
=============================================
  Coverage         54.05%   54.05%           
=============================================
  Files               186      186           
  Lines             18833    18833           
=============================================
  Hits              10181    10181           
  Misses             8105     8105           
  Partials            547      547           


@CecileRobertMichon
Contributor

@mboersma the test failure looks like a Helm install error

is there any way to make the output for those more human readable?

@mboersma
Contributor

is there any way to make the output for those more human readable?

We print all the output from the helm install command directly, so I'm not sure how to improve that. It seems relatively clear to me.

  Jul 31 20:01:17.831: INFO: Helm command: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/hack/tools/bin/helm upgrade azuredisk-csi-driver-oot azuredisk-csi-driver --install --kubeconfig /tmp/e2e-kubeconfig3406234980 --create-namespace --namespace kube-system --repo https://raw.githubusercontent.com/kubernetes-sigs/azuredisk-csi-driver/master/charts --set controller.replicas=1 --set controller.runOnControlPlane=true
  Jul 31 20:01:47.930: INFO: Helm install output: Error: Kubernetes cluster unreachable: Get "https://node-drain-2nz6zk-32b2f2d0.eastus.cloudapp.azure.com:6443/version": dial tcp 52.151.202.93:6443: i/o timeout

I'm not positive we're actually retrying successfully here though... The code is written with Ginkgo's Eventually in a way that really should be retrying, but I'm not seeing that in the logs.

@k8s-ci-robot
Contributor

k8s-ci-robot commented Jul 31, 2023

@k8s-infra-cherrypick-robot: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-cluster-api-provider-azure-capi-e2e-v1beta1
Commit: aa1bebb
Details: link
Required: false
Rerun command: /test pull-cluster-api-provider-azure-capi-e2e-v1beta1

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@CecileRobertMichon
Contributor

We print all the output from the helm install command directly, so I'm not sure how to improve that. It seems relatively clear to me.

Yes, now I see the actual output in the test output and I agree it's clear enough. I was originally looking at the ginkgo summary which isn't as readable...

{Unexpected error:
    <*exec.ExitError | 0xc000801880>: 
    exit status 1
    {
        ProcessState: {
            pid: 42397,
            status: 256,
            rusage: {
                Utime: {Sec: 0, Usec: 66423},
                Stime: {Sec: 0, Usec: 28451},
                Maxrss: 73296,
                Ixrss: 0,
                Idrss: 0,
                Isrss: 0,
                Minflt: 4525,
                Majflt: 0,
                Nswap: 0,
                Inblock: 0,
                Oublock: 0,
                Msgsnd: 0,
                Msgrcv: 0,
                Nsignals: 0,
                Nvcsw: 403,
                Nivcsw: 31,
            },
        },
        Stderr: nil,
    }
occurred
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/helm.go:76 @ 07/31/23 20:01:47.93
}

And yes, this does seem like a transient network flake that we should have retried but didn't.

@Jont828
Contributor

Jont828 commented Jul 31, 2023

@mboersma Was this retrying in the main branch when we merged it? In other words, did we ever encounter an error and need to loop until the install succeeds?

@mboersma
Contributor

mboersma commented Jul 31, 2023

Was this retrying in the main branch ... ?

I thought so, but I'm not sure now. 🤷🏻 I'll make a test PR with a bad Helm chart and see if I can work it out there.

(Meanwhile, we need CAPI v1.4.5 to merge in both release branches so testgrid stops being red for multiple SIG projects. We tried to work around the bug by reverting the Go compiler version, but that's whack-a-mole and was a temporary band-aid anyway.)

/retest

@mboersma
Contributor

I created #3787 and I'm investigating why it's not retrying.

@CecileRobertMichon
Contributor

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Jul 31, 2023
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: CecileRobertMichon

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

LGTM label has been added.

Git tree hash: 1e784519a8a6c56a9b94a1a361a9e58a226307b3

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Jul 31, 2023
@k8s-ci-robot k8s-ci-robot merged commit b14a7cc into kubernetes-sigs:release-1.10 Jul 31, 2023
8 checks passed