
Investigate kind replacement for e2e #1391

Open
a7i opened this issue May 7, 2024 · 12 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@a7i (Contributor) commented May 7, 2024

Is your feature request related to a problem? Please describe.

Descheduler releases are typically blocked by waiting for a kind node image.

Describe the solution you'd like

  1. Build our own kind node image, or
  2. Pursue alternatives such as k3d or minikube, which keep up with the latest Kubernetes versions

Describe alternatives you've considered

Stay behind on releases

Additional context

kubernetes-sigs/kind#3589

@a7i a7i added the kind/feature Categorizes issue or PR as related to a new feature. label May 7, 2024
@BenTheElder (Member) commented May 13, 2024

As an upstream Kubernetes project, you should really consider testing against upcoming Kubernetes before it releases, by running `kind build node-image` within the e2e pipeline and then `kind create cluster --image=kindest/node:latest` (from the locally built image; more in the docs).
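That flow might look roughly like the following in an e2e script (a sketch: the function name and the cluster name `descheduler-e2e` are my own, and it assumes kind can find a Kubernetes checkout in its usual location, e.g. `$GOPATH/src/k8s.io/kubernetes`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: build a node image from the local Kubernetes source tree, then
# start a cluster from that locally built image. Cluster name is hypothetical.
e2e_cluster_from_head() {
  kind build node-image                 # produces kindest/node:latest locally
  kind create cluster \
    --name descheduler-e2e \
    --image kindest/node:latest
}
```

In CI, `e2e_cluster_from_head` would run before the e2e suite, so the tests exercise whatever is currently at Kubernetes HEAD rather than the last released node image.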

EDIT: 1.30 is out anyhow.

@damemi (Contributor) commented May 14, 2024

Thanks @BenTheElder, that's a great suggestion and I think it's exactly what we've been looking for. It has always felt a bit odd that we were dependent on waiting for the kind release before testing and publishing our next release. I didn't know about that option.

@a7i (Contributor, Author) commented May 14, 2024

cc: @pravarag since you wanted to look into this.

I think we could do something like:

`docker pull kindest/node:v1.30.0 || build_kind_image_from_source`
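Wrapped in a helper, that pull-or-build fallback could look roughly like this (a sketch: `ensure_node_image`, the `KUBERNETES_VERSION` variable, and the build fallback are my naming, not an existing descheduler script):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch of the pull-or-build fallback: use the published kind node image
# for the target Kubernetes version if it exists, otherwise build one from
# a local Kubernetes checkout. Names here are hypothetical.
ensure_node_image() {
  local version="${KUBERNETES_VERSION:-v1.30.0}"
  local image="kindest/node:${version}"

  if docker pull "${image}" >/dev/null 2>&1; then
    echo "using published image ${image}"
  else
    # kind builds from $GOPATH/src/k8s.io/kubernetes by default.
    kind build node-image --image "${image}"
    echo "built ${image} from source"
  fi
}
```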

@damemi (Contributor) commented May 14, 2024

@a7i I'd say our master branch could always build from source, with our tag branches using the released version.

The downside is that we then run the risk of getting master blocked on bugs from kind.

@a7i (Contributor, Author) commented May 14, 2024

Wouldn't that still get us blocked on kind image being released? Unless that's intentional?

@damemi (Contributor) commented May 14, 2024

It would block new PRs to the tagged branch until that image was available, so maybe the release branches could use the conditional switch you're suggesting.
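The branch split being discussed can be expressed as a small decision helper (a sketch: the function name and branch naming convention are assumptions about the workflow, not existing descheduler tooling):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: master always builds the node image from source, while release
# branches prefer the published image (with a pull-or-build fallback).
pick_image_strategy() {
  local branch="${1:-master}"
  if [ "${branch}" = "master" ]; then
    echo "build-from-source"
  else
    echo "pull-released-image"
  fi
}
```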

@BenTheElder (Member)

For k/k we're fetching kind from HEAD to stay compatible with any Kubernetes breaking changes, but we're also running equivalent CI jobs; it's possible we'd have a breaking change that you'd have found in the release notes.

That said, we're avoiding those as much as possible, and when we're planning one they've generally been pre-announced (like the containerd 2.0-style registry config) in previous release notes, similar to Kubernetes-style deprecations. This isn't always possible when we have to react to e.g. the runc misc cgroup changes, but generally speaking, picking up those changes should be desirable in a typical CI environment.

We're discussing continuous image builds for the future; right now we build releases with https://github.com/kubernetes-sigs/kind/blob/0a7403e49c529d22cacd3a3f3606b9d8a5c16ae7/hack/release/build/push-node.sh, which among other things makes the images smaller by compiling out dockershim and cloud providers ... which won't be necessary versus standard builds in 1.31+.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 12, 2024
@pravarag (Contributor)

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 28, 2024
@pravarag (Contributor)

@a7i apologies for letting this issue slip for so long. I've started investigating it again and will share my further inputs/doubts soon. Thanks!

@a7i (Contributor, Author) commented Aug 28, 2024

Thanks @pravarag, no worries; I totally understand how open source goes, and we certainly appreciate your contribution.

The images are published quickly now too, so it's not as urgent: https://hub.docker.com/r/kindest/node/tags

@BenTheElder (Member) commented Aug 28, 2024

FYI, you can also cheaply build images from Kubernetes release binaries (recommended usage is only for 1.31+, but it works for older releases too, just with larger images).

Multi-arch support isn't there yet, but you could combine the per-arch images with `docker manifest` if you require that.

For quickly building an image at a custom version to use in GitHub Actions, see the release notes at https://github.com/kubernetes-sigs/kind/releases/tag/v0.24.0 / https://kind.sigs.k8s.io/docs/user/quick-start/#building-images

This is much faster and doesn't require compiling, only downloading a build.
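If I'm reading the v0.24.0 notes correctly, that looks roughly like the following (a sketch: the version, the `--image` tag, and the cluster name are illustrative, and kind v0.24.0+ is assumed on the runner):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sketch: with kind v0.24.0+, build a node image from published Kubernetes
# release binaries (downloading, not compiling), then use it for e2e.
# The default version and cluster name here are examples.
build_image_from_release() {
  local version="${1:-v1.31.0}"
  kind build node-image "${version}" --image "kindest/node:${version}"
  kind create cluster --name descheduler-e2e --image "kindest/node:${version}"
}
```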


6 participants