If you want to implement this feature, comment to let us know (we'll work with you on design, scheduling, etc.)
Issue details
Docker Hub's rate limit (100 pulls per 6 hours for anonymous users) is frustrating to deal with when you run a lot of containers. It would be great to also host the images on GHCR or quay.io. The implementation should be fairly easy, judging by the current workflow that pushes the images. I believe it's as simple as adding a few lines to this file: https://github.com/pulumi/pulumi-kubernetes-operator/blob/master/.github/workflows/release.yaml
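For illustration, a minimal sketch of what the extra lines in `release.yaml` might look like, assuming the repo adopts the widely used `docker/login-action` and authenticates to GHCR with the workflow's built-in `GITHUB_TOKEN` (the step name and token wiring here are my assumptions, not the project's actual configuration):

```yaml
# Hypothetical addition to .github/workflows/release.yaml:
# a GHCR login step alongside the existing Docker Hub login.
- name: Login to GHCR
  uses: docker/login-action@v2
  with:
    registry: ghcr.io
    username: ${{ github.actor }}
    password: ${{ secrets.GITHUB_TOKEN }}
```

The release job would also need `permissions: packages: write` for the token to be allowed to push.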
Thanks for suggesting this enhancement. I'll raise it with our team to explore whether pushing to alternative container image registries is something we'd like to support. In the meantime, note that there's a workaround available: you can host a mirror of our images in your own private image registry.
Additionally, I'm curious to learn more about how Docker Hub's rate limiting is negatively affecting your workflow. My expectation is that the Pulumi Operator is long-lived on the cluster, so an image pull should really only occur when upgrading to the next version of the Operator. Understanding how you're using the operator and how the rate limits affect you will help us refine the user experience and offer better solutions.
Thank you for bringing this to our attention, and your patience is greatly appreciated as we work on enhancing our platform. If you can provide more details about your use case or have any further questions, please feel free to share.
I know I can just have a CI job build a container using `FROM pulumi/pulumi-kubernetes-operator` and push it to GHCR, but it would be nice to have an official image from upstream.
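The mirroring workaround described here could be a small standalone workflow like the sketch below. The workflow name, trigger, and image tag are placeholders of my own; only the upstream image name comes from the thread:

```yaml
# Hypothetical workflow to mirror the upstream image into your own GHCR namespace.
name: mirror-operator-image
on:
  workflow_dispatch:
jobs:
  mirror:
    runs-on: ubuntu-latest
    permissions:
      packages: write
    steps:
      - name: Login to GHCR
        uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Pull, retag, and push
        run: |
          # v1.10.0 is a placeholder tag; pin whatever version you actually run.
          docker pull pulumi/pulumi-kubernetes-operator:v1.10.0
          docker tag pulumi/pulumi-kubernetes-operator:v1.10.0 \
            ghcr.io/${{ github.repository_owner }}/pulumi-kubernetes-operator:v1.10.0
          docker push ghcr.io/${{ github.repository_owner }}/pulumi-kubernetes-operator:v1.10.0
```

A plain retag-and-push keeps the image digest-identical in content, which avoids the extra layer a `FROM`-based rebuild would add.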
My use case: I have a Kubernetes cluster at home to self-host a lot of things, managed with Flux CD for GitOps (https://github.com/budimanjojo/home-cluster). Because it's at home, I also use it to try new things (like testing Pulumi in this case). There's always a chance I break the cluster badly enough that it's easier to just rebuild it, which is fairly simple because everything is GitOps-managed. When I rebuild, Flux pulls a lot of container images, and I've hit the rate limit doing that before, which is painful. Since then I avoid Docker Hub images as much as I can.
Hello!

To be concrete about the changes: in https://github.com/pulumi/pulumi-kubernetes-operator/blob/master/.github/workflows/release.yaml, below lines 72 to 74 (at commit 7992cc9), add the corresponding configuration for the new registry. Then in https://github.com/pulumi/pulumi-kubernetes-operator/blob/master/.goreleaser.yml, below lines 58 to 60 (at commit 7992cc9), add the corresponding entries for the new registry.

I can open a PR if you don't have time to do this. Thanks in advance.
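A hedged sketch of the `.goreleaser.yml` side, assuming the release uses GoReleaser's `dockers` section with `image_templates` (the repo's actual field layout and tag template may differ):

```yaml
# Hypothetical addition: publish the same image to GHCR alongside Docker Hub
# by listing a second template under the existing dockers entry.
dockers:
  - image_templates:
      - "pulumi/pulumi-kubernetes-operator:{{ .Tag }}"
      - "ghcr.io/pulumi/pulumi-kubernetes-operator:{{ .Tag }}"
```

With both templates on one entry, GoReleaser builds the image once and pushes it to both registries during the release.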