ovh_cloud_project_kube kubeconfig #203
Comments
you're correct. But this is more an issue with the design of the API than with the provider resource itself. As the reset of the cluster triggers a complete cluster re-creation, it's equivalent to a terraform destroy/apply in your scenario. Maybe we could map the "reset" API endpoint to a specific Terraform resource (e.g. ovh_cloud_project_kube_reset); I can't see any other way to manage this specific case. cc @mhurtrel ?
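For illustration, such a resource might be sketched as below; the `ovh_cloud_project_kube_reset` name and its arguments are assumptions (including the `service_name` variable), not part of the current provider:

```hcl
# Hypothetical resource, not in the current provider: creating it (or
# recreating it, e.g. via taint) would call the cluster "reset" endpoint.
resource "ovh_cloud_project_kube_reset" "reset" {
  service_name = var.service_name                  # public cloud project ID (assumed variable)
  kube_id      = ovh_cloud_project_kube.cluster.id # cluster to reset
}
```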
Note there are 2 API calls with different use cases and behaviour. Both of these calls are supposed to be used in very specific situations, and I would not consider them daily actions, as both may have an impact on your application availability and could be avoided by using RBAC and keeping the cluster in good shape.
Yes, ultimately that super-admin kubeconfig is definitely not expected to be disseminated; it's more like something to be used to create RBAC objects, then using kubeconfigs derived from those RBAC objects for actual dissemination. In this case it was a test/dev env, no harm done. The API used by the dashboard can retrieve the kubeconfig even well after creation, so that's why I was confused when the API in the provider stubbornly refused to refresh. Seeing "refreshing the state of module.x.ovh_cloud_project_kube.yyy" and not getting any refresh done is unexpected, I'd say. This means the dashboard at https://www.ovh.com/manager/public-cloud/#/pci/projects/ is using a non-public API, I guess?
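As an aside, that RBAC workflow can itself be expressed with the hashicorp/kubernetes provider; the service account and role binding below are an illustrative sketch, not anything from this thread:

```hcl
# Sketch: the super-admin kubeconfig configures the provider once, and
# day-to-day access goes through RBAC objects like these instead.
resource "kubernetes_service_account" "deployer" {
  metadata {
    name      = "deployer"
    namespace = "default"
  }
}

resource "kubernetes_role_binding" "deployer_edit" {
  metadata {
    name      = "deployer-edit"
    namespace = "default"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "edit" # built-in role, scoped here to one namespace
  }
  subject {
    kind      = "ServiceAccount"
    name      = kubernetes_service_account.deployer.metadata[0].name
    namespace = "default"
  }
}
```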
Well, the provider is supposed to be "dumb" and only map API endpoints as 1-to-1 resources. But in the end you could end up in a situation where you would have multiple resources defined in your recipe (resource ovh_cloud_project_kube cluster {}), and then you would need to know which one has to be output. btw: when looking at the way AWS EKS or the GCP k8s engine are mapped in Terraform, I can't see any logic implemented to reset the cluster auth config. Maybe this logic has to be kept & managed outside Terraform.
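Concretely, the ambiguity this comment describes would land in the output block; the ovh_cloud_project_kube_reset resource mentioned is hypothetical:

```hcl
output "kubeconfig" {
  # If a hypothetical ovh_cloud_project_kube_reset resource also exported
  # credentials, the recipe would have to decide which value is current:
  # this one, or the one from the reset resource.
  value     = ovh_cloud_project_kube.cluster.kubeconfig
  sensitive = true
}
```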
In AWS EKS, there's a data source to retrieve the kubeconfig auth info: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth
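For reference, the usual EKS wiring with that data source looks roughly like this (the cluster name is a placeholder):

```hcl
data "aws_eks_cluster" "this" {
  name = "my-cluster" # placeholder cluster name
}

data "aws_eks_cluster_auth" "this" {
  name = "my-cluster" # placeholder cluster name
}

# Feed the cluster endpoint, CA, and short-lived token to the provider.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.this.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.this.token
}
```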
Hi, is there a GitHub project where we can open an issue so the behavior of the API changes, or would it be okay for the maintainers to just use the reset kubeconfig endpoint in a resource?
Just wanted to add a +1 for getting the API expanded so that there is a data source for pulling the kubeconfig. Without it, it is extremely brittle/obnoxious to use the Kubernetes/Helm providers in Terraform.
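A data source along those lines could be wired into the providers much like the EKS example above; the `ovh_cloud_project_kube_kubeconfig` data source and the `service_name` variable below are hypothetical:

```hcl
# Hypothetical data source; it does not exist in the provider at the
# time of this issue.
data "ovh_cloud_project_kube_kubeconfig" "this" {
  service_name = var.service_name # public cloud project ID (assumed variable)
  kube_id      = ovh_cloud_project_kube.cluster.id
}

# One way to consume it: write the kubeconfig to disk and point the
# Kubernetes provider at the file.
resource "local_file" "kubeconfig" {
  content         = data.ovh_cloud_project_kube_kubeconfig.this.kubeconfig
  filename        = "${path.module}/kubeconfig.yaml"
  file_permission = "0600"
}

provider "kubernetes" {
  config_path = local_file.kubeconfig.filename
}
```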
Hi,
it seems to me the kubeconfig attribute of the ovh_cloud_project_kube resource is not fetched from the API on refresh; it is only fetched once, at resource creation. If the kubeconfig is compromised and therefore reset in the dashboard, the state of the resource in Terraform is not refreshed.
Terraform Version
v0.13.7, but probably affects other versions as well
Affected Resource(s)
ovh_cloud_project_kube
Expected Behavior
The .kubeconfig attribute of the ovh_cloud_project_kube resource should reflect the cluster's current kubeconfig, as returned by the API
Actual Behavior
The .kubeconfig attribute stays the same as it was when the cluster was first deployed
Steps to Reproduce
Reset the kubeconfig from the dashboard, then run terraform refresh or terraform apply:
> no change in the resource state
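For completeness, a minimal configuration that shows the behavior might look like this; the names, region, and `service_name` variable are placeholders:

```hcl
resource "ovh_cloud_project_kube" "cluster" {
  service_name = var.service_name # public cloud project ID (assumed variable)
  name         = "my-cluster"     # placeholder
  region       = "GRA5"           # placeholder
}

output "kubeconfig" {
  value     = ovh_cloud_project_kube.cluster.kubeconfig
  sensitive = true
}

# After resetting the kubeconfig from the dashboard, `terraform refresh`
# reports the resource as refreshed, but the output above keeps the
# credentials captured at creation time.
```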