ovh_cloud_project_kube kubeconfig #203

Open
rienafairefr opened this issue Jun 18, 2021 · 7 comments

@rienafairefr (Contributor)

Hi,
it seems the kubeconfig attribute of ovh_cloud_project_kube is not fetched from the API; it is only fetched once, at resource creation. If the kubeconfig is compromised and therefore reset in the dashboard, the state of the resource in Terraform is not refreshed.

Terraform Version

v0.13.7, but probably other versions as well

Affected Resource(s)

ovh_cloud_project_kube

Expected Behavior

The .kubeconfig attribute of the ovh_cloud_project_kube resource should always reflect the cluster's current kubeconfig

Actual Behavior

The .kubeconfig attribute stays the same as when first deployed

Steps to Reproduce

  1. Create an ovh_cloud_project_kube resource (a minimal sketch follows this list).
  2. Reset the kubeconfig, e.g. in the dashboard.
  3. terraform refresh/apply → no change in the resource state.
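
For reference, a minimal configuration sketch for step 1 (values are illustrative; `var.service_name` stands for an existing OVH public cloud project ID):

```hcl
resource "ovh_cloud_project_kube" "cluster" {
  service_name = var.service_name # OVH public cloud project ID
  name         = "repro-cluster"
  region       = "GRA7"
}

# After a dashboard-side kubeconfig reset, this output keeps returning
# the kubeconfig captured at creation time.
output "kubeconfig" {
  value     = ovh_cloud_project_kube.cluster.kubeconfig
  sensitive = true
}
```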
@yanndegat (Collaborator) commented Jun 18, 2021

Hi @rienafairefr

You're correct. But this is more an issue with the design of the API than with the provider resource itself:
there's no way to detect this kind of change, as the kubeconfig can be retrieved only at cluster creation.

As a reset of the cluster triggers a complete cluster re-creation, it's equivalent to a terraform destroy/apply in your scenario.

Maybe we could map the "reset" API endpoint to a dedicated Terraform resource (e.g. ovh_cloud_project_kube_reset) which would
only trigger a reset, so you could taint it on demand and retrieve the kubeconfig from this new kind of resource. A sketch of the idea follows.
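
For illustration only; the resource name and its attributes are assumptions, as nothing like this exists in the provider today:

```hcl
# Hypothetical resource mapping POST /cloud/project/{serviceName}/kube/{kubeId}/reset.
# Tainting it would trigger a reset on the next apply and capture the
# newly issued kubeconfig in state.
resource "ovh_cloud_project_kube_reset" "reset" {
  service_name = var.service_name
  kube_id      = ovh_cloud_project_kube.cluster.id
}
```

Resetting on demand would then be a matter of `terraform taint ovh_cloud_project_kube_reset.reset` followed by `terraform apply`.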

I can't see any other way to manage this specific case.

cc @mhurtrel ?

@mhurtrel (Collaborator)

Note there are 2 API calls with different use cases and behaviour.

Both of these calls are supposed to be used in very specific situations, and I would not consider them daily actions, as both may have an impact on your application availability and could be avoided by using RBAC and keeping the cluster in good shape.

  • POST /cloud/project/{serviceName}/kube/{kubeId}/kubeconfig/reset : keeps all Kubernetes data but resets the CA and reinstalls the nodes with that CA. You will need to fetch a new kubeconfig file, but all pods/deployments etc. are kept. Use case: you shared the admin kubeconfig, did not rely on RBAC for individual roles, and an employee left the company; you want to be sure all access is reset.
  • POST /cloud/project/{serviceName}/kube/{kubeId}/reset : all cluster data is lost. As Yann shared, it is equivalent to destroy and recreate. We created this very specific call to manage a very specific situation: a customer that paid for nodes for a month and wants to keep them instead of new ones (sometimes also to keep the IPs that were whitelisted somewhere). But all user data is lost.

@rienafairefr (Contributor, Author)

Yes, ultimately that super-admin kubeconfig is definitely not meant to be disseminated; it's more for creating RBAC objects, with kubeconfigs derived from those RBAC roles being handed out instead. In this case it was a test/dev env, no harm done.

The API used by the dashboard can retrieve the kubeconfig well after creation, which is why I was confused when the API in the provider stubbornly refused to refresh. Seeing "refreshing the state of module.x.ovh_cloud_project_kube.yyy" and not getting any refresh done is unexpected, I'd say. This means the dashboard at https://www.ovh.com/manager/public-cloud/#/pci/projects/ is using a non-public API, I guess?

Yes, an ovh_cloud_project_kube_kubeconfig resource might make sense, as you described @yanndegat, passing it the ovh_cloud_project_kube id. On creation of the ovh_cloud_project_kube, the linked ovh_cloud_project_kube_kubeconfig would expose its kubeconfig attribute; the provider would call POST /cloud/project/{serviceName}/kube/{kubeId}/kubeconfig/reset when the ovh_cloud_project_kube_kubeconfig is tainted, while POST /cloud/project/{serviceName}/kube/{kubeId}/reset would only be called when tainting/re-creating the actual ovh_cloud_project_kube. Sketched below:
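
A sketch with those hypothetical names (neither ovh_cloud_project_kube_kubeconfig nor its attributes exist today; the endpoint each action would hit is noted in comments):

```hcl
resource "ovh_cloud_project_kube" "cluster" {
  # Tainting/re-creating this resource would map to
  # POST /cloud/project/{serviceName}/kube/{kubeId}/reset.
  service_name = var.service_name
  name         = "my-cluster"
  region       = "GRA7"
}

resource "ovh_cloud_project_kube_kubeconfig" "kubeconfig" {
  # Created alongside the cluster; tainting only this resource would call
  # POST /cloud/project/{serviceName}/kube/{kubeId}/kubeconfig/reset
  # and store the newly issued kubeconfig in state.
  service_name = var.service_name
  kube_id      = ovh_cloud_project_kube.cluster.id
}

output "kubeconfig" {
  value     = ovh_cloud_project_kube_kubeconfig.kubeconfig.kubeconfig
  sensitive = true
}
```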

@yanndegat (Collaborator)

Well, the provider is supposed to be "dumb" and only map API endpoints as 1-to-1 resources.
There are some very specific cases where this is not true, but if there is to be some business logic, it has to be implemented as an API endpoint first, then mapped in Terraform.

But in the end you could end up in a situation where you have multiple resources defined in your recipe:

```hcl
resource "ovh_cloud_project_kube" "cluster" {}
resource "ovh_cloud_project_kube_reset" "fullreset" {}
resource "ovh_cloud_project_kube_kubeconfig_reset" "kubeconfig_reset" {}
```

but then you would have to know which one's kubeconfig to output.

BTW: when looking at the way AWS EKS or GCP's Kubernetes engine are mapped in Terraform, I can't see any logic implemented to reset the cluster auth config. Maybe this logic has to be kept & managed outside Terraform.

@yanndegat (Collaborator)

In AWS EKS, there's a data source to retrieve the kubeconfig auth info:

https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth
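
For comparison, the usual pattern with that data source (adapted from the AWS provider docs; the cluster name is illustrative):

```hcl
data "aws_eks_cluster" "example" {
  name = "example-cluster"
}

data "aws_eks_cluster_auth" "example" {
  name = "example-cluster"
}

# Data sources are re-read on every plan/refresh, so the auth info in
# state never goes stale.
provider "kubernetes" {
  host                   = data.aws_eks_cluster.example.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.example.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.example.token
}
```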

@bullshit

Hi, is there a GitHub project where we can open an issue so the behaviour of the API changes, or would it be okay for the maintainers to just use the reset-kubeconfig endpoint in a resource?

@ElliotG commented Jan 9, 2023

Just wanted to add a +1 for getting the API expanded so that there is a data source for pulling the kubeconfig. Without it, it is extremely brittle/obnoxious to use the Kubernetes/Helm providers in Terraform.
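
To make the pain concrete, here is the kind of wiring typically used today, assuming a cluster resource like the one sketched earlier in the thread; because kubeconfig is only captured at creation, the decoded credentials go stale after any reset:

```hcl
# Parse the kubeconfig YAML that was captured in state at creation time.
locals {
  kubeconfig = yamldecode(ovh_cloud_project_kube.cluster.kubeconfig)
}

provider "kubernetes" {
  host                   = local.kubeconfig.clusters[0].cluster.server
  cluster_ca_certificate = base64decode(local.kubeconfig.clusters[0].cluster["certificate-authority-data"])
  client_certificate     = base64decode(local.kubeconfig.users[0].user["client-certificate-data"])
  client_key             = base64decode(local.kubeconfig.users[0].user["client-key-data"])
}
```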
