Support for importing terraform resources #193

Open
toastwaffle opened this issue Sep 19, 2023 · 10 comments
Labels
enhancement, needs:triage

Comments

@toastwaffle
Contributor

What problem are you facing?

We're currently using upbound-provider-gcp to provision GKE clusters, but we're considering moving to provider-terraform for a variety of reasons (the ability to use google-beta features, and the ability to use our in-house TF module registry rather than duplicating everything as Crossplane Compositions).

Migrating to provider-terraform is a little painful, because it tries to create things which already exist (because they're not in the TF state) and gets ALREADY_EXISTS errors back from GCP. When we migrated our DNS resources to upbound-provider-terraform, we scheduled a maintenance window and deleted the upbound-provider-gcp resources so that everything got recreated. That's not really an option for GKE clusters.

How could Official Terraform Provider help solve your problem?

It would be extremely useful if there were a mechanism to make provider-terraform import existing resources into its state (using the terraform import command).

My idea for how to do this is to have an annotation like terraform.upbound.io/needs-import: true which could be manually added to Workspace resources. Upon seeing that annotation, the provider could run terraform import before running plan/apply, and then remove the annotation if successful. I'm open to alternative suggestions though.
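To make that concrete, a Workspace carrying the proposed (not yet existing) annotation might look like this:

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: gke-cluster
  annotations:
    # Proposed (hypothetical) marker: the provider would run `terraform import`
    # for the module's resources before the next plan/apply and remove the
    # annotation on success.
    terraform.upbound.io/needs-import: "true"
spec:
  forProvider:
    source: Remote
    # Placeholder module source.
    module: https://github.com/example-org/terraform-gke-module
  providerConfigRef:
    name: default
```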

Assuming adding this feature wouldn't take too long, I think we'd be happy to contribute it.

@toastwaffle added the enhancement and needs:triage labels Sep 19, 2023
@mbbush

mbbush commented Sep 19, 2023

Have you tried using the declarative import block that was introduced in terraform 1.5?
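In an inline Workspace module that could look roughly like this (the resource address and import ID are placeholders):

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: gke-cluster
spec:
  forProvider:
    source: Inline
    module: |
      # Terraform >= 1.5 declarative import: adopts the existing cluster into
      # state on the next apply instead of attempting to create it.
      import {
        to = google_container_cluster.primary
        id = "projects/my-project/locations/us-central1/clusters/my-cluster"  # placeholder import ID
      }

      resource "google_container_cluster" "primary" {
        name     = "my-cluster"
        location = "us-central1"
        # ...remaining arguments matching the existing cluster
      }
  providerConfigRef:
    name: default
```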

@ytsarev
Member

ytsarev commented Sep 20, 2023

If you like, you can point to the existing state in the configuration block.
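Roughly like this, assuming a GCS backend (bucket and prefix are placeholders; credentials and other ProviderConfig fields omitted):

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: ProviderConfig
metadata:
  name: existing-state
spec:
  # Backend pointing at the state that already contains the resources, so
  # Workspaces using this ProviderConfig plan against it instead of an empty state.
  configuration: |
    terraform {
      backend "gcs" {
        bucket = "my-existing-tf-state"  # placeholder bucket
        prefix = "gke-clusters"          # placeholder prefix
      }
    }
```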

@NikitaCloudRuntime

Hello @ytsarev :)

I came across your answer by accident while searching for how to solve an issue we encountered during our Crossplane migration.

The problem is that we want to continue using Terraform for several infrastructure components and don't want to deploy them using Crossplane/GitOps. An example is the AWS IPAM service. If we create the service using Terraform, we will need the pool or scope IDs later when creating a VPC using Crossplane.

The question is: how can Crossplane access the Terraform state (to get the IDs from there)?
Will your suggestion to point to an existing state file work in this scenario?

Will Crossplane also populate the existing state's outputs into Workspace.status.atProvider.outputs so we can reference them in other managed resources?

Sorry for the dumb question; we're still taking our first steps with Crossplane.

@bobh66
Collaborator

bobh66 commented Sep 25, 2023

@NikitaCloudRuntime if all you need is the data associated with the resources, and you don't need to manipulate the state of the resources, you have a couple of options to do that without terraform import:

  • In the Compositions where you need the "external" data, use a provider-terraform Workspace that references the resources via standard terraform data resources, publish the data you need as outputs, and then patch it from the Workspace status.atProvider.outputs to the environment or Composite metadata.annotations for patching into other resources (a sketch follows this list), or
  • Create a separate Composite that does the above and then creates an EnvironmentConfig with the related data; in your other Compositions, when you need that data, you can select the EnvironmentConfig into the environment and patch it from there
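
A minimal sketch of the first option, using the AWS IPAM example (the tag filter and names are placeholders):

```yaml
apiVersion: tf.upbound.io/v1beta1
kind: Workspace
metadata:
  name: ipam-lookup
spec:
  forProvider:
    source: Inline
    module: |
      # Read-only lookup of the externally managed IPAM pool
      # (the tag filter is a placeholder).
      data "aws_vpc_ipam_pool" "main" {
        filter {
          name   = "tag:Name"
          values = ["main-pool"]
        }
      }

      # Published outputs show up under status.atProvider.outputs and can be
      # patched from there into other resources or the environment.
      output "ipam_pool_id" {
        value = data.aws_vpc_ipam_pool.main.id
      }
  providerConfigRef:
    name: default
```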

It really depends on how often you need to use the data and in how many places.

Hope this helps

@ytsarev
Member

ytsarev commented Sep 25, 2023

@NikitaCloudRuntime, no worries at all, it is actually a pretty advanced question :)

There are a couple of examples around that you can use in addition to the great suggestions from @bobh66.

Cheers.

@hellofresh-tkaplonski

Hi @ytsarev

That might be a topic for a separate thread, but... what if we wanted to reverse the direction of information flow? The examples you provided demonstrate how we could get attributes of resources created with Terraform into Crossplane. But what if we wanted to access attributes of resources created by Crossplane from some other tool?

We are in the process of migration, so there will most likely be a transition phase when we need to use two IaC systems. In such a case, let's say we create a VPC with Crossplane and then want to create a security group inside it with Terraform. To do so we would need to somehow load the VPC ID into Terraform, ideally keeping this as close to a declarative approach as possible.

One way I could think of was to write all the interesting information about resources created by Crossplane into a ConfigMap and then read that map similarly to the way we would read remote TF state files... but I cannot find a clear way to simply extract arbitrary resource attributes into a ConfigMap (or Secret).

@NikitaCloudRuntime

Thanks a lot @ytsarev and @bobh66 for the information and examples :)
I didn't have a chance to test it out today but will do shortly :)

@ytsarev
Member

ytsarev commented Sep 27, 2023

@hellofresh-tkaplonski The first thing that comes to mind is to use some form of kubectl get managed to bulk-retrieve the Managed Resource identifiers.

@hellofresh-tkaplonski

@ytsarev Thank you for the prompt answer.

Getting resources with kubectl would be an option, but it seems dangerously close to imperative programming. Isn't there a way to export attributes to a ConfigMap so that we could "emulate" Terraform's outputs and read those dedicated values with the Terraform data source (https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/data-sources/config_map)?

Something like what AWS ACK offers: https://aws-controllers-k8s.github.io/community/docs/user-docs/field-export/ ?
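
What I have in mind on the Terraform side is roughly this (ConfigMap name, namespace, and keys are placeholders):

```hcl
# Read the ConfigMap published from Crossplane and feed one of its values
# into a Terraform-managed resource.
data "kubernetes_config_map" "crossplane_outputs" {
  metadata {
    name      = "crossplane-outputs"
    namespace = "infrastructure"
  }
}

resource "aws_security_group" "example" {
  name   = "example"
  vpc_id = data.kubernetes_config_map.crossplane_outputs.data["vpc_id"]
}
```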

@bobh66
Collaborator

bobh66 commented Sep 27, 2023

@hellofresh-tkaplonski you can do this today using a Composition with provider-kubernetes - create a provider-terraform Workspace that retrieves the data you need using data resources, publish the data as outputs, and then patch the outputs into a provider-kubernetes Object which deploys a ConfigMap

You could also use EnvironmentConfig to hold the data, and you can create that directly in the Composition without needing provider-kubernetes as a wrapper.
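
For the first approach, a minimal sketch of the Object wrapper (in a Composition the data values would be patched in from the Workspace's status.atProvider.outputs; names and values here are placeholders):

```yaml
apiVersion: kubernetes.crossplane.io/v1alpha1
kind: Object
metadata:
  name: crossplane-outputs
spec:
  forProvider:
    manifest:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: crossplane-outputs
        namespace: infrastructure
      data:
        # In a Composition this value would be patched in from the Workspace's
        # status.atProvider.outputs; a placeholder is shown here.
        vpc_id: vpc-0123456789abcdef0
  providerConfigRef:
    name: default
```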
