Expected Behavior
I should not be spammed with errors about connecting to my Kubernetes cluster.
Actual Behavior
I get the following error message printed for most (all?) of my resources:
│ Error: Failed to configure client: exec plugin: invalid apiVersion "client.authentication.io/v1beta1"
│
│ with kubernetes_cluster_role_binding.datadog-agent-clusterrole-binding,
│ on cdk.tf.json line 2030, in resource.kubernetes_cluster_role_binding.datadog-agent-clusterrole-binding:
│ 2030: },
│
goldsky-infra-dev ╷
│ Error: Failed to configure client: exec plugin: invalid apiVersion "client.authentication.io/v1beta1"
│
│ with kubernetes_cluster_role_binding.datadog-agent-clusterrole-binding (datadog-agent-clusterrole-binding),
│ on cdk.tf.json line 2030, in resource.kubernetes_cluster_role_binding.datadog-agent-clusterrole-binding (datadog-agent-clusterrole-binding):
│ 2030: },
Steps to Reproduce
Not sure how to reliably reproduce this. I was on cdktf 0.12.3 and it worked flawlessly. I decided to upgrade to 0.16.3 and worked my way through all of the build-time issues from the import reorganization. But now, when I try to diff my stack, I get the above output for almost all of my resources.
I found this issue, but it's not clear exactly what I need to do to fix my situation.
Well, it turns out I wasn't paying enough attention. My Kubernetes provider was using API version client.authentication.io/v1beta1 when it should have been using client.authentication.k8s.io/v1beta1 (note the extra .k8s).
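For anyone hitting the same error, here is a minimal sketch of what a corrected provider configuration could look like in a cdktf TypeScript stack. It assumes the prebuilt @cdktf/provider-kubernetes bindings; the import path, the exact shape of the exec block (single object vs. list can differ between provider package versions), and the aws CLI arguments are assumptions and placeholders, not the reporter's actual code.

import { Construct } from "constructs";
import { TerraformStack } from "cdktf";
import { KubernetesProvider } from "@cdktf/provider-kubernetes/lib/provider";

class ExampleStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    new KubernetesProvider(this, "kubernetes", {
      host: "https://<cluster-endpoint>", // placeholder
      exec: {
        // The API group must include ".k8s":
        // client.authentication.k8s.io/v1beta1, not client.authentication.io/v1beta1.
        apiVersion: "client.authentication.k8s.io/v1beta1",
        command: "aws",
        args: ["eks", "get-token", "--cluster-name", "<cluster-name>"], // placeholders
      },
    });
  }
}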
I'm going to lock this issue because it has been closed for 30 days. This helps our maintainers find and focus on the active issues. If you've found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Versions
❯❯❯ npx cdktf debug
cdktf debug
language: typescript
cdktf-cli: 0.16.3
node: v17.9.1
cdktf: 0.16.3
constructs: 10.0.36
jsii: null
terraform: 1.3.5
arch: x64
os: linux 5.15.0-1036-aws
Providers
Gist
No response
Possible Solutions
This issue mentions this problem, but it's not clear how to fix it.
My kubeconfig uses the v1beta1 API version. My Kubernetes provider uses v1beta1. My aws CLI version is […]. I've tried updating my kubeconfig with […], but that still resulted in the same errors.
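On the kubeconfig side, the fix targets the exec stanza of the relevant user entry. The snippet below is a generic illustration rather than the reporter's actual kubeconfig; the user name, cluster name, and aws arguments are placeholders.

users:
  - name: my-eks-cluster            # placeholder
    user:
      exec:
        # must be client.authentication.k8s.io/v1beta1 (note the ".k8s")
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - my-eks-cluster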
Workarounds
No response
Anything Else?
No response
References
hashicorp/terraform-provider-helm#893 (comment)
Help Wanted
Community Note
Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request.
If you are interested in working on this issue or have submitted a pull request, please leave a comment.