Shouldn't need to run update-kubeconfig manually #28
Yeah, I've had to run `aws eks update-kubeconfig --name <CLUSTER-NAME>` after every cluster's creation. I have a feeling Terraform doesn't want to change your local kubeconfig. There are options for the helm provider, though.
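A minimal sketch of one such option, assuming a `var.cluster_name` input variable and the AWS provider's EKS data sources; this authenticates the helm provider against the cluster API directly, so it never reads `~/.kube/config`:

```hcl
# Sketch: authenticate the helm provider against the EKS API directly.
# var.cluster_name is an assumed input variable.
data "aws_eks_cluster" "this" {
  name = var.cluster_name
}

data "aws_eks_cluster_auth" "this" {
  name = var.cluster_name
}

provider "helm" {
  kubernetes {
    host                   = data.aws_eks_cluster.this.endpoint
    cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}
```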
Yeah, might be useful to output a kubeconfig file locally, and then set the `KUBECONFIG` environment variable to point at it.
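A rough sketch of that approach, assuming the EKS module exposes a rendered kubeconfig output (the `module.eks.kubeconfig` name here is an assumption; older versions of the terraform-aws-eks module exposed something similar):

```hcl
# Hypothetical sketch: persist the rendered kubeconfig to disk so other
# tools can be pointed at it explicitly.
resource "local_file" "kubeconfig" {
  content  = module.eks.kubeconfig   # assumed module output
  filename = "${path.cwd}/kubeconfig_${var.cluster_name}"
}
```

Then `export KUBECONFIG="$PWD/kubeconfig_<CLUSTER-NAME>"` before invoking helm or kubectl.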
I think by default the helm provider looks for `~/.kube/config`.
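If the default lookup is the problem, the provider's `config_path` setting can point at an explicit file instead; a sketch, reusing the hypothetical kubeconfig file written above:

```hcl
# Sketch: point the helm provider at an explicit kubeconfig path instead of
# relying on the ~/.kube/config default.
provider "helm" {
  kubernetes {
    config_path = "${path.cwd}/kubeconfig_${var.cluster_name}"
  }
}
```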
We're wondering about maybe adding this to hubploy? (Maybe as an option which defaults to on?) How "expert" is the scenario where automatic kubeconfig is bad?
If you can find a way to set `KUBECONFIG` automatically, great; otherwise this seems worth documenting as an explicit manual step.
OK, sounds totally reasonable to me. We're in the process of documenting the end-to-end process internally, so this is easy to add to those docs explicitly. If we ever do make a comprehensive wrapper script to automate installing all the pieces end-to-end, we could also add this step to that script.
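A minimal sketch of what such a wrapper could look like (the script interface and argument handling are assumptions):

```bash
#!/usr/bin/env bash
# Hypothetical wrapper sketch: provision with Terraform, then refresh the
# local kubeconfig so subsequent helm/kubectl steps can reach the cluster.
set -euo pipefail

CLUSTER_NAME="${1:?usage: $0 <CLUSTER-NAME>}"

terraform apply
aws eks update-kubeconfig --name "$CLUSTER_NAME"
```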
I ran into this:
```
Error: stat /Users/jmiller/.kube/config: no such file or directory

  on autoscaler.tf line 63, in resource "helm_release" "cluster-autoscaler":
  63: resource "helm_release" "cluster-autoscaler" {
```
and @yuvipanda suggested I write an issue about it.
The manual work-around was:
```
aws eks update-kubeconfig --name <CLUSTER-NAME>
```