# GitHub Action: Get GKE Credentials
This action configures authentication to a GKE cluster via a kubeconfig file that can be used with `kubectl` or other methods of interacting with the cluster.

Authentication is performed by generating a short-lived token (default behavior) or via the GCP auth plugin in `kubectl`, which uses the service account keyfile path in `GOOGLE_APPLICATION_CREDENTIALS`.
This is not an officially supported Google product, and it is not covered by a Google Cloud support contract. To report bugs or request features in a Google Cloud product, please contact Google Cloud support.
## Prerequisites

This action requires:

-   Google Cloud credentials that are authorized to view a GKE cluster. See the Authorization section below for more information. You also need to create a GKE cluster.

-   This action runs using Node 20. If you are using self-hosted GitHub Actions runners, you must use a runner version that supports this version or newer.

-   If you plan to create binaries, containers, pull requests, or other releases, add the following to your `.gitignore` to prevent accidentally committing the `KUBECONFIG` to your release artifact:

    ```text
    # Ignore generated kubeconfig from google-github-actions/get-gke-credentials
    gha-kubeconfig-*
    ```
## Usage

```yaml
jobs:
  job_id:
    permissions:
      contents: 'read'
      id-token: 'write'

    steps:
      - id: 'auth'
        uses: 'google-github-actions/auth@v2'
        with:
          project_id: 'my-project'
          workload_identity_provider: 'projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider'

      - id: 'get-credentials'
        uses: 'google-github-actions/get-gke-credentials@v2'
        with:
          cluster_name: 'my-cluster'
          location: 'us-central1-a'

      # The KUBECONFIG env var is automatically exported and picked up by kubectl.
      - id: 'get-pods'
        run: 'kubectl get pods'
```
## Inputs

-   `cluster_name`: (Required) Name of the cluster for which to get credentials. This can be specified as a full resource name (`projects/<project>/locations/<location>/clusters/<cluster>`), in which case the `project_id` and `location` inputs are optional. If only specified as a name (`<cluster>`), then both the `project_id` and `location` may be required.

-   `location`: (Optional) Location (region or zone) in which the cluster resides. This value is required unless `cluster_name` is a full resource name.

-   `project_id`: (Optional) Project ID where the cluster is deployed. If provided, this will override the project configured by previous steps or environment variables. If not provided, the project will be inferred from the environment, best-effort.

-   `context_name`: (Optional) Name to use when creating the `kubectl` context. If not specified, the default value is `gke_<project>_<location>_<cluster>`.

-   `namespace`: (Optional) Name of the Kubernetes namespace to use within the context.

-   `use_auth_provider`: (Optional, default: `false`) Set this to true to use the Google Cloud auth plugin in `kubectl` instead of inserting a short-lived access token.

-   `use_internal_ip`: (Optional, default: `false`) Set this to true to use the internal IP address for the cluster endpoint. This is mostly used with private GKE clusters.

-   `use_connect_gateway`: (Optional, default: `false`) Set this to true to use the Connect Gateway endpoint to connect to the cluster.

-   `fleet_membership_name`: (Optional) Fleet membership name to use for generating the Connect Gateway endpoint, of the form `projects/<project>/locations/<location>/memberships/<membership>`. This only applies if `use_connect_gateway` is true. Defaults to auto discovery if empty.

-   `quota_project_id`: (Optional) Project ID from which to pull quota. The caller must have the `serviceusage.services.use` permission on the project. If unspecified, this defaults to the project of the authenticated principal. This is an advanced setting; most users should leave this blank.
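As a sketch, the full-resource-name form of `cluster_name` described above could look like the following, in which case `project_id` and `location` can be omitted (all names here are placeholders):

```yaml
steps:
  - id: 'get-credentials'
    uses: 'google-github-actions/get-gke-credentials@v2'
    with:
      # Full resource name; the project_id and location inputs are optional here.
      cluster_name: 'projects/my-project/locations/us-central1-a/clusters/my-cluster'
```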
## Outputs

In addition to setting the `$KUBECONFIG` environment variable, this GitHub Action produces the following outputs:

-   `kubeconfig_path`: Path on the local filesystem where the generated Kubernetes configuration file resides.
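If a later step needs the file path explicitly rather than relying on the exported `KUBECONFIG` variable, it could consume this output as in the following sketch (the `deploy` step id is illustrative, not part of the action):

```yaml
steps:
  - id: 'get-credentials'
    uses: 'google-github-actions/get-gke-credentials@v2'
    with:
      cluster_name: 'my-cluster'
      location: 'us-central1-a'

  # Hypothetical follow-up step: pass the generated kubeconfig explicitly.
  - id: 'deploy'
    run: |-
      kubectl --kubeconfig="${{ steps.get-credentials.outputs.kubeconfig_path }}" get pods
```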
## Authorization

There are a few ways to authenticate this action. A service account will be needed with at least the following roles:

-   Kubernetes Engine Cluster Viewer (`roles/container.clusterViewer`)

If you are using the Connect Gateway, you must also have:

-   GKE Hub Viewer (`roles/gkehub.viewer`)
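As a sketch, the roles above could be granted with `gcloud` (the project and service account names here are placeholders, and running this requires permission to modify the project's IAM policy):

```shell
# Grant the minimum role required by this action (placeholder names).
gcloud projects add-iam-policy-binding my-project \
  --member='serviceAccount:my-sa@my-project.iam.gserviceaccount.com' \
  --role='roles/container.clusterViewer'

# Additionally required when using the Connect Gateway.
gcloud projects add-iam-policy-binding my-project \
  --member='serviceAccount:my-sa@my-project.iam.gserviceaccount.com' \
  --role='roles/gkehub.viewer'
```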
### Via google-github-actions/auth

Use [google-github-actions/auth](https://github.com/google-github-actions/auth) to authenticate the action. It supports both the recommended Workload Identity Federation based authentication and the traditional Service Account Key JSON based authentication (by specifying the `credentials_json` input). See Usage for more details.
#### Authenticating via Workload Identity Federation

```yaml
jobs:
  job_id:
    permissions:
      contents: 'read'
      id-token: 'write'

    steps:
      - id: 'auth'
        uses: 'google-github-actions/auth@v2'
        with:
          project_id: 'my-project'
          workload_identity_provider: 'projects/123456789/locations/global/workloadIdentityPools/my-pool/providers/my-provider'

      - id: 'get-credentials'
        uses: 'google-github-actions/get-gke-credentials@v2'
        with:
          cluster_name: 'my-cluster'
          location: 'us-central1-a'
```
#### Authenticating via Service Account Key JSON

```yaml
jobs:
  job_id:
    steps:
      - id: 'auth'
        uses: 'google-github-actions/auth@v2'
        with:
          credentials_json: '${{ secrets.gcp_credentials }}'

      - id: 'get-credentials'
        uses: 'google-github-actions/get-gke-credentials@v2'
        with:
          cluster_name: 'my-cluster'
          location: 'us-central1-a'
```
### Via Application Default Credentials

If you are hosting your own runners, and those runners are on Google Cloud, you can leverage the Application Default Credentials of the instance. This will authenticate requests as the service account attached to the instance. This only works using a custom runner hosted on GCP.
```yaml
jobs:
  job_id:
    steps:
      - id: 'get-credentials'
        uses: 'google-github-actions/get-gke-credentials@v2'
        with:
          cluster_name: 'my-cluster'
          location: 'us-central1-a'
```
The action will automatically detect and use the Application Default Credentials.
## Connecting through the Connect Gateway

You can utilize the Connect Gateway feature of Fleets with this action to connect to clusters without direct network connectivity. This can be useful for connecting to private clusters from GitHub-hosted runners.
```yaml
jobs:
  job_id:
    steps:
      - id: 'get-credentials'
        uses: 'google-github-actions/get-gke-credentials@v2'
        with:
          cluster_name: 'my-private-cluster'
          location: 'us-central1-a'
          use_connect_gateway: 'true'
```
Follow the Connect gateway documentation for initial setup. Note: The Connect Agent service account must have the correct impersonation policy on the service account used to authenticate this action.
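If `fleet_membership_name` is left empty, the action auto-discovers the membership. To look the value up yourself, the `gcloud` CLI can list fleet memberships (the project name here is a placeholder, and running this requires an authenticated `gcloud` session):

```shell
# List fleet memberships to find a value for fleet_membership_name.
gcloud container fleet memberships list --project='my-project'
```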