Config Connector is a Kubernetes add-on that allows you to manage GCP resources, such as Cloud Spanner or Cloud Storage, through your cluster's API.

With Config Connector, you can describe GCP resources declaratively using Kubernetes-style configuration. Config Connector creates any new GCP resources, updates any existing ones to the state specified by your configuration, and continuously keeps GCP in sync with it. The same resource model is the basis of Istio, Knative, Kubernetes, and the Google Cloud Services Platform.

As a result, developers can manage their whole application, including both its Kubernetes components and its GCP dependencies, with the same configuration and, more importantly, the same tooling. For example, the same customization or templating tool can be used to manage test vs. production versions of an application across both Kubernetes and GCP.
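As an illustration, a Cloud Storage bucket could be declared through the Kubernetes API like this (a minimal sketch; the bucket name is a placeholder, and the field shown assumes the `StorageBucket` schema shipped with Config Connector):

```yaml
# A Kubernetes-style declaration of a GCP Cloud Storage bucket.
# Applying this manifest with kubectl asks Config Connector to
# create the bucket (or reconcile an existing one) to match the spec.
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: example-bucket   # placeholder; bucket names must be globally unique
spec:
  uniformBucketLevelAccess: true
```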
This repository contains the full Config Connector source code, including controllers, CRDs, install bundles, and sample resource configurations.
See https://cloud.google.com/config-connector/docs/overview.
For simple starter examples, see the Resource reference and Cloud Foundation Toolkit Config Connector Solutions.
The environment setup scripts below support:

- Ubuntu (18.04/20.04)
- Debian (9/10/11)
- Create an Ubuntu 20.04 VM on Google Cloud.

- Open an SSH connection to the VM.

- Create a new directory for GoogleCloudPlatform open source projects if it does not exist:

  ```shell
  mkdir -p ~/go/src/github.com/GoogleCloudPlatform
  ```

- Update apt and install build-essential:

  ```shell
  sudo apt-get update
  sudo apt install build-essential
  ```

- Clone the source code:

  ```shell
  cd ~/go/src/github.com/GoogleCloudPlatform
  git clone https://github.com/GoogleCloudPlatform/k8s-config-connector
  ```

- Change to the environment-setup directory:

  ```shell
  cd ~/go/src/github.com/GoogleCloudPlatform/k8s-config-connector/scripts/environment-setup
  ```

- Set up sudoless Docker:

  ```shell
  ./docker-setup.sh
  ```

- Exit your current session, then SSH back into the VM. Then run the following to ensure you have set up sudoless Docker correctly:

  ```shell
  docker run hello-world
  ```

- Install Golang:

  ```shell
  cd ~/go/src/github.com/GoogleCloudPlatform/k8s-config-connector/scripts/environment-setup
  ./golang-setup.sh
  source ~/.profile
  ```

- Install other build dependencies:

  ```shell
  ./repo-setup.sh
  source ~/.profile
  ```

- Set up a GKE cluster for testing purposes:

  ```shell
  ./gcp-setup.sh
  ```

  NOTE: `gcp-setup.sh` assumes the VM you are running it from is in a GCP project which does not already have a GKE cluster with Config Connector set up.
If you prefer to set up your development environment manually instead of using the scripts above, you will need to:

- Install all required dependencies.
- Add all required dependencies to your `$PATH`.
- Set up a `GOPATH`.
- Add `$GOPATH/bin` to your `$PATH`.
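The `GOPATH` portion of that setup can be sketched as follows (an illustrative snippet, assuming `~/go` as the workspace root, which matches the paths used elsewhere in this guide):

```shell
# Use ~/go as the Go workspace and make installed binaries runnable.
export GOPATH="$HOME/go"
export PATH="$PATH:$GOPATH/bin"

# Create the directory layout the build steps below expect.
mkdir -p "$GOPATH/src/github.com/GoogleCloudPlatform" "$GOPATH/bin"
```

Adding the two `export` lines to `~/.profile` makes them persist across sessions.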
- Clone the repository:

  ```shell
  cd $GOPATH/src/github.com/GoogleCloudPlatform
  git clone https://github.com/GoogleCloudPlatform/k8s-config-connector
  ```

- Enter the source code directory:

  ```shell
  cd $GOPATH/src/github.com/GoogleCloudPlatform/k8s-config-connector
  ```

- Build the controller:

  ```shell
  make manager
  ```

- Build the CRDs:

  ```shell
  make manifests
  ```

- Build the config-connector CLI tool:

  ```shell
  make config-connector
  ```
- Enable Artifact Registry for your project:

  ```shell
  gcloud services enable artifactregistry.googleapis.com
  ```

- Create a Docker repository. You may need to wait ~10-15 minutes to let your cluster get set up after running `make deploy`:

  ```shell
  cd $GOPATH/src/github.com/GoogleCloudPlatform/k8s-config-connector
  kubectl apply -f config/samples/resources/artifactregistryrepository/artifactregistry_v1beta1_artifactregistryrepository.yaml
  ```

- Wait a few minutes, then make sure your repository exists in GCP:

  ```shell
  gcloud artifacts repositories list
  ```

  If you see a repository, then your cluster is properly functioning and actuating K8s resources onto GCP.
At this point, your cluster is running a CNRM Controller Manager image built on your system. Let's make a code change to verify that you are ready to start development.
- Edit `$GOPATH/src/github.com/GoogleCloudPlatform/k8s-config-connector/cmd/manager/main.go`. Insert the `log.Printf(...)` statement below on the first line of the `main()` function:

  ```go
  package main

  func main() {
      log.Printf("I have finished the getting started guide.")
      ...
  }
  ```

- Build and deploy your change, forcing a pull of the container image:

  ```shell
  make deploy-controller && kubectl delete pods --namespace cnrm-system --all
  ```

- Verify your new log statement is on the first line of the logs for the CNRM Controller Manager pod:

  ```shell
  kubectl --namespace cnrm-system logs cnrm-controller-manager-0
  ```
Please refer to our contribution guide for more details.