Add terraform and checkov action
andifalk committed Oct 9, 2023
1 parent d6656f3 commit 75d5a5d
Showing 14 changed files with 334 additions and 4 deletions.
42 changes: 42 additions & 0 deletions .github/workflows/checkov.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,42 @@
name: checkov

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "main" branch
  push:
    branches: [ "main", "master" ]
  pull_request:
    branches: [ "main", "master" ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "scan"
  scan:
    permissions:
      contents: read # for actions/checkout to fetch code
      security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
      actions: read # only required for a private repository by github/codeql-action/upload-sarif to get the Action run status

    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so follow-up steps can access it
      - uses: actions/checkout@v3

      - name: Checkov GitHub Action
        uses: bridgecrewio/checkov-action@v12
        # Do not fail the job on findings; the SARIF upload below reports them instead
        continue-on-error: true
        with:
          # This will add both a CLI output to the console and create a results.sarif file
          output_format: cli,sarif
          output_file_path: console,results.sarif

      - name: Upload SARIF file
        uses: github/codeql-action/upload-sarif@v2
        with:
          sarif_file: results.sarif
65 changes: 65 additions & 0 deletions README.md
@@ -6,3 +6,68 @@
# Supply Chain Security

Demos for software supply chain security



## IaC: Provision a Kubernetes cluster

Next, we will provision a Kubernetes cluster on Google Cloud (GKE).

### Setup gcloud SDK

After you've installed the gcloud SDK, initialize it by running the following command.

`gcloud init`

This will authorize the SDK to access GCP using your user account credentials and add the SDK to your PATH. This step requires you to log in and select the project you want to work in.

Finally, add your account to the Application Default Credentials (ADC). This will allow Terraform to access these credentials to provision resources on GCloud.

`gcloud auth application-default login`

### Terraform

In the subfolder _iac_, you will find four files used to provision a VPC, a subnet, and a GKE cluster.

* __vpc.tf__ provisions a VPC and subnet. A new VPC is created for this tutorial so it doesn't impact your existing cloud environment and resources. This file also outputs the region.
* __gke.tf__ provisions a GKE cluster and a separately managed node pool (recommended). Separately managed node pools let you customize your Kubernetes cluster profile; this is useful if some Pods require more resources than others. You can learn more here. The number of nodes in the node pool is also defined here.
* __terraform.tfvars__ is a template for the project_id and region variables.
* __versions.tf__ sets the Terraform version to at least 0.14.

#### Update your terraform.tfvars file

Replace the values in your terraform.tfvars file with your project_id and region. Terraform will use these values to target your project when provisioning your resources. Your terraform.tfvars file should look like the following.

```
# terraform.tfvars
project_id = "REPLACE_ME"
region     = "us-central1"
```

You can find the project your gcloud CLI is configured to use with this command.

`gcloud config get-value project`

#### Initialize Terraform workspace

After you have saved your customized variables file, initialize your Terraform workspace, which will download the provider and initialize it with the values provided in your `terraform.tfvars` file.

`terraform init`

In your initialized directory, run `terraform apply` and review the planned actions. Your terminal output should indicate that the plan is running and which resources will be created.

You can see that this `terraform apply` will provision a VPC, subnet, GKE cluster, and a GKE node pool. Confirm the apply with a _yes_.

This process should take approximately 10 minutes. Upon successful application, your terminal prints the outputs defined in `vpc.tf` and `gke.tf`.
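The whole provisioning flow above can be condensed into a short shell session. This is a sketch, not part of the original walkthrough; it assumes you run it from the _iac_ subfolder and that the output names match those defined in `outputs.tf`:

```shell
cd iac

# Download the google provider and initialize the workspace
terraform init

# Review the planned actions, then confirm with "yes"
terraform apply

# After roughly 10 minutes, inspect the outputs defined in outputs.tf
terraform output kubernetes_cluster_name
terraform output region
```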

#### Configure kubectl

Now that you've provisioned your GKE cluster, you need to configure kubectl.

Run the following command to retrieve the access credentials for your cluster and automatically configure kubectl.

`gcloud container clusters get-credentials $(terraform output -raw kubernetes_cluster_name) --region $(terraform output -raw region)`
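To verify that kubectl now points at the new cluster, a quick sanity check (a suggested extra step, not part of the original walkthrough) is:

```shell
# Should print the GKE context that get-credentials just created
kubectl config current-context

# Should list the nodes of the separately managed node pool
kubectl get nodes
```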

#### Clean up your workspace

The provisioned cluster is not free of charge, so remember to destroy any resources you created once you are done with the demos. Run the destroy command and confirm with `yes` in your terminal.

`terraform destroy`
19 changes: 19 additions & 0 deletions deployment/deploy.yml
@@ -0,0 +1,19 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: supply-chain-security
  name: supply-chain-security
spec:
  replicas: 1
  selector:
    matchLabels:
      app: supply-chain-security
  template:
    metadata:
      labels:
        app: supply-chain-security
    spec:
      containers:
      - image: andifalk/supply-chain-security:latest
        name: supply-chain-security
22 changes: 22 additions & 0 deletions iac/.terraform.lock.hcl

85 changes: 85 additions & 0 deletions iac/gke.tf
@@ -0,0 +1,85 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

variable "gke_username" {
  default     = ""
  description = "gke username"
}

variable "gke_password" {
  default     = ""
  description = "gke password"
}

variable "gke_num_nodes" {
  default     = 2
  description = "number of gke nodes"
}

# GKE cluster
data "google_container_engine_versions" "gke_version" {
  location       = var.region
  version_prefix = "1.27."
}

resource "google_container_cluster" "primary" {

  # Checkov findings flagged on this resource by the GitHub Actions "scan" job
  # ("Check failure on line 25 in iac/gke.tf"):
  #   CKV_GCP_20: "Ensure master authorized networks is set to enabled in GKE clusters"
  #   CKV_GCP_23: "Ensure Kubernetes Cluster is created with Alias IP ranges enabled"
  #   CKV_GCP_65: "Manage Kubernetes RBAC users with Google Groups for GKE"
  #   CKV_GCP_64: "Ensure clusters are created with Private Nodes"
  #   CKV_GCP_69: "Ensure the GKE Metadata Server is Enabled"
  #   CKV_GCP_70: "Ensure the GKE Release Channel is set"
  #   CKV_GCP_13: "Ensure client certificate authentication to Kubernetes Engine Clusters is disabled"
  #   CKV_GCP_21: "Ensure Kubernetes Clusters are configured with Labels"
  #   CKV_GCP_12: "Ensure Network Policy is enabled on Kubernetes Engine Clusters"
  #   CKV_GCP_61: "Enable VPC Flow Logs and Intranode Visibility"
  name     = "${var.project_id}-gke"
  location = var.region

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name
}

# Separately Managed Node Pool
resource "google_container_node_pool" "primary_nodes" {
  name     = google_container_cluster.primary.name
  location = var.region
  cluster  = google_container_cluster.primary.name

  version    = data.google_container_engine_versions.gke_version.release_channel_latest_version["STABLE"]
  node_count = var.gke_num_nodes

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env = var.project_id
    }

    # preemptible  = true
    machine_type = "n1-standard-1"
    tags         = ["gke-node", "${var.project_id}-gke"]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}


# # Kubernetes provider
# # The Terraform Kubernetes Provider configuration below is used as a learning reference only.
# # It references the variables and resources provisioned in this file.
# # We recommend you put this in another file -- so you can have a more modular configuration.
# # https://learn.hashicorp.com/terraform/kubernetes/provision-gke-cluster#optional-configure-terraform-kubernetes-provider
# # To learn how to schedule deployments and services using the provider, go here: https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider.

# provider "kubernetes" {
#   load_config_file = "false"
#
#   host     = google_container_cluster.primary.endpoint
#   username = var.gke_username
#   password = var.gke_password
#
#   client_certificate     = google_container_cluster.primary.master_auth.0.client_certificate
#   client_key             = google_container_cluster.primary.master_auth.0.client_key
#   cluster_ca_certificate = google_container_cluster.primary.master_auth.0.cluster_ca_certificate
# }
22 changes: 22 additions & 0 deletions iac/outputs.tf
@@ -0,0 +1,22 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

output "region" {
  value       = var.region
  description = "GCloud Region"
}

output "project_id" {
  value       = var.project_id
  description = "GCloud Project ID"
}

output "kubernetes_cluster_name" {
  value       = google_container_cluster.primary.name
  description = "GKE Cluster Name"
}

output "kubernetes_cluster_host" {
  value       = google_container_cluster.primary.endpoint
  description = "GKE Cluster Host"
}
5 changes: 5 additions & 0 deletions iac/terraform.tfvars
@@ -0,0 +1,5 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

project_id = "pa-afa-kubernetes"
region = "europe-west3"
14 changes: 14 additions & 0 deletions iac/versions.tf
@@ -0,0 +1,14 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "4.74.0"
    }
  }

  required_version = ">= 0.14"
}

29 changes: 29 additions & 0 deletions iac/vpc.tf
@@ -0,0 +1,29 @@
# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

variable "project_id" {
  description = "project id"
}

variable "region" {
  description = "region"
}

provider "google" {
  project = var.project_id
  region  = var.region
}

# VPC
resource "google_compute_network" "vpc" {
  name                    = "${var.project_id}-vpc"
  auto_create_subnetworks = "false"
}

# Subnet
resource "google_compute_subnetwork" "subnet" {
  name          = "${var.project_id}-subnet"
  region        = var.region
  network       = google_compute_network.vpc.name
  ip_cidr_range = "10.10.0.0/24"
}
@@ -4,6 +4,7 @@
import org.springframework.context.annotation.Configuration;
import org.springframework.core.annotation.Order;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configurers.AbstractHttpConfigurer;
@@ -19,6 +20,7 @@

@Configuration
@EnableWebSecurity
@EnableMethodSecurity
public class WebSecurityConfiguration {

@Order(1)
@@ -53,7 +55,8 @@ public SecurityFilterChain publicAccess(HttpSecurity httpSecurity, HandlerMappin
@Bean
public UserDetailsService userDetailsService() {
InMemoryUserDetailsManager manager = new InMemoryUserDetailsManager();
manager.createUser(User.withDefaultPasswordEncoder().username("user").password("password").roles("USER").build());
manager.createUser(User.withDefaultPasswordEncoder().username("user").password("secret").roles("USER").build());
manager.createUser(User.withDefaultPasswordEncoder().username("admin").password("admin").roles("USER", "ADMIN").build());
return manager;
}
}
4 changes: 4 additions & 0 deletions src/main/java/com/example/app/data/Message.java
@@ -33,6 +33,10 @@ public UUID getIdentifier() {
    return identifier;
  }

  public void setIdentifier(UUID identifier) {
    this.identifier = identifier;
  }

  public String getMessage() {
    return message;
  }
4 changes: 2 additions & 2 deletions src/main/java/com/example/app/data/MessageRepository.java
@@ -9,6 +9,6 @@
@Repository
public interface MessageRepository extends ListCrudRepository<Message, Long> {

public Optional<Message> findOneMessageByIdentifier(UUID identifier);

Optional<Message> findOneMessageByIdentifier(UUID identifier);
int deleteMessageByIdentifier(UUID identifier);
}