Add EKS Istio Dashboards (#194)
* Update readme

* checking in new readme example

* new istio readme

* Updated mkdocs.yml

* checking in readme files

* checking in all changes

* Update main.tf

* Update outputs.tf

* Adding Istio dashboards

* Updating URLs

* Removing dashboards

* Updates from pre-commit

* Update istio.md

* Removing empty file

* Update README.md

* Istio Documentation Updates

Added istioctl prereq, updated dashboards image

* Updated setup and cleanup of Istio Bookinfo

Updated Istio Bookinfo sample app instructions, included clean up of Bookinfo resources

* Updating image and removing advanced configuration

-Switching image to GitHub generated link
-Removing advanced configuration section

---------

Co-authored-by: Prithvi Reddy <[email protected]>
3 people authored Jul 27, 2023
1 parent 7f60bd7 commit 1ceecb3
Showing 18 changed files with 904 additions and 0 deletions.
174 changes: 174 additions & 0 deletions docs/eks/istio.md
# Monitor Istio running on Amazon EKS

This example demonstrates how to use the AWS Observability Accelerator Terraform modules together with EKS Blueprints (with the Tetrate Istio add-on) and the EKS monitoring module to monitor Istio.

The current example deploys the [AWS Distro for OpenTelemetry Operator](https://docs.aws.amazon.com/eks/latest/userguide/opentelemetry.html)
for Amazon EKS with its requirements and makes use of an existing Amazon Managed Grafana workspace.
It creates a new Amazon Managed Service for Prometheus workspace unless provided with an existing one to reuse.

It uses the `EKS monitoring` [module](../../modules/eks-monitoring/)
to provide an existing EKS cluster with an OpenTelemetry collector,
curated Grafana dashboards, and Prometheus alerting and recording rules,
with multiple configuration options for Istio.

## Prerequisites

Ensure that you have the following tools installed locally:

1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html)
2. [kubectl](https://kubernetes.io/docs/tasks/tools/)
3. [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
4. [istioctl](https://istio.io/latest/docs/setup/getting-started/#download)

## Setup

This example uses a local Terraform state. If you need states to be saved remotely,
on Amazon S3 for example, visit the [Terraform remote state](https://www.terraform.io/language/state/remote) documentation.

### 1. Clone the repo using the command below

```sh
git clone https://github.com/aws-observability/terraform-aws-observability-accelerator.git
```

### 2. Initialize terraform

```console
cd examples/eks-istio
terraform init
```

### 3. Amazon EKS Cluster

To run this example, you need to provide your EKS cluster name.
If you don't have a cluster ready, visit [this example](https://aws-observability.github.io/terraform-aws-observability-accelerator/helpers/new-eks-cluster/)
first to create a new one.

Add your cluster name as `eks_cluster_id="..."` in `terraform.tfvars`, or use an environment variable: `export TF_VAR_eks_cluster_id=xxx`.
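If you keep values in a variable definition file, the entry is a one-liner; a minimal sketch (the cluster name below is a placeholder):

```hcl
# terraform.tfvars -- placeholder value; use your cluster's name
eks_cluster_id = "my-eks-cluster"
```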

### 4. Amazon Managed Grafana workspace

To run this example you need an Amazon Managed Grafana workspace. If you have
an existing workspace, create an environment variable
`export TF_VAR_managed_grafana_workspace_id=g-xxx`.

To create a new one, visit [this example](https://aws-observability.github.io/terraform-aws-observability-accelerator/helpers/managed-grafana/).

> In the URL `https://g-xyz.grafana-workspace.eu-central-1.amazonaws.com`, the workspace ID would be `g-xyz`.
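If you already have the workspace URL at hand, the ID can be pulled out with plain shell string manipulation; a small sketch (the URL below is a hypothetical example):

```shell
# Hypothetical workspace URL; substitute your own.
WORKSPACE_URL="https://g-abc123.grafana-workspace.eu-central-1.amazonaws.com"

# Strip the scheme, then keep everything before the first dot.
host="${WORKSPACE_URL#https://}"
export TF_VAR_managed_grafana_workspace_id="${host%%.*}"

echo "$TF_VAR_managed_grafana_workspace_id"   # g-abc123
```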

### 5. <a name="apikey"></a> Grafana API Key

Amazon Managed Grafana provides a control plane API for generating Grafana API keys. We will provide Terraform with a
short-lived API key to run the `apply` or `destroy` command.
Ensure you have the necessary IAM permissions (`CreateWorkspaceApiKey`, `DeleteWorkspaceApiKey`).

```sh
export TF_VAR_grafana_api_key=`aws grafana create-workspace-api-key --key-name "observability-accelerator-$(date +%s)" --key-role ADMIN --seconds-to-live 1200 --workspace-id $TF_VAR_managed_grafana_workspace_id --query key --output text`
```

## Deploy

Simply run this command to deploy (if using a variable definition file):

```sh
terraform apply -var-file=terraform.tfvars
```

or, if you have set up environment variables, run:

```sh
terraform apply
```

## Additional configuration

For the purpose of the example, we have provided default values for some of the variables.

1. AWS Region

Specify the AWS Region where the resources will be deployed. Edit the `terraform.tfvars` file and modify `aws_region="..."`. You can also use an environment variable: `export TF_VAR_aws_region=xxx`.


2. Amazon Managed Service for Prometheus workspace

If you have an existing workspace, add `managed_prometheus_workspace_id=ws-xxx`
or use an environment variable `export TF_VAR_managed_prometheus_workspace_id=ws-xxx`.

## Visualization

### 1. Grafana dashboards

Go to the Dashboards panel of your Grafana workspace. You will see a list of Istio dashboards under the `Observability Accelerator Dashboards` folder.

<img width="1208" alt="image" src="https://github.com/aws-observability/terraform-aws-observability-accelerator/assets/34757337/19b589b4-00f6-465d-a562-1da39e8b9b8c">

Open one of the Istio dashboards to view its visualizations.

<img width="1850" alt="image" src="https://user-images.githubusercontent.com/47993564/236842708-72225322-4f97-44cc-aac0-40a3356e50c6.jpeg">

### 2. Amazon Managed Service for Prometheus rules and alerts

Open the Amazon Managed Service for Prometheus console and view the details of your workspace. Under the `Rules management` tab, you will find new rules deployed.

<img width="1054" alt="image" src="https://user-images.githubusercontent.com/47993564/236844084-80c754e3-4fe1-45bb-8361-181432675469.jpeg">

!!! note
    To set up your alert receiver with Amazon SNS, follow [this documentation](https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-alertmanager-receiver.html).

## Deploy an example application to visualize metrics

In this section we will deploy Istio's Bookinfo sample application and extract metrics using the AWS OpenTelemetry collector. The Istio release you downloaded when configuring `istioctl` ships with sample applications; the deployment files for Bookinfo are in its `samples` folder. Additional details can be found in Istio's [Getting Started](https://istio.io/latest/docs/setup/getting-started/) documentation.

### 1. Deploy the Bookinfo Application

1. Using the AWS CLI, configure kubectl so you can connect to your EKS cluster. Update the command with your region and EKS cluster name:
```sh
aws eks update-kubeconfig --region <enter-your-region> --name <cluster-name>
```
2. Label the default namespace for automatic Istio sidecar injection
```sh
kubectl label namespace default istio-injection=enabled
```
3. Navigate to the Istio folder location. For example, if using Istio v1.18.2 in Downloads folder:
```sh
cd ~/Downloads/istio-1.18.2
```
4. Deploy the Bookinfo sample application
```sh
kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
```
5. Connect the Bookinfo application with the Istio gateway
```sh
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
```
6. Validate that there are no issues with the Istio configuration
```sh
istioctl analyze
```
7. Get the DNS name of the load balancer for the Istio gateway
```sh
GATEWAY_URL=$(kubectl get svc istio-ingressgateway -n istio-system -o=jsonpath='{.status.loadBalancer.ingress[0].hostname}')
```

### 2. Generate traffic for the Istio Bookinfo sample application

For the Bookinfo sample application, visit `http://$GATEWAY_URL/productpage` in your web browser. To see trace data, you must send requests to your service. The number of requests depends on Istio's sampling rate, which can be configured using the Telemetry API. With the default sampling rate of 1%, you need to send at least 100 requests before the first trace is visible. To send 100 requests to the `productpage` service, use the following command:
```sh
for i in $(seq 1 100); do curl -s -o /dev/null "http://$GATEWAY_URL/productpage"; done
```
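The request count above follows directly from the sampling rate: at a rate of r percent, you need roughly 100/r requests per sampled trace on average. A quick sketch of the arithmetic:

```shell
# Expected requests per sampled trace for a given sampling percentage.
SAMPLING_PERCENT=1                         # Istio's default sampling rate (1%)
REQUESTS_NEEDED=$(( 100 / SAMPLING_PERCENT ))
echo "Send at least ${REQUESTS_NEEDED} requests"   # Send at least 100 requests
```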

### 3. Explore the Istio dashboards

Log back into your Amazon Managed Grafana workspace and navigate to the dashboard side panel. Click on the `Observability Accelerator Dashboards` folder and open the `Istio Service` Dashboard. Use the Service dropdown menu to select the `reviews.default.svc.cluster.local` service. This gives details about metrics for the service, client workloads (workloads that are calling this service), and service workloads (workloads that are providing this service).

Explore the Istio Control Plane, Mesh, and Performance dashboards as well.

## Destroy

To teardown and remove the resources created in this example:

```sh
kubectl delete -f samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl delete -f samples/bookinfo/platform/kube/bookinfo.yaml
terraform destroy
```
57 changes: 57 additions & 0 deletions examples/eks-istio/README.md
# Existing Cluster with the AWS Observability Accelerator base module, Tetrate Istio Add-on and Istio monitoring

View the full documentation for this example [here](https://aws-observability.github.io/terraform-aws-observability-accelerator/eks/istio)

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements

| Name | Version |
|------|---------|
| <a name="requirement_terraform"></a> [terraform](#requirement\_terraform) | >= 1.1.0 |
| <a name="requirement_aws"></a> [aws](#requirement\_aws) | >= 4.0.0 |
| <a name="requirement_helm"></a> [helm](#requirement\_helm) | >= 2.4.1 |
| <a name="requirement_kubectl"></a> [kubectl](#requirement\_kubectl) | >= 1.14 |
| <a name="requirement_kubernetes"></a> [kubernetes](#requirement\_kubernetes) | >= 2.10 |

## Providers

| Name | Version |
|------|---------|
| <a name="provider_aws"></a> [aws](#provider\_aws) | >= 4.0.0 |

## Modules

| Name | Source | Version |
|------|--------|---------|
| <a name="module_aws_observability_accelerator"></a> [aws\_observability\_accelerator](#module\_aws\_observability\_accelerator) | ../../ | n/a |
| <a name="module_eks_blueprints_kubernetes_addons"></a> [eks\_blueprints\_kubernetes\_addons](#module\_eks\_blueprints\_kubernetes\_addons) | github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons | v4.32.0 |
| <a name="module_eks_monitoring"></a> [eks\_monitoring](#module\_eks\_monitoring) | ../../modules/eks-monitoring | n/a |

## Resources

| Name | Type |
|------|------|
| [aws_eks_cluster.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster) | data source |
| [aws_eks_cluster_auth.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/eks_cluster_auth) | data source |

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_aws_region"></a> [aws\_region](#input\_aws\_region) | AWS Region | `string` | n/a | yes |
| <a name="input_eks_cluster_id"></a> [eks\_cluster\_id](#input\_eks\_cluster\_id) | Name of the EKS cluster | `string` | `"eks-cluster-with-vpc"` | no |
| <a name="input_enable_dashboards"></a> [enable\_dashboards](#input\_enable\_dashboards) | Enables or disables curated dashboards. Dashboards are managed by the Grafana Operator | `bool` | `true` | no |
| <a name="input_grafana_api_key"></a> [grafana\_api\_key](#input\_grafana\_api\_key) | API key for authorizing the Grafana provider to make changes to Amazon Managed Grafana | `string` | n/a | yes |
| <a name="input_managed_grafana_workspace_id"></a> [managed\_grafana\_workspace\_id](#input\_managed\_grafana\_workspace\_id) | Amazon Managed Grafana Workspace ID | `string` | n/a | yes |
| <a name="input_managed_prometheus_workspace_id"></a> [managed\_prometheus\_workspace\_id](#input\_managed\_prometheus\_workspace\_id) | Amazon Managed Service for Prometheus Workspace ID | `string` | `""` | no |

## Outputs

| Name | Description |
|------|-------------|
| <a name="output_aws_region"></a> [aws\_region](#output\_aws\_region) | AWS Region |
| <a name="output_eks_cluster_id"></a> [eks\_cluster\_id](#output\_eks\_cluster\_id) | EKS Cluster Id |
| <a name="output_eks_cluster_version"></a> [eks\_cluster\_version](#output\_eks\_cluster\_version) | EKS Cluster version |
| <a name="output_managed_prometheus_workspace_endpoint"></a> [managed\_prometheus\_workspace\_endpoint](#output\_managed\_prometheus\_workspace\_endpoint) | Amazon Managed Prometheus workspace endpoint |
| <a name="output_managed_prometheus_workspace_id"></a> [managed\_prometheus\_workspace\_id](#output\_managed\_prometheus\_workspace\_id) | Amazon Managed Prometheus workspace ID |
<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
121 changes: 121 additions & 0 deletions examples/eks-istio/main.tf
provider "aws" {
region = local.region
}

data "aws_eks_cluster_auth" "this" {
name = var.eks_cluster_id
}

data "aws_eks_cluster" "this" {
name = var.eks_cluster_id
}

provider "kubernetes" {
host = local.eks_cluster_endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
kubernetes {
host = local.eks_cluster_endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.this.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.this.token
}
}

locals {
region = var.aws_region
eks_cluster_endpoint = data.aws_eks_cluster.this.endpoint
create_new_workspace = var.managed_prometheus_workspace_id == "" ? true : false
tags = {
Source = "github.com/aws-observability/terraform-aws-observability-accelerator"
}
}

# deploys the base module
module "aws_observability_accelerator" {
source = "../../"
# source = "github.com/aws-observability/terraform-aws-observability-accelerator?ref=v2.0.0"

aws_region = var.aws_region

# creates a new Amazon Managed Prometheus workspace, defaults to true
enable_managed_prometheus = local.create_new_workspace

# reusing existing Amazon Managed Prometheus if specified
managed_prometheus_workspace_id = var.managed_prometheus_workspace_id

# sets up the Amazon Managed Prometheus alert manager at the workspace level
enable_alertmanager = true

# reusing existing Amazon Managed Grafana workspace
managed_grafana_workspace_id = var.managed_grafana_workspace_id

tags = local.tags
}

module "eks_blueprints_kubernetes_addons" {
source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons?ref=v4.32.0"

eks_cluster_id = var.eks_cluster_id
#eks_cluster_endpoint = module.eks_blueprints.eks_cluster_endpoint
#eks_oidc_provider = module.eks_blueprints.oidc_provider
#eks_cluster_version = module.eks_blueprints.eks_cluster_version

# EKS Managed Add-ons
#enable_amazon_eks_vpc_cni = true
#enable_amazon_eks_coredns = true
#enable_amazon_eks_kube_proxy = true

# Add-ons
enable_metrics_server = true
enable_cluster_autoscaler = true

# Tetrate Istio Add-on
enable_tetrate_istio = true

tags = local.tags
}

module "eks_monitoring" {
source = "../../modules/eks-monitoring"
# source = "github.com/aws-observability/terraform-aws-observability-accelerator//modules/eks-monitoring?ref=v2.0.0"
enable_istio = true
eks_cluster_id = var.eks_cluster_id

# deploys AWS Distro for OpenTelemetry operator into the cluster
enable_amazon_eks_adot = true

# reusing existing certificate manager? defaults to true
enable_cert_manager = true

# deploys external-secrets in to the cluster
enable_external_secrets = true
grafana_api_key = var.grafana_api_key
target_secret_name = "grafana-admin-credentials"
target_secret_namespace = "grafana-operator"
grafana_url = module.aws_observability_accelerator.managed_grafana_workspace_endpoint

# control the publishing of dashboards by specifying the boolean value for the variable 'enable_dashboards', default is 'true'
enable_dashboards = var.enable_dashboards

managed_prometheus_workspace_id = module.aws_observability_accelerator.managed_prometheus_workspace_id

managed_prometheus_workspace_endpoint = module.aws_observability_accelerator.managed_prometheus_workspace_endpoint
managed_prometheus_workspace_region = module.aws_observability_accelerator.managed_prometheus_workspace_region

# optional, defaults to 60s interval and 15s timeout
prometheus_config = {
global_scrape_interval = "60s"
global_scrape_timeout = "15s"
}

enable_logs = true

tags = local.tags

depends_on = [
module.aws_observability_accelerator
]
}
24 changes: 24 additions & 0 deletions examples/eks-istio/outputs.tf
output "aws_region" {
description = "AWS Region"
value = module.aws_observability_accelerator.aws_region
}

output "managed_prometheus_workspace_endpoint" {
description = "Amazon Managed Prometheus workspace endpoint"
value = module.aws_observability_accelerator.managed_prometheus_workspace_endpoint
}

output "managed_prometheus_workspace_id" {
description = "Amazon Managed Prometheus workspace ID"
value = module.aws_observability_accelerator.managed_prometheus_workspace_id
}

output "eks_cluster_version" {
description = "EKS Cluster version"
value = module.eks_monitoring.eks_cluster_version
}

output "eks_cluster_id" {
description = "EKS Cluster Id"
value = module.eks_monitoring.eks_cluster_id
}