This serverless application consists of the following:
- Amazon API Gateway that listens for Prometheus remote read and write requests.
- AWS Lambda function that stores the received Prometheus metrics in Amazon Timestream.
This application is meant to be used as a getting-started guide and does not configure TLS encryption between Prometheus and the API Gateway, so it is not recommended for direct production use. To enable TLS encryption for production, see Configuring mutual TLS authentication for an HTTP API.
- Create the Timestream database: `aws timestream-write create-database --database-name <PrometheusDatabase>`
- Create the Timestream table: `aws timestream-write create-table --database-name <PrometheusDatabase> --table-name <PrometheusMetricsTable>`
- Download Prometheus (or reuse your existing Prometheus instance).
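If you want to confirm the prerequisites are in place before deploying, a quick check with the AWS CLI (shown here as a sketch, assuming the default database and table names used throughout this guide) might look like this:

```shell
# Confirm the Timestream database and table exist before deploying the connector.
# Replace the names if you chose values other than the defaults used in this guide.
aws timestream-write describe-database --database-name PrometheusDatabase
aws timestream-write describe-table --database-name PrometheusDatabase --table-name PrometheusMetricsTable
```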
NOTE: The user deploying this application must have administrative privileges due to the number of permissions required for deployment. For a detailed list of permissions see Required Permissions.
To start using the Prometheus remote storage connector for Timestream, there are multiple steps involved:
- Deployment — deploy the endpoints through Amazon API Gateway and deploy the Prometheus Connector on AWS Lambda by using either one-click deployment or AWS CLI.
- Configure Prometheus — configure the remote storage endpoints for Prometheus.
- Invoke AWS Lambda Function — update the permissions for the users.
One-click deployment will deploy the connector as a Lambda function along with an API Gateway. The API Gateway will use a public endpoint with TLS 1.2 encryption for requests. For more information on the API Gateway's public endpoints, see the Amazon API Gateway Public Endpoints section below.
Use an AWS CloudFormation template to create the stack:
When deploying your connector using one-click deployment, the following parameters are available to adjust prior to deployment:
- APIGatewayStageName
- DefaultDatabase
- DefaultTable
- ExecutionPolicyName
- LogLevel
- MemorySize
- ReadThrottlingBurstLimit
- TimeoutInMillis
- WriteThrottlingBurstLimit
The `DefaultDatabase`, `DefaultTable`, and `LogLevel` parameters may be altered to fit your needs; all other parameters will not require altering for a standard deployment. `DefaultDatabase` and `DefaultTable` determine the ingestion destination, and `LogLevel` can be set to `info`, `debug`, `warn`, or `error`.
To install the Timestream Prometheus Connector service, launch the AWS CloudFormation stack on the AWS CloudFormation console by choosing one of the "Launch Stack" buttons in the following table:
Region | View | View in Designer | Launch |
---|---|---|---|
US East (N. Virginia) us-east-1 | View | View in Designer | Launch |
US East (Ohio) us-east-2 | View | View in Designer | Launch |
US West (Oregon) us-west-2 | View | View in Designer | Launch |
Asia Pacific (Sydney) ap-southeast-2 | View | View in Designer | Launch |
Asia Pacific (Tokyo) ap-northeast-1 | View | View in Designer | Launch |
Europe (Frankfurt) eu-central-1 | View | View in Designer | Launch |
Europe (Ireland) eu-west-1 | View | View in Designer | Launch |
Note: Attempting to use one of the above "Launch" links to create an already existing stack will fail. To update an existing stack, such as the default `PrometheusTimestreamConnector` stack, via the AWS Console, go to the stacks page at `https://<region>.console.aws.amazon.com/cloudformation/home`, select the stack you want to update from the list, then click "Update" to proceed through the update process.
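If you are unsure whether the stack already exists in your target region, one way to check from the AWS CLI (a sketch, assuming the default stack name) is:

```shell
# Returns the stack description if it exists; fails with a "does not exist" error otherwise.
aws cloudformation describe-stacks \
  --stack-name PrometheusTimestreamConnector \
  --region <region>
```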
The steps to deploy a template are as follows:

- Download the latest linux amd64 release `.zip` archive from the Releases page and place it in the `serverless` directory.

- From the `serverless` directory, run the following command to deploy the template:

  ```shell
  sam deploy
  ```

  To run this command from a different directory, add the `-t <path to serverless/template.yml>` argument to specify the template.

  In addition to creating a new stack with the default name `PrometheusTimestreamConnector`, this command creates a default stack called `aws-sam-cli-managed-default`. The default stack manages an S3 bucket hosting all the deployment artifacts:

  - the SAM template.

  To use a specific S3 bucket, run:

  ```shell
  sam deploy --s3-bucket <custom-bucket>
  ```

  To override default parameter values, use the `--parameter-overrides` argument and provide a string with the format `ParameterKey=ParameterValue`. For example:

  ```shell
  sam deploy --parameter-overrides "TimeoutInMillis=60000 DefaultDatabase=<CustomDatabase>"
  ```

  You can view the full set of parameters defined for `serverless/template.yml` below, in AWS Lambda Configuration Options.

  To override default values for parameters and interactively proceed through stack deployment, run:

  ```shell
  sam deploy --guided
  ```

  To deploy to a specific region:

  ```shell
  sam deploy --region <region>
  ```

  To view the full set of `sam deploy` options, see the sam deploy documentation.
- The deployment will have the following outputs upon completion:

  - InvokeReadURL: The remote read URL for Prometheus.
  - InvokeWriteURL: The remote write URL for Prometheus.
  - DefaultDatabase: The database destination for queries and ingestion.
  - DefaultTable: The database table destination for queries and ingestion.

  An example of the output:

  ```
  CloudFormation outputs from deployed stack
  ---------------------------------------------------------------------------------------------------
  Outputs
  ---------------------------------------------------------------------------------------------------
  Key           InvokeReadURL
  Description   Remote read URL for Prometheus
  Value         https://api-id.execute-api.region.amazonaws.com/prod/read

  Key           InvokeWriteURL
  Description   Remote write URL for Prometheus
  Value         https://api-id.execute-api.region.amazonaws.com/prod/write

  Key           DefaultDatabase
  Description   The Prometheus label containing the database name
  Value         PrometheusDatabase

  Key           DefaultTable
  Description   The Prometheus label containing the table name
  Value         PrometheusMetricsTable
  ---------------------------------------------------------------------------------------------------
  ```

  To view all the stack information, see Viewing AWS CloudFormation stack data and resources on the AWS Management Console.
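If you need these output values again later (for example, when editing `prometheus.yml`), one way to retrieve them without opening the console is a sketch like the following, assuming the default stack name:

```shell
# Print the stack outputs (InvokeReadURL, InvokeWriteURL, DefaultDatabase, DefaultTable).
aws cloudformation describe-stacks \
  --stack-name PrometheusTimestreamConnector \
  --region <region> \
  --query "Stacks[0].Outputs" \
  --output table
```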
- Open the configuration file for Prometheus; the file is usually named `prometheus.yml`.

- Replace the `InvokeWriteURL` and `InvokeReadURL` with the API Gateway URLs from deployment, and provide the appropriate IAM credentials in `basic_auth` before adding the following sections to the configuration file:

  NOTE: All configuration options are case-sensitive, and the `session_token` authentication parameter is not supported for MFA-authenticated AWS users.

  ```yaml
  scrape_configs:
    - job_name: 'prometheus'
      scrape_interval: 15s
      static_configs:
        - targets: ['localhost:9090']

  remote_write:
    # Update the value to the InvokeWriteURL returned when deploying the stack.
    - url: "InvokeWriteURL"
      queue_config:
        max_samples_per_send: 100
      # Update the username and password to a valid IAM access key and secret access key.
      basic_auth:
        username: accessKey
        password_file: passwordFile

  remote_read:
    # Update the value to the InvokeReadURL returned when deploying the stack.
    - url: "InvokeReadURL"
      # Update the username and password to a valid IAM access key and secret access key.
      basic_auth:
        username: accessKey
        password_file: passwordFile
  ```
The `password_file` path must be the absolute path of the file, and the password file must contain only the value for the `aws_secret_access_key`.

The `url` values for `remote_read` and `remote_write` are outputs from the CloudFormation deployment. See the following example of a remote write URL:

```yaml
url: "https://foo9l30.execute-api.us-east-1.amazonaws.com/dev/write"
```
- Ensure the user invoking the AWS Lambda function has read and write permissions to Amazon Timestream. For more details, see Execution Permissions.
- Start Prometheus. Since the remote storage options for Prometheus have been configured, Prometheus will start ingesting to Timestream through the API Gateway endpoints.
Follow the verification steps in README.md#verification.
Option | Description | Default Value |
---|---|---|
DefaultDatabase | The Prometheus label containing the database name. | PrometheusDatabase |
DefaultTable | The Prometheus label containing the table name. | PrometheusMetricsTable |
MemorySize | The memory size of the AWS Lambda function. | 512 |
TimeoutInMillis | The amount of time in milliseconds to run the connector on AWS Lambda before timing out. | 30000 |
ReadThrottlingBurstLimit | The number of burst read requests per second that API Gateway permits. | 1200 |
WriteThrottlingBurstLimit | The number of burst write requests per second that API Gateway permits. | 1200 |
Option | Description | Default Value |
---|---|---|
ExecutionPolicyName | The name of the execution policy created for AWS Lambda. | LambdaExecutionPolicy |
Option | Description | Default Value |
---|---|---|
APIGatewayStageName | The stage name of the API Gateway. Stage names can contain only alphanumeric characters, hyphens, and underscores. | dev |
The default stage name `dev` may indicate the endpoint is at the development stage. If the application is ready for production, set the stage name to a more appropriate value, such as `prod`, when deploying the stack.
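For example, when deploying with the SAM CLI, the stage name can be overridden like any other template parameter (a sketch; the value `prod` is only an illustration):

```shell
sam deploy --parameter-overrides "APIGatewayStageName=prod"
```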
When deployed with one-click deployment or the `serverless/template.yml` CloudFormation template, an API Gateway will be created with public endpoints.

The public endpoints are:

- Write: `https://<API Gateway ID>.execute-api.<region>.amazonaws.com/dev/write`
- Read: `https://<API Gateway ID>.execute-api.<region>.amazonaws.com/dev/read`
The public endpoints use a minimum of TLS 1.2 encryption in transit for all requests, as all API Gateway endpoints do by default.
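If you want to confirm the negotiated TLS version for your endpoint yourself, one quick check is curl's verbose output (a sketch; the HTTP response itself may be an error without valid credentials, which is fine for this check):

```shell
# Inspect the TLS handshake for the write endpoint; look for "SSL connection using TLSv1.2"
# (or newer) in the verbose output. Replace the placeholders with your deployment values.
curl -sv -o /dev/null "https://<API Gateway ID>.execute-api.<region>.amazonaws.com/dev/write"
```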
The template assumes the user deploying the project has all the required permissions. If the user is missing any of the required permissions, the deployment will fail.

See the Troubleshooting section for more details.

The user deploying this project must have the permissions listed below. Ensure the values of `account-id` and `region` in the resources section are updated before using this template directly.
Note - All permissions have limited resources except actions that cannot be limited to a specific resource. API Gateway actions cannot limit resources because the resource names are auto-generated by the template. See the following documentation for CloudFormation, SNS, and IAM limitations on actions: cloudformation sns iam
NOTE - This policy is too long to be added inline during user creation, and must be created as a policy and attached to the user instead.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"cloudformation:ListStacks",
"cloudformation:GetTemplateSummary",
"iam:ListRoles",
"sns:ListTopics",
"apigateway:GET",
"apigateway:POST",
"apigateway:PUT",
"apigateway:TagResource"
],
"Resource": "*"
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": [
"iam:GetRole",
"iam:CreateRole",
"iam:AttachRolePolicy",
"iam:PutRolePolicy",
"iam:CreatePolicy",
"iam:PassRole",
"iam:GetRolePolicy"
],
"Resource": "arn:aws:iam::<account-id>:role/PrometheusTimestreamConnector-IAMLambdaRole-*"
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": [
"cloudformation:CreateChangeSet",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:DescribeChangeSet",
"cloudformation:ExecuteChangeSet",
"cloudformation:GetTemplate",
"cloudformation:CreateStack",
"cloudformation:GetStackPolicy"
],
"Resource": [
"arn:aws:cloudformation:<region>:<account-id>:stack/PrometheusTimestreamConnector/*",
"arn:aws:cloudformation:<region>:<account-id>:stack/aws-sam-cli-managed-default/*",
"arn:aws:cloudformation:<region>:aws:transform/Serverless-2016-10-31"
]
},
{
"Sid": "VisualEditor3",
"Effect": "Allow",
"Action": [
"lambda:ListFunctions",
"lambda:AddPermission",
"lambda:CreateFunction",
"lambda:TagResource",
"lambda:GetFunction"
],
"Resource": "arn:aws:lambda:<region>:<account-id>:function:PrometheusTimestreamConnector-LambdaFunction-*"
},
{
"Sid": "VisualEditor4",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetBucketPolicy",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::timestreamassets-<region>/timestream-prometheus-connector-linux-amd64-*.zip"
},
{
"Sid": "VisualEditor5",
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:GetBucketPolicy",
"s3:GetBucketLocation",
"s3:PutObject",
"s3:PutBucketPolicy",
"s3:PutBucketTagging",
"s3:PutEncryptionConfiguration",
"s3:PutBucketVersioning",
"s3:PutBucketPublicAccessBlock",
"s3:CreateBucket",
"s3:DescribeJob",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::aws-sam-cli-managed-default*"
},
{
"Sid": "VisualEditor6",
"Effect": "Allow",
"Action": [
"cloudformation:GetTemplateSummary"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"cloudformation:TemplateUrl": [
"https://timestreamassets-<region>.s3.amazonaws.com/template.yml"
]
}
}
}
]
}
```
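If you prefer the CLI to the console steps described later, a sketch for creating this policy (the file name `deployment-policy.json` is only an example; save the JSON above to it and update the placeholders first) is:

```shell
# Create the deployment policy from the JSON document above.
aws iam create-policy \
  --policy-name TimestreamPrometheusDeploymentPolicy \
  --policy-document file://deployment-policy.json
```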
The user executing this project must have the permissions listed below. Ensure the values of `account-id` and `region` in the resource section are updated before using this template directly. If the names of the database and table differ from the policy resource, be sure to update their values.

Note - The `timestream:DescribeEndpoints` resource must be `*` as specified under security_iam_service-with-iam.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"timestream:WriteRecords",
"timestream:Select"
],
"Resource": "arn:aws:timestream:<region>:<account-id>:database/PrometheusDatabase/table/PrometheusMetricsTable"
},
{
"Effect": "Allow",
"Action": [
"timestream:DescribeEndpoints"
],
"Resource": "*"
}
]
}
```
- Open the AWS Management Console for AWS IAM.
- Click `Policies`.
- Click `Create policy`.
- Click `JSON`.
- Remove the default policy and paste the Deployment policy into the Policy Editor.
- Update the values of `<account-id>` and `<region>` for your AWS account.
- Click `Next`.
- Enter `TimestreamPrometheusDeploymentPolicy` in the `Policy name` dialogue box.
- Click `Create policy`.
- Open the AWS Management Console for AWS IAM.
- Click `Policies`.
- Click `Create policy`.
- Click `JSON`.
- Remove the default policy and paste the Execution policy into the Policy Editor.
- Update the values of `<account-id>` and `<region>` for your AWS account.
- Click `Next`.
- Enter `TimestreamPrometheusExecutionPolicy` in the `Policy name` dialogue box.
- Click `Create policy`.
- Open the AWS Management Console for AWS IAM.
- Click `Users`.
- Click `Create User`.
- Enter `TimestreamPrometheusDeployment` in the `User name` dialogue box.
- Click `Next`.
- Click `Attach policies directly`.
- Search for the policy `TimestreamPrometheusDeploymentPolicy` and select the box next to the policy.
- Click `Next`.
- Click `Create user`.
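The same result can be achieved from the CLI; a sketch (assuming the deployment policy created above and your own account ID) is:

```shell
# Create the deployment user and attach the deployment policy to it.
aws iam create-user --user-name TimestreamPrometheusDeployment
aws iam attach-user-policy \
  --user-name TimestreamPrometheusDeployment \
  --policy-arn arn:aws:iam::<account-id>:policy/TimestreamPrometheusDeploymentPolicy
```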
Note: This portion is only needed if the Prometheus Connector is deployed using one-click deployment.
- Open the AWS Management Console for AWS IAM.
- Click `Users`.
- Search for `TimestreamPrometheusDeployment` and select the user.
- Click `Security credentials`.
- Click `Enable console access`.
- Click `Enable` and `Apply`.
- Save the password to log in as the user when deploying using the one-click deployment method.
Note: This portion is only needed if the Prometheus Connector is deployed using the AWS SAM CLI.
- Open the AWS Management Console for AWS IAM.
- Click `Users`.
- Search for `TimestreamPrometheusDeployment` and select the user.
- Click `Create access key` in the Summary box.
- Click `Application running outside AWS`.
- Click `Next`.
- Click `Create access key`.
Store the `Access key` and `Secret access key` in your `~/.aws/credentials` file with the following format:

```
[default]
aws_access_key_id = <access key>
aws_secret_access_key = <Secret Access Key>
```
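To confirm the credentials are picked up correctly before deploying, one quick check is:

```shell
# Should return the account ID and the ARN of the TimestreamPrometheusDeployment user.
aws sts get-caller-identity
```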
- Open the AWS Management Console for AWS IAM.
- Click `Users`.
- Click `Create User`.
- Enter `TimestreamPrometheusExecution` in the `User name` dialogue box.
- Click `Next`.
- Click `Attach policies directly`.
- Search for the policy `TimestreamPrometheusExecutionPolicy` and select the box next to the policy.
- Click `Next`.
- Click `Create user`.
- Open the AWS Management Console for AWS IAM.
- Click `Users`.
- Search for `TimestreamPrometheusExecution` and select the user.
- Click `Create access key` in the Summary box.
- Click `Application running outside AWS`.
- Click `Next`.
- Click `Create access key`.
Store the `Access key` and `Secret access key` for later to configure Prometheus for execution.
Running the Prometheus Connector on AWS Lambda allows for a serverless workflow. This section details the IAM permissions created by the template to integrate the Prometheus Connector with Amazon API Gateway and AWS Lambda.
The `LambdaExecutionPolicy` created by the template allows the Lambda function to output logs to CloudWatch. See README#IAM Role and Policy Configuration for the JSON policy.

The `TimestreamLambdaRole` is the role used by the template to permit AWS Lambda and API Gateway deployment. See README#IAM Role and Policy Configuration for the JSON role used.
Following the above steps, you should be able to ingest and query your Prometheus data in Timestream. Ensure all items in the following list can be verified to confirm the guide has been completed correctly:

- The `PrometheusMetricsTable` table in the `PrometheusDatabase` database is empty.
- The AWS CLI is configured with the region you wish to deploy the connector to.
- The user access key ID is set.
- The user secret access key is set.

Before running Prometheus, the result of the following AWS CLI command should show a `ScalarValue` of `0` within `Data`, if you have been following this document step-by-step:
```shell
aws timestream-query query --query-string "SELECT count() FROM PrometheusDatabase.PrometheusMetricsTable"
```
Next, start Prometheus:

```shell
./prometheus
```

On macOS, the first run of Prometheus may fail because the developer cannot be verified. To continue, you must grant Prometheus execution in System Settings -> Privacy & Security -> Security -> "prometheus" was blocked from use because it is not from an identified developer. -> `Allow Anyway`.
After successfully starting Prometheus and seeing no errors reported by Prometheus, run the following command again:

```shell
aws timestream-query query --query-string "SELECT count() FROM PrometheusDatabase.PrometheusMetricsTable"
```

You should now see a non-zero value within `Data`, which verifies that the Prometheus instance can ingest data into `PrometheusMetricsTable`.
Next, to verify that Prometheus can make a successful read request, add data to your table by running the following command, replacing `<current-time-in-seconds>` appropriately:

```shell
aws timestream-write write-records --database-name PrometheusDatabase --table-name PrometheusMetricsTable --records '[{"Dimensions":[{"DimensionValueType": "VARCHAR", "Name": "job","Value": "prometheus"},{"DimensionValueType": "VARCHAR","Name": "instance","Value": "localhost:9090"}],"MeasureName":"prometheus_temperature","MeasureValue":"98.76","TimeUnit":"SECONDS","Time":"<current-time-in-seconds>"}]'
```
Open the Prometheus web interface (default `localhost:9090`) and run the following in the expression bar:

```
prometheus_temperature[15d]
```

Verify that data is displayed.

Note: The time range (`15d`) must be large enough to trigger a Prometheus read from the external endpoint. If the time range is too small, only local data will be read.
- Delete the CloudFormation stack and S3 artifacts:

  ```shell
  sam delete --stack-name PrometheusTimestreamConnector --region <region>
  ```

- Delete the table:

  ```shell
  aws timestream-write delete-table --database-name <PrometheusDatabase> --table-name <PrometheusMetricsTable> --region <region>
  ```

- Delete the database:

  ```shell
  aws timestream-write delete-database --database-name <PrometheusDatabase> --region <region>
  ```
NOTE: Cleaning up resources requires additional IAM permissions beyond the base permissions required for deployment under Deployment Permissions.
Required Permissions:
- "apigateway:DELETE"
- "s3:DeleteBucket"
- "s3:DeleteObjectVersion"
- "s3:DeleteObject"
- "s3:DeleteBucketPolicy"
- "iam:DeleteUserPolicy"
- "iam:DeletePolicy"
- "iam:DeleteRole"
- "iam:DetachRolePolicy"
- "iam:DeleteRolePolicy"
- "cloudformation:DeleteStack"
- "lambda:RemovePermission"
- "lambda:DeleteFunction"
If the following error occurred while running `sam deploy`:

```
Error: Security Constraints Not Satisfied!
```

Ensure the following:

- When executing `sam deploy`, enter `y` for the following question instead of the default `N`:

  ```
  LambdaFunction may not have authorization defined, Is this okay? [y/N]: y
  ```

  The stack will now be created with the following warning: `LambdaFunction may not have authorization defined.`
This behaviour occurs because the API Gateway triggers configured for the AWS Lambda function do not have authorization defined. This is fine because authorization is done in the Prometheus Connector instead of the API Gateway.
An error occurred during deployment due to invalid permissions.
Do the following:
- Ensure the user deploying the project has administrative privileges and all required deployment permissions.
- See all the required permissions in Deployment Permissions.
- Redeploy the project.
The deployment fails due to existing resources.
If this error occurred after a failed deployment:
- Open the AWS Management Console for CloudFormation and delete the failed stack.
- Redeploy on AWS console.
If this error occurred in a new deployment:
- Rename the conflict resource name to something else.
- Redeploy on AWS console.
See the list below for parameters whose values may result in resource conflicts:
- ExecutionPolicyName
If the Lambda `TimeoutInMillis` parameter is too small or a PromQL query exceeds the `TimeoutInMillis` value, an error such as the following could be returned:

```
remote_read: remote server https://api-id.execute-api.region.amazonaws.com/dev/read returned HTTP status 404 Not Found: {"message":"Not Found"}
```

If you encounter this error, first try overriding the default value for `TimeoutInMillis` (30 seconds) with a greater value using the `--parameter-overrides` option for `sam deploy`.
This SAM template does not enable TLS encryption by default between Prometheus and the Prometheus Connector.
During development, ensure the following:
- Regularly rotate IAM user access keys, see Rotating access keys.
- Follow the best practices.
During production, enable TLS encryption through Amazon API Gateway, see Configuring mutual TLS authentication for an HTTP API.
This project is licensed under the Apache 2.0 License.