The Prometheus Connector receives and sends time series data between Prometheus and Amazon Timestream through Prometheus' remote write and remote read protocols.
- Prerequisites
- User Documentation
- Developer Documentation
- Troubleshooting
- Caveats
- License
- Sign up for AWS — Before beginning, have an AWS account. For more information about creating an AWS account and retrieving your AWS credentials, see Signing Up for AWS.
- Amazon Timestream — Have databases and tables created on Amazon Timestream. To create databases and tables on Amazon Timestream, see Accessing Timestream.
- Minimum requirements — The Amazon Timestream Prometheus Connector for Go requires Go 1.14 or later.
- Prometheus — Download Prometheus from their Download page. To learn more about Prometheus, see their introduction documentation.
- Docker — Docker is only required when building or running the Docker image. To download Docker, see Get Started with Docker.
The following steps use one-click deployment to deploy the connector as a Lambda function along with an API Gateway.
- Prerequisites are met.
- Prometheus is configured, minimum version `2.0.0`.
- Deploy with one-click deployment: serverless/DEVELOPER_README.md#deployment.
- Update the `remote_read` and `remote_write` values in `prometheus.yml` to the resources created by the deployment: serverless/DEVELOPER_README.md#configure-prometheus.
- Verify the Prometheus Connector is working.
The Prometheus Connector is available in the following formats:
- One-click deployment.
- Precompiled binaries.
- Docker image.
- A `zip` archive of the precompiled binary for Linux that can be integrated with AWS Lambda.
To configure Prometheus to read and write to remote storage, configure the `remote_read` and `remote_write` sections in `prometheus.yml`. To learn more, see the remote read and remote write sections on Prometheus' configuration page.
- Configure Prometheus' remote read and remote write destination by setting the `url` options to the Prometheus Connector's listening URLs, e.g. `"http://localhost:9201/write"`.
- Configure the basic authentication header for Prometheus read and write requests with valid IAM credentials.
NOTE: All configuration options are case-sensitive, and the `session_token` authentication parameter is not supported for MFA-authenticated AWS users.
```yaml
basic_auth:
  username: accessKey
  password: secretAccessKey
```
Prometheus also supports reading the password from a file. The following example has the IAM secret access key stored in `secretAccessKey.txt` in the `credentials` folder:
```yaml
basic_auth:
  username: accessKey
  password_file: /Users/user/Desktop/credentials/secretAccessKey.txt
```
The `password_file` path must be the absolute path for the file, and the password file must contain only the value for the `aws_secret_access_key`.
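As a sketch of how such a file might be created (the path is just the example above; adjust it to your environment), write the key with nothing else around it and restrict access:

```shell
# Write only the secret access key to the file (printf '%s' avoids a trailing newline),
# then restrict read access to the current user.
printf '%s' "$AWS_SECRET_ACCESS_KEY" > /Users/user/Desktop/credentials/secretAccessKey.txt
chmod 600 /Users/user/Desktop/credentials/secretAccessKey.txt
```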
NOTE: As a security best practice, it is recommended to regularly rotate IAM user access keys.
- With the default configuration, Prometheus uses TLS version 1.2 for `remote_read` and `remote_write` requests. It is recommended to secure the Prometheus requests with mutual TLS encryption in a production environment. This can be achieved by specifying the certificate authority file in the `tls_config` section of Prometheus' remote read and remote write configuration. To generate self-signed certificates during development, see the Creating Self-signed TLS Certificates section.

  Here is an example of a `remote_write` and `remote_read` configuration with TLS, where `RootCA.pem` is within the same directory as the Prometheus configuration file:

  NOTE: All configuration options are case-sensitive, and the `session_token` authentication parameter is not supported for MFA-authenticated AWS users.
```yaml
remote_write:
  - url: "https://localhost:9201/write"
    tls_config:
      # Ensure ca_file is a valid file path pointing to the CA certificate.
      ca_file: RootCA.pem
    basic_auth:
      username: accessKey
      password: secretAccessKey

remote_read:
  - url: "https://localhost:9201/read"
    basic_auth:
      username: accessKey
      password: secretAccessKey
    tls_config:
      # Ensure ca_file is a valid file path pointing to the CA certificate.
      ca_file: RootCA.pem
```
See a full example without TLS configuration in simple-example.yml.
See `serverless/DEVELOPER_README.md` for serverless deployments, which includes one-click deployment links for CloudFormation.
This is the easiest and recommended method for running the connector.
The precompiled binaries are named `bootstrap` regardless of platform, to align with the `provided.al2023` Lambda runtime naming convention. Run the precompiled binary with the required arguments `default-database` and `default-table`. Specify the `region` argument if your Timestream database is not in `us-east-1`, as that is the default value for the target region.
./bootstrap --default-database=prometheusDatabase --default-table=prometheusMetricsTable --region=us-west-2
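To quickly confirm the connector is up and listening, query its telemetry endpoint (assuming the default `web.listen-address` of `:9201` and `web.telemetry-path` of `/metrics`):

```shell
# The connector serves its own metrics at the telemetry path.
curl http://localhost:9201/metrics
```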
It is recommended to secure the Prometheus requests with TLS encryption. To enable TLS encryption:
- Specify the server certificate and the server private key through the `tls-certificate` and `tls-key` configuration options. An example for macOS is as follows:

  ```shell
  ./bootstrap \
    --default-database=prometheusDatabase \
    --default-table=prometheusMetricsTable \
    --region=us-west-2 \
    --tls-certificate=serverCertificate.crt \
    --tls-key=serverPrivateKey.key
  ```
- Ensure the certificate authority file has been specified in the `tls_config` section within Prometheus' configuration file; see Prometheus Configuration for an example.
To generate self-signed certificates during development, see Creating Self-signed TLS Certificates.
For more examples on configuring the Prometheus Connector, see Configuration Options.
The following error message may show up when running the precompiled binary on macOS:
"bootstrap" cannot be opened because the developer cannot be verified.
Follow these steps to resolve:
- Choose `Apple menu` > `System Preferences`.
- Select `Security & Privacy`.
- Under the `General` tab, select `Open Anyway`.
Load the Docker image with the following command, replacing `<version>` appropriately:
docker load < timestream-prometheus-connector-docker-image-<version>.tar.gz
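To confirm the image was loaded, list it by repository name (an optional check):

```shell
docker images timestream-prometheus-connector-docker
```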
- Linux and macOS

  ```shell
  docker run \
    -p 9201:9201 \
    timestream-prometheus-connector-docker \
    --default-database=prometheusDatabase \
    --default-table=prometheusMetricsTable
  ```

- Windows

  ```shell
  docker run ^
    -p 9201:9201 ^
    timestream-prometheus-connector-docker ^
    --default-database=prometheusDatabase ^
    --default-table=prometheusMetricsTable
  ```
It is recommended to secure the Prometheus requests with HTTPS using TLS encryption. To enable TLS encryption:
- Mount the volume containing the server certificate and the server private key to a volume on the Docker container, then specify the path to the certificate and the key through the `tls-certificate` and `tls-key` configuration options. Note that the paths specified must be with respect to the Docker container.

  In the following examples, the server certificate and server private key are stored in the `$HOME/tls` directory on Linux and macOS or `%USERPROFILE%/tls` on Windows, but are mounted to `/root/tls` on the Docker container:

  - Linux and macOS

    ```shell
    docker run \
      -v $HOME/tls:/root/tls:ro \
      -p 9201:9201 \
      timestream-prometheus-connector-docker \
      --default-database=prometheusDatabase \
      --default-table=prometheusMetricsTable \
      --tls-certificate=/root/tls/serverCertificate.crt \
      --tls-key=/root/tls/serverPrivateKey.key
    ```

  - Windows

    ```shell
    docker run ^
      -v "%USERPROFILE%/tls:/root/tls/:ro" ^
      -p 9201:9201 ^
      timestream-prometheus-connector-docker ^
      --default-database=prometheusDatabase ^
      --default-table=prometheusMetricsTable ^
      --tls-certificate=/root/tls/serverCertificate.crt ^
      --tls-key=/root/tls/serverPrivateKey.key
    ```
- Ensure the certificate authority file has been specified in the `tls_config` section within Prometheus' configuration file; see Prometheus Configuration for an example.
To generate self-signed certificates during development, see Creating Self-signed TLS Certificates.
To configure the `web.listen-address` option when running the Prometheus Connector through a Docker image, use the `-p` flag to expose the custom endpoint. The following example listens on the custom endpoint `localhost:3080`:
- Linux and macOS

  ```shell
  docker run \
    -p 3080:3080 \
    timestream-prometheus-connector-docker \
    --default-database=prometheusDatabase \
    --default-table=prometheusMetricsTable \
    --web.listen-address=:3080
  ```

- Windows

  ```shell
  docker run ^
    -p 3080:3080 ^
    timestream-prometheus-connector-docker ^
    --default-database=prometheusDatabase ^
    --default-table=prometheusMetricsTable ^
    --web.listen-address=:3080
  ```
For more information regarding the `-p` flag, see the official Docker documentation.
Running the Prometheus Connector on AWS Lambda allows for a serverless workflow. This section details the steps to configure the IAM permissions to integrate the Prometheus Connector with Amazon API Gateway and AWS Lambda.
- Open the AWS management console for IAM.
- Click on `Policies` and select `Create policy`.
- Select the `JSON` tab and paste the following policy to provide basic permissions for the Lambda function to output logs to CloudWatch:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:CreateLogGroup",
          "logs:CreateLogStream",
          "logs:PutLogEvents"
        ],
        "Resource": "arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:/aws/lambda/${LambdaFunctionName}:*"
      }
    ]
  }
  ```
- Update the values for `${AWS::Region}`, `${AWS::AccountId}`, and `${LambdaFunctionName}` in the policy above. An example of the updated value for `Resource` would be: `"Resource": "arn:aws:logs:us-east-1:12345678:log-group:/aws/lambda/timestream-prometheus-connector:*"`.
- Click `Next`.
- Enter a policy name.
- Click on `Create policy`.
- Click on `Roles`.
- Click on `Create role`.
- Click on `Custom trust policy`.
- Paste the following into `Policy Document` to allow API Gateway and Lambda to assume the role:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {
          "Service": [
            "apigateway.amazonaws.com",
            "lambda.amazonaws.com"
          ]
        },
        "Action": "sts:AssumeRole"
      }
    ]
  }
  ```

- Click on `Next`.
- Select the policy that you previously created, which gives the Lambda function basic permissions to output logs to CloudWatch.
- Click `Next`.
- Enter a name for the role; this example will use `LambdaTimestreamFullAccessRole`.
- Click on `Create role`.
- After creating the role successfully, click `Roles` under `Access management` and choose the newly-created role to see its details.
- Take note of the `Role ARN`; it is required when creating a policy to allow the current user access to the role.
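If you prefer the AWS CLI over the console, the same policy and role can be created along these lines (a sketch; the policy name and JSON file names are hypothetical, with each file holding the corresponding JSON shown above):

```shell
# Create the logging policy (lambda-logging-policy.json holds the logging policy JSON above).
aws iam create-policy \
    --policy-name PrometheusConnectorLambdaLogging \
    --policy-document file://lambda-logging-policy.json

# Create the role (trust-policy.json holds the trust policy JSON above).
aws iam create-role \
    --role-name LambdaTimestreamFullAccessRole \
    --assume-role-policy-document file://trust-policy.json

# Attach the logging policy to the role; replace <account-id> with your account ID.
aws iam attach-role-policy \
    --role-name LambdaTimestreamFullAccessRole \
    --policy-arn arn:aws:iam::<account-id>:policy/PrometheusConnectorLambdaLogging
```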
To provide access to this newly-created role, add a permission to the current user with the following steps:
- Open the AWS management console for IAM.
- Under `Access management`, click `Users`.
- Select the user that needs access to this role.
- Click `Add permissions`.
- Select `Attach existing policies directly`.
- Click `Create policy`.
- Switch to the `JSON` tab, and paste the following to grant the user permission to `PassRole` on the newly-created role, where `role_arn` can be found on the `Summary` page of the newly-created role:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "PolicyStatementToAllowUserToPassOneSpecificRole",
        "Effect": "Allow",
        "Action": [
          "iam:PassRole"
        ],
        "Resource": "role_arn"
      }
    ]
  }
  ```

- Click `Next`.
- Enter a policy name.
- Click `Create policy`.
- Attach the policy to the user.
- Open the AWS management console for AWS Lambda.
- Click `Create function`.
- Enter a function name; this example will use `PrometheusConnector` as the function name.
- Choose `Amazon Linux 2023` from the Runtime dropdown.
- Expand `Change default execution role`.
- Select `Use an existing role`.
- Choose the newly-created role from the dropdown; for this example it will be `LambdaTimestreamFullAccessRole`.
- Click `Create function`.
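The equivalent AWS CLI call looks roughly like the following (a sketch; the role ARN and ZIP file path are placeholders, and the code upload itself is covered in the next section):

```shell
aws lambda create-function \
    --function-name PrometheusConnector \
    --runtime provided.al2023 \
    --handler bootstrap \
    --role arn:aws:iam::<account-id>:role/LambdaTimestreamFullAccessRole \
    --zip-file fileb://timestream-prometheus-connector.zip
```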
- Open the AWS management console for AWS Lambda.
- Select the newly-created function; in this example it will be `PrometheusConnector`.
- Under the `Code` section, click `Upload from`.
- Select `.zip file`.
- Upload the ZIP file containing the precompiled Linux binary, which can be downloaded from the latest releases.
- Click `Save`.
- Click on `Configuration`.
- Under the `Environment variables` section, click `Edit`.
- Enter the key and corresponding value of the environment variables `default_database` and `default_table`, which are required; `region` will default to `us-east-1` if not populated. A CLI sketch of this step is shown after this list. Go to Configuration Options to see more information.
- Click on `Save`.
- Click on `Code`.
- Click `Edit` in the `Runtime settings` section.
- In the `Handler` section, enter the name of the Amazon Timestream Prometheus Connector ZIP file, which will be `bootstrap`.
- Click `Save`.
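As referenced in the environment variables step above, here is a sketch of the same configuration applied from the AWS CLI (example database and table names):

```shell
aws lambda update-function-configuration \
    --function-name PrometheusConnector \
    --environment "Variables={default_database=prometheusDatabase,default_table=prometheusMetricsTable,region=us-east-1}"
```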
- Open the AWS management console for API Gateway.
- Click `Create API`.
- Click `Build` under `HTTP API`.
- Click `Add integration` and choose `Lambda`.
- Choose the appropriate `AWS Region`.
- Type in the name of the Lambda function created in the previous step, Create the AWS Lambda Function, or paste in the function's ARN. For this example, the value will be `PrometheusConnector`.
- Click `Add integration`.
- Type in an `API name`; this example will use `PrometheusConnectorAPI` as the API name.
- Click `Next`.
- There will be a pre-defined route `ANY /PrometheusConnector -> PrometheusConnector(Lambda)`; remove this route.
- Click `Add route` to add a new route for remote write.
- Choose `POST` for the `Method`.
- Enter `/write` for the Resource Path.
- Select the appropriate Lambda function as the integration target; in this example it would be `PrometheusConnector`.
- Click `Add route` to add a new route for remote read.
- Choose `POST` for the `Method`.
- Enter `/read` for the Resource Path.
- Select the appropriate Lambda function as the integration target; in this example it would be `PrometheusConnector`.
- Click `Next`.
- Remove the `default` stage.
- Click `Add stage`.
- Enter a name for the stage; in this example we will use `dev`.
- Click the `Auto-deploy` button and click `Next`.
- Click `Create`.
- Select the newly-created API Gateway and take note of the invoke URL; this URL is required to set up Prometheus' remote read and write URLs.
- It is highly recommended to have TLS encryption enabled during production. See Configuring mutual TLS authentication for an HTTP API.
The process to configure Prometheus for AWS Lambda requires the same steps listed in Prometheus Configuration.

NOTE: Ensure the remote write and the remote read URLs are set to the invoke URLs.

The following example points the remote write and the remote read URLs to an API with ID `foo9l30` and the deployment stage `dev`.

It is highly recommended to have TLS encryption enabled in production. The following example also specifies the root certificate authority file for TLS encryption, which requires configuring TLS encryption on API Gateway. See Configuring mutual TLS authentication for an HTTP API.
```yaml
remote_write:
  - url: "https://foo9l30.execute-api.us-east-1.amazonaws.com/dev/write"
    tls_config:
      # Ensure ca_file is a valid file path pointing to the CA certificate.
      ca_file: RootCA.pem

remote_read:
  - url: "https://foo9l30.execute-api.us-east-1.amazonaws.com/dev/read"
    tls_config:
      # Ensure ca_file is a valid file path pointing to the CA certificate.
      ca_file: RootCA.pem
```
To configure logging for API Gateway to CloudWatch, a new CloudWatch log group needs to be created.
- Open the AWS management console for CloudWatch.
- Click `Log groups` under `Logs` in the sidebar.
- Select `Create log group`.
- Enter a group name and click `Create`.
- Select the newly-created log group from the list of log groups to see the log group details.
- Take note of the ARN.
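The log group can also be created from the AWS CLI (a sketch; the group name is an example), and `describe-log-groups` returns the ARN you need:

```shell
# Create the log group, then look up its ARN.
aws logs create-log-group --log-group-name PrometheusConnectorAPIGatewayLogs
aws logs describe-log-groups --log-group-name-prefix PrometheusConnectorAPIGatewayLogs
```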
Next, open the previously created Prometheus Connector API on API Gateway to configure logging.
- Select `Logging` under the `Monitor` section on the left-hand side.
- Select a stage from the dropdown and click `Next`.
- Click `Edit`.
- Toggle `Access logging`.
- Paste the ARN of the newly-created log group.
- Select the preferred log format.
- Click `Save`.
If permissions are required, add the following policy to the IAM account, with `region` and `account-id` updated to the appropriate values:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:GetLogEvents",
        "logs:FilterLogEvents"
      ],
      "Resource": "arn:aws:logs:region:account-id:log-group:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogDelivery",
        "logs:PutResourcePolicy",
        "logs:UpdateLogDelivery",
        "logs:DeleteLogDelivery",
        "logs:CreateLogGroup",
        "logs:DescribeResourcePolicies",
        "logs:GetLogDelivery",
        "logs:ListLogDeliveries"
      ],
      "Resource": "*"
    }
  ]
}
```
For more information see Configuring logging for an HTTP API.
A log group will automatically be created when creating a new AWS Lambda function. The log group name follows the format `/aws/lambda/{LambdaFunctionName}`.
To view the logs from the Prometheus Connector:
- Open the AWS management console for CloudWatch.
- Click `Log groups` under `Logs`.
- Select the log group for the Prometheus Connector; in this example it would be `/aws/lambda/PrometheusConnector`.
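With AWS CLI v2, the same logs can be followed from a terminal (assuming the example function name above):

```shell
aws logs tail /aws/lambda/PrometheusConnector --follow
```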
Start Prometheus by running the command: `./prometheus --config.file=prometheus.yml`.
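You can optionally validate the configuration file first with `promtool`, which ships with the Prometheus distribution:

```shell
# Checks prometheus.yml for syntax and semantic errors before starting Prometheus.
./promtool check config prometheus.yml
```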
NOTE: All configuration option keys are case-sensitive.
When running the Prometheus Connector on AWS Lambda, configuration options need to be set as Lambda environment variables.
The default-database name and default-table name are required for data ingestion and data retrieval. If they are not provided, the Prometheus Connector will return a 400 Bad Request to the caller.
| Standalone Option | Lambda Option | Description | Required | Default Value |
|---|---|---|---|---|
| `default-database` | `default_database` | The Prometheus default database name. | No | None |
| `default-table` | `default_table` | The Prometheus default table name. | No | None |
| `region` | `region` | The signing region for the Amazon Timestream service. | No | `us-east-1` |
| `tls-certificate` | N/A | The path to the TLS server certificate file. This is required to enable HTTPS. If unspecified, HTTP will be used. | No | None |
| `tls-key` | N/A | The path to the TLS server private key file. This is required to enable HTTPS. If unspecified, HTTP will be used. | No | None |
| `web.listen-address` | N/A | The endpoint to listen to for write and read requests sent from Prometheus. | No | `:9201` |
| `web.telemetry-path` | N/A | The path containing metrics collected by the Prometheus Connector, such as `ignoredSamples`. This allows Prometheus to scrape and monitor data from the specified telemetry path. | No | `/metrics` |
NOTE: The `web.listen-address` and `web.telemetry-path` configuration options are not available when running the Prometheus Connector on AWS Lambda.
- Configure the Prometheus Connector to access the Amazon Timestream service in the US West (Oregon) Region instead of the default US East (N. Virginia) Region.

  | Runtime | Command |
  |---|---|
  | Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --region=us-west-2` |
  | AWS Lambda Function | `aws lambda update-function-configuration --function-name PrometheusConnector --environment "Variables={default_database=prometheusDatabase,default_table=prometheusMetricsTable,region=us-west-2}"` |
- Configure the Prometheus Connector to listen for requests on an HTTPS server `https://localhost:9201` with TLS encryption.

  | Runtime | Command |
  |---|---|
  | Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --tls-certificate=serverCertificate.crt --tls-key=serverPrivateKey.key` |
  | AWS Lambda Function | N/A |
- Configure the Prometheus Connector to listen for Prometheus requests on `http://localhost:3080`.

  | Runtime | Command |
  |---|---|
  | Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --web.listen-address=:3080` |
  | AWS Lambda Function | N/A |
- Configure the Prometheus Connector to listen for Prometheus requests on `http://localhost:3080` and serve collected metrics at `http://localhost:3080/timestream-metrics`.

  | Runtime | Command |
  |---|---|
  | Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --web.listen-address=:3080 --web.telemetry-path=/timestream-metrics` |
  | AWS Lambda Function | N/A |
The Prometheus Connector exposes the query SDK's retry configurations for users.
| Standalone Option | Lambda Option | Description | Required | Default Value |
|---|---|---|---|---|
| `max-retries` | `max_retries` | The maximum number of times the read request will be retried for failures. | No | 3 |
Configure the Prometheus Connector to retry up to 10 times upon recoverable errors.
| Runtime | Command |
|---|---|
| Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --max-retries=10` |
| AWS Lambda Function | `aws lambda update-function-configuration --function-name PrometheusConnector --environment "Variables={default_database=prometheusDatabase,default_table=prometheusMetricsTable,max_retries=10}"` |
| Standalone Option | Lambda Option | Description | Required | Default Value | Valid Values |
|---|---|---|---|---|---|
| `enable-logging` | `enable_logging` | Enables or disables logging in the Prometheus Connector. | No | `true` | `1`, `t`, `T`, `TRUE`, `true`, `True`, `0`, `f`, `F`, `FALSE`, `false`, `False` |
| `fail-on-long-label` | `fail_on_long_label` | Enables or disables the option to halt the program immediately when a Prometheus label name exceeds 256 bytes. | No | `false` | `1`, `t`, `T`, `TRUE`, `true`, `True`, `0`, `f`, `F`, `FALSE`, `false`, `False` |
| `fail-on-invalid-sample-value` | `fail_on_invalid_sample_value` | Enables or disables the option to halt the program immediately when a Sample contains a non-finite float value. | No | `false` | `1`, `t`, `T`, `TRUE`, `true`, `True`, `0`, `f`, `F`, `FALSE`, `false`, `False` |
| `log.level` | `log_level` | Sets the output level for logs. | No | `info` | `info`, `warn`, `debug`, `error` |
| `log.format` | `log_format` | Sets the output format for the logs. The output for logs always goes to stderr, unless logging has been disabled. | No | `logfmt` | `logfmt`, `json` |
Setting log levels:

- SAM CLI: `sam deploy --parameter-overrides "LogLevel=Debug"`
- One-click deployment: update the `log_level` environment variable when configuring the deployment parameters.
  - With an already-deployed Lambda, you can edit the `log_level` environment variable in the Lambda configuration.
- Local execution: `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --log.level=info`
NOTE: The logging level is set to `info` by default. Set `log.level` to `debug` to view any Samples ignored due to long metric names or non-finite values.
`fail-on-long-label`: Prometheus recommends using meaningful and detailed metric names, which may result in metric names exceeding the maximum length (256 bytes) supported by Amazon Timestream. If a Prometheus time series has a metric name exceeding the maximum supported length, the Prometheus Connector will by default log and ignore the Prometheus time series. To quickly spot and resolve issues that may be caused by ignored Prometheus time series during development, set the `fail-on-long-label` flag to `true`, and the Prometheus Connector will log and halt on a long metric name.
`fail-on-invalid-sample-value`: If the Prometheus WriteRequest contains time series with non-finite float values such as NaN, -Inf, or Inf, the Prometheus Connector will by default log and ignore those time series. To quickly spot and resolve issues that may be caused by ignored Prometheus time series during development, set the `fail-on-invalid-sample-value` flag to `true`, and the Prometheus Connector will log and halt on a Prometheus time series with non-finite float values. The `fail-on-long-label` and `fail-on-invalid-sample-value` configurations are not recommended during production operation.
- Disable logging in the Prometheus Connector.

  | Runtime | Command |
  |---|---|
  | Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --enable-logging=false` |
  | AWS Lambda Function | `aws lambda update-function-configuration --function-name PrometheusConnector --environment "Variables={default_database=prometheusDatabase,default_table=prometheusMetricsTable,enable_logging=false}"` |
- Toggle the Prometheus Connector to halt on: label names exceeding the maximum length supported by Amazon Timestream, and Prometheus time series with non-finite values.

  | Runtime | Command |
  |---|---|
  | Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --fail-on-long-label=true --fail-on-invalid-sample-value=true` |
  | AWS Lambda Function | `aws lambda update-function-configuration --function-name PrometheusConnector --environment "Variables={default_database=prometheusDatabase,default_table=prometheusMetricsTable,fail_on_long_label=true,fail_on_invalid_sample_value=true}"` |
- Configure the Prometheus Connector to output the logs at debug level and in JSON format.

  | Runtime | Command |
  |---|---|
  | Precompiled Binaries | `./bootstrap --default-database=PrometheusDatabase --default-table=PrometheusMetricsTable --log.level=debug --log.format=json` |
  | AWS Lambda Function | `aws lambda update-function-configuration --function-name PrometheusConnector --environment "Variables={default_database=prometheusDatabase,default_table=prometheusMetricsTable,log_level=debug,log_format=json}"` |
If a Prometheus time series has a metric name exceeding the maximum supported length, the Prometheus Connector will by default log and ignore those Samples. Since long metric names are logged by the Prometheus Connector, one can use `write_relabel_configs` in `prometheus.yml` to rename a long metric name. Below is an example `prometheus.yml` relabeling the long metric name `prometheus_remote_storage_read_request_duration_seconds_bucket` to `prometheus_read_request_duration_seconds_bucket`.
```yaml
global:
  scrape_interval: 60s
  evaluation_interval: 60s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

remote_write:
  - url: "http://localhost:9201/write"
    write_relabel_configs:
      - source_labels: ["__name__"]
        action: replace
        regex: prometheus_remote_storage_read_request_duration_seconds_bucket
        replacement: prometheus_read_request_duration_seconds_bucket
        target_label: __name__

remote_read:
  - url: "http://localhost:9201/read"
```
When the connector is deployed as a Lambda function, authentication is handled by passing through credentials with each request; validation is done within the Lambda function using the AWS SDK for Go. In general, the Timestream Prometheus Connector will use the default credentials provider implemented in the AWS SDK for Go instead of allowing users to provide the credentials through command-line flags. This prevents sensitive data from being easily scraped.
Due to Prometheus' lack of support for SigV4 (see the Unsupported SigV4 Authentication section), the API Gateway deployed via one-click deployment or with the `serverless/template.yml` CloudFormation template does not use SigV4 for its public endpoints.
The Prometheus Connector uses the following `User-Agent` header for all requests:
User-Agent: Prometheus Connector/<version> aws-sdk-go/<version> (go<version>; <os>; <cpu arch>)
- To verify Prometheus is running, open `http://localhost:9090/` in a browser; this opens Prometheus' expression browser.
- To verify the Prometheus Connector is ready to receive requests, ensure the following log message is printed. See the Troubleshooting section for other error messages.
level=info ts=2020-11-21T01:06:49.188Z caller=utils.go:33 message="Timestream <write/query> connection is initialized (Database: <database-name>, Table: <table-name>, Region: <region>)"
- To verify the Prometheus Connector is ingesting data, use the AWS CLI to execute the following query:

  ```shell
  aws timestream-query query --query-string "SELECT count(*) FROM prometheusDatabase.prometheusMetricsTable"
  ```
The output should look similar to the following:
{ "Rows": [ { "Data": [ { "ScalarValue": "340" } ] } ], "ColumnInfo": [ { "Name": "_col0", "Type": { "ScalarType": "BIGINT" } } ], "QueryId": "AEBQEAMYNBGX7RA" }
This sample output indicates that 340 rows have been ingested.
- To verify the Prometheus Connector can query data from Amazon Timestream, visit `http://localhost:9090/` in a browser, which opens Prometheus' expression browser, and execute a Prometheus Query Language (PromQL) query. The PromQL query will use the values of `default-database` and `default-table` as the corresponding database and table that contain the data. Here is a simple example:

  ```
  prometheus_http_requests_total{}
  ```

  `prometheus_http_requests_total` is a metric name. The database and table being queried are the corresponding `default-database` and `default-table` configured for the Prometheus Connector. This query returns all time series data from the past hour with the metric name `prometheus_http_requests_total` in `default-table` of `default-database`.

  PromQL also supports regex; here is an example:
prometheus_http_requests_total{handler!="/api/v1/query", job=~"p*", code!~"2.."}
  This example queries all rows from `prometheusMetricsTable` of `prometheusDatabase` where:

  - column `metric name` equals `prometheus_http_requests_total`;
  - column `handler` does not equal `/api/v1/query`;
  - column `job` matches the regex pattern `p*`;
  - column `code` does not match the regex pattern `2..`.

  For more examples, see Prometheus Query Examples. There are other ways to execute PromQL queries, such as through Prometheus' HTTP API (see the example below) or through Grafana.
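For instance, the same query can be issued against Prometheus' HTTP API from the command line (assuming Prometheus is listening on the default `localhost:9090`):

```shell
# Evaluate an instant PromQL query through the Prometheus HTTP API.
curl 'http://localhost:9090/api/v1/query?query=prometheus_http_requests_total'
```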
- To download all dependencies, run the command: `go get -u -v -f all`.
- To build the program, run: `go build`.
- Now, proceed from the Prometheus Configuration section in the User Documentation to run the connector.
- Navigate to the repository's root directory on a command-line interface.
- Run the following command to build the image: `docker buildx build . -t timestream-prometheus-connector-docker`.
The following logs can be useful for determining whether any records are ignored by the Prometheus Connector. Enable `debug` logging if you require additional information about rejected records.
Records ignored by the Prometheus Connector:
x number of records were rejected for ingestion to Timestream. See Troubleshooting in the README for why these may be rejected, or turn on debug logging for additional info.
Records requested for ingestion through the Prometheus Connector:
x records requested for ingestion from Prometheus.
Records successfully ingested through the Prometheus Connector:
Successfully wrote x records to database: PrometheusDatabase table: PrometheusMetricsTable
Note: Errors and records are only logged in debug mode. With the default log level of `info`, only high-level errors are logged. See Logger Configuration Options for how to adjust the logging level.
All connector-specific errors can be found in `errors/errors.go`.
- Error: `LongLabelNameError`

Description: The metric name exceeds the maximum supported length and the `fail-on-long-label` option is set to `true`.

Log Example
level=error ts=2020-11-06T02:01:46.753Z caller=utils.go:23 message="Unable to convert the received Prometheus write request to Timestream Records." error="LongLabelNameError: metric name 'prometheus_remote_storage_read_request_duration_seconds_bucket' exceeds 60 characters, the maximum length supported by Timestream"
Solution

- Rename the invalid metric name using the relabeling method in the Relabel Long Labels section.
- Set the `fail-on-long-label` option to `false`, which means the Prometheus Connector will log and not attempt to ingest the time series containing the long metric name.
- Error: `InvalidSampleValueError`

Description: The Prometheus WriteRequest contains time series with unsupported non-finite float Sample values such as NaN, -Inf, or Inf, and the `fail-on-invalid-sample-value` option is set to `true`.

Log Example
debug ts=2020-11-06T02:29:26.760Z caller=utils.go:28 message="Timestream only accepts finite IEEE Standard 754 floating point precision. Samples with NaN, Inf and -Inf are ignored." timeSeries="labels:<name:\"__name__\" value:\"prometheus_rule_evaluation_duration_seconds\" > labels:<name:\"instance\" value:\"localhost:9090\" > labels:<name:\"job\" value:\"prometheus\" > labels:<name:\"monitor\" value:\"codelab-monitor\" > labels:<name:\"quantile\" value:\"0.99\" > labels:<name:\"prometheusDatabase\" value:\"promDB\" > labels:<name:\"prometheusMetricsTable\" value:\"prom\" > samples:<value:nan timestamp:1604629766606 > "
Solution

Set the `fail-on-invalid-sample-value` option to `false`, and the Prometheus Connector will log and not attempt to ingest any Prometheus time series with a non-finite Sample value. For more details, see the Logger Configuration Options.
Error:
MissingDatabaseWithWriteError
Description: The default database environment variable has not been set.
Log Example
error="InvalidDestinationError: the given database name: timestreamDatabase cannot be found for the current time series labels:<name:\"__name__\" value:\"go_gc_duration_seconds\" > labels:<name:\"instance\" value:\"localhost:9090\" > labels:<name:\"job\" value:\"prometheus\" > labels:<name:\"monitor\" value:\"codelab-monitor\" > labels:<name:\"quantile\" value:\"0\" > labels:<name:\"prometheusDatabase\" value:\"promDB\" > labels:<name:\"prometheusMetricsTable\" value:\"prom\" > samples:<timestamp:1604627351607 > "
Solution

Ensure `default-database` and `default-table` are set when running the Prometheus Connector. Note that the configuration options for the AWS Lambda integration are in `snake_case`. For more details and examples, see the Advanced Options section.
- Error: `NewMissingTableWithWriteError`

Description: The default table is not configured with the environment variable `default-table`.

Log Example
level=error ts=2020-11-07T01:47:30.752Z caller=utils.go:23 message="Unable to convert the received Prometheus write request to Timestream Records." error="The given table name: timestreamTableName cannot be found for the current time series:<name:\"__name__\" value:\"prometheus_tsdb_tombstone_cleanup_seconds_bucket\" > labels:<name:\"instance\" value:\"localhost:9090\" > labels:<name:\"job\" value:\"prometheus\" > labels:<name:\"le\" value:\"0.005\" > labels:<name:\"monitor\" value:\"codelab-monitor\" > labels:<name:\"prometheusDatabase\" value:\"promDB\" > labels:<name:\"prometheusMetricsTable\" value:\"prom\" > samples:<timestamp:1604713406607 > "
Solution

Ensure `default-database` and `default-table` are set when running the Prometheus Connector. Note that the configuration options for the AWS Lambda integration are in `snake_case`. For more details and examples, see the Advanced Options section.
- Error: `NewMissingDatabaseError`

Description: The environment variable `default-database` must be specified for the Prometheus Connector.

Log Example
level=error ts=2020-11-07T01:49:31.041Z caller=utils.go:23 message="Error occurred while reading the data back from Timestream." error="the given database name: <exampledatabase> cannot be found. Please provide the table name with the flag default-database."
Solution

Set the environment variable `default-database` to the destination database for the Prometheus Connector.
Error:
NewMissingTableError
Description: The environment variable default-table must be specified for the Prometheus Connector.
Log Example
level=error ts=2020-11-07T01:48:53.694Z caller=utils.go:23 message="Error occurred while reading the data back from Timestream." error="the given table name: <tablename> cannot be found. Please provide the table name with the flag default-table"
Solution

Set the environment variable `default-table` to the destination table for the Prometheus Connector.
Error:
MissingDestinationError
Description: The environment variables default-database and default-table must be specified for the Lambda function.
Solution
Set the environment variables default-database and default-table for the AWS Lambda Function with the following command, update the function name if necessary:
aws lambda update-function-configuration --function-name PrometheusConnector --environment "Variables={default_database=prometheusDatabase,default_table=prometheusMetricsTable}"
For more information, please go to Configure the AWS Lambda Function.
- Error: `ParseEnableLoggingError`

Description: The value set for the `enable-logging` option is not an accepted value.

Solution

Check the accepted list of values for `enable-logging` in the Logger Configuration Options section.

- Error: `ParseMetricLabelError`

Description: The value set for the `fail-on-long-label` option is not an accepted value.

Solution

Check the accepted list of values for `fail-on-long-label` in the Logger Configuration Options section.

- Error: `ParseSampleOptionError`

Description: The value set for the `fail-on-invalid-sample-value` option is not an accepted value.

Solution

Check the accepted list of values for `fail-on-invalid-sample-value` in the Logger Configuration Options section.

- Error: `MissingHeaderError`

Description: This error may occur when running the Prometheus Connector on AWS Lambda. The request sent to the Prometheus Connector is missing either the `x-prometheus-remote-read-version` or the `x-prometheus-remote-write-version` header.

Solution

Check the request headers and add `x-prometheus-remote-read-version` or `x-prometheus-remote-write-version` to the request headers. This error returns a 400 Bad Request status code to the caller.

- Error: `ParseRetriesError`

Description: This error will occur when the `max-retries` option has an invalid value.

Solution

See the Retry Configuration Options section for acceptable formats for the `max-retries` option.

- Error: `UnknownMatcherError`

Description: This error will occur when an unknown matcher is within a PromQL query. Prometheus only supports four types of matchers within a filter: `=`, `!=`, `=~`, and `!~`.

Solution

Re-evaluate your PromQL query and ensure you are using only the above matchers.
| Errors | Status Code | Description | Solution |
|---|---|---|---|
| `ValidationException` | 400 | Invalid or malformed request. | Check whether the provided `default-database` and `default-table` values are set, and review the Configuration Options. |
| `ServiceQuotaExceededException` | 402 | Instance quota of resource exceeded for this account. | Remove unused instances or upgrade the total number of resources for this account. |
| `AccessDeniedException` | 403 | You are not authorized to perform this action. | Ensure you have sufficient access to Amazon Timestream. |
| `ResourceNotFoundException` | 404 | The operation tried to access a non-existent resource. | Specify the resource correctly, or check whether its status is not ACTIVE. |
| `ConflictException` | 409 | Amazon Timestream was unable to process this request because it contains a resource that already exists. | Update the request with the correct resource. |
| `RejectedRecordsException` | 419 | Amazon Timestream will throw this exception in the following cases: 1. Records with duplicate data where there are multiple records with the same dimensions, timestamps, and measure names but different measure values. 2. Records with timestamps that lie outside the retention duration of the memory store. 3. Records with dimensions or measures that exceed the Amazon Timestream defined limits. | 1. Check and process the data to ensure that there are no different measure values at the same timestamp, given other labels/filters are the same. 2. Check or update the retention duration of the database. |
| `InvalidEndpointException` | 421 | The requested endpoint was invalid. | Check whether the endpoint is NIL or in an incorrect format. |
| `ThrottlingException` | 429 | Too many requests were made by a user exceeding service quotas. The request was throttled. | Continue to send data at the same (or higher) throughput. Go to Data Ingestion for more information. |
| `InternalServerException` | 500 | Amazon Timestream was unable to fully process this request because of an internal server error. | Please send the request again later. |
| Errors | Status Code | Description | Solution |
|---|---|---|---|
| `QueryExecutionException` | 400 | Amazon Timestream was unable to run the query successfully. | See the logs to get more information about the failed query. |
| `ValidationException` | 400 | Invalid or malformed request. | Check whether the query contains invalid regex (see Unsupported RE2 Syntax) or an invalid matcher. |
| `AccessDeniedException` | 403 | You are not authorized to perform this action. | Ensure you have sufficient access to Amazon Timestream. |
| `ConflictException` | 409 | Unable to poll results for a cancelled query. | The query was cancelled. Please resend. |
| `InvalidEndpointException` | 421 | The requested endpoint was invalid. | Check whether the endpoint is NULL or in an incorrect format. |
| `ThrottlingException` | 429 | The request was denied due to request throttling. | Continue to send queries at the same (or higher) throughput. |
| `InternalServerException` | 500 | Amazon Timestream was unable to fully process this request because of an internal server error. | Please send the request again later. |
Prometheus supports SigV4 for the `remote_write` protocol with limitations, and lacks SigV4 support for the `remote_read` protocol. Because the Prometheus Connector is deployed as a Lambda function, the `service` portion of the SigV4 header must be set to `execute-api`. Prometheus hard-codes this value to `aps`, limiting its SigV4 support to Amazon Managed Service for Prometheus. Integrating SigV4 support would require Prometheus to add `remote_read` SigV4 support and configuration settings for the `service` portion of the SigV4 header.
If SigV4 is required, SigV4 authentication is possible by running Prometheus with a sidecar. This requires enabling IAM authentication for the API Gateway deployment, which is not covered in the Prometheus Connector documentation.
All Prometheus requests sent to the Prometheus Connector will be authorized through the AWS SDK for Go. The Prometheus Connector only supports passing the IAM user access key and the IAM user secret access key through the basic authentication header.
It is recommended to regularly rotate IAM user access keys.
Prometheus follows the RE2 syntax (https://github.com/google/re2/wiki/Syntax), while Amazon Timestream supports the Java regex pattern. Any query with unsupported regex syntax will result in a 400 Bad Request status code.
| Unsupported RE2 Regex | Functionality | Sample PromQL |
|---|---|---|
| `(?P<name>\w+)` | Named and numbered capturing group | `up{job=~"(?P<name>\\w+)"}` |
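Since a capture group name serves no purpose in a PromQL matcher, such patterns can usually be rewritten without the group; for example, the sample above can be expressed as:

```
up{job=~"\\w+"}
```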
Prometheus tracks all time series successfully sent to the remote write storage and all rejected Prometheus time series. Since the Prometheus Connector does not ingest Prometheus time series with non-finite float sample values or time series with metric names exceeding the supported length limit, Prometheus' metrics for successful time series are inaccurate. To find the number of ignored Prometheus time series and the total number of Prometheus time series received, check the metrics at `<web.listen-address><web.telemetry-path>`, for instance, `http://localhost:9201/metrics`.
This library is licensed under the Apache 2.0 License.