
AWS Kinesis Firehose Terraform module


Dynamic Terraform module, which creates a Kinesis Firehose Stream and other resources, such as CloudWatch log groups, IAM roles, and security groups, that integrate with Kinesis Firehose. Supports all destinations and all Kinesis Firehose features.


Module versioning rule

Module version AWS Provider version
>= 1.x.x ~> 4.4
>= 2.x.x ~> 5.0
>= 3.x.x >= 5.33
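
For example, to use a 3.x release of the module, version constraints could be pinned like this (a sketch; adjust the constraints to your environment):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.33"
    }
  }
}

module "firehose" {
  source  = "fdmsantos/kinesis-firehose/aws"
  version = "~> 3.0"
  # ...
}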

Features

  • Sources
    • Kinesis Data Stream
    • Direct Put
    • WAF
    • MSK
  • Destinations
    • S3
      • Data Format Conversion
      • Dynamic Partition
    • Redshift
      • VPC Support. Security Groups creation supported.
      • Support for Secrets Manager.
    • ElasticSearch / Opensearch / Opensearch Serverless
      • VPC Support. Security Groups creation supported.
    • Splunk
      • VPC Support. Security Groups creation supported.
      • Support for Secrets Manager.
    • Snowflake
      • VPCE Support.
      • Support for Secrets Manager.
    • Custom Http Endpoint
      • Support for Secrets Manager.
    • DataDog
      • Support for Secrets Manager.
    • Coralogix
      • Support for Secrets Manager.
    • New Relic
      • Support for Secrets Manager.
    • Dynatrace
      • Support for Secrets Manager.
    • Honeycomb
      • Support for Secrets Manager.
    • Logic Monitor
      • Support for Secrets Manager.
    • MongoDB Cloud
      • Support for Secrets Manager.
    • Sumo Logic
    • Iceberg
  • Data Transformation With Lambda
  • Original Data Backup in S3
  • Logging and Encryption
  • Application Role for Direct Put Sources
  • Turn on/off CloudWatch logs decompression and data message extraction
  • Permissions
    • IAM Roles
    • Opensearch / Opensearch Serverless Service Role
    • Associate Role to Redshift Cluster IAM Roles
    • Cross Account S3 Bucket Policy
    • Cross Account Elasticsearch / OpenSearch / Opensearch Serverless Service policy

How to Use

Sources

Kinesis Data Stream

To Enable It: input_source = "kinesis".

module "firehose" {
  source                    = "fdmsantos/kinesis-firehose/aws"
  version                   = "x.x.x"
  name                      = "firehose-delivery-stream"
  input_source              = "kinesis"
  kinesis_source_stream_arn = "<kinesis_stream_arn>"
  destination               = "s3" # or destination = "extended_s3"
  s3_bucket_arn             = "<bucket_arn>"
}
Kinesis Data Stream Encrypted

If the Kinesis Data Stream is encrypted, you must pass this information to the module.

To Enable It: kinesis_source_is_encrypted = true.

KMS Key: use the kinesis_source_kms_arn variable to specify the KMS key, so the module adds the permissions needed to decrypt the Kinesis Data Stream to the role policy.
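
A minimal sketch combining both variables (the ARNs are placeholders):

module "firehose" {
  source                      = "fdmsantos/kinesis-firehose/aws"
  version                     = "x.x.x"
  name                        = "firehose-delivery-stream"
  input_source                = "kinesis"
  kinesis_source_stream_arn   = "<kinesis_stream_arn>"
  kinesis_source_is_encrypted = true
  kinesis_source_kms_arn      = "<kms_key_arn>"
  destination                 = "s3" # or destination = "extended_s3"
  s3_bucket_arn               = "<bucket_arn>"
}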

Direct Put

To Enable It: input_source = "direct-put".

module "firehose" {
  source           = "fdmsantos/kinesis-firehose/aws"
  version          = "x.x.x"
  name             = "firehose-delivery-stream"
  input_source     = "direct-put"
  destination      = "s3" # or destination = "extended_s3"
  s3_bucket_arn    = "<bucket_arn>"
}

WAF

To Enable It: input_source = "waf".

module "firehose" {
  source           = "fdmsantos/kinesis-firehose/aws"
  version          = "x.x.x"
  name             = "firehose-delivery-stream"
  input_source     = "waf"
  destination      = "s3" # or destination = "extended_s3"
  s3_bucket_arn    = "<bucket_arn>"
}

MSK

To Enable It: input_source = "msk".

module "firehose" {
  source                 = "fdmsantos/kinesis-firehose/aws"
  version                = "x.x.x"
  name                   = "firehose-delivery-stream"
  input_source           = "msk"
  msk_source_cluster_arn = "<msk_cluster_arn>"
  msk_source_topic_name  = "test"
  destination            = "s3"
  s3_bucket_arn          = "<bucket_arn>"
}

Destinations

S3

To Enable It: destination = "s3" or destination = "extended_s3"

Variables Prefix: s3_

To Enable Encryption: enable_s3_encryption = true

Note: For other destinations, the s3_ variables configure the required intermediary bucket used before delivering data to the destination. Not supported for Elasticsearch, Splunk, and HTTP destinations.

module "firehose" {
  source                    = "fdmsantos/kinesis-firehose/aws"
  version                   = "x.x.x"
  name                      = "firehose-delivery-stream"
  destination               = "s3" # or destination = "extended_s3"
  s3_bucket_arn             = "<bucket_arn>"
}
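
A sketch with S3 encryption enabled (the KMS key ARN is a placeholder):

module "firehose" {
  source               = "fdmsantos/kinesis-firehose/aws"
  version              = "x.x.x"
  name                 = "firehose-delivery-stream"
  destination          = "s3" # or destination = "extended_s3"
  s3_bucket_arn        = "<bucket_arn>"
  enable_s3_encryption = true
  s3_kms_key_arn       = "<kms_key_arn>"
}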

Redshift

To Enable It: destination = "redshift"

Variables Prefix: redshift_

module "firehose" {
  source                        = "fdmsantos/kinesis-firehose/aws"
  version                       = "x.x.x"
  name                          = "firehose-delivery-stream"
  destination                   = "redshift"
  s3_bucket_arn                 = "<bucket_arn>"
  redshift_cluster_identifier   = "<redshift_cluster_identifier>"
  redshift_cluster_endpoint     = "<redshift_cluster_endpoint>"
  redshift_database_name        = "<redshift_cluster_database>"
  redshift_username             = "<redshift_cluster_username>"
  redshift_password             = "<redshift_cluster_password>"
  redshift_table_name           = "<redshift_cluster_table>"
  redshift_copy_options         = "json 'auto ignorecase'"
}

Elasticsearch

To Enable It: destination = "elasticsearch"

Variables Prefix: elasticsearch_

module "firehose" {
  source                   = "fdmsantos/kinesis-firehose/aws"
  version                  = "x.x.x"
  name                     = "firehose-delivery-stream"
  destination              = "elasticsearch"
  elasticsearch_domain_arn = "<elasticsearch_domain_arn>"
  elasticsearch_index_name = "<elasticsearch_index_name>"
}

Opensearch

To Enable It: destination = "opensearch"

Variables Prefix: opensearch_

module "firehose" {
  source                = "fdmsantos/kinesis-firehose/aws"
  version               = "x.x.x"
  name                  = "firehose-delivery-stream"
  destination           = "opensearch"
  opensearch_domain_arn = "<opensearch_domain_arn>"
  opensearch_index_name = "<opensearch_index_name>"
}

Opensearch Serverless

To Enable It: destination = "opensearchserverless"

Variables Prefix: opensearchserverless_

module "firehose" {
  source                                   = "fdmsantos/kinesis-firehose/aws"
  version                                  = "x.x.x"
  name                                     = "firehose-delivery-stream"
  destination                              = "opensearchserverless"
  opensearchserverless_collection_endpoint = "<opensearchserverless_collection_endpoint>"
  opensearchserverless_collection_arn      = "<opensearchserverless_collection_arn>"
  opensearch_index_name                    = "<opensearchserverless_index_name>"
}

Splunk

To Enable It: destination = "splunk"

Variables Prefix: splunk_

module "firehose" {
  source                            = "fdmsantos/kinesis-firehose/aws"
  version                           = "x.x.x"
  name                              = "firehose-delivery-stream"
  destination                       = "splunk"
  splunk_hec_endpoint               = "<splunk_hec_endpoint>"
  splunk_hec_endpoint_type          = "<splunk_hec_endpoint_type>"
  splunk_hec_token                  = "<splunk_hec_token>"
  splunk_hec_acknowledgment_timeout = 450
  splunk_retry_duration             = 450
}

Snowflake

To Enable It: destination = "snowflake"

Variables Prefix: snowflake_

module "firehose" {
  source                       = "fdmsantos/kinesis-firehose/aws"
  version                      = "x.x.x"
  name                         = "firehose-delivery-stream"
  destination                  = "snowflake"
  snowflake_account_identifier = "<snowflake_account_identifier>"
  snowflake_private_key        = "<snowflake_private_key>"
  snowflake_key_passphrase     = "<snowflake_key_passphrase>"
  snowflake_user               = "<snowflake_user>"
  snowflake_database           = "<snowflake_database>"
  snowflake_schema             = "<snowflake_schema>"
  snowflake_table              = "<snowflake_table>"
}

HTTP Endpoint

To Enable It: destination = "http_endpoint"

Variables Prefix: http_endpoint_

To enable Request Configuration: http_endpoint_enable_request_configuration = true

Request Configuration Variables Prefix: http_endpoint_request_configuration_

module "firehose" {
  source                                                = "fdmsantos/kinesis-firehose/aws"
  version                                               = "x.x.x"
  name                                                  = "firehose-delivery-stream"
  destination                                           = "http_endpoint"
  buffer_interval                                       = 60
  http_endpoint_name                                    = "<http_endpoint_name>"
  http_endpoint_url                                     = "<http_endpoint_url>"
  http_endpoint_access_key                              = "<http_endpoint_access_key>"
  http_endpoint_retry_duration                          = 400
  http_endpoint_enable_request_configuration            = true
  http_endpoint_request_configuration_content_encoding  = "GZIP"
  http_endpoint_request_configuration_common_attributes = [
    {
      name  = "testname"
      value = "testvalue"
    },
    {
      name  = "testname2"
      value = "testvalue2"
    }
  ]
}

Datadog

To Enable It: destination = "datadog"

Variables Prefix: http_endpoint_ and datadog_endpoint_type

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and datadog destinations.

module "firehose" {
  source                   = "fdmsantos/kinesis-firehose/aws"
  version                  = "x.x.x"
  name                     = "firehose-delivery-stream"
  destination              = "datadog"
  datadog_endpoint_type    = "metrics_eu"
  http_endpoint_access_key = "<datadog_api_key>"
}

New Relic

To Enable It: destination = "newrelic"

Variables Prefix: http_endpoint_ and newrelic_endpoint_type

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and newrelic destinations.

module "firehose" {
  source                   = "fdmsantos/kinesis-firehose/aws"
  version                  = "x.x.x"
  name                     = "firehose-delivery-stream"
  destination              = "newrelic"
  newrelic_endpoint_type   = "metrics_eu"
  http_endpoint_access_key = "<newrelic_api_key>"
}

Coralogix

To Enable It: destination = "coralogix"

Variables Prefix: http_endpoint_ and coralogix_

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and coralogix destinations.

Check Firehose-to-Coralogix for more details.

module "firehose" {
  source                       = "fdmsantos/kinesis-firehose/aws"
  version                      = "x.x.x"
  name                         = "firehose-delivery-stream"
  destination                  = "coralogix"
  coralogix_endpoint_location  = "ireland"
  http_endpoint_access_key     = "<coralogix_private_key>"
}

Dynatrace

To Enable It: destination = "dynatrace"

Variables Prefix: http_endpoint_, dynatrace_endpoint_location and dynatrace_api_url

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and dynatrace destinations.

module "firehose" {
  source                      = "fdmsantos/kinesis-firehose/aws"
  version                     = "x.x.x"
  name                        = "firehose-delivery-stream"
  destination                 = "dynatrace"
  dynatrace_endpoint_location = "eu"
  dynatrace_api_url           = "https://xyazb123456.live.dynatrace.com"
  http_endpoint_access_key    = "<dynatrace_api_token>"
}

Honeycomb

To Enable It: destination = "honeycomb"

Variables Prefix: http_endpoint_, honeycomb_api_host (Default: https://api.honeycomb.io) and honeycomb_dataset_name.

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and honeycomb destinations.

module "firehose" {
  source                   = "fdmsantos/kinesis-firehose/aws"
  version                  = "x.x.x"
  name                     = "firehose-delivery-stream"
  destination              = "honeycomb"
  honeycomb_api_host       = "https://api.honeycomb.io"
  honeycomb_dataset_name   = "<honeycomb_dataset_name>"
  http_endpoint_access_key = "<honeycomb_api_key>"
}

Logic Monitor

To Enable It: destination = "logicmonitor"

Variables Prefix: http_endpoint_ and logicmonitor_account

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and logicmonitor destinations.

module "firehose" {
  source                   = "fdmsantos/kinesis-firehose/aws"
  version                  = "x.x.x"
  name                     = "firehose-delivery-stream"
  destination              = "logicmonitor"
  logicmonitor_account     = "<logicmonitor_account>"
  http_endpoint_access_key = "<logicmonitor_api_key>"
}

MongoDB

To Enable It: destination = "mongodb"

Variables Prefix: http_endpoint_ and mongodb_realm_webhook_url

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and mongodb destinations.

module "firehose" {
  source                    = "fdmsantos/kinesis-firehose/aws"
  version                   = "x.x.x"
  name                      = "firehose-delivery-stream"
  destination               = "mongodb"
  mongodb_realm_webhook_url = "<mongodb_realm_webhook_url>"
  http_endpoint_access_key  = "<mongodb_api_key>"
}

SumoLogic

To Enable It: destination = "sumologic"

Variables Prefix: http_endpoint_, sumologic_deployment_name and sumologic_data_type

Check HTTP Endpoint for more details and Destinations Mapping for the differences between the http_endpoint and sumologic destinations.

module "firehose" {
  source                    = "fdmsantos/kinesis-firehose/aws"
  version                   = "x.x.x"
  name                      = "firehose-delivery-stream"
  destination               = "sumologic"
  sumologic_deployment_name = "<sumologic_deployment_name>"
  sumologic_data_type       = "<sumologic_data_type>"
  http_endpoint_access_key  = "<sumologic_access_token>"
}

Iceberg

To Enable It: destination = "iceberg"

Variables Prefix: iceberg_

module "firehose" {
  source                = "fdmsantos/kinesis-firehose/aws"
  version               = "x.x.x"
  name                  = "firehose-delivery-stream"
  destination           = "iceberg"
  s3_bucket_arn         = "<s3_bucket_arn>"
  iceberg_catalog_arn   = "arn:${data.aws_partition.current.partition}:glue:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:catalog"
  iceberg_database_name = "<database>"
  iceberg_table_name    = "<table>"
}

Server Side Encryption

Supported By: Only the Direct Put source

To Enable It: enable_sse = true

Variables Prefix: sse_

module "firehose" {
  source           = "fdmsantos/kinesis-firehose/aws"
  version          = "x.x.x"
  name             = "firehose-delivery-stream"
  destination      = "s3" # or destination = "extended_s3"
  s3_bucket_arn    = "<bucket_arn>"
  enable_sse       = true
  sse_kms_key_type = "CUSTOMER_MANAGED_CMK"
  sse_kms_key_arn  = aws_kms_key.this.arn
}

Data Transformation with Lambda

Supported By: All destinations and sources

To Enable It: enable_lambda_transform = true

Variables Prefix: transform_lambda_

module "firehose" {
  source                           = "fdmsantos/kinesis-firehose/aws"
  version                          = "x.x.x"
  name                             = "firehose-delivery-stream"
  input_source                     = "kinesis"
  kinesis_source_stream_arn        = "<kinesis_stream_arn>"
  destination                      = "s3" # or destination = "extended_s3"
  s3_bucket_arn                    = "<bucket_arn>"
  enable_lambda_transform          = true
  transform_lambda_arn             = "<lambda_arn>"
  transform_lambda_buffer_size     = 2  # Omit this parameter to use the default value (1)
  transform_lambda_buffer_interval = 90 # Omit this parameter to use the default value (60)
  transform_lambda_number_retries  = 4  # Omit this parameter to use the default value (3)
}

Data Format Conversion

Supported By: Only S3 Destination

To Enable It: enable_data_format_conversion = true

Variables Prefix: data_format_conversion_

module "firehose" {
  source                                 = "fdmsantos/kinesis-firehose/aws"
  version                                = "x.x.x"
  name                                   = "firehose-delivery-stream"
  input_source                           = "kinesis"
  kinesis_source_stream_arn              = "<kinesis_stream_arn>"
  destination                            = "s3" # or destination = "extended_s3"
  s3_bucket_arn                          = "<bucket_arn>"
  enable_data_format_conversion          = true
  data_format_conversion_glue_database   = "<glue_database_name>"
  data_format_conversion_glue_table_name = "<glue_table_name>"
  data_format_conversion_input_format    = "HIVE"
  data_format_conversion_output_format   = "ORC"
}

Dynamic Partition

Supported By: Only S3 Destination

To Enable It: enable_dynamic_partitioning = true

Variables Prefix: dynamic_partitioning_

module "firehose" {
  source                                        = "fdmsantos/kinesis-firehose/aws"
  version                                       = "x.x.x"
  name                                          = "firehose-delivery-stream"
  input_source                                  = "kinesis"
  kinesis_source_stream_arn                     = "<kinesis_stream_arn>"
  destination                                   = "s3" # or destination = "extended_s3"
  s3_bucket_arn                                 = "<bucket_arn>"
  s3_prefix                                     = "prod/user_id=!{partitionKeyFromQuery:user_id}/year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}/"
  enable_dynamic_partitioning                   = true
  dynamic_partitioning_retry_duration           = 350
  dynamic_partition_metadata_extractor_query    = "{user_id:.user_id}"
  dynamic_partition_enable_record_deaggregation = true
}

S3 Backup Data

Supported By: All Destinations

To Enable It: enable_s3_backup = true. It's always enabled for Elasticsearch, Splunk, and HTTP destinations

To Enable Backup Encryption: s3_backup_enable_encryption = true

To Enable Backup Logging: s3_backup_enable_log = true. Not supported for Elasticsearch, Splunk, and HTTP destinations. You can use an existing CloudWatch log group or create a new one

Variables Prefix: s3_backup_

module "firehose" {
  source                        = "fdmsantos/kinesis-firehose/aws"
  version                       = "x.x.x"
  name                          = "${var.name_prefix}-delivery-stream"
  destination                   = "s3" # or destination = "extended_s3"
  s3_bucket_arn                 = aws_s3_bucket.s3.arn
  enable_s3_backup              = true
  s3_backup_bucket_arn          = aws_s3_bucket.s3.arn
  s3_backup_prefix              = "backup/"
  s3_backup_error_output_prefix = "error/"
  s3_backup_buffer_interval     = 100
  s3_backup_buffer_size         = 100
  s3_backup_compression         = "GZIP"
  s3_backup_use_existing_role   = false
  s3_backup_role_arn            = aws_iam_role.this.arn
  s3_backup_enable_encryption   = true
  s3_backup_kms_key_arn         = aws_kms_key.this.arn
  s3_backup_enable_log          = true
}

Destination Delivery Logging

Supported By: All Destinations

To Enable It: enable_destination_log = true. It's enabled by default. You can use an existing CloudWatch log group or create a new one

Variables Prefix: destination_cw_log_

module "firehose" {
  source                      = "fdmsantos/kinesis-firehose/aws"
  version                     = "x.x.x"
  name                        = "firehose-delivery-stream"
  destination                 = "s3" # or destination = "extended_s3"
  s3_bucket_arn               = "<bucket_arn>"
  enable_destination_log      = true
  destination_log_group_name  = "<cw_log_group_name>"
  destination_log_stream_name = "<cw_log_stream_name>"
}

VPC Support

It's possible to use the module only to create security groups.

Use the variable create = false for this feature.
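
A minimal sketch of this security-groups-only mode (assumes an Opensearch destination and that the module derives the VPC from the provided subnets):

module "firehose" {
  source                                = "fdmsantos/kinesis-firehose/aws"
  version                               = "x.x.x"
  name                                  = "firehose-delivery-stream"
  create                                = false
  destination                           = "opensearch"
  enable_vpc                            = true
  vpc_subnet_ids                        = "<list(subnets_ids)>"
  vpc_create_security_group             = true
  vpc_create_destination_security_group = true
}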

ElasticSearch / Opensearch / Opensearch Serverless

Supported By: ElasticSearch / Opensearch / Opensearch Serverless destinations

To Enable It: enable_vpc = true

To Create Opensearch IAM Service Linked Role: opensearch_vpc_create_service_linked_role = true

If you want separate security groups for Firehose and the destination: vpc_security_group_same_as_destination = false

Examples

# Creates the Security Groups (For firehose and Destination)
module "firehose" {
  source                                               = "fdmsantos/kinesis-firehose/aws"
  version                                              = "x.x.x"
  name                                                 = "firehose-delivery-stream"
  destination                                          = "opensearch"
  opensearch_domain_arn                                = "<opensearch_domain_arn>"
  opensearch_index_name                                = "<opensearch_index_name>"
  enable_vpc                                           = true
  vpc_subnet_ids                                       = "<list(subnets_ids)>"
  vpc_create_security_group                            = true
  vpc_create_destination_security_group                = true
  elasticsearch_vpc_security_group_same_as_destination = false
}
# Use Existing Security Group
module "firehose" {
  source                          = "fdmsantos/kinesis-firehose/aws"
  version                         = "x.x.x"
  name                            = "firehose-delivery-stream"
  destination                     = "opensearch"
  opensearch_domain_arn           = "<opensearch_domain_arn>"
  opensearch_index_name           = "<opensearch_index_name>"
  enable_vpc                      = true
  vpc_subnet_ids                  = "<list(subnets_ids)>"
  vpc_security_group_firehose_ids = "<list(security_group_ids)>"
}
# Configure Existing Security Groups
module "firehose" {
  source                                            = "fdmsantos/kinesis-firehose/aws"
  version                                           = "x.x.x"
  name                                              = "firehose-delivery-stream"
  destination                                       = "opensearch"
  opensearch_domain_arn                             = "<opensearch_domain_arn>"
  opensearch_index_name                             = "<opensearch_index_name>"
  enable_vpc                                        = true
  vpc_subnet_ids                                    = "<list(subnets_ids)>"
  vpc_security_group_firehose_configure_existing    = true
  vpc_security_group_firehose_ids                   = "<list(security_group_ids)>"
  vpc_security_group_destination_configure_existing = true
  vpc_security_group_destination_ids                = "<list(security_group_ids)>"
}

Redshift / Splunk

Supported By: Redshift and Splunk destinations

To get the Firehose CIDR blocks to allow in the destination security groups, use the following output: firehose_cidr_blocks

# Create the Security Group (For Destination)
module "firehose" {
  source                                = "fdmsantos/kinesis-firehose/aws"
  version                               = "x.x.x"
  name                                  = "firehose-delivery-stream"
  destination                           = "<redshift|splunk>"
  vpc_create_destination_security_group = true
}
# Configure Existing Security Groups
module "firehose" {
  source                                            = "fdmsantos/kinesis-firehose/aws"
  version                                           = "x.x.x"
  name                                              = "firehose-delivery-stream"
  destination                                       = "<redshift|splunk>"
  vpc_security_group_destination_configure_existing = true
  vpc_security_group_destination_ids                = "<list(security_group_ids)>"
}
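
The output can then be referenced when opening the destination port. A sketch (the port and security group ID are illustrative, e.g. Redshift on 5439):

# Allow Firehose to reach the destination through its published CIDR blocks
resource "aws_security_group_rule" "firehose_ingress" {
  type              = "ingress"
  from_port         = 5439
  to_port           = 5439
  protocol          = "tcp"
  cidr_blocks       = module.firehose.firehose_cidr_blocks
  security_group_id = "<destination_security_group_id>"
}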

Application Role

Supported By: Direct Put Source

To Create: create_application_role = true

To Create Policy: create_application_role_policy = true

Variables Prefix: application_role_

# Create an Application Role for an application that runs on an EC2 instance
module "firehose" {
  source                             = "fdmsantos/kinesis-firehose/aws"
  version                            = "x.x.x"
  name                               = "firehose-delivery-stream"
  destination                        = "s3" # or destination = "extended_s3"
  create_application_role            = true
  create_application_role_policy     = true
  application_role_service_principal = "ec2.amazonaws.com"
}
# Configure an existing Application Role for an application that runs on an EC2 instance, with a policy containing the provided actions
module "firehose" {
  source                              = "fdmsantos/kinesis-firehose/aws"
  version                             = "x.x.x"
  name                                = "firehose-delivery-stream"
  destination                         = "s3" # or destination = "extended_s3"
  configure_existing_application_role = true
  application_role_name               = "application-role"
  create_application_role_policy      = true
  application_role_policy_actions     = [
    "firehose:PutRecord",
    "firehose:PutRecordBatch",
    "firehose:CreateDeliveryStream",
    "firehose:UpdateDestination"
  ]
}

Secrets Manager

Supported By: Snowflake, Redshift, Splunk, and HTTP Endpoint destinations

To Enable: enable_secrets_manager = true

Variables Prefix: secret_

module "firehose" {
  source                      = "fdmsantos/kinesis-firehose/aws"
  version                     = "x.x.x"
  name                        = "firehose-delivery-stream"
  destination                 = "redshift"
  s3_bucket_arn               = "<bucket_arn>"
  redshift_cluster_identifier = "<redshift_cluster_identifier>"
  redshift_cluster_endpoint   = "<redshift_cluster_endpoint>"
  redshift_database_name      = "<redshift_cluster_database>"
  enable_secrets_manager      = true
  secret_arn                  = "<secret_arn>"
  secret_kms_key_arn          = "<secret_kms_key_arn>"
  redshift_table_name         = "<redshift_cluster_table>"
  redshift_copy_options       = "json 'auto ignorecase'"
}

Destinations Mapping

The destination variable configured in the module is mapped to a valid Firehose destination.

Module Destination Firehose Destination Differences
s3 and extended_s3 extended_s3 There is no difference between the s3 and extended_s3 destinations
redshift redshift
splunk splunk
elasticsearch elasticsearch
opensearch opensearch
opensearchserverless opensearchserverless
snowflake snowflake
iceberg iceberg
http_endpoint http_endpoint
datadog http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the datadog_endpoint_type variable instead
newrelic http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the newrelic_endpoint_type variable instead
coralogix http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the coralogix_endpoint_location variable instead
dynatrace http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the dynatrace_endpoint_location and dynatrace_api_url variables instead
honeycomb http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the honeycomb_dataset_name variable instead
logicmonitor http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the logicmonitor_account variable instead
mongodb http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the mongodb_realm_webhook_url variable instead
sumologic http_endpoint The http_endpoint_url and http_endpoint_name variables aren't supported; configure the sumologic_deployment_name and sumologic_data_type variables instead

Examples

  • Direct Put - Creates an encrypted Kinesis firehose stream with Direct Put as source and S3 as destination.
  • Direct Put With Lambda - Creates a Kinesis firehose stream with Direct Put as source and S3 as destination with transformation lambda.
  • Kinesis Data Stream Source - Creates a basic Kinesis Firehose stream with Kinesis data stream as source and s3 as destination.
  • WAF Source - Creates a Kinesis Firehose Stream with AWS WAF as source and S3 as destination.
  • MSK Source - Creates a Kinesis Firehose Stream with MSK Cluster as source and S3 as destination.
  • S3 Destination Complete - Creates a Kinesis Firehose Stream with all features enabled.
  • Redshift - Creates a Kinesis Firehose Stream with redshift as destination.
  • Redshift - Creates a Kinesis Firehose Stream with redshift as destination using secrets manager.
  • Redshift In VPC - Creates a Kinesis Firehose Stream with redshift in VPC as destination.
  • Public Opensearch - Creates a Kinesis Firehose Stream with public opensearch as destination.
  • Public Opensearch Serverless - Creates a Kinesis Firehose Stream with public serverless opensearch as destination.
  • Opensearch Serverless In Vpc - Creates a Kinesis Firehose Stream with serverless opensearch in VPC as destination.
  • Public Splunk - Creates a Kinesis Firehose Stream with public splunk as destination.
  • Splunk In VPC - Creates a Kinesis Firehose Stream with splunk in VPC as destination.
  • Snowflake - Creates a Kinesis Firehose Stream with snowflake as destination.
  • Custom Http Endpoint - Creates a Kinesis Firehose Stream with custom http endpoint as destination.
  • Datadog - Creates a Kinesis Firehose Stream with datadog europe metrics as destination.
  • New Relic - Creates a Kinesis Firehose Stream with New Relic europe metrics as destination.
  • Coralogix - Creates a Kinesis Firehose Stream with coralogix ireland as destination.
  • Dynatrace - Creates a Kinesis Firehose Stream with dynatrace europe as destination.
  • Honeycomb - Creates a Kinesis Firehose Stream with honeycomb as destination.
  • LogicMonitor - Creates a Kinesis Firehose Stream with Logic Monitor as destination.
  • MongoDB - Creates a Kinesis Firehose Stream with MongoDB as destination.
  • SumoLogic - Creates a Kinesis Firehose Stream with Sumo Logic as destination.
  • Iceberg - Creates a Kinesis Firehose Stream with Iceberg as destination.

Requirements

Name Version
terraform >= 0.13.1
aws >= 5.73

Providers

Name Version
aws >= 5.73

Modules

No modules.

Resources

Name Type
aws_cloudwatch_log_group.log resource
aws_cloudwatch_log_stream.backup resource
aws_cloudwatch_log_stream.destination resource
aws_iam_policy.application resource
aws_iam_policy.cw resource
aws_iam_policy.elasticsearch resource
aws_iam_policy.glue resource
aws_iam_policy.iceberg resource
aws_iam_policy.kinesis resource
aws_iam_policy.lambda resource
aws_iam_policy.msk resource
aws_iam_policy.opensearch resource
aws_iam_policy.opensearchserverless resource
aws_iam_policy.s3 resource
aws_iam_policy.s3_kms resource
aws_iam_policy.secretsmanager resource
aws_iam_policy.secretsmanager_cmk_encryption resource
aws_iam_policy.vpc resource
aws_iam_role.application resource
aws_iam_role.firehose resource
aws_iam_role_policy_attachment.application resource
aws_iam_role_policy_attachment.cw resource
aws_iam_role_policy_attachment.elasticsearch resource
aws_iam_role_policy_attachment.glue resource
aws_iam_role_policy_attachment.iceberg resource
aws_iam_role_policy_attachment.kinesis resource
aws_iam_role_policy_attachment.lambda resource
aws_iam_role_policy_attachment.msk resource
aws_iam_role_policy_attachment.opensearch resource
aws_iam_role_policy_attachment.opensearchserverless resource
aws_iam_role_policy_attachment.s3 resource
aws_iam_role_policy_attachment.s3_kms resource
aws_iam_role_policy_attachment.secretsmanager resource
aws_iam_role_policy_attachment.secretsmanager_cmk_encryption resource
aws_iam_role_policy_attachment.vpc resource
aws_iam_service_linked_role.opensearch resource
aws_iam_service_linked_role.opensearchserverless resource
aws_kinesis_firehose_delivery_stream.this resource
aws_redshift_cluster_iam_roles.this resource
aws_security_group.destination resource
aws_security_group.firehose resource
aws_security_group_rule.destination resource
aws_security_group_rule.firehose resource
aws_security_group_rule.firehose_egress_rule resource
aws_caller_identity.current data source
aws_iam_policy_document.application data source
aws_iam_policy_document.application_assume_role data source
aws_iam_policy_document.assume_role data source
aws_iam_policy_document.cross_account_elasticsearch data source
aws_iam_policy_document.cross_account_opensearch data source
aws_iam_policy_document.cross_account_opensearchserverless data source
aws_iam_policy_document.cross_account_s3 data source
aws_iam_policy_document.cw data source
aws_iam_policy_document.elasticsearch data source
aws_iam_policy_document.glue data source
aws_iam_policy_document.iceberg data source
aws_iam_policy_document.kinesis data source
aws_iam_policy_document.lambda data source
aws_iam_policy_document.msk data source
aws_iam_policy_document.opensearch data source
aws_iam_policy_document.opensearchserverless data source
aws_iam_policy_document.s3 data source
aws_iam_policy_document.s3_kms data source
aws_iam_policy_document.secretsmanager data source
aws_iam_policy_document.secretsmanager_cmk_encryption data source
aws_iam_policy_document.vpc data source
aws_region.current data source
aws_subnet.subnet data source

Inputs

Name Description Type Default Required
append_delimiter_to_record To configure your delivery stream to add a new line delimiter between records in objects that are delivered to Amazon S3. bool false no
application_role_description Description of IAM Application role to use for Kinesis Firehose Stream Source string null no
application_role_force_detach_policies Specifies to force detaching any policies the IAM Application role has before destroying it bool true no
application_role_name Name of IAM Application role to use for Kinesis Firehose Stream Source string null no
application_role_path Path of IAM Application role to use for Kinesis Firehose Stream Source string null no
application_role_permissions_boundary The ARN of the policy that is used to set the permissions boundary for the IAM Application role used by Kinesis Firehose Stream Source string null no
application_role_policy_actions List of Actions to Application Role Policy list(string) ["firehose:PutRecord", "firehose:PutRecordBatch"] no
application_role_service_principal AWS Service Principal to assume application role string null no
application_role_tags A map of tags to assign to IAM Application role map(string) {} no
associate_role_to_redshift_cluster Set it to false if you don't want the module to associate the role to the redshift cluster bool true no
buffering_interval Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination number 300 no
buffering_size Buffer incoming data to the specified size, in MBs, before delivering it to the destination. number 5 no
configure_existing_application_role Set it to true if you want to use an existing application role to attach the firehose policy bool false no
coralogix_endpoint_location Endpoint Location to coralogix destination string "ireland" no
coralogix_parameter_application_name By default, your delivery stream arn will be used as applicationName. string null no
coralogix_parameter_subsystem_name By default, your delivery stream name will be used as subsystemName. string null no
coralogix_parameter_use_dynamic_values To use dynamic values for applicationName and subsystemName. bool false no
create Controls if kinesis firehose should be created (it affects almost all resources) bool true no
create_application_role Set it to true to create a role to be used by the source bool false no
create_application_role_policy Set it to true to create a policy for the role used by the source bool false no
create_destination_cw_log_group Enables or disables the cloudwatch log group creation to destination bool true no
create_role Controls whether IAM role for Kinesis Firehose Stream should be created bool true no
cw_log_retention_in_days Specifies the number of days you want to retain log events in the specified log group. Possible values are: 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. number null no
cw_tags A map of tags to assign to the resource. map(string) {} no
data_format_conversion_block_size The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The Value is in Bytes. number 268435456 no
data_format_conversion_glue_catalog_id The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default. string null no
data_format_conversion_glue_database Name of the AWS Glue database that contains the schema for the output data. string null no
data_format_conversion_glue_region If you don't specify an AWS Region, the default is the current region. string null no
data_format_conversion_glue_role_arn The role that Kinesis Data Firehose can use to access AWS Glue. This role must be in the same account you use for Kinesis Data Firehose. Cross-account roles aren't allowed. string null no
data_format_conversion_glue_table_name Specifies the AWS Glue table that contains the column information that constitutes your data schema string null no
data_format_conversion_glue_use_existing_role Indicates whether to use the kinesis firehose role for glue access. bool true no
data_format_conversion_glue_version_id Specifies the table version for the output data schema. string "LATEST" no
data_format_conversion_hive_timestamps A list of how you want Kinesis Data Firehose to parse the date and time stamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. list(string) [] no
data_format_conversion_input_format Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe string "OpenX" no
data_format_conversion_openx_case_insensitive When set to true, Kinesis Data Firehose converts JSON keys to lowercase before deserializing them. bool true no
data_format_conversion_openx_column_to_json_key_mappings A map of column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. map(string) null no
data_format_conversion_openx_convert_dots_to_underscores Specifies that the names of the keys include dots and that you want Kinesis Data Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. bool false no
data_format_conversion_orc_bloom_filter_columns A list of column names for which you want Kinesis Data Firehose to create bloom filters. list(string) [] no
data_format_conversion_orc_bloom_filter_false_positive_probability The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. number 0.05 no
data_format_conversion_orc_compression The compression code to use over data blocks. string "SNAPPY" no
data_format_conversion_orc_dict_key_threshold A float that represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1. number 0 no
data_format_conversion_orc_enable_padding Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. bool false no
data_format_conversion_orc_format_version The version of the file to write. string "V0_12" no
data_format_conversion_orc_padding_tolerance A float between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. number 0.05 no
data_format_conversion_orc_row_index_stripe The number of rows between index entries. number 10000 no
data_format_conversion_orc_stripe_size The number of bytes in each stripe. number 67108864 no
data_format_conversion_output_format Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe string "PARQUET" no
data_format_conversion_parquet_compression The compression code to use over data blocks. string "SNAPPY" no
data_format_conversion_parquet_dict_compression Indicates whether to enable dictionary compression. bool false no
data_format_conversion_parquet_max_padding The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The value is in bytes number 0 no
data_format_conversion_parquet_page_size Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The value is in bytes number 1048576 no
data_format_conversion_parquet_writer_version Indicates the version of row format to output. string "V1" no
datadog_endpoint_type Endpoint type to datadog destination string "logs_eu" no
destination This is the destination where the data is delivered string n/a yes
destination_cross_account Indicates if destination is in a different account. Only supported to Elasticsearch and OpenSearch bool false no
destination_log_group_name The CloudWatch group name for destination logs string null no
destination_log_stream_name The CloudWatch log stream name for destination logs string null no
dynamic_partition_append_delimiter_to_record DEPRECATED!! Use var append_delimiter_to_record instead!! To configure your delivery stream to add a new line delimiter between records in objects that are delivered to Amazon S3. bool false no
dynamic_partition_enable_record_deaggregation Data deaggregation is the process of parsing through the records in a delivery stream and separating the records based either on valid JSON or on the specified delimiter bool false no
dynamic_partition_metadata_extractor_query Dynamic Partition JQ query. string null no
dynamic_partition_record_deaggregation_delimiter Specifies the delimiter to be used for parsing through the records in the delivery stream and deaggregating them string null no
dynamic_partition_record_deaggregation_type Data deaggregation is the process of parsing through the records in a delivery stream and separating the records based either on valid JSON or on the specified delimiter string "JSON" no
dynamic_partitioning_retry_duration Total amount of seconds Firehose spends on retries number 300 no
dynatrace_api_url API URL to Dynatrace destination string null no
dynatrace_endpoint_location Endpoint Location to Dynatrace destination string "eu" no
elasticsearch_domain_arn The ARN of the Amazon ES domain. The pattern needs to be arn:.* string null no
elasticsearch_index_name The Elasticsearch index name string null no
elasticsearch_index_rotation_period The Elasticsearch index rotation period. Index rotation appends a timestamp to the IndexName to facilitate expiration of old data string "OneDay" no
elasticsearch_retry_duration The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt string 300 no
elasticsearch_type_name The Elasticsearch type name with maximum length of 100 characters string null no
enable_cloudwatch_logs_data_message_extraction Cloudwatch Logs data message extraction bool false no
enable_cloudwatch_logs_decompression Enables or disables Cloudwatch Logs decompression bool false no
enable_data_format_conversion Set it to true if you want to enable format conversion. bool false no
enable_destination_log The CloudWatch Logging Options for the delivery stream bool true no
enable_dynamic_partitioning Enables or disables dynamic partitioning bool false no
enable_lambda_transform Set it to true to enable data transformation with lambda bool false no
enable_s3_backup The Amazon S3 backup mode bool false no
enable_s3_encryption Indicates whether to use encryption in the S3 bucket. bool false no
enable_secrets_manager Enables or disables the Secrets Manager configuration. bool false no
enable_sse Whether to enable encryption at rest. Only makes sense when source is Direct Put bool false no
enable_vpc Indicates if destination is configured in VPC. Supports Elasticsearch and Opensearch destinations. bool false no
firehose_role IAM role ARN attached to the Kinesis Firehose Stream. string null no
honeycomb_api_host If you use a Secure Tenancy or other proxy, put its schema://host[:port] here string "https://api.honeycomb.io" no
honeycomb_dataset_name Your Honeycomb dataset name to Honeycomb destination string null no
http_endpoint_access_key The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination string null no
http_endpoint_enable_request_configuration The request configuration bool false no
http_endpoint_name The HTTP endpoint name string null no
http_endpoint_request_configuration_common_attributes Describes the metadata sent to the HTTP endpoint destination. The variable is list. Each element is map with two keys , name and value, that corresponds to common attribute name and value list(map(string)) [] no
http_endpoint_request_configuration_content_encoding Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination string "GZIP" no
http_endpoint_retry_duration Total amount of seconds Firehose spends on retries. This duration starts after the initial attempt fails, It does not include the time periods during which Firehose waits for acknowledgment from the specified destination after each attempt number 300 no
http_endpoint_url The HTTP endpoint URL to which Kinesis Firehose sends your data string null no
iceberg_catalog_arn Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog. string null no
iceberg_database_name The name of the Apache Iceberg database. string null no
iceberg_destination_config_s3_error_output_prefix The table specific S3 error output prefix. All the errors that occurred while delivering to this table will be prefixed with this value in S3 destination. string null no
iceberg_destination_config_unique_keys A list of unique keys for a given Apache Iceberg table. Firehose will use these for running Create, Update, or Delete operations on the given Iceberg table. list(string) [] no
iceberg_retry_duration The period of time, in seconds between 0 to 7200, during which Firehose retries to deliver data to the specified destination. number 300 no
iceberg_table_name The name of the Apache Iceberg Table. string null no
input_source This is the kinesis firehose source string "direct-put" no
kinesis_source_is_encrypted Indicates if Kinesis data stream source is encrypted bool false no
kinesis_source_kms_arn Kinesis Source KMS Key to add Firehose role to decrypt the records. string null no
kinesis_source_role_arn DEPRECATED!! Use variable source_role_arn instead! The ARN of the role that provides access to the source Kinesis stream string null no
kinesis_source_stream_arn The kinesis stream used as the source of the firehose delivery stream string null no
kinesis_source_use_existing_role DEPRECATED!! Use variable source_use_existing_role instead! Indicates whether to use the kinesis firehose role for kinesis data stream access. bool true no
logicmonitor_account Account to use in Logic Monitor destination string null no
mongodb_realm_webhook_url Realm Webhook URL to use in MongoDB destination string null no
msk_source_cluster_arn The ARN of the Amazon MSK cluster. string null no
msk_source_connectivity_type The type of connectivity used to access the Amazon MSK cluster. Valid values: PUBLIC, PRIVATE. string "PUBLIC" no
msk_source_topic_name The topic name within the Amazon MSK cluster. string null no
name A name to identify the stream. This is unique to the AWS account and region the Stream is created in string n/a yes
newrelic_endpoint_type Endpoint type to New Relic destination string "logs_eu" no
opensearch_document_id_options The method for setting up document ID. string "FIREHOSE_DEFAULT" no
opensearch_domain_arn The ARN of the Amazon Opensearch domain. The pattern needs to be arn:.*. Conflicts with cluster_endpoint. string null no
opensearch_index_name The Opensearch (And OpenSearch Serverless) index name. string null no
opensearch_index_rotation_period The Opensearch index rotation period. Index rotation appends a timestamp to the IndexName to facilitate expiration of old data string "OneDay" no
opensearch_retry_duration After an initial failure to deliver to Amazon OpenSearch, the total amount of time, in seconds between 0 to 7200, during which Firehose re-attempts delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. The default value is 300s. There will be no retry if the value is 0. string 300 no
opensearch_type_name The opensearch type name with maximum length of 100 characters. Types are deprecated in OpenSearch_1.1. TypeName must be empty. string null no
opensearch_vpc_create_service_linked_role Set it to true if you want to create the Opensearch Service Linked Role to access the VPC. bool false no
opensearchserverless_collection_arn The ARN of the Amazon Opensearch Serverless Collection. The pattern needs to be arn:.*. string null no
opensearchserverless_collection_endpoint The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service. string null no
policy_path Path of policies to that should be added to IAM role for Kinesis Firehose Stream string null no
redshift_cluster_endpoint The redshift endpoint string null no
redshift_cluster_identifier Redshift Cluster identifier. Necessary to associate the iam role to cluster string null no
redshift_copy_options Copy options for copying the data from the s3 intermediate bucket into redshift, for example to change the default delimiter string null no
redshift_data_table_columns The data table columns that will be targeted by the copy command string null no
redshift_database_name The redshift database name string null no
redshift_password The password for the redshift username above string null no
redshift_retry_duration The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt string 3600 no
redshift_table_name The name of the table in the redshift cluster that the s3 bucket will copy to string null no
redshift_username The username that the firehose delivery stream will assume. It is strongly recommended that the username and password provided is used exclusively for Amazon Kinesis Firehose purposes, and that the permissions for the account are restricted for Amazon Redshift INSERT permissions string null no
role_description Description of IAM role to use for Kinesis Firehose Stream string null no
role_force_detach_policies Specifies to force detaching any policies the IAM role has before destroying it bool true no
role_name Name of IAM role to use for Kinesis Firehose Stream string null no
role_path Path of IAM role to use for Kinesis Firehose Stream string null no
role_permissions_boundary The ARN of the policy that is used to set the permissions boundary for the IAM role used by Kinesis Firehose Stream string null no
role_tags A map of tags to assign to IAM role map(string) {} no
s3_backup_bucket_arn The ARN of the S3 backup bucket string null no
s3_backup_buffering_interval Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. number 300 no
s3_backup_buffering_size Buffer incoming data to the specified size, in MBs, before delivering it to the destination. number 5 no
s3_backup_compression The compression format string "UNCOMPRESSED" no
s3_backup_create_cw_log_group Enables or disables the cloudwatch log group creation bool true no
s3_backup_enable_encryption Indicates whether to enable KMS encryption in the S3 backup bucket. bool false no
s3_backup_enable_log Enables or disables the logging bool true no
s3_backup_error_output_prefix Prefix added to failed records before writing them to S3 string null no
s3_backup_kms_key_arn Specifies the KMS key ARN the stream will use to encrypt data. If not set, no encryption will be used. string null no
s3_backup_log_group_name The CloudWatch group name for logging string null no
s3_backup_log_stream_name The CloudWatch log stream name for logging string null no
s3_backup_mode Defines how documents should be delivered to Amazon S3. Used to elasticsearch, opensearch, splunk, http configurations. For S3 and Redshift use enable_s3_backup string "FailedOnly" no
s3_backup_prefix The YYYY/MM/DD/HH time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket string null no
s3_backup_role_arn The role that Kinesis Data Firehose can use to access S3 Backup. string null no
s3_backup_use_existing_role Indicates whether to use the kinesis firehose role for s3 backup bucket access. bool true no
s3_bucket_arn The ARN of the S3 destination bucket string null no
s3_compression_format The compression format string "UNCOMPRESSED" no
s3_configuration_buffering_interval Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. number 300 no
s3_configuration_buffering_size Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting SizeInMBs to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec set SizeInMBs to be 10 MB or higher. number 5 no
s3_cross_account Indicates if S3 bucket destination is in a different account bool false no
s3_custom_time_zone The time zone you prefer. Valid values are UTC or a non-3-letter IANA time zones (for example, America/Los_Angeles). Default value is UTC. string "UTC" no
s3_error_output_prefix Prefix added to failed records before writing them to S3. This prefix appears immediately following the bucket name. string null no
s3_file_extension The file extension to override the default file extension (for example, .json). string null no
s3_kms_key_arn Specifies the KMS key ARN the stream will use to encrypt data. If not set, no encryption will be used string null no
s3_own_bucket Indicates if you own the bucket. If not, permissions will be configured to grant the bucket owner full access to the objects delivered by Kinesis Data Firehose bool true no
s3_prefix The YYYY/MM/DD/HH time format prefix is automatically used for delivered S3 files. You can specify an extra prefix to be added in front of the time format prefix. Note that if the prefix ends with a slash, it appears as a folder in the S3 bucket string null no
secret_arn The ARN of the Secrets Manager secret. This value is required if enable_secrets_manager is true. string null no
secret_kms_key_arn The ARN of the KMS Key to encrypt the Secret. This value is required if key used to encrypt the Secret is CMK and want the module generates the IAM Policy to access it. string null no
snowflake_account_identifier The Snowflake account identifier. string null no
snowflake_content_column_name The name of the content column. string null no
snowflake_data_loading_option The data loading option. string null no
snowflake_database The Snowflake database name. string null no
snowflake_key_passphrase The Snowflake passphrase for the private key. string null no
snowflake_metadata_column_name The name of the metadata column. string null no
snowflake_private_key The Snowflake private key for authentication. string null no
snowflake_private_link_vpce_id The VPCE ID for Firehose to privately connect with Snowflake. string null no
snowflake_retry_duration The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. string 60 no
snowflake_role_configuration_enabled Whether the Snowflake role is enabled. bool false no
snowflake_role_configuration_role The Snowflake role. string null no
snowflake_schema The Snowflake schema name. string null no
snowflake_table The Snowflake table name. string null no
snowflake_user The user for authentication. string null no
source_role_arn The ARN of the role that provides access to the source. Only supported for Kinesis and MSK sources string null no
source_use_existing_role Indicates whether to use the Kinesis Firehose role for source access. Only supported for Kinesis and MSK sources bool true no
splunk_hec_acknowledgment_timeout The amount of time that Kinesis Firehose waits to receive an acknowledgment from Splunk after it sends the data number 600 no
splunk_hec_endpoint The HTTP Event Collector (HEC) endpoint to which Kinesis Firehose sends your data string null no
splunk_hec_endpoint_type The HEC endpoint type string "Raw" no
splunk_hec_token The GUID that you obtain from your Splunk cluster when you create a new HEC endpoint string null no
splunk_retry_duration After an initial failure to deliver to Splunk, the total amount of time, in seconds between 0 and 7200, during which Firehose re-attempts delivery (including the first attempt) number 300 no
sse_kms_key_arn Amazon Resource Name (ARN) of the encryption key string null no
sse_kms_key_type Type of encryption key. string "AWS_OWNED_CMK" no
sumologic_data_type Data Type to use in Sumo Logic destination string "log" no
sumologic_deployment_name Deployment Name to use in Sumo Logic destination string null no
tags A map of tags to assign to resources. map(string) {} no
transform_lambda_arn Lambda ARN to Transform source records string null no
transform_lambda_buffer_interval The period of time during which Kinesis Data Firehose buffers incoming data before invoking the AWS Lambda function. The AWS Lambda function is invoked once the value of the buffer size or the buffer interval is reached. number null no
transform_lambda_buffer_size The AWS Lambda function has a 6 MB invocation payload quota. Your data can expand in size after it's processed by the AWS Lambda function. A smaller buffer size allows for more room should the data expand after processing. number null no
transform_lambda_number_retries Number of retries for AWS Transformation lambda number null no
transform_lambda_role_arn The ARN of the role to execute the transform lambda. If null use the Firehose Stream role string null no
vpc_create_destination_security_group Indicates whether to create a destination security group to associate with Firehose destinations bool false no
vpc_create_security_group Indicates whether to create a security group to associate with Kinesis Firehose bool false no
vpc_role_arn The ARN of the IAM role to be assumed by Firehose for calling the Amazon EC2 configuration API and for creating network interfaces. Supports Elasticsearch and Opensearch destinations. string null no
vpc_security_group_destination_configure_existing Indicates whether to configure an existing destination security group with the necessary rules bool false no
vpc_security_group_destination_ids A list of security group IDs associated with destinations to allow Firehose traffic list(string) null no
vpc_security_group_destination_vpc_id VPC ID to create the destination security group. Only supported for Redshift and Splunk destinations string null no
vpc_security_group_firehose_configure_existing Indicates whether to configure an existing Firehose security group with the necessary rules bool false no
vpc_security_group_firehose_ids A list of security group IDs to associate with Kinesis Firehose. list(string) null no
vpc_security_group_same_as_destination Indicates if the Firehose security group is the same as the destination's. bool true no
vpc_security_group_tags A map of tags to assign to the security group map(string) {} no
vpc_subnet_ids A list of subnet IDs to associate with Kinesis Firehose. Supports Elasticsearch and Opensearch destinations. list(string) null no
vpc_use_existing_role Indicates whether to use the Kinesis Firehose role for VPC access. Supports Elasticsearch and Opensearch destinations. bool true no
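
As a minimal sketch of how several of these inputs combine, the hypothetical configuration below delivers to Splunk and reads the HEC token from Secrets Manager. All ARNs and the endpoint are placeholders, and the exact set of required arguments may differ; see the repository examples for complete, tested configurations.

module "firehose" {
  source                   = "fdmsantos/kinesis-firehose/aws"
  version                  = "x.x.x"
  name                     = "firehose-delivery-stream"
  input_source             = "direct-put"
  destination              = "splunk"
  splunk_hec_endpoint      = "<splunk_hec_endpoint>"
  splunk_hec_endpoint_type = "Raw"           # default; "Event" is the alternative
  enable_secrets_manager   = true
  secret_arn               = "<secret_arn>"  # secret holding the HEC token
  secret_kms_key_arn       = "<kms_key_arn>" # only needed when the secret uses a CMK
  tags = {
    Environment = "dev"
  }
}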

Outputs

Name Description
application_role_arn The ARN of the IAM role created for Kinesis Firehose Stream Source
application_role_name The Name of the IAM role created for Kinesis Firehose Stream Source
application_role_policy_arn The ARN of the IAM policy created for Kinesis Firehose Stream Source
application_role_policy_name The Name of the IAM policy created for Kinesis Firehose Stream Source
destination_security_group_id Security Group ID associated to destination
destination_security_group_name Security Group Name associated to destination
destination_security_group_rule_ids Security Group Rules ID created in Destination Security group
elasticsearch_cross_account_service_policy Elasticsearch Service policy when the opensearch domain belongs to another account
firehose_cidr_blocks Firehose stream CIDR blocks to allow on the destination security group
firehose_security_group_id Security Group ID associated to Firehose Stream. Only Supported for elasticsearch destination
firehose_security_group_name Security Group Name associated to Firehose Stream. Only Supported for elasticsearch destination
firehose_security_group_rule_ids Security Group Rules ID created in Firehose Stream Security group. Only Supported for elasticsearch destination
kinesis_firehose_arn The ARN of the Kinesis Firehose Stream
kinesis_firehose_cloudwatch_log_backup_stream_arn The ARN of the created Cloudwatch Log Group Stream for backup
kinesis_firehose_cloudwatch_log_backup_stream_name The name of the created Cloudwatch Log Group Stream for backup
kinesis_firehose_cloudwatch_log_delivery_stream_arn The ARN of the created Cloudwatch Log Group Stream for delivery
kinesis_firehose_cloudwatch_log_delivery_stream_name The name of the created Cloudwatch Log Group Stream for delivery
kinesis_firehose_cloudwatch_log_group_arn The ARN of the created Cloudwatch Log Group
kinesis_firehose_cloudwatch_log_group_name The name of the created Cloudwatch Log Group
kinesis_firehose_destination_id The Destination id of the Kinesis Firehose Stream
kinesis_firehose_name The name of the Kinesis Firehose Stream
kinesis_firehose_role_arn The ARN of the IAM role created for Kinesis Firehose Stream
kinesis_firehose_version_id The Version id of the Kinesis Firehose Stream
opensearch_cross_account_service_policy Opensearch Service policy when the opensearch domain belongs to another account
opensearch_iam_service_linked_role_arn The ARN of the Opensearch IAM Service linked role
opensearchserverless_cross_account_service_policy Opensearch Serverless Service policy when the opensearch domain belongs to another account
opensearchserverless_iam_service_linked_role_arn The ARN of the Opensearch Serverless IAM Service linked role
s3_cross_account_bucket_policy Bucket Policy to S3 Bucket Destination when the bucket belongs to another account
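
Outputs are referenced in the usual Terraform way. As a hedged sketch, the snippet below exposes the stream ARN and grants a hypothetical producer role (aws_iam_role.producer, managed outside this module) permission to write to the created stream:

output "firehose_arn" {
  value = module.firehose.kinesis_firehose_arn
}

resource "aws_iam_role_policy" "producer" {
  name = "firehose-producer"
  role = aws_iam_role.producer.id # hypothetical role defined elsewhere
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["firehose:PutRecord", "firehose:PutRecordBatch"]
      Resource = module.firehose.kinesis_firehose_arn
    }]
  })
}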

Upgrade

  • Version 1.x to 2.x Upgrade Guide here
  • Version 2.x to 3.x Upgrade Guide here

Deprecations

Version 3.1.0

  • Variable kinesis_source_role_arn is deprecated. Use source_role_arn instead.
  • Variable kinesis_source_use_existing_role is deprecated. Use source_use_existing_role instead.

Version 3.3.0

  • Variable dynamic_partition_append_delimiter_to_record is deprecated. Use append_delimiter_to_record instead.
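
All three renames are drop-in replacements; a sketch of the change inside an existing module block (values are placeholders, other arguments unchanged):

module "firehose" {
  source  = "fdmsantos/kinesis-firehose/aws"
  version = "x.x.x"

  # Before (deprecated):
  # kinesis_source_role_arn                      = "<role_arn>"
  # kinesis_source_use_existing_role             = false
  # dynamic_partition_append_delimiter_to_record = true

  # After:
  source_role_arn            = "<role_arn>"
  source_use_existing_role   = false
  append_delimiter_to_record = true
}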

License

Apache 2 Licensed. See LICENSE for full details.