Merge pull request #9 from airbytehq/chore/terraform-documentation
chore: add more documentation into the provider
bgroff authored Jun 26, 2023
2 parents b2b2e6d + d4af559 commit c0059c2
Showing 475 changed files with 8,805 additions and 6,307 deletions.
1 change: 1 addition & 0 deletions airbyte.yaml
@@ -2,6 +2,7 @@ openapi: "3.1.0"
info:
title: "airbyte-api"
version: "1.0.0"
description: Programmatically control Airbyte Cloud.
servers:
- url: "https://api.airbyte.com/v1"
description: "Airbyte API v1"
4 changes: 2 additions & 2 deletions docs/index.md
@@ -3,12 +3,12 @@
page_title: "airbyte Provider"
subcategory: ""
description: |-
Programmatically control Airbyte Cloud.
  airbyte-api: Programmatically control Airbyte Cloud.
---

# airbyte Provider

Programmatically control Airbyte Cloud.

airbyte-api: Programmatically control Airbyte Cloud.



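For orientation, a minimal configuration sketch for this provider. The registry source address and the `bearer_auth` API-key argument are assumptions, since the provider block's arguments are not shown in the changed files:

```hcl
terraform {
  required_providers {
    airbyte = {
      # Assumed registry address, inferred from the repository name.
      source = "airbytehq/airbyte"
    }
  }
}

provider "airbyte" {
  # Assumed argument: an Airbyte Cloud API key sent as a bearer token.
  bearer_auth = var.airbyte_api_key
}
```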
24 changes: 13 additions & 11 deletions docs/resources/connection.md
@@ -23,14 +23,16 @@ Connection Resource
### Optional

- `configurations` (Attributes) A list of configured stream options for a connection. (see [below for nested schema](#nestedatt--configurations))
- `data_residency` (String)
- `name` (String)
- `namespace_definition` (String) Define the location where the data will be stored in the destination
- `namespace_format` (String)
- `non_breaking_schema_updates_behavior` (String) Set how Airbyte handles syncs when it detects a non-breaking schema change in the source
- `prefix` (String)
- `data_residency` (String) must be one of [auto, us, eu]
- `name` (String) Optional name of the connection
- `namespace_definition` (String) must be one of [source, destination, custom_format]
Define the location where the data will be stored in the destination
- `namespace_format` (String) Used when namespaceDefinition is 'custom_format'. If blank then behaves like namespaceDefinition = 'destination'. If "${SOURCE_NAMESPACE}" then behaves like namespaceDefinition = 'source'.
- `non_breaking_schema_updates_behavior` (String) must be one of [ignore, disable_connection]
Set how Airbyte handles syncs when it detects a non-breaking schema change in the source
- `prefix` (String) Prefix that will be prepended to the name of each stream when it is written to the destination (ex. “airbyte_” causes “projects” => “airbyte_projects”).
- `schedule` (Attributes) schedule for when the connection should run, per the schedule type (see [below for nested schema](#nestedatt--schedule))
- `status` (String)
- `status` (String) must be one of [active, inactive, deprecated]

### Read-Only

@@ -53,9 +55,9 @@ Required:

Optional:

- `cursor_field` (List of String)
- `primary_key` (List of List of String)
- `sync_mode` (String)
- `cursor_field` (List of String) Path to the field that will be used to determine if a record is new or modified since the last sync. This field is REQUIRED if `sync_mode` is `incremental` unless there is a default.
- `primary_key` (List of List of String) Paths to the fields that will be used as primary key. This field is REQUIRED if `destination_sync_mode` is `*_dedup` unless it is already supplied by the source schema.
- `sync_mode` (String) must be one of [full_refresh_overwrite, full_refresh_append, incremental_append, incremental_deduped_history]



@@ -64,7 +66,7 @@

Required:

- `schedule_type` (String)
- `schedule_type` (String) must be one of [manual, cron]

Optional:

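To make the attribute list above concrete, a hedged sketch of an `airbyte_connection` resource. The `source_id` and `destination_id` arguments sit in the truncated Required section and are assumptions here; nested attributes are written in object-assignment form:

```hcl
resource "airbyte_connection" "example" {
  # Assumed required arguments; the Required section is truncated in this diff.
  source_id      = var.source_id
  destination_id = var.destination_id

  # Optional attributes documented above.
  name                                 = "example-connection"
  data_residency                       = "auto"        # one of [auto, us, eu]
  namespace_definition                 = "destination" # one of [source, destination, custom_format]
  non_breaking_schema_updates_behavior = "ignore"      # one of [ignore, disable_connection]
  prefix                               = "airbyte_"    # "projects" => "airbyte_projects"
  status                               = "active"      # one of [active, inactive, deprecated]

  schedule = {
    schedule_type = "manual" # one of [manual, cron]
  }
}
```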
17 changes: 9 additions & 8 deletions docs/resources/destination_amazon_sqs.md
@@ -31,16 +31,17 @@ DestinationAmazonSqs Resource

Required:

- `destination_type` (String)
- `queue_url` (String)
- `region` (String) AWS Region of the SQS Queue
- `destination_type` (String) must be one of [amazon-sqs]
- `queue_url` (String) URL of the SQS Queue
- `region` (String) must be one of [us-east-1, us-east-2, us-west-1, us-west-2, af-south-1, ap-east-1, ap-south-1, ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-southeast-1, ap-southeast-2, ca-central-1, cn-north-1, cn-northwest-1, eu-central-1, eu-north-1, eu-south-1, eu-west-1, eu-west-2, eu-west-3, sa-east-1, me-south-1, us-gov-east-1, us-gov-west-1]
AWS Region of the SQS Queue

Optional:

- `access_key` (String)
- `message_body_key` (String)
- `message_delay` (Number)
- `message_group_id` (String)
- `secret_key` (String)
- `access_key` (String) The Access Key ID of the AWS IAM Role to use for sending messages
- `message_body_key` (String) Use this property to extract the contents of the named key in the input record to use as the SQS message body. If not set, the entire content of the input record data is used as the message body.
- `message_delay` (Number) Modify the Message Delay of the individual message from the Queue's default (seconds).
- `message_group_id` (String) The tag that specifies that a message belongs to a specific message group. This parameter applies only to, and is REQUIRED by, FIFO queues.
- `secret_key` (String) The Secret Key of the AWS IAM Role to use for sending messages


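A hedged usage sketch for the configuration block described above. The top-level `name` and `workspace_id` arguments are not visible in this diff and are assumptions, and the queue URL and account ID are placeholders:

```hcl
resource "airbyte_destination_amazon_sqs" "example" {
  # Assumed top-level arguments (not visible in this diff).
  name         = "example-sqs"
  workspace_id = var.workspace_id

  configuration = {
    destination_type = "amazon-sqs" # must be one of [amazon-sqs]
    queue_url        = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue" # placeholder
    region           = "us-east-1"

    # Optional IAM credentials for sending messages.
    access_key       = var.aws_access_key_id
    secret_key       = var.aws_secret_access_key
    message_group_id = "example-group" # REQUIRED by FIFO queues only
  }
}
```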
72 changes: 41 additions & 31 deletions docs/resources/destination_aws_datalake.md
@@ -31,22 +31,24 @@ DestinationAwsDatalake Resource

Required:

- `bucket_name` (String)
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
- `destination_type` (String)
- `lakeformation_database_name` (String)
- `region` (String) The region of the S3 bucket. See <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions">here</a> for all region codes.
- `bucket_name` (String) The name of the S3 bucket. Read more <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html">here</a>.
- `credentials` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials))
- `destination_type` (String) must be one of [aws-datalake]
- `lakeformation_database_name` (String) The default database this destination will use to create tables in per stream. Can be changed per connection by customizing the namespace.
- `region` (String) must be one of [, us-east-1, us-east-2, us-west-1, us-west-2, af-south-1, ap-east-1, ap-south-1, ap-northeast-1, ap-northeast-2, ap-northeast-3, ap-southeast-1, ap-southeast-2, ca-central-1, cn-north-1, cn-northwest-1, eu-central-1, eu-north-1, eu-south-1, eu-west-1, eu-west-2, eu-west-3, sa-east-1, me-south-1, us-gov-east-1, us-gov-west-1]
The region of the S3 bucket. See <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions">here</a> for all region codes.

Optional:

- `aws_account_id` (String)
- `bucket_prefix` (String)
- `format` (Attributes) (see [below for nested schema](#nestedatt--configuration--format))
- `glue_catalog_float_as_decimal` (Boolean)
- `lakeformation_database_default_tag_key` (String)
- `lakeformation_database_default_tag_values` (String)
- `lakeformation_governed_tables` (Boolean)
- `partitioning` (String) Partition data by cursor fields when a cursor field is a date
- `aws_account_id` (String) Target AWS account ID
- `bucket_prefix` (String) S3 prefix
- `format` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format))
- `glue_catalog_float_as_decimal` (Boolean) Cast float/double as decimal(38,18). This can help achieve higher accuracy and represent numbers correctly as received from the source.
- `lakeformation_database_default_tag_key` (String) Add a default tag key to databases created by this destination
- `lakeformation_database_default_tag_values` (String) Add default values for the `Tag Key` to databases created by this destination. Comma-separate multiple values.
- `lakeformation_governed_tables` (Boolean) Whether to create tables as LF governed tables.
- `partitioning` (String) must be one of [NO PARTITIONING, DATE, YEAR, MONTH, DAY, YEAR/MONTH, YEAR/MONTH/DAY]
Partition data by cursor fields when a cursor field is a date

<a id="nestedatt--configuration--credentials"></a>
### Nested Schema for `configuration.credentials`
@@ -63,37 +65,41 @@ Optional:

Required:

- `credentials_title` (String) Name of the credentials
- `role_arn` (String)
- `credentials_title` (String) must be one of [IAM Role]
Name of the credentials
- `role_arn` (String) Will assume this role to write data to S3


<a id="nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_user"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_authentication_mode_iam_user`

Required:

- `aws_access_key_id` (String)
- `aws_secret_access_key` (String)
- `credentials_title` (String) Name of the credentials
- `aws_access_key_id` (String) AWS User Access Key Id
- `aws_secret_access_key` (String) Secret Access Key
- `credentials_title` (String) must be one of [IAM User]
Name of the credentials


<a id="nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_role"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_role`

Required:

- `credentials_title` (String) Name of the credentials
- `role_arn` (String)
- `credentials_title` (String) must be one of [IAM Role]
Name of the credentials
- `role_arn` (String) Will assume this role to write data to S3


<a id="nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_user"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_user`

Required:

- `aws_access_key_id` (String)
- `aws_secret_access_key` (String)
- `credentials_title` (String) Name of the credentials
- `aws_access_key_id` (String) AWS User Access Key Id
- `aws_secret_access_key` (String) Secret Access Key
- `credentials_title` (String) must be one of [IAM User]
Name of the credentials



@@ -112,46 +118,50 @@ Optional:

Required:

- `format_type` (String)
- `format_type` (String) must be one of [JSONL]

Optional:

- `compression_codec` (String) The compression algorithm used to compress data.
- `compression_codec` (String) must be one of [UNCOMPRESSED, GZIP]
The compression algorithm used to compress data.


<a id="nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_parquet_columnar_storage"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_parquet_columnar_storage`

Required:

- `format_type` (String)
- `format_type` (String) must be one of [Parquet]

Optional:

- `compression_codec` (String) The compression algorithm used to compress data.
- `compression_codec` (String) must be one of [UNCOMPRESSED, SNAPPY, GZIP, ZSTD]
The compression algorithm used to compress data.


<a id="nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json`

Required:

- `format_type` (String)
- `format_type` (String) must be one of [JSONL]

Optional:

- `compression_codec` (String) The compression algorithm used to compress data.
- `compression_codec` (String) must be one of [UNCOMPRESSED, GZIP]
The compression algorithm used to compress data.


<a id="nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage`

Required:

- `format_type` (String)
- `format_type` (String) must be one of [Parquet]

Optional:

- `compression_codec` (String) The compression algorithm used to compress data.
- `compression_codec` (String) must be one of [UNCOMPRESSED, SNAPPY, GZIP, ZSTD]
The compression algorithm used to compress data.


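Pulling the nested schemas above together, a hedged sketch of the resource. The top-level `name` and `workspace_id` arguments are assumptions, and the bucket, database, and role ARN values are placeholders:

```hcl
resource "airbyte_destination_aws_datalake" "example" {
  # Assumed top-level arguments (not visible in this diff).
  name         = "example-datalake"
  workspace_id = var.workspace_id

  configuration = {
    destination_type            = "aws-datalake" # must be one of [aws-datalake]
    bucket_name                 = "example-datalake-bucket"
    lakeformation_database_name = "airbyte_raw"
    region                      = "us-east-1"

    # One credentials variant from the nested schema: IAM role.
    credentials = {
      destination_aws_datalake_authentication_mode_iam_role = {
        credentials_title = "IAM Role"
        role_arn          = "arn:aws:iam::123456789012:role/airbyte-writer" # placeholder
      }
    }

    # One format variant from the nested schema: JSONL with GZIP compression.
    format = {
      destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json = {
        format_type       = "JSONL"
        compression_codec = "GZIP" # one of [UNCOMPRESSED, GZIP]
      }
    }

    partitioning = "DATE" # one of [NO PARTITIONING, DATE, YEAR, MONTH, DAY, YEAR/MONTH, YEAR/MONTH/DAY]
  }
}
```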
30 changes: 16 additions & 14 deletions docs/resources/destination_azure_blob_storage.md
@@ -31,17 +31,17 @@ DestinationAzureBlobStorage Resource

Required:

- `azure_blob_storage_account_key` (String)
- `azure_blob_storage_account_name` (String)
- `destination_type` (String)
- `format` (Attributes) (see [below for nested schema](#nestedatt--configuration--format))
- `azure_blob_storage_account_key` (String) The Azure blob storage account key.
- `azure_blob_storage_account_name` (String) The name of the Azure Blob Storage account.
- `destination_type` (String) must be one of [azure-blob-storage]
- `format` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format))

Optional:

- `azure_blob_storage_container_name` (String)
- `azure_blob_storage_endpoint_domain_name` (String)
- `azure_blob_storage_output_buffer_size` (Number)
- `azure_blob_storage_spill_size` (Number)
- `azure_blob_storage_container_name` (String) The name of the Azure Blob Storage container. If it does not exist, it will be created automatically. May be empty, in which case a container named airbytecontainer+timestamp will be created automatically.
- `azure_blob_storage_endpoint_domain_name` (String) The Azure Blob Storage endpoint domain name. Leave the default value (or leave it empty when running the container from the command line) to use the native Microsoft endpoint.
- `azure_blob_storage_output_buffer_size` (Number) The number of megabytes to buffer for the output stream to Azure. This affects the memory footprint on workers, but may need adjustment for performance and an appropriate block size in Azure.
- `azure_blob_storage_spill_size` (Number) The number of megabytes after which the connector should spill the records into a new blob object. Make sure to configure a size greater than that of individual records. Enter 0 if not applicable.

<a id="nestedatt--configuration--format"></a>
### Nested Schema for `configuration.format`
@@ -58,32 +58,34 @@ Optional:

Required:

- `flattening` (String) Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
- `format_type` (String)
- `flattening` (String) must be one of [No flattening, Root level flattening]
Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
- `format_type` (String) must be one of [CSV]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_output_format_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_json_lines_newline_delimited_json`

Required:

- `format_type` (String)
- `format_type` (String) must be one of [JSONL]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_csv_comma_separated_values"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_csv_comma_separated_values`

Required:

- `flattening` (String) Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
- `format_type` (String)
- `flattening` (String) must be one of [No flattening, Root level flattening]
Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
- `format_type` (String) must be one of [CSV]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json`

Required:

- `format_type` (String)
- `format_type` (String) must be one of [JSONL]


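And a hedged sketch for the Azure Blob Storage destination, with the same caveats: assumed top-level `name` and `workspace_id`, and placeholder account values:

```hcl
resource "airbyte_destination_azure_blob_storage" "example" {
  # Assumed top-level arguments (not visible in this diff).
  name         = "example-blob"
  workspace_id = var.workspace_id

  configuration = {
    destination_type                = "azure-blob-storage" # must be one of [azure-blob-storage]
    azure_blob_storage_account_name = "examplestorageacct" # placeholder
    azure_blob_storage_account_key  = var.azure_account_key

    # Optional; created automatically when omitted.
    azure_blob_storage_container_name = "airbyte-output"

    # One format variant from the nested schema: CSV with root-level flattening.
    format = {
      destination_azure_blob_storage_output_format_csv_comma_separated_values = {
        format_type = "CSV"
        flattening  = "Root level flattening" # one of [No flattening, Root level flattening]
      }
    }
  }
}
```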
