Commit

run go generate
terencecho committed Nov 21, 2023
1 parent 6e9ce72 commit c6f674c
Showing 493 changed files with 7,863 additions and 19,925 deletions.
14 changes: 7 additions & 7 deletions docs/data-sources/connection.md
@@ -28,15 +28,15 @@ data "airbyte_connection" "my_connection" {
### Read-Only

- `configurations` (Attributes) A list of configured stream options for a connection. (see [below for nested schema](#nestedatt--configurations))
-- `data_residency` (String) must be one of ["auto", "us", "eu"]
+- `data_residency` (String) must be one of ["auto", "us", "eu"]; Default: "auto"
- `destination_id` (String)
-- `name` (String) Optional name of the connection
-- `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]
+- `name` (String)
+- `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]; Default: "destination"
Define the location where the data will be stored in the destination
-- `namespace_format` (String) Used when namespaceDefinition is 'custom_format'. If blank then behaves like namespaceDefinition = 'destination'. If "${SOURCE_NAMESPACE}" then behaves like namespaceDefinition = 'source'.
-- `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]
+- `namespace_format` (String)
+- `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]; Default: "ignore"
Set how Airbyte handles syncs when it detects a non-breaking schema change in the source
-- `prefix` (String) Prefix that will be prepended to the name of each stream when it is written to the destination (ex. “airbyte_” causes “projects” => “airbyte_projects”).
+- `prefix` (String)
- `schedule` (Attributes) schedule for when the connection should run, per the schedule type (see [below for nested schema](#nestedatt--schedule))
- `source_id` (String)
- `status` (String) must be one of ["active", "inactive", "deprecated"]
@@ -68,6 +68,6 @@ Read-Only:

- `basic_timing` (String)
- `cron_expression` (String)
-- `schedule_type` (String) must be one of ["manual", "cron"]
+- `schedule_type` (String) must be one of ["manual", "cron", "basic"]
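The schema changes above, the new "basic" schedule type and the documented defaults, can be read back from Terraform along these lines. This is a hypothetical sketch: the connection ID is a placeholder, and the `connection_id` argument name is assumed rather than taken from this diff.

```hcl
# Hypothetical sketch of reading the connection data source after this change.
data "airbyte_connection" "my_connection" {
  connection_id = "00000000-0000-0000-0000-000000000000" # placeholder ID
}

output "connection_summary" {
  value = {
    # schedule_type may now be "manual", "cron", or "basic"
    schedule_type = data.airbyte_connection.my_connection.schedule.schedule_type
    # data_residency is documented to default to "auto"
    data_residency = data.airbyte_connection.my_connection.data_residency
  }
}
```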


127 changes: 3 additions & 124 deletions docs/data-sources/destination_aws_datalake.md
@@ -27,131 +27,10 @@ data "airbyte_destination_aws_datalake" "my_destination_awsdatalake" {

### Read-Only

-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)

<a id="nestedatt--configuration"></a>
### Nested Schema for `configuration`

Read-Only:

- `aws_account_id` (String) Target AWS account ID
- `bucket_name` (String) The name of the S3 bucket. Read more <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html">here</a>.
- `bucket_prefix` (String) S3 prefix
- `credentials` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials))
- `destination_type` (String) must be one of ["aws-datalake"]
- `format` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format))
- `glue_catalog_float_as_decimal` (Boolean) Cast float/double as decimal(38,18). This can help achieve higher accuracy and represent numbers correctly as received from the source.
- `lakeformation_database_default_tag_key` (String) Add a default tag key to databases created by this destination
- `lakeformation_database_default_tag_values` (String) Add default values for the `Tag Key` to databases created by this destination. Comma separate for multiple values.
- `lakeformation_database_name` (String) The default database this destination will use to create tables in per stream. Can be changed per connection by customizing the namespace.
- `lakeformation_governed_tables` (Boolean) Whether to create tables as LF governed tables.
- `partitioning` (String) must be one of ["NO PARTITIONING", "DATE", "YEAR", "MONTH", "DAY", "YEAR/MONTH", "YEAR/MONTH/DAY"]
Partition data by cursor fields when a cursor field is a date
- `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
The region of the S3 bucket. See <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions">here</a> for all region codes.

<a id="nestedatt--configuration--credentials"></a>
### Nested Schema for `configuration.credentials`

Read-Only:

- `destination_aws_datalake_authentication_mode_iam_role` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_role))
- `destination_aws_datalake_authentication_mode_iam_user` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_user))
- `destination_aws_datalake_update_authentication_mode_iam_role` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_role))
- `destination_aws_datalake_update_authentication_mode_iam_user` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_user))

<a id="nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_role"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_authentication_mode_iam_role`

Read-Only:

- `credentials_title` (String) must be one of ["IAM Role"]
Name of the credentials
- `role_arn` (String) Will assume this role to write data to S3


<a id="nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_user"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_authentication_mode_iam_user`

Read-Only:

- `aws_access_key_id` (String) AWS User Access Key Id
- `aws_secret_access_key` (String) Secret Access Key
- `credentials_title` (String) must be one of ["IAM User"]
Name of the credentials


<a id="nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_role"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_role`

Read-Only:

- `credentials_title` (String) must be one of ["IAM Role"]
Name of the credentials
- `role_arn` (String) Will assume this role to write data to S3


<a id="nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_user"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_user`

Read-Only:

- `aws_access_key_id` (String) AWS User Access Key Id
- `aws_secret_access_key` (String) Secret Access Key
- `credentials_title` (String) must be one of ["IAM User"]
Name of the credentials



<a id="nestedatt--configuration--format"></a>
### Nested Schema for `configuration.format`

Read-Only:

- `destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json))
- `destination_aws_datalake_output_format_wildcard_parquet_columnar_storage` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_parquet_columnar_storage))
- `destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json))
- `destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage))

<a id="nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json`

Read-Only:

- `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
The compression algorithm used to compress data.
- `format_type` (String) must be one of ["JSONL"]


<a id="nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_parquet_columnar_storage"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_parquet_columnar_storage`

Read-Only:

- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
The compression algorithm used to compress data.
- `format_type` (String) must be one of ["Parquet"]


<a id="nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json`

Read-Only:

- `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
The compression algorithm used to compress data.
- `format_type` (String) must be one of ["JSONL"]


<a id="nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage`

Read-Only:

- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
The compression algorithm used to compress data.
- `format_type` (String) must be one of ["Parquet"]
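Because `configuration` is now returned as a JSON-encoded string rather than a nested attribute object, consumers would decode it with Terraform's built-in `jsondecode()` before reaching nested fields. A hypothetical sketch: the destination ID is a placeholder, and the `bucket_name` key is assumed to keep its pre-change name inside the JSON.

```hcl
data "airbyte_destination_aws_datalake" "my_destination_awsdatalake" {
  destination_id = "00000000-0000-0000-0000-000000000000" # placeholder ID
}

locals {
  # The configuration attribute is now a JSON string; decode it once.
  datalake_config = jsondecode(data.airbyte_destination_aws_datalake.my_destination_awsdatalake.configuration)
}

output "bucket_name" {
  # Previously reachable as ...configuration.bucket_name (key name assumed unchanged)
  value = local.datalake_config.bucket_name
}
```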


63 changes: 3 additions & 60 deletions docs/data-sources/destination_azure_blob_storage.md
@@ -27,67 +27,10 @@ data "airbyte_destination_azure_blob_storage" "my_destination_azureblobstorage"

### Read-Only

-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)

<a id="nestedatt--configuration"></a>
### Nested Schema for `configuration`

Read-Only:

- `azure_blob_storage_account_key` (String) The Azure blob storage account key.
- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
- `azure_blob_storage_container_name` (String) The name of the Azure Blob Storage container. If it does not exist, it will be created automatically. May be left empty, in which case a container named airbytecontainer+timestamp will be created automatically.
- `azure_blob_storage_endpoint_domain_name` (String) The Azure Blob Storage endpoint domain name. Leave the default value (or leave it empty if running the container from the command line) to use the native Microsoft endpoint.
- `azure_blob_storage_output_buffer_size` (Number) The number of megabytes to buffer for the output stream to Azure. This affects the memory footprint on workers, but may need adjustment for performance and for an appropriate block size in Azure.
- `azure_blob_storage_spill_size` (Number) The number of megabytes after which the connector should spill the records into a new blob object. Make sure to configure a size greater than that of individual records. Enter 0 if not applicable.
- `destination_type` (String) must be one of ["azure-blob-storage"]
- `format` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format))

<a id="nestedatt--configuration--format"></a>
### Nested Schema for `configuration.format`

Read-Only:

- `destination_azure_blob_storage_output_format_csv_comma_separated_values` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_output_format_csv_comma_separated_values))
- `destination_azure_blob_storage_output_format_json_lines_newline_delimited_json` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_output_format_json_lines_newline_delimited_json))
- `destination_azure_blob_storage_update_output_format_csv_comma_separated_values` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_csv_comma_separated_values))
- `destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json))

<a id="nestedatt--configuration--format--destination_azure_blob_storage_output_format_csv_comma_separated_values"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_csv_comma_separated_values`

Read-Only:

- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to the docs for details.
- `format_type` (String) must be one of ["CSV"]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_output_format_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_json_lines_newline_delimited_json`

Read-Only:

- `format_type` (String) must be one of ["JSONL"]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_csv_comma_separated_values"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_csv_comma_separated_values`

Read-Only:

- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to the docs for details.
- `format_type` (String) must be one of ["CSV"]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json`

Read-Only:

- `format_type` (String) must be one of ["JSONL"]
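With `configuration` collapsed to a JSON string here as well, the nested `format` attributes are no longer addressable directly, so the decoded JSON is the only path to them; `try()` guards against keys that may be absent. A hypothetical sketch with a placeholder destination ID and assumed key names.

```hcl
data "airbyte_destination_azure_blob_storage" "my_destination_azureblobstorage" {
  destination_id = "00000000-0000-0000-0000-000000000000" # placeholder ID
}

locals {
  # Decode the JSON-encoded configuration string once for reuse.
  blob_config = jsondecode(data.airbyte_destination_azure_blob_storage.my_destination_azureblobstorage.configuration)
}

output "format_type" {
  # try() yields null if the assumed format.format_type key is missing
  value = try(local.blob_config.format.format_type, null)
}
```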


