feat: upgrade to latest terraform generator with prefilled fields: hence less configuration needed
ThomasRooney committed Oct 5, 2023
1 parent 6e55dd8 commit df12ece
Showing 4,614 changed files with 103,629 additions and 42,787 deletions.
The diff you're trying to view is too large. We only load the first 3000 changed files.
Empty file modified .gitattributes
100755 → 100644
Empty file.
10 changes: 10 additions & 0 deletions README.md
@@ -56,6 +56,16 @@ TF_REATTACH_PROVIDERS=... terraform apply

<!-- End SDK Available Operations -->



<!-- Start Dev Containers -->

<!-- End Dev Containers -->

<!-- Placeholder for Future Speakeasy SDK Sections -->



Terraform allows you to use local provider builds by setting a `dev_overrides` block in a configuration file called `.terraformrc`. This block overrides all other configured installation methods.

Terraform searches for the `.terraformrc` file in your home directory and applies any configuration settings you set.
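As a sketch of the setup described above (the provider source address and local build path are assumptions, not taken from this repository), a minimal `~/.terraformrc` might look like:

```hcl
# ~/.terraformrc — dev_overrides takes precedence over all other
# installation methods for the providers listed here.
provider_installation {
  dev_overrides {
    # Assumed source address for this provider; replace the path with
    # the directory containing your locally built provider binary.
    "airbyte/airbyte" = "/path/to/local/provider/bin"
  }

  # All other providers keep their normal installation behavior.
  direct {}
}
```

With this file in place, `terraform plan` and `terraform apply` use the local build and print a warning that provider development overrides are in effect.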
Empty file modified USAGE.md
100755 → 100644
Empty file.
9 changes: 5 additions & 4 deletions docs/data-sources/connection.md
@@ -28,13 +28,14 @@ data "airbyte_connection" "my_connection" {
### Read-Only

- `configurations` (Attributes) A list of configured stream options for a connection. (see [below for nested schema](#nestedatt--configurations))
- - `data_residency` (String) must be one of ["auto", "us", "eu"]
+ - `data_residency` (String) must be one of ["auto", "us", "eu"]; Default: "auto"
- `destination_id` (String)
- `name` (String) Optional name of the connection
- - `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]
+ - `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]; Default: "destination"
Define the location where the data will be stored in the destination
- - `namespace_format` (String) Used when namespaceDefinition is 'custom_format'. If blank then behaves like namespaceDefinition = 'destination'. If "${SOURCE_NAMESPACE}" then behaves like namespaceDefinition = 'source'.
- - `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]
+ - `namespace_format` (String) Default: null
+ Used when namespaceDefinition is 'custom_format'. If blank then behaves like namespaceDefinition = 'destination'. If "${SOURCE_NAMESPACE}" then behaves like namespaceDefinition = 'source'.
+ - `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]; Default: "ignore"
Set how Airbyte handles syncs when it detects a non-breaking schema change in the source
- `prefix` (String) Prefix that will be prepended to the name of each stream when it is written to the destination (ex. “airbyte_” causes “projects” => “airbyte_projects”).
- `schedule` (Attributes) schedule for when the connection should run, per the schedule type (see [below for nested schema](#nestedatt--schedule))
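A hedged usage sketch for the attributes above; the `connection_id` argument and its value are assumptions, since this excerpt does not show the data source's required arguments:

```hcl
# Look up an existing connection (placeholder ID — assumption).
data "airbyte_connection" "my_connection" {
  connection_id = "00000000-0000-0000-0000-000000000000"
}

# Expose two of the read-only attributes documented above.
output "residency" {
  value = data.airbyte_connection.my_connection.data_residency # "auto" unless overridden
}

output "schema_update_behavior" {
  value = data.airbyte_connection.my_connection.non_breaking_schema_updates_behavior
}
```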
35 changes: 14 additions & 21 deletions docs/data-sources/destination_aws_datalake.md
@@ -40,16 +40,17 @@ Read-Only:
- `bucket_name` (String) The name of the S3 bucket. Read more <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/create-bucket-overview.html">here</a>.
- `bucket_prefix` (String) S3 prefix
- `credentials` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials))
- - `destination_type` (String) must be one of ["aws-datalake"]
- `format` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format))
- - `glue_catalog_float_as_decimal` (Boolean) Cast float/double as decimal(38,18). This can help achieve higher accuracy and represent numbers correctly as received from the source.
+ - `glue_catalog_float_as_decimal` (Boolean) Default: false
+ Cast float/double as decimal(38,18). This can help achieve higher accuracy and represent numbers correctly as received from the source.
- `lakeformation_database_default_tag_key` (String) Add a default tag key to databases created by this destination
- `lakeformation_database_default_tag_values` (String) Add default values for the `Tag Key` to databases created by this destination. Comma separate for multiple values.
- `lakeformation_database_name` (String) The default database this destination will use to create tables in per stream. Can be changed per connection by customizing the namespace.
- - `lakeformation_governed_tables` (Boolean) Whether to create tables as LF governed tables.
- - `partitioning` (String) must be one of ["NO PARTITIONING", "DATE", "YEAR", "MONTH", "DAY", "YEAR/MONTH", "YEAR/MONTH/DAY"]
+ - `lakeformation_governed_tables` (Boolean) Default: false
+ Whether to create tables as LF governed tables.
+ - `partitioning` (String) must be one of ["NO PARTITIONING", "DATE", "YEAR", "MONTH", "DAY", "YEAR/MONTH", "YEAR/MONTH/DAY"]; Default: "NO PARTITIONING"
Partition data by cursor fields when a cursor field is a date
- - `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
+ - `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]; Default: ""
The region of the S3 bucket. See <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html#concepts-available-regions">here</a> for all region codes.
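To illustrate the defaults above, a sketch of reading this destination back (the data source name is inferred from the file path `docs/data-sources/destination_aws_datalake.md`, and the `destination_id` argument is an assumption):

```hcl
# Hypothetical lookup of an AWS Datalake destination (placeholder ID).
data "airbyte_destination_aws_datalake" "example" {
  destination_id = "00000000-0000-0000-0000-000000000000"
}

# Per the docs above, an empty-string region is the default, and
# partitioning reads back "NO PARTITIONING" unless configured.
output "datalake_partitioning" {
  value = data.airbyte_destination_aws_datalake.example.configuration.partitioning
}
```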

<a id="nestedatt--configuration--credentials"></a>
@@ -67,8 +68,6 @@ Read-Only:

Read-Only:

- - `credentials_title` (String) must be one of ["IAM Role"]
- Name of the credentials
- `role_arn` (String) Will assume this role to write data to s3


@@ -79,17 +78,13 @@ Read-Only:

- `aws_access_key_id` (String) AWS User Access Key Id
- `aws_secret_access_key` (String) Secret Access Key
- - `credentials_title` (String) must be one of ["IAM User"]
- Name of the credentials


<a id="nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_role"></a>
### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_role`

Read-Only:

- - `credentials_title` (String) must be one of ["IAM Role"]
- Name of the credentials
- `role_arn` (String) Will assume this role to write data to s3


@@ -100,8 +95,6 @@ Read-Only:

- `aws_access_key_id` (String) AWS User Access Key Id
- `aws_secret_access_key` (String) Secret Access Key
- - `credentials_title` (String) must be one of ["IAM User"]
- Name of the credentials



@@ -120,38 +113,38 @@ Read-Only:

Read-Only:

- - `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
+ - `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]; Default: "UNCOMPRESSED"
The compression algorithm used to compress data.
- - `format_type` (String) must be one of ["JSONL"]
+ - `format_type` (String) must be one of ["JSONL"]; Default: "JSONL"


<a id="nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_parquet_columnar_storage"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_parquet_columnar_storage`

Read-Only:

- - `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
+ - `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]; Default: "SNAPPY"
The compression algorithm used to compress data.
- - `format_type` (String) must be one of ["Parquet"]
+ - `format_type` (String) must be one of ["Parquet"]; Default: "Parquet"


<a id="nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json`

Read-Only:

- - `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
+ - `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]; Default: "UNCOMPRESSED"
The compression algorithm used to compress data.
- - `format_type` (String) must be one of ["JSONL"]
+ - `format_type` (String) must be one of ["JSONL"]; Default: "JSONL"


<a id="nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage"></a>
### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage`

Read-Only:

- - `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
+ - `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]; Default: "SNAPPY"
The compression algorithm used to compress data.
- - `format_type` (String) must be one of ["Parquet"]
+ - `format_type` (String) must be one of ["Parquet"]; Default: "Parquet"


24 changes: 8 additions & 16 deletions docs/data-sources/destination_azure_blob_storage.md
@@ -39,10 +39,12 @@ Read-Only:
- `azure_blob_storage_account_key` (String) The Azure blob storage account key.
- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container. If it does not exist, it will be created automatically. May be empty, in which case a container named airbytecontainer+timestamp will be created automatically
- - `azure_blob_storage_endpoint_domain_name` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
- - `azure_blob_storage_output_buffer_size` (Number) The amount of megabytes to buffer for the output stream to Azure. This will impact memory footprint on workers, but may need adjustment for performance and appropriate block size in Azure.
- - `azure_blob_storage_spill_size` (Number) The amount of megabytes after which the connector should spill the records in a new blob object. Make sure to configure size greater than individual records. Enter 0 if not applicable
- - `destination_type` (String) must be one of ["azure-blob-storage"]
+ - `azure_blob_storage_endpoint_domain_name` (String) Default: "blob.core.windows.net"
+ This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
+ - `azure_blob_storage_output_buffer_size` (Number) Default: 5
+ The amount of megabytes to buffer for the output stream to Azure. This will impact memory footprint on workers, but may need adjustment for performance and appropriate block size in Azure.
+ - `azure_blob_storage_spill_size` (Number) Default: 500
+ The amount of megabytes after which the connector should spill the records in a new blob object. Make sure to configure size greater than individual records. Enter 0 if not applicable
- `format` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format))

<a id="nestedatt--configuration--format"></a>
@@ -60,34 +62,24 @@ Read-Only:

Read-Only:

- - `flattening` (String) must be one of ["No flattening", "Root level flattening"]
+ - `flattening` (String) must be one of ["No flattening", "Root level flattening"]; Default: "No flattening"
Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
- - `format_type` (String) must be one of ["CSV"]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_output_format_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_json_lines_newline_delimited_json`

Read-Only:

- - `format_type` (String) must be one of ["JSONL"]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_csv_comma_separated_values"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_csv_comma_separated_values`

Read-Only:

- - `flattening` (String) must be one of ["No flattening", "Root level flattening"]
+ - `flattening` (String) must be one of ["No flattening", "Root level flattening"]; Default: "No flattening"
Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
- - `format_type` (String) must be one of ["CSV"]


<a id="nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json"></a>
### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json`

Read-Only:

- - `format_type` (String) must be one of ["JSONL"]

