diff --git a/docs/data-sources/connection.md b/docs/data-sources/connection.md
index ad58cde69..5a9825acb 100644
--- a/docs/data-sources/connection.md
+++ b/docs/data-sources/connection.md
@@ -28,15 +28,15 @@ data "airbyte_connection" "my_connection" {
### Read-Only
- `configurations` (Attributes) A list of configured stream options for a connection. (see [below for nested schema](#nestedatt--configurations))
-- `data_residency` (String) must be one of ["auto", "us", "eu"]
+- `data_residency` (String) must be one of ["auto", "us", "eu"]; Default: "auto"
- `destination_id` (String)
-- `name` (String) Optional name of the connection
-- `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]
+- `name` (String)
+- `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]; Default: "destination"
Define the location where the data will be stored in the destination
-- `namespace_format` (String) Used when namespaceDefinition is 'custom_format'. If blank then behaves like namespaceDefinition = 'destination'. If "${SOURCE_NAMESPACE}" then behaves like namespaceDefinition = 'source'.
-- `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]
+- `namespace_format` (String)
+- `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]; Default: "ignore"
Set how Airbyte handles syncs when it detects a non-breaking schema change in the source
-- `prefix` (String) Prefix that will be prepended to the name of each stream when it is written to the destination (ex. “airbyte_” causes “projects” => “airbyte_projects”).
+- `prefix` (String)
- `schedule` (Attributes) schedule for when the connection should run, per the schedule type (see [below for nested schema](#nestedatt--schedule))
- `source_id` (String)
- `status` (String) must be one of ["active", "inactive", "deprecated"]
@@ -68,6 +68,6 @@ Read-Only:
- `basic_timing` (String)
- `cron_expression` (String)
-- `schedule_type` (String) must be one of ["manual", "cron"]
+- `schedule_type` (String) must be one of ["manual", "cron", "basic"]
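The hunk above adds `"basic"` to the allowed `schedule_type` values. A minimal sketch of reading that attribute from the data source (the connection ID is a placeholder; the `connection_id` argument name is assumed from the data source's required schema):

```terraform
# Hypothetical usage: read a connection and branch on its schedule type.
data "airbyte_connection" "example" {
  connection_id = "...my_connection_id..."
}

output "runs_on_basic_schedule" {
  # schedule_type may now be "manual", "cron", or "basic"
  value = data.airbyte_connection.example.schedule.schedule_type == "basic"
}
```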
diff --git a/docs/data-sources/destination_aws_datalake.md b/docs/data-sources/destination_aws_datalake.md
index a74e088b9..126bc87cf 100644
--- a/docs/data-sources/destination_aws_datalake.md
+++ b/docs/data-sources/destination_aws_datalake.md
@@ -27,131 +27,10 @@ data "airbyte_destination_aws_datalake" "my_destination_awsdatalake" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `aws_account_id` (String) target aws account id
-- `bucket_name` (String) The name of the S3 bucket. Read more here.
-- `bucket_prefix` (String) S3 prefix
-- `credentials` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `destination_type` (String) must be one of ["aws-datalake"]
-- `format` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format))
-- `glue_catalog_float_as_decimal` (Boolean) Cast float/double as decimal(38,18). This can help achieve higher accuracy and represent numbers correctly as received from the source.
-- `lakeformation_database_default_tag_key` (String) Add a default tag key to databases created by this destination
-- `lakeformation_database_default_tag_values` (String) Add default values for the `Tag Key` to databases created by this destination. Comma separate for multiple values.
-- `lakeformation_database_name` (String) The default database this destination will use to create tables in per stream. Can be changed per connection by customizing the namespace.
-- `lakeformation_governed_tables` (Boolean) Whether to create tables as LF governed tables.
-- `partitioning` (String) must be one of ["NO PARTITIONING", "DATE", "YEAR", "MONTH", "DAY", "YEAR/MONTH", "YEAR/MONTH/DAY"]
-Partition data by cursor fields when a cursor field is a date
-- `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 bucket. See here for all region codes.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `destination_aws_datalake_authentication_mode_iam_role` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_role))
-- `destination_aws_datalake_authentication_mode_iam_user` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_user))
-- `destination_aws_datalake_update_authentication_mode_iam_role` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_role))
-- `destination_aws_datalake_update_authentication_mode_iam_user` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_user))
-
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_authentication_mode_iam_role`
-
-Read-Only:
-
-- `credentials_title` (String) must be one of ["IAM Role"]
-Name of the credentials
-- `role_arn` (String) Will assume this role to write data to s3
-
-
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_authentication_mode_iam_user`
-
-Read-Only:
-
-- `aws_access_key_id` (String) AWS User Access Key Id
-- `aws_secret_access_key` (String) Secret Access Key
-- `credentials_title` (String) must be one of ["IAM User"]
-Name of the credentials
-
-
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_role`
-
-Read-Only:
-
-- `credentials_title` (String) must be one of ["IAM Role"]
-Name of the credentials
-- `role_arn` (String) Will assume this role to write data to s3
-
-
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_user`
-
-Read-Only:
-
-- `aws_access_key_id` (String) AWS User Access Key Id
-- `aws_secret_access_key` (String) Secret Access Key
-- `credentials_title` (String) must be one of ["IAM User"]
-Name of the credentials
-
-
-
-
-### Nested Schema for `configuration.format`
-
-Read-Only:
-
-- `destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json))
-- `destination_aws_datalake_output_format_wildcard_parquet_columnar_storage` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_parquet_columnar_storage))
-- `destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json))
-- `destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage))
-
-
-### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
-The compression algorithm used to compress data.
-- `format_type` (String) must be one of ["JSONL"]
-
-
-
-### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_parquet_columnar_storage`
-
-Read-Only:
-
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
-The compression algorithm used to compress data.
-- `format_type` (String) must be one of ["Parquet"]
-
-
-
-### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
-The compression algorithm used to compress data.
-- `format_type` (String) must be one of ["JSONL"]
-
-
-
-### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage`
-
-Read-Only:
-
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
-The compression algorithm used to compress data.
-- `format_type` (String) must be one of ["Parquet"]
-
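With `configuration` collapsed from nested attributes to a JSON-encoded string ("Parsed as JSON."), former nested fields are reached via `jsondecode` rather than attribute paths. A hedged sketch, reusing the data-source name from the example at the top of this file; the `bucket_name` key is assumed from the removed nested schema:

```terraform
data "airbyte_destination_aws_datalake" "my_destination_awsdatalake" {
  destination_id = "...my_destination_id..."
}

locals {
  # configuration is now a JSON string; decode it to access former nested attributes
  datalake_config = jsondecode(data.airbyte_destination_aws_datalake.my_destination_awsdatalake.configuration)
}

output "datalake_bucket" {
  # previously configuration.bucket_name in the nested schema
  value = local.datalake_config.bucket_name
}
```

The same pattern applies to the other destination data sources in this patch whose `configuration` block was flattened to a JSON string.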
diff --git a/docs/data-sources/destination_azure_blob_storage.md b/docs/data-sources/destination_azure_blob_storage.md
index 78b4dddf5..311a1e9a1 100644
--- a/docs/data-sources/destination_azure_blob_storage.md
+++ b/docs/data-sources/destination_azure_blob_storage.md
@@ -27,67 +27,10 @@ data "airbyte_destination_azure_blob_storage" "my_destination_azureblobstorage"
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `azure_blob_storage_account_key` (String) The Azure blob storage account key.
-- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
-- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container. If not exists - will be created automatically. May be empty, then will be created automatically airbytecontainer+timestamp
-- `azure_blob_storage_endpoint_domain_name` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
-- `azure_blob_storage_output_buffer_size` (Number) The amount of megabytes to buffer for the output stream to Azure. This will impact memory footprint on workers, but may need adjustment for performance and appropriate block size in Azure.
-- `azure_blob_storage_spill_size` (Number) The amount of megabytes after which the connector should spill the records in a new blob object. Make sure to configure size greater than individual records. Enter 0 if not applicable
-- `destination_type` (String) must be one of ["azure-blob-storage"]
-- `format` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format))
-
-
-### Nested Schema for `configuration.format`
-
-Read-Only:
-
-- `destination_azure_blob_storage_output_format_csv_comma_separated_values` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_output_format_csv_comma_separated_values))
-- `destination_azure_blob_storage_output_format_json_lines_newline_delimited_json` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_output_format_json_lines_newline_delimited_json))
-- `destination_azure_blob_storage_update_output_format_csv_comma_separated_values` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_csv_comma_separated_values))
-- `destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json))
-
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_csv_comma_separated_values`
-
-Read-Only:
-
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `format_type` (String) must be one of ["JSONL"]
-
-
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_csv_comma_separated_values`
-
-Read-Only:
-
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `format_type` (String) must be one of ["JSONL"]
-
diff --git a/docs/data-sources/destination_bigquery.md b/docs/data-sources/destination_bigquery.md
index 74596f1dd..b2b0d7c2c 100644
--- a/docs/data-sources/destination_bigquery.md
+++ b/docs/data-sources/destination_bigquery.md
@@ -27,114 +27,10 @@ data "airbyte_destination_bigquery" "my_destination_bigquery" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `big_query_client_buffer_size_mb` (Number) Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here.
-- `credentials_json` (String) The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
-- `dataset_id` (String) The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
-- `dataset_location` (String) must be one of ["US", "EU", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "europe-central1", "europe-central2", "europe-north1", "europe-southwest1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "europe-west7", "europe-west8", "europe-west9", "me-west1", "northamerica-northeast1", "northamerica-northeast2", "southamerica-east1", "southamerica-west1", "us-central1", "us-east1", "us-east2", "us-east3", "us-east4", "us-east5", "us-west1", "us-west2", "us-west3", "us-west4"]
-The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
-- `destination_type` (String) must be one of ["bigquery"]
-- `loading_method` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method))
-- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset. Read more here.
-- `raw_data_dataset` (String) The dataset to write raw tables into
-- `transformation_priority` (String) must be one of ["interactive", "batch"]
-Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
-
-
-### Nested Schema for `configuration.loading_method`
-
-Read-Only:
-
-- `destination_bigquery_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_gcs_staging))
-- `destination_bigquery_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_standard_inserts))
-- `destination_bigquery_update_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_gcs_staging))
-- `destination_bigquery_update_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_standard_inserts))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_gcs_staging`
-
-Read-Only:
-
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_gcs_staging--credential))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
-- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written.
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
-This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-- `method` (String) must be one of ["GCS Staging"]
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_gcs_staging.method`
-
-Read-Only:
-
-- `destination_bigquery_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_gcs_staging--method--destination_bigquery_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_gcs_staging.method.destination_bigquery_loading_method_gcs_staging_credential_hmac_key`
-
-Read-Only:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_standard_inserts`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_gcs_staging`
-
-Read-Only:
-
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_gcs_staging--credential))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
-- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written.
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
-This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-- `method` (String) must be one of ["GCS Staging"]
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_gcs_staging.method`
-
-Read-Only:
-
-- `destination_bigquery_update_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_gcs_staging--method--destination_bigquery_update_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_gcs_staging.method.destination_bigquery_update_loading_method_gcs_staging_credential_hmac_key`
-
-Read-Only:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_standard_inserts`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
diff --git a/docs/data-sources/destination_bigquery_denormalized.md b/docs/data-sources/destination_bigquery_denormalized.md
deleted file mode 100644
index 136f7db42..000000000
--- a/docs/data-sources/destination_bigquery_denormalized.md
+++ /dev/null
@@ -1,137 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_destination_bigquery_denormalized Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- DestinationBigqueryDenormalized DataSource
----
-
-# airbyte_destination_bigquery_denormalized (Data Source)
-
-DestinationBigqueryDenormalized DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_destination_bigquery_denormalized" "my_destination_bigquerydenormalized" {
- destination_id = "...my_destination_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `destination_id` (String)
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `big_query_client_buffer_size_mb` (Number) Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here.
-- `credentials_json` (String) The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
-- `dataset_id` (String) The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
-- `dataset_location` (String) must be one of ["US", "EU", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "europe-central1", "europe-central2", "europe-north1", "europe-southwest1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "europe-west7", "europe-west8", "europe-west9", "me-west1", "northamerica-northeast1", "northamerica-northeast2", "southamerica-east1", "southamerica-west1", "us-central1", "us-east1", "us-east2", "us-east3", "us-east4", "us-east5", "us-west1", "us-west2", "us-west3", "us-west4"]
-The location of the dataset. Warning: Changes made after creation will not be applied. The default "US" value is used if not set explicitly. Read more here.
-- `destination_type` (String) must be one of ["bigquery-denormalized"]
-- `loading_method` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method))
-- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset. Read more here.
-
-
-### Nested Schema for `configuration.loading_method`
-
-Read-Only:
-
-- `destination_bigquery_denormalized_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_gcs_staging))
-- `destination_bigquery_denormalized_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_standard_inserts))
-- `destination_bigquery_denormalized_update_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_gcs_staging))
-- `destination_bigquery_denormalized_update_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
-Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
-GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_standard_inserts))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_gcs_staging`
-
-Read-Only:
-
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_gcs_staging--credential))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
-- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written. Read more here.
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
-This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-- `method` (String) must be one of ["GCS Staging"]
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_gcs_staging.method`
-
-Read-Only:
-
-- `destination_bigquery_denormalized_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_gcs_staging--method--destination_bigquery_denormalized_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_gcs_staging.method.destination_bigquery_denormalized_loading_method_gcs_staging_credential_hmac_key`
-
-Read-Only:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_standard_inserts`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_gcs_staging`
-
-Read-Only:
-
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_gcs_staging--credential))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
-- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written. Read more here.
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
-This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-- `method` (String) must be one of ["GCS Staging"]
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_gcs_staging.method`
-
-Read-Only:
-
-- `destination_bigquery_denormalized_update_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_gcs_staging--method--destination_bigquery_denormalized_update_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_gcs_staging.method.destination_bigquery_denormalized_update_loading_method_gcs_staging_credential_hmac_key`
-
-Read-Only:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_standard_inserts`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
diff --git a/docs/data-sources/destination_clickhouse.md b/docs/data-sources/destination_clickhouse.md
index b80d91423..698295f37 100644
--- a/docs/data-sources/destination_clickhouse.md
+++ b/docs/data-sources/destination_clickhouse.md
@@ -27,103 +27,10 @@ data "airbyte_destination_clickhouse" "my_destination_clickhouse" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["clickhouse"]
-- `host` (String) Hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
-- `port` (Number) HTTP port of the database.
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) Username to use to access the database.
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_clickhouse_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_ssh_tunnel_method_no_tunnel))
-- `destination_clickhouse_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_ssh_tunnel_method_password_authentication))
-- `destination_clickhouse_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_ssh_tunnel_method_ssh_key_authentication))
-- `destination_clickhouse_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_update_ssh_tunnel_method_no_tunnel))
-- `destination_clickhouse_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_update_ssh_tunnel_method_password_authentication))
-- `destination_clickhouse_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/destination_convex.md b/docs/data-sources/destination_convex.md
index 962b6c13b..1fcc34e35 100644
--- a/docs/data-sources/destination_convex.md
+++ b/docs/data-sources/destination_convex.md
@@ -27,17 +27,10 @@ data "airbyte_destination_convex" "my_destination_convex" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key` (String) API access key used to send data to a Convex deployment.
-- `deployment_url` (String) URL of the Convex deployment that is the destination
-- `destination_type` (String) must be one of ["convex"]
-
diff --git a/docs/data-sources/destination_cumulio.md b/docs/data-sources/destination_cumulio.md
index a9e00b846..2b717a9c2 100644
--- a/docs/data-sources/destination_cumulio.md
+++ b/docs/data-sources/destination_cumulio.md
@@ -27,18 +27,10 @@ data "airbyte_destination_cumulio" "my_destination_cumulio" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_host` (String) URL of the Cumul.io API (e.g. 'https://api.cumul.io', 'https://api.us.cumul.io', or VPC-specific API url). Defaults to 'https://api.cumul.io'.
-- `api_key` (String) An API key generated in Cumul.io's platform (can be generated here: https://app.cumul.io/start/profile/integration).
-- `api_token` (String) The corresponding API token generated in Cumul.io's platform (can be generated here: https://app.cumul.io/start/profile/integration).
-- `destination_type` (String) must be one of ["cumulio"]
-
diff --git a/docs/data-sources/destination_databend.md b/docs/data-sources/destination_databend.md
index c7750edcb..3aee7b9d7 100644
--- a/docs/data-sources/destination_databend.md
+++ b/docs/data-sources/destination_databend.md
@@ -27,21 +27,10 @@ data "airbyte_destination_databend" "my_destination_databend" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["databend"]
-- `host` (String) Hostname of the database.
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `table` (String) The default table was written to.
-- `username` (String) Username to use to access the database.
-
diff --git a/docs/data-sources/destination_databricks.md b/docs/data-sources/destination_databricks.md
index 267cd6625..5ad8d456d 100644
--- a/docs/data-sources/destination_databricks.md
+++ b/docs/data-sources/destination_databricks.md
@@ -27,106 +27,10 @@ data "airbyte_destination_databricks" "my_destination_databricks" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `accept_terms` (Boolean) You must agree to the Databricks JDBC Driver Terms & Conditions to use this connector.
-- `data_source` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source))
-- `database` (String) The name of the catalog. If not specified otherwise, the "hive_metastore" will be used.
-- `databricks_http_path` (String) Databricks Cluster HTTP Path.
-- `databricks_personal_access_token` (String) Databricks Personal Access Token for making authenticated requests.
-- `databricks_port` (String) Databricks Cluster Port.
-- `databricks_server_hostname` (String) Databricks Cluster Server Hostname.
-- `destination_type` (String) must be one of ["databricks"]
-- `enable_schema_evolution` (Boolean) Support schema evolution for all streams. If "false", the connector might fail when a stream's schema changes.
-- `purge_staging_data` (Boolean) Default to 'true'. Switch it to 'false' for debugging purpose.
-- `schema` (String) The default schema tables are written. If not specified otherwise, the "default" will be used.
-
-
-### Nested Schema for `configuration.data_source`
-
-Read-Only:
-
-- `destination_databricks_data_source_amazon_s3` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_data_source_amazon_s3))
-- `destination_databricks_data_source_azure_blob_storage` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_data_source_azure_blob_storage))
-- `destination_databricks_data_source_recommended_managed_tables` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_data_source_recommended_managed_tables))
-- `destination_databricks_update_data_source_amazon_s3` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_update_data_source_amazon_s3))
-- `destination_databricks_update_data_source_azure_blob_storage` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_update_data_source_azure_blob_storage))
-- `destination_databricks_update_data_source_recommended_managed_tables` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_update_data_source_recommended_managed_tables))
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_data_source_amazon_s3`
-
-Read-Only:
-
-- `data_source_type` (String) must be one of ["S3_STORAGE"]
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `s3_access_key_id` (String) The Access Key Id granting allow one to access the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket.
-- `s3_bucket_name` (String) The name of the S3 bucket to use for intermittent staging of the data.
-- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 staging bucket to use if utilising a copy strategy.
-- `s3_secret_access_key` (String) The corresponding secret to the above access key id.
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_data_source_azure_blob_storage`
-
-Read-Only:
-
-- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
-- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container.
-- `azure_blob_storage_endpoint_domain_name` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
-- `azure_blob_storage_sas_token` (String) Shared access signature (SAS) token to grant limited access to objects in your storage account.
-- `data_source_type` (String) must be one of ["AZURE_BLOB_STORAGE"]
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_data_source_recommended_managed_tables`
-
-Read-Only:
-
-- `data_source_type` (String) must be one of ["MANAGED_TABLES_STORAGE"]
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_update_data_source_amazon_s3`
-
-Read-Only:
-
-- `data_source_type` (String) must be one of ["S3_STORAGE"]
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `s3_access_key_id` (String) The Access Key Id granting allow one to access the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket.
-- `s3_bucket_name` (String) The name of the S3 bucket to use for intermittent staging of the data.
-- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 staging bucket to use if utilising a copy strategy.
-- `s3_secret_access_key` (String) The corresponding secret to the above access key id.
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_update_data_source_azure_blob_storage`
-
-Read-Only:
-
-- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
-- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container.
-- `azure_blob_storage_endpoint_domain_name` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
-- `azure_blob_storage_sas_token` (String) Shared access signature (SAS) token to grant limited access to objects in your storage account.
-- `data_source_type` (String) must be one of ["AZURE_BLOB_STORAGE"]
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_update_data_source_recommended_managed_tables`
-
-Read-Only:
-
-- `data_source_type` (String) must be one of ["MANAGED_TABLES_STORAGE"]
-
diff --git a/docs/data-sources/destination_dev_null.md b/docs/data-sources/destination_dev_null.md
index 63a8e6009..dce64add1 100644
--- a/docs/data-sources/destination_dev_null.md
+++ b/docs/data-sources/destination_dev_null.md
@@ -27,39 +27,10 @@ data "airbyte_destination_dev_null" "my_destination_devnull" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `destination_type` (String) must be one of ["dev-null"]
-- `test_destination` (Attributes) The type of destination to be used (see [below for nested schema](#nestedatt--configuration--test_destination))
-
-
-### Nested Schema for `configuration.test_destination`
-
-Read-Only:
-
-- `destination_dev_null_test_destination_silent` (Attributes) The type of destination to be used (see [below for nested schema](#nestedatt--configuration--test_destination--destination_dev_null_test_destination_silent))
-- `destination_dev_null_update_test_destination_silent` (Attributes) The type of destination to be used (see [below for nested schema](#nestedatt--configuration--test_destination--destination_dev_null_update_test_destination_silent))
-
-
-### Nested Schema for `configuration.test_destination.destination_dev_null_test_destination_silent`
-
-Read-Only:
-
-- `test_destination_type` (String) must be one of ["SILENT"]
-
-
-
-### Nested Schema for `configuration.test_destination.destination_dev_null_update_test_destination_silent`
-
-Read-Only:
-
-- `test_destination_type` (String) must be one of ["SILENT"]
-
diff --git a/docs/data-sources/destination_duckdb.md b/docs/data-sources/destination_duckdb.md
new file mode 100644
index 000000000..eb6f66030
--- /dev/null
+++ b/docs/data-sources/destination_duckdb.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_destination_duckdb Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ DestinationDuckdb DataSource
+---
+
+# airbyte_destination_duckdb (Data Source)
+
+DestinationDuckdb DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_destination_duckdb" "my_destination_duckdb" {
+ destination_id = "...my_destination_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `destination_id` (String)
+
+### Read-Only
+
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
+- `name` (String)
+- `workspace_id` (String)
+
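+Because `configuration` is exposed as a JSON-encoded string rather than a
+nested attribute, individual fields can be read by decoding it with
+Terraform's built-in `jsondecode()` function. A minimal sketch; the
+`destination_path` key below is an assumption about the DuckDB destination's
+configuration shape, not a documented attribute:
+
+```terraform
+locals {
+  # Decode the JSON string into a Terraform object.
+  duckdb_config = jsondecode(data.airbyte_destination_duckdb.my_destination_duckdb.configuration)
+}
+
+output "duckdb_destination_path" {
+  # Hypothetical key; the exact keys depend on the destination's connector spec.
+  value = try(local.duckdb_config.destination_path, null)
+}
+```
+
+`try()` returns `null` instead of failing when the key is absent, which keeps
+the plan resilient if the connector's configuration shape differs.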
+
diff --git a/docs/data-sources/destination_dynamodb.md b/docs/data-sources/destination_dynamodb.md
index ca158c7d8..78f7f8990 100644
--- a/docs/data-sources/destination_dynamodb.md
+++ b/docs/data-sources/destination_dynamodb.md
@@ -27,21 +27,10 @@ data "airbyte_destination_dynamodb" "my_destination_dynamodb" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key_id` (String) The access key id to access the DynamoDB. Airbyte requires Read and Write permissions to the DynamoDB.
-- `destination_type` (String) must be one of ["dynamodb"]
-- `dynamodb_endpoint` (String) This is your DynamoDB endpoint url.(if you are working with AWS DynamoDB, just leave empty).
-- `dynamodb_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the DynamoDB.
-- `dynamodb_table_name_prefix` (String) The prefix to use when naming DynamoDB tables.
-- `secret_access_key` (String) The corresponding secret to the access key id.
-
diff --git a/docs/data-sources/destination_elasticsearch.md b/docs/data-sources/destination_elasticsearch.md
index a3aa42c53..574c8dd6f 100644
--- a/docs/data-sources/destination_elasticsearch.md
+++ b/docs/data-sources/destination_elasticsearch.md
@@ -27,68 +27,10 @@ data "airbyte_destination_elasticsearch" "my_destination_elasticsearch" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `authentication_method` (Attributes) The type of authentication to be used (see [below for nested schema](#nestedatt--configuration--authentication_method))
-- `ca_certificate` (String) CA certificate
-- `destination_type` (String) must be one of ["elasticsearch"]
-- `endpoint` (String) The full url of the Elasticsearch server
-- `upsert` (Boolean) If a primary key identifier is defined in the source, an upsert will be performed using the primary key value as the elasticsearch doc id. Does not support composite primary keys.
-
-
-### Nested Schema for `configuration.authentication_method`
-
-Read-Only:
-
-- `destination_elasticsearch_authentication_method_api_key_secret` (Attributes) Use a api key and secret combination to authenticate (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_authentication_method_api_key_secret))
-- `destination_elasticsearch_authentication_method_username_password` (Attributes) Basic auth header with a username and password (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_authentication_method_username_password))
-- `destination_elasticsearch_update_authentication_method_api_key_secret` (Attributes) Use a api key and secret combination to authenticate (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_update_authentication_method_api_key_secret))
-- `destination_elasticsearch_update_authentication_method_username_password` (Attributes) Basic auth header with a username and password (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_update_authentication_method_username_password))
-
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_authentication_method_api_key_secret`
-
-Read-Only:
-
-- `api_key_id` (String) The Key ID to used when accessing an enterprise Elasticsearch instance.
-- `api_key_secret` (String) The secret associated with the API Key ID.
-- `method` (String) must be one of ["secret"]
-
-
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_authentication_method_username_password`
-
-Read-Only:
-
-- `method` (String) must be one of ["basic"]
-- `password` (String) Basic auth password to access a secure Elasticsearch server
-- `username` (String) Basic auth username to access a secure Elasticsearch server
-
-
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_update_authentication_method_api_key_secret`
-
-Read-Only:
-
-- `api_key_id` (String) The Key ID to used when accessing an enterprise Elasticsearch instance.
-- `api_key_secret` (String) The secret associated with the API Key ID.
-- `method` (String) must be one of ["secret"]
-
-
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_update_authentication_method_username_password`
-
-Read-Only:
-
-- `method` (String) must be one of ["basic"]
-- `password` (String) Basic auth password to access a secure Elasticsearch server
-- `username` (String) Basic auth username to access a secure Elasticsearch server
-
diff --git a/docs/data-sources/destination_firebolt.md b/docs/data-sources/destination_firebolt.md
index 9860c7478..329a1f3e7 100644
--- a/docs/data-sources/destination_firebolt.md
+++ b/docs/data-sources/destination_firebolt.md
@@ -27,71 +27,10 @@ data "airbyte_destination_firebolt" "my_destination_firebolt" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) The values required to configure the destination, parsed as JSON.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `account` (String) Firebolt account to login.
-- `database` (String) The database to connect to.
-- `destination_type` (String) must be one of ["firebolt"]
-- `engine` (String) Engine name or url to connect to.
-- `host` (String) The host name of your Firebolt database.
-- `loading_method` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method))
-- `password` (String) Firebolt password.
-- `username` (String) Firebolt email address you use to login.
-
-
-### Nested Schema for `configuration.loading_method`
-
-Read-Only:
-
-- `destination_firebolt_loading_method_external_table_via_s3` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_loading_method_external_table_via_s3))
-- `destination_firebolt_loading_method_sql_inserts` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_loading_method_sql_inserts))
-- `destination_firebolt_update_loading_method_external_table_via_s3` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_update_loading_method_external_table_via_s3))
-- `destination_firebolt_update_loading_method_sql_inserts` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_update_loading_method_sql_inserts))
-
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_loading_method_external_table_via_s3`
-
-Read-Only:
-
-- `aws_key_id` (String) AWS access key granting read and write access to S3.
-- `aws_key_secret` (String) Corresponding secret part of the AWS Key
-- `method` (String) must be one of ["S3"]
-- `s3_bucket` (String) The name of the S3 bucket.
-- `s3_region` (String) Region name of the S3 bucket.
-
-
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_loading_method_sql_inserts`
-
-Read-Only:
-
-- `method` (String) must be one of ["SQL"]
-
-
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_update_loading_method_external_table_via_s3`
-
-Read-Only:
-
-- `aws_key_id` (String) AWS access key granting read and write access to S3.
-- `aws_key_secret` (String) Corresponding secret part of the AWS Key
-- `method` (String) must be one of ["S3"]
-- `s3_bucket` (String) The name of the S3 bucket.
-- `s3_region` (String) Region name of the S3 bucket.
-
-
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_update_loading_method_sql_inserts`
-
-Read-Only:
-
-- `method` (String) must be one of ["SQL"]
-
diff --git a/docs/data-sources/destination_firestore.md b/docs/data-sources/destination_firestore.md
index f2b9f50a1..ed59960c5 100644
--- a/docs/data-sources/destination_firestore.md
+++ b/docs/data-sources/destination_firestore.md
@@ -27,17 +27,10 @@ data "airbyte_destination_firestore" "my_destination_firestore" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials_json` (String) The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
-- `destination_type` (String) must be one of ["firestore"]
-- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset.
-
diff --git a/docs/data-sources/destination_gcs.md b/docs/data-sources/destination_gcs.md
index 3423670b8..ada8298f5 100644
--- a/docs/data-sources/destination_gcs.md
+++ b/docs/data-sources/destination_gcs.md
@@ -27,381 +27,10 @@ data "airbyte_destination_gcs" "my_destination_gcs" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--credential))
-- `destination_type` (String) must be one of ["gcs"]
-- `format` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format))
-- `gcs_bucket_name` (String) You can find the bucket name in the App Engine Admin console Application Settings page, under the label Google Cloud Storage Bucket. Read more here.
-- `gcs_bucket_path` (String) The subdirectory under the above bucket to sync the data into.
-- `gcs_bucket_region` (String) must be one of ["northamerica-northeast1", "northamerica-northeast2", "us-central1", "us-east1", "us-east4", "us-west1", "us-west2", "us-west3", "us-west4", "southamerica-east1", "southamerica-west1", "europe-central2", "europe-north1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "asia", "eu", "us", "asia1", "eur4", "nam4"]
-Select a Region of the GCS Bucket. Read more here.
-
-
-### Nested Schema for `configuration.credential`
-
-Read-Only:
-
-- `destination_gcs_authentication_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--credential--destination_gcs_authentication_hmac_key))
-- `destination_gcs_update_authentication_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--credential--destination_gcs_update_authentication_hmac_key))
-
-
-### Nested Schema for `configuration.credential.destination_gcs_authentication_hmac_key`
-
-Read-Only:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. Read more here.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string. Read more here.
-
-
-
-### Nested Schema for `configuration.credential.destination_gcs_update_authentication_hmac_key`
-
-Read-Only:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. Read more here.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string. Read more here.
-
-
-
-
-### Nested Schema for `configuration.format`
-
-Read-Only:
-
-- `destination_gcs_output_format_avro_apache_avro` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro))
-- `destination_gcs_output_format_csv_comma_separated_values` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values))
-- `destination_gcs_output_format_json_lines_newline_delimited_json` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json))
-- `destination_gcs_output_format_parquet_columnar_storage` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_parquet_columnar_storage))
-- `destination_gcs_update_output_format_avro_apache_avro` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro))
-- `destination_gcs_update_output_format_csv_comma_separated_values` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values))
-- `destination_gcs_update_output_format_json_lines_newline_delimited_json` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json))
-- `destination_gcs_update_output_format_parquet_columnar_storage` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_parquet_columnar_storage))
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro`
-
-Read-Only:
-
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type`
-
-Read-Only:
-
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Read-Only:
-
-- `codec` (String) must be one of ["bzip2"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_deflate`
-
-Read-Only:
-
-- `codec` (String) must be one of ["Deflate"]
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Read-Only:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_snappy`
-
-Read-Only:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_xz`
-
-Read-Only:
-
-- `codec` (String) must be one of ["xz"]
-- `compression_level` (Number) The presets 0-3 are fast presets with medium compression. The presets 4-6 are fairly slow presets with high compression. The default preset is 6. The presets 7-9 are like the preset 6 but use bigger dictionaries and have higher compressor and decompressor memory requirements. Unless the uncompressed size of the file exceeds 8 MiB, 16 MiB, or 32 MiB, it is a waste of memory to use the presets 7, 8, or 9, respectively. Read more here for details.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_zstandard`
-
-Read-Only:
-
-- `codec` (String) must be one of ["zstandard"]
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values.format_type`
-
-Read-Only:
-
-- `destination_gcs_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values--format_type--destination_gcs_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_gcs_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values--format_type--destination_gcs_output_format_csv_comma_separated_values_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values.format_type.destination_gcs_output_format_csv_comma_separated_values_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values.format_type.destination_gcs_output_format_csv_comma_separated_values_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json--compression))
-- `format_type` (String) must be one of ["JSONL"]
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json.format_type`
-
-Read-Only:
-
-- `destination_gcs_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json--format_type--destination_gcs_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_gcs_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json--format_type--destination_gcs_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json.format_type.destination_gcs_output_format_json_lines_newline_delimited_json_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json.format_type.destination_gcs_output_format_json_lines_newline_delimited_json_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_parquet_columnar_storage`
-
-Read-Only:
-
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
-The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `format_type` (String) must be one of ["Parquet"]
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro`
-
-Read-Only:
-
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type`
-
-Read-Only:
-
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Read-Only:
-
-- `codec` (String) must be one of ["bzip2"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_deflate`
-
-Read-Only:
-
-- `codec` (String) must be one of ["Deflate"]
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Read-Only:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_snappy`
-
-Read-Only:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_xz`
-
-Read-Only:
-
-- `codec` (String) must be one of ["xz"]
-- `compression_level` (Number) The presets 0-3 are fast presets with medium compression. The presets 4-6 are fairly slow presets with high compression. The default preset is 6. The presets 7-9 are like the preset 6 but use bigger dictionaries and have higher compressor and decompressor memory requirements. Unless the uncompressed size of the file exceeds 8 MiB, 16 MiB, or 32 MiB, it is a waste of memory to use the presets 7, 8, or 9, respectively. Read more here for details.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_zstandard`
-
-Read-Only:
-
-- `codec` (String) must be one of ["zstandard"]
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values.format_type`
-
-Read-Only:
-
-- `destination_gcs_update_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values--format_type--destination_gcs_update_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_gcs_update_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values--format_type--destination_gcs_update_output_format_csv_comma_separated_values_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values.format_type.destination_gcs_update_output_format_csv_comma_separated_values_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values.format_type.destination_gcs_update_output_format_csv_comma_separated_values_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json--compression))
-- `format_type` (String) must be one of ["JSONL"]
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json.format_type`
-
-Read-Only:
-
-- `destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json--format_type--destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json--format_type--destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json.format_type.destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json.format_type.destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_parquet_columnar_storage`
-
-Read-Only:
-
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
-The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `format_type` (String) must be one of ["Parquet"]
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
-
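Since this change exposes `configuration` as a JSON-encoded string instead of nested attributes, consumers that previously referenced nested fields (such as `gcs_bucket_name` from the schema removed above) can recover them with Terraform's built-in `jsondecode()` function. A minimal sketch, assuming the data source takes a `destination_id` argument as in the examples above:

```hcl
data "airbyte_destination_gcs" "my_destination_gcs" {
  destination_id = "..."
}

locals {
  # Decode the JSON-encoded configuration string back into an object.
  gcs_config = jsondecode(data.airbyte_destination_gcs.my_destination_gcs.configuration)
}

output "gcs_bucket_name" {
  # Key names follow the schema this diff removes; adjust to the actual payload.
  value = local.gcs_config.gcs_bucket_name
}
```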
diff --git a/docs/data-sources/destination_google_sheets.md b/docs/data-sources/destination_google_sheets.md
index 6513438a8..ce1340761 100644
--- a/docs/data-sources/destination_google_sheets.md
+++ b/docs/data-sources/destination_google_sheets.md
@@ -27,26 +27,10 @@ data "airbyte_destination_google_sheets" "my_destination_googlesheets" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Google API Credentials for connecting to Google Sheets and Google Drive APIs (see [below for nested schema](#nestedatt--configuration--credentials))
-- `destination_type` (String) must be one of ["google-sheets"]
-- `spreadsheet_id` (String) The link to your spreadsheet. See this guide for more details.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your Google Sheets developer application.
-- `client_secret` (String) The Client Secret of your Google Sheets developer application.
-- `refresh_token` (String) The token for obtaining new access token.
-
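With `configuration` now exposed as a JSON-encoded string rather than a nested attribute block, its fields can no longer be referenced as attributes; they can instead be recovered with Terraform's built-in `jsondecode()` function. A minimal sketch (the `destination_id` value is a placeholder, and the `spreadsheet_id` field path is illustrative, taken from the nested schema removed above):

```hcl
# Sketch: decoding the JSON-encoded `configuration` string.
data "airbyte_destination_google_sheets" "my_destination_googlesheets" {
  destination_id = "00000000-0000-0000-0000-000000000000" # placeholder
}

locals {
  # jsondecode() converts the JSON string into a Terraform object
  gsheets_config = jsondecode(
    data.airbyte_destination_google_sheets.my_destination_googlesheets.configuration
  )
}

output "spreadsheet_id" {
  # `spreadsheet_id` was a top-level key of the former nested schema
  value = local.gsheets_config.spreadsheet_id
}
```

The same pattern applies to every destination data source in this changeset whose `configuration` attribute moved from `(Attributes)` to `(String) Parsed as JSON.`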
diff --git a/docs/data-sources/destination_keen.md b/docs/data-sources/destination_keen.md
index c8d1b3efc..1ee3a8435 100644
--- a/docs/data-sources/destination_keen.md
+++ b/docs/data-sources/destination_keen.md
@@ -27,18 +27,10 @@ data "airbyte_destination_keen" "my_destination_keen" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) To get the Keen Master API Key, navigate to the Access tab from the left-hand side panel and check the Project Details section.
-- `destination_type` (String) must be one of ["keen"]
-- `infer_timestamp` (Boolean) Allow connector to guess keen.timestamp value based on the streamed data.
-- `project_id` (String) To get the Keen Project ID, navigate to the Access tab from the left-hand side panel and check the Project Details section.
-
diff --git a/docs/data-sources/destination_kinesis.md b/docs/data-sources/destination_kinesis.md
index 505774cb9..f1efe960f 100644
--- a/docs/data-sources/destination_kinesis.md
+++ b/docs/data-sources/destination_kinesis.md
@@ -27,21 +27,10 @@ data "airbyte_destination_kinesis" "my_destination_kinesis" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key` (String) Generate the AWS Access Key for current user.
-- `buffer_size` (Number) Buffer size for storing Kinesis records before being batch streamed.
-- `destination_type` (String) must be one of ["kinesis"]
-- `endpoint` (String) AWS Kinesis endpoint.
-- `private_key` (String) The AWS Private Key - a string of numbers and letters that is unique for each account, also known as a "recovery phrase".
-- `region` (String) AWS region. Your account determines the Regions that are available to you.
-- `shard_count` (Number) Number of shards to which the data should be streamed.
-
diff --git a/docs/data-sources/destination_langchain.md b/docs/data-sources/destination_langchain.md
index d6c478c6e..464749589 100644
--- a/docs/data-sources/destination_langchain.md
+++ b/docs/data-sources/destination_langchain.md
@@ -27,145 +27,10 @@ data "airbyte_destination_langchain" "my_destination_langchain" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `destination_type` (String) must be one of ["langchain"]
-- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
-- `indexing` (Attributes) Indexing configuration (see [below for nested schema](#nestedatt--configuration--indexing))
-- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
-
-
-### Nested Schema for `configuration.embedding`
-
-Read-Only:
-
-- `destination_langchain_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_embedding_fake))
-- `destination_langchain_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_embedding_open_ai))
-- `destination_langchain_update_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_update_embedding_fake))
-- `destination_langchain_update_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_update_embedding_open_ai))
-
-
-### Nested Schema for `configuration.embedding.destination_langchain_embedding_fake`
-
-Read-Only:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_langchain_embedding_open_ai`
-
-Read-Only:
-
-- `mode` (String) must be one of ["openai"]
-- `openai_key` (String)
-
-
-
-### Nested Schema for `configuration.embedding.destination_langchain_update_embedding_fake`
-
-Read-Only:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_langchain_update_embedding_open_ai`
-
-Read-Only:
-
-- `mode` (String) must be one of ["openai"]
-- `openai_key` (String)
-
-
-
-
-### Nested Schema for `configuration.indexing`
-
-Read-Only:
-
-- `destination_langchain_indexing_chroma_local_persistance` (Attributes) Chroma is a popular vector store that can be used to store and retrieve embeddings. It will build its index in memory and persist it to disk by the end of the sync. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_indexing_chroma_local_persistance))
-- `destination_langchain_indexing_doc_array_hnsw_search` (Attributes) DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_indexing_doc_array_hnsw_search))
-- `destination_langchain_indexing_pinecone` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. It is a managed service and can also be queried from outside of langchain. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_indexing_pinecone))
-- `destination_langchain_update_indexing_chroma_local_persistance` (Attributes) Chroma is a popular vector store that can be used to store and retrieve embeddings. It will build its index in memory and persist it to disk by the end of the sync. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_update_indexing_chroma_local_persistance))
-- `destination_langchain_update_indexing_doc_array_hnsw_search` (Attributes) DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_update_indexing_doc_array_hnsw_search))
-- `destination_langchain_update_indexing_pinecone` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. It is a managed service and can also be queried from outside of langchain. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_update_indexing_pinecone))
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_indexing_chroma_local_persistance`
-
-Read-Only:
-
-- `collection_name` (String) Name of the collection to use.
-- `destination_path` (String) Path to the directory where chroma files will be written. The files will be placed inside that local mount.
-- `mode` (String) must be one of ["chroma_local"]
-
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_indexing_doc_array_hnsw_search`
-
-Read-Only:
-
-- `destination_path` (String) Path to the directory where hnswlib and meta data files will be written. The files will be placed inside that local mount. All files in the specified destination directory will be deleted on each run.
-- `mode` (String) must be one of ["DocArrayHnswSearch"]
-
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_indexing_pinecone`
-
-Read-Only:
-
-- `index` (String) Pinecone index to use
-- `mode` (String) must be one of ["pinecone"]
-- `pinecone_environment` (String) Pinecone environment to use
-- `pinecone_key` (String)
-
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_update_indexing_chroma_local_persistance`
-
-Read-Only:
-
-- `collection_name` (String) Name of the collection to use.
-- `destination_path` (String) Path to the directory where chroma files will be written. The files will be placed inside that local mount.
-- `mode` (String) must be one of ["chroma_local"]
-
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_update_indexing_doc_array_hnsw_search`
-
-Read-Only:
-
-- `destination_path` (String) Path to the directory where hnswlib and meta data files will be written. The files will be placed inside that local mount. All files in the specified destination directory will be deleted on each run.
-- `mode` (String) must be one of ["DocArrayHnswSearch"]
-
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_update_indexing_pinecone`
-
-Read-Only:
-
-- `index` (String) Pinecone index to use
-- `mode` (String) must be one of ["pinecone"]
-- `pinecone_environment` (String) Pinecone environment to use
-- `pinecone_key` (String)
-
-
-
-
-### Nested Schema for `configuration.processing`
-
-Read-Only:
-
-- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context
-- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context of your LLM)
-- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. All other fields are passed along as meta fields. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array.
-
diff --git a/docs/data-sources/destination_milvus.md b/docs/data-sources/destination_milvus.md
index 515a84e8e..ce46b12e2 100644
--- a/docs/data-sources/destination_milvus.md
+++ b/docs/data-sources/destination_milvus.md
@@ -27,195 +27,10 @@ data "airbyte_destination_milvus" "my_destination_milvus" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `destination_type` (String) must be one of ["milvus"]
-- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
-- `indexing` (Attributes) Indexing configuration (see [below for nested schema](#nestedatt--configuration--indexing))
-- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
-
-
-### Nested Schema for `configuration.embedding`
-
-Read-Only:
-
-- `destination_milvus_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_cohere))
-- `destination_milvus_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_fake))
-- `destination_milvus_embedding_from_field` (Attributes) Use a field in the record as the embedding. This is useful if you already have an embedding for your data and want to store it in the vector store. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_from_field))
-- `destination_milvus_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_open_ai))
-- `destination_milvus_update_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_cohere))
-- `destination_milvus_update_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_fake))
-- `destination_milvus_update_embedding_from_field` (Attributes) Use a field in the record as the embedding. This is useful if you already have an embedding for your data and want to store it in the vector store. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_from_field))
-- `destination_milvus_update_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_open_ai))
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_cohere`
-
-Read-Only:
-
-- `cohere_key` (String)
-- `mode` (String) must be one of ["cohere"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_fake`
-
-Read-Only:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_from_field`
-
-Read-Only:
-
-- `dimensions` (Number) The number of dimensions the embedding model is generating
-- `field_name` (String) Name of the field in the record that contains the embedding
-- `mode` (String) must be one of ["from_field"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_open_ai`
-
-Read-Only:
-
-- `mode` (String) must be one of ["openai"]
-- `openai_key` (String)
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_cohere`
-
-Read-Only:
-
-- `cohere_key` (String)
-- `mode` (String) must be one of ["cohere"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_fake`
-
-Read-Only:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_from_field`
-
-Read-Only:
-
-- `dimensions` (Number) The number of dimensions the embedding model is generating
-- `field_name` (String) Name of the field in the record that contains the embedding
-- `mode` (String) must be one of ["from_field"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_open_ai`
-
-Read-Only:
-
-- `mode` (String) must be one of ["openai"]
-- `openai_key` (String)
-
-
-
-
-### Nested Schema for `configuration.indexing`
-
-Read-Only:
-
-- `auth` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--indexing--auth))
-- `collection` (String) The collection to load data into
-- `db` (String) The database to connect to
-- `host` (String) The public endpoint of the Milvus instance.
-- `text_field` (String) The field in the entity that contains the embedded text
-- `vector_field` (String) The field in the entity that contains the vector
-
-
-### Nested Schema for `configuration.indexing.auth`
-
-Read-Only:
-
-- `destination_milvus_indexing_authentication_api_token` (Attributes) Authenticate using an API token (suitable for Zilliz Cloud) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_indexing_authentication_api_token))
-- `destination_milvus_indexing_authentication_no_auth` (Attributes) Do not authenticate (suitable for locally running test clusters, do not use for clusters with public IP addresses) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_indexing_authentication_no_auth))
-- `destination_milvus_indexing_authentication_username_password` (Attributes) Authenticate using username and password (suitable for self-managed Milvus clusters) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_indexing_authentication_username_password))
-- `destination_milvus_update_indexing_authentication_api_token` (Attributes) Authenticate using an API token (suitable for Zilliz Cloud) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_update_indexing_authentication_api_token))
-- `destination_milvus_update_indexing_authentication_no_auth` (Attributes) Do not authenticate (suitable for locally running test clusters, do not use for clusters with public IP addresses) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_update_indexing_authentication_no_auth))
-- `destination_milvus_update_indexing_authentication_username_password` (Attributes) Authenticate using username and password (suitable for self-managed Milvus clusters) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_update_indexing_authentication_username_password))
-
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_indexing_authentication_api_token`
-
-Read-Only:
-
-- `mode` (String) must be one of ["token"]
-- `token` (String) API Token for the Milvus instance
-
-
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_indexing_authentication_no_auth`
-
-Read-Only:
-
-- `mode` (String) must be one of ["no_auth"]
-
-
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_indexing_authentication_username_password`
-
-Read-Only:
-
-- `mode` (String) must be one of ["username_password"]
-- `password` (String) Password for the Milvus instance
-- `username` (String) Username for the Milvus instance
-
-
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_api_token`
-
-Read-Only:
-
-- `mode` (String) must be one of ["token"]
-- `token` (String) API Token for the Milvus instance
-
-
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_no_auth`
-
-Read-Only:
-
-- `mode` (String) must be one of ["no_auth"]
-
-
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_username_password`
-
-Read-Only:
-
-- `mode` (String) must be one of ["username_password"]
-- `password` (String) Password for the Milvus instance
-- `username` (String) Username for the Milvus instance
-
-
-
-
-
-### Nested Schema for `configuration.processing`
-
-Read-Only:
-
-- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context
-- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context of your LLM)
-- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
-- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array.
-
diff --git a/docs/data-sources/destination_mongodb.md b/docs/data-sources/destination_mongodb.md
index fcb6ba121..40f2b6dd2 100644
--- a/docs/data-sources/destination_mongodb.md
+++ b/docs/data-sources/destination_mongodb.md
@@ -27,218 +27,10 @@ data "airbyte_destination_mongodb" "my_destination_mongodb" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_type` (Attributes) Authorization type. (see [below for nested schema](#nestedatt--configuration--auth_type))
-- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["mongodb"]
-- `instance_type` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-
-
-### Nested Schema for `configuration.auth_type`
-
-Read-Only:
-
-- `destination_mongodb_authorization_type_login_password` (Attributes) Login/Password. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_authorization_type_login_password))
-- `destination_mongodb_authorization_type_none` (Attributes) None. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_authorization_type_none))
-- `destination_mongodb_update_authorization_type_login_password` (Attributes) Login/Password. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_update_authorization_type_login_password))
-- `destination_mongodb_update_authorization_type_none` (Attributes) None. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_update_authorization_type_none))
-
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_authorization_type_login_password`
-
-Read-Only:
-
-- `authorization` (String) must be one of ["login/password"]
-- `password` (String) Password associated with the username.
-- `username` (String) Username to use to access the database.
-
-
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_authorization_type_none`
-
-Read-Only:
-
-- `authorization` (String) must be one of ["none"]
-
-
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_update_authorization_type_login_password`
-
-Read-Only:
-
-- `authorization` (String) must be one of ["login/password"]
-- `password` (String) Password associated with the username.
-- `username` (String) Username to use to access the database.
-
-
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_update_authorization_type_none`
-
-Read-Only:
-
-- `authorization` (String) must be one of ["none"]
-
-
-
-
-### Nested Schema for `configuration.instance_type`
-
-Read-Only:
-
-- `destination_mongodb_mongo_db_instance_type_mongo_db_atlas` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_mongo_db_instance_type_mongo_db_atlas))
-- `destination_mongodb_mongo_db_instance_type_replica_set` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_mongo_db_instance_type_replica_set))
-- `destination_mongodb_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_mongo_db_instance_type_standalone_mongo_db_instance))
-- `destination_mongodb_update_mongo_db_instance_type_mongo_db_atlas` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_update_mongo_db_instance_type_mongo_db_atlas))
-- `destination_mongodb_update_mongo_db_instance_type_replica_set` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_update_mongo_db_instance_type_replica_set))
-- `destination_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance))
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_mongo_db_instance_type_mongo_db_atlas`
-
-Read-Only:
-
-- `cluster_url` (String) URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_mongo_db_instance_type_replica_set`
-
-Read-Only:
-
-- `instance` (String) must be one of ["replica"]
-- `replica_set` (String) A replica set name.
-- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member separated by commas.
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_mongo_db_instance_type_standalone_mongo_db_instance`
-
-Read-Only:
-
-- `host` (String) The Host of a Mongo database to be replicated.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The Port of a Mongo database to be replicated.
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_update_mongo_db_instance_type_mongo_db_atlas`
-
-Read-Only:
-
-- `cluster_url` (String) URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_update_mongo_db_instance_type_replica_set`
-
-Read-Only:
-
-- `instance` (String) must be one of ["replica"]
-- `replica_set` (String) A replica set name.
-- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member separated by commas.
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance`
-
-Read-Only:
-
-- `host` (String) The Host of a Mongo database to be replicated.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The Port of a Mongo database to be replicated.
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_mongodb_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_ssh_tunnel_method_no_tunnel))
-- `destination_mongodb_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_ssh_tunnel_method_password_authentication))
-- `destination_mongodb_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_ssh_tunnel_method_ssh_key_authentication))
-- `destination_mongodb_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_update_ssh_tunnel_method_no_tunnel))
-- `destination_mongodb_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_update_ssh_tunnel_method_password_authentication))
-- `destination_mongodb_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/destination_mssql.md b/docs/data-sources/destination_mssql.md
index 35c20caac..fa77c0edf 100644
--- a/docs/data-sources/destination_mssql.md
+++ b/docs/data-sources/destination_mssql.md
@@ -27,150 +27,10 @@ data "airbyte_destination_mssql" "my_destination_mssql" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) The name of the MSSQL database.
-- `destination_type` (String) must be one of ["mssql"]
-- `host` (String) The host name of the MSSQL database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with this username.
-- `port` (Number) The port of the MSSQL database.
-- `schema` (String) The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public".
-- `ssl_method` (Attributes) The encryption method which is used to communicate with the database. (see [below for nested schema](#nestedatt--configuration--ssl_method))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) The username which is used to access the database.
-
-
-### Nested Schema for `configuration.ssl_method`
-
-Read-Only:
-
-- `destination_mssql_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_ssl_method_encrypted_trust_server_certificate))
-- `destination_mssql_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_ssl_method_encrypted_verify_certificate))
-- `destination_mssql_update_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_update_ssl_method_encrypted_trust_server_certificate))
-- `destination_mssql_update_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_update_ssl_method_encrypted_verify_certificate))
-
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_ssl_method_encrypted_trust_server_certificate`
-
-Read-Only:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_ssl_method_encrypted_verify_certificate`
-
-Read-Only:
-
-- `host_name_in_certificate` (String) Specifies the host name of the server. The value of this property must match the subject property of the certificate.
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_update_ssl_method_encrypted_trust_server_certificate`
-
-Read-Only:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_update_ssl_method_encrypted_verify_certificate`
-
-Read-Only:
-
-- `host_name_in_certificate` (String) Specifies the host name of the server. The value of this property must match the subject property of the certificate.
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_mssql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_ssh_tunnel_method_no_tunnel))
-- `destination_mssql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_ssh_tunnel_method_password_authentication))
-- `destination_mssql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_ssh_tunnel_method_ssh_key_authentication))
-- `destination_mssql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_update_ssh_tunnel_method_no_tunnel))
-- `destination_mssql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_update_ssh_tunnel_method_password_authentication))
-- `destination_mssql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
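With this change `configuration` is returned as a JSON-encoded string rather than nested attributes, so existing references need a `jsondecode()`. A minimal sketch, reusing the data-source labels from this file's example block; the `destination_id` argument and the decoded field names are assumptions based on the removed nested schema:

```terraform
data "airbyte_destination_mssql" "my_destination_mssql" {
  destination_id = "..." # hypothetical lookup argument; supply a real destination ID
}

locals {
  # `configuration` is now a JSON string; decode it once and reference the
  # result wherever the nested attributes were used before.
  mssql = jsondecode(data.airbyte_destination_mssql.my_destination_mssql.configuration)
}

output "mssql_endpoint" {
  # host, port, and database mirror the removed nested schema above
  value = "${local.mssql.host}:${local.mssql.port}/${local.mssql.database}"
}
```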
diff --git a/docs/data-sources/destination_mysql.md b/docs/data-sources/destination_mysql.md
index 01493670c..f1c051b1c 100644
--- a/docs/data-sources/destination_mysql.md
+++ b/docs/data-sources/destination_mysql.md
@@ -27,103 +27,10 @@ data "airbyte_destination_mysql" "my_destination_mysql" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["mysql"]
-- `host` (String) Hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) Username to use to access the database.
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_mysql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_ssh_tunnel_method_no_tunnel))
-- `destination_mysql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_ssh_tunnel_method_password_authentication))
-- `destination_mysql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_ssh_tunnel_method_ssh_key_authentication))
-- `destination_mysql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_update_ssh_tunnel_method_no_tunnel))
-- `destination_mysql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_update_ssh_tunnel_method_password_authentication))
-- `destination_mysql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
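Configurations that previously walked the `tunnel_method` nested attributes can branch on the decoded JSON instead. A sketch under the same assumptions (labels from this file's example block; JSON shape inferred from the removed schema):

```terraform
data "airbyte_destination_mysql" "my_destination_mysql" {
  destination_id = "..." # hypothetical lookup argument; supply a real destination ID
}

locals {
  mysql = jsondecode(data.airbyte_destination_mysql.my_destination_mysql.configuration)

  # In the removed schema, tunnel_method.tunnel_method carried the
  # discriminator: "NO_TUNNEL", "SSH_PASSWORD_AUTH", or "SSH_KEY_AUTH".
  mysql_uses_ssh = try(local.mysql.tunnel_method.tunnel_method, "NO_TUNNEL") != "NO_TUNNEL"
}

output "mysql_jump_host" {
  # tunnel_host is only present for the SSH variants; guard with the flag above.
  value = local.mysql_uses_ssh ? local.mysql.tunnel_method.tunnel_host : null
}
```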
diff --git a/docs/data-sources/destination_oracle.md b/docs/data-sources/destination_oracle.md
index b7287088b..d439323af 100644
--- a/docs/data-sources/destination_oracle.md
+++ b/docs/data-sources/destination_oracle.md
@@ -27,104 +27,10 @@ data "airbyte_destination_oracle" "my_destination_oracle" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `destination_type` (String) must be one of ["oracle"]
-- `host` (String) The hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with the username.
-- `port` (Number) The port of the database.
-- `schema` (String) The default schema is used as the target schema for all statements issued from the connection that do not explicitly specify a schema name. The usual value for this field is "airbyte". In Oracle, schemas and users are the same thing, so the "user" parameter is used as the login credentials and this is used for the default Airbyte message schema.
-- `sid` (String) The System Identifier uniquely distinguishes the instance from any other instance on the same computer.
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) The username to access the database. This user must have CREATE USER privileges in the database.
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_oracle_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_ssh_tunnel_method_no_tunnel))
-- `destination_oracle_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_ssh_tunnel_method_password_authentication))
-- `destination_oracle_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_ssh_tunnel_method_ssh_key_authentication))
-- `destination_oracle_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_update_ssh_tunnel_method_no_tunnel))
-- `destination_oracle_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_update_ssh_tunnel_method_password_authentication))
-- `destination_oracle_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
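Optional fields such as `jdbc_url_params` may be absent from the decoded object, so `try()` keeps plans from erroring. A hedged sketch under the same assumptions as the example block above (the connect-string shape is illustrative only):

```terraform
data "airbyte_destination_oracle" "my_destination_oracle" {
  destination_id = "..." # hypothetical lookup argument; supply a real destination ID
}

locals {
  oracle = jsondecode(data.airbyte_destination_oracle.my_destination_oracle.configuration)
}

output "oracle_connect_descriptor" {
  # sid identifies the instance (see the removed schema); jdbc_url_params is
  # optional, so fall back to an empty string when it is missing.
  value = "${local.oracle.host}:${local.oracle.port}/${local.oracle.sid}${try("?${local.oracle.jdbc_url_params}", "")}"
}
```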
diff --git a/docs/data-sources/destination_pinecone.md b/docs/data-sources/destination_pinecone.md
index a8cbc6933..6bdd19ec0 100644
--- a/docs/data-sources/destination_pinecone.md
+++ b/docs/data-sources/destination_pinecone.md
@@ -27,103 +27,10 @@ data "airbyte_destination_pinecone" "my_destination_pinecone" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `destination_type` (String) must be one of ["pinecone"]
-- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
-- `indexing` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. (see [below for nested schema](#nestedatt--configuration--indexing))
-- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
-
-
-### Nested Schema for `configuration.embedding`
-
-Read-Only:
-
-- `destination_pinecone_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_embedding_cohere))
-- `destination_pinecone_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_embedding_fake))
-- `destination_pinecone_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_embedding_open_ai))
-- `destination_pinecone_update_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_update_embedding_cohere))
-- `destination_pinecone_update_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_update_embedding_fake))
-- `destination_pinecone_update_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_update_embedding_open_ai))
-
-
-### Nested Schema for `configuration.embedding.destination_pinecone_embedding_cohere`
-
-Read-Only:
-
-- `cohere_key` (String)
-- `mode` (String) must be one of ["cohere"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_pinecone_embedding_fake`
-
-Read-Only:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_pinecone_embedding_open_ai`
-
-Read-Only:
-
-- `mode` (String) must be one of ["openai"]
-- `openai_key` (String)
-
-
-
-### Nested Schema for `configuration.embedding.destination_pinecone_update_embedding_cohere`
-
-Read-Only:
-
-- `cohere_key` (String)
-- `mode` (String) must be one of ["cohere"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_pinecone_update_embedding_fake`
-
-Read-Only:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_pinecone_update_embedding_open_ai`
-
-Read-Only:
-
-- `mode` (String) must be one of ["openai"]
-- `openai_key` (String)
-
-
-
-
-### Nested Schema for `configuration.indexing`
-
-Read-Only:
-
-- `index` (String) Pinecone index to use
-- `pinecone_environment` (String) Pinecone environment to use
-- `pinecone_key` (String)
-
-
-
-### Nested Schema for `configuration.processing`
-
-Read-Only:
-
-- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context
-- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context if your LLM)
-- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
-- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array.
-
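The list-valued processing fields survive the collapse to a string as JSON arrays. A minimal sketch of reading them back (labels from this file's example block; field names taken from the removed schema, the rest assumed):

```terraform
data "airbyte_destination_pinecone" "my_destination_pinecone" {
  destination_id = "..." # hypothetical lookup argument; supply a real destination ID
}

locals {
  pinecone = jsondecode(data.airbyte_destination_pinecone.my_destination_pinecone.configuration)
}

output "pinecone_text_fields" {
  # Dot-notation paths such as "user.name" or wildcards like "users.*.name"
  # appear as plain strings in this list, per the removed processing schema.
  value = try(local.pinecone.processing.text_fields, [])
}
```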
diff --git a/docs/data-sources/destination_postgres.md b/docs/data-sources/destination_postgres.md
index c61ff7598..4343cf009 100644
--- a/docs/data-sources/destination_postgres.md
+++ b/docs/data-sources/destination_postgres.md
@@ -27,239 +27,10 @@ data "airbyte_destination_postgres" "my_destination_postgres" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["postgres"]
-- `host` (String) Hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `schema` (String) The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public".
-- `ssl_mode` (Attributes) SSL connection modes.
- disable - Chose this mode to disable encryption of communication between Airbyte and destination database
- allow - Chose this mode to enable encryption only when required by the source database
- prefer - Chose this mode to allow unencrypted connection only if the source database does not support encryption
- require - Chose this mode to always require encryption. If the source database server does not support encryption, connection will fail
- verify-ca - Chose this mode to always require encryption and to verify that the source database server has a valid SSL certificate
- verify-full - This is the most secure mode. Chose this mode to always require encryption and to verify the identity of the source database server
- See more information - in the docs. (see [below for nested schema](#nestedatt--configuration--ssl_mode))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) Username to use to access the database.
-
-
-### Nested Schema for `configuration.ssl_mode`
-
-Read-Only:
-
-- `destination_postgres_ssl_modes_allow` (Attributes) Allow SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_allow))
-- `destination_postgres_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_disable))
-- `destination_postgres_ssl_modes_prefer` (Attributes) Prefer SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_prefer))
-- `destination_postgres_ssl_modes_require` (Attributes) Require SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_require))
-- `destination_postgres_ssl_modes_verify_ca` (Attributes) Verify-ca SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_verify_ca))
-- `destination_postgres_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_verify_full))
-- `destination_postgres_update_ssl_modes_allow` (Attributes) Allow SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_allow))
-- `destination_postgres_update_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_disable))
-- `destination_postgres_update_ssl_modes_prefer` (Attributes) Prefer SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_prefer))
-- `destination_postgres_update_ssl_modes_require` (Attributes) Require SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_require))
-- `destination_postgres_update_ssl_modes_verify_ca` (Attributes) Verify-ca SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_verify_ca))
-- `destination_postgres_update_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_allow`
-
-Read-Only:
-
-- `mode` (String) must be one of ["allow"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_disable`
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_prefer`
-
-Read-Only:
-
-- `mode` (String) must be one of ["prefer"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_require`
-
-Read-Only:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_verify_ca`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_verify_full`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_allow`
-
-Read-Only:
-
-- `mode` (String) must be one of ["allow"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_disable`
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_prefer`
-
-Read-Only:
-
-- `mode` (String) must be one of ["prefer"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_require`
-
-Read-Only:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_verify_ca`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_verify_full`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_postgres_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_ssh_tunnel_method_no_tunnel))
-- `destination_postgres_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_ssh_tunnel_method_password_authentication))
-- `destination_postgres_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_ssh_tunnel_method_ssh_key_authentication))
-- `destination_postgres_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_update_ssh_tunnel_method_no_tunnel))
-- `destination_postgres_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_update_ssh_tunnel_method_password_authentication))
-- `destination_postgres_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/destination_pubsub.md b/docs/data-sources/destination_pubsub.md
index 251e0d5d8..77918b67f 100644
--- a/docs/data-sources/destination_pubsub.md
+++ b/docs/data-sources/destination_pubsub.md
@@ -27,23 +27,10 @@ data "airbyte_destination_pubsub" "my_destination_pubsub" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `batching_delay_threshold` (Number) Number of ms before the buffer is flushed
-- `batching_element_count_threshold` (Number) Number of messages before the buffer is flushed
-- `batching_enabled` (Boolean) If TRUE messages will be buffered instead of sending them one by one
-- `batching_request_bytes_threshold` (Number) Number of bytes before the buffer is flushed
-- `credentials_json` (String) The contents of the JSON service account key. Check out the docs if you need help generating this key.
-- `destination_type` (String) must be one of ["pubsub"]
-- `ordering_enabled` (Boolean) If TRUE PubSub publisher will have message ordering enabled. Every message will have an ordering key of stream
-- `project_id` (String) The GCP project ID for the project containing the target PubSub.
-- `topic_id` (String) The PubSub topic ID in the given GCP project ID.
-
diff --git a/docs/data-sources/destination_qdrant.md b/docs/data-sources/destination_qdrant.md
new file mode 100644
index 000000000..eedc79bd1
--- /dev/null
+++ b/docs/data-sources/destination_qdrant.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_destination_qdrant Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ DestinationQdrant DataSource
+---
+
+# airbyte_destination_qdrant (Data Source)
+
+DestinationQdrant DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_destination_qdrant" "my_destination_qdrant" {
+ destination_id = "...my_destination_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `destination_id` (String)
+
+### Read-Only
+
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
+- `name` (String)
+- `workspace_id` (String)
+
+
diff --git a/docs/data-sources/destination_redis.md b/docs/data-sources/destination_redis.md
index 8dce9b1fa..e7dc7a46c 100644
--- a/docs/data-sources/destination_redis.md
+++ b/docs/data-sources/destination_redis.md
@@ -27,157 +27,10 @@ data "airbyte_destination_redis" "my_destination_redis" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `cache_type` (String) must be one of ["hash"]
-Redis cache type to store data in.
-- `destination_type` (String) must be one of ["redis"]
-- `host` (String) Redis host to connect to.
-- `password` (String) Password associated with Redis.
-- `port` (Number) Port of Redis.
-- `ssl` (Boolean) Indicates whether SSL encryption protocol will be used to connect to Redis. It is recommended to use SSL connection if possible.
-- `ssl_mode` (Attributes) SSL connection modes.
-
- verify-full - This is the most secure mode. Always require encryption and verifies the identity of the source database server (see [below for nested schema](#nestedatt--configuration--ssl_mode))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) Username associated with Redis.
-
-
-### Nested Schema for `configuration.ssl_mode`
-
-Read-Only:
-
-- `destination_redis_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_ssl_modes_disable))
-- `destination_redis_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_ssl_modes_verify_full))
-- `destination_redis_update_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_update_ssl_modes_disable))
-- `destination_redis_update_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_ssl_modes_disable`
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_ssl_modes_verify_full`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_update_ssl_modes_disable`
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_update_ssl_modes_verify_full`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_redis_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_ssh_tunnel_method_no_tunnel))
-- `destination_redis_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_ssh_tunnel_method_password_authentication))
-- `destination_redis_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_ssh_tunnel_method_ssh_key_authentication))
-- `destination_redis_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_update_ssh_tunnel_method_no_tunnel))
-- `destination_redis_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_update_ssh_tunnel_method_password_authentication))
-- `destination_redis_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/destination_redshift.md b/docs/data-sources/destination_redshift.md
index b0775d232..11b98f6c8 100644
--- a/docs/data-sources/destination_redshift.md
+++ b/docs/data-sources/destination_redshift.md
@@ -27,220 +27,10 @@ data "airbyte_destination_redshift" "my_destination_redshift" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["redshift"]
-- `host` (String) Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `schema` (String) The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public".
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `uploading_method` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method))
-- `username` (String) Username to use to access the database.
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_redshift_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_ssh_tunnel_method_no_tunnel))
-- `destination_redshift_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_ssh_tunnel_method_password_authentication))
-- `destination_redshift_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_ssh_tunnel_method_ssh_key_authentication))
-- `destination_redshift_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_update_ssh_tunnel_method_no_tunnel))
-- `destination_redshift_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_update_ssh_tunnel_method_password_authentication))
-- `destination_redshift_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-
-### Nested Schema for `configuration.uploading_method`
-
-Read-Only:
-
-- `destination_redshift_update_uploading_method_s3_staging` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging))
-- `destination_redshift_update_uploading_method_standard` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_standard))
-- `destination_redshift_uploading_method_s3_staging` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging))
-- `destination_redshift_uploading_method_standard` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_standard))
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging`
-
-Read-Only:
-
-- `access_key_id` (String) This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
-- `encryption` (Attributes) How to encrypt the staging data (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging--encryption))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `method` (String) must be one of ["S3 Staging"]
-- `purge_staging_data` (Boolean) Whether to delete the staging files from S3 after completing the sync. See docs for details.
-- `s3_bucket_name` (String) The name of the staging S3 bucket to use if utilising a COPY strategy. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
-- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1"]
-The region of the S3 staging bucket to use if utilising a COPY strategy. See AWS docs for details.
-- `secret_access_key` (String) The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging.secret_access_key`
-
-Read-Only:
-
-- `destination_redshift_update_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption` (Attributes) Staging data will be encrypted using AES-CBC envelope encryption. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging--secret_access_key--destination_redshift_update_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption))
-- `destination_redshift_update_uploading_method_s3_staging_encryption_no_encryption` (Attributes) Staging data will be stored in plaintext. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging--secret_access_key--destination_redshift_update_uploading_method_s3_staging_encryption_no_encryption))
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging.secret_access_key.destination_redshift_update_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption`
-
-Read-Only:
-
-- `encryption_type` (String) must be one of ["aes_cbc_envelope"]
-- `key_encrypting_key` (String) The key, base64-encoded. Must be either 128, 192, or 256 bits. Leave blank to have Airbyte generate an ephemeral key for each sync.
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging.secret_access_key.destination_redshift_update_uploading_method_s3_staging_encryption_no_encryption`
-
-Read-Only:
-
-- `encryption_type` (String) must be one of ["none"]
-
-
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_standard`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging`
-
-Read-Only:
-
-- `access_key_id` (String) This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
-- `encryption` (Attributes) How to encrypt the staging data (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging--encryption))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `method` (String) must be one of ["S3 Staging"]
-- `purge_staging_data` (Boolean) Whether to delete the staging files from S3 after completing the sync. See docs for details.
-- `s3_bucket_name` (String) The name of the staging S3 bucket to use if utilising a COPY strategy. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
-- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1"]
-The region of the S3 staging bucket to use if utilising a COPY strategy. See AWS docs for details.
-- `secret_access_key` (String) The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging.secret_access_key`
-
-Read-Only:
-
-- `destination_redshift_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption` (Attributes) Staging data will be encrypted using AES-CBC envelope encryption. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging--secret_access_key--destination_redshift_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption))
-- `destination_redshift_uploading_method_s3_staging_encryption_no_encryption` (Attributes) Staging data will be stored in plaintext. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging--secret_access_key--destination_redshift_uploading_method_s3_staging_encryption_no_encryption))
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging.secret_access_key.destination_redshift_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption`
-
-Read-Only:
-
-- `encryption_type` (String) must be one of ["aes_cbc_envelope"]
-- `key_encrypting_key` (String) The key, base64-encoded. Must be either 128, 192, or 256 bits. Leave blank to have Airbyte generate an ephemeral key for each sync.
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging.secret_access_key.destination_redshift_uploading_method_s3_staging_encryption_no_encryption`
-
-Read-Only:
-
-- `encryption_type` (String) must be one of ["none"]
-
-
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_standard`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
diff --git a/docs/data-sources/destination_s3.md b/docs/data-sources/destination_s3.md
index 16f8d2284..929bed92f 100644
--- a/docs/data-sources/destination_s3.md
+++ b/docs/data-sources/destination_s3.md
@@ -27,360 +27,10 @@ data "airbyte_destination_s3" "my_destination_s3" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key_id` (String) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
-- `destination_type` (String) must be one of ["s3"]
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `format` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format))
-- `s3_bucket_name` (String) The name of the S3 bucket. Read more here.
-- `s3_bucket_path` (String) Directory under the S3 bucket where data will be written. Read more here
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 bucket. See here for all region codes.
-- `s3_endpoint` (String) Your S3 endpoint url. Read more here
-- `s3_path_format` (String) Format string on how data will be organized inside the S3 bucket directory. Read more here
-- `secret_access_key` (String) The corresponding secret to the access key ID. Read more here
-
-
-### Nested Schema for `configuration.format`
-
-Read-Only:
-
-- `destination_s3_output_format_avro_apache_avro` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro))
-- `destination_s3_output_format_csv_comma_separated_values` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values))
-- `destination_s3_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json))
-- `destination_s3_output_format_parquet_columnar_storage` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_parquet_columnar_storage))
-- `destination_s3_update_output_format_avro_apache_avro` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro))
-- `destination_s3_update_output_format_csv_comma_separated_values` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values))
-- `destination_s3_update_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json))
-- `destination_s3_update_output_format_parquet_columnar_storage` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_parquet_columnar_storage))
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro`
-
-Read-Only:
-
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type`
-
-Read-Only:
-
-- `destination_s3_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Read-Only:
-
-- `codec` (String) must be one of ["bzip2"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_deflate`
-
-Read-Only:
-
-- `codec` (String) must be one of ["Deflate"]
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Read-Only:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_snappy`
-
-Read-Only:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_xz`
-
-Read-Only:
-
-- `codec` (String) must be one of ["xz"]
-- `compression_level` (Number) See here for details.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_zstandard`
-
-Read-Only:
-
-- `codec` (String) must be one of ["zstandard"]
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values.format_type`
-
-Read-Only:
-
-- `destination_s3_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values--format_type--destination_s3_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_s3_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values--format_type--destination_s3_output_format_csv_comma_separated_values_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values.format_type.destination_s3_output_format_csv_comma_separated_values_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values.format_type.destination_s3_output_format_csv_comma_separated_values_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
-- `format_type` (String) must be one of ["JSONL"]
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json.format_type`
-
-Read-Only:
-
-- `destination_s3_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json--format_type--destination_s3_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json--format_type--destination_s3_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json.format_type.destination_s3_output_format_json_lines_newline_delimited_json_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json.format_type.destination_s3_output_format_json_lines_newline_delimited_json_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_parquet_columnar_storage`
-
-Read-Only:
-
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
-The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `format_type` (String) must be one of ["Parquet"]
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro`
-
-Read-Only:
-
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type`
-
-Read-Only:
-
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Read-Only:
-
-- `codec` (String) must be one of ["bzip2"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_deflate`
-
-Read-Only:
-
-- `codec` (String) must be one of ["Deflate"]
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Read-Only:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_snappy`
-
-Read-Only:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_xz`
-
-Read-Only:
-
-- `codec` (String) must be one of ["xz"]
-- `compression_level` (Number) See here for details.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_zstandard`
-
-Read-Only:
-
-- `codec` (String) must be one of ["zstandard"]
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values.format_type`
-
-Read-Only:
-
-- `destination_s3_update_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values--format_type--destination_s3_update_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_s3_update_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values--format_type--destination_s3_update_output_format_csv_comma_separated_values_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values.format_type.destination_s3_update_output_format_csv_comma_separated_values_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values.format_type.destination_s3_update_output_format_csv_comma_separated_values_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
-- `format_type` (String) must be one of ["JSONL"]
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json.format_type`
-
-Read-Only:
-
-- `destination_s3_update_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json--format_type--destination_s3_update_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_update_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json--format_type--destination_s3_update_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json.format_type.destination_s3_update_output_format_json_lines_newline_delimited_json_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json.format_type.destination_s3_update_output_format_json_lines_newline_delimited_json_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_parquet_columnar_storage`
-
-Read-Only:
-
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
-The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `format_type` (String) must be one of ["Parquet"]
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
-
diff --git a/docs/data-sources/destination_s3_glue.md b/docs/data-sources/destination_s3_glue.md
index a9268a9b1..f92b03ce8 100644
--- a/docs/data-sources/destination_s3_glue.md
+++ b/docs/data-sources/destination_s3_glue.md
@@ -27,105 +27,10 @@ data "airbyte_destination_s3_glue" "my_destination_s3glue" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key_id` (String) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
-- `destination_type` (String) must be one of ["s3-glue"]
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `format` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format))
-- `glue_database` (String) Name of the glue database for creating the tables, leave blank if no integration
-- `glue_serialization_library` (String) must be one of ["org.openx.data.jsonserde.JsonSerDe", "org.apache.hive.hcatalog.data.JsonSerDe"]
-The library that your query engine will use for reading and writing data in your lake.
-- `s3_bucket_name` (String) The name of the S3 bucket. Read more here.
-- `s3_bucket_path` (String) Directory under the S3 bucket where data will be written. Read more here
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 bucket. See here for all region codes.
-- `s3_endpoint` (String) Your S3 endpoint url. Read more here
-- `s3_path_format` (String) Format string on how data will be organized inside the S3 bucket directory. Read more here
-- `secret_access_key` (String) The corresponding secret to the access key ID. Read more here
-
-
-### Nested Schema for `configuration.format`
-
-Read-Only:
-
-- `destination_s3_glue_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json))
-- `destination_s3_glue_update_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json))
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
-- `format_type` (String) must be one of ["JSONL"]
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json.format_type`
-
-Read-Only:
-
-- `destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json--format_type--destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json--format_type--destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json.format_type.destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json.format_type.destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
-- `format_type` (String) must be one of ["JSONL"]
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json.format_type`
-
-Read-Only:
-
-- `destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json--format_type--destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json--format_type--destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json.format_type.destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_gzip`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json.format_type.destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_no_compression`
-
-Read-Only:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
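Since these changes replace the nested `configuration` attributes with a single JSON-encoded string, downstream configurations can recover individual fields with Terraform's `jsondecode()` function. The sketch below is illustrative only: the `s3_bucket_name` key mirrors the nested schema removed above and is an assumption about the JSON shape, not a documented guarantee.

```terraform
# Illustrative sketch: `configuration` is now a JSON-encoded string,
# so decode it before referencing individual fields. The key
# `s3_bucket_name` is assumed from the former nested schema.
data "airbyte_destination_s3_glue" "my_destination_s3glue" {
  destination_id = "...my_destination_id..."
}

locals {
  s3_glue_config = jsondecode(data.airbyte_destination_s3_glue.my_destination_s3glue.configuration)
}

output "s3_bucket_name" {
  value = local.s3_glue_config.s3_bucket_name
}
```

A `try(local.s3_glue_config.s3_bucket_name, null)` wrapper may be safer if the configuration string can be empty.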
diff --git a/docs/data-sources/destination_sftp_json.md b/docs/data-sources/destination_sftp_json.md
index 8cf448197..385d13431 100644
--- a/docs/data-sources/destination_sftp_json.md
+++ b/docs/data-sources/destination_sftp_json.md
@@ -27,20 +27,10 @@ data "airbyte_destination_sftp_json" "my_destination_sftpjson" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `destination_path` (String) Path to the directory where json files will be written.
-- `destination_type` (String) must be one of ["sftp-json"]
-- `host` (String) Hostname of the SFTP server.
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the SFTP server.
-- `username` (String) Username to use to access the SFTP server.
-
diff --git a/docs/data-sources/destination_snowflake.md b/docs/data-sources/destination_snowflake.md
index b68969f18..571c6ec7b 100644
--- a/docs/data-sources/destination_snowflake.md
+++ b/docs/data-sources/destination_snowflake.md
@@ -27,97 +27,10 @@ data "airbyte_destination_snowflake" "my_destination_snowflake" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `database` (String) Enter the name of the database you want to sync data into
-- `destination_type` (String) must be one of ["snowflake"]
-- `host` (String) Enter your Snowflake account's locator (in the format ...snowflakecomputing.com)
-- `jdbc_url_params` (String) Enter the additional properties to pass to the JDBC URL string when connecting to the database (formatted as key=value pairs separated by the symbol &). Example: key1=value1&key2=value2&key3=value3
-- `raw_data_schema` (String) The schema to write raw tables into
-- `role` (String) Enter the role that you want to use to access Snowflake
-- `schema` (String) Enter the name of the default schema
-- `username` (String) Enter the name of the user you want to use to access the database
-- `warehouse` (String) Enter the name of the warehouse that you want to sync data into
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `destination_snowflake_authorization_method_key_pair_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_authorization_method_key_pair_authentication))
-- `destination_snowflake_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_authorization_method_o_auth2_0))
-- `destination_snowflake_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_authorization_method_username_and_password))
-- `destination_snowflake_update_authorization_method_key_pair_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_update_authorization_method_key_pair_authentication))
-- `destination_snowflake_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_update_authorization_method_o_auth2_0))
-- `destination_snowflake_update_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_update_authorization_method_username_and_password))
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_authorization_method_key_pair_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Key Pair Authentication"]
-- `private_key` (String) RSA Private key to use for Snowflake connection. See the docs for more information on how to obtain this key.
-- `private_key_password` (String) Passphrase for private key
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Enter your application's Access Token
-- `auth_type` (String) must be one of ["OAuth2.0"]
-- `client_id` (String) Enter your application's Client ID
-- `client_secret` (String) Enter your application's Client secret
-- `refresh_token` (String) Enter your application's Refresh Token
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_authorization_method_username_and_password`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Username and Password"]
-- `password` (String) Enter the password associated with the username.
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_update_authorization_method_key_pair_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Key Pair Authentication"]
-- `private_key` (String) RSA Private key to use for Snowflake connection. See the docs for more information on how to obtain this key.
-- `private_key_password` (String) Passphrase for private key
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Enter your application's Access Token
-- `auth_type` (String) must be one of ["OAuth2.0"]
-- `client_id` (String) Enter your application's Client ID
-- `client_secret` (String) Enter your application's Client secret
-- `refresh_token` (String) Enter your application's Refresh Token
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_update_authorization_method_username_and_password`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Username and Password"]
-- `password` (String) Enter the password associated with the username.
-
diff --git a/docs/data-sources/destination_timeplus.md b/docs/data-sources/destination_timeplus.md
index bcb2d32d3..ee545360c 100644
--- a/docs/data-sources/destination_timeplus.md
+++ b/docs/data-sources/destination_timeplus.md
@@ -27,17 +27,10 @@ data "airbyte_destination_timeplus" "my_destination_timeplus" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `apikey` (String) Personal API key
-- `destination_type` (String) must be one of ["timeplus"]
-- `endpoint` (String) Timeplus workspace endpoint
-
diff --git a/docs/data-sources/destination_typesense.md b/docs/data-sources/destination_typesense.md
index 11db436ef..6d9647d98 100644
--- a/docs/data-sources/destination_typesense.md
+++ b/docs/data-sources/destination_typesense.md
@@ -27,20 +27,10 @@ data "airbyte_destination_typesense" "my_destination_typesense" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Typesense API Key
-- `batch_size` (Number) How many documents should be imported together. Default 1000
-- `destination_type` (String) must be one of ["typesense"]
-- `host` (String) Hostname of the Typesense instance without protocol.
-- `port` (String) Port of the Typesense instance. Ex: 8108, 80, 443. Default is 443
-- `protocol` (String) Protocol of the Typesense instance. Ex: http or https. Default is https
-
diff --git a/docs/data-sources/destination_vertica.md b/docs/data-sources/destination_vertica.md
index 64ef4c6c5..d607b06fb 100644
--- a/docs/data-sources/destination_vertica.md
+++ b/docs/data-sources/destination_vertica.md
@@ -27,104 +27,10 @@ data "airbyte_destination_vertica" "my_destination_vertica" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["vertica"]
-- `host` (String) Hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `schema` (String) Schema for vertica destination
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) Username to use to access the database.
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `destination_vertica_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_ssh_tunnel_method_no_tunnel))
-- `destination_vertica_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_ssh_tunnel_method_password_authentication))
-- `destination_vertica_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_ssh_tunnel_method_ssh_key_authentication))
-- `destination_vertica_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_update_ssh_tunnel_method_no_tunnel))
-- `destination_vertica_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_update_ssh_tunnel_method_password_authentication))
-- `destination_vertica_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/destination_weaviate.md b/docs/data-sources/destination_weaviate.md
new file mode 100644
index 000000000..f358d252c
--- /dev/null
+++ b/docs/data-sources/destination_weaviate.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_destination_weaviate Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ DestinationWeaviate DataSource
+---
+
+# airbyte_destination_weaviate (Data Source)
+
+DestinationWeaviate DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_destination_weaviate" "my_destination_weaviate" {
+ destination_id = "...my_destination_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `destination_id` (String)
+
+### Read-Only
+
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
+- `name` (String)
+- `workspace_id` (String)
+
+
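The new `airbyte_destination_weaviate` data source above exposes only flat string attributes. A minimal follow-on usage, under the assumption that the data source block from the example is present, might look like:

```terraform
# Hypothetical usage of the new data source: all attributes, including
# the JSON-encoded `configuration`, surface as plain strings.
output "weaviate_destination_name" {
  value = data.airbyte_destination_weaviate.my_destination_weaviate.name
}

output "weaviate_workspace_id" {
  value = data.airbyte_destination_weaviate.my_destination_weaviate.workspace_id
}
```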
diff --git a/docs/data-sources/destination_xata.md b/docs/data-sources/destination_xata.md
index f12a84d10..cdb0201fd 100644
--- a/docs/data-sources/destination_xata.md
+++ b/docs/data-sources/destination_xata.md
@@ -27,17 +27,10 @@ data "airbyte_destination_xata" "my_destination_xata" {
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the destination.
+- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key to connect.
-- `db_url` (String) URL pointing to your workspace.
-- `destination_type` (String) must be one of ["xata"]
-
diff --git a/docs/data-sources/source_aha.md b/docs/data-sources/source_aha.md
index a3e136563..c0d905f82 100644
--- a/docs/data-sources/source_aha.md
+++ b/docs/data-sources/source_aha.md
@@ -14,7 +14,6 @@ SourceAha DataSource
```terraform
data "airbyte_source_aha" "my_source_aha" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_aha" "my_source_aha" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["aha"]
-- `url` (String) URL
-
diff --git a/docs/data-sources/source_aircall.md b/docs/data-sources/source_aircall.md
index 92a28f479..32b9df483 100644
--- a/docs/data-sources/source_aircall.md
+++ b/docs/data-sources/source_aircall.md
@@ -14,7 +14,6 @@ SourceAircall DataSource
```terraform
data "airbyte_source_aircall" "my_source_aircall" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_aircall" "my_source_aircall" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_id` (String) App ID found at settings https://dashboard.aircall.io/integrations/api-keys
-- `api_token` (String) App token found at settings (Ref- https://dashboard.aircall.io/integrations/api-keys)
-- `source_type` (String) must be one of ["aircall"]
-- `start_date` (String) Date time filter for incremental filter, Specify which date to extract from.
-
diff --git a/docs/data-sources/source_airtable.md b/docs/data-sources/source_airtable.md
index f8f218990..4e2b709ef 100644
--- a/docs/data-sources/source_airtable.md
+++ b/docs/data-sources/source_airtable.md
@@ -14,7 +14,6 @@ SourceAirtable DataSource
```terraform
data "airbyte_source_airtable" "my_source_airtable" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,75 +25,12 @@ data "airbyte_source_airtable" "my_source_airtable" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["airtable"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_airtable_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_authentication_o_auth2_0))
-- `source_airtable_authentication_personal_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_authentication_personal_access_token))
-- `source_airtable_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_update_authentication_o_auth2_0))
-- `source_airtable_update_authentication_personal_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_update_authentication_personal_access_token))
-
-
-### Nested Schema for `configuration.credentials.source_airtable_authentication_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The client ID of the Airtable developer application.
-- `client_secret` (String) The client secret of the Airtable developer application.
-- `refresh_token` (String) The key to refresh the expired access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_airtable_authentication_personal_access_token`
-
-Read-Only:
-
-- `api_key` (String) The Personal Access Token for the Airtable account. See the Support Guide for more information on how to obtain this token.
-- `auth_method` (String) must be one of ["api_key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_airtable_update_authentication_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The client ID of the Airtable developer application.
-- `client_secret` (String) The client secret of the Airtable developer application.
-- `refresh_token` (String) The key to refresh the expired access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_airtable_update_authentication_personal_access_token`
-
-Read-Only:
-
-- `api_key` (String) The Personal Access Token for the Airtable account. See the Support Guide for more information on how to obtain this token.
-- `auth_method` (String) must be one of ["api_key"]
-
diff --git a/docs/data-sources/source_alloydb.md b/docs/data-sources/source_alloydb.md
index 05cae8890..02d984cef 100644
--- a/docs/data-sources/source_alloydb.md
+++ b/docs/data-sources/source_alloydb.md
@@ -14,7 +14,6 @@ SourceAlloydb DataSource
```terraform
data "airbyte_source_alloydb" "my_source_alloydb" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,377 +25,12 @@ data "airbyte_source_alloydb" "my_source_alloydb" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `host` (String) Hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (Eg. key1=value1&key2=value2&key3=value3). For more information read about JDBC URL parameters.
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `replication_method` (Attributes) Replication method for extracting data from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
-- `schemas` (List of String) The list of schemas (case sensitive) to sync from. Defaults to public.
-- `source_type` (String) must be one of ["alloydb"]
-- `ssl_mode` (Attributes) SSL connection modes.
- Read more in the docs. (see [below for nested schema](#nestedatt--configuration--ssl_mode))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) Username to access the database.
-
-
-### Nested Schema for `configuration.replication_method`
-
-Read-Only:
-
-- `source_alloydb_replication_method_logical_replication_cdc` (Attributes) Logical replication uses the Postgres write-ahead log (WAL) to detect inserts, updates, and deletes. This needs to be configured on the source database itself. Only available on Postgres 10 and above. Read the docs. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_replication_method_logical_replication_cdc))
-- `source_alloydb_replication_method_standard` (Attributes) Standard replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_replication_method_standard))
-- `source_alloydb_replication_method_standard_xmin` (Attributes) Xmin replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_replication_method_standard_xmin))
-- `source_alloydb_update_replication_method_logical_replication_cdc` (Attributes) Logical replication uses the Postgres write-ahead log (WAL) to detect inserts, updates, and deletes. This needs to be configured on the source database itself. Only available on Postgres 10 and above. Read the docs. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_update_replication_method_logical_replication_cdc))
-- `source_alloydb_update_replication_method_standard` (Attributes) Standard replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_update_replication_method_standard))
-- `source_alloydb_update_replication_method_standard_xmin` (Attributes) Xmin replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_update_replication_method_standard_xmin))
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_replication_method_logical_replication_cdc`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
-Determines when Airbyte should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is the default. If `While reading Data` is selected, a downstream failure (while loading data into the destination) will cause the next sync to run as a full sync.
-- `method` (String) must be one of ["CDC"]
-- `plugin` (String) must be one of ["pgoutput"]
-A logical decoding plugin installed on the PostgreSQL server.
-- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
-- `queue_size` (Number) The size of the internal queue. This may affect the memory consumption and efficiency of the connector; please be careful.
-- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_replication_method_standard`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_replication_method_standard_xmin`
-
-Read-Only:
-
-- `method` (String) must be one of ["Xmin"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_update_replication_method_logical_replication_cdc`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
-Determines when Airbyte should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is the default. If `While reading Data` is selected, a downstream failure (while loading data into the destination) will cause the next sync to run as a full sync.
-- `method` (String) must be one of ["CDC"]
-- `plugin` (String) must be one of ["pgoutput"]
-A logical decoding plugin installed on the PostgreSQL server.
-- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
-- `queue_size` (Number) The size of the internal queue. This may affect the memory consumption and efficiency of the connector; please be careful.
-- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_update_replication_method_standard`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_update_replication_method_standard_xmin`
-
-Read-Only:
-
-- `method` (String) must be one of ["Xmin"]
-
-
-
-
-### Nested Schema for `configuration.ssl_mode`
-
-Read-Only:
-
-- `source_alloydb_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_allow))
-- `source_alloydb_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_disable))
-- `source_alloydb_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_prefer))
-- `source_alloydb_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_require))
-- `source_alloydb_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_verify_ca))
-- `source_alloydb_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_verify_full))
-- `source_alloydb_update_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_allow))
-- `source_alloydb_update_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_disable))
-- `source_alloydb_update_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_prefer))
-- `source_alloydb_update_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_require))
-- `source_alloydb_update_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_verify_ca))
-- `source_alloydb_update_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_allow`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["allow"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_disable`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_prefer`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["prefer"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_require`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_verify_ca`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for the key storage. If you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify-ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_verify_full`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for the key storage. If you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_allow`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["allow"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_disable`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_prefer`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["prefer"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_require`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_verify_ca`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for the key storage. If you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify-ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_verify_full`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for the key storage. If you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `source_alloydb_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_ssh_tunnel_method_no_tunnel))
-- `source_alloydb_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_ssh_tunnel_method_password_authentication))
-- `source_alloydb_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_ssh_tunnel_method_ssh_key_authentication))
-- `source_alloydb_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_update_ssh_tunnel_method_no_tunnel))
-- `source_alloydb_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_update_ssh_tunnel_method_password_authentication))
-- `source_alloydb_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`).
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`).
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/source_amazon_ads.md b/docs/data-sources/source_amazon_ads.md
index e320fca0e..2ebf2f996 100644
--- a/docs/data-sources/source_amazon_ads.md
+++ b/docs/data-sources/source_amazon_ads.md
@@ -14,7 +14,6 @@ SourceAmazonAds DataSource
```terraform
data "airbyte_source_amazon_ads" "my_source_amazonads" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,33 +25,12 @@ data "airbyte_source_amazon_ads" "my_source_amazonads" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The client ID of your Amazon Ads developer application. See the docs for more information.
-- `client_secret` (String) The client secret of your Amazon Ads developer application. See the docs for more information.
-- `look_back_window` (Number) The number of days to go back in time to get the updated data from Amazon Ads.
-- `marketplace_ids` (List of String) Marketplace IDs you want to fetch data for. Note: If Profile IDs are also selected, profiles will be selected if they match the Profile ID OR the Marketplace ID.
-- `profiles` (List of Number) Profile IDs you want to fetch data for. See docs for more details. Note: If Marketplace IDs are also selected, profiles will be selected if they match the Profile ID OR the Marketplace ID.
-- `refresh_token` (String) Amazon Ads refresh token. See the docs for more information on how to obtain this token.
-- `region` (String) must be one of ["NA", "EU", "FE"]
-Region to pull data from (EU/NA/FE). See docs for more details.
-- `report_record_types` (List of String) Optional configuration which accepts an array of string of record types. Leave blank for default behaviour to pull all report types. Use this config option only if you want to pull specific report type(s). See docs for more details
-- `source_type` (String) must be one of ["amazon-ads"]
-- `start_date` (String) The Start date for collecting reports, should not be more than 60 days in the past. In YYYY-MM-DD format
-- `state_filter` (List of String) Reflects the state of the Display, Product, and Brand Campaign streams as enabled, paused, or archived. If you do not populate this field, it will be ignored completely.
-
diff --git a/docs/data-sources/source_amazon_seller_partner.md b/docs/data-sources/source_amazon_seller_partner.md
index 282501471..132a75cf3 100644
--- a/docs/data-sources/source_amazon_seller_partner.md
+++ b/docs/data-sources/source_amazon_seller_partner.md
@@ -14,7 +14,6 @@ SourceAmazonSellerPartner DataSource
```terraform
data "airbyte_source_amazon_seller_partner" "my_source_amazonsellerpartner" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,38 +25,12 @@ data "airbyte_source_amazon_seller_partner" "my_source_amazonsellerpartner" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `advanced_stream_options` (String) Additional information to configure report options. This varies by report type, not every report implement this kind of feature. Must be a valid json string.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `aws_access_key` (String) Specifies the AWS access key used as part of the credentials to authenticate the user.
-- `aws_environment` (String) must be one of ["PRODUCTION", "SANDBOX"]
-Select the AWS Environment.
-- `aws_secret_key` (String) Specifies the AWS secret key used as part of the credentials to authenticate the user.
-- `lwa_app_id` (String) Your Login with Amazon Client ID.
-- `lwa_client_secret` (String) Your Login with Amazon Client Secret.
-- `max_wait_seconds` (Number) Sometimes report can take up to 30 minutes to generate. This will set the limit for how long to wait for a successful report.
-- `period_in_days` (Number) Will be used for stream slicing for initial full_refresh sync when no updated state is present for reports that support sliced incremental sync.
-- `refresh_token` (String) The Refresh Token obtained via OAuth flow authorization.
-- `region` (String) must be one of ["AE", "AU", "BE", "BR", "CA", "DE", "EG", "ES", "FR", "GB", "IN", "IT", "JP", "MX", "NL", "PL", "SA", "SE", "SG", "TR", "UK", "US"]
-Select the AWS Region.
-- `replication_end_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data after this date will not be replicated.
-- `replication_start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `report_options` (String) Additional information passed to reports. This varies by report type. Must be a valid json string.
-- `role_arn` (String) Specifies the Amazon Resource Name (ARN) of an IAM role that you want to use to perform operations requested using this profile. (Needs permission to 'Assume Role' STS).
-- `source_type` (String) must be one of ["amazon-seller-partner"]
-
diff --git a/docs/data-sources/source_amazon_sqs.md b/docs/data-sources/source_amazon_sqs.md
index eac98b0d0..26ba47ff2 100644
--- a/docs/data-sources/source_amazon_sqs.md
+++ b/docs/data-sources/source_amazon_sqs.md
@@ -14,7 +14,6 @@ SourceAmazonSqs DataSource
```terraform
data "airbyte_source_amazon_sqs" "my_source_amazonsqs" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,31 +25,12 @@ data "airbyte_source_amazon_sqs" "my_source_amazonsqs" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key` (String) The Access Key ID of the AWS IAM Role to use for pulling messages
-- `attributes_to_return` (String) Comma-separated list of Message Attribute names to return.
-- `delete_messages` (Boolean) If Enabled, messages will be deleted from the SQS Queue after being read. If Disabled, messages are left in the queue and can be read more than once. WARNING: Enabling this option can result in data loss in cases of failure; use with caution and see the documentation for more detail.
-- `max_batch_size` (Number) Max amount of messages to get in one batch (10 max)
-- `max_wait_time` (Number) Max amount of time in seconds to wait for messages in a single poll (20 max)
-- `queue_url` (String) URL of the SQS Queue
-- `region` (String) must be one of ["us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-AWS Region of the SQS Queue
-- `secret_key` (String) The Secret Key of the AWS IAM Role to use for pulling messages
-- `source_type` (String) must be one of ["amazon-sqs"]
-- `visibility_timeout` (Number) Modify the Visibility Timeout of the individual message from the Queue's default (seconds).
-
diff --git a/docs/data-sources/source_amplitude.md b/docs/data-sources/source_amplitude.md
index 0df5b2607..6816123f4 100644
--- a/docs/data-sources/source_amplitude.md
+++ b/docs/data-sources/source_amplitude.md
@@ -14,7 +14,6 @@ SourceAmplitude DataSource
```terraform
data "airbyte_source_amplitude" "my_source_amplitude" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_amplitude" "my_source_amplitude" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Amplitude API Key. See the setup guide for more information on how to obtain this key.
-- `data_region` (String) must be one of ["Standard Server", "EU Residency Server"]
-Amplitude data region server
-- `request_time_range` (Number) According to Considerations, too large a time range in a request can cause a timeout error. In this case, set a shorter time interval in hours.
-- `secret_key` (String) Amplitude Secret Key. See the setup guide for more information on how to obtain this key.
-- `source_type` (String) must be one of ["amplitude"]
-- `start_date` (String) UTC date and time in the format 2021-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_apify_dataset.md b/docs/data-sources/source_apify_dataset.md
index 3a3966b70..1cdc585f5 100644
--- a/docs/data-sources/source_apify_dataset.md
+++ b/docs/data-sources/source_apify_dataset.md
@@ -14,7 +14,6 @@ SourceApifyDataset DataSource
```terraform
data "airbyte_source_apify_dataset" "my_source_apifydataset" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_apify_dataset" "my_source_apifydataset" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `clean` (Boolean) If set to true, only clean items will be downloaded from the dataset. See description of what clean means in Apify API docs. If not sure, set clean to false.
-- `dataset_id` (String) ID of the dataset you would like to load to Airbyte.
-- `source_type` (String) must be one of ["apify-dataset"]
-- `token` (String) Your application's Client Secret. You can find this value on the console integrations tab after you log in.
-
diff --git a/docs/data-sources/source_appfollow.md b/docs/data-sources/source_appfollow.md
index f6692d321..4b84c082c 100644
--- a/docs/data-sources/source_appfollow.md
+++ b/docs/data-sources/source_appfollow.md
@@ -14,7 +14,6 @@ SourceAppfollow DataSource
```terraform
data "airbyte_source_appfollow" "my_source_appfollow" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_appfollow" "my_source_appfollow" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_secret` (String) API Key provided by Appfollow
-- `source_type` (String) must be one of ["appfollow"]
-
diff --git a/docs/data-sources/source_asana.md b/docs/data-sources/source_asana.md
index ce10ec949..9ef0ce9b2 100644
--- a/docs/data-sources/source_asana.md
+++ b/docs/data-sources/source_asana.md
@@ -14,7 +14,6 @@ SourceAsana DataSource
```terraform
data "airbyte_source_asana" "my_source_asana" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,75 +25,12 @@ data "airbyte_source_asana" "my_source_asana" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["asana"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_asana_authentication_mechanism_authenticate_via_asana_oauth` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_authentication_mechanism_authenticate_via_asana_oauth))
-- `source_asana_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_authentication_mechanism_authenticate_with_personal_access_token))
-- `source_asana_update_authentication_mechanism_authenticate_via_asana_oauth` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_update_authentication_mechanism_authenticate_via_asana_oauth))
-- `source_asana_update_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_update_authentication_mechanism_authenticate_with_personal_access_token))
-
-
-### Nested Schema for `configuration.credentials.source_asana_authentication_mechanism_authenticate_via_asana_oauth`
-
-Read-Only:
-
-- `client_id` (String)
-- `client_secret` (String)
-- `option_title` (String) must be one of ["OAuth Credentials"]
-OAuth Credentials
-- `refresh_token` (String)
-
-
-
-### Nested Schema for `configuration.credentials.source_asana_authentication_mechanism_authenticate_with_personal_access_token`
-
-Read-Only:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
-PAT Credentials
-- `personal_access_token` (String) Asana Personal Access Token (generate yours here).
-
-
-
-### Nested Schema for `configuration.credentials.source_asana_update_authentication_mechanism_authenticate_via_asana_oauth`
-
-Read-Only:
-
-- `client_id` (String)
-- `client_secret` (String)
-- `option_title` (String) must be one of ["OAuth Credentials"]
-OAuth Credentials
-- `refresh_token` (String)
-
-
-
-### Nested Schema for `configuration.credentials.source_asana_update_authentication_mechanism_authenticate_with_personal_access_token`
-
-Read-Only:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
-PAT Credentials
-- `personal_access_token` (String) Asana Personal Access Token (generate yours here).
-
diff --git a/docs/data-sources/source_auth0.md b/docs/data-sources/source_auth0.md
index c7ff792a7..b1871066e 100644
--- a/docs/data-sources/source_auth0.md
+++ b/docs/data-sources/source_auth0.md
@@ -14,7 +14,6 @@ SourceAuth0 DataSource
```terraform
data "airbyte_source_auth0" "my_source_auth0" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,73 +25,12 @@ data "airbyte_source_auth0" "my_source_auth0" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `base_url` (String) The Authentication API is served over HTTPS. All URLs referenced in the documentation have the following base `https://YOUR_DOMAIN`
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["auth0"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_auth0_authentication_method_o_auth2_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_authentication_method_o_auth2_access_token))
-- `source_auth0_authentication_method_o_auth2_confidential_application` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_authentication_method_o_auth2_confidential_application))
-- `source_auth0_update_authentication_method_o_auth2_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_update_authentication_method_o_auth2_access_token))
-- `source_auth0_update_authentication_method_o_auth2_confidential_application` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_update_authentication_method_o_auth2_confidential_application))
-
-
-### Nested Schema for `configuration.credentials.source_auth0_authentication_method_o_auth2_access_token`
-
-Read-Only:
-
-- `access_token` (String) Also called API Access Token The access token used to call the Auth0 Management API Token. It's a JWT that contains specific grant permissions knowns as scopes.
-- `auth_type` (String) must be one of ["oauth2_access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_auth0_authentication_method_o_auth2_confidential_application`
-
-Read-Only:
-
-- `audience` (String) The audience for the token, which is your API. You can find this in the Identifier field on your API's settings tab
-- `auth_type` (String) must be one of ["oauth2_confidential_application"]
-- `client_id` (String) Your application's Client ID. You can find this value on the application's settings tab after you login the admin portal.
-- `client_secret` (String) Your application's Client Secret. You can find this value on the application's settings tab after you login the admin portal.
-
-
-
-### Nested Schema for `configuration.credentials.source_auth0_update_authentication_method_o_auth2_access_token`
-
-Read-Only:
-
-- `access_token` (String) Also called API Access Token The access token used to call the Auth0 Management API Token. It's a JWT that contains specific grant permissions knowns as scopes.
-- `auth_type` (String) must be one of ["oauth2_access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_auth0_update_authentication_method_o_auth2_confidential_application`
-
-Read-Only:
-
-- `audience` (String) The audience for the token, which is your API. You can find this in the Identifier field on your API's settings tab
-- `auth_type` (String) must be one of ["oauth2_confidential_application"]
-- `client_id` (String) Your application's Client ID. You can find this value on the application's settings tab after you login the admin portal.
-- `client_secret` (String) Your application's Client Secret. You can find this value on the application's settings tab after you login the admin portal.
-
diff --git a/docs/data-sources/source_aws_cloudtrail.md b/docs/data-sources/source_aws_cloudtrail.md
index 6f81d4c8c..570fbf10c 100644
--- a/docs/data-sources/source_aws_cloudtrail.md
+++ b/docs/data-sources/source_aws_cloudtrail.md
@@ -14,7 +14,6 @@ SourceAwsCloudtrail DataSource
```terraform
data "airbyte_source_aws_cloudtrail" "my_source_awscloudtrail" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_aws_cloudtrail" "my_source_awscloudtrail" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `aws_key_id` (String) AWS CloudTrail Access Key ID. See the docs for more information on how to obtain this key.
-- `aws_region_name` (String) The default AWS Region to use, for example, us-west-1 or us-west-2. When specifying a Region inline during client initialization, this property is named region_name.
-- `aws_secret_key` (String) AWS CloudTrail Access Key ID. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["aws-cloudtrail"]
-- `start_date` (String) The date you would like to replicate data. Data in AWS CloudTrail is available for last 90 days only. Format: YYYY-MM-DD.
-
diff --git a/docs/data-sources/source_azure_blob_storage.md b/docs/data-sources/source_azure_blob_storage.md
index 44564f923..4daef1747 100644
--- a/docs/data-sources/source_azure_blob_storage.md
+++ b/docs/data-sources/source_azure_blob_storage.md
@@ -14,7 +14,6 @@ SourceAzureBlobStorage DataSource
```terraform
data "airbyte_source_azure_blob_storage" "my_source_azureblobstorage" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,51 +25,12 @@ data "airbyte_source_azure_blob_storage" "my_source_azureblobstorage" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `azure_blob_storage_account_key` (String) The Azure blob storage account key.
-- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
-- `azure_blob_storage_blobs_prefix` (String) The Azure blob storage prefix to be applied
-- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container.
-- `azure_blob_storage_endpoint` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
-- `azure_blob_storage_schema_inference_limit` (Number) The Azure blob storage blobs to scan for inferring the schema, useful on large amounts of data with consistent structure
-- `format` (Attributes) Input data format (see [below for nested schema](#nestedatt--configuration--format))
-- `source_type` (String) must be one of ["azure-blob-storage"]
-
-
-### Nested Schema for `configuration.format`
-
-Read-Only:
-
-- `source_azure_blob_storage_input_format_json_lines_newline_delimited_json` (Attributes) Input data format (see [below for nested schema](#nestedatt--configuration--format--source_azure_blob_storage_input_format_json_lines_newline_delimited_json))
-- `source_azure_blob_storage_update_input_format_json_lines_newline_delimited_json` (Attributes) Input data format (see [below for nested schema](#nestedatt--configuration--format--source_azure_blob_storage_update_input_format_json_lines_newline_delimited_json))
-
-
-### Nested Schema for `configuration.format.source_azure_blob_storage_input_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `format_type` (String) must be one of ["JSONL"]
-
-
-
-### Nested Schema for `configuration.format.source_azure_blob_storage_update_input_format_json_lines_newline_delimited_json`
-
-Read-Only:
-
-- `format_type` (String) must be one of ["JSONL"]
-
diff --git a/docs/data-sources/source_azure_table.md b/docs/data-sources/source_azure_table.md
index ce642ddde..62a90aa80 100644
--- a/docs/data-sources/source_azure_table.md
+++ b/docs/data-sources/source_azure_table.md
@@ -14,7 +14,6 @@ SourceAzureTable DataSource
```terraform
data "airbyte_source_azure_table" "my_source_azuretable" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_azure_table" "my_source_azuretable" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["azure-table"]
-- `storage_access_key` (String) Azure Table Storage Access Key. See the docs for more information on how to obtain this key.
-- `storage_account_name` (String) The name of your storage account.
-- `storage_endpoint_suffix` (String) Azure Table Storage service account URL suffix. See the docs for more information on how to obtain endpoint suffix
-
diff --git a/docs/data-sources/source_bamboo_hr.md b/docs/data-sources/source_bamboo_hr.md
index ce7015f56..896a58abd 100644
--- a/docs/data-sources/source_bamboo_hr.md
+++ b/docs/data-sources/source_bamboo_hr.md
@@ -14,7 +14,6 @@ SourceBambooHr DataSource
```terraform
data "airbyte_source_bamboo_hr" "my_source_bamboohr" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_bamboo_hr" "my_source_bamboohr" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Api key of bamboo hr
-- `custom_reports_fields` (String) Comma-separated list of fields to include in custom reports.
-- `custom_reports_include_default_fields` (Boolean) If true, the custom reports endpoint will include the default fields defined here: https://documentation.bamboohr.com/docs/list-of-field-names.
-- `source_type` (String) must be one of ["bamboo-hr"]
-- `subdomain` (String) Sub Domain of bamboo hr
-
diff --git a/docs/data-sources/source_bigcommerce.md b/docs/data-sources/source_bigcommerce.md
deleted file mode 100644
index 7d420141a..000000000
--- a/docs/data-sources/source_bigcommerce.md
+++ /dev/null
@@ -1,49 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_bigcommerce Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceBigcommerce DataSource
----
-
-# airbyte_source_bigcommerce (Data Source)
-
-SourceBigcommerce DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_source_bigcommerce" "my_source_bigcommerce" {
- secret_id = "...my_secret_id..."
- source_id = "...my_source_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `source_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `source_type` (String) must be one of ["bigcommerce"]
-- `start_date` (String) The date you would like to replicate data. Format: YYYY-MM-DD.
-- `store_hash` (String) The hash code of the store. For https://api.bigcommerce.com/stores/HASH_CODE/v3/, The store's hash code is 'HASH_CODE'.
-
-
diff --git a/docs/data-sources/source_bigquery.md b/docs/data-sources/source_bigquery.md
index 23d78ce2d..6cd2b04f0 100644
--- a/docs/data-sources/source_bigquery.md
+++ b/docs/data-sources/source_bigquery.md
@@ -14,7 +14,6 @@ SourceBigquery DataSource
```terraform
data "airbyte_source_bigquery" "my_source_bigquery" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_bigquery" "my_source_bigquery" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials_json` (String) The contents of your Service Account Key JSON file. See the docs for more information on how to obtain this key.
-- `dataset_id` (String) The dataset ID to search for tables and views. If you are only loading data from one dataset, setting this option could result in much faster schema discovery.
-- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset.
-- `source_type` (String) must be one of ["bigquery"]
-
diff --git a/docs/data-sources/source_bing_ads.md b/docs/data-sources/source_bing_ads.md
index fcc29989b..3acf0f55c 100644
--- a/docs/data-sources/source_bing_ads.md
+++ b/docs/data-sources/source_bing_ads.md
@@ -14,7 +14,6 @@ SourceBingAds DataSource
```terraform
data "airbyte_source_bing_ads" "my_source_bingads" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,29 +25,12 @@ data "airbyte_source_bing_ads" "my_source_bingads" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your Microsoft Advertising developer application.
-- `client_secret` (String) The Client Secret of your Microsoft Advertising developer application.
-- `developer_token` (String) Developer token associated with user. See more info in the docs.
-- `lookback_window` (Number) Also known as attribution or conversion window. How far into the past to look for records (in days). If your conversion window has an hours/minutes granularity, round it up to the number of days exceeding. Used only for performance report streams in incremental mode.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
-- `reports_start_date` (String) The start date from which to begin replicating report data. Any data generated before this date will not be replicated in reports. This is a UTC date in YYYY-MM-DD format.
-- `source_type` (String) must be one of ["bing-ads"]
-- `tenant_id` (String) The Tenant ID of your Microsoft Advertising developer application. Set this to "common" unless you know you need a different value.
-
diff --git a/docs/data-sources/source_braintree.md b/docs/data-sources/source_braintree.md
index 3cdae9dbd..231ce5342 100644
--- a/docs/data-sources/source_braintree.md
+++ b/docs/data-sources/source_braintree.md
@@ -14,7 +14,6 @@ SourceBraintree DataSource
```terraform
data "airbyte_source_braintree" "my_source_braintree" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_braintree" "my_source_braintree" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `environment` (String) must be one of ["Development", "Sandbox", "Qa", "Production"]
-Environment specifies where the data will come from.
-- `merchant_id` (String) The unique identifier for your entire gateway account. See the docs for more information on how to obtain this ID.
-- `private_key` (String) Braintree Private Key. See the docs for more information on how to obtain this key.
-- `public_key` (String) Braintree Public Key. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["braintree"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_braze.md b/docs/data-sources/source_braze.md
index acc47cbcf..eaaad1364 100644
--- a/docs/data-sources/source_braze.md
+++ b/docs/data-sources/source_braze.md
@@ -14,7 +14,6 @@ SourceBraze DataSource
```terraform
data "airbyte_source_braze" "my_source_braze" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_braze" "my_source_braze" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Braze REST API key
-- `source_type` (String) must be one of ["braze"]
-- `start_date` (String) Rows after this date will be synced
-- `url` (String) Braze REST API endpoint
-
diff --git a/docs/data-sources/source_cart.md b/docs/data-sources/source_cart.md
new file mode 100644
index 000000000..157dc80f2
--- /dev/null
+++ b/docs/data-sources/source_cart.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_cart Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceCart DataSource
+---
+
+# airbyte_source_cart (Data Source)
+
+SourceCart DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_source_cart" "my_source_cart" {
+ source_id = "...my_source_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `source_id` (String)
+
+### Read-Only
+
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
+- `name` (String)
+- `source_type` (String)
+- `workspace_id` (String)
+
+
diff --git a/docs/data-sources/source_chargebee.md b/docs/data-sources/source_chargebee.md
index 212dfba5d..19e65c8c1 100644
--- a/docs/data-sources/source_chargebee.md
+++ b/docs/data-sources/source_chargebee.md
@@ -14,7 +14,6 @@ SourceChargebee DataSource
```terraform
data "airbyte_source_chargebee" "my_source_chargebee" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_chargebee" "my_source_chargebee" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `product_catalog` (String) must be one of ["1.0", "2.0"]
-Product Catalog version of your Chargebee site. Instructions on how to find your version you may find here under `API Version` section.
-- `site` (String) The site prefix for your Chargebee instance.
-- `site_api_key` (String) Chargebee API Key. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["chargebee"]
-- `start_date` (String) UTC date and time in the format 2021-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_chartmogul.md b/docs/data-sources/source_chartmogul.md
index 81ddb13bf..9edba9fa7 100644
--- a/docs/data-sources/source_chartmogul.md
+++ b/docs/data-sources/source_chartmogul.md
@@ -14,7 +14,6 @@ SourceChartmogul DataSource
```terraform
data "airbyte_source_chartmogul" "my_source_chartmogul" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_chartmogul" "my_source_chartmogul" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your Chartmogul API key. See the docs for info on how to obtain this.
-- `interval` (String) must be one of ["day", "week", "month", "quarter"]
-Some APIs such as Metrics require intervals to cluster data.
-- `source_type` (String) must be one of ["chartmogul"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. When feasible, any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_clickhouse.md b/docs/data-sources/source_clickhouse.md
index ae49b9271..966cb3dde 100644
--- a/docs/data-sources/source_clickhouse.md
+++ b/docs/data-sources/source_clickhouse.md
@@ -14,7 +14,6 @@ SourceClickhouse DataSource
```terraform
data "airbyte_source_clickhouse" "my_source_clickhouse" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,108 +25,12 @@ data "airbyte_source_clickhouse" "my_source_clickhouse" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) The name of the database.
-- `host` (String) The host endpoint of the Clickhouse cluster.
-- `password` (String) The password associated with this username.
-- `port` (Number) The port of the database.
-- `source_type` (String) must be one of ["clickhouse"]
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) The username which is used to access the database.
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `source_clickhouse_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_ssh_tunnel_method_no_tunnel))
-- `source_clickhouse_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_ssh_tunnel_method_password_authentication))
-- `source_clickhouse_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_ssh_tunnel_method_ssh_key_authentication))
-- `source_clickhouse_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_update_ssh_tunnel_method_no_tunnel))
-- `source_clickhouse_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_update_ssh_tunnel_method_password_authentication))
-- `source_clickhouse_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/source_clickup_api.md b/docs/data-sources/source_clickup_api.md
index 9ebe039a3..7e0d708ab 100644
--- a/docs/data-sources/source_clickup_api.md
+++ b/docs/data-sources/source_clickup_api.md
@@ -14,7 +14,6 @@ SourceClickupAPI DataSource
```terraform
data "airbyte_source_clickup_api" "my_source_clickupapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_clickup_api" "my_source_clickupapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Every ClickUp API call required authentication. This field is your personal API token. See here.
-- `folder_id` (String) The ID of your folder in your space. Retrieve it from the `/space/{space_id}/folder` of the ClickUp API. See here.
-- `include_closed_tasks` (Boolean) Include or exclude closed tasks. By default, they are excluded. See here.
-- `list_id` (String) The ID of your list in your folder. Retrieve it from the `/folder/{folder_id}/list` of the ClickUp API. See here.
-- `source_type` (String) must be one of ["clickup-api"]
-- `space_id` (String) The ID of your space in your workspace. Retrieve it from the `/team/{team_id}/space` of the ClickUp API. See here.
-- `team_id` (String) The ID of your team in ClickUp. Retrieve it from the `/team` of the ClickUp API. See here.
-
diff --git a/docs/data-sources/source_clockify.md b/docs/data-sources/source_clockify.md
index 7cd00cd81..8b791ebb6 100644
--- a/docs/data-sources/source_clockify.md
+++ b/docs/data-sources/source_clockify.md
@@ -14,7 +14,6 @@ SourceClockify DataSource
```terraform
data "airbyte_source_clockify" "my_source_clockify" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_clockify" "my_source_clockify" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) You can get your api access_key here This API is Case Sensitive.
-- `api_url` (String) The URL for the Clockify API. This should only need to be modified if connecting to an enterprise version of Clockify.
-- `source_type` (String) must be one of ["clockify"]
-- `workspace_id` (String) WorkSpace Id
-
diff --git a/docs/data-sources/source_close_com.md b/docs/data-sources/source_close_com.md
index 0da7ff227..703372f9e 100644
--- a/docs/data-sources/source_close_com.md
+++ b/docs/data-sources/source_close_com.md
@@ -14,7 +14,6 @@ SourceCloseCom DataSource
```terraform
data "airbyte_source_close_com" "my_source_closecom" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_close_com" "my_source_closecom" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Close.com API key (usually starts with 'api_'; find yours here).
-- `source_type` (String) must be one of ["close-com"]
-- `start_date` (String) The start date to sync data; all data after this date will be replicated. Leave blank to retrieve all the data available in the account. Format: YYYY-MM-DD.
-
diff --git a/docs/data-sources/source_coda.md b/docs/data-sources/source_coda.md
index 14ac64934..4958cd3d2 100644
--- a/docs/data-sources/source_coda.md
+++ b/docs/data-sources/source_coda.md
@@ -14,7 +14,6 @@ SourceCoda DataSource
```terraform
data "airbyte_source_coda" "my_source_coda" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_coda" "my_source_coda" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_token` (String) Bearer token
-- `source_type` (String) must be one of ["coda"]
-
diff --git a/docs/data-sources/source_coin_api.md b/docs/data-sources/source_coin_api.md
index 85aabd87d..1181b4dac 100644
--- a/docs/data-sources/source_coin_api.md
+++ b/docs/data-sources/source_coin_api.md
@@ -14,7 +14,6 @@ SourceCoinAPI DataSource
```terraform
data "airbyte_source_coin_api" "my_source_coinapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,34 +25,12 @@ data "airbyte_source_coin_api" "my_source_coinapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `end_date` (String) The end date in ISO 8601 format. If not supplied, data will be returned
-from the start date to the current time, or when the count of result
-elements reaches its limit.
-- `environment` (String) must be one of ["sandbox", "production"]
-The environment to use. Either sandbox or production.
-- `limit` (Number) The maximum number of elements to return. If not supplied, the default
-is 100. For numbers larger than 100, each 100 items is counted as one
-request for pricing purposes. Maximum value is 100000.
-- `period` (String) The period to use. See the documentation for a list. https://docs.coinapi.io/#list-all-periods-get
-- `source_type` (String) must be one of ["coin-api"]
-- `start_date` (String) The start date in ISO 8601 format.
-- `symbol_id` (String) The symbol ID to use. See the documentation for a list.
-https://docs.coinapi.io/#list-all-symbols-get
-
diff --git a/docs/data-sources/source_coinmarketcap.md b/docs/data-sources/source_coinmarketcap.md
index 9427827f5..e724a2c96 100644
--- a/docs/data-sources/source_coinmarketcap.md
+++ b/docs/data-sources/source_coinmarketcap.md
@@ -14,7 +14,6 @@ SourceCoinmarketcap DataSource
```terraform
data "airbyte_source_coinmarketcap" "my_source_coinmarketcap" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_coinmarketcap" "my_source_coinmarketcap" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your API Key. See here. The token is case sensitive.
-- `data_type` (String) must be one of ["latest", "historical"]
-/latest: Latest market ticker quotes and averages for cryptocurrencies and exchanges. /historical: Intervals of historic market data like OHLCV data or data for use in charting libraries. See here.
-- `source_type` (String) must be one of ["coinmarketcap"]
-- `symbols` (List of String) Cryptocurrency symbols. (only used for quotes stream)
-
diff --git a/docs/data-sources/source_configcat.md b/docs/data-sources/source_configcat.md
index c12e0b154..8963963ca 100644
--- a/docs/data-sources/source_configcat.md
+++ b/docs/data-sources/source_configcat.md
@@ -14,7 +14,6 @@ SourceConfigcat DataSource
```terraform
data "airbyte_source_configcat" "my_source_configcat" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_configcat" "my_source_configcat" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `password` (String) Basic auth password. See here.
-- `source_type` (String) must be one of ["configcat"]
-- `username` (String) Basic auth user name. See here.
-
diff --git a/docs/data-sources/source_confluence.md b/docs/data-sources/source_confluence.md
index 13d270644..91d2d8bae 100644
--- a/docs/data-sources/source_confluence.md
+++ b/docs/data-sources/source_confluence.md
@@ -14,7 +14,6 @@ SourceConfluence DataSource
```terraform
data "airbyte_source_confluence" "my_source_confluence" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_confluence" "my_source_confluence" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Please follow the Jira confluence for generating an API token: generating an API token.
-- `domain_name` (String) Your Confluence domain name
-- `email` (String) Your Confluence login email
-- `source_type` (String) must be one of ["confluence"]
-
diff --git a/docs/data-sources/source_convex.md b/docs/data-sources/source_convex.md
index 69ae8565c..3edacfa70 100644
--- a/docs/data-sources/source_convex.md
+++ b/docs/data-sources/source_convex.md
@@ -14,7 +14,6 @@ SourceConvex DataSource
```terraform
data "airbyte_source_convex" "my_source_convex" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_convex" "my_source_convex" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key` (String) API access key used to retrieve data from Convex.
-- `deployment_url` (String)
-- `source_type` (String) must be one of ["convex"]
-
diff --git a/docs/data-sources/source_datascope.md b/docs/data-sources/source_datascope.md
index a0c83b327..373f8387e 100644
--- a/docs/data-sources/source_datascope.md
+++ b/docs/data-sources/source_datascope.md
@@ -14,7 +14,6 @@ SourceDatascope DataSource
```terraform
data "airbyte_source_datascope" "my_source_datascope" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_datascope" "my_source_datascope" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["datascope"]
-- `start_date` (String) Start date for the data to be replicated
-
diff --git a/docs/data-sources/source_delighted.md b/docs/data-sources/source_delighted.md
index 8a9f5ad3b..501b7349c 100644
--- a/docs/data-sources/source_delighted.md
+++ b/docs/data-sources/source_delighted.md
@@ -14,7 +14,6 @@ SourceDelighted DataSource
```terraform
data "airbyte_source_delighted" "my_source_delighted" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_delighted" "my_source_delighted" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) A Delighted API key.
-- `since` (String) The date from which you'd like to replicate the data
-- `source_type` (String) must be one of ["delighted"]
-
diff --git a/docs/data-sources/source_dixa.md b/docs/data-sources/source_dixa.md
index 8e131050d..9436fca5f 100644
--- a/docs/data-sources/source_dixa.md
+++ b/docs/data-sources/source_dixa.md
@@ -14,7 +14,6 @@ SourceDixa DataSource
```terraform
data "airbyte_source_dixa" "my_source_dixa" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_dixa" "my_source_dixa" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Dixa API token
-- `batch_size` (Number) Number of days to batch into one request. Max 31.
-- `source_type` (String) must be one of ["dixa"]
-- `start_date` (String) The connector pulls records updated from this date onwards.
-
diff --git a/docs/data-sources/source_dockerhub.md b/docs/data-sources/source_dockerhub.md
index 089092ff2..cf44298f0 100644
--- a/docs/data-sources/source_dockerhub.md
+++ b/docs/data-sources/source_dockerhub.md
@@ -14,7 +14,6 @@ SourceDockerhub DataSource
```terraform
data "airbyte_source_dockerhub" "my_source_dockerhub" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_dockerhub" "my_source_dockerhub" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `docker_username` (String) Username of DockerHub person or organization (for https://hub.docker.com/v2/repositories/USERNAME/ API call)
-- `source_type` (String) must be one of ["dockerhub"]
-
diff --git a/docs/data-sources/source_dremio.md b/docs/data-sources/source_dremio.md
index e4e27aab9..86680e653 100644
--- a/docs/data-sources/source_dremio.md
+++ b/docs/data-sources/source_dremio.md
@@ -14,7 +14,6 @@ SourceDremio DataSource
```terraform
data "airbyte_source_dremio" "my_source_dremio" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_dremio" "my_source_dremio" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key that is generated when you authenticate to Dremio API
-- `base_url` (String) URL of your Dremio instance
-- `source_type` (String) must be one of ["dremio"]
-
diff --git a/docs/data-sources/source_dynamodb.md b/docs/data-sources/source_dynamodb.md
index fb4e21664..28f6e02ff 100644
--- a/docs/data-sources/source_dynamodb.md
+++ b/docs/data-sources/source_dynamodb.md
@@ -14,7 +14,6 @@ SourceDynamodb DataSource
```terraform
data "airbyte_source_dynamodb" "my_source_dynamodb" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_dynamodb" "my_source_dynamodb" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key_id` (String) The access key id to access Dynamodb. Airbyte requires read permissions to the database
-- `endpoint` (String) the URL of the Dynamodb database
-- `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the Dynamodb database
-- `reserved_attribute_names` (String) Comma separated reserved attribute names present in your tables
-- `secret_access_key` (String) The corresponding secret to the access key id.
-- `source_type` (String) must be one of ["dynamodb"]
-
diff --git a/docs/data-sources/source_e2e_test_cloud.md b/docs/data-sources/source_e2e_test_cloud.md
deleted file mode 100644
index 9cda57220..000000000
--- a/docs/data-sources/source_e2e_test_cloud.md
+++ /dev/null
@@ -1,100 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_e2e_test_cloud Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceE2eTestCloud DataSource
----
-
-# airbyte_source_e2e_test_cloud (Data Source)
-
-SourceE2eTestCloud DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_source_e2e_test_cloud" "my_source_e2etestcloud" {
- secret_id = "...my_secret_id..."
- source_id = "...my_source_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `source_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `max_messages` (Number) Number of records to emit per stream. Min 1. Max 100 billion.
-- `message_interval_ms` (Number) Interval between messages in ms. Min 0 ms. Max 60000 ms (1 minute).
-- `mock_catalog` (Attributes) (see [below for nested schema](#nestedatt--configuration--mock_catalog))
-- `seed` (Number) When the seed is unspecified, the current time millis will be used as the seed. Range: [0, 1000000].
-- `source_type` (String) must be one of ["e2e-test-cloud"]
-- `type` (String) must be one of ["CONTINUOUS_FEED"]
-
-
-### Nested Schema for `configuration.mock_catalog`
-
-Read-Only:
-
-- `source_e2e_test_cloud_mock_catalog_multi_schema` (Attributes) A catalog with multiple data streams, each with a different schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_mock_catalog_multi_schema))
-- `source_e2e_test_cloud_mock_catalog_single_schema` (Attributes) A catalog with one or multiple streams that share the same schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_mock_catalog_single_schema))
-- `source_e2e_test_cloud_update_mock_catalog_multi_schema` (Attributes) A catalog with multiple data streams, each with a different schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_update_mock_catalog_multi_schema))
-- `source_e2e_test_cloud_update_mock_catalog_single_schema` (Attributes) A catalog with one or multiple streams that share the same schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_update_mock_catalog_single_schema))
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_mock_catalog_multi_schema`
-
-Read-Only:
-
-- `stream_schemas` (String) A Json object specifying multiple data streams and their schemas. Each key in this object is one stream name. Each value is the schema for that stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["MULTI_STREAM"]
-
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_mock_catalog_single_schema`
-
-Read-Only:
-
-- `stream_duplication` (Number) Duplicate the stream for easy load testing. Each stream name will have a number suffix. For example, if the stream name is "ds", the duplicated streams will be "ds_0", "ds_1", etc.
-- `stream_name` (String) Name of the data stream.
-- `stream_schema` (String) A Json schema for the stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["SINGLE_STREAM"]
-
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_update_mock_catalog_multi_schema`
-
-Read-Only:
-
-- `stream_schemas` (String) A Json object specifying multiple data streams and their schemas. Each key in this object is one stream name. Each value is the schema for that stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["MULTI_STREAM"]
-
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_update_mock_catalog_single_schema`
-
-Read-Only:
-
-- `stream_duplication` (Number) Duplicate the stream for easy load testing. Each stream name will have a number suffix. For example, if the stream name is "ds", the duplicated streams will be "ds_0", "ds_1", etc.
-- `stream_name` (String) Name of the data stream.
-- `stream_schema` (String) A Json schema for the stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["SINGLE_STREAM"]
-
-
diff --git a/docs/data-sources/source_emailoctopus.md b/docs/data-sources/source_emailoctopus.md
index c06821419..6e4987515 100644
--- a/docs/data-sources/source_emailoctopus.md
+++ b/docs/data-sources/source_emailoctopus.md
@@ -14,7 +14,6 @@ SourceEmailoctopus DataSource
```terraform
data "airbyte_source_emailoctopus" "my_source_emailoctopus" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_emailoctopus" "my_source_emailoctopus" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) EmailOctopus API Key. See the docs for information on how to generate this key.
-- `source_type` (String) must be one of ["emailoctopus"]
-
diff --git a/docs/data-sources/source_exchange_rates.md b/docs/data-sources/source_exchange_rates.md
index 5656b0341..1e93acebc 100644
--- a/docs/data-sources/source_exchange_rates.md
+++ b/docs/data-sources/source_exchange_rates.md
@@ -14,7 +14,6 @@ SourceExchangeRates DataSource
```terraform
data "airbyte_source_exchange_rates" "my_source_exchangerates" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_exchange_rates" "my_source_exchangerates" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key` (String) Your API Key. See here. The key is case sensitive.
-- `base` (String) ISO reference currency. See here. Free plan doesn't support Source Currency Switching, default base currency is EUR
-- `ignore_weekends` (Boolean) Ignore weekends? (Exchanges don't run on weekends)
-- `source_type` (String) must be one of ["exchange-rates"]
-- `start_date` (String) Start getting data from that date.
-
diff --git a/docs/data-sources/source_facebook_marketing.md b/docs/data-sources/source_facebook_marketing.md
index 37aac5827..4ab085b0e 100644
--- a/docs/data-sources/source_facebook_marketing.md
+++ b/docs/data-sources/source_facebook_marketing.md
@@ -14,7 +14,6 @@ SourceFacebookMarketing DataSource
```terraform
data "airbyte_source_facebook_marketing" "my_source_facebookmarketing" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,52 +25,12 @@ data "airbyte_source_facebook_marketing" "my_source_facebookmarketing" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) The value of the generated access token. From your App’s Dashboard, click on "Marketing API" then "Tools". Select permissions ads_management, ads_read, read_insights, business_management. Then click on "Get token". See the docs for more information.
-- `account_id` (String) The Facebook Ad account ID to use when pulling data from the Facebook Marketing API. Open your Meta Ads Manager. The Ad account ID number is in the account dropdown menu or in your browser's address bar. See the docs for more information.
-- `action_breakdowns_allow_empty` (Boolean) Allows action_breakdowns to be an empty list
-- `client_id` (String) The Client Id for your OAuth app
-- `client_secret` (String) The Client Secret for your OAuth app
-- `custom_insights` (Attributes List) A list which contains ad statistics entries, each entry must have a name and can contains fields, breakdowns or action_breakdowns. Click on "add" to fill this field. (see [below for nested schema](#nestedatt--configuration--custom_insights))
-- `end_date` (String) The date until which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DDT00:00:00Z. All data generated between the start date and this end date will be replicated. Not setting this option will result in always syncing the latest data.
-- `fetch_thumbnail_images` (Boolean) Set to active if you want to fetch the thumbnail_url and store the result in thumbnail_data_url for each Ad Creative.
-- `include_deleted` (Boolean) Set to active if you want to include data from deleted Campaigns, Ads, and AdSets.
-- `insights_lookback_window` (Number) The attribution window. Facebook freezes insight data 28 days after it was generated, which means that all data from the past 28 days may have changed since we last emitted it, so you can retrieve refreshed insights from the past by setting this parameter. If you set a custom lookback window value in Facebook account, please provide the same value here.
-- `max_batch_size` (Number) Maximum batch size used when sending batch requests to Facebook API. Most users do not need to set this field unless they specifically need to tune the connector to address specific issues or use cases.
-- `page_size` (Number) Page size used when sending requests to Facebook API to specify number of records per page when response has pagination. Most users do not need to set this field unless they specifically need to tune the connector to address specific issues or use cases.
-- `source_type` (String) must be one of ["facebook-marketing"]
-- `start_date` (String) The date from which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-
-
-### Nested Schema for `configuration.custom_insights`
-
-Read-Only:
-
-- `action_breakdowns` (List of String) A list of chosen action_breakdowns for action_breakdowns
-- `action_report_time` (String) must be one of ["conversion", "impression", "mixed"]
-Determines the report time of action stats. For example, if a person saw the ad on Jan 1st but converted on Jan 2nd, when you query the API with action_report_time=impression, you see a conversion on Jan 1st. When you query the API with action_report_time=conversion, you see a conversion on Jan 2nd.
-- `breakdowns` (List of String) A list of chosen breakdowns for breakdowns
-- `end_date` (String) The date until which you'd like to replicate data for this stream, in the format YYYY-MM-DDT00:00:00Z. All data generated between the start date and this end date will be replicated. Not setting this option will result in always syncing the latest data.
-- `fields` (List of String) A list of chosen fields for fields parameter
-- `insights_lookback_window` (Number) The attribution window
-- `level` (String) must be one of ["ad", "adset", "campaign", "account"]
-Chosen level for API
-- `name` (String) The name value of insight
-- `start_date` (String) The date from which you'd like to replicate data for this stream, in the format YYYY-MM-DDT00:00:00Z.
-- `time_increment` (Number) Time window in days by which to aggregate statistics. The sync will be chunked into N day intervals, where N is the number of days you specified. For example, if you set this value to 7, then all statistics will be reported as 7-day aggregates by starting from the start_date. If the start and end dates are October 1st and October 30th, then the connector will output 5 records: 01 - 06, 07 - 13, 14 - 20, 21 - 27, and 28 - 30 (3 days only).
-
diff --git a/docs/data-sources/source_facebook_pages.md b/docs/data-sources/source_facebook_pages.md
index ae7b9d1a9..e4ae98878 100644
--- a/docs/data-sources/source_facebook_pages.md
+++ b/docs/data-sources/source_facebook_pages.md
@@ -14,7 +14,6 @@ SourceFacebookPages DataSource
```terraform
data "airbyte_source_facebook_pages" "my_source_facebookpages" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_facebook_pages" "my_source_facebookpages" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Facebook Page Access Token
-- `page_id` (String) Page ID
-- `source_type` (String) must be one of ["facebook-pages"]
-
diff --git a/docs/data-sources/source_faker.md b/docs/data-sources/source_faker.md
index 6abb65858..4f570faca 100644
--- a/docs/data-sources/source_faker.md
+++ b/docs/data-sources/source_faker.md
@@ -14,7 +14,6 @@ SourceFaker DataSource
```terraform
data "airbyte_source_faker" "my_source_faker" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_faker" "my_source_faker" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `always_updated` (Boolean) Should the updated_at values for every record be new each sync? Setting this to false will cause the source to stop emitting records after COUNT records have been emitted.
-- `count` (Number) How many users should be generated in total. This setting does not apply to the purchases or products stream.
-- `parallelism` (Number) How many parallel workers should we use to generate fake data? Choose a value equal to the number of CPUs you will allocate to this source.
-- `records_per_slice` (Number) How many fake records will be in each page (stream slice), before a state message is emitted?
-- `seed` (Number) Manually control the faker random seed to return the same values on subsequent runs (leave -1 for random)
-- `source_type` (String) must be one of ["faker"]
-
diff --git a/docs/data-sources/source_fauna.md b/docs/data-sources/source_fauna.md
index bc859a89e..19924c450 100644
--- a/docs/data-sources/source_fauna.md
+++ b/docs/data-sources/source_fauna.md
@@ -14,7 +14,6 @@ SourceFauna DataSource
```terraform
data "airbyte_source_fauna" "my_source_fauna" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,94 +25,12 @@ data "airbyte_source_fauna" "my_source_fauna" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `collection` (Attributes) Settings for the Fauna Collection. (see [below for nested schema](#nestedatt--configuration--collection))
-- `domain` (String) Domain of Fauna to query. Defaults to db.fauna.com. See the docs.
-- `port` (Number) Endpoint port.
-- `scheme` (String) URL scheme.
-- `secret` (String) Fauna secret, used when authenticating with the database.
-- `source_type` (String) must be one of ["fauna"]
-
-
-### Nested Schema for `configuration.collection`
-
-Read-Only:
-
-- `deletions` (Attributes) This only applies to incremental syncs.
-Enabling deletion mode informs your destination of deleted documents.
-Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions))
-- `page_size` (Number) The page size used when reading documents from the database. The larger the page size, the faster the connector processes documents. However, if a page is too large, the connector may fail.
-Choose your page size based on how large the documents are.
-See the docs.
-
-
-### Nested Schema for `configuration.collection.deletions`
-
-Read-Only:
-
-- `source_fauna_collection_deletion_mode_disabled` (Attributes) This only applies to incremental syncs.
-Enabling deletion mode informs your destination of deleted documents.
-Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--source_fauna_collection_deletion_mode_disabled))
-- `source_fauna_collection_deletion_mode_enabled` (Attributes) This only applies to incremental syncs.
-Enabling deletion mode informs your destination of deleted documents.
-Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--source_fauna_collection_deletion_mode_enabled))
-- `source_fauna_update_collection_deletion_mode_disabled` (Attributes) This only applies to incremental syncs.
-Enabling deletion mode informs your destination of deleted documents.
-Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--source_fauna_update_collection_deletion_mode_disabled))
-- `source_fauna_update_collection_deletion_mode_enabled` (Attributes) This only applies to incremental syncs.
-Enabling deletion mode informs your destination of deleted documents.
-Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--source_fauna_update_collection_deletion_mode_enabled))
-
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_collection_deletion_mode_disabled`
-
-Read-Only:
-
-- `deletion_mode` (String) must be one of ["ignore"]
-
-
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_collection_deletion_mode_enabled`
-
-Read-Only:
-
-- `column` (String) Name of the "deleted at" column.
-- `deletion_mode` (String) must be one of ["deleted_field"]
-
-
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_update_collection_deletion_mode_disabled`
-
-Read-Only:
-
-- `deletion_mode` (String) must be one of ["ignore"]
-
-
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_update_collection_deletion_mode_enabled`
-
-Read-Only:
-
-- `column` (String) Name of the "deleted at" column.
-- `deletion_mode` (String) must be one of ["deleted_field"]
-
diff --git a/docs/data-sources/source_file.md b/docs/data-sources/source_file.md
new file mode 100644
index 000000000..75899efce
--- /dev/null
+++ b/docs/data-sources/source_file.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_file Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceFile DataSource
+---
+
+# airbyte_source_file (Data Source)
+
+SourceFile DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_source_file" "my_source_file" {
+ source_id = "...my_source_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `source_id` (String)
+
+### Read-Only
+
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
+- `name` (String)
+- `source_type` (String)
+- `workspace_id` (String)
+
+
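Because `configuration` is now returned as a JSON-encoded string rather than nested attributes, consumers decode it with Terraform's built-in `jsondecode()` function. A minimal sketch, assuming the `my_source_file` data source from the example above; the `dataset_name` key is illustrative and depends on the connector's configuration schema:

```terraform
# Decode the JSON-encoded configuration string into an object.
locals {
  source_file_config = jsondecode(data.airbyte_source_file.my_source_file.configuration)
}

# NOTE: "dataset_name" is an assumed key for illustration; the available
# keys depend on the specific connector's configuration schema.
output "source_file_dataset" {
  value = local.source_file_config["dataset_name"]
}
```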
diff --git a/docs/data-sources/source_file_secure.md b/docs/data-sources/source_file_secure.md
deleted file mode 100644
index eb24b43b7..000000000
--- a/docs/data-sources/source_file_secure.md
+++ /dev/null
@@ -1,221 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_file_secure Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceFileSecure DataSource
----
-
-# airbyte_source_file_secure (Data Source)
-
-SourceFileSecure DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_source_file_secure" "my_source_filesecure" {
- secret_id = "...my_secret_id..."
- source_id = "...my_source_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `source_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `dataset_name` (String) The Name of the final table to replicate this file into (should include letters, numbers dash and underscores only).
-- `format` (String) must be one of ["csv", "json", "jsonl", "excel", "excel_binary", "feather", "parquet", "yaml"]
-The Format of the file which should be replicated (Warning: some formats may be experimental, please refer to the docs).
-- `provider` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider))
-- `reader_options` (String) This should be a string in JSON format. It depends on the chosen file format to provide additional options and tune its behavior.
-- `source_type` (String) must be one of ["file-secure"]
-- `url` (String) The URL path to access the file which should be replicated.
-
-
-### Nested Schema for `configuration.provider`
-
-Read-Only:
-
-- `source_file_secure_storage_provider_az_blob_azure_blob_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_az_blob_azure_blob_storage))
-- `source_file_secure_storage_provider_gcs_google_cloud_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_gcs_google_cloud_storage))
-- `source_file_secure_storage_provider_https_public_web` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_https_public_web))
-- `source_file_secure_storage_provider_s3_amazon_web_services` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_s3_amazon_web_services))
-- `source_file_secure_storage_provider_scp_secure_copy_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_scp_secure_copy_protocol))
-- `source_file_secure_storage_provider_sftp_secure_file_transfer_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_sftp_secure_file_transfer_protocol))
-- `source_file_secure_storage_provider_ssh_secure_shell` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_ssh_secure_shell))
-- `source_file_secure_update_storage_provider_az_blob_azure_blob_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_az_blob_azure_blob_storage))
-- `source_file_secure_update_storage_provider_gcs_google_cloud_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_gcs_google_cloud_storage))
-- `source_file_secure_update_storage_provider_https_public_web` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_https_public_web))
-- `source_file_secure_update_storage_provider_s3_amazon_web_services` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_s3_amazon_web_services))
-- `source_file_secure_update_storage_provider_scp_secure_copy_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_scp_secure_copy_protocol))
-- `source_file_secure_update_storage_provider_sftp_secure_file_transfer_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_sftp_secure_file_transfer_protocol))
-- `source_file_secure_update_storage_provider_ssh_secure_shell` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_ssh_secure_shell))
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_az_blob_azure_blob_storage`
-
-Read-Only:
-
-- `sas_token` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a SAS (Shared Access Signature) token. If accessing publicly available data, this field is not necessary.
-- `shared_key` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a storage account shared key (aka account key or access key). If accessing publicly available data, this field is not necessary.
-- `storage` (String) must be one of ["AzBlob"]
-- `storage_account` (String) The globally unique name of the storage account that the desired blob sits within. See here for more details.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_gcs_google_cloud_storage`
-
-Read-Only:
-
-- `service_account_json` (String) In order to access private Buckets stored on Google Cloud, this connector would need a service account json credentials with the proper permissions as described here. Please generate the credentials.json file and copy/paste its content to this field (expecting JSON formats). If accessing publicly available data, this field is not necessary.
-- `storage` (String) must be one of ["GCS"]
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_https_public_web`
-
-Read-Only:
-
-- `storage` (String) must be one of ["HTTPS"]
-- `user_agent` (Boolean) Add User-Agent to request
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_s3_amazon_web_services`
-
-Read-Only:
-
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `storage` (String) must be one of ["S3"]
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_scp_secure_copy_protocol`
-
-Read-Only:
-
-- `host` (String)
-- `password` (String)
-- `port` (String)
-- `storage` (String) must be one of ["SCP"]
-- `user` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_sftp_secure_file_transfer_protocol`
-
-Read-Only:
-
-- `host` (String)
-- `password` (String)
-- `port` (String)
-- `storage` (String) must be one of ["SFTP"]
-- `user` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_ssh_secure_shell`
-
-Read-Only:
-
-- `host` (String)
-- `password` (String)
-- `port` (String)
-- `storage` (String) must be one of ["SSH"]
-- `user` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_az_blob_azure_blob_storage`
-
-Read-Only:
-
-- `sas_token` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a SAS (Shared Access Signature) token. If accessing publicly available data, this field is not necessary.
-- `shared_key` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a storage account shared key (aka account key or access key). If accessing publicly available data, this field is not necessary.
-- `storage` (String) must be one of ["AzBlob"]
-- `storage_account` (String) The globally unique name of the storage account that the desired blob sits within. See here for more details.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_gcs_google_cloud_storage`
-
-Read-Only:
-
-- `service_account_json` (String) In order to access private Buckets stored on Google Cloud, this connector would need a service account json credentials with the proper permissions as described here. Please generate the credentials.json file and copy/paste its content to this field (expecting JSON formats). If accessing publicly available data, this field is not necessary.
-- `storage` (String) must be one of ["GCS"]
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_https_public_web`
-
-Read-Only:
-
-- `storage` (String) must be one of ["HTTPS"]
-- `user_agent` (Boolean) Add User-Agent to request
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_s3_amazon_web_services`
-
-Read-Only:
-
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `storage` (String) must be one of ["S3"]
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_scp_secure_copy_protocol`
-
-Read-Only:
-
-- `host` (String)
-- `password` (String)
-- `port` (String)
-- `storage` (String) must be one of ["SCP"]
-- `user` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_sftp_secure_file_transfer_protocol`
-
-Read-Only:
-
-- `host` (String)
-- `password` (String)
-- `port` (String)
-- `storage` (String) must be one of ["SFTP"]
-- `user` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_ssh_secure_shell`
-
-Read-Only:
-
-- `host` (String)
-- `password` (String)
-- `port` (String)
-- `storage` (String) must be one of ["SSH"]
-- `user` (String)
-
-
diff --git a/docs/data-sources/source_firebolt.md b/docs/data-sources/source_firebolt.md
index d2b817259..b6b325c85 100644
--- a/docs/data-sources/source_firebolt.md
+++ b/docs/data-sources/source_firebolt.md
@@ -14,7 +14,6 @@ SourceFirebolt DataSource
```terraform
data "airbyte_source_firebolt" "my_source_firebolt" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_firebolt" "my_source_firebolt" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `account` (String) Firebolt account to login.
-- `database` (String) The database to connect to.
-- `engine` (String) Engine name or url to connect to.
-- `host` (String) The host name of your Firebolt database.
-- `password` (String) Firebolt password.
-- `source_type` (String) must be one of ["firebolt"]
-- `username` (String) Firebolt email address you use to login.
-
diff --git a/docs/data-sources/source_freshcaller.md b/docs/data-sources/source_freshcaller.md
index 68a9ae713..36bd63589 100644
--- a/docs/data-sources/source_freshcaller.md
+++ b/docs/data-sources/source_freshcaller.md
@@ -14,7 +14,6 @@ SourceFreshcaller DataSource
```terraform
data "airbyte_source_freshcaller" "my_source_freshcaller" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_freshcaller" "my_source_freshcaller" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Freshcaller API Key. See the docs for more information on how to obtain this key.
-- `domain` (String) Used to construct Base URL for the Freshcaller APIs
-- `requests_per_minute` (Number) The number of requests per minute that this source is allowed to use. There is a rate limit of 50 requests per minute per app per account.
-- `source_type` (String) must be one of ["freshcaller"]
-- `start_date` (String) UTC date and time. Any data created after this date will be replicated.
-- `sync_lag_minutes` (Number) Lag in minutes for each sync, i.e., at time T, data for the time range [prev_sync_time, T-30] will be fetched
-
diff --git a/docs/data-sources/source_freshdesk.md b/docs/data-sources/source_freshdesk.md
index 88ab4b73e..fe6f18c3c 100644
--- a/docs/data-sources/source_freshdesk.md
+++ b/docs/data-sources/source_freshdesk.md
@@ -14,7 +14,6 @@ SourceFreshdesk DataSource
```terraform
data "airbyte_source_freshdesk" "my_source_freshdesk" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_freshdesk" "my_source_freshdesk" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Freshdesk API Key. See the docs for more information on how to obtain this key.
-- `domain` (String) Freshdesk domain
-- `requests_per_minute` (Number) The number of requests per minute that this source is allowed to use. There is a rate limit of 50 requests per minute per app per account.
-- `source_type` (String) must be one of ["freshdesk"]
-- `start_date` (String) UTC date and time. Any data created after this date will be replicated. If this parameter is not set, all data will be replicated.
-
diff --git a/docs/data-sources/source_freshsales.md b/docs/data-sources/source_freshsales.md
index ceb870b50..14994a6ab 100644
--- a/docs/data-sources/source_freshsales.md
+++ b/docs/data-sources/source_freshsales.md
@@ -14,7 +14,6 @@ SourceFreshsales DataSource
```terraform
data "airbyte_source_freshsales" "my_source_freshsales" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_freshsales" "my_source_freshsales" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Freshsales API Key. See here. The key is case sensitive.
-- `domain_name` (String) The Name of your Freshsales domain
-- `source_type` (String) must be one of ["freshsales"]
-
diff --git a/docs/data-sources/source_gainsight_px.md b/docs/data-sources/source_gainsight_px.md
index f464387d7..054b08bd5 100644
--- a/docs/data-sources/source_gainsight_px.md
+++ b/docs/data-sources/source_gainsight_px.md
@@ -14,7 +14,6 @@ SourceGainsightPx DataSource
```terraform
data "airbyte_source_gainsight_px" "my_source_gainsightpx" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_gainsight_px" "my_source_gainsightpx" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) The Aptrinsic API Key which is received from the dashboard settings (ref - https://app.aptrinsic.com/settings/api-keys)
-- `source_type` (String) must be one of ["gainsight-px"]
-
diff --git a/docs/data-sources/source_gcs.md b/docs/data-sources/source_gcs.md
index df00cdc2a..6b73f71a8 100644
--- a/docs/data-sources/source_gcs.md
+++ b/docs/data-sources/source_gcs.md
@@ -14,7 +14,6 @@ SourceGcs DataSource
```terraform
data "airbyte_source_gcs" "my_source_gcs" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_gcs" "my_source_gcs" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `gcs_bucket` (String) GCS bucket name
-- `gcs_path` (String) GCS path to data
-- `service_account` (String) Enter your Google Cloud service account key in JSON format
-- `source_type` (String) must be one of ["gcs"]
-
diff --git a/docs/data-sources/source_getlago.md b/docs/data-sources/source_getlago.md
index 0d1b331d4..f7615a0aa 100644
--- a/docs/data-sources/source_getlago.md
+++ b/docs/data-sources/source_getlago.md
@@ -14,7 +14,6 @@ SourceGetlago DataSource
```terraform
data "airbyte_source_getlago" "my_source_getlago" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_getlago" "my_source_getlago" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your API Key. See here.
-- `source_type` (String) must be one of ["getlago"]
-
diff --git a/docs/data-sources/source_github.md b/docs/data-sources/source_github.md
index 439abaa9d..2f460a7df 100644
--- a/docs/data-sources/source_github.md
+++ b/docs/data-sources/source_github.md
@@ -14,7 +14,6 @@ SourceGithub DataSource
```terraform
data "airbyte_source_github" "my_source_github" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,75 +25,12 @@ data "airbyte_source_github" "my_source_github" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `branch` (String) Space-delimited list of GitHub repository branches to pull commits for, e.g. `airbytehq/airbyte/master`. If no branches are specified for a repository, the default branch will be pulled.
-- `credentials` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials))
-- `repository` (String) Space-delimited list of GitHub organizations/repositories, e.g. `airbytehq/airbyte` for single repository, `airbytehq/*` for get all repositories from organization and `airbytehq/airbyte airbytehq/another-repo` for multiple repositories.
-- `requests_per_hour` (Number) The GitHub API allows for a maximum of 5000 requests per hour (15000 for Github Enterprise). You can specify a lower value to limit your use of the API quota.
-- `source_type` (String) must be one of ["github"]
-- `start_date` (String) The date from which you'd like to replicate data from GitHub in the format YYYY-MM-DDT00:00:00Z. For the streams which support this configuration, only data generated on or after the start date will be replicated. This field doesn't apply to all streams, see the docs for more info
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_github_authentication_o_auth` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_authentication_o_auth))
-- `source_github_authentication_personal_access_token` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_authentication_personal_access_token))
-- `source_github_update_authentication_o_auth` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_update_authentication_o_auth))
-- `source_github_update_authentication_personal_access_token` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_update_authentication_personal_access_token))
-
-
-### Nested Schema for `configuration.credentials.source_github_authentication_o_auth`
-
-Read-Only:
-
-- `access_token` (String) OAuth access token
-- `client_id` (String) OAuth Client Id
-- `client_secret` (String) OAuth Client secret
-- `option_title` (String) must be one of ["OAuth Credentials"]
-
-
-
-### Nested Schema for `configuration.credentials.source_github_authentication_personal_access_token`
-
-Read-Only:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
-- `personal_access_token` (String) Log into GitHub and then generate a personal access token. To load balance your API quota consumption across multiple API tokens, input multiple tokens separated with ","
-
-
-
-### Nested Schema for `configuration.credentials.source_github_update_authentication_o_auth`
-
-Read-Only:
-
-- `access_token` (String) OAuth access token
-- `client_id` (String) OAuth Client Id
-- `client_secret` (String) OAuth Client secret
-- `option_title` (String) must be one of ["OAuth Credentials"]
-
-
-
-### Nested Schema for `configuration.credentials.source_github_update_authentication_personal_access_token`
-
-Read-Only:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
-- `personal_access_token` (String) Log into GitHub and then generate a personal access token. To load balance your API quota consumption across multiple API tokens, input multiple tokens separated with ","
-
diff --git a/docs/data-sources/source_gitlab.md b/docs/data-sources/source_gitlab.md
index 74e3c6d91..d813cdf4d 100644
--- a/docs/data-sources/source_gitlab.md
+++ b/docs/data-sources/source_gitlab.md
@@ -14,7 +14,6 @@ SourceGitlab DataSource
```terraform
data "airbyte_source_gitlab" "my_source_gitlab" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,79 +25,12 @@ data "airbyte_source_gitlab" "my_source_gitlab" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_url` (String) Please enter your basic URL from GitLab instance.
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `groups` (String) Space-delimited list of groups. e.g. airbyte.io.
-- `projects` (String) Space-delimited list of projects. e.g. airbyte.io/documentation meltano/tap-gitlab.
-- `source_type` (String) must be one of ["gitlab"]
-- `start_date` (String) The date from which you'd like to replicate data for GitLab API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_gitlab_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_authorization_method_o_auth2_0))
-- `source_gitlab_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_authorization_method_private_token))
-- `source_gitlab_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_update_authorization_method_o_auth2_0))
-- `source_gitlab_update_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_update_authorization_method_private_token))
-
-
-### Nested Schema for `configuration.credentials.source_gitlab_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The API ID of the Gitlab developer application.
-- `client_secret` (String) The API Secret the Gitlab developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_gitlab_authorization_method_private_token`
-
-Read-Only:
-
-- `access_token` (String) Log into your Gitlab account and then generate a personal Access Token.
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_gitlab_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The API ID of the Gitlab developer application.
-- `client_secret` (String) The API Secret the Gitlab developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_gitlab_update_authorization_method_private_token`
-
-Read-Only:
-
-- `access_token` (String) Log into your Gitlab account and then generate a personal Access Token.
-- `auth_type` (String) must be one of ["access_token"]
-
diff --git a/docs/data-sources/source_glassfrog.md b/docs/data-sources/source_glassfrog.md
index 392ef9675..afee93207 100644
--- a/docs/data-sources/source_glassfrog.md
+++ b/docs/data-sources/source_glassfrog.md
@@ -14,7 +14,6 @@ SourceGlassfrog DataSource
```terraform
data "airbyte_source_glassfrog" "my_source_glassfrog" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_glassfrog" "my_source_glassfrog" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API key provided by Glassfrog
-- `source_type` (String) must be one of ["glassfrog"]
-
diff --git a/docs/data-sources/source_gnews.md b/docs/data-sources/source_gnews.md
index 301d66d2b..a81d0e57d 100644
--- a/docs/data-sources/source_gnews.md
+++ b/docs/data-sources/source_gnews.md
@@ -14,7 +14,6 @@ SourceGnews DataSource
```terraform
data "airbyte_source_gnews" "my_source_gnews" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,63 +25,12 @@ data "airbyte_source_gnews" "my_source_gnews" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `country` (String) must be one of ["au", "br", "ca", "cn", "eg", "fr", "de", "gr", "hk", "in", "ie", "il", "it", "jp", "nl", "no", "pk", "pe", "ph", "pt", "ro", "ru", "sg", "es", "se", "ch", "tw", "ua", "gb", "us"]
-This parameter allows you to specify the country where the news articles returned by the API were published, the contents of the articles are not necessarily related to the specified country. You have to set as value the 2 letters code of the country you want to filter.
-- `end_date` (String) This parameter allows you to filter the articles that have a publication date smaller than or equal to the specified value. The date must respect the following format: YYYY-MM-DD hh:mm:ss (in UTC)
-- `in` (List of String) This parameter allows you to choose in which attributes the keywords are searched. The attributes that can be set are title, description and content. It is possible to combine several attributes.
-- `language` (String) must be one of ["ar", "zh", "nl", "en", "fr", "de", "el", "he", "hi", "it", "ja", "ml", "mr", "no", "pt", "ro", "ru", "es", "sv", "ta", "te", "uk"]
-- `nullable` (List of String) This parameter allows you to specify the attributes that you allow to return null values. The attributes that can be set are title, description and content. It is possible to combine several attributes
-- `query` (String) This parameter allows you to specify your search keywords to find the news articles you are looking for. The keywords will be used to return the most relevant articles. It is possible to use logical operators with keywords. - Phrase Search Operator: This operator allows you to make an exact search. Keywords surrounded by
- quotation marks are used to search for articles with the exact same keyword sequence.
- For example the query: "Apple iPhone" will return articles matching at least once this sequence of keywords.
-- Logical AND Operator: This operator allows you to make sure that several keywords are all used in the article
- search. By default the space character acts as an AND operator, it is possible to replace the space character
- by AND to obtain the same result. For example the query: Apple Microsoft is equivalent to Apple AND Microsoft
-- Logical OR Operator: This operator allows you to retrieve articles matching the keyword a or the keyword b.
- It is important to note that this operator has a higher precedence than the AND operator. For example the
- query: Apple OR Microsoft will return all articles matching the keyword Apple as well as all articles matching
- the keyword Microsoft
-- Logical NOT Operator: This operator allows you to remove from the results the articles corresponding to the
- specified keywords. To use it, you need to add NOT in front of each word or phrase surrounded by quotes.
- For example the query: Apple NOT iPhone will return all articles matching the keyword Apple but not the keyword
- iPhone
-- `sortby` (String) must be one of ["publishedAt", "relevance"]
-This parameter allows you to choose with which type of sorting the articles should be returned. Two values are possible:
- - publishedAt = sort by publication date, the articles with the most recent publication date are returned first
- - relevance = sort by best match to keywords, the articles with the best match are returned first
-- `source_type` (String) must be one of ["gnews"]
-- `start_date` (String) This parameter allows you to filter the articles that have a publication date greater than or equal to the specified value. The date must respect the following format: YYYY-MM-DD hh:mm:ss (in UTC)
-- `top_headlines_query` (String) This parameter allows you to specify your search keywords to find the news articles you are looking for. The keywords will be used to return the most relevant articles. It is possible to use logical operators with keywords. - Phrase Search Operator: This operator allows you to make an exact search. Keywords surrounded by
- quotation marks are used to search for articles with the exact same keyword sequence.
- For example the query: "Apple iPhone" will return articles matching at least once this sequence of keywords.
-- Logical AND Operator: This operator allows you to make sure that several keywords are all used in the article
- search. By default the space character acts as an AND operator, it is possible to replace the space character
- by AND to obtain the same result. For example the query: Apple Microsoft is equivalent to Apple AND Microsoft
-- Logical OR Operator: This operator allows you to retrieve articles matching the keyword a or the keyword b.
- It is important to note that this operator has a higher precedence than the AND operator. For example the
- query: Apple OR Microsoft will return all articles matching the keyword Apple as well as all articles matching
- the keyword Microsoft
-- Logical NOT Operator: This operator allows you to remove from the results the articles corresponding to the
- specified keywords. To use it, you need to add NOT in front of each word or phrase surrounded by quotes.
- For example the query: Apple NOT iPhone will return all articles matching the keyword Apple but not the keyword
- iPhone
-- `top_headlines_topic` (String) must be one of ["breaking-news", "world", "nation", "business", "technology", "entertainment", "sports", "science", "health"]
-This parameter allows you to change the category for the request.
-
diff --git a/docs/data-sources/source_google_ads.md b/docs/data-sources/source_google_ads.md
index d69d22932..88919455a 100644
--- a/docs/data-sources/source_google_ads.md
+++ b/docs/data-sources/source_google_ads.md
@@ -14,7 +14,6 @@ SourceGoogleAds DataSource
```terraform
data "airbyte_source_google_ads" "my_source_googleads" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,48 +25,12 @@ data "airbyte_source_google_ads" "my_source_googleads" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `conversion_window_days` (Number) A conversion window is the number of days after an ad interaction (such as an ad click or video view) during which a conversion, such as a purchase, is recorded in Google Ads. For more information, see Google's documentation.
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `custom_queries` (Attributes List) (see [below for nested schema](#nestedatt--configuration--custom_queries))
-- `customer_id` (String) Comma-separated list of (client) customer IDs. Each customer ID must be specified as a 10-digit number without dashes. For detailed instructions on finding this value, refer to our documentation.
-- `end_date` (String) UTC date in the format YYYY-MM-DD. Any data after this date will not be replicated. (Default value of today is used if not set)
-- `login_customer_id` (String) If your access to the customer account is through a manager account, this field is required, and must be set to the 10-digit customer ID of the manager account. For more information about this field, refer to Google's documentation.
-- `source_type` (String) must be one of ["google-ads"]
-- `start_date` (String) UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated. (Default value of two years ago is used if not set)
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `access_token` (String) The Access Token for making authenticated requests. For detailed instructions on finding this value, refer to our documentation.
-- `client_id` (String) The Client ID of your Google Ads developer application. For detailed instructions on finding this value, refer to our documentation.
-- `client_secret` (String) The Client Secret of your Google Ads developer application. For detailed instructions on finding this value, refer to our documentation.
-- `developer_token` (String) The Developer Token granted by Google to use their APIs. For detailed instructions on finding this value, refer to our documentation.
-- `refresh_token` (String) The token used to obtain a new Access Token. For detailed instructions on finding this value, refer to our documentation.
-
-
-
-### Nested Schema for `configuration.custom_queries`
-
-Read-Only:
-
-- `query` (String) A custom defined GAQL query for building the report. Avoid including the segments.date field; wherever possible, Airbyte will automatically include it for incremental syncs. For more information, refer to Google's documentation.
-- `table_name` (String) The table name in your destination database for the chosen query.
-
diff --git a/docs/data-sources/source_google_analytics_data_api.md b/docs/data-sources/source_google_analytics_data_api.md
index c29393f3e..424e81348 100644
--- a/docs/data-sources/source_google_analytics_data_api.md
+++ b/docs/data-sources/source_google_analytics_data_api.md
@@ -14,7 +14,6 @@ SourceGoogleAnalyticsDataAPI DataSource
```terraform
data "airbyte_source_google_analytics_data_api" "my_source_googleanalyticsdataapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,77 +25,12 @@ data "airbyte_source_google_analytics_data_api" "my_source_googleanalyticsdataap
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials))
-- `custom_reports` (String) A JSON array describing the custom reports you want to sync from Google Analytics. See the documentation for more information about the exact format you can use to fill out this field.
-- `date_ranges_start_date` (String) The start date from which to replicate report data in the format YYYY-MM-DD. Data generated before this date will not be included in the report. Not applied to custom Cohort reports.
-- `property_id` (String) The Property ID is a unique number assigned to each property in Google Analytics, found in your GA4 property URL. This ID allows the connector to track the specific events associated with your property. Refer to the Google Analytics documentation to locate your property ID.
-- `source_type` (String) must be one of ["google-analytics-data-api"]
-- `window_in_days` (Number) The interval in days for each data request made to the Google Analytics API. A larger value speeds up data sync, but increases the chance of data sampling, which may result in inaccuracies. We recommend a value of 1 to minimize sampling, unless speed is an absolute priority over accuracy. Acceptable values range from 1 to 364. Does not apply to custom Cohort reports. More information is available in the documentation.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_google_analytics_data_api_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_data_api_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_credentials_service_account_key_authentication))
-- `source_google_analytics_data_api_update_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_update_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_data_api_update_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_update_credentials_service_account_key_authentication))
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_credentials_authenticate_via_google_oauth`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Google Analytics developer application.
-- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_credentials_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `credentials_json` (String) The JSON key linked to the service account used for authorization. For steps on obtaining this key, refer to the setup guide.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_update_credentials_authenticate_via_google_oauth`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Google Analytics developer application.
-- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_update_credentials_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `credentials_json` (String) The JSON key linked to the service account used for authorization. For steps on obtaining this key, refer to the setup guide.
-
diff --git a/docs/data-sources/source_google_analytics_v4.md b/docs/data-sources/source_google_analytics_v4.md
deleted file mode 100644
index b17e2c30e..000000000
--- a/docs/data-sources/source_google_analytics_v4.md
+++ /dev/null
@@ -1,102 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_google_analytics_v4 Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceGoogleAnalyticsV4 DataSource
----
-
-# airbyte_source_google_analytics_v4 (Data Source)
-
-SourceGoogleAnalyticsV4 DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_source_google_analytics_v4" "my_source_googleanalyticsv4" {
- secret_id = "...my_secret_id..."
- source_id = "...my_source_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `source_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials))
-- `custom_reports` (String) A JSON array describing the custom reports you want to sync from Google Analytics. See the docs for more information about the exact format you can use to fill out this field.
-- `source_type` (String) must be one of ["google-analytics-v4"]
-- `start_date` (String) The date in the format YYYY-MM-DD. Any data before this date will not be replicated.
-- `view_id` (String) The ID for the Google Analytics View you want to fetch data from. This can be found from the Google Analytics Account Explorer.
-- `window_in_days` (Number) The time increment used by the connector when requesting data from the Google Analytics API. More information is available in the the docs. The bigger this value is, the faster the sync will be, but the more likely that sampling will be applied to your data, potentially causing inaccuracies in the returned results. We recommend setting this to 1 unless you have a hard requirement to make the sync faster at the expense of accuracy. The minimum allowed value for this field is 1, and the maximum is 364.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_google_analytics_v4_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_v4_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_credentials_service_account_key_authentication))
-- `source_google_analytics_v4_update_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_update_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_v4_update_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_update_credentials_service_account_key_authentication))
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_credentials_authenticate_via_google_oauth`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Google Analytics developer application.
-- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_credentials_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `credentials_json` (String) The JSON key of the service account to use for authorization
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_update_credentials_authenticate_via_google_oauth`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Google Analytics developer application.
-- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_update_credentials_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `credentials_json` (String) The JSON key of the service account to use for authorization
-
-
diff --git a/docs/data-sources/source_google_directory.md b/docs/data-sources/source_google_directory.md
index 6119cc061..1c0cad5b4 100644
--- a/docs/data-sources/source_google_directory.md
+++ b/docs/data-sources/source_google_directory.md
@@ -14,7 +14,6 @@ SourceGoogleDirectory DataSource
```terraform
data "airbyte_source_google_directory" "my_source_googledirectory" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,77 +25,12 @@ data "airbyte_source_google_directory" "my_source_googledirectory" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Google APIs use the OAuth 2.0 protocol for authentication and authorization. The Source supports Web server application and Service accounts scenarios. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["google-directory"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_google_directory_google_credentials_service_account_key` (Attributes) For these scenario user should obtain service account's credentials from the Google API Console and provide delegated email. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_google_credentials_service_account_key))
-- `source_google_directory_google_credentials_sign_in_via_google_o_auth` (Attributes) For these scenario user only needs to give permission to read Google Directory data. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_google_credentials_sign_in_via_google_o_auth))
-- `source_google_directory_update_google_credentials_service_account_key` (Attributes) For these scenario user should obtain service account's credentials from the Google API Console and provide delegated email. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_update_google_credentials_service_account_key))
-- `source_google_directory_update_google_credentials_sign_in_via_google_o_auth` (Attributes) For these scenario user only needs to give permission to read Google Directory data. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_update_google_credentials_sign_in_via_google_o_auth))
-
-
-### Nested Schema for `configuration.credentials.source_google_directory_google_credentials_service_account_key`
-
-Read-Only:
-
-- `credentials_json` (String) The contents of the JSON service account key. See the docs for more information on how to generate this key.
-- `credentials_title` (String) must be one of ["Service accounts"]
-Authentication Scenario
-- `email` (String) The email of the user, which has permissions to access the Google Workspace Admin APIs.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_directory_google_credentials_sign_in_via_google_o_auth`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of the developer application.
-- `client_secret` (String) The Client Secret of the developer application.
-- `credentials_title` (String) must be one of ["Web server app"]
-Authentication Scenario
-- `refresh_token` (String) The Token for obtaining a new access token.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_directory_update_google_credentials_service_account_key`
-
-Read-Only:
-
-- `credentials_json` (String) The contents of the JSON service account key. See the docs for more information on how to generate this key.
-- `credentials_title` (String) must be one of ["Service accounts"]
-Authentication Scenario
-- `email` (String) The email of the user, which has permissions to access the Google Workspace Admin APIs.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_directory_update_google_credentials_sign_in_via_google_o_auth`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of the developer application.
-- `client_secret` (String) The Client Secret of the developer application.
-- `credentials_title` (String) must be one of ["Web server app"]
-Authentication Scenario
-- `refresh_token` (String) The Token for obtaining a new access token.
-
diff --git a/docs/data-sources/source_google_drive.md b/docs/data-sources/source_google_drive.md
new file mode 100644
index 000000000..b748389a3
--- /dev/null
+++ b/docs/data-sources/source_google_drive.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_google_drive Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceGoogleDrive DataSource
+---
+
+# airbyte_source_google_drive (Data Source)
+
+SourceGoogleDrive DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_source_google_drive" "my_source_googledrive" {
+ source_id = "...my_source_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `source_id` (String)
+
+### Read-Only
+
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
+- `name` (String)
+- `source_type` (String)
+- `workspace_id` (String)
+
+
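+Because `configuration` is returned as a JSON-encoded string rather than nested attributes, it can be decoded with Terraform's built-in `jsondecode()` function. A minimal sketch (the `folder_url` key is hypothetical; the actual keys depend on the connector's configuration schema):
+
+```terraform
+data "airbyte_source_google_drive" "my_source_googledrive" {
+  source_id = "...my_source_id..."
+}
+
+# Decode the JSON configuration string into an object. The "folder_url"
+# key is illustrative only; use the field names from the connector's schema.
+output "google_drive_folder_url" {
+  value = jsondecode(data.airbyte_source_google_drive.my_source_googledrive.configuration)["folder_url"]
+}
+```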
diff --git a/docs/data-sources/source_google_pagespeed_insights.md b/docs/data-sources/source_google_pagespeed_insights.md
index 722c1e312..f6b040f3a 100644
--- a/docs/data-sources/source_google_pagespeed_insights.md
+++ b/docs/data-sources/source_google_pagespeed_insights.md
@@ -14,7 +14,6 @@ SourceGooglePagespeedInsights DataSource
```terraform
data "airbyte_source_google_pagespeed_insights" "my_source_googlepagespeedinsights" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_google_pagespeed_insights" "my_source_googlepagespeedinsigh
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Google PageSpeed API Key. See here. The key is optional - however the API is heavily rate limited when using without API Key. Creating and using the API key therefore is recommended. The key is case sensitive.
-- `categories` (List of String) Defines which Lighthouse category to run. One or many of: "accessibility", "best-practices", "performance", "pwa", "seo".
-- `source_type` (String) must be one of ["google-pagespeed-insights"]
-- `strategies` (List of String) The analysis strategy to use. Either "desktop" or "mobile".
-- `urls` (List of String) The URLs to retrieve pagespeed information from. The connector will attempt to sync PageSpeed reports for all the defined URLs. Format: https://(www.)url.domain
-
diff --git a/docs/data-sources/source_google_search_console.md b/docs/data-sources/source_google_search_console.md
index c13b41baa..3570fda5b 100644
--- a/docs/data-sources/source_google_search_console.md
+++ b/docs/data-sources/source_google_search_console.md
@@ -14,7 +14,6 @@ SourceGoogleSearchConsole DataSource
```terraform
data "airbyte_source_google_search_console" "my_source_googlesearchconsole" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,92 +25,12 @@ data "airbyte_source_google_search_console" "my_source_googlesearchconsole" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `authorization` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization))
-- `custom_reports` (String) (DEPRECATED) A JSON array describing the custom reports you want to sync from Google Search Console. See our documentation for more information on formulating custom reports.
-- `custom_reports_array` (Attributes List) You can add your Custom Analytics report by creating one. (see [below for nested schema](#nestedatt--configuration--custom_reports_array))
-- `data_state` (String) must be one of ["final", "all"]
-If set to 'final', the returned data will include only finalized, stable data. If set to 'all', fresh data will be included. When using Incremental sync mode, we do not recommend setting this parameter to 'all' as it may cause data loss. More information can be found in our full documentation.
-- `end_date` (String) UTC date in the format YYYY-MM-DD. Any data created after this date will not be replicated. Must be greater or equal to the start date field. Leaving this field blank will replicate all data from the start date onward.
-- `site_urls` (List of String) The URLs of the website property attached to your GSC account. Learn more about properties here.
-- `source_type` (String) must be one of ["google-search-console"]
-- `start_date` (String) UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.authorization`
-
-Read-Only:
-
-- `source_google_search_console_authentication_type_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_authentication_type_o_auth))
-- `source_google_search_console_authentication_type_service_account_key_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_authentication_type_service_account_key_authentication))
-- `source_google_search_console_update_authentication_type_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_update_authentication_type_o_auth))
-- `source_google_search_console_update_authentication_type_service_account_key_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_update_authentication_type_service_account_key_authentication))
-
-
-### Nested Schema for `configuration.authorization.source_google_search_console_authentication_type_o_auth`
-
-Read-Only:
-
-- `access_token` (String) Access token for making authenticated requests. Read more here.
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The client ID of your Google Search Console developer application. Read more here.
-- `client_secret` (String) The client secret of your Google Search Console developer application. Read more here.
-- `refresh_token` (String) The token for obtaining a new access token. Read more here.
-
-
-
-### Nested Schema for `configuration.authorization.source_google_search_console_authentication_type_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `email` (String) The email of the user which has permissions to access the Google Workspace Admin APIs.
-- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
-
-
-
-### Nested Schema for `configuration.authorization.source_google_search_console_update_authentication_type_o_auth`
-
-Read-Only:
-
-- `access_token` (String) Access token for making authenticated requests. Read more here.
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The client ID of your Google Search Console developer application. Read more here.
-- `client_secret` (String) The client secret of your Google Search Console developer application. Read more here.
-- `refresh_token` (String) The token for obtaining a new access token. Read more here.
-
-
-
-### Nested Schema for `configuration.authorization.source_google_search_console_update_authentication_type_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `email` (String) The email of the user which has permissions to access the Google Workspace Admin APIs.
-- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
-
-
-
-
-### Nested Schema for `configuration.custom_reports_array`
-
-Read-Only:
-
-- `dimensions` (List of String) A list of dimensions (country, date, device, page, query)
-- `name` (String) The name of the custom report, this name would be used as stream name
-
diff --git a/docs/data-sources/source_google_sheets.md b/docs/data-sources/source_google_sheets.md
index 9a1e2595f..355a7059d 100644
--- a/docs/data-sources/source_google_sheets.md
+++ b/docs/data-sources/source_google_sheets.md
@@ -14,7 +14,6 @@ SourceGoogleSheets DataSource
```terraform
data "airbyte_source_google_sheets" "my_source_googlesheets" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,73 +25,12 @@ data "airbyte_source_google_sheets" "my_source_googlesheets" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials))
-- `names_conversion` (Boolean) Enables the conversion of column names to a standardized, SQL-compliant format. For example, 'My Name' -> 'my_name'. Enable this option if your destination is SQL-based.
-- `source_type` (String) must be one of ["google-sheets"]
-- `spreadsheet_id` (String) Enter the link to the Google spreadsheet you want to sync. To copy the link, click the 'Share' button in the top-right corner of the spreadsheet, then click 'Copy link'.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_google_sheets_authentication_authenticate_via_google_o_auth` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_authentication_authenticate_via_google_o_auth))
-- `source_google_sheets_authentication_service_account_key_authentication` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_authentication_service_account_key_authentication))
-- `source_google_sheets_update_authentication_authenticate_via_google_o_auth` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_update_authentication_authenticate_via_google_o_auth))
-- `source_google_sheets_update_authentication_service_account_key_authentication` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_update_authentication_service_account_key_authentication))
-
-
-### Nested Schema for `configuration.credentials.source_google_sheets_authentication_authenticate_via_google_o_auth`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) Enter your Google application's Client ID. See Google's documentation for more information.
-- `client_secret` (String) Enter your Google application's Client Secret. See Google's documentation for more information.
-- `refresh_token` (String) Enter your Google application's refresh token. See Google's documentation for more information.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_sheets_authentication_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_sheets_update_authentication_authenticate_via_google_o_auth`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) Enter your Google application's Client ID. See Google's documentation for more information.
-- `client_secret` (String) Enter your Google application's Client Secret. See Google's documentation for more information.
-- `refresh_token` (String) Enter your Google application's refresh token. See Google's documentation for more information.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_sheets_update_authentication_service_account_key_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Service"]
-- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
-
diff --git a/docs/data-sources/source_google_webfonts.md b/docs/data-sources/source_google_webfonts.md
index 20e758d7f..76e3af3eb 100644
--- a/docs/data-sources/source_google_webfonts.md
+++ b/docs/data-sources/source_google_webfonts.md
@@ -14,7 +14,6 @@ SourceGoogleWebfonts DataSource
```terraform
data "airbyte_source_google_webfonts" "my_source_googlewebfonts" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_google_webfonts" "my_source_googlewebfonts" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `alt` (String) Optional. Available params: json, media, proto
-- `api_key` (String) An API key is required to access Google APIs. To get yours, go to the Google console and generate an API key for Webfonts
-- `pretty_print` (String) Optional, boolean type
-- `sort` (String) Optional. Specifies how the list of fonts is sorted
-- `source_type` (String) must be one of ["google-webfonts"]
-
diff --git a/docs/data-sources/source_google_workspace_admin_reports.md b/docs/data-sources/source_google_workspace_admin_reports.md
index 5dc2f07f8..fa4febe8d 100644
--- a/docs/data-sources/source_google_workspace_admin_reports.md
+++ b/docs/data-sources/source_google_workspace_admin_reports.md
@@ -14,7 +14,6 @@ SourceGoogleWorkspaceAdminReports DataSource
```terraform
data "airbyte_source_google_workspace_admin_reports" "my_source_googleworkspaceadminreports" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_google_workspace_admin_reports" "my_source_googleworkspacea
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials_json` (String) The contents of the JSON service account key. See the docs for more information on how to generate this key.
-- `email` (String) The email of the user who has permissions to access the Google Workspace Admin APIs.
-- `lookback` (Number) Sets the range of time shown in the report. The Reports API allows data from up to 180 days ago.
-- `source_type` (String) must be one of ["google-workspace-admin-reports"]
-
diff --git a/docs/data-sources/source_greenhouse.md b/docs/data-sources/source_greenhouse.md
index 8530aaa59..710b07762 100644
--- a/docs/data-sources/source_greenhouse.md
+++ b/docs/data-sources/source_greenhouse.md
@@ -14,7 +14,6 @@ SourceGreenhouse DataSource
```terraform
data "airbyte_source_greenhouse" "my_source_greenhouse" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_greenhouse" "my_source_greenhouse" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Greenhouse API Key. See the docs for more information on how to generate this key.
-- `source_type` (String) must be one of ["greenhouse"]
-
diff --git a/docs/data-sources/source_gridly.md b/docs/data-sources/source_gridly.md
index 7f10be2e7..976eb75fe 100644
--- a/docs/data-sources/source_gridly.md
+++ b/docs/data-sources/source_gridly.md
@@ -14,7 +14,6 @@ SourceGridly DataSource
```terraform
data "airbyte_source_gridly" "my_source_gridly" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_gridly" "my_source_gridly" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String)
-- `grid_id` (String) ID of a grid, or can be ID of a branch
-- `source_type` (String) must be one of ["gridly"]
-
diff --git a/docs/data-sources/source_harvest.md b/docs/data-sources/source_harvest.md
index db58945e7..cf421ee65 100644
--- a/docs/data-sources/source_harvest.md
+++ b/docs/data-sources/source_harvest.md
@@ -14,7 +14,6 @@ SourceHarvest DataSource
```terraform
data "airbyte_source_harvest" "my_source_harvest" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,90 +25,12 @@ data "airbyte_source_harvest" "my_source_harvest" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `account_id` (String) Harvest account ID. Required for all Harvest requests in pair with Personal Access Token
-- `credentials` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `replication_end_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data after this date will not be replicated.
-- `replication_start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `source_type` (String) must be one of ["harvest"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_harvest_authentication_mechanism_authenticate_via_harvest_o_auth` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_authentication_mechanism_authenticate_via_harvest_o_auth))
-- `source_harvest_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_authentication_mechanism_authenticate_with_personal_access_token))
-- `source_harvest_update_authentication_mechanism_authenticate_via_harvest_o_auth` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_update_authentication_mechanism_authenticate_via_harvest_o_auth))
-- `source_harvest_update_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_update_authentication_mechanism_authenticate_with_personal_access_token))
-
-
-### Nested Schema for `configuration.credentials.source_harvest_authentication_mechanism_authenticate_via_harvest_o_auth`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Harvest developer application.
-- `client_secret` (String) The Client Secret of your Harvest developer application.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
-
-
-
-### Nested Schema for `configuration.credentials.source_harvest_authentication_mechanism_authenticate_with_personal_access_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_token` (String) Log into Harvest and then create a new personal access token.
-- `auth_type` (String) must be one of ["Token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_harvest_update_authentication_mechanism_authenticate_via_harvest_o_auth`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Harvest developer application.
-- `client_secret` (String) The Client Secret of your Harvest developer application.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
-
-
-
-### Nested Schema for `configuration.credentials.source_harvest_update_authentication_mechanism_authenticate_with_personal_access_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_token` (String) Log into Harvest and then create a new personal access token.
-- `auth_type` (String) must be one of ["Token"]
-
diff --git a/docs/data-sources/source_hubplanner.md b/docs/data-sources/source_hubplanner.md
index 88ec25a98..f96ecad7a 100644
--- a/docs/data-sources/source_hubplanner.md
+++ b/docs/data-sources/source_hubplanner.md
@@ -14,7 +14,6 @@ SourceHubplanner DataSource
```terraform
data "airbyte_source_hubplanner" "my_source_hubplanner" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_hubplanner" "my_source_hubplanner" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Hubplanner API key. See https://github.com/hubplanner/API#authentication for more details.
-- `source_type` (String) must be one of ["hubplanner"]
-
diff --git a/docs/data-sources/source_hubspot.md b/docs/data-sources/source_hubspot.md
index 46f3ebcf4..20793d6fc 100644
--- a/docs/data-sources/source_hubspot.md
+++ b/docs/data-sources/source_hubspot.md
@@ -14,7 +14,6 @@ SourceHubspot DataSource
```terraform
data "airbyte_source_hubspot" "my_source_hubspot" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,76 +25,12 @@ data "airbyte_source_hubspot" "my_source_hubspot" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["hubspot"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_hubspot_authentication_o_auth` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_authentication_o_auth))
-- `source_hubspot_authentication_private_app` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_authentication_private_app))
-- `source_hubspot_update_authentication_o_auth` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_update_authentication_o_auth))
-- `source_hubspot_update_authentication_private_app` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_update_authentication_private_app))
-
-
-### Nested Schema for `configuration.credentials.source_hubspot_authentication_o_auth`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your HubSpot developer application. See the Hubspot docs if you need help finding this ID.
-- `client_secret` (String) The client secret for your HubSpot developer application. See the Hubspot docs if you need help finding this secret.
-- `credentials_title` (String) must be one of ["OAuth Credentials"]
-Name of the credentials
-- `refresh_token` (String) Refresh token to renew an expired access token. See the Hubspot docs if you need help finding this token.
-
-
-
-### Nested Schema for `configuration.credentials.source_hubspot_authentication_private_app`
-
-Read-Only:
-
-- `access_token` (String) HubSpot Access token. See the Hubspot docs if you need help finding this token.
-- `credentials_title` (String) must be one of ["Private App Credentials"]
-Name of the credentials set
-
-
-
-### Nested Schema for `configuration.credentials.source_hubspot_update_authentication_o_auth`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your HubSpot developer application. See the Hubspot docs if you need help finding this ID.
-- `client_secret` (String) The client secret for your HubSpot developer application. See the Hubspot docs if you need help finding this secret.
-- `credentials_title` (String) must be one of ["OAuth Credentials"]
-Name of the credentials
-- `refresh_token` (String) Refresh token to renew an expired access token. See the Hubspot docs if you need help finding this token.
-
-
-
-### Nested Schema for `configuration.credentials.source_hubspot_update_authentication_private_app`
-
-Read-Only:
-
-- `access_token` (String) HubSpot Access token. See the Hubspot docs if you need help finding this token.
-- `credentials_title` (String) must be one of ["Private App Credentials"]
-Name of the credentials set
-
diff --git a/docs/data-sources/source_insightly.md b/docs/data-sources/source_insightly.md
index 11276454f..df18fbd22 100644
--- a/docs/data-sources/source_insightly.md
+++ b/docs/data-sources/source_insightly.md
@@ -14,7 +14,6 @@ SourceInsightly DataSource
```terraform
data "airbyte_source_insightly" "my_source_insightly" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_insightly" "my_source_insightly" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["insightly"]
-- `start_date` (String) The date from which you'd like to replicate data for Insightly in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated. Note that it will be used only for incremental streams.
-- `token` (String) Your Insightly API token.
-
diff --git a/docs/data-sources/source_instagram.md b/docs/data-sources/source_instagram.md
index 25108283f..441051576 100644
--- a/docs/data-sources/source_instagram.md
+++ b/docs/data-sources/source_instagram.md
@@ -14,7 +14,6 @@ SourceInstagram DataSource
```terraform
data "airbyte_source_instagram" "my_source_instagram" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_instagram" "my_source_instagram" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) The value of the access token generated with instagram_basic, instagram_manage_insights, pages_show_list, pages_read_engagement, Instagram Public Content Access permissions. See the docs for more information
-- `client_id` (String) The Client ID for your Oauth application
-- `client_secret` (String) The Client Secret for your Oauth application
-- `source_type` (String) must be one of ["instagram"]
-- `start_date` (String) The date from which you'd like to replicate data for User Insights, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-
diff --git a/docs/data-sources/source_instatus.md b/docs/data-sources/source_instatus.md
index 63719d228..123f360b4 100644
--- a/docs/data-sources/source_instatus.md
+++ b/docs/data-sources/source_instatus.md
@@ -14,7 +14,6 @@ SourceInstatus DataSource
```terraform
data "airbyte_source_instatus" "my_source_instatus" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_instatus" "my_source_instatus" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Instatus REST API key
-- `source_type` (String) must be one of ["instatus"]
-
diff --git a/docs/data-sources/source_intercom.md b/docs/data-sources/source_intercom.md
index 4c16c3ba4..4eae4cbe9 100644
--- a/docs/data-sources/source_intercom.md
+++ b/docs/data-sources/source_intercom.md
@@ -14,7 +14,6 @@ SourceIntercom DataSource
```terraform
data "airbyte_source_intercom" "my_source_intercom" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_intercom" "my_source_intercom" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Access token for making authenticated requests. See the Intercom docs for more information.
-- `client_id` (String) Client Id for your Intercom application.
-- `client_secret` (String) Client Secret for your Intercom application.
-- `source_type` (String) must be one of ["intercom"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_ip2whois.md b/docs/data-sources/source_ip2whois.md
index eec3c2c07..5f9333cc0 100644
--- a/docs/data-sources/source_ip2whois.md
+++ b/docs/data-sources/source_ip2whois.md
@@ -14,7 +14,6 @@ SourceIp2whois DataSource
```terraform
data "airbyte_source_ip2whois" "my_source_ip2whois" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_ip2whois" "my_source_ip2whois" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your API Key. See here.
-- `domain` (String) Domain name. See here.
-- `source_type` (String) must be one of ["ip2whois"]
-
diff --git a/docs/data-sources/source_iterable.md b/docs/data-sources/source_iterable.md
index 468137bf6..c8dd782ea 100644
--- a/docs/data-sources/source_iterable.md
+++ b/docs/data-sources/source_iterable.md
@@ -14,7 +14,6 @@ SourceIterable DataSource
```terraform
data "airbyte_source_iterable" "my_source_iterable" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_iterable" "my_source_iterable" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Iterable API Key. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["iterable"]
-- `start_date` (String) The date from which you'd like to replicate data for Iterable, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-
diff --git a/docs/data-sources/source_jira.md b/docs/data-sources/source_jira.md
index a7ea1f407..741ebb013 100644
--- a/docs/data-sources/source_jira.md
+++ b/docs/data-sources/source_jira.md
@@ -14,7 +14,6 @@ SourceJira DataSource
```terraform
data "airbyte_source_jira" "my_source_jira" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,29 +25,12 @@ data "airbyte_source_jira" "my_source_jira" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Jira API Token. See the docs for more information on how to generate this key. API Token is used for Authorization to your account by BasicAuth.
-- `domain` (String) The Domain for your Jira account, e.g. airbyteio.atlassian.net, airbyteio.jira.com, jira.your-domain.com
-- `email` (String) The user email for your Jira account which you used to generate the API token. This field is used for Authorization to your account by BasicAuth.
-- `enable_experimental_streams` (Boolean) Allow the use of experimental streams which rely on undocumented Jira API endpoints. See https://docs.airbyte.com/integrations/sources/jira#experimental-tables for more info.
-- `expand_issue_changelog` (Boolean) Expand the changelog when replicating issues.
-- `projects` (List of String) List of Jira project keys to replicate data for, or leave it empty if you want to replicate data for all projects.
-- `render_fields` (Boolean) Render issue fields in HTML format in addition to Jira JSON-like format.
-- `source_type` (String) must be one of ["jira"]
-- `start_date` (String) The date from which you want to replicate data from Jira, use the format YYYY-MM-DDT00:00:00Z. Note that this field only applies to certain streams, and only data generated on or after the start date will be replicated. Or leave it empty if you want to replicate all data. For more information, refer to the documentation.
-
diff --git a/docs/data-sources/source_k6_cloud.md b/docs/data-sources/source_k6_cloud.md
index f664b0a39..4e9987d57 100644
--- a/docs/data-sources/source_k6_cloud.md
+++ b/docs/data-sources/source_k6_cloud.md
@@ -14,7 +14,6 @@ SourceK6Cloud DataSource
```terraform
data "airbyte_source_k6_cloud" "my_source_k6cloud" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_k6_cloud" "my_source_k6cloud" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Your API Token. See here. The key is case sensitive.
-- `source_type` (String) must be one of ["k6-cloud"]
-
diff --git a/docs/data-sources/source_klarna.md b/docs/data-sources/source_klarna.md
index 6102e86a7..d0640b0e1 100644
--- a/docs/data-sources/source_klarna.md
+++ b/docs/data-sources/source_klarna.md
@@ -14,7 +14,6 @@ SourceKlarna DataSource
```terraform
data "airbyte_source_klarna" "my_source_klarna" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_klarna" "my_source_klarna" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `password` (String) A string which is associated with your Merchant ID and is used to authorize use of Klarna's APIs (https://developers.klarna.com/api/#authentication)
-- `playground` (Boolean) Propertie defining if connector is used against playground or production environment
-- `region` (String) must be one of ["eu", "us", "oc"]
-Base url region (For playground eu https://docs.klarna.com/klarna-payments/api/payments-api/#tag/API-URLs). Supported 'eu', 'us', 'oc'
-- `source_type` (String) must be one of ["klarna"]
-- `username` (String) Consists of your Merchant ID (eid) - a unique number that identifies your e-store, combined with a random string (https://developers.klarna.com/api/#authentication)
-
diff --git a/docs/data-sources/source_klaviyo.md b/docs/data-sources/source_klaviyo.md
index d4a7f5180..0a77a6582 100644
--- a/docs/data-sources/source_klaviyo.md
+++ b/docs/data-sources/source_klaviyo.md
@@ -14,7 +14,6 @@ SourceKlaviyo DataSource
```terraform
data "airbyte_source_klaviyo" "my_source_klaviyo" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_klaviyo" "my_source_klaviyo" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Klaviyo API Key. See our docs if you need help finding this key.
-- `source_type` (String) must be one of ["klaviyo"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_kustomer_singer.md b/docs/data-sources/source_kustomer_singer.md
index 5a4537dca..8c54ff0a6 100644
--- a/docs/data-sources/source_kustomer_singer.md
+++ b/docs/data-sources/source_kustomer_singer.md
@@ -14,7 +14,6 @@ SourceKustomerSinger DataSource
```terraform
data "airbyte_source_kustomer_singer" "my_source_kustomersinger" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_kustomer_singer" "my_source_kustomersinger" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Kustomer API Token. See the docs on how to obtain this
-- `source_type` (String) must be one of ["kustomer-singer"]
-- `start_date` (String) The date from which you'd like to replicate the data
-
diff --git a/docs/data-sources/source_kyve.md b/docs/data-sources/source_kyve.md
index 304bbf03c..0f038d0c9 100644
--- a/docs/data-sources/source_kyve.md
+++ b/docs/data-sources/source_kyve.md
@@ -14,7 +14,6 @@ SourceKyve DataSource
```terraform
data "airbyte_source_kyve" "my_source_kyve" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_kyve" "my_source_kyve" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `max_pages` (Number) The maximum amount of pages to go trough. Set to 'null' for all pages.
-- `page_size` (Number) The pagesize for pagination, smaller numbers are used in integration tests.
-- `pool_ids` (String) The IDs of the KYVE storage pool you want to archive. (Comma separated)
-- `source_type` (String) must be one of ["kyve"]
-- `start_ids` (String) The start-id defines, from which bundle id the pipeline should start to extract the data (Comma separated)
-- `url_base` (String) URL to the KYVE Chain API.
-
diff --git a/docs/data-sources/source_launchdarkly.md b/docs/data-sources/source_launchdarkly.md
index 58e8a139a..8061b8517 100644
--- a/docs/data-sources/source_launchdarkly.md
+++ b/docs/data-sources/source_launchdarkly.md
@@ -14,7 +14,6 @@ SourceLaunchdarkly DataSource
```terraform
data "airbyte_source_launchdarkly" "my_source_launchdarkly" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_launchdarkly" "my_source_launchdarkly" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Your Access token. See here.
-- `source_type` (String) must be one of ["launchdarkly"]
-
diff --git a/docs/data-sources/source_lemlist.md b/docs/data-sources/source_lemlist.md
index c08e60316..2816b7a34 100644
--- a/docs/data-sources/source_lemlist.md
+++ b/docs/data-sources/source_lemlist.md
@@ -14,7 +14,6 @@ SourceLemlist DataSource
```terraform
data "airbyte_source_lemlist" "my_source_lemlist" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_lemlist" "my_source_lemlist" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Lemlist API key,
-- `source_type` (String) must be one of ["lemlist"]
-
diff --git a/docs/data-sources/source_lever_hiring.md b/docs/data-sources/source_lever_hiring.md
index 1c78dc637..9b3f2b685 100644
--- a/docs/data-sources/source_lever_hiring.md
+++ b/docs/data-sources/source_lever_hiring.md
@@ -14,7 +14,6 @@ SourceLeverHiring DataSource
```terraform
data "airbyte_source_lever_hiring" "my_source_leverhiring" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,74 +25,12 @@ data "airbyte_source_lever_hiring" "my_source_leverhiring" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `environment` (String) must be one of ["Production", "Sandbox"]
-The environment in which you'd like to replicate data for Lever. This is used to determine which Lever API endpoint to use.
-- `source_type` (String) must be one of ["lever-hiring"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated. Note that it will be used only in the following incremental streams: comments, commits, and issues.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_lever_hiring_authentication_mechanism_authenticate_via_lever_api_key` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_authentication_mechanism_authenticate_via_lever_api_key))
-- `source_lever_hiring_authentication_mechanism_authenticate_via_lever_o_auth` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_authentication_mechanism_authenticate_via_lever_o_auth))
-- `source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_api_key` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_api_key))
-- `source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_o_auth` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_o_auth))
-
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_authentication_mechanism_authenticate_via_lever_api_key`
-
-Read-Only:
-
-- `api_key` (String) The Api Key of your Lever Hiring account.
-- `auth_type` (String) must be one of ["Api Key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_authentication_mechanism_authenticate_via_lever_o_auth`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Lever Hiring developer application.
-- `client_secret` (String) The Client Secret of your Lever Hiring developer application.
-- `refresh_token` (String) The token for obtaining new access token.
-
-
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_api_key`
-
-Read-Only:
-
-- `api_key` (String) The Api Key of your Lever Hiring account.
-- `auth_type` (String) must be one of ["Api Key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_o_auth`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Lever Hiring developer application.
-- `client_secret` (String) The Client Secret of your Lever Hiring developer application.
-- `refresh_token` (String) The token for obtaining new access token.
-
diff --git a/docs/data-sources/source_linkedin_ads.md b/docs/data-sources/source_linkedin_ads.md
index 2cb32b690..aaa556947 100644
--- a/docs/data-sources/source_linkedin_ads.md
+++ b/docs/data-sources/source_linkedin_ads.md
@@ -14,7 +14,6 @@ SourceLinkedinAds DataSource
```terraform
data "airbyte_source_linkedin_ads" "my_source_linkedinads" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,86 +25,12 @@ data "airbyte_source_linkedin_ads" "my_source_linkedinads" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `account_ids` (List of Number) Specify the account IDs to pull data from, separated by a space. Leave this field empty if you want to pull the data from all accounts accessible by the authenticated user. See the LinkedIn docs to locate these IDs.
-- `ad_analytics_reports` (Attributes List) (see [below for nested schema](#nestedatt--configuration--ad_analytics_reports))
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["linkedin-ads"]
-- `start_date` (String) UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.ad_analytics_reports`
-
-Read-Only:
-
-- `name` (String) The name for the custom report.
-- `pivot_by` (String) must be one of ["COMPANY", "ACCOUNT", "SHARE", "CAMPAIGN", "CREATIVE", "CAMPAIGN_GROUP", "CONVERSION", "CONVERSATION_NODE", "CONVERSATION_NODE_OPTION_INDEX", "SERVING_LOCATION", "CARD_INDEX", "MEMBER_COMPANY_SIZE", "MEMBER_INDUSTRY", "MEMBER_SENIORITY", "MEMBER_JOB_TITLE ", "MEMBER_JOB_FUNCTION ", "MEMBER_COUNTRY_V2 ", "MEMBER_REGION_V2", "MEMBER_COMPANY", "PLACEMENT_NAME", "IMPRESSION_DEVICE_TYPE"]
-Choose a category to pivot your analytics report around. This selection will organize your data based on the chosen attribute, allowing you to analyze trends and performance from different perspectives.
-- `time_granularity` (String) must be one of ["ALL", "DAILY", "MONTHLY", "YEARLY"]
-Choose how to group the data in your report by time. The options are:
- 'ALL': A single result summarizing the entire time range.
- 'DAILY': Group results by each day.
- 'MONTHLY': Group results by each month.
- 'YEARLY': Group results by each year.
-Selecting a time grouping helps you analyze trends and patterns over different time periods.
-
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_linkedin_ads_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_authentication_access_token))
-- `source_linkedin_ads_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_authentication_o_auth2_0))
-- `source_linkedin_ads_update_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_update_authentication_access_token))
-- `source_linkedin_ads_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_update_authentication_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_authentication_access_token`
-
-Read-Only:
-
-- `access_token` (String) The access token generated for your developer application. Refer to our documentation for more information.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_authentication_o_auth2_0`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
-- `client_id` (String) The client ID of your developer application. Refer to our documentation for more information.
-- `client_secret` (String) The client secret of your developer application. Refer to our documentation for more information.
-- `refresh_token` (String) The key to refresh the expired access token. Refer to our documentation for more information.
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_update_authentication_access_token`
-
-Read-Only:
-
-- `access_token` (String) The access token generated for your developer application. Refer to our documentation for more information.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_update_authentication_o_auth2_0`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
-- `client_id` (String) The client ID of your developer application. Refer to our documentation for more information.
-- `client_secret` (String) The client secret of your developer application. Refer to our documentation for more information.
-- `refresh_token` (String) The key to refresh the expired access token. Refer to our documentation for more information.
-
diff --git a/docs/data-sources/source_linkedin_pages.md b/docs/data-sources/source_linkedin_pages.md
index 83897879a..1bbe02716 100644
--- a/docs/data-sources/source_linkedin_pages.md
+++ b/docs/data-sources/source_linkedin_pages.md
@@ -14,7 +14,6 @@ SourceLinkedinPages DataSource
```terraform
data "airbyte_source_linkedin_pages" "my_source_linkedinpages" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,72 +25,12 @@ data "airbyte_source_linkedin_pages" "my_source_linkedinpages" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `org_id` (String) Specify the Organization ID
-- `source_type` (String) must be one of ["linkedin-pages"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_linkedin_pages_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_authentication_access_token))
-- `source_linkedin_pages_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_authentication_o_auth2_0))
-- `source_linkedin_pages_update_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_update_authentication_access_token))
-- `source_linkedin_pages_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_update_authentication_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_authentication_access_token`
-
-Read-Only:
-
-- `access_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_authentication_o_auth2_0`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
-- `client_id` (String) The client ID of the LinkedIn developer application.
-- `client_secret` (String) The client secret of the LinkedIn developer application.
-- `refresh_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_update_authentication_access_token`
-
-Read-Only:
-
-- `access_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_update_authentication_o_auth2_0`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
-- `client_id` (String) The client ID of the LinkedIn developer application.
-- `client_secret` (String) The client secret of the LinkedIn developer application.
-- `refresh_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-
diff --git a/docs/data-sources/source_linnworks.md b/docs/data-sources/source_linnworks.md
index 4b1de1a3f..3a2bcfdfd 100644
--- a/docs/data-sources/source_linnworks.md
+++ b/docs/data-sources/source_linnworks.md
@@ -14,7 +14,6 @@ SourceLinnworks DataSource
```terraform
data "airbyte_source_linnworks" "my_source_linnworks" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_linnworks" "my_source_linnworks" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `application_id` (String) Linnworks Application ID
-- `application_secret` (String) Linnworks Application Secret
-- `source_type` (String) must be one of ["linnworks"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `token` (String)
-
diff --git a/docs/data-sources/source_lokalise.md b/docs/data-sources/source_lokalise.md
index df63115c4..f056d4f99 100644
--- a/docs/data-sources/source_lokalise.md
+++ b/docs/data-sources/source_lokalise.md
@@ -14,7 +14,6 @@ SourceLokalise DataSource
```terraform
data "airbyte_source_lokalise" "my_source_lokalise" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_lokalise" "my_source_lokalise" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Lokalise API Key with read-access. Available at Profile settings > API tokens. See here.
-- `project_id` (String) Lokalise project ID. Available at Project Settings > General.
-- `source_type` (String) must be one of ["lokalise"]
-
diff --git a/docs/data-sources/source_mailchimp.md b/docs/data-sources/source_mailchimp.md
index 30257bf41..c23f08560 100644
--- a/docs/data-sources/source_mailchimp.md
+++ b/docs/data-sources/source_mailchimp.md
@@ -14,7 +14,6 @@ SourceMailchimp DataSource
```terraform
data "airbyte_source_mailchimp" "my_source_mailchimp" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,72 +25,12 @@ data "airbyte_source_mailchimp" "my_source_mailchimp" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `campaign_id` (String)
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["mailchimp"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_mailchimp_authentication_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_authentication_api_key))
-- `source_mailchimp_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_authentication_o_auth2_0))
-- `source_mailchimp_update_authentication_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_update_authentication_api_key))
-- `source_mailchimp_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_update_authentication_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_mailchimp_authentication_api_key`
-
-Read-Only:
-
-- `apikey` (String) Mailchimp API Key. See the docs for information on how to generate this key.
-- `auth_type` (String) must be one of ["apikey"]
-
-
-
-### Nested Schema for `configuration.credentials.source_mailchimp_authentication_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) An access token generated using the above client ID and secret.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-
-
-
-### Nested Schema for `configuration.credentials.source_mailchimp_update_authentication_api_key`
-
-Read-Only:
-
-- `apikey` (String) Mailchimp API Key. See the docs for information on how to generate this key.
-- `auth_type` (String) must be one of ["apikey"]
-
-
-
-### Nested Schema for `configuration.credentials.source_mailchimp_update_authentication_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) An access token generated using the above client ID and secret.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-
diff --git a/docs/data-sources/source_mailgun.md b/docs/data-sources/source_mailgun.md
index dfb97bde8..7a66cc04c 100644
--- a/docs/data-sources/source_mailgun.md
+++ b/docs/data-sources/source_mailgun.md
@@ -14,7 +14,6 @@ SourceMailgun DataSource
```terraform
data "airbyte_source_mailgun" "my_source_mailgun" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_mailgun" "my_source_mailgun" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `domain_region` (String) Domain region code. 'EU' or 'US' are possible values. The default is 'US'.
-- `private_key` (String) Primary account API key to access your Mailgun data.
-- `source_type` (String) must be one of ["mailgun"]
-- `start_date` (String) UTC date and time in the format 2020-10-01 00:00:00. Any data before this date will not be replicated. If omitted, defaults to 3 days ago.
-
diff --git a/docs/data-sources/source_mailjet_sms.md b/docs/data-sources/source_mailjet_sms.md
index 826541e2d..9caabb4aa 100644
--- a/docs/data-sources/source_mailjet_sms.md
+++ b/docs/data-sources/source_mailjet_sms.md
@@ -14,7 +14,6 @@ SourceMailjetSms DataSource
```terraform
data "airbyte_source_mailjet_sms" "my_source_mailjetsms" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_mailjet_sms" "my_source_mailjetsms" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `end_date` (Number) Retrieve SMS messages created before the specified timestamp. Required format - Unix timestamp.
-- `source_type` (String) must be one of ["mailjet-sms"]
-- `start_date` (Number) Retrieve SMS messages created after the specified timestamp. Required format - Unix timestamp.
-- `token` (String) Your access token. See here.
-
diff --git a/docs/data-sources/source_marketo.md b/docs/data-sources/source_marketo.md
index 89cea25cd..a71adddc4 100644
--- a/docs/data-sources/source_marketo.md
+++ b/docs/data-sources/source_marketo.md
@@ -14,7 +14,6 @@ SourceMarketo DataSource
```terraform
data "airbyte_source_marketo" "my_source_marketo" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_marketo" "my_source_marketo" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your Marketo developer application. See the docs for info on how to obtain this.
-- `client_secret` (String) The Client Secret of your Marketo developer application. See the docs for info on how to obtain this.
-- `domain_url` (String) Your Marketo Base URL. See the docs for info on how to obtain this.
-- `source_type` (String) must be one of ["marketo"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_metabase.md b/docs/data-sources/source_metabase.md
index ef27be2ed..bc269e410 100644
--- a/docs/data-sources/source_metabase.md
+++ b/docs/data-sources/source_metabase.md
@@ -14,7 +14,6 @@ SourceMetabase DataSource
```terraform
data "airbyte_source_metabase" "my_source_metabase" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,30 +25,12 @@ data "airbyte_source_metabase" "my_source_metabase" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `instance_api_url` (String) URL to your metabase instance API
-- `password` (String)
-- `session_token` (String) To generate your session token, you need to run the following command: ``` curl -X POST \
- -H "Content-Type: application/json" \
- -d '{"username": "person@metabase.com", "password": "fakepassword"}' \
- http://localhost:3000/api/session
-``` Then copy the value of the `id` field returned by a successful call to that API.
-Note that by default, sessions are good for 14 days and needs to be regenerated.
-- `source_type` (String) must be one of ["metabase"]
-- `username` (String)
-
diff --git a/docs/data-sources/source_microsoft_teams.md b/docs/data-sources/source_microsoft_teams.md
index a1e6174a4..546c27d16 100644
--- a/docs/data-sources/source_microsoft_teams.md
+++ b/docs/data-sources/source_microsoft_teams.md
@@ -14,7 +14,6 @@ SourceMicrosoftTeams DataSource
```terraform
data "airbyte_source_microsoft_teams" "my_source_microsoftteams" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,78 +25,12 @@ data "airbyte_source_microsoft_teams" "my_source_microsoftteams" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials))
-- `period` (String) Specifies the length of time over which the Team Device Report stream is aggregated. The supported values are: D7, D30, D90, and D180.
-- `source_type` (String) must be one of ["microsoft-teams"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft))
-- `source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0))
-- `source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft))
-- `source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0))
-
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Token"]
-- `client_id` (String) The Client ID of your Microsoft Teams developer application.
-- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
-- `tenant_id` (String) A globally unique identifier (GUID) that is different than your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID form the URL
-
-
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Microsoft Teams developer application.
-- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
-- `refresh_token` (String) A Refresh Token to renew the expired Access Token.
-- `tenant_id` (String) A globally unique identifier (GUID) that is different than your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID form the URL
-
-
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Token"]
-- `client_id` (String) The Client ID of your Microsoft Teams developer application.
-- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
-- `tenant_id` (String) A globally unique identifier (GUID) that is different than your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID form the URL
-
-
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Microsoft Teams developer application.
-- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
-- `refresh_token` (String) A Refresh Token to renew the expired Access Token.
-- `tenant_id` (String) A globally unique identifier (GUID) that is different than your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID form the URL
-
diff --git a/docs/data-sources/source_mixpanel.md b/docs/data-sources/source_mixpanel.md
index a607eabc2..db2ee2f16 100644
--- a/docs/data-sources/source_mixpanel.md
+++ b/docs/data-sources/source_mixpanel.md
@@ -14,7 +14,6 @@ SourceMixpanel DataSource
```terraform
data "airbyte_source_mixpanel" "my_source_mixpanel" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,78 +25,12 @@ data "airbyte_source_mixpanel" "my_source_mixpanel" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `attribution_window` (Number) A period of time for attributing results to ads and the lookback period after those actions occur during which ad results are counted. Default attribution window is 5 days.
-- `credentials` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials))
-- `date_window_size` (Number) Defines window size in days, that used to slice through data. You can reduce it, if amount of data in each window is too big for your environment.
-- `end_date` (String) The date in the format YYYY-MM-DD. Any data after this date will not be replicated. Left empty to always sync to most recent date
-- `project_id` (Number) Your project ID number. See the docs for more information on how to obtain this.
-- `project_timezone` (String) Time zone in which integer date times are stored. The project timezone may be found in the project settings in the Mixpanel console.
-- `region` (String) must be one of ["US", "EU"]
-The region of mixpanel domain instance either US or EU.
-- `select_properties_by_default` (Boolean) Setting this config parameter to TRUE ensures that new properties on events and engage records are captured. Otherwise new properties will be ignored.
-- `source_type` (String) must be one of ["mixpanel"]
-- `start_date` (String) The date in the format YYYY-MM-DD. Any data before this date will not be replicated. If this option is not set, the connector will replicate data from up to one year ago by default.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_mixpanel_authentication_wildcard_project_secret` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_authentication_wildcard_project_secret))
-- `source_mixpanel_authentication_wildcard_service_account` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_authentication_wildcard_service_account))
-- `source_mixpanel_update_authentication_wildcard_project_secret` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_update_authentication_wildcard_project_secret))
-- `source_mixpanel_update_authentication_wildcard_service_account` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_update_authentication_wildcard_service_account))
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_authentication_wildcard_project_secret`
-
-Read-Only:
-
-- `api_secret` (String) Mixpanel project secret. See the docs for more information on how to obtain this.
-- `option_title` (String) must be one of ["Project Secret"]
-
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_authentication_wildcard_service_account`
-
-Read-Only:
-
-- `option_title` (String) must be one of ["Service Account"]
-- `secret` (String) Mixpanel Service Account Secret. See the docs for more information on how to obtain this.
-- `username` (String) Mixpanel Service Account Username. See the docs for more information on how to obtain this.
-
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_update_authentication_wildcard_project_secret`
-
-Read-Only:
-
-- `api_secret` (String) Mixpanel project secret. See the docs for more information on how to obtain this.
-- `option_title` (String) must be one of ["Project Secret"]
-
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_update_authentication_wildcard_service_account`
-
-Read-Only:
-
-- `option_title` (String) must be one of ["Service Account"]
-- `secret` (String) Mixpanel Service Account Secret. See the docs for more information on how to obtain this.
-- `username` (String) Mixpanel Service Account Username. See the docs for more information on how to obtain this.
-
diff --git a/docs/data-sources/source_monday.md b/docs/data-sources/source_monday.md
index bbc505251..b170ecd6b 100644
--- a/docs/data-sources/source_monday.md
+++ b/docs/data-sources/source_monday.md
@@ -14,7 +14,6 @@ SourceMonday DataSource
```terraform
data "airbyte_source_monday" "my_source_monday" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,73 +25,12 @@ data "airbyte_source_monday" "my_source_monday" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["monday"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_monday_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_authorization_method_api_token))
-- `source_monday_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_authorization_method_o_auth2_0))
-- `source_monday_update_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_update_authorization_method_api_token))
-- `source_monday_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_monday_authorization_method_api_token`
-
-Read-Only:
-
-- `api_token` (String) API Token for making authenticated requests.
-- `auth_type` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_monday_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `subdomain` (String) Slug/subdomain of the account, or the first part of the URL that comes before .monday.com
-
-
-
-### Nested Schema for `configuration.credentials.source_monday_update_authorization_method_api_token`
-
-Read-Only:
-
-- `api_token` (String) API Token for making authenticated requests.
-- `auth_type` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_monday_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `subdomain` (String) Slug/subdomain of the account, or the first part of the URL that comes before .monday.com
-
diff --git a/docs/data-sources/source_mongodb.md b/docs/data-sources/source_mongodb.md
deleted file mode 100644
index 61daff387..000000000
--- a/docs/data-sources/source_mongodb.md
+++ /dev/null
@@ -1,128 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_mongodb Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceMongodb DataSource
----
-
-# airbyte_source_mongodb (Data Source)
-
-SourceMongodb DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_source_mongodb" "my_source_mongodb" {
- secret_id = "...my_secret_id..."
- source_id = "...my_source_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `source_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_source` (String) The authentication source where the user information is stored.
-- `database` (String) The database you want to replicate.
-- `instance_type` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type))
-- `password` (String) The password associated with this username.
-- `source_type` (String) must be one of ["mongodb"]
-- `user` (String) The username which is used to access the database.
-
-
-### Nested Schema for `configuration.instance_type`
-
-Read-Only:
-
-- `source_mongodb_mongo_db_instance_type_mongo_db_atlas` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_mongo_db_instance_type_mongo_db_atlas))
-- `source_mongodb_mongo_db_instance_type_replica_set` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_mongo_db_instance_type_replica_set))
-- `source_mongodb_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_mongo_db_instance_type_standalone_mongo_db_instance))
-- `source_mongodb_update_mongo_db_instance_type_mongo_db_atlas` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_update_mongo_db_instance_type_mongo_db_atlas))
-- `source_mongodb_update_mongo_db_instance_type_replica_set` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_update_mongo_db_instance_type_replica_set))
-- `source_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance))
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_mongo_db_instance_type_mongo_db_atlas`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `cluster_url` (String) The URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_mongo_db_instance_type_replica_set`
-
-Read-Only:
-
-- `instance` (String) must be one of ["replica"]
-- `replica_set` (String) A replica set in MongoDB is a group of mongod processes that maintain the same data set.
-- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member separated by comma.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_mongo_db_instance_type_standalone_mongo_db_instance`
-
-Read-Only:
-
-- `host` (String) The host name of the Mongo database.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The port of the Mongo database.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_update_mongo_db_instance_type_mongo_db_atlas`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `cluster_url` (String) The URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_update_mongo_db_instance_type_replica_set`
-
-Read-Only:
-
-- `instance` (String) must be one of ["replica"]
-- `replica_set` (String) A replica set in MongoDB is a group of mongod processes that maintain the same data set.
-- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member separated by comma.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance`
-
-Read-Only:
-
-- `host` (String) The host name of the Mongo database.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The port of the Mongo database.
-
-
diff --git a/docs/data-sources/source_mongodb_internal_poc.md b/docs/data-sources/source_mongodb_internal_poc.md
index c05ab8a42..11bf95c9e 100644
--- a/docs/data-sources/source_mongodb_internal_poc.md
+++ b/docs/data-sources/source_mongodb_internal_poc.md
@@ -14,7 +14,6 @@ SourceMongodbInternalPoc DataSource
```terraform
data "airbyte_source_mongodb_internal_poc" "my_source_mongodbinternalpoc" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_mongodb_internal_poc" "my_source_mongodbinternalpoc" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_source` (String) The authentication source where the user information is stored.
-- `connection_string` (String) The connection string of the database that you want to replicate..
-- `password` (String) The password associated with this username.
-- `replica_set` (String) The name of the replica set to be replicated.
-- `source_type` (String) must be one of ["mongodb-internal-poc"]
-- `user` (String) The username which is used to access the database.
-
diff --git a/docs/data-sources/source_mongodb_v2.md b/docs/data-sources/source_mongodb_v2.md
new file mode 100644
index 000000000..453a12d49
--- /dev/null
+++ b/docs/data-sources/source_mongodb_v2.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_mongodb_v2 Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceMongodbV2 DataSource
+---
+
+# airbyte_source_mongodb_v2 (Data Source)
+
+SourceMongodbV2 DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_source_mongodb_v2" "my_source_mongodbv2" {
+ source_id = "...my_source_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `source_id` (String)
+
+### Read-Only
+
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
+- `name` (String)
+- `source_type` (String)
+- `workspace_id` (String)
+
+
diff --git a/docs/data-sources/source_mssql.md b/docs/data-sources/source_mssql.md
index ded4cb2bc..13a783502 100644
--- a/docs/data-sources/source_mssql.md
+++ b/docs/data-sources/source_mssql.md
@@ -14,7 +14,6 @@ SourceMssql DataSource
```terraform
data "airbyte_source_mssql" "my_source_mssql" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,210 +25,12 @@ data "airbyte_source_mssql" "my_source_mssql" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) The name of the database.
-- `host` (String) The hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with the username.
-- `port` (Number) The port of the database.
-- `replication_method` (Attributes) Configures how data is extracted from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
-- `schemas` (List of String) The list of schemas to sync from. Defaults to user. Case sensitive.
-- `source_type` (String) must be one of ["mssql"]
-- `ssl_method` (Attributes) The encryption method which is used when communicating with the database. (see [below for nested schema](#nestedatt--configuration--ssl_method))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) The username which is used to access the database.
-
-
-### Nested Schema for `configuration.replication_method`
-
-Read-Only:
-
-- `source_mssql_update_method_read_changes_using_change_data_capture_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the SQL Server's change data capture feature. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_method_read_changes_using_change_data_capture_cdc))
-- `source_mssql_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_method_scan_changes_with_user_defined_cursor))
-- `source_mssql_update_update_method_read_changes_using_change_data_capture_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the SQL Server's change data capture feature. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_update_method_read_changes_using_change_data_capture_cdc))
-- `source_mssql_update_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_update_method_scan_changes_with_user_defined_cursor))
-
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_method_read_changes_using_change_data_capture_cdc`
-
-Read-Only:
-
-- `data_to_sync` (String) must be one of ["Existing and New", "New Changes Only"]
-What data should be synced under the CDC. "Existing and New" will read existing data as a snapshot, and sync new changes through CDC. "New Changes Only" will skip the initial snapshot, and only sync new changes through CDC.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `method` (String) must be one of ["CDC"]
-- `snapshot_isolation` (String) must be one of ["Snapshot", "Read Committed"]
-Existing data in the database are synced through an initial snapshot. This parameter controls the isolation level that will be used during the initial snapshotting. If you choose the "Snapshot" level, you must enable the snapshot isolation mode on the database.
-
-
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_method_scan_changes_with_user_defined_cursor`
-
-Read-Only:
-
-- `method` (String) must be one of ["STANDARD"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_update_method_read_changes_using_change_data_capture_cdc`
-
-Read-Only:
-
-- `data_to_sync` (String) must be one of ["Existing and New", "New Changes Only"]
-What data should be synced under the CDC. "Existing and New" will read existing data as a snapshot, and sync new changes through CDC. "New Changes Only" will skip the initial snapshot, and only sync new changes through CDC.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `method` (String) must be one of ["CDC"]
-- `snapshot_isolation` (String) must be one of ["Snapshot", "Read Committed"]
-Existing data in the database are synced through an initial snapshot. This parameter controls the isolation level that will be used during the initial snapshotting. If you choose the "Snapshot" level, you must enable the snapshot isolation mode on the database.
-
-
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_update_method_scan_changes_with_user_defined_cursor`
-
-Read-Only:
-
-- `method` (String) must be one of ["STANDARD"]
-
-
-
-
-### Nested Schema for `configuration.ssl_method`
-
-Read-Only:
-
-- `source_mssql_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_ssl_method_encrypted_trust_server_certificate))
-- `source_mssql_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_ssl_method_encrypted_verify_certificate))
-- `source_mssql_update_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_update_ssl_method_encrypted_trust_server_certificate))
-- `source_mssql_update_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_update_ssl_method_encrypted_verify_certificate))
-
-
-### Nested Schema for `configuration.ssl_method.source_mssql_ssl_method_encrypted_trust_server_certificate`
-
-Read-Only:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.source_mssql_ssl_method_encrypted_verify_certificate`
-
-Read-Only:
-
-- `host_name_in_certificate` (String) Specifies the host name of the server. The value of this property must match the subject property of the certificate.
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.source_mssql_update_ssl_method_encrypted_trust_server_certificate`
-
-Read-Only:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.source_mssql_update_ssl_method_encrypted_verify_certificate`
-
-Read-Only:
-
-- `host_name_in_certificate` (String) Specifies the host name of the server. The value of this property must match the subject property of the certificate.
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `source_mssql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_ssh_tunnel_method_no_tunnel))
-- `source_mssql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_ssh_tunnel_method_password_authentication))
-- `source_mssql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_ssh_tunnel_method_ssh_key_authentication))
-- `source_mssql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_update_ssh_tunnel_method_no_tunnel))
-- `source_mssql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_update_ssh_tunnel_method_password_authentication))
-- `source_mssql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/source_my_hours.md b/docs/data-sources/source_my_hours.md
index 6ea4adaff..2a13e704d 100644
--- a/docs/data-sources/source_my_hours.md
+++ b/docs/data-sources/source_my_hours.md
@@ -14,7 +14,6 @@ SourceMyHours DataSource
```terraform
data "airbyte_source_my_hours" "my_source_myhours" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_my_hours" "my_source_myhours" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `email` (String) Your My Hours username
-- `logs_batch_size` (Number) Pagination size, in days, used for retrieving logs
-- `password` (String) The password associated with the username
-- `source_type` (String) must be one of ["my-hours"]
-- `start_date` (String) Start date for collecting time logs
-
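With this change, `configuration` is a JSON-encoded string rather than a set of nested attributes, so downstream Terraform code must decode it before reading individual fields. A minimal sketch, assuming the decoded payload still carries the field names from the removed nested schema (e.g. `start_date` is hypothetical here):

```terraform
data "airbyte_source_my_hours" "my_source_myhours" {
  source_id = "...my_source_id..."
}

locals {
  # Decode the JSON-encoded configuration string into a Terraform object.
  my_hours_config = jsondecode(data.airbyte_source_my_hours.my_source_myhours.configuration)
}

output "my_hours_start_date" {
  # `start_date` is assumed to match the former nested-schema field name.
  value = local.my_hours_config.start_date
}
```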
diff --git a/docs/data-sources/source_mysql.md b/docs/data-sources/source_mysql.md
index 3f367ecab..cf0600aa0 100644
--- a/docs/data-sources/source_mysql.md
+++ b/docs/data-sources/source_mysql.md
@@ -14,7 +14,6 @@ SourceMysql DataSource
```terraform
data "airbyte_source_mysql" "my_source_mysql" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,253 +25,12 @@ data "airbyte_source_mysql" "my_source_mysql" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) The database name.
-- `host` (String) The host name of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3). For more information read about JDBC URL parameters.
-- `password` (String) The password associated with the username.
-- `port` (Number) The port to connect to.
-- `replication_method` (Attributes) Configures how data is extracted from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
-- `source_type` (String) must be one of ["mysql"]
-- `ssl_mode` (Attributes) SSL connection modes. Read more in the docs. (see [below for nested schema](#nestedatt--configuration--ssl_mode))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) The username which is used to access the database.
-
-
-### Nested Schema for `configuration.replication_method`
-
-Read-Only:
-
-- `source_mysql_update_method_read_changes_using_binary_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the MySQL binary log. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_method_read_changes_using_binary_log_cdc))
-- `source_mysql_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_method_scan_changes_with_user_defined_cursor))
-- `source_mysql_update_update_method_read_changes_using_binary_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the MySQL binary log. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_update_method_read_changes_using_binary_log_cdc))
-- `source_mysql_update_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_update_method_scan_changes_with_user_defined_cursor))
-
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_method_read_changes_using_binary_log_cdc`
-
-Read-Only:
-
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `method` (String) must be one of ["CDC"]
-- `server_time_zone` (String) Enter the configured MySQL server timezone. This should only be done if the configured timezone in your MySQL instance does not conform to the IANA standard.
-
-
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_method_scan_changes_with_user_defined_cursor`
-
-Read-Only:
-
-- `method` (String) must be one of ["STANDARD"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_update_method_read_changes_using_binary_log_cdc`
-
-Read-Only:
-
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `method` (String) must be one of ["CDC"]
-- `server_time_zone` (String) Enter the configured MySQL server timezone. This should only be done if the configured timezone in your MySQL instance does not conform to the IANA standard.
-
-
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_update_method_scan_changes_with_user_defined_cursor`
-
-Read-Only:
-
-- `method` (String) must be one of ["STANDARD"]
-
-
-
-
-### Nested Schema for `configuration.ssl_mode`
-
-Read-Only:
-
-- `source_mysql_ssl_modes_preferred` (Attributes) Automatically attempt SSL connection. If the MySQL server does not support SSL, continue with a regular connection. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_preferred))
-- `source_mysql_ssl_modes_required` (Attributes) Always connect with SSL. If the MySQL server doesn’t support SSL, the connection will not be established. Certificate Authority (CA) and Hostname are not verified. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_required))
-- `source_mysql_ssl_modes_verify_ca` (Attributes) Always connect with SSL. Verifies CA, but allows connection even if Hostname does not match. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_verify_ca))
-- `source_mysql_ssl_modes_verify_identity` (Attributes) Always connect with SSL. Verify both CA and Hostname. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_verify_identity))
-- `source_mysql_update_ssl_modes_preferred` (Attributes) Automatically attempt SSL connection. If the MySQL server does not support SSL, continue with a regular connection. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_preferred))
-- `source_mysql_update_ssl_modes_required` (Attributes) Always connect with SSL. If the MySQL server doesn’t support SSL, the connection will not be established. Certificate Authority (CA) and Hostname are not verified. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_required))
-- `source_mysql_update_ssl_modes_verify_ca` (Attributes) Always connect with SSL. Verifies CA, but allows connection even if Hostname does not match. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_verify_ca))
-- `source_mysql_update_ssl_modes_verify_identity` (Attributes) Always connect with SSL. Verify both CA and Hostname. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_verify_identity))
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_preferred`
-
-Read-Only:
-
-- `mode` (String) must be one of ["preferred"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_required`
-
-Read-Only:
-
-- `mode` (String) must be one of ["required"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_verify_ca`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for key storage. This field is optional; if you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify_ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_verify_identity`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for key storage. This field is optional; if you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify_identity"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_preferred`
-
-Read-Only:
-
-- `mode` (String) must be one of ["preferred"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_required`
-
-Read-Only:
-
-- `mode` (String) must be one of ["required"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_verify_ca`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for key storage. This field is optional; if you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify_ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_verify_identity`
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for key storage. This field is optional; if you do not provide one, a password will be generated automatically.
-- `mode` (String) must be one of ["verify_identity"]
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `source_mysql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_ssh_tunnel_method_no_tunnel))
-- `source_mysql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_ssh_tunnel_method_password_authentication))
-- `source_mysql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_ssh_tunnel_method_ssh_key_authentication))
-- `source_mysql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_update_ssh_tunnel_method_no_tunnel))
-- `source_mysql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_update_ssh_tunnel_method_password_authentication))
-- `source_mysql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
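Nested option blocks such as `replication_method` are likewise folded into the JSON string. A hedged sketch of reading one of them defensively with `try()`, since this diff does not guarantee the exact payload shape:

```terraform
locals {
  mysql_config = jsondecode(data.airbyte_source_mysql.my_source_mysql.configuration)

  # `replication_method.method` mirrors the removed nested schema; try()
  # falls back to "STANDARD" if the field is absent from the payload.
  mysql_replication = try(local.mysql_config.replication_method.method, "STANDARD")
}
```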
diff --git a/docs/data-sources/source_netsuite.md b/docs/data-sources/source_netsuite.md
index a472df8de..6b735119a 100644
--- a/docs/data-sources/source_netsuite.md
+++ b/docs/data-sources/source_netsuite.md
@@ -14,7 +14,6 @@ SourceNetsuite DataSource
```terraform
data "airbyte_source_netsuite" "my_source_netsuite" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,29 +25,12 @@ data "airbyte_source_netsuite" "my_source_netsuite" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `consumer_key` (String) Consumer key associated with your integration
-- `consumer_secret` (String) Consumer secret associated with your integration
-- `object_types` (List of String) The API names of the Netsuite objects you want to sync. Setting this speeds up the connection setup process by limiting the number of schemas that need to be retrieved from Netsuite.
-- `realm` (String) Netsuite realm, e.g. 2344535 for `production` or 2344535_SB1 for the `sandbox`
-- `source_type` (String) must be one of ["netsuite"]
-- `start_datetime` (String) Starting point for your data replication, in the format "YYYY-MM-DDTHH:mm:ssZ"
-- `token_key` (String) Access token key
-- `token_secret` (String) Access token secret
-- `window_in_days` (Number) The number of days used to query the data in date chunks. Set a smaller value if you have lots of data.
-
diff --git a/docs/data-sources/source_notion.md b/docs/data-sources/source_notion.md
index cc7539cd2..b07d51619 100644
--- a/docs/data-sources/source_notion.md
+++ b/docs/data-sources/source_notion.md
@@ -14,7 +14,6 @@ SourceNotion DataSource
```terraform
data "airbyte_source_notion" "my_source_notion" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,72 +25,12 @@ data "airbyte_source_notion" "my_source_notion" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["notion"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00.000Z. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_notion_authenticate_using_access_token` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_authenticate_using_access_token))
-- `source_notion_authenticate_using_o_auth2_0` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_authenticate_using_o_auth2_0))
-- `source_notion_update_authenticate_using_access_token` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_update_authenticate_using_access_token))
-- `source_notion_update_authenticate_using_o_auth2_0` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_update_authenticate_using_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_notion_authenticate_using_access_token`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["token"]
-- `token` (String) Notion API access token, see the docs for more information on how to obtain this token.
-
-
-
-### Nested Schema for `configuration.credentials.source_notion_authenticate_using_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) The access token you received by completing Notion's OAuth web flow.
-- `auth_type` (String) must be one of ["OAuth2.0"]
-- `client_id` (String) The ClientID of your Notion integration.
-- `client_secret` (String) The ClientSecret of your Notion integration.
-
-
-
-### Nested Schema for `configuration.credentials.source_notion_update_authenticate_using_access_token`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["token"]
-- `token` (String) Notion API access token, see the docs for more information on how to obtain this token.
-
-
-
-### Nested Schema for `configuration.credentials.source_notion_update_authenticate_using_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) The access token you received by completing Notion's OAuth web flow.
-- `auth_type` (String) must be one of ["OAuth2.0"]
-- `client_id` (String) The ClientID of your Notion integration.
-- `client_secret` (String) The ClientSecret of your Notion integration.
-
diff --git a/docs/data-sources/source_nytimes.md b/docs/data-sources/source_nytimes.md
index 417c9f015..4f7e7b6da 100644
--- a/docs/data-sources/source_nytimes.md
+++ b/docs/data-sources/source_nytimes.md
@@ -14,7 +14,6 @@ SourceNytimes DataSource
```terraform
data "airbyte_source_nytimes" "my_source_nytimes" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,28 +25,12 @@ data "airbyte_source_nytimes" "my_source_nytimes" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `end_date` (String) End date to stop the article retrieval (format YYYY-MM)
-- `period` (Number) must be one of ["1", "7", "30"]
-Period of time (in days)
-- `share_type` (String) must be one of ["facebook"]
-Share Type
-- `source_type` (String) must be one of ["nytimes"]
-- `start_date` (String) Start date to begin the article retrieval (format YYYY-MM)
-
diff --git a/docs/data-sources/source_okta.md b/docs/data-sources/source_okta.md
index d0c4b8961..77498a19d 100644
--- a/docs/data-sources/source_okta.md
+++ b/docs/data-sources/source_okta.md
@@ -14,7 +14,6 @@ SourceOkta DataSource
```terraform
data "airbyte_source_okta" "my_source_okta" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,73 +25,12 @@ data "airbyte_source_okta" "my_source_okta" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `domain` (String) The Okta domain. See the docs for instructions on how to find it.
-- `source_type` (String) must be one of ["okta"]
-- `start_date` (String) UTC date and time in the format YYYY-MM-DDTHH:MM:SSZ. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_okta_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_authorization_method_api_token))
-- `source_okta_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_authorization_method_o_auth2_0))
-- `source_okta_update_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_update_authorization_method_api_token))
-- `source_okta_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_okta_authorization_method_api_token`
-
-Read-Only:
-
-- `api_token` (String) An Okta token. See the docs for instructions on how to generate it.
-- `auth_type` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_okta_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
-
-
-### Nested Schema for `configuration.credentials.source_okta_update_authorization_method_api_token`
-
-Read-Only:
-
-- `api_token` (String) An Okta token. See the docs for instructions on how to generate it.
-- `auth_type` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_okta_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
diff --git a/docs/data-sources/source_omnisend.md b/docs/data-sources/source_omnisend.md
index f584043cc..390b92ab5 100644
--- a/docs/data-sources/source_omnisend.md
+++ b/docs/data-sources/source_omnisend.md
@@ -14,7 +14,6 @@ SourceOmnisend DataSource
```terraform
data "airbyte_source_omnisend" "my_source_omnisend" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_omnisend" "my_source_omnisend" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["omnisend"]
-
diff --git a/docs/data-sources/source_onesignal.md b/docs/data-sources/source_onesignal.md
index b45b15d65..5af01e8fe 100644
--- a/docs/data-sources/source_onesignal.md
+++ b/docs/data-sources/source_onesignal.md
@@ -14,7 +14,6 @@ SourceOnesignal DataSource
```terraform
data "airbyte_source_onesignal" "my_source_onesignal" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,34 +25,12 @@ data "airbyte_source_onesignal" "my_source_onesignal" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `applications` (Attributes List) Applications keys, see the docs for more information on how to obtain this data (see [below for nested schema](#nestedatt--configuration--applications))
-- `outcome_names` (String) Comma-separated list of names and the value (sum/count) for the returned outcome data. See the docs for more details
-- `source_type` (String) must be one of ["onesignal"]
-- `start_date` (String) The date from which you'd like to replicate data for OneSignal API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-- `user_auth_key` (String) OneSignal User Auth Key, see the docs for more information on how to obtain this key.
-
-
-### Nested Schema for `configuration.applications`
-
-Read-Only:
-
-- `app_api_key` (String)
-- `app_id` (String)
-- `app_name` (String)
-
diff --git a/docs/data-sources/source_oracle.md b/docs/data-sources/source_oracle.md
index 639edae99..031f1cef1 100644
--- a/docs/data-sources/source_oracle.md
+++ b/docs/data-sources/source_oracle.md
@@ -14,7 +14,6 @@ SourceOracle DataSource
```terraform
data "airbyte_source_oracle" "my_source_oracle" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,210 +25,12 @@ data "airbyte_source_oracle" "my_source_oracle" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `connection_data` (Attributes) Connect data that will be used for DB connection (see [below for nested schema](#nestedatt--configuration--connection_data))
-- `encryption` (Attributes) The encryption method with is used when communicating with the database. (see [below for nested schema](#nestedatt--configuration--encryption))
-- `host` (String) Hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with the username.
-- `port` (Number) Port of the database.
-Oracle Corporations recommends the following port numbers:
-1521 - Default listening port for client connections to the listener.
-2484 - Recommended and officially registered listening port for client connections to the listener using TCP/IP with SSL
-- `schemas` (List of String) The list of schemas to sync from. Defaults to user. Case sensitive.
-- `source_type` (String) must be one of ["oracle"]
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) The username which is used to access the database.
-
-
-### Nested Schema for `configuration.connection_data`
-
-Read-Only:
-
-- `source_oracle_connect_by_service_name` (Attributes) Use service name (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_connect_by_service_name))
-- `source_oracle_connect_by_system_id_sid` (Attributes) Use SID (Oracle System Identifier) (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_connect_by_system_id_sid))
-- `source_oracle_update_connect_by_service_name` (Attributes) Use service name (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_update_connect_by_service_name))
-- `source_oracle_update_connect_by_system_id_sid` (Attributes) Use SID (Oracle System Identifier) (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_update_connect_by_system_id_sid))
-
-
-### Nested Schema for `configuration.connection_data.source_oracle_connect_by_service_name`
-
-Read-Only:
-
-- `connection_type` (String) must be one of ["service_name"]
-- `service_name` (String)
-
-
-
-### Nested Schema for `configuration.connection_data.source_oracle_connect_by_system_id_sid`
-
-Read-Only:
-
-- `connection_type` (String) must be one of ["sid"]
-- `sid` (String)
-
-
-
-### Nested Schema for `configuration.connection_data.source_oracle_update_connect_by_service_name`
-
-Read-Only:
-
-- `connection_type` (String) must be one of ["service_name"]
-- `service_name` (String)
-
-
-
-### Nested Schema for `configuration.connection_data.source_oracle_update_connect_by_system_id_sid`
-
-Read-Only:
-
-- `connection_type` (String) must be one of ["sid"]
-- `sid` (String)
-
-
-
-
-### Nested Schema for `configuration.encryption`
-
-Read-Only:
-
-- `source_oracle_encryption_native_network_encryption_nne` (Attributes) The native network encryption gives you the ability to encrypt database connections, without the configuration overhead of TCP/IP and SSL/TLS and without the need to open and listen on different ports. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_encryption_native_network_encryption_nne))
-- `source_oracle_encryption_tls_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_encryption_tls_encrypted_verify_certificate))
-- `source_oracle_update_encryption_native_network_encryption_nne` (Attributes) The native network encryption gives you the ability to encrypt database connections, without the configuration overhead of TCP/IP and SSL/TLS and without the need to open and listen on different ports. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_update_encryption_native_network_encryption_nne))
-- `source_oracle_update_encryption_tls_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_update_encryption_tls_encrypted_verify_certificate))
-
-
-### Nested Schema for `configuration.encryption.source_oracle_encryption_native_network_encryption_nne`
-
-Read-Only:
-
-- `encryption_algorithm` (String) must be one of ["AES256", "RC4_56", "3DES168"]
-This parameter defines what encryption algorithm is used.
-- `encryption_method` (String) must be one of ["client_nne"]
-
-
-
-### Nested Schema for `configuration.encryption.source_oracle_encryption_tls_encrypted_verify_certificate`
-
-Read-Only:
-
-- `encryption_method` (String) must be one of ["encrypted_verify_certificate"]
-- `ssl_certificate` (String) Privacy Enhanced Mail (PEM) files are concatenated certificate containers frequently used in certificate installations.
-
-
-
-### Nested Schema for `configuration.encryption.source_oracle_update_encryption_native_network_encryption_nne`
-
-Read-Only:
-
-- `encryption_algorithm` (String) must be one of ["AES256", "RC4_56", "3DES168"]
-This parameter defines what encryption algorithm is used.
-- `encryption_method` (String) must be one of ["client_nne"]
-
-
-
-### Nested Schema for `configuration.encryption.source_oracle_update_encryption_tls_encrypted_verify_certificate`
-
-Read-Only:
-
-- `encryption_method` (String) must be one of ["encrypted_verify_certificate"]
-- `ssl_certificate` (String) Privacy Enhanced Mail (PEM) files are concatenated certificate containers frequently used in certificate installations.
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `source_oracle_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_ssh_tunnel_method_no_tunnel))
-- `source_oracle_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_ssh_tunnel_method_password_authentication))
-- `source_oracle_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_ssh_tunnel_method_ssh_key_authentication))
-- `source_oracle_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_update_ssh_tunnel_method_no_tunnel))
-- `source_oracle_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_update_ssh_tunnel_method_password_authentication))
-- `source_oracle_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/source_orb.md b/docs/data-sources/source_orb.md
index 9e5783179..a1201af20 100644
--- a/docs/data-sources/source_orb.md
+++ b/docs/data-sources/source_orb.md
@@ -14,7 +14,6 @@ SourceOrb DataSource
```terraform
data "airbyte_source_orb" "my_source_orb" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,28 +25,12 @@ data "airbyte_source_orb" "my_source_orb" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Orb API Key, issued from the Orb admin console.
-- `lookback_window_days` (Number) When set to N, the connector will always refresh resources created within the past N days. By default, updated objects that are not newly created are not incrementally synced.
-- `numeric_event_properties_keys` (List of String) Property key names to extract from all events, in order to enrich ledger entries corresponding to an event deduction.
-- `plan_id` (String) Orb Plan ID to filter subscriptions that should have usage fetched.
-- `source_type` (String) must be one of ["orb"]
-- `start_date` (String) UTC date and time in the format 2022-03-01T00:00:00Z. Any data with created_at before this data will not be synced. For Subscription Usage, this becomes the `timeframe_start` API parameter.
-- `string_event_properties_keys` (List of String) Property key names to extract from all events, in order to enrich ledger entries corresponding to an event deduction.
-- `subscription_usage_grouping_key` (String) Property key name to group subscription usage by.
-
diff --git a/docs/data-sources/source_orbit.md b/docs/data-sources/source_orbit.md
index 9fa7ac3d1..db43f656a 100644
--- a/docs/data-sources/source_orbit.md
+++ b/docs/data-sources/source_orbit.md
@@ -14,7 +14,6 @@ SourceOrbit DataSource
```terraform
data "airbyte_source_orbit" "my_source_orbit" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_orbit" "my_source_orbit" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Authorizes you to work with Orbit workspaces associated with the token.
-- `source_type` (String) must be one of ["orbit"]
-- `start_date` (String) Date in the format 2022-06-26. Only load members whose last activities are after this date.
-- `workspace` (String) The unique name of the workspace that your API token is associated with.
-
diff --git a/docs/data-sources/source_outbrain_amplify.md b/docs/data-sources/source_outbrain_amplify.md
index b65430536..9f1843705 100644
--- a/docs/data-sources/source_outbrain_amplify.md
+++ b/docs/data-sources/source_outbrain_amplify.md
@@ -14,7 +14,6 @@ SourceOutbrainAmplify DataSource
```terraform
data "airbyte_source_outbrain_amplify" "my_source_outbrainamplify" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,75 +25,12 @@ data "airbyte_source_outbrain_amplify" "my_source_outbrainamplify" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Credentials for making authenticated requests requires either username/password or access_token. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `end_date` (String) Date in the format YYYY-MM-DD.
-- `geo_location_breakdown` (String) must be one of ["country", "region", "subregion"]
-The granularity used for geo location data in reports.
-- `report_granularity` (String) must be one of ["daily", "weekly", "monthly"]
-The granularity used for periodic data in reports. See the docs.
-- `source_type` (String) must be one of ["outbrain-amplify"]
-- `start_date` (String) Date in the format YYYY-MM-DD eg. 2017-01-25. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_outbrain_amplify_authentication_method_access_token` (Attributes) Credentials for making authenticated requests requires either username/password or access_token. (see [below for nested schema](#nestedatt--configuration--credentials--source_outbrain_amplify_authentication_method_access_token))
-- `source_outbrain_amplify_authentication_method_username_password` (Attributes) Credentials for making authenticated requests requires either username/password or access_token. (see [below for nested schema](#nestedatt--configuration--credentials--source_outbrain_amplify_authentication_method_username_password))
-- `source_outbrain_amplify_update_authentication_method_access_token` (Attributes) Credentials for making authenticated requests requires either username/password or access_token. (see [below for nested schema](#nestedatt--configuration--credentials--source_outbrain_amplify_update_authentication_method_access_token))
-- `source_outbrain_amplify_update_authentication_method_username_password` (Attributes) Credentials for making authenticated requests requires either username/password or access_token. (see [below for nested schema](#nestedatt--configuration--credentials--source_outbrain_amplify_update_authentication_method_username_password))
-
-
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_authentication_method_access_token`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_authentication_method_username_password`
-
-Read-Only:
-
-- `password` (String) Add Password for authentication.
-- `type` (String) must be one of ["username_password"]
-- `username` (String) Add Username for authentication.
-
-
-
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_update_authentication_method_access_token`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_update_authentication_method_username_password`
-
-Read-Only:
-
-- `password` (String) Add Password for authentication.
-- `type` (String) must be one of ["username_password"]
-- `username` (String) Add Username for authentication.
-
diff --git a/docs/data-sources/source_outreach.md b/docs/data-sources/source_outreach.md
index 59d661161..b0a493a85 100644
--- a/docs/data-sources/source_outreach.md
+++ b/docs/data-sources/source_outreach.md
@@ -14,7 +14,6 @@ SourceOutreach DataSource
```terraform
data "airbyte_source_outreach" "my_source_outreach" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_outreach" "my_source_outreach" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your Outreach developer application.
-- `client_secret` (String) The Client Secret of your Outreach developer application.
-- `redirect_uri` (String) A Redirect URI is the location where the authorization server sends the user once the app has been successfully authorized and granted an authorization code or access token.
-- `refresh_token` (String) The token for obtaining the new access token.
-- `source_type` (String) must be one of ["outreach"]
-- `start_date` (String) The date from which you'd like to replicate data for Outreach API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-
diff --git a/docs/data-sources/source_paypal_transaction.md b/docs/data-sources/source_paypal_transaction.md
index cffbcf512..efd8b2226 100644
--- a/docs/data-sources/source_paypal_transaction.md
+++ b/docs/data-sources/source_paypal_transaction.md
@@ -14,7 +14,6 @@ SourcePaypalTransaction DataSource
```terraform
data "airbyte_source_paypal_transaction" "my_source_paypaltransaction" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_paypal_transaction" "my_source_paypaltransaction" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your Paypal developer application.
-- `client_secret` (String) The Client Secret of your Paypal developer application.
-- `is_sandbox` (Boolean) Determines whether to use the sandbox or production environment.
-- `refresh_token` (String) The key to refresh the expired access token.
-- `source_type` (String) must be one of ["paypal-transaction"]
-- `start_date` (String) Start Date for data extraction in ISO format. Date must be in range from 3 years till 12 hrs before present time.
-
diff --git a/docs/data-sources/source_paystack.md b/docs/data-sources/source_paystack.md
index 117f68e3a..3779820bd 100644
--- a/docs/data-sources/source_paystack.md
+++ b/docs/data-sources/source_paystack.md
@@ -14,7 +14,6 @@ SourcePaystack DataSource
```terraform
data "airbyte_source_paystack" "my_source_paystack" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_paystack" "my_source_paystack" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `lookback_window_days` (Number) When set, the connector will always reload data from the past N days, where N is the value set here. This is useful if your data is updated after creation.
-- `secret_key` (String) The Paystack API key (usually starts with 'sk_live_'; find yours here).
-- `source_type` (String) must be one of ["paystack"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_pendo.md b/docs/data-sources/source_pendo.md
index c6f4dc766..ab975dbb9 100644
--- a/docs/data-sources/source_pendo.md
+++ b/docs/data-sources/source_pendo.md
@@ -14,7 +14,6 @@ SourcePendo DataSource
```terraform
data "airbyte_source_pendo" "my_source_pendo" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_pendo" "my_source_pendo" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String)
-- `source_type` (String) must be one of ["pendo"]
-
diff --git a/docs/data-sources/source_persistiq.md b/docs/data-sources/source_persistiq.md
index ddcc19899..f738cea0c 100644
--- a/docs/data-sources/source_persistiq.md
+++ b/docs/data-sources/source_persistiq.md
@@ -14,7 +14,6 @@ SourcePersistiq DataSource
```terraform
data "airbyte_source_persistiq" "my_source_persistiq" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_persistiq" "my_source_persistiq" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) PersistIq API Key. See the docs for more information on where to find that key.
-- `source_type` (String) must be one of ["persistiq"]
-
diff --git a/docs/data-sources/source_pexels_api.md b/docs/data-sources/source_pexels_api.md
index d1b8bb290..aebc7a2d9 100644
--- a/docs/data-sources/source_pexels_api.md
+++ b/docs/data-sources/source_pexels_api.md
@@ -14,7 +14,6 @@ SourcePexelsAPI DataSource
```terraform
data "airbyte_source_pexels_api" "my_source_pexelsapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_pexels_api" "my_source_pexelsapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API key is required to access pexels api, For getting your's goto https://www.pexels.com/api/documentation and create account for free.
-- `color` (String) Optional, Desired photo color. Supported colors red, orange, yellow, green, turquoise, blue, violet, pink, brown, black, gray, white or any hexidecimal color code.
-- `locale` (String) Optional, The locale of the search you are performing. The current supported locales are 'en-US' 'pt-BR' 'es-ES' 'ca-ES' 'de-DE' 'it-IT' 'fr-FR' 'sv-SE' 'id-ID' 'pl-PL' 'ja-JP' 'zh-TW' 'zh-CN' 'ko-KR' 'th-TH' 'nl-NL' 'hu-HU' 'vi-VN' 'cs-CZ' 'da-DK' 'fi-FI' 'uk-UA' 'el-GR' 'ro-RO' 'nb-NO' 'sk-SK' 'tr-TR' 'ru-RU'.
-- `orientation` (String) Optional, Desired photo orientation. The current supported orientations are landscape, portrait or square
-- `query` (String) Optional, the search query, Example Ocean, Tigers, Pears, etc.
-- `size` (String) Optional, Minimum photo size. The current supported sizes are large(24MP), medium(12MP) or small(4MP).
-- `source_type` (String) must be one of ["pexels-api"]
-
diff --git a/docs/data-sources/source_pinterest.md b/docs/data-sources/source_pinterest.md
index 31f2e026c..8d9f7ab72 100644
--- a/docs/data-sources/source_pinterest.md
+++ b/docs/data-sources/source_pinterest.md
@@ -14,7 +14,6 @@ SourcePinterest DataSource
```terraform
data "airbyte_source_pinterest" "my_source_pinterest" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,73 +25,12 @@ data "airbyte_source_pinterest" "my_source_pinterest" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["pinterest"]
-- `start_date` (String) A date in the format YYYY-MM-DD. If you have not set a date, it would be defaulted to latest allowed date by api (89 days from today).
-- `status` (List of String) Entity statuses based off of campaigns, ad_groups, and ads. If you do not have a status set, it will be ignored completely.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_pinterest_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_authorization_method_access_token))
-- `source_pinterest_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_authorization_method_o_auth2_0))
-- `source_pinterest_update_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_update_authorization_method_access_token))
-- `source_pinterest_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_pinterest_authorization_method_access_token`
-
-Read-Only:
-
-- `access_token` (String) The Access Token to make authenticated requests.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_pinterest_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
-
-
-### Nested Schema for `configuration.credentials.source_pinterest_update_authorization_method_access_token`
-
-Read-Only:
-
-- `access_token` (String) The Access Token to make authenticated requests.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_pinterest_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
diff --git a/docs/data-sources/source_pipedrive.md b/docs/data-sources/source_pipedrive.md
index 0ebebedbd..f4539f829 100644
--- a/docs/data-sources/source_pipedrive.md
+++ b/docs/data-sources/source_pipedrive.md
@@ -14,7 +14,6 @@ SourcePipedrive DataSource
```terraform
data "airbyte_source_pipedrive" "my_source_pipedrive" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,31 +25,12 @@ data "airbyte_source_pipedrive" "my_source_pipedrive" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `authorization` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization))
-- `replication_start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated. When specified and not None, then stream will behave as incremental
-- `source_type` (String) must be one of ["pipedrive"]
-
-
-### Nested Schema for `configuration.authorization`
-
-Read-Only:
-
-- `api_token` (String) The Pipedrive API Token.
-- `auth_type` (String) must be one of ["Token"]
-
diff --git a/docs/data-sources/source_pocket.md b/docs/data-sources/source_pocket.md
index c34b1f9eb..d054132d3 100644
--- a/docs/data-sources/source_pocket.md
+++ b/docs/data-sources/source_pocket.md
@@ -14,7 +14,6 @@ SourcePocket DataSource
```terraform
data "airbyte_source_pocket" "my_source_pocket" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,36 +25,12 @@ data "airbyte_source_pocket" "my_source_pocket" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) The user's Pocket access token.
-- `consumer_key` (String) Your application's Consumer Key.
-- `content_type` (String) must be one of ["article", "video", "image"]
-Select the content type of the items to retrieve.
-- `detail_type` (String) must be one of ["simple", "complete"]
-Select the granularity of the information about each item.
-- `domain` (String) Only return items from a particular `domain`.
-- `favorite` (Boolean) Retrieve only favorited items.
-- `search` (String) Only return items whose title or url contain the `search` string.
-- `since` (String) Only return items modified since the given timestamp.
-- `sort` (String) must be one of ["newest", "oldest", "title", "site"]
-Sort retrieved items by the given criteria.
-- `source_type` (String) must be one of ["pocket"]
-- `state` (String) must be one of ["unread", "archive", "all"]
-Select the state of the items to retrieve.
-- `tag` (String) Return only items tagged with this tag name. Use _untagged_ for retrieving only untagged items.
-
diff --git a/docs/data-sources/source_pokeapi.md b/docs/data-sources/source_pokeapi.md
index 99da2a445..aff4a4239 100644
--- a/docs/data-sources/source_pokeapi.md
+++ b/docs/data-sources/source_pokeapi.md
@@ -14,7 +14,6 @@ SourcePokeapi DataSource
```terraform
data "airbyte_source_pokeapi" "my_source_pokeapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_pokeapi" "my_source_pokeapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `pokemon_name` (String) Pokemon requested from the API.
-- `source_type` (String) must be one of ["pokeapi"]
-
diff --git a/docs/data-sources/source_polygon_stock_api.md b/docs/data-sources/source_polygon_stock_api.md
index a9ef45744..80af1f707 100644
--- a/docs/data-sources/source_polygon_stock_api.md
+++ b/docs/data-sources/source_polygon_stock_api.md
@@ -14,7 +14,6 @@ SourcePolygonStockAPI DataSource
```terraform
data "airbyte_source_polygon_stock_api" "my_source_polygonstockapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,30 +25,12 @@ data "airbyte_source_polygon_stock_api" "my_source_polygonstockapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `adjusted` (String) Determines whether or not the results are adjusted for splits. By default, results are adjusted and set to true. Set this to false to get results that are NOT adjusted for splits.
-- `api_key` (String) Your API ACCESS Key
-- `end_date` (String) The target date for the aggregate window.
-- `limit` (Number) The target date for the aggregate window.
-- `multiplier` (Number) The size of the timespan multiplier.
-- `sort` (String) Sort the results by timestamp. asc will return results in ascending order (oldest at the top), desc will return results in descending order (newest at the top).
-- `source_type` (String) must be one of ["polygon-stock-api"]
-- `start_date` (String) The beginning date for the aggregate window.
-- `stocks_ticker` (String) The exchange symbol that this item is traded under.
-- `timespan` (String) The size of the time window.
-
diff --git a/docs/data-sources/source_postgres.md b/docs/data-sources/source_postgres.md
index 54cf73891..529f28ad9 100644
--- a/docs/data-sources/source_postgres.md
+++ b/docs/data-sources/source_postgres.md
@@ -14,7 +14,6 @@ SourcePostgres DataSource
```terraform
data "airbyte_source_postgres" "my_source_postgres" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,377 +25,12 @@ data "airbyte_source_postgres" "my_source_postgres" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `host` (String) Hostname of the database.
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (Eg. key1=value1&key2=value2&key3=value3). For more information read about JDBC URL parameters.
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `replication_method` (Attributes) Configures how data is extracted from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
-- `schemas` (List of String) The list of schemas (case sensitive) to sync from. Defaults to public.
-- `source_type` (String) must be one of ["postgres"]
-- `ssl_mode` (Attributes) SSL connection modes.
- Read more in the docs. (see [below for nested schema](#nestedatt--configuration--ssl_mode))
-- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `username` (String) Username to access the database.
-
-
-### Nested Schema for `configuration.replication_method`
-
-Read-Only:
-
-- `source_postgres_update_method_detect_changes_with_xmin_system_column` (Attributes) Recommended - Incrementally reads new inserts and updates via Postgres Xmin system column. Only recommended for tables up to 500GB. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_method_detect_changes_with_xmin_system_column))
-- `source_postgres_update_method_read_changes_using_write_ahead_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the Postgres write-ahead log (WAL). This needs to be configured on the source database itself. Recommended for tables of any size. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_method_read_changes_using_write_ahead_log_cdc))
-- `source_postgres_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_method_scan_changes_with_user_defined_cursor))
-- `source_postgres_update_update_method_detect_changes_with_xmin_system_column` (Attributes) Recommended - Incrementally reads new inserts and updates via Postgres Xmin system column. Only recommended for tables up to 500GB. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_update_method_detect_changes_with_xmin_system_column))
-- `source_postgres_update_update_method_read_changes_using_write_ahead_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the Postgres write-ahead log (WAL). This needs to be configured on the source database itself. Recommended for tables of any size. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_update_method_read_changes_using_write_ahead_log_cdc))
-- `source_postgres_update_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_update_method_scan_changes_with_user_defined_cursor))
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_method_detect_changes_with_xmin_system_column`
-
-Read-Only:
-
-- `method` (String) must be one of ["Xmin"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_method_read_changes_using_write_ahead_log_cdc`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
-Determines when Airbtye should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is default. If `While reading Data` is selected, in case of a downstream failure (while loading data into the destination), next sync would result in a full sync.
-- `method` (String) must be one of ["CDC"]
-- `plugin` (String) must be one of ["pgoutput"]
-A logical decoding plugin installed on the PostgreSQL server.
-- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
-- `queue_size` (Number) The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.
-- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_method_scan_changes_with_user_defined_cursor`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_update_method_detect_changes_with_xmin_system_column`
-
-Read-Only:
-
-- `method` (String) must be one of ["Xmin"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_update_method_read_changes_using_write_ahead_log_cdc`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
-Determines when Airbtye should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is default. If `While reading Data` is selected, in case of a downstream failure (while loading data into the destination), next sync would result in a full sync.
-- `method` (String) must be one of ["CDC"]
-- `plugin` (String) must be one of ["pgoutput"]
-A logical decoding plugin installed on the PostgreSQL server.
-- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
-- `queue_size` (Number) The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.
-- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_update_method_scan_changes_with_user_defined_cursor`
-
-Read-Only:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-
-### Nested Schema for `configuration.ssl_mode`
-
-Read-Only:
-
-- `source_postgres_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_allow))
-- `source_postgres_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_disable))
-- `source_postgres_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_prefer))
-- `source_postgres_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_require))
-- `source_postgres_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_verify_ca))
-- `source_postgres_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_verify_full))
-- `source_postgres_update_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_allow))
-- `source_postgres_update_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_disable))
-- `source_postgres_update_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_prefer))
-- `source_postgres_update_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_require))
-- `source_postgres_update_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_verify_ca))
-- `source_postgres_update_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_allow`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["allow"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_disable`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_prefer`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["prefer"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_require`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_verify_ca`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_verify_full`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_allow`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["allow"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_disable`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_prefer`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["prefer"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_require`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_verify_ca`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-ca"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_verify_full`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-- `mode` (String) must be one of ["verify-full"]
-
-
-
-
-### Nested Schema for `configuration.tunnel_method`
-
-Read-Only:
-
-- `source_postgres_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_ssh_tunnel_method_no_tunnel))
-- `source_postgres_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_ssh_tunnel_method_password_authentication))
-- `source_postgres_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_ssh_tunnel_method_ssh_key_authentication))
-- `source_postgres_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_update_ssh_tunnel_method_no_tunnel))
-- `source_postgres_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_update_ssh_tunnel_method_password_authentication))
-- `source_postgres_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_update_ssh_tunnel_method_no_tunnel`
-
-Read-Only:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_update_ssh_tunnel_method_password_authentication`
-
-Read-Only:
-
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_update_ssh_tunnel_method_ssh_key_authentication`
-
-Read-Only:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
diff --git a/docs/data-sources/source_posthog.md b/docs/data-sources/source_posthog.md
index e076d3b63..8edb56f7c 100644
--- a/docs/data-sources/source_posthog.md
+++ b/docs/data-sources/source_posthog.md
@@ -14,7 +14,6 @@ SourcePosthog DataSource
```terraform
data "airbyte_source_posthog" "my_source_posthog" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_posthog" "my_source_posthog" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key. See the docs for information on how to generate this key.
-- `base_url` (String) Base PostHog url. Defaults to PostHog Cloud (https://app.posthog.com).
-- `events_time_step` (Number) Set lower value in case of failing long running sync of events stream.
-- `source_type` (String) must be one of ["posthog"]
-- `start_date` (String) The date from which you'd like to replicate the data. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_postmarkapp.md b/docs/data-sources/source_postmarkapp.md
index b1afac15d..551408386 100644
--- a/docs/data-sources/source_postmarkapp.md
+++ b/docs/data-sources/source_postmarkapp.md
@@ -14,7 +14,6 @@ SourcePostmarkapp DataSource
```terraform
data "airbyte_source_postmarkapp" "my_source_postmarkapp" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_postmarkapp" "my_source_postmarkapp" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["postmarkapp"]
-- `x_postmark_account_token` (String) API Key for account
-- `x_postmark_server_token` (String) API Key for server
-
diff --git a/docs/data-sources/source_prestashop.md b/docs/data-sources/source_prestashop.md
index 3d3aca6cb..9330ec0ca 100644
--- a/docs/data-sources/source_prestashop.md
+++ b/docs/data-sources/source_prestashop.md
@@ -14,7 +14,6 @@ SourcePrestashop DataSource
```terraform
data "airbyte_source_prestashop" "my_source_prestashop" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_prestashop" "my_source_prestashop" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_key` (String) Your PrestaShop access key. See the docs for info on how to obtain this.
-- `source_type` (String) must be one of ["prestashop"]
-- `start_date` (String) The Start date in the format YYYY-MM-DD.
-- `url` (String) Shop URL without trailing slash.
-
diff --git a/docs/data-sources/source_punk_api.md b/docs/data-sources/source_punk_api.md
index dedfcb423..a6ff7d0ce 100644
--- a/docs/data-sources/source_punk_api.md
+++ b/docs/data-sources/source_punk_api.md
@@ -14,7 +14,6 @@ SourcePunkAPI DataSource
```terraform
data "airbyte_source_punk_api" "my_source_punkapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_punk_api" "my_source_punkapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `brewed_after` (String) To extract specific data with Unique ID
-- `brewed_before` (String) To extract specific data with Unique ID
-- `id` (String) To extract specific data with Unique ID
-- `source_type` (String) must be one of ["punk-api"]
-
diff --git a/docs/data-sources/source_pypi.md b/docs/data-sources/source_pypi.md
index ec211a78e..5a0466a9e 100644
--- a/docs/data-sources/source_pypi.md
+++ b/docs/data-sources/source_pypi.md
@@ -14,7 +14,6 @@ SourcePypi DataSource
```terraform
data "airbyte_source_pypi" "my_source_pypi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_pypi" "my_source_pypi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `project_name` (String) Name of the project/package. Can only be in lowercase with hyphen. This is the name used using pip command for installing the package.
-- `source_type` (String) must be one of ["pypi"]
-- `version` (String) Version of the project/package. Use it to find a particular release instead of all releases.
-
diff --git a/docs/data-sources/source_qualaroo.md b/docs/data-sources/source_qualaroo.md
index c4eedcf28..5f72d5de5 100644
--- a/docs/data-sources/source_qualaroo.md
+++ b/docs/data-sources/source_qualaroo.md
@@ -14,7 +14,6 @@ SourceQualaroo DataSource
```terraform
data "airbyte_source_qualaroo" "my_source_qualaroo" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_qualaroo" "my_source_qualaroo" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `key` (String) A Qualaroo token. See the docs for instructions on how to generate it.
-- `source_type` (String) must be one of ["qualaroo"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `survey_ids` (List of String) IDs of the surveys from which you'd like to replicate data. If left empty, data from all surveys to which you have access will be replicated.
-- `token` (String) A Qualaroo token. See the docs for instructions on how to generate it.
-
diff --git a/docs/data-sources/source_quickbooks.md b/docs/data-sources/source_quickbooks.md
index eeb81b122..4a783b4d3 100644
--- a/docs/data-sources/source_quickbooks.md
+++ b/docs/data-sources/source_quickbooks.md
@@ -14,7 +14,6 @@ SourceQuickbooks DataSource
```terraform
data "airbyte_source_quickbooks" "my_source_quickbooks" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,59 +25,12 @@ data "airbyte_source_quickbooks" "my_source_quickbooks" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `sandbox` (Boolean) Determines whether to use the sandbox or production environment.
-- `source_type` (String) must be one of ["quickbooks"]
-- `start_date` (String) The default value to use if no bookmark exists for an endpoint (rfc3339 date string). E.g, 2021-03-20T00:00:00Z. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_quickbooks_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_quickbooks_authorization_method_o_auth2_0))
-- `source_quickbooks_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_quickbooks_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_quickbooks_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) Identifies which app is making the request. Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
-- `client_secret` (String) Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
-- `realm_id` (String) Labeled Company ID. The Make API Calls panel is populated with the realm id and the current access token.
-- `refresh_token` (String) A token used when refreshing the access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_quickbooks_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) Identifies which app is making the request. Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
-- `client_secret` (String) Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
-- `realm_id` (String) Labeled Company ID. The Make API Calls panel is populated with the realm id and the current access token.
-- `refresh_token` (String) A token used when refreshing the access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
diff --git a/docs/data-sources/source_railz.md b/docs/data-sources/source_railz.md
index f5998c435..0db8ca0b8 100644
--- a/docs/data-sources/source_railz.md
+++ b/docs/data-sources/source_railz.md
@@ -14,7 +14,6 @@ SourceRailz DataSource
```terraform
data "airbyte_source_railz" "my_source_railz" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_railz" "my_source_railz" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `client_id` (String) Client ID (client_id)
-- `secret_key` (String) Secret key (secret_key)
-- `source_type` (String) must be one of ["railz"]
-- `start_date` (String) Start date
-
diff --git a/docs/data-sources/source_recharge.md b/docs/data-sources/source_recharge.md
index 2a6b96d10..3fdf04c83 100644
--- a/docs/data-sources/source_recharge.md
+++ b/docs/data-sources/source_recharge.md
@@ -14,7 +14,6 @@ SourceRecharge DataSource
```terraform
data "airbyte_source_recharge" "my_source_recharge" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_recharge" "my_source_recharge" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) The value of the Access Token generated. See the docs for more information.
-- `source_type` (String) must be one of ["recharge"]
-- `start_date` (String) The date from which you'd like to replicate data for Recharge API, in the format YYYY-MM-DDT00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_recreation.md b/docs/data-sources/source_recreation.md
index 522c2b5fb..6bd0c5bf1 100644
--- a/docs/data-sources/source_recreation.md
+++ b/docs/data-sources/source_recreation.md
@@ -14,7 +14,6 @@ SourceRecreation DataSource
```terraform
data "airbyte_source_recreation" "my_source_recreation" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_recreation" "my_source_recreation" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `apikey` (String) API Key
-- `query_campsites` (String)
-- `source_type` (String) must be one of ["recreation"]
-
diff --git a/docs/data-sources/source_recruitee.md b/docs/data-sources/source_recruitee.md
index 59bddb6f9..9831a0113 100644
--- a/docs/data-sources/source_recruitee.md
+++ b/docs/data-sources/source_recruitee.md
@@ -14,7 +14,6 @@ SourceRecruitee DataSource
```terraform
data "airbyte_source_recruitee" "my_source_recruitee" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_recruitee" "my_source_recruitee" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Recruitee API Key. See here.
-- `company_id` (Number) Recruitee Company ID. You can also find this ID on the Recruitee API tokens page.
-- `source_type` (String) must be one of ["recruitee"]
-
diff --git a/docs/data-sources/source_recurly.md b/docs/data-sources/source_recurly.md
index a1c7b17c2..38f97cef4 100644
--- a/docs/data-sources/source_recurly.md
+++ b/docs/data-sources/source_recurly.md
@@ -14,7 +14,6 @@ SourceRecurly DataSource
```terraform
data "airbyte_source_recurly" "my_source_recurly" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_recurly" "my_source_recurly" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Recurly API Key. See the docs for more information on how to generate this key.
-- `begin_time` (String) ISO8601 timestamp from which the replication from Recurly API will start from.
-- `end_time` (String) ISO8601 timestamp to which the replication from Recurly API will stop. Records after that date won't be imported.
-- `source_type` (String) must be one of ["recurly"]
-
diff --git a/docs/data-sources/source_redshift.md b/docs/data-sources/source_redshift.md
index ebae7c06a..1b3d3fdf3 100644
--- a/docs/data-sources/source_redshift.md
+++ b/docs/data-sources/source_redshift.md
@@ -14,7 +14,6 @@ SourceRedshift DataSource
```terraform
data "airbyte_source_redshift" "my_source_redshift" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,28 +25,12 @@ data "airbyte_source_redshift" "my_source_redshift" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `database` (String) Name of the database.
-- `host` (String) Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com).
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `schemas` (List of String) The list of schemas to sync from. Specify one or more explicitly or keep empty to process all schemas. Schema names are case sensitive.
-- `source_type` (String) must be one of ["redshift"]
-- `username` (String) Username to use to access the database.
-
diff --git a/docs/data-sources/source_retently.md b/docs/data-sources/source_retently.md
index 9724ec511..76168d2b1 100644
--- a/docs/data-sources/source_retently.md
+++ b/docs/data-sources/source_retently.md
@@ -14,7 +14,6 @@ SourceRetently DataSource
```terraform
data "airbyte_source_retently" "my_source_retently" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,87 +25,12 @@ data "airbyte_source_retently" "my_source_retently" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["retently"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_retently_authentication_mechanism_authenticate_via_retently_o_auth` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_authentication_mechanism_authenticate_via_retently_o_auth))
-- `source_retently_authentication_mechanism_authenticate_with_api_token` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_authentication_mechanism_authenticate_with_api_token))
-- `source_retently_update_authentication_mechanism_authenticate_via_retently_o_auth` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_update_authentication_mechanism_authenticate_via_retently_o_auth))
-- `source_retently_update_authentication_mechanism_authenticate_with_api_token` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_update_authentication_mechanism_authenticate_with_api_token))
-
-
-### Nested Schema for `configuration.credentials.source_retently_authentication_mechanism_authenticate_via_retently_o_auth`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Retently developer application.
-- `client_secret` (String) The Client Secret of your Retently developer application.
-- `refresh_token` (String) Retently Refresh Token which can be used to fetch new Bearer Tokens when the current one expires.
-
-
-
-### Nested Schema for `configuration.credentials.source_retently_authentication_mechanism_authenticate_with_api_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_key` (String) Retently API Token. See the docs for more information on how to obtain this key.
-- `auth_type` (String) must be one of ["Token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_retently_update_authentication_mechanism_authenticate_via_retently_o_auth`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Retently developer application.
-- `client_secret` (String) The Client Secret of your Retently developer application.
-- `refresh_token` (String) Retently Refresh Token which can be used to fetch new Bearer Tokens when the current one expires.
-
-
-
-### Nested Schema for `configuration.credentials.source_retently_update_authentication_mechanism_authenticate_with_api_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_key` (String) Retently API Token. See the docs for more information on how to obtain this key.
-- `auth_type` (String) must be one of ["Token"]
-
diff --git a/docs/data-sources/source_rki_covid.md b/docs/data-sources/source_rki_covid.md
index 2dddd1a2f..745a69b98 100644
--- a/docs/data-sources/source_rki_covid.md
+++ b/docs/data-sources/source_rki_covid.md
@@ -14,7 +14,6 @@ SourceRkiCovid DataSource
```terraform
data "airbyte_source_rki_covid" "my_source_rkicovid" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_rki_covid" "my_source_rkicovid" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["rki-covid"]
-- `start_date` (String) UTC date in the format 2017-01-25. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_rss.md b/docs/data-sources/source_rss.md
index de1a555d2..98fe7fc1e 100644
--- a/docs/data-sources/source_rss.md
+++ b/docs/data-sources/source_rss.md
@@ -14,7 +14,6 @@ SourceRss DataSource
```terraform
data "airbyte_source_rss" "my_source_rss" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_rss" "my_source_rss" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["rss"]
-- `url` (String) RSS Feed URL
-
diff --git a/docs/data-sources/source_s3.md b/docs/data-sources/source_s3.md
index 24db6a956..5ccf08821 100644
--- a/docs/data-sources/source_s3.md
+++ b/docs/data-sources/source_s3.md
@@ -14,7 +14,6 @@ SourceS3 DataSource
```terraform
data "airbyte_source_s3" "my_source_s3" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,355 +25,12 @@ data "airbyte_source_s3" "my_source_s3" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) NOTE: When this Spec is changed, legacy_config_transformer.py must also be modified to uptake the changes
-because it is responsible for converting legacy S3 v3 configs into v4 configs using the File-Based CDK. (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `bucket` (String) Name of the S3 bucket where the file(s) exist.
-- `dataset` (String) Deprecated and will be removed soon. Please do not use this field anymore and use streams.name instead. The name of the stream you would like this source to output. Can contain letters, numbers, or underscores.
-- `endpoint` (String) Endpoint to an S3 compatible service. Leave empty to use AWS.
-- `format` (Attributes) Deprecated and will be removed soon. Please do not use this field anymore and use streams.format instead. The format of the files you'd like to replicate (see [below for nested schema](#nestedatt--configuration--format))
-- `path_pattern` (String) Deprecated and will be removed soon. Please do not use this field anymore and use streams.globs instead. A regular expression which tells the connector which files to replicate. All files which match this pattern will be replicated. Use | to separate multiple patterns. See this page to understand pattern syntax (GLOBSTAR and SPLIT flags are enabled). Use pattern ** to pick up all files.
-- `provider` (Attributes) Deprecated and will be removed soon. Please do not use this field anymore and use bucket, aws_access_key_id, aws_secret_access_key and endpoint instead. Use this to load files from S3 or S3-compatible services (see [below for nested schema](#nestedatt--configuration--provider))
-- `schema` (String) Deprecated and will be removed soon. Please do not use this field anymore and use streams.input_schema instead. Optionally provide a schema to enforce, as a valid JSON string. Ensure this is a mapping of { "column" : "type" }, where types are valid JSON Schema datatypes. Leave as {} to auto-infer the schema.
-- `source_type` (String) must be one of ["s3"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00.000000Z. Any file modified before this date will not be replicated.
-- `streams` (Attributes List) Each instance of this configuration defines a stream. Use this to define which files belong in the stream, their format, and how they should be parsed and validated. When sending data to warehouse destination such as Snowflake or BigQuery, each stream is a separate table. (see [below for nested schema](#nestedatt--configuration--streams))
-
-
-### Nested Schema for `configuration.format`
-
-Read-Only:
-
-- `source_s3_file_format_avro` (Attributes) This connector utilises fastavro for Avro parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_avro))
-- `source_s3_file_format_csv` (Attributes) This connector utilises PyArrow (Apache Arrow) for CSV parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_csv))
-- `source_s3_file_format_jsonl` (Attributes) This connector uses PyArrow for JSON Lines (jsonl) file parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_jsonl))
-- `source_s3_file_format_parquet` (Attributes) This connector utilises PyArrow (Apache Arrow) for Parquet parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_parquet))
-- `source_s3_update_file_format_avro` (Attributes) This connector utilises fastavro for Avro parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_avro))
-- `source_s3_update_file_format_csv` (Attributes) This connector utilises PyArrow (Apache Arrow) for CSV parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_csv))
-- `source_s3_update_file_format_jsonl` (Attributes) This connector uses PyArrow for JSON Lines (jsonl) file parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_jsonl))
-- `source_s3_update_file_format_parquet` (Attributes) This connector utilises PyArrow (Apache Arrow) for Parquet parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_parquet))
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_avro`
-
-Read-Only:
-
-- `filetype` (String) must be one of ["avro"]
-
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_csv`
-
-Read-Only:
-
-- `additional_reader_options` (String) Optionally add a valid JSON string here to provide additional options to the CSV reader. Mappings must correspond to options detailed here. 'column_types' is used internally to handle the schema, so overriding it would likely cause problems.
-- `advanced_options` (String) Optionally add a valid JSON string here to provide additional PyArrow ReadOptions. Specify 'column_names' here if your CSV doesn't have a header, or if you want to use custom column names. 'block_size' and 'encoding' are already used above; specifying them again here will override the values above.
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
-- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
-- `filetype` (String) must be one of ["csv"]
-- `infer_datatypes` (Boolean) Configures whether a schema for the source should be inferred from the current data or not. If set to false and a custom schema is set, then the manually enforced schema is used. If a schema is not manually set and this is set to false, then all fields will be read as strings.
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in CSV values. Turning this on may affect performance. Leave blank to default to False.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
-
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_jsonl`
-
-Read-Only:
-
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `filetype` (String) must be one of ["jsonl"]
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in JSON values. Turning this on may affect performance. Leave blank to default to False.
-- `unexpected_field_behavior` (String) must be one of ["ignore", "infer", "error"]
-How JSON fields outside of explicit_schema (if given) are treated. Check the PyArrow documentation for details.
-
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_parquet`
-
-Read-Only:
-
-- `batch_size` (Number) Maximum number of records per batch read from the input files. Batches may be smaller if there aren’t enough rows in the file. This option can help avoid out-of-memory errors if your data is particularly wide.
-- `buffer_size` (Number) Perform read buffering when deserializing individual column chunks. By default every group column will be loaded fully to memory. This option can help avoid out-of-memory errors if your data is particularly wide.
-- `columns` (List of String) If you only want to sync a subset of the columns from the file(s), add the columns you want here as a comma-delimited list. Leave it empty to sync all columns.
-- `filetype` (String) must be one of ["parquet"]
-
-
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_avro`
-
-Read-Only:
-
-- `filetype` (String) must be one of ["avro"]
-
-
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_csv`
-
-Read-Only:
-
-- `additional_reader_options` (String) Optionally add a valid JSON string here to provide additional options to the CSV reader. Mappings must correspond to options detailed here. 'column_types' is used internally to handle the schema, so overriding it would likely cause problems.
-- `advanced_options` (String) Optionally add a valid JSON string here to provide additional PyArrow ReadOptions. Specify 'column_names' here if your CSV doesn't have a header, or if you want to use custom column names. 'block_size' and 'encoding' are already used above; specifying them again here will override the values above.
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
-- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
-- `filetype` (String) must be one of ["csv"]
-- `infer_datatypes` (Boolean) Configures whether a schema for the source should be inferred from the current data or not. If set to false and a custom schema is set, then the manually enforced schema is used. If a schema is not manually set and this is set to false, then all fields will be read as strings.
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in CSV values. Turning this on may affect performance. Leave blank to default to False.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
-
-
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_jsonl`
-
-Read-Only:
-
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `filetype` (String) must be one of ["jsonl"]
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in JSON values. Turning this on may affect performance. Leave blank to default to False.
-- `unexpected_field_behavior` (String) must be one of ["ignore", "infer", "error"]
-How JSON fields outside of explicit_schema (if given) are treated. Check the PyArrow documentation for details.
-
-
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_parquet`
-
-Read-Only:
-
-- `batch_size` (Number) Maximum number of records per batch read from the input files. Batches may be smaller if there aren’t enough rows in the file. This option can help avoid out-of-memory errors if your data is particularly wide.
-- `buffer_size` (Number) Perform read buffering when deserializing individual column chunks. By default every group column will be loaded fully to memory. This option can help avoid out-of-memory errors if your data is particularly wide.
-- `columns` (List of String) If you only want to sync a subset of the columns from the file(s), add the columns you want here as a comma-delimited list. Leave it empty to sync all columns.
-- `filetype` (String) must be one of ["parquet"]
-
-
-
-
-### Nested Schema for `configuration.provider`
-
-Read-Only:
-
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `bucket` (String) Name of the S3 bucket where the file(s) exist.
-- `endpoint` (String) Endpoint to an S3 compatible service. Leave empty to use AWS.
-- `path_prefix` (String) By providing a path-like prefix (e.g. myFolder/thisTable/) under which all the relevant files sit, we can optimize finding these in S3. This is optional but recommended if your bucket contains many folders/files which you don't need to replicate.
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any file modified before this date will not be replicated.
-
-
-
-### Nested Schema for `configuration.streams`
-
-Read-Only:
-
-- `days_to_sync_if_history_is_full` (Number) When the state history of the file store is full, syncs will only read files that were last modified in the provided day range.
-- `file_type` (String) The data file type that is being extracted for a stream.
-- `format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format))
-- `globs` (List of String) The pattern used to specify which files should be selected from the file system. For more information on glob pattern matching look here.
-- `input_schema` (String) The schema that will be used to validate records extracted from the file. This will override the stream schema that is auto-detected from incoming files.
-- `legacy_prefix` (String) The path prefix configured in v3 versions of the S3 connector. This option is deprecated in favor of a single glob.
-- `name` (String) The name of the stream.
-- `primary_key` (String) The column or columns (for a composite key) that serves as the unique identifier of a record.
-- `schemaless` (Boolean) When enabled, syncs will not validate or structure records against the stream's schema.
-- `validation_policy` (String) must be one of ["Emit Record", "Skip Record", "Wait for Discover"]
-The name of the validation policy that dictates sync behavior when a record does not adhere to the stream schema.
-
-
-### Nested Schema for `configuration.streams.format`
-
-Read-Only:
-
-- `source_s3_file_based_stream_config_format_avro_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_avro_format))
-- `source_s3_file_based_stream_config_format_csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_csv_format))
-- `source_s3_file_based_stream_config_format_jsonl_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_jsonl_format))
-- `source_s3_file_based_stream_config_format_parquet_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_parquet_format))
-- `source_s3_update_file_based_stream_config_format_avro_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_avro_format))
-- `source_s3_update_file_based_stream_config_format_csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_csv_format))
-- `source_s3_update_file_based_stream_config_format_jsonl_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_jsonl_format))
-- `source_s3_update_file_based_stream_config_format_parquet_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format))
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_avro_format`
-
-Read-Only:
-
-- `double_as_string` (Boolean) Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision because there can be a loss of precision when handling floating point numbers.
-- `filetype` (String) must be one of ["avro"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_csv_format`
-
-Read-Only:
-
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
-- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
-- `false_values` (List of String) A set of case-sensitive strings that should be interpreted as false values.
-- `filetype` (String) must be one of ["csv"]
-- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition))
-- `inference_type` (String) must be one of ["None", "Primitive Types Only"]
-How to infer the types of the columns. If None, inference defaults to strings.
-- `null_values` (List of String) A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
-- `skip_rows_after_header` (Number) The number of rows to skip after the header row.
-- `skip_rows_before_header` (Number) The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
-- `strings_can_be_null` (Boolean) Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
-- `true_values` (List of String) A set of case-sensitive strings that should be interpreted as true values.
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_csv_format.header_definition`
-
-Read-Only:
-
-- `source_s3_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated))
-- `source_s3_file_based_stream_config_format_csv_format_csv_header_definition_from_csv` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_file_based_stream_config_format_csv_format_csv_header_definition_from_csv))
-- `source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided))
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_csv_format.header_definition.source_s3_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated`
-
-Read-Only:
-
-- `header_definition_type` (String) must be one of ["Autogenerated"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_csv_format.header_definition.source_s3_file_based_stream_config_format_csv_format_csv_header_definition_from_csv`
-
-Read-Only:
-
-- `header_definition_type` (String) must be one of ["From CSV"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_csv_format.header_definition.source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
-
-Read-Only:
-
-- `column_names` (List of String) The column names that will be used while emitting the CSV records.
-- `header_definition_type` (String) must be one of ["User Provided"]
-
-
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_jsonl_format`
-
-Read-Only:
-
-- `filetype` (String) must be one of ["jsonl"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_file_based_stream_config_format_parquet_format`
-
-Read-Only:
-
-- `decimal_as_float` (Boolean) Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
-- `filetype` (String) must be one of ["parquet"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_avro_format`
-
-Read-Only:
-
-- `double_as_string` (Boolean) Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision because there can be a loss of precision when handling floating point numbers.
-- `filetype` (String) must be one of ["avro"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_csv_format`
-
-Read-Only:
-
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
-- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
-- `false_values` (List of String) A set of case-sensitive strings that should be interpreted as false values.
-- `filetype` (String) must be one of ["csv"]
-- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition))
-- `inference_type` (String) must be one of ["None", "Primitive Types Only"]
-How to infer the types of the columns. If None, inference defaults to strings.
-- `null_values` (List of String) A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
-- `skip_rows_after_header` (Number) The number of rows to skip after the header row.
-- `skip_rows_before_header` (Number) The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
-- `strings_can_be_null` (Boolean) Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
-- `true_values` (List of String) A set of case-sensitive strings that should be interpreted as true values.
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_csv_format.header_definition`
-
-Read-Only:
-
-- `source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated))
-- `source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_from_csv` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_from_csv))
-- `source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, while `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV that has headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided))
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_csv_format.header_definition.source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated`
-
-Read-Only:
-
-- `header_definition_type` (String) must be one of ["Autogenerated"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_csv_format.header_definition.source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_from_csv`
-
-Read-Only:
-
-- `header_definition_type` (String) must be one of ["From CSV"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_csv_format.header_definition.source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
-
-Read-Only:
-
-- `column_names` (List of String) The column names that will be used while emitting the CSV records.
-- `header_definition_type` (String) must be one of ["User Provided"]
-
-
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_jsonl_format`
-
-Read-Only:
-
-- `filetype` (String) must be one of ["jsonl"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
-
-Read-Only:
-
-- `decimal_as_float` (Boolean) Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
-- `filetype` (String) must be one of ["parquet"]
-
diff --git a/docs/data-sources/source_salesforce.md b/docs/data-sources/source_salesforce.md
index 108f31092..74ed7a3b6 100644
--- a/docs/data-sources/source_salesforce.md
+++ b/docs/data-sources/source_salesforce.md
@@ -14,7 +14,6 @@ SourceSalesforce DataSource
```terraform
data "airbyte_source_salesforce" "my_source_salesforce" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,37 +25,12 @@ data "airbyte_source_salesforce" "my_source_salesforce" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) Enter your Salesforce developer application's Client ID
-- `client_secret` (String) Enter your Salesforce developer application's Client secret
-- `force_use_bulk_api` (Boolean) Toggle to use Bulk API (this might cause empty fields for some streams)
-- `is_sandbox` (Boolean) Toggle if you're using a Salesforce Sandbox
-- `refresh_token` (String) Enter your application's Salesforce Refresh Token used for Airbyte to access your Salesforce account.
-- `source_type` (String) must be one of ["salesforce"]
-- `start_date` (String) Enter the date (or date-time) in the YYYY-MM-DD or YYYY-MM-DDTHH:mm:ssZ format. Airbyte will replicate the data updated on and after this date. If this field is blank, Airbyte will replicate the data for last two years.
-- `streams_criteria` (Attributes List) Add filters to select only required stream based on `SObject` name. Use this field to filter which tables are displayed by this connector. This is useful if your Salesforce account has a large number of tables (>1000), in which case you may find it easier to navigate the UI and speed up the connector's performance if you restrict the tables displayed by this connector. (see [below for nested schema](#nestedatt--configuration--streams_criteria))
-
-
-### Nested Schema for `configuration.streams_criteria`
-
-Read-Only:
-
-- `criteria` (String) must be one of ["starts with", "ends with", "contains", "exacts", "starts not with", "ends not with", "not contains", "not exacts"]
-- `value` (String)
-
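The `+` hunks above change the `configuration` attribute from nested Terraform attributes to a single JSON-encoded string. A minimal sketch of consuming the new shape with `jsondecode()`, assuming the `is_sandbox` key survives inside the JSON (the key name is taken from the removed nested schema and is not guaranteed by this diff):

```terraform
data "airbyte_source_salesforce" "my_source_salesforce" {
  source_id = "...my_source_id..."
}

locals {
  # `configuration` is now a JSON string; decode it back into an object
  # before reading individual fields.
  salesforce_config = jsondecode(data.airbyte_source_salesforce.my_source_salesforce.configuration)
}

output "salesforce_is_sandbox" {
  # try() guards against the key being absent in the decoded object.
  value = try(local.salesforce_config.is_sandbox, null)
}
```

The same pattern applies to the other data sources in this diff whose `configuration` moved to a JSON string, such as `airbyte_source_salesloft`.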
diff --git a/docs/data-sources/source_salesloft.md b/docs/data-sources/source_salesloft.md
index c2029f97f..276a0880d 100644
--- a/docs/data-sources/source_salesloft.md
+++ b/docs/data-sources/source_salesloft.md
@@ -14,7 +14,6 @@ SourceSalesloft DataSource
```terraform
data "airbyte_source_salesloft" "my_source_salesloft" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,76 +25,12 @@ data "airbyte_source_salesloft" "my_source_salesloft" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["salesloft"]
-- `start_date` (String) The date from which you'd like to replicate data for the Salesloft API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_salesloft_credentials_authenticate_via_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_credentials_authenticate_via_api_key))
-- `source_salesloft_credentials_authenticate_via_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_credentials_authenticate_via_o_auth))
-- `source_salesloft_update_credentials_authenticate_via_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_update_credentials_authenticate_via_api_key))
-- `source_salesloft_update_credentials_authenticate_via_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_update_credentials_authenticate_via_o_auth))
-
-
-### Nested Schema for `configuration.credentials.source_salesloft_credentials_authenticate_via_api_key`
-
-Read-Only:
-
-- `api_key` (String) API Key for making authenticated requests. More instruction on how to find this value in our docs
-- `auth_type` (String) must be one of ["api_key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_salesloft_credentials_authenticate_via_o_auth`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your Salesloft developer application.
-- `client_secret` (String) The Client Secret of your Salesloft developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_salesloft_update_credentials_authenticate_via_api_key`
-
-Read-Only:
-
-- `api_key` (String) API Key for making authenticated requests. More instruction on how to find this value in our docs
-- `auth_type` (String) must be one of ["api_key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_salesloft_update_credentials_authenticate_via_o_auth`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your Salesloft developer application.
-- `client_secret` (String) The Client Secret of your Salesloft developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
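The hunk above replaces the nested `configuration` attribute block with a single JSON-encoded string. A minimal sketch of how a downstream config might consume the new shape, assuming Terraform's built-in `jsondecode()` and `try()` functions — the key name `start_date` is taken from the removed schema and is an assumption, not guaranteed output:

```terraform
# Illustrative only: `configuration` is now a JSON string, so nested values
# are reached by decoding rather than via typed nested attributes.
data "airbyte_source_salesloft" "my_source_salesloft" {
  source_id = "...my_source_id..."
}

locals {
  # Parse the JSON-encoded configuration into a Terraform object.
  salesloft_config = jsondecode(data.airbyte_source_salesloft.my_source_salesloft.configuration)
}

output "salesloft_start_date" {
  # try() guards against the key being absent from the decoded object.
  value = try(local.salesloft_config.start_date, null)
}
```

Note that references like `data.airbyte_source_salesloft.my_source_salesloft.configuration.start_date`, which relied on the former nested-attribute schema, would need to be rewritten this way after the change.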
diff --git a/docs/data-sources/source_sap_fieldglass.md b/docs/data-sources/source_sap_fieldglass.md
index be138c261..ae8a2269f 100644
--- a/docs/data-sources/source_sap_fieldglass.md
+++ b/docs/data-sources/source_sap_fieldglass.md
@@ -14,7 +14,6 @@ SourceSapFieldglass DataSource
```terraform
data "airbyte_source_sap_fieldglass" "my_source_sapfieldglass" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_sap_fieldglass" "my_source_sapfieldglass" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["sap-fieldglass"]
-
diff --git a/docs/data-sources/source_secoda.md b/docs/data-sources/source_secoda.md
index dcc538e3e..b133b589f 100644
--- a/docs/data-sources/source_secoda.md
+++ b/docs/data-sources/source_secoda.md
@@ -14,7 +14,6 @@ SourceSecoda DataSource
```terraform
data "airbyte_source_secoda" "my_source_secoda" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_secoda" "my_source_secoda" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your API Access Key. See here. The key is case sensitive.
-- `source_type` (String) must be one of ["secoda"]
-
diff --git a/docs/data-sources/source_sendgrid.md b/docs/data-sources/source_sendgrid.md
index 9a2719f5a..4c535cdc3 100644
--- a/docs/data-sources/source_sendgrid.md
+++ b/docs/data-sources/source_sendgrid.md
@@ -14,7 +14,6 @@ SourceSendgrid DataSource
```terraform
data "airbyte_source_sendgrid" "my_source_sendgrid" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_sendgrid" "my_source_sendgrid" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `apikey` (String) API Key, use admin to generate this key.
-- `source_type` (String) must be one of ["sendgrid"]
-- `start_time` (String) Start time in ISO8601 format. Any data before this time point will not be replicated.
-
diff --git a/docs/data-sources/source_sendinblue.md b/docs/data-sources/source_sendinblue.md
index e36af0d59..ccc357e94 100644
--- a/docs/data-sources/source_sendinblue.md
+++ b/docs/data-sources/source_sendinblue.md
@@ -14,7 +14,6 @@ SourceSendinblue DataSource
```terraform
data "airbyte_source_sendinblue" "my_source_sendinblue" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_sendinblue" "my_source_sendinblue" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your API Key. See here.
-- `source_type` (String) must be one of ["sendinblue"]
-
diff --git a/docs/data-sources/source_senseforce.md b/docs/data-sources/source_senseforce.md
index 16582c621..fd128aca6 100644
--- a/docs/data-sources/source_senseforce.md
+++ b/docs/data-sources/source_senseforce.md
@@ -14,7 +14,6 @@ SourceSenseforce DataSource
```terraform
data "airbyte_source_senseforce" "my_source_senseforce" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_senseforce" "my_source_senseforce" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Your API access token. See here. The toke is case sensitive.
-- `backend_url` (String) Your Senseforce API backend URL. This is the URL shown during the Login screen. See here for more details. (Note: Most Senseforce backend APIs have the term 'galaxy' in their ULR)
-- `dataset_id` (String) The ID of the dataset you want to synchronize. The ID can be found in the URL when opening the dataset. See here for more details. (Note: As the Senseforce API only allows to synchronize a specific dataset, each dataset you want to synchronize needs to be implemented as a separate airbyte source).
-- `slice_range` (Number) The time increment used by the connector when requesting data from the Senseforce API. The bigger the value is, the less requests will be made and faster the sync will be. On the other hand, the more seldom the state is persisted and the more likely one could run into rate limites. Furthermore, consider that large chunks of time might take a long time for the Senseforce query to return data - meaning it could take in effect longer than with more smaller time slices. If there are a lot of data per day, set this setting to 1. If there is only very little data per day, you might change the setting to 10 or more.
-- `source_type` (String) must be one of ["senseforce"]
-- `start_date` (String) UTC date and time in the format 2017-01-25. Only data with "Timestamp" after this date will be replicated. Important note: This start date must be set to the first day of where your dataset provides data. If your dataset has data from 2020-10-10 10:21:10, set the start_date to 2020-10-10 or later
-
diff --git a/docs/data-sources/source_sentry.md b/docs/data-sources/source_sentry.md
index 2fdc3a11d..4e707b51b 100644
--- a/docs/data-sources/source_sentry.md
+++ b/docs/data-sources/source_sentry.md
@@ -14,7 +14,6 @@ SourceSentry DataSource
```terraform
data "airbyte_source_sentry" "my_source_sentry" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_sentry" "my_source_sentry" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_token` (String) Log into Sentry and then create authentication tokens.For self-hosted, you can find or create authentication tokens by visiting "{instance_url_prefix}/settings/account/api/auth-tokens/"
-- `discover_fields` (List of String) Fields to retrieve when fetching discover events
-- `hostname` (String) Host name of Sentry API server.For self-hosted, specify your host name here. Otherwise, leave it empty.
-- `organization` (String) The slug of the organization the groups belong to.
-- `project` (String) The name (slug) of the Project you want to sync.
-- `source_type` (String) must be one of ["sentry"]
-
diff --git a/docs/data-sources/source_sftp.md b/docs/data-sources/source_sftp.md
index 943fa7510..bb02632ab 100644
--- a/docs/data-sources/source_sftp.md
+++ b/docs/data-sources/source_sftp.md
@@ -14,7 +14,6 @@ SourceSftp DataSource
```terraform
data "airbyte_source_sftp" "my_source_sftp" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,77 +25,12 @@ data "airbyte_source_sftp" "my_source_sftp" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials))
-- `file_pattern` (String) The regular expression to specify files for sync in a chosen Folder Path
-- `file_types` (String) Coma separated file types. Currently only 'csv' and 'json' types are supported.
-- `folder_path` (String) The directory to search files for sync
-- `host` (String) The server host address
-- `port` (Number) The server port
-- `source_type` (String) must be one of ["sftp"]
-- `user` (String) The server user
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_sftp_authentication_wildcard_password_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_authentication_wildcard_password_authentication))
-- `source_sftp_authentication_wildcard_ssh_key_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_authentication_wildcard_ssh_key_authentication))
-- `source_sftp_update_authentication_wildcard_password_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_update_authentication_wildcard_password_authentication))
-- `source_sftp_update_authentication_wildcard_ssh_key_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_update_authentication_wildcard_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.credentials.source_sftp_authentication_wildcard_password_authentication`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through password authentication
-- `auth_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.credentials.source_sftp_authentication_wildcard_ssh_key_authentication`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through ssh key
-- `auth_ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-
-
-
-### Nested Schema for `configuration.credentials.source_sftp_update_authentication_wildcard_password_authentication`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through password authentication
-- `auth_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.credentials.source_sftp_update_authentication_wildcard_ssh_key_authentication`
-
-Read-Only:
-
-- `auth_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through ssh key
-- `auth_ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-
diff --git a/docs/data-sources/source_sftp_bulk.md b/docs/data-sources/source_sftp_bulk.md
index d0a7712ff..390535747 100644
--- a/docs/data-sources/source_sftp_bulk.md
+++ b/docs/data-sources/source_sftp_bulk.md
@@ -14,7 +14,6 @@ SourceSftpBulk DataSource
```terraform
data "airbyte_source_sftp_bulk" "my_source_sftpbulk" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,34 +25,12 @@ data "airbyte_source_sftp_bulk" "my_source_sftpbulk" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `file_most_recent` (Boolean) Sync only the most recent file for the configured folder path and file pattern
-- `file_pattern` (String) The regular expression to specify files for sync in a chosen Folder Path
-- `file_type` (String) must be one of ["csv", "json"]
-The file type you want to sync. Currently only 'csv' and 'json' files are supported.
-- `folder_path` (String) The directory to search files for sync
-- `host` (String) The server host address
-- `password` (String) OS-level password for logging into the jump server host
-- `port` (Number) The server port
-- `private_key` (String) The private key
-- `separator` (String) The separator used in the CSV files. Define None if you want to use the Sniffer functionality
-- `source_type` (String) must be one of ["sftp-bulk"]
-- `start_date` (String) The date from which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-- `stream_name` (String) The name of the stream or table you want to create
-- `username` (String) The server user
-
diff --git a/docs/data-sources/source_shopify.md b/docs/data-sources/source_shopify.md
index 8cf60d496..0e0a77038 100644
--- a/docs/data-sources/source_shopify.md
+++ b/docs/data-sources/source_shopify.md
@@ -14,7 +14,6 @@ SourceShopify DataSource
```terraform
data "airbyte_source_shopify" "my_source_shopify" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,73 +25,12 @@ data "airbyte_source_shopify" "my_source_shopify" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) The authorization method to use to retrieve data from Shopify (see [below for nested schema](#nestedatt--configuration--credentials))
-- `shop` (String) The name of your Shopify store found in the URL. For example, if your URL was https://NAME.myshopify.com, then the name would be 'NAME' or 'NAME.myshopify.com'.
-- `source_type` (String) must be one of ["shopify"]
-- `start_date` (String) The date you would like to replicate data from. Format: YYYY-MM-DD. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_shopify_shopify_authorization_method_api_password` (Attributes) API Password Auth (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_shopify_authorization_method_api_password))
-- `source_shopify_shopify_authorization_method_o_auth2_0` (Attributes) OAuth2.0 (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_shopify_authorization_method_o_auth2_0))
-- `source_shopify_update_shopify_authorization_method_api_password` (Attributes) API Password Auth (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_update_shopify_authorization_method_api_password))
-- `source_shopify_update_shopify_authorization_method_o_auth2_0` (Attributes) OAuth2.0 (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_update_shopify_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_shopify_shopify_authorization_method_api_password`
-
-Read-Only:
-
-- `api_password` (String) The API Password for your private application in the `Shopify` store.
-- `auth_method` (String) must be one of ["api_password"]
-
-
-
-### Nested Schema for `configuration.credentials.source_shopify_shopify_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) The Access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of the Shopify developer application.
-- `client_secret` (String) The Client Secret of the Shopify developer application.
-
-
-
-### Nested Schema for `configuration.credentials.source_shopify_update_shopify_authorization_method_api_password`
-
-Read-Only:
-
-- `api_password` (String) The API Password for your private application in the `Shopify` store.
-- `auth_method` (String) must be one of ["api_password"]
-
-
-
-### Nested Schema for `configuration.credentials.source_shopify_update_shopify_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) The Access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of the Shopify developer application.
-- `client_secret` (String) The Client Secret of the Shopify developer application.
-
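The removed Shopify schema above also illustrates what happens to the one-of credential variants (`source_shopify_shopify_authorization_method_api_password`, `..._o_auth2_0`, and their `_update_` twins): with `configuration` flattened to a JSON string, the variant wrapper attributes disappear and only the selected variant's fields remain in the decoded object. A hedged sketch, assuming the API returns the chosen variant's keys (such as `auth_method`) directly under `credentials` — this nesting is inferred from the removed schema, not confirmed by the diff:

```terraform
# Illustrative only: branch on the decoded credentials rather than on
# per-variant nested attributes, which no longer exist in the schema.
data "airbyte_source_shopify" "my_source_shopify" {
  source_id = "...my_source_id..."
}

locals {
  shopify_config = jsondecode(data.airbyte_source_shopify.my_source_shopify.configuration)

  # "api_password" / "oauth2.0" were the enum values in the removed schema.
  shopify_uses_oauth = try(local.shopify_config.credentials.auth_method, "") == "oauth2.0"
}
```

The same pattern applies to the other connectors in this diff that had variant-style `credentials` blocks (Salesloft, SFTP, Slack, Smartsheets).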
diff --git a/docs/data-sources/source_shortio.md b/docs/data-sources/source_shortio.md
index 2b7a35624..dce7c1c72 100644
--- a/docs/data-sources/source_shortio.md
+++ b/docs/data-sources/source_shortio.md
@@ -14,7 +14,6 @@ SourceShortio DataSource
```terraform
data "airbyte_source_shortio" "my_source_shortio" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_shortio" "my_source_shortio" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `domain_id` (String)
-- `secret_key` (String) Short.io Secret Key
-- `source_type` (String) must be one of ["shortio"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_slack.md b/docs/data-sources/source_slack.md
index b08a5203b..d7bcbbc3e 100644
--- a/docs/data-sources/source_slack.md
+++ b/docs/data-sources/source_slack.md
@@ -14,7 +14,6 @@ SourceSlack DataSource
```terraform
data "airbyte_source_slack" "my_source_slack" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,75 +25,12 @@ data "airbyte_source_slack" "my_source_slack" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `channel_filter` (List of String) A channel name list (without leading '#' char) which limit the channels from which you'd like to sync. Empty list means no filter.
-- `credentials` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials))
-- `join_channels` (Boolean) Whether to join all channels or to sync data only from channels the bot is already in. If false, you'll need to manually add the bot to all the channels from which you'd like to sync messages.
-- `lookback_window` (Number) How far into the past to look for messages in threads, default is 0 days
-- `source_type` (String) must be one of ["slack"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_slack_authentication_mechanism_api_token` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_authentication_mechanism_api_token))
-- `source_slack_authentication_mechanism_sign_in_via_slack_o_auth` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_authentication_mechanism_sign_in_via_slack_o_auth))
-- `source_slack_update_authentication_mechanism_api_token` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_update_authentication_mechanism_api_token))
-- `source_slack_update_authentication_mechanism_sign_in_via_slack_o_auth` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_update_authentication_mechanism_sign_in_via_slack_o_auth))
-
-
-### Nested Schema for `configuration.credentials.source_slack_authentication_mechanism_api_token`
-
-Read-Only:
-
-- `api_token` (String) A Slack bot token. See the docs for instructions on how to generate it.
-- `option_title` (String) must be one of ["API Token Credentials"]
-
-
-
-### Nested Schema for `configuration.credentials.source_slack_authentication_mechanism_sign_in_via_slack_o_auth`
-
-Read-Only:
-
-- `access_token` (String) Slack access_token. See our docs if you need help generating the token.
-- `client_id` (String) Slack client_id. See our docs if you need help finding this id.
-- `client_secret` (String) Slack client_secret. See our docs if you need help finding this secret.
-- `option_title` (String) must be one of ["Default OAuth2.0 authorization"]
-
-
-
-### Nested Schema for `configuration.credentials.source_slack_update_authentication_mechanism_api_token`
-
-Read-Only:
-
-- `api_token` (String) A Slack bot token. See the docs for instructions on how to generate it.
-- `option_title` (String) must be one of ["API Token Credentials"]
-
-
-
-### Nested Schema for `configuration.credentials.source_slack_update_authentication_mechanism_sign_in_via_slack_o_auth`
-
-Read-Only:
-
-- `access_token` (String) Slack access_token. See our docs if you need help generating the token.
-- `client_id` (String) Slack client_id. See our docs if you need help finding this id.
-- `client_secret` (String) Slack client_secret. See our docs if you need help finding this secret.
-- `option_title` (String) must be one of ["Default OAuth2.0 authorization"]
-
diff --git a/docs/data-sources/source_smaily.md b/docs/data-sources/source_smaily.md
index 18af44e6a..7e5e55dd0 100644
--- a/docs/data-sources/source_smaily.md
+++ b/docs/data-sources/source_smaily.md
@@ -14,7 +14,6 @@ SourceSmaily DataSource
```terraform
data "airbyte_source_smaily" "my_source_smaily" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_smaily" "my_source_smaily" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_password` (String) API user password. See https://smaily.com/help/api/general/create-api-user/
-- `api_subdomain` (String) API Subdomain. See https://smaily.com/help/api/general/create-api-user/
-- `api_username` (String) API user username. See https://smaily.com/help/api/general/create-api-user/
-- `source_type` (String) must be one of ["smaily"]
-
diff --git a/docs/data-sources/source_smartengage.md b/docs/data-sources/source_smartengage.md
index 06ed19ea9..84fe7388e 100644
--- a/docs/data-sources/source_smartengage.md
+++ b/docs/data-sources/source_smartengage.md
@@ -14,7 +14,6 @@ SourceSmartengage DataSource
```terraform
data "airbyte_source_smartengage" "my_source_smartengage" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_smartengage" "my_source_smartengage" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["smartengage"]
-
diff --git a/docs/data-sources/source_smartsheets.md b/docs/data-sources/source_smartsheets.md
index 7768c87cf..ad7dc3fa6 100644
--- a/docs/data-sources/source_smartsheets.md
+++ b/docs/data-sources/source_smartsheets.md
@@ -14,7 +14,6 @@ SourceSmartsheets DataSource
```terraform
data "airbyte_source_smartsheets" "my_source_smartsheets" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,78 +25,12 @@ data "airbyte_source_smartsheets" "my_source_smartsheets" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `metadata_fields` (List of String) A List of available columns which metadata can be pulled from.
-- `source_type` (String) must be one of ["smartsheets"]
-- `spreadsheet_id` (String) The spreadsheet ID. Find it by opening the spreadsheet then navigating to File > Properties
-- `start_datetime` (String) Only rows modified after this date/time will be replicated. This should be an ISO 8601 string, for instance: `2000-01-01T13:00:00`
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_smartsheets_authorization_method_api_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_authorization_method_api_access_token))
-- `source_smartsheets_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_authorization_method_o_auth2_0))
-- `source_smartsheets_update_authorization_method_api_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_update_authorization_method_api_access_token))
-- `source_smartsheets_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_smartsheets_authorization_method_api_access_token`
-
-Read-Only:
-
-- `access_token` (String) The access token to use for accessing your data from Smartsheets. This access token must be generated by a user with at least read access to the data you'd like to replicate. Generate an access token in the Smartsheets main menu by clicking Account > Apps & Integrations > API Access. See the setup guide for information on how to obtain this token.
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_smartsheets_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The API ID of the SmartSheets developer application.
-- `client_secret` (String) The API Secret the SmartSheets developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_smartsheets_update_authorization_method_api_access_token`
-
-Read-Only:
-
-- `access_token` (String) The access token to use for accessing your data from Smartsheets. This access token must be generated by a user with at least read access to the data you'd like to replicate. Generate an access token in the Smartsheets main menu by clicking Account > Apps & Integrations > API Access. See the setup guide for information on how to obtain this token.
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_smartsheets_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The API ID of the SmartSheets developer application.
-- `client_secret` (String) The API Secret of the SmartSheets developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
diff --git a/docs/data-sources/source_snapchat_marketing.md b/docs/data-sources/source_snapchat_marketing.md
index 26e642b8d..b39594476 100644
--- a/docs/data-sources/source_snapchat_marketing.md
+++ b/docs/data-sources/source_snapchat_marketing.md
@@ -14,7 +14,6 @@ SourceSnapchatMarketing DataSource
```terraform
data "airbyte_source_snapchat_marketing" "my_source_snapchatmarketing" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_snapchat_marketing" "my_source_snapchatmarketing" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your Snapchat developer application.
-- `client_secret` (String) The Client Secret of your Snapchat developer application.
-- `end_date` (String) Date in the format 2017-01-25. Any data after this date will not be replicated.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
-- `source_type` (String) must be one of ["snapchat-marketing"]
-- `start_date` (String) Date in the format 2022-01-01. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_snowflake.md b/docs/data-sources/source_snowflake.md
index fbd2c3c57..4a597384d 100644
--- a/docs/data-sources/source_snowflake.md
+++ b/docs/data-sources/source_snowflake.md
@@ -14,7 +14,6 @@ SourceSnowflake DataSource
```terraform
data "airbyte_source_snowflake" "my_source_snowflake" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,81 +25,12 @@ data "airbyte_source_snowflake" "my_source_snowflake" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `database` (String) The database you created for Airbyte to access data.
-- `host` (String) The host domain of the Snowflake instance (must include the account, region, cloud environment, and end with snowflakecomputing.com).
-- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `role` (String) The role you created for Airbyte to access Snowflake.
-- `schema` (String) The source Snowflake schema tables. Leave empty to access tables from multiple schemas.
-- `source_type` (String) must be one of ["snowflake"]
-- `warehouse` (String) The warehouse you created for Airbyte to access data.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_snowflake_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_authorization_method_o_auth2_0))
-- `source_snowflake_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_authorization_method_username_and_password))
-- `source_snowflake_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_update_authorization_method_o_auth2_0))
-- `source_snowflake_update_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_update_authorization_method_username_and_password))
-
-
-### Nested Schema for `configuration.credentials.source_snowflake_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["OAuth"]
-- `client_id` (String) The Client ID of your Snowflake developer application.
-- `client_secret` (String) The Client Secret of your Snowflake developer application.
-- `refresh_token` (String) Refresh Token for making authenticated requests.
-
-
-
-### Nested Schema for `configuration.credentials.source_snowflake_authorization_method_username_and_password`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["username/password"]
-- `password` (String) The password associated with the username.
-- `username` (String) The username you created to allow Airbyte to access the database.
-
-
-
-### Nested Schema for `configuration.credentials.source_snowflake_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["OAuth"]
-- `client_id` (String) The Client ID of your Snowflake developer application.
-- `client_secret` (String) The Client Secret of your Snowflake developer application.
-- `refresh_token` (String) Refresh Token for making authenticated requests.
-
-
-
-### Nested Schema for `configuration.credentials.source_snowflake_update_authorization_method_username_and_password`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["username/password"]
-- `password` (String) The password associated with the username.
-- `username` (String) The username you created to allow Airbyte to access the database.
-
diff --git a/docs/data-sources/source_sonar_cloud.md b/docs/data-sources/source_sonar_cloud.md
index 1de5284fe..7948ba06a 100644
--- a/docs/data-sources/source_sonar_cloud.md
+++ b/docs/data-sources/source_sonar_cloud.md
@@ -14,7 +14,6 @@ SourceSonarCloud DataSource
```terraform
data "airbyte_source_sonar_cloud" "my_source_sonarcloud" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_sonar_cloud" "my_source_sonarcloud" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `component_keys` (List of String) Comma-separated list of component keys.
-- `end_date` (String) To retrieve issues created before the given date (inclusive).
-- `organization` (String) Organization key. See here.
-- `source_type` (String) must be one of ["sonar-cloud"]
-- `start_date` (String) To retrieve issues created after the given date (inclusive).
-- `user_token` (String) Your User Token. See here. The token is case sensitive.
-
diff --git a/docs/data-sources/source_spacex_api.md b/docs/data-sources/source_spacex_api.md
index db2ea8af5..5a256c803 100644
--- a/docs/data-sources/source_spacex_api.md
+++ b/docs/data-sources/source_spacex_api.md
@@ -14,7 +14,6 @@ SourceSpacexAPI DataSource
```terraform
data "airbyte_source_spacex_api" "my_source_spacexapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_spacex_api" "my_source_spacexapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `id` (String)
-- `options` (String)
-- `source_type` (String) must be one of ["spacex-api"]
-
diff --git a/docs/data-sources/source_square.md b/docs/data-sources/source_square.md
index fca8cd041..1078aab67 100644
--- a/docs/data-sources/source_square.md
+++ b/docs/data-sources/source_square.md
@@ -14,7 +14,6 @@ SourceSquare DataSource
```terraform
data "airbyte_source_square" "my_source_square" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,74 +25,12 @@ data "airbyte_source_square" "my_source_square" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `include_deleted_objects` (Boolean) In some streams there is an option to include deleted objects (Items, Categories, Discounts, Taxes)
-- `is_sandbox` (Boolean) Determines whether to use the sandbox or production environment.
-- `source_type` (String) must be one of ["square"]
-- `start_date` (String) UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated. If not set, all data will be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_square_authentication_api_key` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_authentication_api_key))
-- `source_square_authentication_oauth_authentication` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_authentication_oauth_authentication))
-- `source_square_update_authentication_api_key` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_update_authentication_api_key))
-- `source_square_update_authentication_oauth_authentication` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_update_authentication_oauth_authentication))
-
-
-### Nested Schema for `configuration.credentials.source_square_authentication_api_key`
-
-Read-Only:
-
-- `api_key` (String) The API key for a Square application
-- `auth_type` (String) must be one of ["API Key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_square_authentication_oauth_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["OAuth"]
-- `client_id` (String) The Square-issued ID of your application
-- `client_secret` (String) The Square-issued application secret for your application
-- `refresh_token` (String) A refresh token generated using the above client ID and secret
-
-
-
-### Nested Schema for `configuration.credentials.source_square_update_authentication_api_key`
-
-Read-Only:
-
-- `api_key` (String) The API key for a Square application
-- `auth_type` (String) must be one of ["API Key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_square_update_authentication_oauth_authentication`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["OAuth"]
-- `client_id` (String) The Square-issued ID of your application
-- `client_secret` (String) The Square-issued application secret for your application
-- `refresh_token` (String) A refresh token generated using the above client ID and secret
-
diff --git a/docs/data-sources/source_strava.md b/docs/data-sources/source_strava.md
index efa5437bb..f7bd57519 100644
--- a/docs/data-sources/source_strava.md
+++ b/docs/data-sources/source_strava.md
@@ -14,7 +14,6 @@ SourceStrava DataSource
```terraform
data "airbyte_source_strava" "my_source_strava" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_strava" "my_source_strava" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `athlete_id` (Number) The Athlete ID of your Strava developer application.
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Strava developer application.
-- `client_secret` (String) The Client Secret of your Strava developer application.
-- `refresh_token` (String) The Refresh Token with the activity:read_all permission.
-- `source_type` (String) must be one of ["strava"]
-- `start_date` (String) UTC date and time. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_stripe.md b/docs/data-sources/source_stripe.md
index a8b965c93..728ec1d65 100644
--- a/docs/data-sources/source_stripe.md
+++ b/docs/data-sources/source_stripe.md
@@ -14,7 +14,6 @@ SourceStripe DataSource
```terraform
data "airbyte_source_stripe" "my_source_stripe" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,26 +25,12 @@ data "airbyte_source_stripe" "my_source_stripe" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `account_id` (String) Your Stripe account ID (starts with 'acct_', find yours here).
-- `client_secret` (String) Stripe API key (usually starts with 'sk_live_'; find yours here).
-- `lookback_window_days` (Number) When set, the connector will always re-export data from the past N days, where N is the value set here. This is useful if your data is frequently updated after creation. Applies only to streams that do not support event-based incremental syncs: CheckoutSessionLineItems, Events, SetupAttempts, ShippingRates, BalanceTransactions, Files, FileLinks. More info here
-- `slice_range` (Number) The time increment used by the connector when requesting data from the Stripe API. The larger the value, the fewer requests are made and the faster the sync will be; on the other hand, the less frequently the state is persisted.
-- `source_type` (String) must be one of ["stripe"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Only data generated after this date will be replicated.
-
diff --git a/docs/data-sources/source_survey_sparrow.md b/docs/data-sources/source_survey_sparrow.md
index 77c24ca50..e383d3024 100644
--- a/docs/data-sources/source_survey_sparrow.md
+++ b/docs/data-sources/source_survey_sparrow.md
@@ -14,7 +14,6 @@ SourceSurveySparrow DataSource
```terraform
data "airbyte_source_survey_sparrow" "my_source_surveysparrow" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,65 +25,12 @@ data "airbyte_source_survey_sparrow" "my_source_surveysparrow" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Your access token. See here. The key is case sensitive.
-- `region` (Attributes) Is your account location EU based? If yes, the base URL to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region))
-- `source_type` (String) must be one of ["survey-sparrow"]
-- `survey_id` (List of String) A List of your survey ids for survey-specific stream
-
-
-### Nested Schema for `configuration.region`
-
-Read-Only:
-
-- `source_survey_sparrow_base_url_eu_based_account` (Attributes) Is your account location EU based? If yes, the base URL to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_base_url_eu_based_account))
-- `source_survey_sparrow_base_url_global_account` (Attributes) Is your account location EU based? If yes, the base URL to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_base_url_global_account))
-- `source_survey_sparrow_update_base_url_eu_based_account` (Attributes) Is your account location EU based? If yes, the base URL to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_update_base_url_eu_based_account))
-- `source_survey_sparrow_update_base_url_global_account` (Attributes) Is your account location EU based? If yes, the base URL to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_update_base_url_global_account))
-
-
-### Nested Schema for `configuration.region.source_survey_sparrow_base_url_eu_based_account`
-
-Read-Only:
-
-- `url_base` (String) must be one of ["https://eu-api.surveysparrow.com/v3"]
-
-
-
-### Nested Schema for `configuration.region.source_survey_sparrow_base_url_global_account`
-
-Read-Only:
-
-- `url_base` (String) must be one of ["https://api.surveysparrow.com/v3"]
-
-
-
-### Nested Schema for `configuration.region.source_survey_sparrow_update_base_url_eu_based_account`
-
-Read-Only:
-
-- `url_base` (String) must be one of ["https://eu-api.surveysparrow.com/v3"]
-
-
-
-### Nested Schema for `configuration.region.source_survey_sparrow_update_base_url_global_account`
-
-Read-Only:
-
-- `url_base` (String) must be one of ["https://api.surveysparrow.com/v3"]
-
diff --git a/docs/data-sources/source_surveymonkey.md b/docs/data-sources/source_surveymonkey.md
index 50d3595df..59bae368a 100644
--- a/docs/data-sources/source_surveymonkey.md
+++ b/docs/data-sources/source_surveymonkey.md
@@ -14,7 +14,6 @@ SourceSurveymonkey DataSource
```terraform
data "airbyte_source_surveymonkey" "my_source_surveymonkey" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,36 +25,12 @@ data "airbyte_source_surveymonkey" "my_source_surveymonkey" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) The authorization method to use to retrieve data from SurveyMonkey (see [below for nested schema](#nestedatt--configuration--credentials))
-- `origin` (String) must be one of ["USA", "Europe", "Canada"]
-Depending on the originating datacenter of the SurveyMonkey account, the API access URL may be different.
-- `source_type` (String) must be one of ["surveymonkey"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `survey_ids` (List of String) IDs of the surveys from which you'd like to replicate data. If left empty, data from all surveys to which you have access will be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests. See the docs for information on how to generate this key.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of the SurveyMonkey developer application.
-- `client_secret` (String) The Client Secret of the SurveyMonkey developer application.
-
diff --git a/docs/data-sources/source_tempo.md b/docs/data-sources/source_tempo.md
index 1e714ad7a..fab40e022 100644
--- a/docs/data-sources/source_tempo.md
+++ b/docs/data-sources/source_tempo.md
@@ -14,7 +14,6 @@ SourceTempo DataSource
```terraform
data "airbyte_source_tempo" "my_source_tempo" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_tempo" "my_source_tempo" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Tempo API Token. Go to Tempo > Settings, scroll down to Data Access, and select API Integration.
-- `source_type` (String) must be one of ["tempo"]
-
diff --git a/docs/data-sources/source_the_guardian_api.md b/docs/data-sources/source_the_guardian_api.md
index 8c2359883..d40fe8275 100644
--- a/docs/data-sources/source_the_guardian_api.md
+++ b/docs/data-sources/source_the_guardian_api.md
@@ -14,7 +14,6 @@ SourceTheGuardianAPI DataSource
```terraform
data "airbyte_source_the_guardian_api" "my_source_theguardianapi" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_the_guardian_api" "my_source_theguardianapi" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your API Key. See here. The key is case sensitive.
-- `end_date` (String) (Optional) Use this to set the maximum date (YYYY-MM-DD) of the results. Results newer than the end_date will not be shown. Default is set to the current date (today) for incremental syncs.
-- `query` (String) (Optional) The query (q) parameter filters the results to only those that include that search term. The q parameter supports AND, OR and NOT operators.
-- `section` (String) (Optional) Use this to filter the results by a particular section. See here for a list of all sections, and here for the sections endpoint documentation.
-- `source_type` (String) must be one of ["the-guardian-api"]
-- `start_date` (String) Use this to set the minimum date (YYYY-MM-DD) of the results. Results older than the start_date will not be shown.
-- `tag` (String) (Optional) A tag is a piece of data that is used by The Guardian to categorise content. Use this parameter to filter results by showing only the ones matching the entered tag. See here for a list of all tags, and here for the tags endpoint documentation.
-
diff --git a/docs/data-sources/source_tiktok_marketing.md b/docs/data-sources/source_tiktok_marketing.md
index c023c19fb..da2ec6353 100644
--- a/docs/data-sources/source_tiktok_marketing.md
+++ b/docs/data-sources/source_tiktok_marketing.md
@@ -14,7 +14,6 @@ SourceTiktokMarketing DataSource
```terraform
data "airbyte_source_tiktok_marketing" "my_source_tiktokmarketing" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,79 +25,12 @@ data "airbyte_source_tiktok_marketing" "my_source_tiktokmarketing" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `attribution_window` (Number) The attribution window in days.
-- `credentials` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials))
-- `end_date` (String) The date until which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DD. All data generated between start_date and this date will be replicated. Not setting this option will result in always syncing the data till the current date.
-- `include_deleted` (Boolean) Set to active if you want to include deleted data in reports.
-- `source_type` (String) must be one of ["tiktok-marketing"]
-- `start_date` (String) The Start Date in format: YYYY-MM-DD. Any data before this date will not be replicated. If this parameter is not set, all data will be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_tiktok_marketing_authentication_method_o_auth2_0` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_authentication_method_o_auth2_0))
-- `source_tiktok_marketing_authentication_method_sandbox_access_token` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_authentication_method_sandbox_access_token))
-- `source_tiktok_marketing_update_authentication_method_o_auth2_0` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_update_authentication_method_o_auth2_0))
-- `source_tiktok_marketing_update_authentication_method_sandbox_access_token` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_update_authentication_method_sandbox_access_token))
-
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_authentication_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Long-term Authorized Access Token.
-- `advertiser_id` (String) The Advertiser ID to filter reports and streams. Leave this empty to retrieve all.
-- `app_id` (String) The Developer Application App ID.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `secret` (String) The Developer Application Secret.
-
-
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_authentication_method_sandbox_access_token`
-
-Read-Only:
-
-- `access_token` (String) The long-term authorized access token.
-- `advertiser_id` (String) The Advertiser ID generated for the developer's Sandbox application.
-- `auth_type` (String) must be one of ["sandbox_access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_update_authentication_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Long-term Authorized Access Token.
-- `advertiser_id` (String) The Advertiser ID to filter reports and streams. Leave this empty to retrieve all.
-- `app_id` (String) The Developer Application App ID.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `secret` (String) The Developer Application Secret.
-
-
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_update_authentication_method_sandbox_access_token`
-
-Read-Only:
-
-- `access_token` (String) The long-term authorized access token.
-- `advertiser_id` (String) The Advertiser ID generated for the developer's Sandbox application.
-- `auth_type` (String) must be one of ["sandbox_access_token"]
-
diff --git a/docs/data-sources/source_todoist.md b/docs/data-sources/source_todoist.md
index 9dd87f1cc..06143db99 100644
--- a/docs/data-sources/source_todoist.md
+++ b/docs/data-sources/source_todoist.md
@@ -14,7 +14,6 @@ SourceTodoist DataSource
```terraform
data "airbyte_source_todoist" "my_source_todoist" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_todoist" "my_source_todoist" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["todoist"]
-- `token` (String) Your API Token. See here. The token is case sensitive.
-
diff --git a/docs/data-sources/source_trello.md b/docs/data-sources/source_trello.md
index 7e782f778..f798e19d2 100644
--- a/docs/data-sources/source_trello.md
+++ b/docs/data-sources/source_trello.md
@@ -14,7 +14,6 @@ SourceTrello DataSource
```terraform
data "airbyte_source_trello" "my_source_trello" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_trello" "my_source_trello" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `board_ids` (List of String) IDs of the boards to replicate data from. If left empty, data from all boards to which you have access will be replicated.
-- `key` (String) Trello API key. See the docs for instructions on how to generate it.
-- `source_type` (String) must be one of ["trello"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `token` (String) Trello API token. See the docs for instructions on how to generate it.
-
diff --git a/docs/data-sources/source_trustpilot.md b/docs/data-sources/source_trustpilot.md
index 40768f864..ce38c9bc7 100644
--- a/docs/data-sources/source_trustpilot.md
+++ b/docs/data-sources/source_trustpilot.md
@@ -14,7 +14,6 @@ SourceTrustpilot DataSource
```terraform
data "airbyte_source_trustpilot" "my_source_trustpilot" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,77 +25,12 @@ data "airbyte_source_trustpilot" "my_source_trustpilot" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `business_units` (List of String) The names of business units which shall be synchronized. Some streams e.g. configured_business_units or private_reviews use this configuration.
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["trustpilot"]
-- `start_date` (String) For streams with sync. method incremental the start date time to be used
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_trustpilot_authorization_method_api_key` (Attributes) The API key authentication method gives you access to only the streams which are part of the Public API. When you want to get streams available via the Consumer API (e.g. the private reviews) you need to use authentication method OAuth 2.0. (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_authorization_method_api_key))
-- `source_trustpilot_authorization_method_o_auth_2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_authorization_method_o_auth_2_0))
-- `source_trustpilot_update_authorization_method_api_key` (Attributes) The API key authentication method gives you access to only the streams which are part of the Public API. When you want to get streams available via the Consumer API (e.g. the private reviews) you need to use authentication method OAuth 2.0. (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_update_authorization_method_api_key))
-- `source_trustpilot_update_authorization_method_o_auth_2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_update_authorization_method_o_auth_2_0))
-
-
-### Nested Schema for `configuration.credentials.source_trustpilot_authorization_method_api_key`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["apikey"]
-- `client_id` (String) The API key of the Trustpilot API application.
-
-
-
-### Nested Schema for `configuration.credentials.source_trustpilot_authorization_method_o_auth_2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The API key of the Trustpilot API application. (represents the OAuth Client ID)
-- `client_secret` (String) The Secret of the Trustpilot API application. (represents the OAuth Client Secret)
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_trustpilot_update_authorization_method_api_key`
-
-Read-Only:
-
-- `auth_type` (String) must be one of ["apikey"]
-- `client_id` (String) The API key of the Trustpilot API application.
-
-
-
-### Nested Schema for `configuration.credentials.source_trustpilot_update_authorization_method_o_auth_2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The API key of the Trustpilot API application. (represents the OAuth Client ID)
-- `client_secret` (String) The Secret of the Trustpilot API application. (represents the OAuth Client Secret)
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
diff --git a/docs/data-sources/source_tvmaze_schedule.md b/docs/data-sources/source_tvmaze_schedule.md
index e5fb9f150..04a2d1160 100644
--- a/docs/data-sources/source_tvmaze_schedule.md
+++ b/docs/data-sources/source_tvmaze_schedule.md
@@ -14,7 +14,6 @@ SourceTvmazeSchedule DataSource
```terraform
data "airbyte_source_tvmaze_schedule" "my_source_tvmazeschedule" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,27 +25,12 @@ data "airbyte_source_tvmaze_schedule" "my_source_tvmazeschedule" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `domestic_schedule_country_code` (String) Country code for domestic TV schedule retrieval.
-- `end_date` (String) End date for TV schedule retrieval. May be in the future. Optional.
-- `source_type` (String) must be one of ["tvmaze-schedule"]
-- `start_date` (String) Start date for TV schedule retrieval. May be in the future.
-- `web_schedule_country_code` (String) ISO 3166-1 country code for web TV schedule retrieval. Leave blank for
-all countries plus global web channels (e.g. Netflix). Alternatively,
-set to 'global' for just global web channels.
-
diff --git a/docs/data-sources/source_twilio.md b/docs/data-sources/source_twilio.md
index 9c1438cc8..932c5e0f3 100644
--- a/docs/data-sources/source_twilio.md
+++ b/docs/data-sources/source_twilio.md
@@ -14,7 +14,6 @@ SourceTwilio DataSource
```terraform
data "airbyte_source_twilio" "my_source_twilio" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_twilio" "my_source_twilio" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `account_sid` (String) Twilio account SID
-- `auth_token` (String) Twilio Auth Token.
-- `lookback_window` (Number) How far into the past to look for records. (in minutes)
-- `source_type` (String) must be one of ["twilio"]
-- `start_date` (String) UTC date and time in the format 2020-10-01T00:00:00Z. Any data before this date will not be replicated.
-
diff --git a/docs/data-sources/source_twilio_taskrouter.md b/docs/data-sources/source_twilio_taskrouter.md
index ed0927859..834fcdce4 100644
--- a/docs/data-sources/source_twilio_taskrouter.md
+++ b/docs/data-sources/source_twilio_taskrouter.md
@@ -14,7 +14,6 @@ SourceTwilioTaskrouter DataSource
```terraform
data "airbyte_source_twilio_taskrouter" "my_source_twiliotaskrouter" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_twilio_taskrouter" "my_source_twiliotaskrouter" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `account_sid` (String) Twilio Account ID
-- `auth_token` (String) Twilio Auth Token
-- `source_type` (String) must be one of ["twilio-taskrouter"]
-
diff --git a/docs/data-sources/source_twitter.md b/docs/data-sources/source_twitter.md
index 39e43977a..8dcc57d16 100644
--- a/docs/data-sources/source_twitter.md
+++ b/docs/data-sources/source_twitter.md
@@ -14,7 +14,6 @@ SourceTwitter DataSource
```terraform
data "airbyte_source_twitter" "my_source_twitter" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_twitter" "my_source_twitter" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) App only Bearer Token. See the docs for more information on how to obtain this token.
-- `end_date` (String) The end date for retrieving tweets must be a minimum of 10 seconds prior to the request time.
-- `query` (String) Query for matching Tweets. You can learn how to build this query by reading build a query guide .
-- `source_type` (String) must be one of ["twitter"]
-- `start_date` (String) The start date for retrieving tweets cannot be more than 7 days in the past.
-
diff --git a/docs/data-sources/source_typeform.md b/docs/data-sources/source_typeform.md
index fec7cb66f..8da3c5ab3 100644
--- a/docs/data-sources/source_typeform.md
+++ b/docs/data-sources/source_typeform.md
@@ -14,7 +14,6 @@ SourceTypeform DataSource
```terraform
data "airbyte_source_typeform" "my_source_typeform" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,77 +25,12 @@ data "airbyte_source_typeform" "my_source_typeform" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `form_ids` (List of String) When this parameter is set, the connector will replicate data only from the input forms. Otherwise, all forms in your Typeform account will be replicated. You can find form IDs in your form URLs. For example, in the URL "https://mysite.typeform.com/to/u6nXL7" the form_id is u6nXL7. You can find form URLs on Share panel
-- `source_type` (String) must be one of ["typeform"]
-- `start_date` (String) The date from which you'd like to replicate data for Typeform API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_typeform_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_authorization_method_o_auth2_0))
-- `source_typeform_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_authorization_method_private_token))
-- `source_typeform_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_update_authorization_method_o_auth2_0))
-- `source_typeform_update_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_update_authorization_method_private_token))
-
-
-### Nested Schema for `configuration.credentials.source_typeform_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of the Typeform developer application.
-- `client_secret` (String) The Client Secret the Typeform developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_typeform_authorization_method_private_token`
-
-Read-Only:
-
-- `access_token` (String) Log into your Typeform account and then generate a personal Access Token.
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_typeform_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of the Typeform developer application.
-- `client_secret` (String) The Client Secret the Typeform developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_typeform_update_authorization_method_private_token`
-
-Read-Only:
-
-- `access_token` (String) Log into your Typeform account and then generate a personal Access Token.
-- `auth_type` (String) must be one of ["access_token"]
-
diff --git a/docs/data-sources/source_us_census.md b/docs/data-sources/source_us_census.md
index b54bae126..0bfd24acf 100644
--- a/docs/data-sources/source_us_census.md
+++ b/docs/data-sources/source_us_census.md
@@ -14,7 +14,6 @@ SourceUsCensus DataSource
```terraform
data "airbyte_source_us_census" "my_source_uscensus" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,24 +25,12 @@ data "airbyte_source_us_census" "my_source_uscensus" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Your API Key. Get your key here.
-- `query_params` (String) The query parameters portion of the GET request, without the api key
-- `query_path` (String) The path portion of the GET request
-- `source_type` (String) must be one of ["us-census"]
-
diff --git a/docs/data-sources/source_vantage.md b/docs/data-sources/source_vantage.md
index a555ecf01..f76e6ff96 100644
--- a/docs/data-sources/source_vantage.md
+++ b/docs/data-sources/source_vantage.md
@@ -14,7 +14,6 @@ SourceVantage DataSource
```terraform
data "airbyte_source_vantage" "my_source_vantage" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_vantage" "my_source_vantage" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Your API Access token. See here.
-- `source_type` (String) must be one of ["vantage"]
-
diff --git a/docs/data-sources/source_webflow.md b/docs/data-sources/source_webflow.md
index f842f597c..c50135773 100644
--- a/docs/data-sources/source_webflow.md
+++ b/docs/data-sources/source_webflow.md
@@ -14,7 +14,6 @@ SourceWebflow DataSource
```terraform
data "airbyte_source_webflow" "my_source_webflow" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,23 +25,12 @@ data "airbyte_source_webflow" "my_source_webflow" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) The API token for authenticating to Webflow. See https://university.webflow.com/lesson/intro-to-the-webflow-api
-- `site_id` (String) The id of the Webflow site you are requesting data from. See https://developers.webflow.com/#sites
-- `source_type` (String) must be one of ["webflow"]
-
diff --git a/docs/data-sources/source_whisky_hunter.md b/docs/data-sources/source_whisky_hunter.md
index 410fbfb32..c31769eeb 100644
--- a/docs/data-sources/source_whisky_hunter.md
+++ b/docs/data-sources/source_whisky_hunter.md
@@ -14,7 +14,6 @@ SourceWhiskyHunter DataSource
```terraform
data "airbyte_source_whisky_hunter" "my_source_whiskyhunter" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,21 +25,12 @@ data "airbyte_source_whisky_hunter" "my_source_whiskyhunter" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["whisky-hunter"]
-
diff --git a/docs/data-sources/source_wikipedia_pageviews.md b/docs/data-sources/source_wikipedia_pageviews.md
index b08e98319..7ec5b9b6c 100644
--- a/docs/data-sources/source_wikipedia_pageviews.md
+++ b/docs/data-sources/source_wikipedia_pageviews.md
@@ -14,7 +14,6 @@ SourceWikipediaPageviews DataSource
```terraform
data "airbyte_source_wikipedia_pageviews" "my_source_wikipediapageviews" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,28 +25,12 @@ data "airbyte_source_wikipedia_pageviews" "my_source_wikipediapageviews" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access` (String) If you want to filter by access method, use one of desktop, mobile-app or mobile-web. If you are interested in pageviews regardless of access method, use all-access.
-- `agent` (String) If you want to filter by agent type, use one of user, automated or spider. If you are interested in pageviews regardless of agent type, use all-agents.
-- `article` (String) The title of any article in the specified project. Any spaces should be replaced with underscores. It also should be URI-encoded, so that non-URI-safe characters like %, / or ? are accepted.
-- `country` (String) The ISO 3166-1 alpha-2 code of a country for which to retrieve top articles.
-- `end` (String) The date of the last day to include, in YYYYMMDD or YYYYMMDDHH format.
-- `project` (String) If you want to filter by project, use the domain of any Wikimedia project.
-- `source_type` (String) must be one of ["wikipedia-pageviews"]
-- `start` (String) The date of the first day to include, in YYYYMMDD or YYYYMMDDHH format.
-
diff --git a/docs/data-sources/source_woocommerce.md b/docs/data-sources/source_woocommerce.md
index 771edd9af..8343df48b 100644
--- a/docs/data-sources/source_woocommerce.md
+++ b/docs/data-sources/source_woocommerce.md
@@ -14,7 +14,6 @@ SourceWoocommerce DataSource
```terraform
data "airbyte_source_woocommerce" "my_source_woocommerce" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_woocommerce" "my_source_woocommerce" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_key` (String) Customer Key for API in WooCommerce shop
-- `api_secret` (String) Customer Secret for API in WooCommerce shop
-- `shop` (String) The name of the store. For https://EXAMPLE.com, the shop name is 'EXAMPLE.com'.
-- `source_type` (String) must be one of ["woocommerce"]
-- `start_date` (String) The date you would like to replicate data from. Format: YYYY-MM-DD
-
diff --git a/docs/data-sources/source_xero.md b/docs/data-sources/source_xero.md
deleted file mode 100644
index f2e902a60..000000000
--- a/docs/data-sources/source_xero.md
+++ /dev/null
@@ -1,60 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_xero Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceXero DataSource
----
-
-# airbyte_source_xero (Data Source)
-
-SourceXero DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_source_xero" "my_source_xero" {
- secret_id = "...my_secret_id..."
- source_id = "...my_source_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `source_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--authentication))
-- `source_type` (String) must be one of ["xero"]
-- `start_date` (String) UTC date and time in the format YYYY-MM-DDTHH:mm:ssZ. Any data with created_at before this data will not be synced.
-- `tenant_id` (String) Enter your Xero organization's Tenant ID
-
-
-### Nested Schema for `configuration.authentication`
-
-Read-Only:
-
-- `access_token` (String) Enter your Xero application's access token
-- `client_id` (String) Enter your Xero application's Client ID
-- `client_secret` (String) Enter your Xero application's Client Secret
-- `refresh_token` (String) Enter your Xero application's refresh token
-- `token_expiry_date` (String) The date-time when the access token should be refreshed
-
-
diff --git a/docs/data-sources/source_xkcd.md b/docs/data-sources/source_xkcd.md
index 8ff9e638d..a62fea152 100644
--- a/docs/data-sources/source_xkcd.md
+++ b/docs/data-sources/source_xkcd.md
@@ -14,7 +14,6 @@ SourceXkcd DataSource
```terraform
data "airbyte_source_xkcd" "my_source_xkcd" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,21 +25,12 @@ data "airbyte_source_xkcd" "my_source_xkcd" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `source_type` (String) must be one of ["xkcd"]
-
diff --git a/docs/data-sources/source_yandex_metrica.md b/docs/data-sources/source_yandex_metrica.md
index b92daa5f2..397c0bb29 100644
--- a/docs/data-sources/source_yandex_metrica.md
+++ b/docs/data-sources/source_yandex_metrica.md
@@ -14,7 +14,6 @@ SourceYandexMetrica DataSource
```terraform
data "airbyte_source_yandex_metrica" "my_source_yandexmetrica" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_yandex_metrica" "my_source_yandexmetrica" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `auth_token` (String) Your Yandex Metrica API access token
-- `counter_id` (String) Counter ID
-- `end_date` (String) Starting point for your data replication, in format of "YYYY-MM-DD". If not provided will sync till most recent date.
-- `source_type` (String) must be one of ["yandex-metrica"]
-- `start_date` (String) Starting point for your data replication, in format of "YYYY-MM-DD".
-
diff --git a/docs/data-sources/source_yotpo.md b/docs/data-sources/source_yotpo.md
index 38279d770..262fbb1a3 100644
--- a/docs/data-sources/source_yotpo.md
+++ b/docs/data-sources/source_yotpo.md
@@ -14,7 +14,6 @@ SourceYotpo DataSource
```terraform
data "airbyte_source_yotpo" "my_source_yotpo" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_yotpo" "my_source_yotpo" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `access_token` (String) Access token recieved as a result of API call to https://api.yotpo.com/oauth/token (Ref- https://apidocs.yotpo.com/reference/yotpo-authentication)
-- `app_key` (String) App key found at settings (Ref- https://settings.yotpo.com/#/general_settings)
-- `email` (String) Email address registered with yotpo.
-- `source_type` (String) must be one of ["yotpo"]
-- `start_date` (String) Date time filter for incremental filter, Specify which date to extract from.
-
diff --git a/docs/data-sources/source_younium.md b/docs/data-sources/source_younium.md
deleted file mode 100644
index 98f3eece0..000000000
--- a/docs/data-sources/source_younium.md
+++ /dev/null
@@ -1,50 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_younium Data Source - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceYounium DataSource
----
-
-# airbyte_source_younium (Data Source)
-
-SourceYounium DataSource
-
-## Example Usage
-
-```terraform
-data "airbyte_source_younium" "my_source_younium" {
- secret_id = "...my_secret_id..."
- source_id = "...my_source_id..."
-}
-```
-
-
-## Schema
-
-### Required
-
-- `source_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `legal_entity` (String) Legal Entity that data should be pulled from
-- `password` (String) Account password for younium account API key
-- `playground` (Boolean) Property defining if connector is used against playground or production environment
-- `source_type` (String) must be one of ["younium"]
-- `username` (String) Username for Younium account
-
-
diff --git a/docs/data-sources/source_youtube_analytics.md b/docs/data-sources/source_youtube_analytics.md
index cb19bd3e7..a3b1e5153 100644
--- a/docs/data-sources/source_youtube_analytics.md
+++ b/docs/data-sources/source_youtube_analytics.md
@@ -14,7 +14,6 @@ SourceYoutubeAnalytics DataSource
```terraform
data "airbyte_source_youtube_analytics" "my_source_youtubeanalytics" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,35 +25,12 @@ data "airbyte_source_youtube_analytics" "my_source_youtubeanalytics" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["youtube-analytics"]
-
-
-### Nested Schema for `configuration.credentials`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `client_id` (String) The Client ID of your developer application
-- `client_secret` (String) The client secret of your developer application
-- `refresh_token` (String) A refresh token generated using the above client ID and secret
-
diff --git a/docs/data-sources/source_zendesk_chat.md b/docs/data-sources/source_zendesk_chat.md
index 6425e6c15..a8a08300f 100644
--- a/docs/data-sources/source_zendesk_chat.md
+++ b/docs/data-sources/source_zendesk_chat.md
@@ -14,7 +14,6 @@ SourceZendeskChat DataSource
```terraform
data "airbyte_source_zendesk_chat" "my_source_zendeskchat" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,75 +25,12 @@ data "airbyte_source_zendesk_chat" "my_source_zendeskchat" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["zendesk-chat"]
-- `start_date` (String) The date from which you'd like to replicate data for Zendesk Chat API, in the format YYYY-MM-DDT00:00:00Z.
-- `subdomain` (String) Required if you access Zendesk Chat from a Zendesk Support subdomain.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_zendesk_chat_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_authorization_method_access_token))
-- `source_zendesk_chat_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_authorization_method_o_auth2_0))
-- `source_zendesk_chat_update_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_update_authorization_method_access_token))
-- `source_zendesk_chat_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_authorization_method_access_token`
-
-Read-Only:
-
-- `access_token` (String) The Access Token to make authenticated requests.
-- `credentials` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `client_id` (String) The Client ID of your OAuth application
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `credentials` (String) must be one of ["oauth2.0"]
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_update_authorization_method_access_token`
-
-Read-Only:
-
-- `access_token` (String) The Access Token to make authenticated requests.
-- `credentials` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `client_id` (String) The Client ID of your OAuth application
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `credentials` (String) must be one of ["oauth2.0"]
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
diff --git a/docs/data-sources/source_zendesk_sell.md b/docs/data-sources/source_zendesk_sell.md
new file mode 100644
index 000000000..bd403d397
--- /dev/null
+++ b/docs/data-sources/source_zendesk_sell.md
@@ -0,0 +1,36 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_zendesk_sell Data Source - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceZendeskSell DataSource
+---
+
+# airbyte_source_zendesk_sell (Data Source)
+
+SourceZendeskSell DataSource
+
+## Example Usage
+
+```terraform
+data "airbyte_source_zendesk_sell" "my_source_zendesksell" {
+ source_id = "...my_source_id..."
+}
+```
+
+
+## Schema
+
+### Required
+
+- `source_id` (String)
+
+### Read-Only
+
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
+- `name` (String)
+- `source_type` (String)
+- `workspace_id` (String)
+
+
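Across the data-source docs above, `configuration` changed from nested attributes to a single JSON-encoded string. A minimal, hypothetical sketch of consuming the new shape with Terraform's built-in `jsondecode()` (the output name is illustrative, not part of the provider):

```terraform
data "airbyte_source_zendesk_sell" "my_source_zendesksell" {
  source_id = "...my_source_id..."
}

# configuration is now a JSON string; decode it before indexing into it.
output "source_config" {
  value     = jsondecode(data.airbyte_source_zendesk_sell.my_source_zendesksell.configuration)
  sensitive = true
}
```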
diff --git a/docs/data-sources/source_zendesk_sunshine.md b/docs/data-sources/source_zendesk_sunshine.md
index 20b70b156..b3554f5ea 100644
--- a/docs/data-sources/source_zendesk_sunshine.md
+++ b/docs/data-sources/source_zendesk_sunshine.md
@@ -14,7 +14,6 @@ SourceZendeskSunshine DataSource
```terraform
data "airbyte_source_zendesk_sunshine" "my_source_zendesksunshine" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,75 +25,12 @@ data "airbyte_source_zendesk_sunshine" "my_source_zendesksunshine" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["zendesk-sunshine"]
-- `start_date` (String) The date from which you'd like to replicate data for Zendesk Sunshine API, in the format YYYY-MM-DDT00:00:00Z.
-- `subdomain` (String) The subdomain for your Zendesk Account.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_zendesk_sunshine_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_authorization_method_api_token))
-- `source_zendesk_sunshine_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_authorization_method_o_auth2_0))
-- `source_zendesk_sunshine_update_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_update_authorization_method_api_token))
-- `source_zendesk_sunshine_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_authorization_method_api_token`
-
-Read-Only:
-
-- `api_token` (String) API Token. See the docs for information on how to generate this key.
-- `auth_method` (String) must be one of ["api_token"]
-- `email` (String) The user email for your Zendesk account
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Long-term access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_update_authorization_method_api_token`
-
-Read-Only:
-
-- `api_token` (String) API Token. See the docs for information on how to generate this key.
-- `auth_method` (String) must be one of ["api_token"]
-- `email` (String) The user email for your Zendesk account
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_update_authorization_method_o_auth2_0`
-
-Read-Only:
-
-- `access_token` (String) Long-term access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-
diff --git a/docs/data-sources/source_zendesk_support.md b/docs/data-sources/source_zendesk_support.md
index 9b4abef5d..b29e7588c 100644
--- a/docs/data-sources/source_zendesk_support.md
+++ b/docs/data-sources/source_zendesk_support.md
@@ -14,7 +14,6 @@ SourceZendeskSupport DataSource
```terraform
data "airbyte_source_zendesk_support" "my_source_zendesksupport" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,92 +25,12 @@ data "airbyte_source_zendesk_support" "my_source_zendesksupport" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `ignore_pagination` (Boolean) Makes each stream read a single page of data.
-- `source_type` (String) must be one of ["zendesk-support"]
-- `start_date` (String) The UTC date and time from which you'd like to replicate data, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-- `subdomain` (String) This is your unique Zendesk subdomain that can be found in your account URL. For example, in https://MY_SUBDOMAIN.zendesk.com/, MY_SUBDOMAIN is the value of your subdomain.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_zendesk_support_authentication_api_token` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_authentication_api_token))
-- `source_zendesk_support_authentication_o_auth2_0` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_authentication_o_auth2_0))
-- `source_zendesk_support_update_authentication_api_token` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_update_authentication_api_token))
-- `source_zendesk_support_update_authentication_o_auth2_0` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_update_authentication_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_authentication_api_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_token` (String) The value of the API token generated. See our full documentation for more information on generating this token.
-- `credentials` (String) must be one of ["api_token"]
-- `email` (String) The user email for your Zendesk account.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_authentication_o_auth2_0`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `access_token` (String) The OAuth access token. See the Zendesk docs for more information on generating this token.
-- `client_id` (String) The OAuth client's ID. See this guide for more information.
-- `client_secret` (String) The OAuth client secret. See this guide for more information.
-- `credentials` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_update_authentication_api_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_token` (String) The value of the API token generated. See our full documentation for more information on generating this token.
-- `credentials` (String) must be one of ["api_token"]
-- `email` (String) The user email for your Zendesk account.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_update_authentication_o_auth2_0`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `access_token` (String) The OAuth access token. See the Zendesk docs for more information on generating this token.
-- `client_id` (String) The OAuth client's ID. See this guide for more information.
-- `client_secret` (String) The OAuth client secret. See this guide for more information.
-- `credentials` (String) must be one of ["oauth2.0"]
-
diff --git a/docs/data-sources/source_zendesk_talk.md b/docs/data-sources/source_zendesk_talk.md
index 95a5dc86b..83e9ecfbe 100644
--- a/docs/data-sources/source_zendesk_talk.md
+++ b/docs/data-sources/source_zendesk_talk.md
@@ -14,7 +14,6 @@ SourceZendeskTalk DataSource
```terraform
data "airbyte_source_zendesk_talk" "my_source_zendesktalk" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,91 +25,12 @@ data "airbyte_source_zendesk_talk" "my_source_zendesktalk" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `credentials` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["zendesk-talk"]
-- `start_date` (String) The date from which you'd like to replicate data for Zendesk Talk API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-- `subdomain` (String) This is your Zendesk subdomain that can be found in your account URL. For example, in https://{MY_SUBDOMAIN}.zendesk.com/, where MY_SUBDOMAIN is the value of your subdomain.
-
-
-### Nested Schema for `configuration.credentials`
-
-Read-Only:
-
-- `source_zendesk_talk_authentication_api_token` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_authentication_api_token))
-- `source_zendesk_talk_authentication_o_auth2_0` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_authentication_o_auth2_0))
-- `source_zendesk_talk_update_authentication_api_token` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_update_authentication_api_token))
-- `source_zendesk_talk_update_authentication_o_auth2_0` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_update_authentication_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_authentication_api_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_token` (String) The value of the API token generated. See the docs for more information.
-- `auth_type` (String) must be one of ["api_token"]
-- `email` (String) The user email for your Zendesk account.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_authentication_o_auth2_0`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `access_token` (String) The value of the API token generated. See the docs for more information.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) Client ID
-- `client_secret` (String) Client Secret
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_update_authentication_api_token`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `api_token` (String) The value of the API token generated. See the docs for more information.
-- `auth_type` (String) must be one of ["api_token"]
-- `email` (String) The user email for your Zendesk account.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_update_authentication_o_auth2_0`
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-Read-Only:
-
-- `access_token` (String) The value of the API token generated. See the docs for more information.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) Client ID
-- `client_secret` (String) Client Secret
-
diff --git a/docs/data-sources/source_zenloop.md b/docs/data-sources/source_zenloop.md
index ea54a14ee..e6eeee995 100644
--- a/docs/data-sources/source_zenloop.md
+++ b/docs/data-sources/source_zenloop.md
@@ -14,7 +14,6 @@ SourceZenloop DataSource
```terraform
data "airbyte_source_zenloop" "my_source_zenloop" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,25 +25,12 @@ data "airbyte_source_zenloop" "my_source_zenloop" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `api_token` (String) Zenloop API Token. You can get the API token in settings page here
-- `date_from` (String) Zenloop date_from. Format: 2021-10-24T03:30:30Z or 2021-10-24. Leave empty if only data from current data should be synced
-- `source_type` (String) must be one of ["zenloop"]
-- `survey_group_id` (String) Zenloop Survey Group ID. Can be found by pulling All Survey Groups via SurveyGroups stream. Leave empty to pull answers from all survey groups
-- `survey_id` (String) Zenloop Survey ID. Can be found here. Leave empty to pull answers from all surveys
-
diff --git a/docs/data-sources/source_zoho_crm.md b/docs/data-sources/source_zoho_crm.md
index 1c856c4d9..3d44996cb 100644
--- a/docs/data-sources/source_zoho_crm.md
+++ b/docs/data-sources/source_zoho_crm.md
@@ -14,7 +14,6 @@ SourceZohoCrm DataSource
```terraform
data "airbyte_source_zoho_crm" "my_source_zohocrm" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,31 +25,12 @@ data "airbyte_source_zoho_crm" "my_source_zohocrm" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `client_id` (String) OAuth2.0 Client ID
-- `client_secret` (String) OAuth2.0 Client Secret
-- `dc_region` (String) must be one of ["US", "AU", "EU", "IN", "CN", "JP"]
-Please choose the region of your Data Center location. More info by this Link
-- `edition` (String) must be one of ["Free", "Standard", "Professional", "Enterprise", "Ultimate"]
-Choose your Edition of Zoho CRM to determine API Concurrency Limits
-- `environment` (String) must be one of ["Production", "Developer", "Sandbox"]
-Please choose the environment
-- `refresh_token` (String) OAuth2.0 Refresh Token
-- `source_type` (String) must be one of ["zoho-crm"]
-- `start_datetime` (String) ISO 8601, for instance: `YYYY-MM-DD`, `YYYY-MM-DD HH:MM:SS+HH:MM`
-
diff --git a/docs/data-sources/source_zoom.md b/docs/data-sources/source_zoom.md
index 8ad13c4cf..1450109ae 100644
--- a/docs/data-sources/source_zoom.md
+++ b/docs/data-sources/source_zoom.md
@@ -14,7 +14,6 @@ SourceZoom DataSource
```terraform
data "airbyte_source_zoom" "my_source_zoom" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,22 +25,12 @@ data "airbyte_source_zoom" "my_source_zoom" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `jwt_token` (String) JWT Token
-- `source_type` (String) must be one of ["zoom"]
-
diff --git a/docs/data-sources/source_zuora.md b/docs/data-sources/source_zuora.md
index 28e7259ce..b1f8ffca1 100644
--- a/docs/data-sources/source_zuora.md
+++ b/docs/data-sources/source_zuora.md
@@ -14,7 +14,6 @@ SourceZuora DataSource
```terraform
data "airbyte_source_zuora" "my_source_zuora" {
- secret_id = "...my_secret_id..."
source_id = "...my_source_id..."
}
```
@@ -26,29 +25,12 @@ data "airbyte_source_zuora" "my_source_zuora" {
- `source_id` (String)
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
### Read-Only
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `configuration` (String) Parsed as JSON.
+The values required to configure the source.
- `name` (String)
+- `source_type` (String)
- `workspace_id` (String)
-
-### Nested Schema for `configuration`
-
-Read-Only:
-
-- `client_id` (String) Your OAuth user Client ID
-- `client_secret` (String) Your OAuth user Client Secret
-- `data_query` (String) must be one of ["Live", "Unlimited"]
-Choose between `Live`, or `Unlimited` - the optimized, replicated database at 12 hours freshness for high volume extraction Link
-- `source_type` (String) must be one of ["zuora"]
-- `start_date` (String) Start Date in format: YYYY-MM-DD
-- `tenant_endpoint` (String) must be one of ["US Production", "US Cloud Production", "US API Sandbox", "US Cloud API Sandbox", "US Central Sandbox", "US Performance Test", "EU Production", "EU API Sandbox", "EU Central Sandbox"]
-Please choose the right endpoint where your Tenant is located. More info by this Link
-- `window_in_days` (String) The amount of days for each data-chunk begining from start_date. Bigger the value - faster the fetch. (0.1 - as for couple of hours, 1 - as for a Day; 364 - as for a Year).
-
diff --git a/docs/data-sources/workspace.md b/docs/data-sources/workspace.md
index dc149db68..0d40ed212 100644
--- a/docs/data-sources/workspace.md
+++ b/docs/data-sources/workspace.md
@@ -27,7 +27,7 @@ data "airbyte_workspace" "my_workspace" {
### Read-Only
-- `data_residency` (String) must be one of ["auto", "us", "eu"]
-- `name` (String) Name of the workspace
+- `data_residency` (String) must be one of ["auto", "us", "eu"]; Default: "auto"
+- `name` (String)
diff --git a/docs/index.md b/docs/index.md
index be28658ed..8c1d8b88c 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -17,7 +17,7 @@ terraform {
required_providers {
airbyte = {
source = "airbytehq/airbyte"
- version = "0.3.4"
+ version = "0.3.5"
}
}
}
diff --git a/docs/resources/connection.md b/docs/resources/connection.md
index 83e738768..d7324a66c 100644
--- a/docs/resources/connection.md
+++ b/docs/resources/connection.md
@@ -20,7 +20,7 @@ resource "airbyte_connection" "my_connection" {
cursor_field = [
"...",
]
- name = "Terrence Rau"
+ name = "Cecil Johnson"
primary_key = [
[
"...",
@@ -30,19 +30,19 @@ resource "airbyte_connection" "my_connection" {
},
]
}
- data_residency = "us"
- destination_id = "d69a674e-0f46-47cc-8796-ed151a05dfc2"
- name = "Wilfred Wolff"
- namespace_definition = "custom_format"
+ data_residency = "auto"
+ destination_id = "e362083e-afc8-4559-94e0-a570f6dd427d"
+ name = "Melvin O'Connell"
+ namespace_definition = "source"
namespace_format = SOURCE_NAMESPACE
- non_breaking_schema_updates_behavior = "disable_connection"
+ non_breaking_schema_updates_behavior = "propagate_columns"
prefix = "...my_prefix..."
schedule = {
basic_timing = "...my_basic_timing..."
cron_expression = "...my_cron_expression..."
- schedule_type = "cron"
+ schedule_type = "manual"
}
- source_id = "ca1ba928-fc81-4674-acb7-39205929396f"
+ source_id = "78358423-25b6-4c7b-bfd2-fd307d60cb97"
status = "deprecated"
}
```
@@ -58,12 +58,13 @@ resource "airbyte_connection" "my_connection" {
### Optional
- `configurations` (Attributes) A list of configured stream options for a connection. (see [below for nested schema](#nestedatt--configurations))
-- `data_residency` (String) must be one of ["auto", "us", "eu"]
+- `data_residency` (String) must be one of ["auto", "us", "eu"]; Default: "auto"
- `name` (String) Optional name of the connection
-- `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]
+- `namespace_definition` (String) must be one of ["source", "destination", "custom_format"]; Default: "destination"
Define the location where the data will be stored in the destination
-- `namespace_format` (String) Used when namespaceDefinition is 'custom_format'. If blank then behaves like namespaceDefinition = 'destination'. If "${SOURCE_NAMESPACE}" then behaves like namespaceDefinition = 'source'.
-- `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]
+- `namespace_format` (String) Default: null
+Used when namespaceDefinition is 'custom_format'. If blank then behaves like namespaceDefinition = 'destination'. If "${SOURCE_NAMESPACE}" then behaves like namespaceDefinition = 'source'.
+- `non_breaking_schema_updates_behavior` (String) must be one of ["ignore", "disable_connection", "propagate_columns", "propagate_fully"]; Default: "ignore"
Set how Airbyte handles syncs when it detects a non-breaking schema change in the source
- `prefix` (String) Prefix that will be prepended to the name of each stream when it is written to the destination (ex. “airbyte_” causes “projects” => “airbyte_projects”).
- `schedule` (Attributes) schedule for when the connection should run, per the schedule type (see [below for nested schema](#nestedatt--schedule))
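As the `namespace_format` description above notes, a literal `${SOURCE_NAMESPACE}` value makes the connection behave like `namespace_definition = "source"`. A hedged sketch of a custom format (all IDs illustrative); note that in HCL a literal `${` must be escaped as `$${` so Terraform does not treat it as interpolation:

```terraform
resource "airbyte_connection" "example" {
  source_id            = "...my_source_id..."
  destination_id       = "...my_destination_id..."
  namespace_definition = "custom_format"
  # $${SOURCE_NAMESPACE} renders as the literal string ${SOURCE_NAMESPACE}
  namespace_format = "$${SOURCE_NAMESPACE}_staging"
}
```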
diff --git a/docs/resources/destination_aws_datalake.md b/docs/resources/destination_aws_datalake.md
index a8eb09c1d..65cdffe7f 100644
--- a/docs/resources/destination_aws_datalake.md
+++ b/docs/resources/destination_aws_datalake.md
@@ -19,28 +19,27 @@ resource "airbyte_destination_aws_datalake" "my_destination_awsdatalake" {
bucket_name = "...my_bucket_name..."
bucket_prefix = "...my_bucket_prefix..."
credentials = {
- destination_aws_datalake_authentication_mode_iam_role = {
- credentials_title = "IAM Role"
- role_arn = "...my_role_arn..."
+ iam_role = {
+ role_arn = "...my_role_arn..."
}
}
- destination_type = "aws-datalake"
format = {
- destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json = {
+ json_lines_newline_delimited_json = {
compression_codec = "GZIP"
format_type = "JSONL"
}
}
- glue_catalog_float_as_decimal = true
+ glue_catalog_float_as_decimal = false
lakeformation_database_default_tag_key = "pii_level"
lakeformation_database_default_tag_values = "private,public"
lakeformation_database_name = "...my_lakeformation_database_name..."
- lakeformation_governed_tables = true
- partitioning = "DAY"
- region = "ap-southeast-1"
+ lakeformation_governed_tables = false
+ partitioning = "YEAR/MONTH/DAY"
+ region = "eu-west-1"
}
- name = "Dr. Rickey Boyle"
- workspace_id = "aa2352c5-9559-407a-bf1a-3a2fa9467739"
+ definition_id = "635b80f2-a9b0-4de1-897a-c8629f5a79ed"
+ name = "Blanche MacGyver"
+ workspace_id = "e76a2f8d-fb9a-4ea6-8f38-6615e68b5c3f"
}
```
@@ -50,9 +49,13 @@ resource "airbyte_destination_aws_datalake" "my_destination_awsdatalake" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -65,73 +68,47 @@ Required:
- `bucket_name` (String) The name of the S3 bucket. Read more here.
- `credentials` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `destination_type` (String) must be one of ["aws-datalake"]
- `lakeformation_database_name` (String) The default database this destination will use to create tables in per stream. Can be changed per connection by customizing the namespace.
-- `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 bucket. See here for all region codes.
Optional:
- `aws_account_id` (String) target aws account id
- `bucket_prefix` (String) S3 prefix
- `format` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format))
-- `glue_catalog_float_as_decimal` (Boolean) Cast float/double as decimal(38,18). This can help achieve higher accuracy and represent numbers correctly as received from the source.
-- `lakeformation_database_default_tag_key` (String) Add a default tag key to databases created by this destination
+- `glue_catalog_float_as_decimal` (Boolean) Default: false
+Cast float/double as decimal(38,18). This can help achieve higher accuracy and represent numbers correctly as received from the source.
+- `lakeformation_database_default_tag_key` (String, Sensitive) Add a default tag key to databases created by this destination
- `lakeformation_database_default_tag_values` (String) Add default values for the `Tag Key` to databases created by this destination. Comma separate for multiple values.
-- `lakeformation_governed_tables` (Boolean) Whether to create tables as LF governed tables.
-- `partitioning` (String) must be one of ["NO PARTITIONING", "DATE", "YEAR", "MONTH", "DAY", "YEAR/MONTH", "YEAR/MONTH/DAY"]
+- `lakeformation_governed_tables` (Boolean) Default: false
+Whether to create tables as LF governed tables.
+- `partitioning` (String) must be one of ["NO PARTITIONING", "DATE", "YEAR", "MONTH", "DAY", "YEAR/MONTH", "YEAR/MONTH/DAY"]; Default: "NO PARTITIONING"
Partition data by cursor fields when a cursor field is a date
+- `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]; Default: ""
+The region of the S3 bucket. See here for all region codes.
### Nested Schema for `configuration.credentials`
Optional:
-- `destination_aws_datalake_authentication_mode_iam_role` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_role))
-- `destination_aws_datalake_authentication_mode_iam_user` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_authentication_mode_iam_user))
-- `destination_aws_datalake_update_authentication_mode_iam_role` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_role))
-- `destination_aws_datalake_update_authentication_mode_iam_user` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--destination_aws_datalake_update_authentication_mode_iam_user))
-
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_authentication_mode_iam_role`
-
-Required:
-
-- `credentials_title` (String) must be one of ["IAM Role"]
-Name of the credentials
-- `role_arn` (String) Will assume this role to write data to s3
-
+- `iam_role` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--iam_role))
+- `iam_user` (Attributes) Choose How to Authenticate to AWS. (see [below for nested schema](#nestedatt--configuration--credentials--iam_user))
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_authentication_mode_iam_user`
+
+### Nested Schema for `configuration.credentials.iam_role`
Required:
-- `aws_access_key_id` (String) AWS User Access Key Id
-- `aws_secret_access_key` (String) Secret Access Key
-- `credentials_title` (String) must be one of ["IAM User"]
-Name of the credentials
-
-
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_role`
-
-Required:
-
-- `credentials_title` (String) must be one of ["IAM Role"]
-Name of the credentials
- `role_arn` (String) Will assume this role to write data to s3
-
-### Nested Schema for `configuration.credentials.destination_aws_datalake_update_authentication_mode_iam_user`
+
+### Nested Schema for `configuration.credentials.iam_user`
Required:
-- `aws_access_key_id` (String) AWS User Access Key Id
-- `aws_secret_access_key` (String) Secret Access Key
-- `credentials_title` (String) must be one of ["IAM User"]
-Name of the credentials
+- `aws_access_key_id` (String, Sensitive) AWS User Access Key Id
+- `aws_secret_access_key` (String, Sensitive) Secret Access Key
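The renamed credential blocks above shrink considerably in Terraform configurations, since `credentials_title` is no longer set and the block name (`iam_role` vs `iam_user`) now selects the authentication mode. A minimal sketch under the new names (the resource type `airbyte_destination_aws_datalake`, the ARN, and the IDs are placeholders, and the other required configuration attributes are omitted):

```terraform
resource "airbyte_destination_aws_datalake" "example" {
  configuration = {
    credentials = {
      # Previously: destination_aws_datalake_authentication_mode_iam_role
      iam_role = {
        role_arn = "arn:aws:iam::123456789012:role/airbyte-writer" # placeholder
      }
    }
    # ...other required configuration attributes omitted for brevity
  }
  name         = "my-datalake"
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder
}
```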
@@ -140,60 +117,26 @@ Name of the credentials
Optional:
-- `destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json))
-- `destination_aws_datalake_output_format_wildcard_parquet_columnar_storage` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_output_format_wildcard_parquet_columnar_storage))
-- `destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json))
-- `destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage))
+- `json_lines_newline_delimited_json` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json))
+- `parquet_columnar_storage` (Attributes) Format of the data output. (see [below for nested schema](#nestedatt--configuration--format--parquet_columnar_storage))
-
-### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_json_lines_newline_delimited_json`
-
-Required:
-
-- `format_type` (String) must be one of ["JSONL"]
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json`
Optional:
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
+- `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]; Default: "UNCOMPRESSED"
The compression algorithm used to compress data.
+- `format_type` (String) must be one of ["JSONL"]; Default: "JSONL"
-
-### Nested Schema for `configuration.format.destination_aws_datalake_output_format_wildcard_parquet_columnar_storage`
-
-Required:
-
-- `format_type` (String) must be one of ["Parquet"]
-
-Optional:
-
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
-The compression algorithm used to compress data.
-
-
-
-### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_json_lines_newline_delimited_json`
-
-Required:
-
-- `format_type` (String) must be one of ["JSONL"]
-
-Optional:
-
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "GZIP"]
-The compression algorithm used to compress data.
-
-
-
-### Nested Schema for `configuration.format.destination_aws_datalake_update_output_format_wildcard_parquet_columnar_storage`
-
-Required:
-
-- `format_type` (String) must be one of ["Parquet"]
+
+### Nested Schema for `configuration.format.parquet_columnar_storage`
Optional:
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]
+- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "ZSTD"]; Default: "SNAPPY"
The compression algorithm used to compress data.
+- `format_type` (String) must be one of ["Parquet"]; Default: "Parquet"
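Because both `format_type` and `compression_codec` now carry defaults, the format block can be reduced to the variant name alone. A sketch against the renamed block (the override value shown is one of the documented enum options):

```terraform
# Inside the destination's `configuration` block:
format = {
  parquet_columnar_storage = {
    compression_codec = "ZSTD" # optional; defaults to "SNAPPY"
  }
}
```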
diff --git a/docs/resources/destination_azure_blob_storage.md b/docs/resources/destination_azure_blob_storage.md
index 4fc8e3411..b9895f27e 100644
--- a/docs/resources/destination_azure_blob_storage.md
+++ b/docs/resources/destination_azure_blob_storage.md
@@ -21,16 +21,15 @@ resource "airbyte_destination_azure_blob_storage" "my_destination_azureblobstora
azure_blob_storage_endpoint_domain_name = "blob.core.windows.net"
azure_blob_storage_output_buffer_size = 5
azure_blob_storage_spill_size = 500
- destination_type = "azure-blob-storage"
format = {
- destination_azure_blob_storage_output_format_csv_comma_separated_values = {
- flattening = "No flattening"
- format_type = "CSV"
+ csv_comma_separated_values = {
+ flattening = "No flattening"
}
}
}
- name = "Matt Hamill"
- workspace_id = "3f5ad019-da1f-4fe7-8f09-7b0074f15471"
+ definition_id = "b38acf3b-23ea-44e3-abf4-ba0e7ac63cda"
+ name = "Rogelio Purdy"
+ workspace_id = "cd76c9fd-07c9-468d-acb9-cb44c87d9163"
}
```
@@ -40,9 +39,13 @@ resource "airbyte_destination_azure_blob_storage" "my_destination_azureblobstora
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -53,61 +56,38 @@ resource "airbyte_destination_azure_blob_storage" "my_destination_azureblobstora
Required:
-- `azure_blob_storage_account_key` (String) The Azure blob storage account key.
+- `azure_blob_storage_account_key` (String, Sensitive) The Azure blob storage account key.
- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
-- `destination_type` (String) must be one of ["azure-blob-storage"]
- `format` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format))
Optional:
- `azure_blob_storage_container_name` (String) The name of the Azure Blob Storage container. If it does not exist, it will be created automatically. May be empty; then it will be created automatically as airbytecontainer+timestamp
-- `azure_blob_storage_endpoint_domain_name` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
-- `azure_blob_storage_output_buffer_size` (Number) The amount of megabytes to buffer for the output stream to Azure. This will impact memory footprint on workers, but may need adjustment for performance and appropriate block size in Azure.
-- `azure_blob_storage_spill_size` (Number) The amount of megabytes after which the connector should spill the records in a new blob object. Make sure to configure size greater than individual records. Enter 0 if not applicable
+- `azure_blob_storage_endpoint_domain_name` (String) Default: "blob.core.windows.net"
+This is the Azure Blob Storage endpoint domain name. Leave the default value (or leave it empty if running the container from the command line) to use the native Microsoft endpoint.
+- `azure_blob_storage_output_buffer_size` (Number) Default: 5
+The number of megabytes to buffer for the output stream to Azure. This will impact the memory footprint on workers, but may need adjustment for performance and an appropriate block size in Azure.
+- `azure_blob_storage_spill_size` (Number) Default: 500
+The number of megabytes after which the connector should spill records into a new blob object. Make sure to configure a size greater than individual records. Enter 0 if not applicable.
### Nested Schema for `configuration.format`
Optional:
-- `destination_azure_blob_storage_output_format_csv_comma_separated_values` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_output_format_csv_comma_separated_values))
-- `destination_azure_blob_storage_output_format_json_lines_newline_delimited_json` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_output_format_json_lines_newline_delimited_json))
-- `destination_azure_blob_storage_update_output_format_csv_comma_separated_values` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_csv_comma_separated_values))
-- `destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json))
-
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_csv_comma_separated_values`
-
-Required:
-
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_output_format_json_lines_newline_delimited_json`
-
-Required:
-
-- `format_type` (String) must be one of ["JSONL"]
+- `csv_comma_separated_values` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values))
+- `json_lines_newline_delimited_json` (Attributes) Output data format (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json))
+
+### Nested Schema for `configuration.format.csv_comma_separated_values`
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_csv_comma_separated_values`
-
-Required:
+Optional:
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
+- `flattening` (String) must be one of ["No flattening", "Root level flattening"]; Default: "No flattening"
Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-
-### Nested Schema for `configuration.format.destination_azure_blob_storage_update_output_format_json_lines_newline_delimited_json`
-
-Required:
-- `format_type` (String) must be one of ["JSONL"]
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json`
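With `format_type` defaulted to "JSONL", the JSONL variant needs no attributes at all. A sketch of the minimal format block under the new name:

```terraform
# Inside the destination's `configuration` block:
format = {
  json_lines_newline_delimited_json = {} # format_type defaults to "JSONL"
}
```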
diff --git a/docs/resources/destination_bigquery.md b/docs/resources/destination_bigquery.md
index aaf0fbe69..45f547141 100644
--- a/docs/resources/destination_bigquery.md
+++ b/docs/resources/destination_bigquery.md
@@ -18,30 +18,28 @@ resource "airbyte_destination_bigquery" "my_destination_bigquery" {
big_query_client_buffer_size_mb = 15
credentials_json = "...my_credentials_json..."
dataset_id = "...my_dataset_id..."
- dataset_location = "australia-southeast2"
- destination_type = "bigquery"
+ dataset_location = "me-central2"
+ disable_type_dedupe = true
loading_method = {
- destination_bigquery_loading_method_gcs_staging = {
+ gcs_staging = {
credential = {
- destination_bigquery_loading_method_gcs_staging_credential_hmac_key = {
- credential_type = "HMAC_KEY"
+ destination_bigquery_hmac_key = {
hmac_key_access_id = "1234567890abcdefghij1234"
hmac_key_secret = "1234567890abcdefghij1234567890ABCDEFGHIJ"
}
}
- file_buffer_count = 10
gcs_bucket_name = "airbyte_sync"
gcs_bucket_path = "data_sync/test"
- keep_files_in_gcs_bucket = "Delete all tmp files from GCS"
- method = "GCS Staging"
+ keep_files_in_gcs_bucket = "Keep all tmp files in GCS"
}
}
project_id = "...my_project_id..."
raw_data_dataset = "...my_raw_data_dataset..."
transformation_priority = "batch"
}
- name = "Edna Pouros"
- workspace_id = "d488e1e9-1e45-40ad-aabd-44269802d502"
+ definition_id = "2d142842-c5e9-475e-80d1-1a3c6d933cc0"
+ name = "Miss Celia Moore"
+ workspace_id = "2d2700dc-d43a-4c80-9ede-88b16b5e1575"
}
```
@@ -51,9 +49,13 @@ resource "airbyte_destination_bigquery" "my_destination_bigquery" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -65,18 +67,20 @@ resource "airbyte_destination_bigquery" "my_destination_bigquery" {
Required:
- `dataset_id` (String) The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
-- `dataset_location` (String) must be one of ["US", "EU", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "europe-central1", "europe-central2", "europe-north1", "europe-southwest1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "europe-west7", "europe-west8", "europe-west9", "me-west1", "northamerica-northeast1", "northamerica-northeast2", "southamerica-east1", "southamerica-west1", "us-central1", "us-east1", "us-east2", "us-east3", "us-east4", "us-east5", "us-west1", "us-west2", "us-west3", "us-west4"]
+- `dataset_location` (String) must be one of ["US", "EU", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "europe-central1", "europe-central2", "europe-north1", "europe-southwest1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "europe-west7", "europe-west8", "europe-west9", "europe-west12", "me-central1", "me-central2", "me-west1", "northamerica-northeast1", "northamerica-northeast2", "southamerica-east1", "southamerica-west1", "us-central1", "us-east1", "us-east2", "us-east3", "us-east4", "us-east5", "us-south1", "us-west1", "us-west2", "us-west3", "us-west4"]
The location of the dataset. Warning: Changes made after creation will not be applied. Read more here.
-- `destination_type` (String) must be one of ["bigquery"]
- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset. Read more here.
Optional:
-- `big_query_client_buffer_size_mb` (Number) Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here.
+- `big_query_client_buffer_size_mb` (Number) Default: 15
+Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here.
- `credentials_json` (String) The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
-- `loading_method` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method))
-- `raw_data_dataset` (String) The dataset to write raw tables into
-- `transformation_priority` (String) must be one of ["interactive", "batch"]
+- `disable_type_dedupe` (Boolean) Default: false
+Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions
+- `loading_method` (Attributes) The way data will be uploaded to BigQuery. (see [below for nested schema](#nestedatt--configuration--loading_method))
+- `raw_data_dataset` (String) The dataset to write raw tables into (default: airbyte_internal)
+- `transformation_priority` (String) must be one of ["interactive", "batch"]; Default: "interactive"
Interactive run type means that the query is executed as soon as possible, and these queries count towards concurrent rate limit and daily limit. Read more about interactive run type here. Batch queries are queued and started as soon as idle resources are available in the BigQuery shared resource pool, which usually occurs within a few minutes. Batch queries don’t count towards your concurrent rate limit. Read more about batch queries here. The default "interactive" value is used if not set explicitly.
@@ -84,94 +88,42 @@ Interactive run type means that the query is executed as soon as possible, and t
Optional:
-- `destination_bigquery_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_gcs_staging))
-- `destination_bigquery_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_standard_inserts))
-- `destination_bigquery_update_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_gcs_staging))
-- `destination_bigquery_update_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_standard_inserts))
+- `gcs_staging` (Attributes) (recommended) Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO to load your data into BigQuery. Provides best-in-class speed, reliability and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--gcs_staging))
+- `standard_inserts` (Attributes) (not recommended) Direct loading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In all other cases, you should use GCS staging. (see [below for nested schema](#nestedatt--configuration--loading_method--standard_inserts))
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_gcs_staging`
+
+### Nested Schema for `configuration.loading_method.gcs_staging`
Required:
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_gcs_staging--credential))
+- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--gcs_staging--credential))
- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written.
-- `method` (String) must be one of ["GCS Staging"]
Optional:
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
+- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]; Default: "Delete all tmp files from GCS"
This upload method temporarily stores records in a GCS bucket. With this option you can choose whether these records should be removed from GCS when the migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_gcs_staging.keep_files_in_gcs_bucket`
+
+### Nested Schema for `configuration.loading_method.gcs_staging.keep_files_in_gcs_bucket`
Optional:
-- `destination_bigquery_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_loading_method_gcs_staging--keep_files_in_gcs_bucket--destination_bigquery_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_gcs_staging.keep_files_in_gcs_bucket.destination_bigquery_loading_method_gcs_staging_credential_hmac_key`
-
-Required:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
+- `hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--gcs_staging--keep_files_in_gcs_bucket--hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_loading_method_standard_inserts`
+
+### Nested Schema for `configuration.loading_method.gcs_staging.keep_files_in_gcs_bucket.hmac_key`
Required:
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_gcs_staging`
-
-Required:
+- `hmac_key_access_id` (String, Sensitive) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
+- `hmac_key_secret` (String, Sensitive) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_gcs_staging--credential))
-- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
-- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written.
-- `method` (String) must be one of ["GCS Staging"]
-Optional:
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
-This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_gcs_staging.keep_files_in_gcs_bucket`
-
-Optional:
-
-- `destination_bigquery_update_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_update_loading_method_gcs_staging--keep_files_in_gcs_bucket--destination_bigquery_update_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_gcs_staging.keep_files_in_gcs_bucket.destination_bigquery_update_loading_method_gcs_staging_credential_hmac_key`
-
-Required:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_update_loading_method_standard_inserts`
-
-Required:
-- `method` (String) must be one of ["Standard"]
+
+### Nested Schema for `configuration.loading_method.standard_inserts`
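Since the `method` discriminator was dropped from both variants, the (not recommended) Standard Inserts path is now an empty block selected purely by its name. A sketch for quick testing (the dataset, project, and workspace values are placeholders):

```terraform
resource "airbyte_destination_bigquery" "quick_test" {
  configuration = {
    dataset_id       = "my_dataset"     # placeholder
    dataset_location = "US"
    project_id       = "my-gcp-project" # placeholder
    loading_method = {
      # The block name selects the variant; no `method` attribute required
      standard_inserts = {}
    }
  }
  name         = "bigquery-quick-test"
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder
}
```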
diff --git a/docs/resources/destination_bigquery_denormalized.md b/docs/resources/destination_bigquery_denormalized.md
deleted file mode 100644
index 0c6bbd5fc..000000000
--- a/docs/resources/destination_bigquery_denormalized.md
+++ /dev/null
@@ -1,172 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_destination_bigquery_denormalized Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- DestinationBigqueryDenormalized Resource
----
-
-# airbyte_destination_bigquery_denormalized (Resource)
-
-DestinationBigqueryDenormalized Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_destination_bigquery_denormalized" "my_destination_bigquerydenormalized" {
- configuration = {
- big_query_client_buffer_size_mb = 15
- credentials_json = "...my_credentials_json..."
- dataset_id = "...my_dataset_id..."
- dataset_location = "europe-west7"
- destination_type = "bigquery-denormalized"
- loading_method = {
- destination_bigquery_denormalized_loading_method_gcs_staging = {
- credential = {
- destination_bigquery_denormalized_loading_method_gcs_staging_credential_hmac_key = {
- credential_type = "HMAC_KEY"
- hmac_key_access_id = "1234567890abcdefghij1234"
- hmac_key_secret = "1234567890abcdefghij1234567890ABCDEFGHIJ"
- }
- }
- file_buffer_count = 10
- gcs_bucket_name = "airbyte_sync"
- gcs_bucket_path = "data_sync/test"
- keep_files_in_gcs_bucket = "Keep all tmp files in GCS"
- method = "GCS Staging"
- }
- }
- project_id = "...my_project_id..."
- }
- name = "Francisco Windler"
- workspace_id = "c969e9a3-efa7-47df-b14c-d66ae395efb9"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Read-Only
-
-- `destination_id` (String)
-- `destination_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `dataset_id` (String) The default BigQuery Dataset ID that tables are replicated to if the source does not specify a namespace. Read more here.
-- `destination_type` (String) must be one of ["bigquery-denormalized"]
-- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset. Read more here.
-
-Optional:
-
-- `big_query_client_buffer_size_mb` (Number) Google BigQuery client's chunk (buffer) size (MIN=1, MAX = 15) for each table. The size that will be written by a single RPC. Written data will be buffered and only flushed upon reaching this size or closing the channel. The default 15MB value is used if not set explicitly. Read more here.
-- `credentials_json` (String) The contents of the JSON service account key. Check out the docs if you need help generating this key. Default credentials will be used if this field is left empty.
-- `dataset_location` (String) must be one of ["US", "EU", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "europe-central1", "europe-central2", "europe-north1", "europe-southwest1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "europe-west7", "europe-west8", "europe-west9", "me-west1", "northamerica-northeast1", "northamerica-northeast2", "southamerica-east1", "southamerica-west1", "us-central1", "us-east1", "us-east2", "us-east3", "us-east4", "us-east5", "us-west1", "us-west2", "us-west3", "us-west4"]
-The location of the dataset. Warning: Changes made after creation will not be applied. The default "US" value is used if not set explicitly. Read more here.
-- `loading_method` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method))
-
-
-### Nested Schema for `configuration.loading_method`
-
-Optional:
-
-- `destination_bigquery_denormalized_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_gcs_staging))
-- `destination_bigquery_denormalized_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_standard_inserts))
-- `destination_bigquery_denormalized_update_loading_method_gcs_staging` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_gcs_staging))
-- `destination_bigquery_denormalized_update_loading_method_standard_inserts` (Attributes) Loading method used to send select the way data will be uploaded to BigQuery.
Standard Inserts - Direct uploading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In almost all cases, you should use staging.
GCS Staging - Writes large batches of records to a file, uploads the file to GCS, then uses COPY INTO table to upload the file. Recommended for most workloads for better speed and scalability. Read more about GCS Staging here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_standard_inserts))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_gcs_staging`
-
-Required:
-
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_gcs_staging--credential))
-- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
-- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written. Read more here.
-- `method` (String) must be one of ["GCS Staging"]
-
-Optional:
-
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
-This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_gcs_staging.keep_files_in_gcs_bucket`
-
-Optional:
-
-- `destination_bigquery_denormalized_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_loading_method_gcs_staging--keep_files_in_gcs_bucket--destination_bigquery_denormalized_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_gcs_staging.keep_files_in_gcs_bucket.destination_bigquery_denormalized_loading_method_gcs_staging_credential_hmac_key`
-
-Required:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_loading_method_standard_inserts`
-
-Required:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_gcs_staging`
-
-Required:
-
-- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_gcs_staging--credential))
-- `gcs_bucket_name` (String) The name of the GCS bucket. Read more here.
-- `gcs_bucket_path` (String) Directory under the GCS bucket where data will be written. Read more here.
-- `method` (String) must be one of ["GCS Staging"]
-
-Optional:
-
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `keep_files_in_gcs_bucket` (String) must be one of ["Delete all tmp files from GCS", "Keep all tmp files in GCS"]
-This upload method is supposed to temporary store records in GCS bucket. By this select you can chose if these records should be removed from GCS when migration has finished. The default "Delete all tmp files from GCS" value is used if not set explicitly.
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_gcs_staging.keep_files_in_gcs_bucket`
-
-Optional:
-
-- `destination_bigquery_denormalized_update_loading_method_gcs_staging_credential_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--loading_method--destination_bigquery_denormalized_update_loading_method_gcs_staging--keep_files_in_gcs_bucket--destination_bigquery_denormalized_update_loading_method_gcs_staging_credential_hmac_key))
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_gcs_staging.keep_files_in_gcs_bucket.destination_bigquery_denormalized_update_loading_method_gcs_staging_credential_hmac_key`
-
-Required:
-
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) HMAC key access ID. When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string.
-
-
-
-
-
-### Nested Schema for `configuration.loading_method.destination_bigquery_denormalized_update_loading_method_standard_inserts`
-
-Required:
-
-- `method` (String) must be one of ["Standard"]
-
-
diff --git a/docs/resources/destination_clickhouse.md b/docs/resources/destination_clickhouse.md
index 5dbe84b6c..5f5919893 100644
--- a/docs/resources/destination_clickhouse.md
+++ b/docs/resources/destination_clickhouse.md
@@ -15,21 +15,19 @@ DestinationClickhouse Resource
```terraform
resource "airbyte_destination_clickhouse" "my_destination_clickhouse" {
configuration = {
- database = "...my_database..."
- destination_type = "clickhouse"
- host = "...my_host..."
- jdbc_url_params = "...my_jdbc_url_params..."
- password = "...my_password..."
- port = 8123
+ database = "...my_database..."
+ host = "...my_host..."
+ jdbc_url_params = "...my_jdbc_url_params..."
+ password = "...my_password..."
+ port = 8123
tunnel_method = {
- destination_clickhouse_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ no_tunnel = {}
}
- username = "Magdalena_Kuvalis"
+ username = "Rhianna_Leannon"
}
- name = "Sandy Huels"
- workspace_id = "97074ba4-469b-46e2-9419-59890afa563e"
+ definition_id = "2c276398-b468-48ad-b426-53c327fa18b5"
+ name = "Gerardo Corwin"
+ workspace_id = "4f41e22e-39b6-461a-89af-71290b2c6d65"
}
```
@@ -39,9 +37,13 @@ resource "airbyte_destination_clickhouse" "my_destination_clickhouse" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -53,15 +55,15 @@ resource "airbyte_destination_clickhouse" "my_destination_clickhouse" {
Required:
- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["clickhouse"]
- `host` (String) Hostname of the database.
-- `port` (Number) HTTP port of the database.
- `username` (String) Username to use to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
+- `port` (Number) Default: 8123
+HTTP port of the database.
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -69,80 +71,41 @@ Optional:
Optional:
-- `destination_clickhouse_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_ssh_tunnel_method_no_tunnel))
-- `destination_clickhouse_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_ssh_tunnel_method_password_authentication))
-- `destination_clickhouse_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_ssh_tunnel_method_ssh_key_authentication))
-- `destination_clickhouse_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_update_ssh_tunnel_method_no_tunnel))
-- `destination_clickhouse_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_update_ssh_tunnel_method_password_authentication))
-- `destination_clickhouse_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_clickhouse_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_ssh_tunnel_method_no_tunnel`
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
+Optional:
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_update_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with ssh-keygen -t rsa -m PEM -f myuser_rsa)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_clickhouse_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
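The flattened `tunnel_method` schema above can be exercised as follows. This is a hedged sketch against the new schema, not output from the generator; all hostnames, credentials, and IDs are placeholders:

```terraform
resource "airbyte_destination_clickhouse" "tunneled" {
  configuration = {
    database = "analytics"
    host     = "clickhouse.internal.example.com"
    username = "airbyte_user"
    password = "...my_password..."
    # port is omitted: the default 8123 applies
    tunnel_method = {
      ssh_key_authentication = {
        tunnel_host = "bastion.example.com"
        tunnel_user = "airbyte"
        ssh_key     = file("myuser_rsa") # RSA PEM key, per the schema note above
        # tunnel_port is omitted: the default 22 applies
      }
    }
  }
  name         = "clickhouse-via-bastion"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```

Switching to `no_tunnel = {}` or `password_authentication = { ... }` selects the other variants; exactly one of the three nested blocks should be set.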
diff --git a/docs/resources/destination_convex.md b/docs/resources/destination_convex.md
index 680d10d2b..d592ddc92 100644
--- a/docs/resources/destination_convex.md
+++ b/docs/resources/destination_convex.md
@@ -15,12 +15,12 @@ DestinationConvex Resource
```terraform
resource "airbyte_destination_convex" "my_destination_convex" {
configuration = {
- access_key = "...my_access_key..."
- deployment_url = "https://murky-swan-635.convex.cloud"
- destination_type = "convex"
+ access_key = "...my_access_key..."
+ deployment_url = "https://cluttered-owl-337.convex.cloud"
}
- name = "Joyce Kertzmann"
- workspace_id = "4c8b711e-5b7f-4d2e-9028-921cddc69260"
+ definition_id = "335e03ab-ebb7-41b5-8e87-2ec68b6d2a9c"
+ name = "Patsy Powlowski"
+ workspace_id = "6941566f-22fd-430a-a8af-8c1d27b3e573"
}
```
@@ -30,9 +30,13 @@ resource "airbyte_destination_convex" "my_destination_convex" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -43,8 +47,7 @@ resource "airbyte_destination_convex" "my_destination_convex" {
Required:
-- `access_key` (String) API access key used to send data to a Convex deployment.
+- `access_key` (String, Sensitive) API access key used to send data to a Convex deployment.
- `deployment_url` (String) URL of the Convex deployment that is the destination
-- `destination_type` (String) must be one of ["convex"]
diff --git a/docs/resources/destination_cumulio.md b/docs/resources/destination_cumulio.md
index 29875b578..52ddf5562 100644
--- a/docs/resources/destination_cumulio.md
+++ b/docs/resources/destination_cumulio.md
@@ -15,13 +15,13 @@ DestinationCumulio Resource
```terraform
resource "airbyte_destination_cumulio" "my_destination_cumulio" {
configuration = {
- api_host = "...my_api_host..."
- api_key = "...my_api_key..."
- api_token = "...my_api_token..."
- destination_type = "cumulio"
+ api_host = "...my_api_host..."
+ api_key = "...my_api_key..."
+ api_token = "...my_api_token..."
}
- name = "Ebony Predovic"
- workspace_id = "6b0d5f0d-30c5-4fbb-a587-053202c73d5f"
+ definition_id = "c0eb8223-613d-423c-a875-293aec4aa100"
+ name = "Felipe Champlin"
+ workspace_id = "22581a88-452d-4e7c-b5eb-92a9e952da29"
}
```
@@ -31,9 +31,13 @@ resource "airbyte_destination_cumulio" "my_destination_cumulio" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -44,9 +48,12 @@ resource "airbyte_destination_cumulio" "my_destination_cumulio" {
Required:
-- `api_host` (String) URL of the Cumul.io API (e.g. 'https://api.cumul.io', 'https://api.us.cumul.io', or VPC-specific API url). Defaults to 'https://api.cumul.io'.
-- `api_key` (String) An API key generated in Cumul.io's platform (can be generated here: https://app.cumul.io/start/profile/integration).
-- `api_token` (String) The corresponding API token generated in Cumul.io's platform (can be generated here: https://app.cumul.io/start/profile/integration).
-- `destination_type` (String) must be one of ["cumulio"]
+- `api_key` (String, Sensitive) An API key generated in Cumul.io's platform (can be generated here: https://app.cumul.io/start/profile/integration).
+- `api_token` (String, Sensitive) The corresponding API token generated in Cumul.io's platform (can be generated here: https://app.cumul.io/start/profile/integration).
+
+Optional:
+
+- `api_host` (String) Default: "https://api.cumul.io"
+URL of the Cumul.io API (e.g. 'https://api.cumul.io', 'https://api.us.cumul.io', or a VPC-specific API URL).
diff --git a/docs/resources/destination_databend.md b/docs/resources/destination_databend.md
index 7cf563b9e..d6e3d0dda 100644
--- a/docs/resources/destination_databend.md
+++ b/docs/resources/destination_databend.md
@@ -15,16 +15,16 @@ DestinationDatabend Resource
```terraform
resource "airbyte_destination_databend" "my_destination_databend" {
configuration = {
- database = "...my_database..."
- destination_type = "databend"
- host = "...my_host..."
- password = "...my_password..."
- port = 443
- table = "default"
- username = "Leo.Purdy"
+ database = "...my_database..."
+ host = "...my_host..."
+ password = "...my_password..."
+ port = 443
+ table = "default"
+ username = "Kira78"
}
- name = "Bobby Kutch V"
- workspace_id = "b3fe49a8-d9cb-4f48-a333-23f9b77f3a41"
+ definition_id = "006aecee-7c88-4461-9655-998ae24eec56"
+ name = "Josefina Rosenbaum"
+ workspace_id = "48d71917-bd77-4158-87e0-4c579843cbfb"
}
```
@@ -34,9 +34,13 @@ resource "airbyte_destination_databend" "my_destination_databend" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -48,14 +52,15 @@ resource "airbyte_destination_databend" "my_destination_databend" {
Required:
- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["databend"]
- `host` (String) Hostname of the database.
- `username` (String) Username to use to access the database.
Optional:
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `table` (String) The default table was written to.
+- `password` (String, Sensitive) Password associated with the username.
+- `port` (Number) Default: 443
+Port of the database.
+- `table` (String) Default: "default"
+The default table data is written to.
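A minimal configuration that relies on the defaults above (port 443, table "default") might look like this. This is an illustrative sketch; all values are placeholders:

```terraform
resource "airbyte_destination_databend" "minimal" {
  configuration = {
    database = "analytics"
    host     = "databend.example.com"
    username = "airbyte_user"
    password = "...my_password..."
    # port and table are omitted: defaults 443 and "default" apply
  }
  name         = "databend-minimal"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```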
diff --git a/docs/resources/destination_databricks.md b/docs/resources/destination_databricks.md
index c8014b63a..bfea10ef6 100644
--- a/docs/resources/destination_databricks.md
+++ b/docs/resources/destination_databricks.md
@@ -17,22 +17,20 @@ resource "airbyte_destination_databricks" "my_destination_databricks" {
configuration = {
accept_terms = false
data_source = {
- destination_databricks_data_source_recommended_managed_tables = {
- data_source_type = "MANAGED_TABLES_STORAGE"
- }
+ recommended_managed_tables = {}
}
database = "...my_database..."
databricks_http_path = "sql/protocolvx/o/1234567489/0000-1111111-abcd90"
databricks_personal_access_token = "dapi0123456789abcdefghij0123456789AB"
databricks_port = "443"
databricks_server_hostname = "abc-12345678-wxyz.cloud.databricks.com"
- destination_type = "databricks"
enable_schema_evolution = true
purge_staging_data = false
schema = "default"
}
- name = "Bertha Thompson"
- workspace_id = "69280d1b-a77a-489e-bf73-7ae4203ce5e6"
+ definition_id = "05d7306c-fa6f-460b-bc11-e74f736d7a95"
+ name = "Meghan Mitchell"
+ workspace_id = "4c049945-edd6-4e95-a416-d119e802e071"
}
```
@@ -42,9 +40,13 @@ resource "airbyte_destination_databricks" "my_destination_databricks" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -55,112 +57,67 @@ resource "airbyte_destination_databricks" "my_destination_databricks" {
Required:
-- `accept_terms` (Boolean) You must agree to the Databricks JDBC Driver Terms & Conditions to use this connector.
- `data_source` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source))
- `databricks_http_path` (String) Databricks Cluster HTTP Path.
-- `databricks_personal_access_token` (String) Databricks Personal Access Token for making authenticated requests.
+- `databricks_personal_access_token` (String, Sensitive) Databricks Personal Access Token for making authenticated requests.
- `databricks_server_hostname` (String) Databricks Cluster Server Hostname.
-- `destination_type` (String) must be one of ["databricks"]
Optional:
+- `accept_terms` (Boolean) Default: false
+You must agree to the Databricks JDBC Driver Terms & Conditions to use this connector.
- `database` (String) The name of the catalog. If not specified otherwise, the "hive_metastore" will be used.
-- `databricks_port` (String) Databricks Cluster Port.
-- `enable_schema_evolution` (Boolean) Support schema evolution for all streams. If "false", the connector might fail when a stream's schema changes.
-- `purge_staging_data` (Boolean) Default to 'true'. Switch it to 'false' for debugging purpose.
-- `schema` (String) The default schema tables are written. If not specified otherwise, the "default" will be used.
+- `databricks_port` (String) Default: "443"
+Databricks Cluster Port.
+- `enable_schema_evolution` (Boolean) Default: false
+Support schema evolution for all streams. If "false", the connector might fail when a stream's schema changes.
+- `purge_staging_data` (Boolean) Default: true
+Switch it to 'false' for debugging purposes.
+- `schema` (String) Default: "default"
+The default schema tables are written. If not specified otherwise, the "default" will be used.
### Nested Schema for `configuration.data_source`
Optional:
-- `destination_databricks_data_source_amazon_s3` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_data_source_amazon_s3))
-- `destination_databricks_data_source_azure_blob_storage` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_data_source_azure_blob_storage))
-- `destination_databricks_data_source_recommended_managed_tables` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_data_source_recommended_managed_tables))
-- `destination_databricks_update_data_source_amazon_s3` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_update_data_source_amazon_s3))
-- `destination_databricks_update_data_source_azure_blob_storage` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_update_data_source_azure_blob_storage))
-- `destination_databricks_update_data_source_recommended_managed_tables` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--destination_databricks_update_data_source_recommended_managed_tables))
+- `amazon_s3` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--amazon_s3))
+- `azure_blob_storage` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--azure_blob_storage))
+- `recommended_managed_tables` (Attributes) Storage on which the delta lake is built. (see [below for nested schema](#nestedatt--configuration--data_source--recommended_managed_tables))
-
-### Nested Schema for `configuration.data_source.destination_databricks_data_source_amazon_s3`
+
+### Nested Schema for `configuration.data_source.amazon_s3`
Required:
-- `data_source_type` (String) must be one of ["S3_STORAGE"]
-- `s3_access_key_id` (String) The Access Key Id granting allow one to access the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket.
+- `s3_access_key_id` (String, Sensitive) The Access Key Id that grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket.
- `s3_bucket_name` (String) The name of the S3 bucket to use for intermittent staging of the data.
- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 staging bucket to use if utilising a copy strategy.
-- `s3_secret_access_key` (String) The corresponding secret to the above access key id.
+- `s3_secret_access_key` (String, Sensitive) The corresponding secret to the above access key id.
Optional:
- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_data_source_azure_blob_storage`
-
-Required:
-
-- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
-- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container.
-- `azure_blob_storage_sas_token` (String) Shared access signature (SAS) token to grant limited access to objects in your storage account.
-- `data_source_type` (String) must be one of ["AZURE_BLOB_STORAGE"]
-
-Optional:
-
-- `azure_blob_storage_endpoint_domain_name` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_data_source_recommended_managed_tables`
-
-Required:
-
-- `data_source_type` (String) must be one of ["MANAGED_TABLES_STORAGE"]
-
-
-
-### Nested Schema for `configuration.data_source.destination_databricks_update_data_source_amazon_s3`
-
-Required:
-
-- `data_source_type` (String) must be one of ["S3_STORAGE"]
-- `s3_access_key_id` (String) The Access Key Id granting allow one to access the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket.
-- `s3_bucket_name` (String) The name of the S3 bucket to use for intermittent staging of the data.
-- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
+- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]; Default: ""
The region of the S3 staging bucket to use if utilising a copy strategy.
-- `s3_secret_access_key` (String) The corresponding secret to the above access key id.
-
-Optional:
-
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-
-### Nested Schema for `configuration.data_source.destination_databricks_update_data_source_azure_blob_storage`
+
+### Nested Schema for `configuration.data_source.azure_blob_storage`
Required:
- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container.
-- `azure_blob_storage_sas_token` (String) Shared access signature (SAS) token to grant limited access to objects in your storage account.
-- `data_source_type` (String) must be one of ["AZURE_BLOB_STORAGE"]
+- `azure_blob_storage_sas_token` (String, Sensitive) Shared access signature (SAS) token to grant limited access to objects in your storage account.
Optional:
-- `azure_blob_storage_endpoint_domain_name` (String) This is Azure Blob Storage endpoint domain name. Leave default value (or leave it empty if run container from command line) to use Microsoft native from example.
-
+- `azure_blob_storage_endpoint_domain_name` (String) Default: "blob.core.windows.net"
+This is the Azure Blob Storage endpoint domain name. Leave the default value (or leave it empty if running the container from the command line) to use the native Microsoft endpoint.
-
-### Nested Schema for `configuration.data_source.destination_databricks_update_data_source_recommended_managed_tables`
-
-Required:
-- `data_source_type` (String) must be one of ["MANAGED_TABLES_STORAGE"]
+
+### Nested Schema for `configuration.data_source.recommended_managed_tables`
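Editor's note: the renamed `data_source` blocks above drop the long `destination_databricks_*` prefixes and the `data_source_type` discriminator. A minimal, hypothetical sketch of the new `azure_blob_storage` variant follows (values are placeholders, and other required Databricks attributes are omitted for brevity):

```terraform
# Partial sketch only: shows the renamed data_source.azure_blob_storage block.
# All values are illustrative; azure_blob_storage_endpoint_domain_name is
# omitted because it defaults to "blob.core.windows.net".
resource "airbyte_destination_databricks" "example" {
  configuration = {
    data_source = {
      azure_blob_storage = {
        azure_blob_storage_account_name   = "mystorageaccount"
        azure_blob_storage_container_name = "airbyte-staging"
        azure_blob_storage_sas_token      = "...my_sas_token..."
      }
    }
    # ...remaining required Databricks connection attributes elided...
  }
  name         = "databricks-dev"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```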
diff --git a/docs/resources/destination_dev_null.md b/docs/resources/destination_dev_null.md
index 4d2ed7135..da4a14f86 100644
--- a/docs/resources/destination_dev_null.md
+++ b/docs/resources/destination_dev_null.md
@@ -15,15 +15,13 @@ DestinationDevNull Resource
```terraform
resource "airbyte_destination_dev_null" "my_destination_devnull" {
configuration = {
- destination_type = "dev-null"
test_destination = {
- destination_dev_null_test_destination_silent = {
- test_destination_type = "SILENT"
- }
+ silent = {}
}
}
- name = "Rene Hane"
- workspace_id = "a0d446ce-2af7-4a73-8f3b-e453f870b326"
+ definition_id = "29d4644f-9dd3-4d54-87cf-b82ef1e01ef5"
+ name = "Megan King"
+ workspace_id = "9e2c85c9-04a2-403f-b157-a47112db1eec"
}
```
@@ -33,9 +31,13 @@ resource "airbyte_destination_dev_null" "my_destination_devnull" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -46,7 +48,6 @@ resource "airbyte_destination_dev_null" "my_destination_devnull" {
Required:
-- `destination_type` (String) must be one of ["dev-null"]
- `test_destination` (Attributes) The type of destination to be used (see [below for nested schema](#nestedatt--configuration--test_destination))
@@ -54,22 +55,9 @@ Required:
Optional:
-- `destination_dev_null_test_destination_silent` (Attributes) The type of destination to be used (see [below for nested schema](#nestedatt--configuration--test_destination--destination_dev_null_test_destination_silent))
-- `destination_dev_null_update_test_destination_silent` (Attributes) The type of destination to be used (see [below for nested schema](#nestedatt--configuration--test_destination--destination_dev_null_update_test_destination_silent))
-
-
-### Nested Schema for `configuration.test_destination.destination_dev_null_test_destination_silent`
-
-Required:
-
-- `test_destination_type` (String) must be one of ["SILENT"]
-
-
-
-### Nested Schema for `configuration.test_destination.destination_dev_null_update_test_destination_silent`
-
-Required:
+- `silent` (Attributes) The type of destination to be used (see [below for nested schema](#nestedatt--configuration--test_destination--silent))
-- `test_destination_type` (String) must be one of ["SILENT"]
+
+### Nested Schema for `configuration.test_destination.silent`
diff --git a/docs/resources/destination_duckdb.md b/docs/resources/destination_duckdb.md
new file mode 100644
index 000000000..52d86ef2e
--- /dev/null
+++ b/docs/resources/destination_duckdb.md
@@ -0,0 +1,58 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_destination_duckdb Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ DestinationDuckdb Resource
+---
+
+# airbyte_destination_duckdb (Resource)
+
+DestinationDuckdb Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_destination_duckdb" "my_destination_duckdb" {
+ configuration = {
+ destination_path = "motherduck:"
+ motherduck_api_key = "...my_motherduck_api_key..."
+ schema = "main"
+ }
+ definition_id = "9f91eb58-c332-4574-9699-3f062684640d"
+ name = "Bobbie Lang"
+ workspace_id = "d52cbff0-1858-4935-bdfe-2750539f4b80"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
+### Read-Only
+
+- `destination_id` (String)
+- `destination_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `destination_path` (String) Path to the .duckdb file, or the text 'md:' to connect to MotherDuck. The file will be placed inside that local mount. For more information, check out our docs.
+
+Optional:
+
+- `motherduck_api_key` (String, Sensitive) API key to use for authentication to a MotherDuck database.
+- `schema` (String) Database schema name; the default for DuckDB is 'main'.
+
+
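Editor's note: per the `destination_path` description above, the connector accepts either a MotherDuck connection string (as in the generated example) or a local `.duckdb` file path. A hedged sketch of the local-file variant, with illustrative values:

```terraform
# Illustrative local-file variant; the path and IDs are placeholders.
# motherduck_api_key is omitted since no MotherDuck connection is used.
resource "airbyte_destination_duckdb" "local_example" {
  configuration = {
    destination_path = "/local/destination.duckdb"
    schema           = "main"
  }
  name         = "duckdb-local"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```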
diff --git a/docs/resources/destination_dynamodb.md b/docs/resources/destination_dynamodb.md
index dc59cc20e..10aa98dd2 100644
--- a/docs/resources/destination_dynamodb.md
+++ b/docs/resources/destination_dynamodb.md
@@ -16,14 +16,14 @@ DestinationDynamodb Resource
resource "airbyte_destination_dynamodb" "my_destination_dynamodb" {
configuration = {
access_key_id = "A012345678910EXAMPLE"
- destination_type = "dynamodb"
dynamodb_endpoint = "http://localhost:9000"
- dynamodb_region = "eu-south-1"
+ dynamodb_region = "ap-southeast-1"
dynamodb_table_name_prefix = "airbyte_sync"
secret_access_key = "a012345678910ABCDEFGH/AbCdEfGhEXAMPLEKEY"
}
- name = "Joanna Kohler"
- workspace_id = "29cdb1a8-422b-4b67-9d23-22715bf0cbb1"
+ definition_id = "f993efae-2dca-4f86-989d-ab1153f466f7"
+ name = "Ms. Larry Reynolds"
+ workspace_id = "5aa0db79-7942-4be7-a5f1-f78855663545"
}
```
@@ -33,9 +33,13 @@ resource "airbyte_destination_dynamodb" "my_destination_dynamodb" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -46,15 +50,15 @@ resource "airbyte_destination_dynamodb" "my_destination_dynamodb" {
Required:
-- `access_key_id` (String) The access key id to access the DynamoDB. Airbyte requires Read and Write permissions to the DynamoDB.
-- `destination_type` (String) must be one of ["dynamodb"]
-- `dynamodb_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the DynamoDB.
+- `access_key_id` (String, Sensitive) The access key ID used to access DynamoDB. Airbyte requires Read and Write permissions to DynamoDB.
- `dynamodb_table_name_prefix` (String) The prefix to use when naming DynamoDB tables.
-- `secret_access_key` (String) The corresponding secret to the access key id.
+- `secret_access_key` (String, Sensitive) The corresponding secret to the access key id.
Optional:
-- `dynamodb_endpoint` (String) This is your DynamoDB endpoint url.(if you are working with AWS DynamoDB, just leave empty).
+- `dynamodb_endpoint` (String) Default: ""
+This is your DynamoDB endpoint URL (if you are working with AWS DynamoDB, just leave it empty).
+- `dynamodb_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]; Default: ""
+The region of the DynamoDB.
diff --git a/docs/resources/destination_elasticsearch.md b/docs/resources/destination_elasticsearch.md
index df6faf196..2718d954d 100644
--- a/docs/resources/destination_elasticsearch.md
+++ b/docs/resources/destination_elasticsearch.md
@@ -16,19 +16,18 @@ DestinationElasticsearch Resource
resource "airbyte_destination_elasticsearch" "my_destination_elasticsearch" {
configuration = {
authentication_method = {
- destination_elasticsearch_authentication_method_api_key_secret = {
+ api_key_secret = {
api_key_id = "...my_api_key_id..."
api_key_secret = "...my_api_key_secret..."
- method = "secret"
}
}
- ca_certificate = "...my_ca_certificate..."
- destination_type = "elasticsearch"
- endpoint = "...my_endpoint..."
- upsert = true
+ ca_certificate = "...my_ca_certificate..."
+ endpoint = "...my_endpoint..."
+ upsert = false
}
- name = "Carolyn Rohan"
- workspace_id = "90f3443a-1108-4e0a-9cf4-b921879fce95"
+ definition_id = "da65ed46-5e75-48af-92ad-38ed7ed0e5e2"
+ name = "Katherine Considine"
+ workspace_id = "7d0e4e50-95ed-494b-8ecb-397d064562ef"
}
```
@@ -38,9 +37,13 @@ resource "airbyte_destination_elasticsearch" "my_destination_elasticsearch" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -51,62 +54,38 @@ resource "airbyte_destination_elasticsearch" "my_destination_elasticsearch" {
Required:
-- `destination_type` (String) must be one of ["elasticsearch"]
- `endpoint` (String) The full url of the Elasticsearch server
Optional:
- `authentication_method` (Attributes) The type of authentication to be used (see [below for nested schema](#nestedatt--configuration--authentication_method))
- `ca_certificate` (String) CA certificate
-- `upsert` (Boolean) If a primary key identifier is defined in the source, an upsert will be performed using the primary key value as the elasticsearch doc id. Does not support composite primary keys.
+- `upsert` (Boolean) Default: true
+If a primary key identifier is defined in the source, an upsert will be performed using the primary key value as the Elasticsearch doc ID. Does not support composite primary keys.
### Nested Schema for `configuration.authentication_method`
Optional:
-- `destination_elasticsearch_authentication_method_api_key_secret` (Attributes) Use a api key and secret combination to authenticate (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_authentication_method_api_key_secret))
-- `destination_elasticsearch_authentication_method_username_password` (Attributes) Basic auth header with a username and password (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_authentication_method_username_password))
-- `destination_elasticsearch_update_authentication_method_api_key_secret` (Attributes) Use a api key and secret combination to authenticate (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_update_authentication_method_api_key_secret))
-- `destination_elasticsearch_update_authentication_method_username_password` (Attributes) Basic auth header with a username and password (see [below for nested schema](#nestedatt--configuration--authentication_method--destination_elasticsearch_update_authentication_method_username_password))
-
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_authentication_method_api_key_secret`
-
-Required:
-
-- `api_key_id` (String) The Key ID to used when accessing an enterprise Elasticsearch instance.
-- `api_key_secret` (String) The secret associated with the API Key ID.
-- `method` (String) must be one of ["secret"]
-
-
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_authentication_method_username_password`
-
-Required:
-
-- `method` (String) must be one of ["basic"]
-- `password` (String) Basic auth password to access a secure Elasticsearch server
-- `username` (String) Basic auth username to access a secure Elasticsearch server
-
+- `api_key_secret` (Attributes) Use an API key and secret combination to authenticate (see [below for nested schema](#nestedatt--configuration--authentication_method--api_key_secret))
+- `username_password` (Attributes) Basic auth header with a username and password (see [below for nested schema](#nestedatt--configuration--authentication_method--username_password))
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_update_authentication_method_api_key_secret`
+
+### Nested Schema for `configuration.authentication_method.api_key_secret`
Required:
- `api_key_id` (String) The Key ID to used when accessing an enterprise Elasticsearch instance.
- `api_key_secret` (String) The secret associated with the API Key ID.
-- `method` (String) must be one of ["secret"]
-
-### Nested Schema for `configuration.authentication_method.destination_elasticsearch_update_authentication_method_username_password`
+
+### Nested Schema for `configuration.authentication_method.username_password`
Required:
-- `method` (String) must be one of ["basic"]
-- `password` (String) Basic auth password to access a secure Elasticsearch server
+- `password` (String, Sensitive) Basic auth password to access a secure Elasticsearch server
- `username` (String) Basic auth username to access a secure Elasticsearch server
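Editor's note: the generated example above only exercises the `api_key_secret` auth variant. A sketch of the renamed `username_password` variant documented in this schema (endpoint and credentials are placeholders):

```terraform
# Hypothetical basic-auth configuration; all values are illustrative.
resource "airbyte_destination_elasticsearch" "basic_auth_example" {
  configuration = {
    endpoint = "https://elasticsearch.example.com:9200"
    authentication_method = {
      username_password = {
        username = "...my_username..."
        password = "...my_password..."
      }
    }
  }
  name         = "elasticsearch-dev"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```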
diff --git a/docs/resources/destination_firebolt.md b/docs/resources/destination_firebolt.md
index 6a7c63d78..f4dc30b3d 100644
--- a/docs/resources/destination_firebolt.md
+++ b/docs/resources/destination_firebolt.md
@@ -15,16 +15,14 @@ DestinationFirebolt Resource
```terraform
resource "airbyte_destination_firebolt" "my_destination_firebolt" {
configuration = {
- account = "...my_account..."
- database = "...my_database..."
- destination_type = "firebolt"
- engine = "...my_engine..."
- host = "api.app.firebolt.io"
+ account = "...my_account..."
+ database = "...my_database..."
+ engine = "...my_engine..."
+ host = "api.app.firebolt.io"
loading_method = {
- destination_firebolt_loading_method_external_table_via_s3 = {
+ external_table_via_s3 = {
aws_key_id = "...my_aws_key_id..."
aws_key_secret = "...my_aws_key_secret..."
- method = "S3"
s3_bucket = "...my_s3_bucket..."
s3_region = "us-east-1"
}
@@ -32,8 +30,9 @@ resource "airbyte_destination_firebolt" "my_destination_firebolt" {
password = "...my_password..."
username = "username@email.com"
}
- name = "Roman Kulas"
- workspace_id = "c7abd74d-d39c-40f5-92cf-f7c70a45626d"
+ definition_id = "d37ea6e5-cbc1-4c07-86ea-3ea494c42020"
+ name = "Jared Spencer"
+ workspace_id = "d1afa414-5a8e-4ad6-8436-1fa9c0130565"
}
```
@@ -43,9 +42,13 @@ resource "airbyte_destination_firebolt" "my_destination_firebolt" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -57,8 +60,7 @@ resource "airbyte_destination_firebolt" "my_destination_firebolt" {
Required:
- `database` (String) The database to connect to.
-- `destination_type` (String) must be one of ["firebolt"]
-- `password` (String) Firebolt password.
+- `password` (String, Sensitive) Firebolt password.
- `username` (String) Firebolt email address you use to login.
Optional:
@@ -73,48 +75,21 @@ Optional:
Optional:
-- `destination_firebolt_loading_method_external_table_via_s3` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_loading_method_external_table_via_s3))
-- `destination_firebolt_loading_method_sql_inserts` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_loading_method_sql_inserts))
-- `destination_firebolt_update_loading_method_external_table_via_s3` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_update_loading_method_external_table_via_s3))
-- `destination_firebolt_update_loading_method_sql_inserts` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--destination_firebolt_update_loading_method_sql_inserts))
+- `external_table_via_s3` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--external_table_via_s3))
+- `sql_inserts` (Attributes) Loading method used to select the way data will be uploaded to Firebolt (see [below for nested schema](#nestedatt--configuration--loading_method--sql_inserts))
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_loading_method_external_table_via_s3`
+
+### Nested Schema for `configuration.loading_method.external_table_via_s3`
Required:
-- `aws_key_id` (String) AWS access key granting read and write access to S3.
-- `aws_key_secret` (String) Corresponding secret part of the AWS Key
-- `method` (String) must be one of ["S3"]
+- `aws_key_id` (String, Sensitive) AWS access key granting read and write access to S3.
+- `aws_key_secret` (String, Sensitive) Corresponding secret part of the AWS key.
- `s3_bucket` (String) The name of the S3 bucket.
- `s3_region` (String) Region name of the S3 bucket.
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_loading_method_sql_inserts`
-
-Required:
-
-- `method` (String) must be one of ["SQL"]
-
-
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_update_loading_method_external_table_via_s3`
-
-Required:
-
-- `aws_key_id` (String) AWS access key granting read and write access to S3.
-- `aws_key_secret` (String) Corresponding secret part of the AWS Key
-- `method` (String) must be one of ["S3"]
-- `s3_bucket` (String) The name of the S3 bucket.
-- `s3_region` (String) Region name of the S3 bucket.
-
-
-
-### Nested Schema for `configuration.loading_method.destination_firebolt_update_loading_method_sql_inserts`
-
-Required:
-
-- `method` (String) must be one of ["SQL"]
+
+### Nested Schema for `configuration.loading_method.sql_inserts`
diff --git a/docs/resources/destination_firestore.md b/docs/resources/destination_firestore.md
index 77652908d..1237d484f 100644
--- a/docs/resources/destination_firestore.md
+++ b/docs/resources/destination_firestore.md
@@ -16,11 +16,11 @@ DestinationFirestore Resource
resource "airbyte_destination_firestore" "my_destination_firestore" {
configuration = {
credentials_json = "...my_credentials_json..."
- destination_type = "firestore"
project_id = "...my_project_id..."
}
- name = "Paula Jacobs I"
- workspace_id = "f16d9f5f-ce6c-4556-946c-3e250fb008c4"
+ definition_id = "53a4e50c-dde3-4bcf-b11f-630fa923b2f8"
+ name = "Sheldon Bernhard"
+ workspace_id = "868bf037-297d-4cd6-abcb-9a13f0bea64a"
}
```
@@ -30,9 +30,13 @@ resource "airbyte_destination_firestore" "my_destination_firestore" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -43,7 +47,6 @@ resource "airbyte_destination_firestore" "my_destination_firestore" {
Required:
-- `destination_type` (String) must be one of ["firestore"]
- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset.
Optional:
diff --git a/docs/resources/destination_gcs.md b/docs/resources/destination_gcs.md
index e187baa31..c2e0564e8 100644
--- a/docs/resources/destination_gcs.md
+++ b/docs/resources/destination_gcs.md
@@ -16,17 +16,16 @@ DestinationGcs Resource
resource "airbyte_destination_gcs" "my_destination_gcs" {
configuration = {
credential = {
- destination_gcs_authentication_hmac_key = {
+ hmac_key = {
credential_type = "HMAC_KEY"
hmac_key_access_id = "1234567890abcdefghij1234"
hmac_key_secret = "1234567890abcdefghij1234567890ABCDEFGHIJ"
}
}
- destination_type = "gcs"
format = {
- destination_gcs_output_format_avro_apache_avro = {
+ avro_apache_avro = {
compression_codec = {
- destination_gcs_output_format_avro_apache_avro_compression_codec_bzip2 = {
+ bzip2 = {
codec = "bzip2"
}
}
@@ -37,8 +36,9 @@ resource "airbyte_destination_gcs" "my_destination_gcs" {
gcs_bucket_path = "data_sync/test"
gcs_bucket_region = "us-west1"
}
- name = "Miss Dennis Friesen"
- workspace_id = "c366c8dd-6b14-4429-8747-4778a7bd466d"
+ definition_id = "37e4a59e-7bfd-41d4-96bd-14d08d4a7d5d"
+ name = "Opal D'Amore"
+ workspace_id = "153b42c3-2f48-4f6e-943a-0f0f39a6c151"
}
```
@@ -48,9 +48,13 @@ resource "airbyte_destination_gcs" "my_destination_gcs" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -62,14 +66,13 @@ resource "airbyte_destination_gcs" "my_destination_gcs" {
Required:
- `credential` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--credential))
-- `destination_type` (String) must be one of ["gcs"]
- `format` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format))
- `gcs_bucket_name` (String) You can find the bucket name in the App Engine Admin console Application Settings page, under the label Google Cloud Storage Bucket. Read more here.
- `gcs_bucket_path` (String) GCS Bucket Path string Subdirectory under the above bucket to sync the data into.
Optional:
-- `gcs_bucket_region` (String) must be one of ["northamerica-northeast1", "northamerica-northeast2", "us-central1", "us-east1", "us-east4", "us-west1", "us-west2", "us-west3", "us-west4", "southamerica-east1", "southamerica-west1", "europe-central2", "europe-north1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "asia", "eu", "us", "asia1", "eur4", "nam4"]
+- `gcs_bucket_region` (String) must be one of ["northamerica-northeast1", "northamerica-northeast2", "us-central1", "us-east1", "us-east4", "us-west1", "us-west2", "us-west3", "us-west4", "southamerica-east1", "southamerica-west1", "europe-central2", "europe-north1", "europe-west1", "europe-west2", "europe-west3", "europe-west4", "europe-west6", "asia-east1", "asia-east2", "asia-northeast1", "asia-northeast2", "asia-northeast3", "asia-south1", "asia-south2", "asia-southeast1", "asia-southeast2", "australia-southeast1", "australia-southeast2", "asia", "eu", "us", "asia1", "eur4", "nam4"]; Default: "us"
Select a Region of the GCS Bucket. Read more here.
@@ -77,27 +80,19 @@ Select a Region of the GCS Bucket. Read more here. (see [below for nested schema](#nestedatt--configuration--credential--destination_gcs_authentication_hmac_key))
-- `destination_gcs_update_authentication_hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--credential--destination_gcs_update_authentication_hmac_key))
+- `hmac_key` (Attributes) An HMAC key is a type of credential and can be associated with a service account or a user account in Cloud Storage. Read more here. (see [below for nested schema](#nestedatt--configuration--credential--hmac_key))
-
-### Nested Schema for `configuration.credential.destination_gcs_authentication_hmac_key`
+
+### Nested Schema for `configuration.credential.hmac_key`
Required:
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. Read more here.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string. Read more here.
-
-
-
-### Nested Schema for `configuration.credential.destination_gcs_update_authentication_hmac_key`
+- `hmac_key_access_id` (String, Sensitive) When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. Read more here.
+- `hmac_key_secret` (String, Sensitive) The corresponding secret for the access ID. It is a 40-character base-64 encoded string. Read more here.
-Required:
+Optional:
-- `credential_type` (String) must be one of ["HMAC_KEY"]
-- `hmac_key_access_id` (String) When linked to a service account, this ID is 61 characters long; when linked to a user account, it is 24 characters long. Read more here.
-- `hmac_key_secret` (String) The corresponding secret for the access ID. It is a 40-character base-64 encoded string. Read more here.
+- `credential_type` (String) must be one of ["HMAC_KEY"]; Default: "HMAC_KEY"
@@ -106,366 +101,179 @@ Required:
Optional:
-- `destination_gcs_output_format_avro_apache_avro` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro))
-- `destination_gcs_output_format_csv_comma_separated_values` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values))
-- `destination_gcs_output_format_json_lines_newline_delimited_json` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json))
-- `destination_gcs_output_format_parquet_columnar_storage` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_parquet_columnar_storage))
-- `destination_gcs_update_output_format_avro_apache_avro` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro))
-- `destination_gcs_update_output_format_csv_comma_separated_values` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values))
-- `destination_gcs_update_output_format_json_lines_newline_delimited_json` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json))
-- `destination_gcs_update_output_format_parquet_columnar_storage` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_parquet_columnar_storage))
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro`
-
-Required:
-
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type`
-
-Optional:
-
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_gcs_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_avro_apache_avro--format_type--destination_gcs_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Required:
-
-- `codec` (String) must be one of ["bzip2"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_deflate`
-
-Required:
-
-- `codec` (String) must be one of ["Deflate"]
-
-Optional:
-
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Required:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_snappy`
-
-Required:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_xz`
-
-Required:
-
-- `codec` (String) must be one of ["xz"]
-
-Optional:
-
-- `compression_level` (Number) The presets 0-3 are fast presets with medium compression. The presets 4-6 are fairly slow presets with high compression. The default preset is 6. The presets 7-9 are like the preset 6 but use bigger dictionaries and have higher compressor and decompressor memory requirements. Unless the uncompressed size of the file exceeds 8 MiB, 16 MiB, or 32 MiB, it is waste of memory to use the presets 7, 8, or 9, respectively. Read more here for details.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_avro_apache_avro.format_type.destination_gcs_output_format_avro_apache_avro_compression_codec_zstandard`
-
-Required:
-
-- `codec` (String) must be one of ["zstandard"]
-
-Optional:
-
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values`
-
-Required:
-
-- `format_type` (String) must be one of ["CSV"]
-
-Optional:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values.flattening`
-
-Optional:
-
-- `destination_gcs_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values--flattening--destination_gcs_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_gcs_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_csv_comma_separated_values--flattening--destination_gcs_output_format_csv_comma_separated_values_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values.flattening.destination_gcs_output_format_csv_comma_separated_values_compression_gzip`
-
-Optional:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_csv_comma_separated_values.flattening.destination_gcs_output_format_csv_comma_separated_values_compression_no_compression`
-
-Optional:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
+- `avro_apache_avro` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro))
+- `csv_comma_separated_values` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values))
+- `json_lines_newline_delimited_json` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json))
+- `parquet_columnar_storage` (Attributes) Output data format. One of the following formats must be selected - AVRO format, PARQUET format, CSV format, or JSONL format. (see [below for nested schema](#nestedatt--configuration--format--parquet_columnar_storage))
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json`
+
+### Nested Schema for `configuration.format.avro_apache_avro`
Required:
-- `format_type` (String) must be one of ["JSONL"]
+- `compression_codec` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--compression_codec))
Optional:
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json--compression))
+- `format_type` (String) must be one of ["Avro"]; Default: "Avro"
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json.compression`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type`
Optional:
-- `destination_gcs_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json--compression--destination_gcs_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_gcs_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_output_format_json_lines_newline_delimited_json--compression--destination_gcs_output_format_json_lines_newline_delimited_json_compression_no_compression))
+- `bzip2` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--bzip2))
+- `deflate` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--deflate))
+- `no_compression` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--no_compression))
+- `snappy` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--snappy))
+- `xz` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--xz))
+- `zstandard` (Attributes) The compression algorithm used to compress data. Defaults to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--zstandard))
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json.compression.destination_gcs_output_format_json_lines_newline_delimited_json_compression_gzip`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.bzip2`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `codec` (String) must be one of ["bzip2"]; Default: "bzip2"
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_json_lines_newline_delimited_json.compression.destination_gcs_output_format_json_lines_newline_delimited_json_compression_no_compression`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.deflate`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
+- `codec` (String) must be one of ["Deflate"]; Default: "Deflate"
+- `compression_level` (Number) Default: 0
+0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_output_format_parquet_columnar_storage`
-
-Required:
-
-- `format_type` (String) must be one of ["Parquet"]
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.no_compression`
Optional:
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
-The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro`
+- `codec` (String) must be one of ["no compression"]; Default: "no compression"
-Required:
-
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.snappy`
Optional:
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_gcs_update_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_avro_apache_avro--format_type--destination_gcs_update_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Required:
-
-- `codec` (String) must be one of ["bzip2"]
+- `codec` (String) must be one of ["snappy"]; Default: "snappy"
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_deflate`
-
-Required:
-
-- `codec` (String) must be one of ["Deflate"]
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.xz`
Optional:
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Required:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_snappy`
+- `codec` (String) must be one of ["xz"]; Default: "xz"
+- `compression_level` (Number) Default: 6
+The presets 0-3 are fast presets with medium compression. The presets 4-6 are fairly slow presets with high compression. The default preset is 6. The presets 7-9 are like the preset 6 but use bigger dictionaries and have higher compressor and decompressor memory requirements. Unless the uncompressed size of the file exceeds 8 MiB, 16 MiB, or 32 MiB, it is a waste of memory to use the presets 7, 8, or 9, respectively.
-Required:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_xz`
-Required:
-
-- `codec` (String) must be one of ["xz"]
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.zstandard`
Optional:
-- `compression_level` (Number) The presets 0-3 are fast presets with medium compression. The presets 4-6 are fairly slow presets with high compression. The default preset is 6. The presets 7-9 are like the preset 6 but use bigger dictionaries and have higher compressor and decompressor memory requirements. Unless the uncompressed size of the file exceeds 8 MiB, 16 MiB, or 32 MiB, it is waste of memory to use the presets 7, 8, or 9, respectively. Read more here for details.
+- `codec` (String) must be one of ["zstandard"]; Default: "zstandard"
+- `compression_level` (Number) Default: 3
+Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
+- `include_checksum` (Boolean) Default: false
+If true, include a checksum with each data block.
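+
+As a minimal, illustrative sketch (attribute values are example assumptions, not required settings), an Avro format block selecting the `zstandard` codec might be configured like this:
+
+```terraform
+format = {
+  avro_apache_avro = {
+    compression_codec = {
+      zstandard = {
+        compression_level = 3    # default level; higher values trade speed for size
+        include_checksum  = true # include a checksum with each data block
+      }
+    }
+  }
+}
+```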
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_avro_apache_avro.format_type.destination_gcs_update_output_format_avro_apache_avro_compression_codec_zstandard`
-Required:
-- `codec` (String) must be one of ["zstandard"]
+
+### Nested Schema for `configuration.format.csv_comma_separated_values`
Optional:
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values`
-
-Required:
-
-- `format_type` (String) must be one of ["CSV"]
-
-Optional:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
+- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values--compression))
+- `flattening` (String) must be one of ["No flattening", "Root level flattening"]; Default: "No flattening"
Whether the input JSON data should be normalized (flattened) in the output CSV. Please refer to docs for details.
+- `format_type` (String) must be one of ["CSV"]; Default: "CSV"
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values.flattening`
+
+### Nested Schema for `configuration.format.csv_comma_separated_values.format_type`
Optional:
-- `destination_gcs_update_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values--flattening--destination_gcs_update_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_gcs_update_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_csv_comma_separated_values--flattening--destination_gcs_update_output_format_csv_comma_separated_values_compression_no_compression))
+- `gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values--format_type--gzip))
+- `no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values--format_type--no_compression))
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values.flattening.destination_gcs_update_output_format_csv_comma_separated_values_compression_gzip`
+
+### Nested Schema for `configuration.format.csv_comma_separated_values.format_type.gzip`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `compression_type` (String) must be one of ["GZIP"]; Default: "GZIP"
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_csv_comma_separated_values.flattening.destination_gcs_update_output_format_csv_comma_separated_values_compression_no_compression`
+
+### Nested Schema for `configuration.format.csv_comma_separated_values.format_type.no_compression`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
-
+- `compression_type` (String) must be one of ["No Compression"]; Default: "No Compression"
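+
+For example, a CSV format block with root-level flattening and GZIP compression could look like the following sketch (values are illustrative assumptions):
+
+```terraform
+format = {
+  csv_comma_separated_values = {
+    flattening = "Root level flattening"
+    compression = {
+      gzip = {} # output files receive a ".csv.gz" extension
+    }
+  }
+}
+```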
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json`
-
-Required:
-- `format_type` (String) must be one of ["JSONL"]
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json`
Optional:
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json--compression))
+- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--compression))
+- `format_type` (String) must be one of ["JSONL"]; Default: "JSONL"
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json.compression`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type`
Optional:
-- `destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json--compression--destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_gcs_update_output_format_json_lines_newline_delimited_json--compression--destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_no_compression))
+- `gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--format_type--gzip))
+- `no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--format_type--no_compression))
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json.compression.destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_gzip`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type.gzip`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `compression_type` (String) must be one of ["GZIP"]; Default: "GZIP"
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_json_lines_newline_delimited_json.compression.destination_gcs_update_output_format_json_lines_newline_delimited_json_compression_no_compression`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type.no_compression`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
+- `compression_type` (String) must be one of ["No Compression"]; Default: "No Compression"
-
-### Nested Schema for `configuration.format.destination_gcs_update_output_format_parquet_columnar_storage`
-
-Required:
-
-- `format_type` (String) must be one of ["Parquet"]
+
+### Nested Schema for `configuration.format.parquet_columnar_storage`
Optional:
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
+- `block_size_mb` (Number) Default: 128
+This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
+- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]; Default: "UNCOMPRESSED"
The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
+- `dictionary_encoding` (Boolean) Default: true
+- `dictionary_page_size_kb` (Number) Default: 1024
+There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
+- `format_type` (String) must be one of ["Parquet"]; Default: "Parquet"
+- `max_padding_size_mb` (Number) Default: 8
+Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
+- `page_size_kb` (Number) Default: 1024
+The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
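+
+A minimal sketch of a Parquet format block tying the attributes above together (the codec and sizes shown are illustrative assumptions, not required values):
+
+```terraform
+format = {
+  parquet_columnar_storage = {
+    compression_codec = "SNAPPY" # one of the allowed codecs listed above
+    block_size_mb     = 128      # row-group buffer; larger improves read IO, costs write memory
+    page_size_kb      = 1024     # smallest unit read to access a single record
+  }
+}
+```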
diff --git a/docs/resources/destination_google_sheets.md b/docs/resources/destination_google_sheets.md
index c628da1fd..d9d7de769 100644
--- a/docs/resources/destination_google_sheets.md
+++ b/docs/resources/destination_google_sheets.md
@@ -20,11 +20,11 @@ resource "airbyte_destination_google_sheets" "my_destination_googlesheets" {
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
}
- destination_type = "google-sheets"
- spreadsheet_id = "https://docs.google.com/spreadsheets/d/1hLd9Qqti3UyLXZB2aFfUWDT7BG/edit"
+ spreadsheet_id = "https://docs.google.com/spreadsheets/d/1hLd9Qqti3UyLXZB2aFfUWDT7BG/edit"
}
- name = "Mr. Irma Schaefer"
- workspace_id = "b3cdca42-5190-44e5-a3c7-e0bc7178e479"
+ definition_id = "a78cf13c-3589-4bc3-aaba-63d3987f09ed"
+ name = "Manuel Cronin IV"
+ workspace_id = "dddbef1f-87bb-4506-9e16-a5a735a4e180"
}
```
@@ -34,9 +34,13 @@ resource "airbyte_destination_google_sheets" "my_destination_googlesheets" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -48,7 +52,6 @@ resource "airbyte_destination_google_sheets" "my_destination_googlesheets" {
Required:
- `credentials` (Attributes) Google API Credentials for connecting to Google Sheets and Google Drive APIs (see [below for nested schema](#nestedatt--configuration--credentials))
-- `destination_type` (String) must be one of ["google-sheets"]
- `spreadsheet_id` (String) The link to your spreadsheet. See this guide for more details.
@@ -58,6 +61,6 @@ Required:
- `client_id` (String) The Client ID of your Google Sheets developer application.
- `client_secret` (String) The Client Secret of your Google Sheets developer application.
-- `refresh_token` (String) The token for obtaining new access token.
+- `refresh_token` (String, Sensitive) The token for obtaining a new access token.
diff --git a/docs/resources/destination_keen.md b/docs/resources/destination_keen.md
index 55eeac9fe..e0e719b5e 100644
--- a/docs/resources/destination_keen.md
+++ b/docs/resources/destination_keen.md
@@ -15,13 +15,13 @@ DestinationKeen Resource
```terraform
resource "airbyte_destination_keen" "my_destination_keen" {
configuration = {
- api_key = "ABCDEFGHIJKLMNOPRSTUWXYZ"
- destination_type = "keen"
- infer_timestamp = false
- project_id = "58b4acc22ba938934e888322e"
+ api_key = "ABCDEFGHIJKLMNOPRSTUWXYZ"
+ infer_timestamp = false
+ project_id = "58b4acc22ba938934e888322e"
}
- name = "Todd Oberbrunner DDS"
- workspace_id = "688282aa-4825-462f-a22e-9817ee17cbe6"
+ definition_id = "23f0d76f-b78b-4f74-ba22-de12791b5f13"
+ name = "Mr. Angelina Becker"
+ workspace_id = "49774ae8-7c30-4892-bfb0-f41f82248d60"
}
```
@@ -31,9 +31,13 @@ resource "airbyte_destination_keen" "my_destination_keen" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -44,12 +48,12 @@ resource "airbyte_destination_keen" "my_destination_keen" {
Required:
-- `api_key` (String) To get Keen Master API Key, navigate to the Access tab from the left-hand, side panel and check the Project Details section.
-- `destination_type` (String) must be one of ["keen"]
+- `api_key` (String, Sensitive) To get the Keen Master API Key, navigate to the Access tab from the left-hand side panel and check the Project Details section.
- `project_id` (String) To get Keen Project ID, navigate to the Access tab from the left-hand, side panel and check the Project Details section.
Optional:
-- `infer_timestamp` (Boolean) Allow connector to guess keen.timestamp value based on the streamed data.
+- `infer_timestamp` (Boolean) Default: true
+Allow connector to guess keen.timestamp value based on the streamed data.
diff --git a/docs/resources/destination_kinesis.md b/docs/resources/destination_kinesis.md
index b788f2018..42b0775d7 100644
--- a/docs/resources/destination_kinesis.md
+++ b/docs/resources/destination_kinesis.md
@@ -15,16 +15,16 @@ DestinationKinesis Resource
```terraform
resource "airbyte_destination_kinesis" "my_destination_kinesis" {
configuration = {
- access_key = "...my_access_key..."
- buffer_size = 1
- destination_type = "kinesis"
- endpoint = "kinesis.us‑west‑1.amazonaws.com"
- private_key = "...my_private_key..."
- region = "us‑west‑1"
- shard_count = 9
+ access_key = "...my_access_key..."
+ buffer_size = 1
+    endpoint    = "kinesis.us-west-1.amazonaws.com"
+ private_key = "...my_private_key..."
+    region      = "us-west-1"
+ shard_count = 1
}
- name = "Opal Kozey"
- workspace_id = "5bc0ab3c-20c4-4f37-89fd-871f99dd2efd"
+ definition_id = "83384bd8-7b5c-4ce3-a148-54333df23c5e"
+ name = "Mary Monahan"
+ workspace_id = "52521a04-7878-4c25-8cd1-84fd116e75f1"
}
```
@@ -34,9 +34,13 @@ resource "airbyte_destination_kinesis" "my_destination_kinesis" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -48,11 +52,15 @@ resource "airbyte_destination_kinesis" "my_destination_kinesis" {
Required:
- `access_key` (String) Generate the AWS Access Key for current user.
-- `buffer_size` (Number) Buffer size for storing kinesis records before being batch streamed.
-- `destination_type` (String) must be one of ["kinesis"]
- `endpoint` (String) AWS Kinesis endpoint.
- `private_key` (String) The AWS Private Key - a string of numbers and letters that are unique for each account, also known as a "recovery phrase".
- `region` (String) AWS region. Your account determines the Regions that are available to you.
-- `shard_count` (Number) Number of shards to which the data should be streamed.
+
+Optional:
+
+- `buffer_size` (Number) Default: 100
+Buffer size for storing kinesis records before being batch streamed.
+- `shard_count` (Number) Default: 5
+Number of shards to which the data should be streamed.
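Since `buffer_size` and `shard_count` are now optional (defaulting to 100 and 5), a minimal Kinesis destination can omit them; a hedged sketch with placeholder credentials and IDs:

```terraform
resource "airbyte_destination_kinesis" "minimal" {
  configuration = {
    access_key  = "...my_access_key..."
    endpoint    = "kinesis.us-west-1.amazonaws.com"
    private_key = "...my_private_key..."
    region      = "us-west-1"
    # buffer_size and shard_count omitted: defaults of 100 and 5 apply
  }
  name         = "dev-kinesis-instance"
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder
}
```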
diff --git a/docs/resources/destination_langchain.md b/docs/resources/destination_langchain.md
index c6dc2e93e..6bb7c0272 100644
--- a/docs/resources/destination_langchain.md
+++ b/docs/resources/destination_langchain.md
@@ -15,29 +15,26 @@ DestinationLangchain Resource
```terraform
resource "airbyte_destination_langchain" "my_destination_langchain" {
configuration = {
- destination_type = "langchain"
embedding = {
- destination_langchain_embedding_fake = {
- mode = "fake"
- }
+ fake = {}
}
indexing = {
- destination_langchain_indexing_chroma_local_persistance_ = {
+ chroma_local_persistance = {
collection_name = "...my_collection_name..."
destination_path = "/local/my_chroma_db"
- mode = "chroma_local"
}
}
processing = {
- chunk_overlap = 0
- chunk_size = 1
+ chunk_overlap = 8
+ chunk_size = 3
text_fields = [
"...",
]
}
}
- name = "Hattie Nader"
- workspace_id = "1e674bdb-04f1-4575-a082-d68ea19f1d17"
+ definition_id = "0c9ec767-47b0-46cf-86fe-4a6f8bb810ed"
+ name = "Megan Kertzmann"
+ workspace_id = "02e7b218-3b2b-4c4f-adb7-afdacad2c14c"
}
```
@@ -47,9 +44,13 @@ resource "airbyte_destination_langchain" "my_destination_langchain" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -60,7 +61,6 @@ resource "airbyte_destination_langchain" "my_destination_langchain" {
Required:
-- `destination_type` (String) must be one of ["langchain"]
- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
- `indexing` (Attributes) Indexing configuration (see [below for nested schema](#nestedatt--configuration--indexing))
- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
@@ -70,49 +70,19 @@ Required:
Optional:
-- `destination_langchain_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_embedding_fake))
-- `destination_langchain_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_embedding_open_ai))
-- `destination_langchain_update_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_update_embedding_fake))
-- `destination_langchain_update_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_langchain_update_embedding_open_ai))
-
-
-### Nested Schema for `configuration.embedding.destination_langchain_embedding_fake`
-
-Optional:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_langchain_embedding_open_ai`
-
-Required:
-
-- `openai_key` (String)
-
-Optional:
-
-- `mode` (String) must be one of ["openai"]
-
+- `fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--fake))
+- `open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai))
-
-### Nested Schema for `configuration.embedding.destination_langchain_update_embedding_fake`
-
-Optional:
+
+### Nested Schema for `configuration.embedding.fake`
-- `mode` (String) must be one of ["fake"]
-
-
-### Nested Schema for `configuration.embedding.destination_langchain_update_embedding_open_ai`
+
+### Nested Schema for `configuration.embedding.open_ai`
Required:
-- `openai_key` (String)
-
-Optional:
-
-- `mode` (String) must be one of ["openai"]
+- `openai_key` (String, Sensitive)
@@ -121,54 +91,12 @@ Optional:
Optional:
-- `destination_langchain_indexing_chroma_local_persistance` (Attributes) Chroma is a popular vector store that can be used to store and retrieve embeddings. It will build its index in memory and persist it to disk by the end of the sync. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_indexing_chroma_local_persistance))
-- `destination_langchain_indexing_doc_array_hnsw_search` (Attributes) DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_indexing_doc_array_hnsw_search))
-- `destination_langchain_indexing_pinecone` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. It is a managed service and can also be queried from outside of langchain. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_indexing_pinecone))
-- `destination_langchain_update_indexing_chroma_local_persistance` (Attributes) Chroma is a popular vector store that can be used to store and retrieve embeddings. It will build its index in memory and persist it to disk by the end of the sync. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_update_indexing_chroma_local_persistance))
-- `destination_langchain_update_indexing_doc_array_hnsw_search` (Attributes) DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_update_indexing_doc_array_hnsw_search))
-- `destination_langchain_update_indexing_pinecone` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. It is a managed service and can also be queried from outside of langchain. (see [below for nested schema](#nestedatt--configuration--indexing--destination_langchain_update_indexing_pinecone))
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_indexing_chroma_local_persistance`
-
-Required:
-
-- `destination_path` (String) Path to the directory where chroma files will be written. The files will be placed inside that local mount.
-
-Optional:
-
-- `collection_name` (String) Name of the collection to use.
-- `mode` (String) must be one of ["chroma_local"]
-
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_indexing_doc_array_hnsw_search`
-
-Required:
-
-- `destination_path` (String) Path to the directory where hnswlib and meta data files will be written. The files will be placed inside that local mount. All files in the specified destination directory will be deleted on each run.
-
-Optional:
-
-- `mode` (String) must be one of ["DocArrayHnswSearch"]
-
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_indexing_pinecone`
-
-Required:
-
-- `index` (String) Pinecone index to use
-- `pinecone_environment` (String) Pinecone environment to use
-- `pinecone_key` (String)
-
-Optional:
-
-- `mode` (String) must be one of ["pinecone"]
+- `chroma_local_persistance` (Attributes) Chroma is a popular vector store that can be used to store and retrieve embeddings. It will build its index in memory and persist it to disk by the end of the sync. (see [below for nested schema](#nestedatt--configuration--indexing--chroma_local_persistance))
+- `doc_array_hnsw_search` (Attributes) DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. (see [below for nested schema](#nestedatt--configuration--indexing--doc_array_hnsw_search))
+- `pinecone` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. It is a managed service and can also be queried from outside of langchain. (see [below for nested schema](#nestedatt--configuration--indexing--pinecone))
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_update_indexing_chroma_local_persistance`
+
+### Nested Schema for `configuration.indexing.chroma_local_persistance`
Required:
@@ -176,34 +104,26 @@ Required:
Optional:
-- `collection_name` (String) Name of the collection to use.
-- `mode` (String) must be one of ["chroma_local"]
+- `collection_name` (String) Default: "langchain"
+Name of the collection to use.
-
-### Nested Schema for `configuration.indexing.destination_langchain_update_indexing_doc_array_hnsw_search`
+
+### Nested Schema for `configuration.indexing.doc_array_hnsw_search`
Required:
- `destination_path` (String) Path to the directory where hnswlib and meta data files will be written. The files will be placed inside that local mount. All files in the specified destination directory will be deleted on each run.
-Optional:
-
-- `mode` (String) must be one of ["DocArrayHnswSearch"]
-
-
-### Nested Schema for `configuration.indexing.destination_langchain_update_indexing_pinecone`
+
+### Nested Schema for `configuration.indexing.pinecone`
Required:
- `index` (String) Pinecone index to use
- `pinecone_environment` (String) Pinecone environment to use
-- `pinecone_key` (String)
-
-Optional:
-
-- `mode` (String) must be one of ["pinecone"]
+- `pinecone_key` (String, Sensitive)
@@ -217,6 +137,7 @@ Required:
Optional:
-- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context
+- `chunk_overlap` (Number) Default: 0
+Size of overlap between chunks in tokens to store in vector store to better capture relevant context
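Given the `chunk_overlap` default of 0, only `chunk_size` (and typically `text_fields`) needs to be set in `processing`; a hedged sketch with hypothetical field names:

```terraform
processing = {
  chunk_size  = 1000              # keep below your LLM's context window
  text_fields = ["title", "body"] # hypothetical field names
  # chunk_overlap omitted: defaults to 0 (no overlap between chunks)
}
```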
diff --git a/docs/resources/destination_milvus.md b/docs/resources/destination_milvus.md
index 74d5422fb..7834ef199 100644
--- a/docs/resources/destination_milvus.md
+++ b/docs/resources/destination_milvus.md
@@ -15,39 +15,50 @@ DestinationMilvus Resource
```terraform
resource "airbyte_destination_milvus" "my_destination_milvus" {
configuration = {
- destination_type = "milvus"
embedding = {
- destination_milvus_embedding_cohere = {
- cohere_key = "...my_cohere_key..."
- mode = "cohere"
+ azure_open_ai = {
+ api_base = "https://your-resource-name.openai.azure.com"
+ deployment = "your-resource-name"
+ openai_key = "...my_openai_key..."
}
}
indexing = {
auth = {
- destination_milvus_indexing_authentication_api_token = {
- mode = "token"
+        api_token = {
token = "...my_token..."
}
}
collection = "...my_collection..."
db = "...my_db..."
- host = "https://my-instance.zone.zillizcloud.com"
+ host = "tcp://my-local-milvus:19530"
text_field = "...my_text_field..."
vector_field = "...my_vector_field..."
}
processing = {
- chunk_overlap = 3
- chunk_size = 0
+ chunk_overlap = 1
+ chunk_size = 5
+ field_name_mappings = [
+ {
+ from_field = "...my_from_field..."
+ to_field = "...my_to_field..."
+ },
+ ]
metadata_fields = [
"...",
]
text_fields = [
"...",
]
+ text_splitter = {
+ by_markdown_header = {
+ split_level = 7
+ }
+ }
}
}
- name = "Sherry Morar IV"
- workspace_id = "086a1840-394c-4260-b1f9-3f5f0642dac7"
+ definition_id = "6683bb76-cbdd-442c-84b7-b603cc8cd887"
+ name = "Mr. Karl Jacobson"
+ workspace_id = "13ef7fc0-d176-4e5f-8145-49f1242182d1"
}
```
@@ -57,9 +68,13 @@ resource "airbyte_destination_milvus" "my_destination_milvus" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -70,7 +85,6 @@ resource "airbyte_destination_milvus" "my_destination_milvus" {
Required:
-- `destination_type` (String) must be one of ["milvus"]
- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
- `indexing` (Attributes) Indexing configuration (see [below for nested schema](#nestedatt--configuration--indexing))
- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
@@ -80,103 +94,65 @@ Required:
Optional:
-- `destination_milvus_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_cohere))
-- `destination_milvus_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_fake))
-- `destination_milvus_embedding_from_field` (Attributes) Use a field in the record as the embedding. This is useful if you already have an embedding for your data and want to store it in the vector store. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_from_field))
-- `destination_milvus_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_embedding_open_ai))
-- `destination_milvus_update_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_cohere))
-- `destination_milvus_update_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_fake))
-- `destination_milvus_update_embedding_from_field` (Attributes) Use a field in the record as the embedding. This is useful if you already have an embedding for your data and want to store it in the vector store. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_from_field))
-- `destination_milvus_update_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_milvus_update_embedding_open_ai))
+- `azure_open_ai` (Attributes) Use the Azure-hosted OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--azure_open_ai))
+- `cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--cohere))
+- `fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--fake))
+- `from_field` (Attributes) Use a field in the record as the embedding. This is useful if you already have an embedding for your data and want to store it in the vector store. (see [below for nested schema](#nestedatt--configuration--embedding--from_field))
+- `open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai))
+- `open_ai_compatible` (Attributes) Use a service that's compatible with the OpenAI API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai_compatible))
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_cohere`
+
+### Nested Schema for `configuration.embedding.azure_open_ai`
Required:
-- `cohere_key` (String)
+- `api_base` (String) The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `deployment` (String) The deployment for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `openai_key` (String, Sensitive) The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
-Optional:
-- `mode` (String) must be one of ["cohere"]
+
+### Nested Schema for `configuration.embedding.cohere`
+Required:
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_fake`
+- `cohere_key` (String, Sensitive)
-Optional:
-- `mode` (String) must be one of ["fake"]
+
+### Nested Schema for `configuration.embedding.fake`
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_from_field`
+
+### Nested Schema for `configuration.embedding.from_field`
Required:
- `dimensions` (Number) The number of dimensions the embedding model is generating
- `field_name` (String) Name of the field in the record that contains the embedding
-Optional:
-
-- `mode` (String) must be one of ["from_field"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_embedding_open_ai`
-
-Required:
-
-- `openai_key` (String)
-Optional:
-
-- `mode` (String) must be one of ["openai"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_cohere`
+
+### Nested Schema for `configuration.embedding.open_ai`
Required:
-- `cohere_key` (String)
+- `openai_key` (String, Sensitive)
-Optional:
-
-- `mode` (String) must be one of ["cohere"]
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_fake`
-
-Optional:
-
-- `mode` (String) must be one of ["fake"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_from_field`
+
+### Nested Schema for `configuration.embedding.open_ai_compatible`
Required:
+- `base_url` (String) The base URL for your OpenAI-compatible service
- `dimensions` (Number) The number of dimensions the embedding model is generating
-- `field_name` (String) Name of the field in the record that contains the embedding
-
-Optional:
-
-- `mode` (String) must be one of ["from_field"]
-
-
-
-### Nested Schema for `configuration.embedding.destination_milvus_update_embedding_open_ai`
-
-Required:
-
-- `openai_key` (String)
Optional:
-- `mode` (String) must be one of ["openai"]
+- `api_key` (String, Sensitive) Default: ""
+- `model_name` (String) Default: "text-embedding-ada-002"
+The name of the model to use for embedding
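The `open_ai_compatible` option above could point at a self-hosted embedding server; a hedged sketch (the URL and dimension count are placeholders):

```terraform
embedding = {
  open_ai_compatible = {
    base_url   = "http://my-embedding-server:8080/v1" # placeholder URL
    dimensions = 1536
    # api_key defaults to "" and model_name to "text-embedding-ada-002"
  }
}
```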
@@ -191,101 +167,104 @@ Required:
Optional:
-- `db` (String) The database to connect to
-- `text_field` (String) The field in the entity that contains the embedded text
-- `vector_field` (String) The field in the entity that contains the vector
+- `db` (String) Default: ""
+The database to connect to
+- `text_field` (String) Default: "text"
+The field in the entity that contains the embedded text
+- `vector_field` (String) Default: "vector"
+The field in the entity that contains the vector
### Nested Schema for `configuration.indexing.auth`
Optional:
-- `destination_milvus_indexing_authentication_api_token` (Attributes) Authenticate using an API token (suitable for Zilliz Cloud) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_indexing_authentication_api_token))
-- `destination_milvus_indexing_authentication_no_auth` (Attributes) Do not authenticate (suitable for locally running test clusters, do not use for clusters with public IP addresses) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_indexing_authentication_no_auth))
-- `destination_milvus_indexing_authentication_username_password` (Attributes) Authenticate using username and password (suitable for self-managed Milvus clusters) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_indexing_authentication_username_password))
-- `destination_milvus_update_indexing_authentication_api_token` (Attributes) Authenticate using an API token (suitable for Zilliz Cloud) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_update_indexing_authentication_api_token))
-- `destination_milvus_update_indexing_authentication_no_auth` (Attributes) Do not authenticate (suitable for locally running test clusters, do not use for clusters with public IP addresses) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_update_indexing_authentication_no_auth))
-- `destination_milvus_update_indexing_authentication_username_password` (Attributes) Authenticate using username and password (suitable for self-managed Milvus clusters) (see [below for nested schema](#nestedatt--configuration--indexing--auth--destination_milvus_update_indexing_authentication_username_password))
+- `api_token` (Attributes) Authenticate using an API token (suitable for Zilliz Cloud) (see [below for nested schema](#nestedatt--configuration--indexing--auth--api_token))
+- `no_auth` (Attributes) Do not authenticate (suitable for locally running test clusters, do not use for clusters with public IP addresses) (see [below for nested schema](#nestedatt--configuration--indexing--auth--no_auth))
+- `username_password` (Attributes) Authenticate using username and password (suitable for self-managed Milvus clusters) (see [below for nested schema](#nestedatt--configuration--indexing--auth--username_password))
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_username_password`
+
+### Nested Schema for `configuration.indexing.auth.api_token`
Required:
-- `token` (String) API Token for the Milvus instance
+- `token` (String, Sensitive) API Token for the Milvus instance
-Optional:
-- `mode` (String) must be one of ["token"]
+
+### Nested Schema for `configuration.indexing.auth.no_auth`
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_username_password`
-
-Optional:
-
-- `mode` (String) must be one of ["no_auth"]
-
-
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_username_password`
+
+### Nested Schema for `configuration.indexing.auth.username_password`
Required:
-- `password` (String) Password for the Milvus instance
+- `password` (String, Sensitive) Password for the Milvus instance
- `username` (String) Username for the Milvus instance
-Optional:
-- `mode` (String) must be one of ["username_password"]
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_username_password`
+
+### Nested Schema for `configuration.processing`
Required:
-- `token` (String) API Token for the Milvus instance
+- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context of your LLM)
Optional:
-- `mode` (String) must be one of ["token"]
+- `chunk_overlap` (Number) Default: 0
+Size of overlap between chunks in tokens to store in vector store to better capture relevant context
+- `field_name_mappings` (Attributes List) List of fields to rename. Not applicable for nested fields, but can be used to rename fields already flattened via dot notation. (see [below for nested schema](#nestedatt--configuration--processing--field_name_mappings))
+- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
+- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array.
+- `text_splitter` (Attributes) Split text fields into chunks based on the specified method. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter))
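The dot-notation and wildcard rules described above might be combined like this (all field names are hypothetical):

```terraform
processing = {
  chunk_size      = 512
  text_fields     = ["title", "user.name", "comments.*.text"] # nested field and wildcard paths
  metadata_fields = ["id", "created_at"]
  field_name_mappings = [
    { from_field = "user.name", to_field = "author" } # rename a flattened field
  ]
}
```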
+
+### Nested Schema for `configuration.processing.field_name_mappings`
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_username_password`
+Required:
-Optional:
+- `from_field` (String) The field name in the source
+- `to_field` (String) The field name to use in the destination
-- `mode` (String) must be one of ["no_auth"]
+
+### Nested Schema for `configuration.processing.text_splitter`
-
-### Nested Schema for `configuration.indexing.auth.destination_milvus_update_indexing_authentication_username_password`
+Optional:
-Required:
+- `by_markdown_header` (Attributes) Split the text by Markdown headers down to the specified header level. If the chunk size fits multiple sections, they will be combined into a single chunk. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_markdown_header))
+- `by_programming_language` (Attributes) Split the text by suitable delimiters based on the programming language. This is useful for splitting code into chunks. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_programming_language))
+- `by_separator` (Attributes) Split the text by the list of separators until the chunk size is reached, using the earlier mentioned separators where possible. This is useful for splitting text fields by paragraphs, sentences, words, etc. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_separator))
-- `password` (String) Password for the Milvus instance
-- `username` (String) Username for the Milvus instance
+
+### Nested Schema for `configuration.processing.text_splitter.by_markdown_header`
Optional:
-- `mode` (String) must be one of ["username_password"]
+- `split_level` (Number) Default: 1
+Level of markdown headers to split text fields by. Headings down to the specified level will be used as split points
+
+### Nested Schema for `configuration.processing.text_splitter.by_programming_language`
+Required:
-
-### Nested Schema for `configuration.processing`
+- `language` (String) must be one of ["cpp", "go", "java", "js", "php", "proto", "python", "rst", "ruby", "rust", "scala", "swift", "markdown", "latex", "html", "sol"]
+Split code in suitable places based on the programming language
-Required:
-- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context if your LLM)
+
+### Nested Schema for `configuration.processing.text_splitter.by_separator`
Optional:
-- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context
-- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
-- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array.
+- `keep_separator` (Boolean) Default: false
+Whether to keep the separator in the resulting chunks
+- `separators` (List of String) List of separator strings to split text fields by. The separator itself needs to be wrapped in double quotes, e.g. to split by the dot character, use ".". To split by a newline, use "\n".
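
The `processing` attributes documented above can be combined as in the following hypothetical excerpt. This is a sketch only: the field names (`text`, `users.*.name`, `created`) and the `split_level` value are illustrative placeholders, not values from this changeset, and only attributes listed in the added schema lines are used:

```terraform
# Hypothetical processing block: dot-notation field selection plus a
# Markdown-header text splitter. All values are illustrative.
processing = {
  text_fields     = ["text", "users.*.name"]
  metadata_fields = ["source", "user.id"]
  field_name_mappings = [
    {
      from_field = "created"
      to_field   = "created_at"
    }
  ]
  text_splitter = {
    by_markdown_header = {
      split_level = 2 # split on headings down to "##"
    }
  }
}
```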
diff --git a/docs/resources/destination_mongodb.md b/docs/resources/destination_mongodb.md
index 472867bfe..ec6866242 100644
--- a/docs/resources/destination_mongodb.md
+++ b/docs/resources/destination_mongodb.md
@@ -16,28 +16,25 @@ DestinationMongodb Resource
resource "airbyte_destination_mongodb" "my_destination_mongodb" {
configuration = {
auth_type = {
- destination_mongodb_authorization_type_login_password = {
- authorization = "login/password"
- password = "...my_password..."
- username = "Lucienne.Yundt"
+ login_password = {
+ password = "...my_password..."
+ username = "Emmalee.Towne89"
}
}
- database = "...my_database..."
- destination_type = "mongodb"
+ database = "...my_database..."
instance_type = {
- destination_mongodb_mongo_db_instance_type_mongo_db_atlas = {
+ mongo_db_atlas = {
cluster_url = "...my_cluster_url..."
instance = "atlas"
}
}
tunnel_method = {
- destination_mongodb_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+      no_tunnel = {}
}
}
- name = "Robyn Schmitt I"
- workspace_id = "aa63aae8-d678-464d-bb67-5fd5e60b375e"
+ definition_id = "895c9212-6184-452d-9432-f33897fec4ca"
+ name = "Adrienne Lockman"
+ workspace_id = "bf882725-c3c6-4bc3-9a6d-3f396b39ea0e"
}
```
@@ -47,9 +44,13 @@ resource "airbyte_destination_mongodb" "my_destination_mongodb" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -62,7 +63,6 @@ Required:
- `auth_type` (Attributes) Authorization type. (see [below for nested schema](#nestedatt--configuration--auth_type))
- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["mongodb"]
Optional:
@@ -74,45 +74,20 @@ Optional:
Optional:
-- `destination_mongodb_authorization_type_login_password` (Attributes) Login/Password. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_authorization_type_login_password))
-- `destination_mongodb_authorization_type_none` (Attributes) None. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_authorization_type_none))
-- `destination_mongodb_update_authorization_type_login_password` (Attributes) Login/Password. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_update_authorization_type_login_password))
-- `destination_mongodb_update_authorization_type_none` (Attributes) None. (see [below for nested schema](#nestedatt--configuration--auth_type--destination_mongodb_update_authorization_type_none))
-
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_authorization_type_login_password`
-
-Required:
-
-- `authorization` (String) must be one of ["login/password"]
-- `password` (String) Password associated with the username.
-- `username` (String) Username to use to access the database.
-
-
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_authorization_type_none`
-
-Required:
-
-- `authorization` (String) must be one of ["none"]
-
+- `login_password` (Attributes) Login/Password. (see [below for nested schema](#nestedatt--configuration--auth_type--login_password))
+- `none` (Attributes) None. (see [below for nested schema](#nestedatt--configuration--auth_type--none))
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_update_authorization_type_login_password`
+
+### Nested Schema for `configuration.auth_type.login_password`
Required:
-- `authorization` (String) must be one of ["login/password"]
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
- `username` (String) Username to use to access the database.
-
-### Nested Schema for `configuration.auth_type.destination_mongodb_update_authorization_type_none`
-
-Required:
-
-- `authorization` (String) must be one of ["none"]
+
+### Nested Schema for `configuration.auth_type.none`
@@ -121,75 +96,47 @@ Required:
Optional:
-- `destination_mongodb_mongo_db_instance_type_mongo_db_atlas` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_mongo_db_instance_type_mongo_db_atlas))
-- `destination_mongodb_mongo_db_instance_type_replica_set` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_mongo_db_instance_type_replica_set))
-- `destination_mongodb_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_mongo_db_instance_type_standalone_mongo_db_instance))
-- `destination_mongodb_update_mongo_db_instance_type_mongo_db_atlas` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_update_mongo_db_instance_type_mongo_db_atlas))
-- `destination_mongodb_update_mongo_db_instance_type_replica_set` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_update_mongo_db_instance_type_replica_set))
-- `destination_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--destination_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance))
+- `mongo_db_atlas` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--mongo_db_atlas))
+- `replica_set` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--replica_set))
+- `standalone_mongo_db_instance` (Attributes) MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--standalone_mongo_db_instance))
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_mongo_db_instance_type_mongo_db_atlas`
+
+### Nested Schema for `configuration.instance_type.mongo_db_atlas`
Required:
- `cluster_url` (String) URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_mongo_db_instance_type_replica_set`
-
-Required:
-
-- `instance` (String) must be one of ["replica"]
-- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member seperated by comma.
Optional:
-- `replica_set` (String) A replica set name.
+- `instance` (String) must be one of ["atlas"]; Default: "atlas"
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_mongo_db_instance_type_standalone_mongo_db_instance`
+
+### Nested Schema for `configuration.instance_type.replica_set`
Required:
-- `host` (String) The Host of a Mongo database to be replicated.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The Port of a Mongo database to be replicated.
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_update_mongo_db_instance_type_mongo_db_atlas`
-
-Required:
-
-- `cluster_url` (String) URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_update_mongo_db_instance_type_replica_set`
-
-Required:
-
-- `instance` (String) must be one of ["replica"]
- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member separated by comma.
Optional:
+- `instance` (String) must be one of ["replica"]; Default: "replica"
- `replica_set` (String) A replica set name.
-
-### Nested Schema for `configuration.instance_type.destination_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance`
+
+### Nested Schema for `configuration.instance_type.standalone_mongo_db_instance`
Required:
- `host` (String) The Host of a Mongo database to be replicated.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The Port of a Mongo database to be replicated.
+
+Optional:
+
+- `instance` (String) must be one of ["standalone"]; Default: "standalone"
+- `port` (Number) Default: 27017
+The Port of a Mongo database to be replicated.
@@ -198,80 +145,41 @@ Required:
Optional:
-- `destination_mongodb_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_ssh_tunnel_method_no_tunnel))
-- `destination_mongodb_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_ssh_tunnel_method_password_authentication))
-- `destination_mongodb_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_ssh_tunnel_method_ssh_key_authentication))
-- `destination_mongodb_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_update_ssh_tunnel_method_no_tunnel))
-- `destination_mongodb_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_update_ssh_tunnel_method_password_authentication))
-- `destination_mongodb_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mongodb_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_ssh_tunnel_method_no_tunnel`
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mongodb_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
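
As a sketch, the `instance_type` and `tunnel_method` schemas above might be combined as follows. The hostnames, replica set name, and variable reference are placeholders; the comments note attributes that the schema gives defaults for:

```terraform
# Hypothetical excerpt: a self-hosted MongoDB replica set reached
# through an SSH jump host. All hostnames are placeholders.
instance_type = {
  replica_set = {
    server_addresses = "mongo1.example.com:27017,mongo2.example.com:27017"
    replica_set      = "rs0"
    # instance defaults to "replica"
  }
}
tunnel_method = {
  ssh_key_authentication = {
    tunnel_host = "jump.example.com"
    tunnel_user = "airbyte"
    ssh_key     = var.ssh_private_key # assumed variable, RSA PEM format
    # tunnel_port defaults to 22
  }
}
```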
diff --git a/docs/resources/destination_mssql.md b/docs/resources/destination_mssql.md
index 860001e3c..f653ec9c5 100644
--- a/docs/resources/destination_mssql.md
+++ b/docs/resources/destination_mssql.md
@@ -15,27 +15,23 @@ DestinationMssql Resource
```terraform
resource "airbyte_destination_mssql" "my_destination_mssql" {
configuration = {
- database = "...my_database..."
- destination_type = "mssql"
- host = "...my_host..."
- jdbc_url_params = "...my_jdbc_url_params..."
- password = "...my_password..."
- port = 1433
- schema = "public"
+ database = "...my_database..."
+ host = "...my_host..."
+ jdbc_url_params = "...my_jdbc_url_params..."
+ password = "...my_password..."
+ port = 1433
+ schema = "public"
ssl_method = {
- destination_mssql_ssl_method_encrypted_trust_server_certificate_ = {
- ssl_method = "encrypted_trust_server_certificate"
- }
+ encrypted_trust_server_certificate = {}
}
tunnel_method = {
- destination_mssql_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+      no_tunnel = {}
}
- username = "Desiree_Yost"
+ username = "Amalia.Blick"
}
- name = "Bert Treutel DVM"
- workspace_id = "33317fe3-5b60-4eb1-aa42-6555ba3c2874"
+ definition_id = "90e1a2bc-7de0-4ff6-b737-4915d3efc2cd"
+ name = "Jorge Beahan"
+ workspace_id = "6acc1e6f-1291-4560-8b55-b326e06d2448"
}
```
@@ -45,9 +41,13 @@ resource "airbyte_destination_mssql" "my_destination_mssql" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -59,16 +59,17 @@ resource "airbyte_destination_mssql" "my_destination_mssql" {
Required:
- `database` (String) The name of the MSSQL database.
-- `destination_type` (String) must be one of ["mssql"]
- `host` (String) The host name of the MSSQL database.
-- `port` (Number) The port of the MSSQL database.
-- `schema` (String) The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public".
- `username` (String) The username which is used to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with this username.
+- `password` (String, Sensitive) The password associated with this username.
+- `port` (Number) Default: 1433
+The port of the MSSQL database.
+- `schema` (String) Default: "public"
+The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public".
- `ssl_method` (Attributes) The encryption method which is used to communicate with the database. (see [below for nested schema](#nestedatt--configuration--ssl_method))
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -77,45 +78,15 @@ Optional:
Optional:
-- `destination_mssql_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_ssl_method_encrypted_trust_server_certificate))
-- `destination_mssql_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_ssl_method_encrypted_verify_certificate))
-- `destination_mssql_update_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_update_ssl_method_encrypted_trust_server_certificate))
-- `destination_mssql_update_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--destination_mssql_update_ssl_method_encrypted_verify_certificate))
-
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_ssl_method_encrypted_trust_server_certificate`
-
-Required:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_ssl_method_encrypted_verify_certificate`
-
-Required:
-
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
-
-Optional:
-
-- `host_name_in_certificate` (String) Specifies the host name of the server. The value of this property must match the subject property of the certificate.
-
+- `encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--encrypted_trust_server_certificate))
+- `encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--encrypted_verify_certificate))
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_update_ssl_method_encrypted_trust_server_certificate`
+
+### Nested Schema for `configuration.ssl_method.encrypted_trust_server_certificate`
-Required:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.destination_mssql_update_ssl_method_encrypted_verify_certificate`
-
-Required:
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
+
+### Nested Schema for `configuration.ssl_method.encrypted_verify_certificate`
Optional:
@@ -128,80 +99,41 @@ Optional:
Optional:
-- `destination_mssql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_ssh_tunnel_method_no_tunnel))
-- `destination_mssql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_ssh_tunnel_method_password_authentication))
-- `destination_mssql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_ssh_tunnel_method_ssh_key_authentication))
-- `destination_mssql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_update_ssh_tunnel_method_no_tunnel))
-- `destination_mssql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_update_ssh_tunnel_method_password_authentication))
-- `destination_mssql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mssql_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mssql_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
diff --git a/docs/resources/destination_mysql.md b/docs/resources/destination_mysql.md
index 251fcdbc6..99b4555e9 100644
--- a/docs/resources/destination_mysql.md
+++ b/docs/resources/destination_mysql.md
@@ -15,21 +15,19 @@ DestinationMysql Resource
```terraform
resource "airbyte_destination_mysql" "my_destination_mysql" {
configuration = {
- database = "...my_database..."
- destination_type = "mysql"
- host = "...my_host..."
- jdbc_url_params = "...my_jdbc_url_params..."
- password = "...my_password..."
- port = 3306
+ database = "...my_database..."
+ host = "...my_host..."
+ jdbc_url_params = "...my_jdbc_url_params..."
+ password = "...my_password..."
+ port = 3306
tunnel_method = {
- destination_mysql_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ destination_mysql_no_tunnel = {}
}
- username = "Sheldon.Smitham"
+ username = "Elissa16"
}
- name = "Guy Luettgen"
- workspace_id = "a8d8f5c0-b2f2-4fb7-b194-a276b26916fe"
+ definition_id = "a53050a9-afbc-466c-913a-5b78062a6a13"
+ name = "Nick Rogahn"
+ workspace_id = "63598ffb-0429-424f-aeae-5018c3193740"
}
```
@@ -39,9 +37,13 @@ resource "airbyte_destination_mysql" "my_destination_mysql" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -53,15 +55,15 @@ resource "airbyte_destination_mysql" "my_destination_mysql" {
Required:
- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["mysql"]
- `host` (String) Hostname of the database.
-- `port` (Number) Port of the database.
- `username` (String) Username to use to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
+- `port` (Number) Default: 3306
+Port of the database.
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -69,80 +71,41 @@ Optional:
Optional:
-- `destination_mysql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_ssh_tunnel_method_no_tunnel))
-- `destination_mysql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_ssh_tunnel_method_password_authentication))
-- `destination_mysql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_ssh_tunnel_method_ssh_key_authentication))
-- `destination_mysql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_update_ssh_tunnel_method_no_tunnel))
-- `destination_mysql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_update_ssh_tunnel_method_password_authentication))
-- `destination_mysql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_mysql_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_ssh_tunnel_method_no_tunnel`
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
+Optional:
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_update_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_mysql_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
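Note: the renamed `password_authentication` tunnel variant above drops the explicit `tunnel_method` discriminator string and makes `tunnel_port` optional (Default: 22). A minimal sketch of the new shape for this resource; all host, credential, and ID values are placeholders, not real defaults:

```terraform
resource "airbyte_destination_mysql" "tunneled" {
  configuration = {
    database = "analytics"
    host     = "db.internal.example.com"
    username = "airbyte"
    password = "...my_password..."
    tunnel_method = {
      password_authentication = {
        tunnel_host          = "bastion.example.com"
        tunnel_user          = "airbyte"
        tunnel_user_password = "...my_tunnel_password..."
        # tunnel_port omitted; defaults to 22
      }
    }
  }
  name         = "dev-mysql-instance"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```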
diff --git a/docs/resources/destination_oracle.md b/docs/resources/destination_oracle.md
index 7f4e8a16b..a8df5b7e1 100644
--- a/docs/resources/destination_oracle.md
+++ b/docs/resources/destination_oracle.md
@@ -15,22 +15,20 @@ DestinationOracle Resource
```terraform
resource "airbyte_destination_oracle" "my_destination_oracle" {
configuration = {
- destination_type = "oracle"
- host = "...my_host..."
- jdbc_url_params = "...my_jdbc_url_params..."
- password = "...my_password..."
- port = 1521
- schema = "airbyte"
- sid = "...my_sid..."
+ host = "...my_host..."
+ jdbc_url_params = "...my_jdbc_url_params..."
+ password = "...my_password..."
+ port = 1521
+ schema = "airbyte"
+ sid = "...my_sid..."
tunnel_method = {
- destination_oracle_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ destination_oracle_no_tunnel = {}
}
- username = "Viviane_Aufderhar"
+ username = "Abdullah_Ward15"
}
- name = "Tammy Medhurst"
- workspace_id = "3698f447-f603-4e8b-845e-80ca55efd20e"
+ definition_id = "2db6fe08-64a8-456a-8417-0ff8566dc323"
+ name = "Brittany Mohr"
+ workspace_id = "b07bf072-8b70-4775-98c6-7348eaa4356f"
}
```
@@ -40,9 +38,13 @@ resource "airbyte_destination_oracle" "my_destination_oracle" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -53,17 +55,18 @@ resource "airbyte_destination_oracle" "my_destination_oracle" {
Required:
-- `destination_type` (String) must be one of ["oracle"]
- `host` (String) The hostname of the database.
-- `port` (Number) The port of the database.
- `sid` (String) The System Identifier uniquely distinguishes the instance from any other instance on the same computer.
- `username` (String) The username to access the database. This user must have CREATE USER privileges in the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with the username.
-- `schema` (String) The default schema is used as the target schema for all statements issued from the connection that do not explicitly specify a schema name. The usual value for this field is "airbyte". In Oracle, schemas and users are the same thing, so the "user" parameter is used as the login credentials and this is used for the default Airbyte message schema.
+- `password` (String, Sensitive) The password associated with the username.
+- `port` (Number) Default: 1521
+The port of the database.
+- `schema` (String) Default: "airbyte"
+The default schema is used as the target schema for all statements issued from the connection that do not explicitly specify a schema name. The usual value for this field is "airbyte". In Oracle, schemas and users are the same thing, so the "user" parameter is used as the login credentials and this is used for the default Airbyte message schema.
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -71,80 +74,41 @@ Optional:
Optional:
-- `destination_oracle_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_ssh_tunnel_method_no_tunnel))
-- `destination_oracle_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_ssh_tunnel_method_password_authentication))
-- `destination_oracle_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_ssh_tunnel_method_ssh_key_authentication))
-- `destination_oracle_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_update_ssh_tunnel_method_no_tunnel))
-- `destination_oracle_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_update_ssh_tunnel_method_password_authentication))
-- `destination_oracle_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_oracle_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_ssh_tunnel_method_no_tunnel`
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
+Optional:
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_update_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_oracle_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
diff --git a/docs/resources/destination_pinecone.md b/docs/resources/destination_pinecone.md
index 6f890911c..2ec2b1567 100644
--- a/docs/resources/destination_pinecone.md
+++ b/docs/resources/destination_pinecone.md
@@ -15,31 +15,43 @@ DestinationPinecone Resource
```terraform
resource "airbyte_destination_pinecone" "my_destination_pinecone" {
configuration = {
- destination_type = "pinecone"
embedding = {
- destination_pinecone_embedding_cohere = {
- cohere_key = "...my_cohere_key..."
- mode = "cohere"
+ destination_pinecone_azure_open_ai = {
+ api_base = "https://your-resource-name.openai.azure.com"
+ deployment = "your-resource-name"
+ openai_key = "...my_openai_key..."
}
}
indexing = {
index = "...my_index..."
- pinecone_environment = "...my_pinecone_environment..."
+ pinecone_environment = "us-west1-gcp"
pinecone_key = "...my_pinecone_key..."
}
processing = {
- chunk_overlap = 2
- chunk_size = 3
+ chunk_overlap = 6
+ chunk_size = 6
+ field_name_mappings = [
+ {
+ from_field = "...my_from_field..."
+ to_field = "...my_to_field..."
+ },
+ ]
metadata_fields = [
"...",
]
text_fields = [
"...",
]
+ text_splitter = {
+ destination_pinecone_by_markdown_header = {
+ split_level = 7
+ }
+ }
}
}
- name = "Cecelia Braun"
- workspace_id = "8b6a89fb-e3a5-4aa8-a482-4d0ab4075088"
+ definition_id = "d49dbc4f-abbf-4199-8382-023b4de2c1a7"
+ name = "Bobby Lemke"
+ workspace_id = "d3cde3c9-d6fa-494b-b4b9-38f85ce1dfc1"
}
```
@@ -49,9 +61,13 @@ resource "airbyte_destination_pinecone" "my_destination_pinecone" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -62,7 +78,6 @@ resource "airbyte_destination_pinecone" "my_destination_pinecone" {
Required:
-- `destination_type` (String) must be one of ["pinecone"]
- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
- `indexing` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. (see [below for nested schema](#nestedatt--configuration--indexing))
- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
@@ -72,99 +87,127 @@ Required:
Optional:
-- `destination_pinecone_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_embedding_cohere))
-- `destination_pinecone_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_embedding_fake))
-- `destination_pinecone_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_embedding_open_ai))
-- `destination_pinecone_update_embedding_cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_update_embedding_cohere))
-- `destination_pinecone_update_embedding_fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_update_embedding_fake))
-- `destination_pinecone_update_embedding_open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--destination_pinecone_update_embedding_open_ai))
+- `azure_open_ai` (Attributes) Use the Azure-hosted OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--azure_open_ai))
+- `cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--cohere))
+- `fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--fake))
+- `open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai))
+- `open_ai_compatible` (Attributes) Use a service that's compatible with the OpenAI API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai_compatible))
-
-### Nested Schema for `configuration.embedding.destination_pinecone_embedding_cohere`
+
+### Nested Schema for `configuration.embedding.azure_open_ai`
Required:
-- `cohere_key` (String)
+- `api_base` (String) The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `deployment` (String) The deployment for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `openai_key` (String, Sensitive) The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
-Optional:
-- `mode` (String) must be one of ["cohere"]
+
+### Nested Schema for `configuration.embedding.cohere`
+Required:
-
-### Nested Schema for `configuration.embedding.destination_pinecone_embedding_fake`
+- `cohere_key` (String, Sensitive)
+
+
+
+### Nested Schema for `configuration.embedding.fake`
-Optional:
-- `mode` (String) must be one of ["fake"]
+
+### Nested Schema for `configuration.embedding.open_ai`
+
+Required:
+
+- `openai_key` (String, Sensitive)
-
-### Nested Schema for `configuration.embedding.destination_pinecone_embedding_open_ai`
+
+### Nested Schema for `configuration.embedding.open_ai_compatible`
Required:
-- `openai_key` (String)
+- `base_url` (String) The base URL for your OpenAI-compatible service
+- `dimensions` (Number) The number of dimensions the embedding model is generating
Optional:
-- `mode` (String) must be one of ["openai"]
+- `api_key` (String, Sensitive) Default: ""
+- `model_name` (String) Default: "text-embedding-ada-002"
+The name of the model to use for embedding
-
-### Nested Schema for `configuration.embedding.destination_pinecone_update_embedding_cohere`
+
+
+### Nested Schema for `configuration.indexing`
Required:
-- `cohere_key` (String)
+- `index` (String) Pinecone index in your project to load data into
+- `pinecone_environment` (String) Pinecone Cloud environment to use
+- `pinecone_key` (String, Sensitive) The Pinecone API key to use matching the environment (copy from Pinecone console)
-Optional:
-- `mode` (String) must be one of ["cohere"]
+
+### Nested Schema for `configuration.processing`
+Required:
-
-### Nested Schema for `configuration.embedding.destination_pinecone_update_embedding_fake`
+- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context of your LLM)
Optional:
-- `mode` (String) must be one of ["fake"]
-
+- `chunk_overlap` (Number) Default: 0
+Size of overlap between chunks in tokens to store in vector store to better capture relevant context
+- `field_name_mappings` (Attributes List) List of fields to rename. Not applicable for nested fields, but can be used to rename fields already flattened via dot notation. (see [below for nested schema](#nestedatt--configuration--processing--field_name_mappings))
+- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
+- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array.
+- `text_splitter` (Attributes) Split text fields into chunks based on the specified method. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter))
-
-### Nested Schema for `configuration.embedding.destination_pinecone_update_embedding_open_ai`
+
+### Nested Schema for `configuration.processing.field_name_mappings`
Required:
-- `openai_key` (String)
+- `from_field` (String) The field name in the source
+- `to_field` (String) The field name to use in the destination
-Optional:
-- `mode` (String) must be one of ["openai"]
+
+### Nested Schema for `configuration.processing.text_splitter`
+Optional:
+- `by_markdown_header` (Attributes) Split the text by Markdown headers down to the specified header level. If the chunk size fits multiple sections, they will be combined into a single chunk. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_markdown_header))
+- `by_programming_language` (Attributes) Split the text by suitable delimiters based on the programming language. This is useful for splitting code into chunks. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_programming_language))
+- `by_separator` (Attributes) Split the text by the list of separators until the chunk size is reached, using the earlier mentioned separators where possible. This is useful for splitting text fields by paragraphs, sentences, words, etc. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_separator))
-
-### Nested Schema for `configuration.indexing`
+
+### Nested Schema for `configuration.processing.text_splitter.by_markdown_header`
-Required:
+Optional:
-- `index` (String) Pinecone index to use
-- `pinecone_environment` (String) Pinecone environment to use
-- `pinecone_key` (String)
+- `split_level` (Number) Default: 1
+Level of markdown headers to split text fields by. Headings down to the specified level will be used as split points
-
-### Nested Schema for `configuration.processing`
+
+### Nested Schema for `configuration.processing.text_splitter.by_programming_language`
Required:
-- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context if your LLM)
+- `language` (String) must be one of ["cpp", "go", "java", "js", "php", "proto", "python", "rst", "ruby", "rust", "scala", "swift", "markdown", "latex", "html", "sol"]
+Split code in suitable places based on the programming language
+
+
+
+### Nested Schema for `configuration.processing.text_splitter.by_separator`
Optional:
-- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context
-- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
-- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array.
+- `keep_separator` (Boolean) Default: false
+Whether to keep the separator in the resulting chunks
+- `separators` (List of String) List of separator strings to split text fields by. The separator itself needs to be wrapped in double quotes, e.g. to split by the dot character, use ".". To split by a newline, use "\n".
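+
+The `by_separator` schema above can be exercised with a hedged Terraform sketch. The fragment below targets paragraph breaks first, then sentences; the enclosing resource and all values are placeholders, and each separator string carries its own double quotes as the attribute description requires:
+
+```terraform
+# Hypothetical fragment of a vector destination's `configuration` block.
+processing = {
+  chunk_size = 1000
+  text_splitter = {
+    by_separator = {
+      keep_separator = false
+      # Each entry is a quoted separator, per the schema: "\n\n", "\n", "."
+      separators = ["\"\\n\\n\"", "\"\\n\"", "\".\""]
+    }
+  }
+}
+```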
diff --git a/docs/resources/destination_postgres.md b/docs/resources/destination_postgres.md
index b75470f08..bcc1f9c49 100644
--- a/docs/resources/destination_postgres.md
+++ b/docs/resources/destination_postgres.md
@@ -15,27 +15,23 @@ DestinationPostgres Resource
```terraform
resource "airbyte_destination_postgres" "my_destination_postgres" {
configuration = {
- database = "...my_database..."
- destination_type = "postgres"
- host = "...my_host..."
- jdbc_url_params = "...my_jdbc_url_params..."
- password = "...my_password..."
- port = 5432
- schema = "public"
+ database = "...my_database..."
+ host = "...my_host..."
+ jdbc_url_params = "...my_jdbc_url_params..."
+ password = "...my_password..."
+ port = 5432
+ schema = "public"
ssl_mode = {
- destination_postgres_ssl_modes_allow = {
- mode = "allow"
- }
+ allow = {}
}
tunnel_method = {
- destination_postgres_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ destination_postgres_no_tunnel = {}
}
- username = "Foster.Borer"
+ username = "Burley_Kuhic"
}
- name = "Karen Kautzer"
- workspace_id = "904f3b11-94b8-4abf-a03a-79f9dfe0ab7d"
+ definition_id = "db19e64b-83f6-43d3-8837-0e173ec9d4f3"
+ name = "Dianna Dooley V"
+ workspace_id = "2a8a43c0-f29f-47cb-912b-320943801c36"
}
```
@@ -45,9 +41,13 @@ resource "airbyte_destination_postgres" "my_destination_postgres" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -59,16 +59,17 @@ resource "airbyte_destination_postgres" "my_destination_postgres" {
Required:
- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["postgres"]
- `host` (String) Hostname of the database.
-- `port` (Number) Port of the database.
-- `schema` (String) The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public".
- `username` (String) Username to use to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
+- `port` (Number) Default: 5432
+Port of the database.
+- `schema` (String) Default: "public"
+The default schema tables are written to if the source does not specify a namespace. The usual value for this field is "public".
- `ssl_mode` (Attributes) SSL connection modes.
disable - Chose this mode to disable encryption of communication between Airbyte and destination database
allow - Chose this mode to enable encryption only when required by the source database
@@ -84,137 +85,53 @@ Optional:
Optional:
-- `destination_postgres_ssl_modes_allow` (Attributes) Allow SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_allow))
-- `destination_postgres_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_disable))
-- `destination_postgres_ssl_modes_prefer` (Attributes) Prefer SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_prefer))
-- `destination_postgres_ssl_modes_require` (Attributes) Require SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_require))
-- `destination_postgres_ssl_modes_verify_ca` (Attributes) Verify-ca SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_verify_ca))
-- `destination_postgres_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_ssl_modes_verify_full))
-- `destination_postgres_update_ssl_modes_allow` (Attributes) Allow SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_allow))
-- `destination_postgres_update_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_disable))
-- `destination_postgres_update_ssl_modes_prefer` (Attributes) Prefer SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_prefer))
-- `destination_postgres_update_ssl_modes_require` (Attributes) Require SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_require))
-- `destination_postgres_update_ssl_modes_verify_ca` (Attributes) Verify-ca SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_verify_ca))
-- `destination_postgres_update_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_postgres_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_allow`
-
-Required:
-
-- `mode` (String) must be one of ["allow"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_disable`
-
-Required:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_prefer`
-
-Required:
-
-- `mode` (String) must be one of ["prefer"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_require`
-
-Required:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_verify_ca`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-ca"]
-
-Optional:
-
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_ssl_modes_verify_full`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `mode` (String) must be one of ["verify-full"]
-
-Optional:
-
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_allow`
-
-Required:
-
-- `mode` (String) must be one of ["allow"]
+- `allow` (Attributes) Allow SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--allow))
+- `disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--disable))
+- `prefer` (Attributes) Prefer SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--prefer))
+- `require` (Attributes) Require SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--require))
+- `verify_ca` (Attributes) Verify-ca SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_ca))
+- `verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_full))
+
+### Nested Schema for `configuration.ssl_mode.allow`
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_disable`
-Required:
+
+### Nested Schema for `configuration.ssl_mode.disable`
-- `mode` (String) must be one of ["disable"]
+
+### Nested Schema for `configuration.ssl_mode.prefer`
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_prefer`
-
-Required:
-- `mode` (String) must be one of ["prefer"]
+
+### Nested Schema for `configuration.ssl_mode.require`
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_require`
-
-Required:
-
-- `mode` (String) must be one of ["require"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_verify_ca`
+
+### Nested Schema for `configuration.ssl_mode.verify_ca`
Required:
- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-ca"]
Optional:
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
+- `client_key_password` (String, Sensitive) Password for key storage. This field is optional. If you do not provide it, the password will be generated automatically.
-
-### Nested Schema for `configuration.ssl_mode.destination_postgres_update_ssl_modes_verify_full`
+
+### Nested Schema for `configuration.ssl_mode.verify_full`
Required:
- `ca_certificate` (String) CA certificate
- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `mode` (String) must be one of ["verify-full"]
+- `client_key` (String, Sensitive) Client key
Optional:
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
+- `client_key_password` (String, Sensitive) Password for key storage. This field is optional. If you do not provide it, the password will be generated automatically.
@@ -223,80 +140,41 @@ Optional:
Optional:
-- `destination_postgres_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_ssh_tunnel_method_no_tunnel))
-- `destination_postgres_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_ssh_tunnel_method_password_authentication))
-- `destination_postgres_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_ssh_tunnel_method_ssh_key_authentication))
-- `destination_postgres_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_update_ssh_tunnel_method_no_tunnel))
-- `destination_postgres_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_update_ssh_tunnel_method_password_authentication))
-- `destination_postgres_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_postgres_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_ssh_tunnel_method_no_tunnel`
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_postgres_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
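+
+As a worked example, the `verify_ca` SSL mode and `ssh_key_authentication` tunnel options above combine like this. This is a hedged sketch: all hostnames, file paths, and IDs below are placeholders, not values from this document:
+
+```terraform
+resource "airbyte_destination_postgres" "tunneled_example" {
+  configuration = {
+    database = "analytics"
+    host     = "db.internal.example"
+    username = "airbyte"
+    password = "...my_password..."
+    ssl_mode = {
+      verify_ca = {
+        ca_certificate = file("ca.pem")
+        # client_key_password is optional; one is generated if omitted
+      }
+    }
+    tunnel_method = {
+      ssh_key_authentication = {
+        ssh_key     = file("myuser_rsa")
+        tunnel_host = "bastion.internal.example"
+        tunnel_user = "airbyte"
+        # tunnel_port is optional and defaults to 22
+      }
+    }
+  }
+  name         = "dev-postgres-tunneled"
+  workspace_id = "..."
+}
+```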
diff --git a/docs/resources/destination_pubsub.md b/docs/resources/destination_pubsub.md
index e4efa0825..fd9cd8338 100644
--- a/docs/resources/destination_pubsub.md
+++ b/docs/resources/destination_pubsub.md
@@ -15,18 +15,18 @@ DestinationPubsub Resource
```terraform
resource "airbyte_destination_pubsub" "my_destination_pubsub" {
configuration = {
- batching_delay_threshold = 7
+ batching_delay_threshold = 5
batching_element_count_threshold = 5
batching_enabled = true
batching_request_bytes_threshold = 3
credentials_json = "...my_credentials_json..."
- destination_type = "pubsub"
ordering_enabled = true
project_id = "...my_project_id..."
topic_id = "...my_topic_id..."
}
- name = "Phil Boyer"
- workspace_id = "f86bc173-d689-4eee-9526-f8d986e881ea"
+ definition_id = "b6294a31-a29a-4af3-8680-70eca1537042"
+ name = "Ada Harber"
+ workspace_id = "e54dc306-1658-46b7-b990-fea69beba7dc"
}
```
@@ -36,9 +36,13 @@ resource "airbyte_destination_pubsub" "my_destination_pubsub" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -49,17 +53,21 @@ resource "airbyte_destination_pubsub" "my_destination_pubsub" {
Required:
-- `batching_enabled` (Boolean) If TRUE messages will be buffered instead of sending them one by one
- `credentials_json` (String) The contents of the JSON service account key. Check out the docs if you need help generating this key.
-- `destination_type` (String) must be one of ["pubsub"]
-- `ordering_enabled` (Boolean) If TRUE PubSub publisher will have message ordering enabled. Every message will have an ordering key of stream
- `project_id` (String) The GCP project ID for the project containing the target PubSub.
- `topic_id` (String) The PubSub topic ID in the given GCP project ID.
Optional:
-- `batching_delay_threshold` (Number) Number of ms before the buffer is flushed
-- `batching_element_count_threshold` (Number) Number of messages before the buffer is flushed
-- `batching_request_bytes_threshold` (Number) Number of bytes before the buffer is flushed
+- `batching_delay_threshold` (Number) Default: 1
+Number of ms before the buffer is flushed
+- `batching_element_count_threshold` (Number) Default: 1
+Number of messages before the buffer is flushed
+- `batching_enabled` (Boolean) Default: false
+If TRUE messages will be buffered instead of sending them one by one
+- `batching_request_bytes_threshold` (Number) Default: 1
+Number of bytes before the buffer is flushed
+- `ordering_enabled` (Boolean) Default: false
+If TRUE PubSub publisher will have message ordering enabled. Every message will have an ordering key of stream
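+
+Per the attribute descriptions above, the batching thresholds work together: the buffer is flushed when a threshold is reached. A hedged sketch of the `configuration` block, with threshold values chosen purely for illustration:
+
+```terraform
+configuration = {
+  credentials_json = file("service-account.json")
+  project_id       = "...my_project_id..."
+  topic_id         = "...my_topic_id..."
+
+  batching_enabled                 = true
+  batching_delay_threshold         = 100     # flush after 100 ms, or
+  batching_element_count_threshold = 50      # after 50 buffered messages, or
+  batching_request_bytes_threshold = 1048576 # after 1 MiB of buffered data
+}
+```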
diff --git a/docs/resources/destination_qdrant.md b/docs/resources/destination_qdrant.md
new file mode 100644
index 000000000..460f59a7c
--- /dev/null
+++ b/docs/resources/destination_qdrant.md
@@ -0,0 +1,283 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_destination_qdrant Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ DestinationQdrant Resource
+---
+
+# airbyte_destination_qdrant (Resource)
+
+DestinationQdrant Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_destination_qdrant" "my_destination_qdrant" {
+ configuration = {
+ embedding = {
+ destination_qdrant_azure_open_ai = {
+ api_base = "https://your-resource-name.openai.azure.com"
+ deployment = "your-resource-name"
+ openai_key = "...my_openai_key..."
+ }
+ }
+ indexing = {
+ auth_method = {
+ api_key_auth = {
+ api_key = "...my_api_key..."
+ }
+ }
+ collection = "...my_collection..."
+ distance_metric = {
+ cos = {}
+ }
+ prefer_grpc = true
+ text_field = "...my_text_field..."
+ url = "...my_url..."
+ }
+ processing = {
+ chunk_overlap = 8
+ chunk_size = 9
+ field_name_mappings = [
+ {
+ from_field = "...my_from_field..."
+ to_field = "...my_to_field..."
+ },
+ ]
+ metadata_fields = [
+ "...",
+ ]
+ text_fields = [
+ "...",
+ ]
+ text_splitter = {
+ destination_qdrant_by_markdown_header = {
+ split_level = 9
+ }
+ }
+ }
+ }
+ definition_id = "8f8d8392-aab1-45fb-858b-ad9ea7671d58"
+ name = "Kathryn O'Keefe"
+ workspace_id = "9de520ce-3420-4a29-9e5c-09962877b187"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
+### Read-Only
+
+- `destination_id` (String)
+- `destination_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
+- `indexing` (Attributes) Indexing configuration (see [below for nested schema](#nestedatt--configuration--indexing))
+- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
+
+
+### Nested Schema for `configuration.embedding`
+
+Optional:
+
+- `azure_open_ai` (Attributes) Use the Azure-hosted OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--azure_open_ai))
+- `cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--cohere))
+- `fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--fake))
+- `from_field` (Attributes) Use a field in the record as the embedding. This is useful if you already have an embedding for your data and want to store it in the vector store. (see [below for nested schema](#nestedatt--configuration--embedding--from_field))
+- `open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai))
+- `open_ai_compatible` (Attributes) Use a service that's compatible with the OpenAI API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai_compatible))
+
+
+### Nested Schema for `configuration.embedding.azure_open_ai`
+
+Required:
+
+- `api_base` (String) The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `deployment` (String) The deployment for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `openai_key` (String, Sensitive) The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+
+
+
+### Nested Schema for `configuration.embedding.cohere`
+
+Required:
+
+- `cohere_key` (String, Sensitive)
+
+
+
+### Nested Schema for `configuration.embedding.fake`
+
+
+
+### Nested Schema for `configuration.embedding.from_field`
+
+Required:
+
+- `dimensions` (Number) The number of dimensions the embedding model is generating
+- `field_name` (String) Name of the field in the record that contains the embedding
+
+
+
+### Nested Schema for `configuration.embedding.open_ai`
+
+Required:
+
+- `openai_key` (String, Sensitive)
+
+
+
+### Nested Schema for `configuration.embedding.open_ai_compatible`
+
+Required:
+
+- `base_url` (String) The base URL for your OpenAI-compatible service
+- `dimensions` (Number) The number of dimensions the embedding model is generating
+
+Optional:
+
+- `api_key` (String, Sensitive) Default: ""
+- `model_name` (String) Default: "text-embedding-ada-002"
+The name of the model to use for embedding
+
+
+
+
+### Nested Schema for `configuration.indexing`
+
+Required:
+
+- `collection` (String) The collection to load data into
+- `url` (String) Public Endpoint of the Qdrant cluster
+
+Optional:
+
+- `auth_method` (Attributes) Method to authenticate with the Qdrant Instance (see [below for nested schema](#nestedatt--configuration--indexing--auth_method))
+- `distance_metric` (Attributes) The Distance metric used to measure similarities among vectors. This field is only used if the collection defined above does not exist yet and is created automatically by the connector. (see [below for nested schema](#nestedatt--configuration--indexing--distance_metric))
+- `prefer_grpc` (Boolean) Default: true
+Whether to prefer gRPC over HTTP. Set to true for Qdrant cloud clusters
+- `text_field` (String) Default: "text"
+The field in the payload that contains the embedded text
+
+
+### Nested Schema for `configuration.indexing.auth_method`
+
+Optional:
+
+- `api_key_auth` (Attributes) Method to authenticate with the Qdrant Instance (see [below for nested schema](#nestedatt--configuration--indexing--auth_method--api_key_auth))
+- `no_auth` (Attributes) Method to authenticate with the Qdrant Instance (see [below for nested schema](#nestedatt--configuration--indexing--auth_method--no_auth))
+
+
+### Nested Schema for `configuration.indexing.auth_method.api_key_auth`
+
+Required:
+
+- `api_key` (String, Sensitive) API Key for the Qdrant instance
+
+
+
+### Nested Schema for `configuration.indexing.auth_method.no_auth`
+
+
+
+
+### Nested Schema for `configuration.indexing.distance_metric`
+
+Optional:
+
+- `cos` (Attributes) The Distance metric used to measure similarities among vectors. This field is only used if the collection defined above does not exist yet and is created automatically by the connector. (see [below for nested schema](#nestedatt--configuration--indexing--distance_metric--cos))
+- `dot` (Attributes) The Distance metric used to measure similarities among vectors. This field is only used if the collection defined above does not exist yet and is created automatically by the connector. (see [below for nested schema](#nestedatt--configuration--indexing--distance_metric--dot))
+- `euc` (Attributes) The Distance metric used to measure similarities among vectors. This field is only used if the collection defined above does not exist yet and is created automatically by the connector. (see [below for nested schema](#nestedatt--configuration--indexing--distance_metric--euc))
+
+
+### Nested Schema for `configuration.indexing.distance_metric.cos`
+
+
+
+### Nested Schema for `configuration.indexing.distance_metric.dot`
+
+
+
+### Nested Schema for `configuration.indexing.distance_metric.euc`
+
+
+
+
+
+### Nested Schema for `configuration.processing`
+
+Required:
+
+- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context window of your LLM)
+
+Optional:
+
+- `chunk_overlap` (Number) Default: 0
+Size of overlap between chunks in tokens to store in vector store to better capture relevant context
+- `field_name_mappings` (Attributes List) List of fields to rename. Not applicable for nested fields, but can be used to rename fields already flattened via dot notation. (see [below for nested schema](#nestedatt--configuration--processing--field_name_mappings))
+- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
+- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array.
+- `text_splitter` (Attributes) Split text fields into chunks based on the specified method. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter))
+
+
+### Nested Schema for `configuration.processing.field_name_mappings`
+
+Required:
+
+- `from_field` (String) The field name in the source
+- `to_field` (String) The field name to use in the destination
+
+
+
+### Nested Schema for `configuration.processing.text_splitter`
+
+Optional:
+
+- `by_markdown_header` (Attributes) Split the text by Markdown headers down to the specified header level. If the chunk size fits multiple sections, they will be combined into a single chunk. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_markdown_header))
+- `by_programming_language` (Attributes) Split the text by suitable delimiters based on the programming language. This is useful for splitting code into chunks. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_programming_language))
+- `by_separator` (Attributes) Split the text by the list of separators until the chunk size is reached, using the earlier mentioned separators where possible. This is useful for splitting text fields by paragraphs, sentences, words, etc. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_separator))
+
+
+### Nested Schema for `configuration.processing.text_splitter.by_markdown_header`
+
+Optional:
+
+- `split_level` (Number) Default: 1
+Level of markdown headers to split text fields by. Headings down to the specified level will be used as split points
+
+
+
+### Nested Schema for `configuration.processing.text_splitter.by_programming_language`
+
+Required:
+
+- `language` (String) must be one of ["cpp", "go", "java", "js", "php", "proto", "python", "rst", "ruby", "rust", "scala", "swift", "markdown", "latex", "html", "sol"]
+Split code in suitable places based on the programming language
+
+
+
+### Nested Schema for `configuration.processing.text_splitter.by_separator`
+
+Optional:
+
+- `keep_separator` (Boolean) Default: false
+Whether to keep the separator in the resulting chunks
+- `separators` (List of String) List of separator strings to split text fields by. The separator itself needs to be wrapped in double quotes, e.g. to split by the dot character, use ".". To split by a newline, use "\n".
+
+
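
A minimal sketch of how the splitter options above could be combined in a `processing` block; only the attribute names come from the schema in this diff, while the resource context and concrete values are illustrative assumptions:

```terraform
# Hypothetical fragment — attribute names from the schema above,
# values invented for illustration.
processing = {
  # Embed only these fields; dot notation reaches nested fields.
  text_fields = ["title", "user.name"]
  field_name_mappings = [
    { from_field = "body", to_field = "content" }
  ]
  text_splitter = {
    by_markdown_header = {
      split_level = 2 # split at headings down to "##" (default is 1)
    }
  }
}
```

Exactly one of `by_markdown_header`, `by_programming_language`, or `by_separator` would be set, since they are alternatives within the same one-of block.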
diff --git a/docs/resources/destination_redis.md b/docs/resources/destination_redis.md
index 3392155a9..aa0cb75d7 100644
--- a/docs/resources/destination_redis.md
+++ b/docs/resources/destination_redis.md
@@ -15,26 +15,22 @@ DestinationRedis Resource
```terraform
resource "airbyte_destination_redis" "my_destination_redis" {
configuration = {
- cache_type = "hash"
- destination_type = "redis"
- host = "localhost,127.0.0.1"
- password = "...my_password..."
- port = 9
- ssl = false
+ cache_type = "hash"
+ host = "localhost,127.0.0.1"
+ password = "...my_password..."
+ port = 7
+ ssl = false
ssl_mode = {
- destination_redis_ssl_modes_disable = {
- mode = "disable"
- }
+ destination_redis_disable = {}
}
tunnel_method = {
- destination_redis_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ destination_redis_no_tunnel = {}
}
- username = "Vivianne.Baumbach3"
+ username = "Keyshawn.Ledner"
}
- name = "Bonnie Halvorson"
- workspace_id = "f94e29e9-73e9-422a-97a1-5be3e060807e"
+ definition_id = "34412bc3-217a-4cbe-aad9-f3186486fc7b"
+ name = "Shannon Stroman"
+ workspace_id = "848f4034-6c04-4b19-bfb2-8918e382726e"
}
```
@@ -44,9 +40,13 @@ resource "airbyte_destination_redis" "my_destination_redis" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -57,17 +57,18 @@ resource "airbyte_destination_redis" "my_destination_redis" {
Required:
-- `cache_type` (String) must be one of ["hash"]
-Redis cache type to store data in.
-- `destination_type` (String) must be one of ["redis"]
- `host` (String) Redis host to connect to.
-- `port` (Number) Port of Redis.
- `username` (String) Username associated with Redis.
Optional:
-- `password` (String) Password associated with Redis.
-- `ssl` (Boolean) Indicates whether SSL encryption protocol will be used to connect to Redis. It is recommended to use SSL connection if possible.
+- `cache_type` (String) must be one of ["hash"]; Default: "hash"
+Redis cache type to store data in.
+- `password` (String, Sensitive) Password associated with Redis.
+- `port` (Number) Default: 6379
+Port of Redis.
+- `ssl` (Boolean) Default: false
+Indicates whether SSL encryption protocol will be used to connect to Redis. It is recommended to use SSL connection if possible.
- `ssl_mode` (Attributes) SSL connection modes.
verify-full - This is the most secure mode. Always require encryption and verifies the identity of the source database server (see [below for nested schema](#nestedatt--configuration--ssl_mode))
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -77,55 +78,25 @@ Optional:
Optional:
-- `destination_redis_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_ssl_modes_disable))
-- `destination_redis_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_ssl_modes_verify_full))
-- `destination_redis_update_ssl_modes_disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_update_ssl_modes_disable))
-- `destination_redis_update_ssl_modes_verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--destination_redis_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_ssl_modes_disable`
-
-Required:
-
-- `mode` (String) must be one of ["disable"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_ssl_modes_verify_full`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `mode` (String) must be one of ["verify-full"]
-
-Optional:
-
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_update_ssl_modes_disable`
-
-Required:
+- `disable` (Attributes) Disable SSL. (see [below for nested schema](#nestedatt--configuration--ssl_mode--disable))
+- `verify_full` (Attributes) Verify-full SSL mode. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_full))
-- `mode` (String) must be one of ["disable"]
+
+### Nested Schema for `configuration.ssl_mode.disable`
-
-### Nested Schema for `configuration.ssl_mode.destination_redis_update_ssl_modes_verify_full`
+
+### Nested Schema for `configuration.ssl_mode.verify_full`
Required:
- `ca_certificate` (String) CA certificate
- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `mode` (String) must be one of ["verify-full"]
+- `client_key` (String, Sensitive) Client key
Optional:
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
+- `client_key_password` (String, Sensitive) Password for keystorage. If you do not add it - the password will be generated automatically.
@@ -134,80 +105,41 @@ Optional:
Optional:
-- `destination_redis_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_ssh_tunnel_method_no_tunnel))
-- `destination_redis_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_ssh_tunnel_method_password_authentication))
-- `destination_redis_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_ssh_tunnel_method_ssh_key_authentication))
-- `destination_redis_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_update_ssh_tunnel_method_no_tunnel))
-- `destination_redis_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_update_ssh_tunnel_method_password_authentication))
-- `destination_redis_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redis_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redis_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
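
The renamed one-of blocks above (`ssl_mode.verify_full`, `tunnel_method.password_authentication`) could be sketched as follows; attribute names are taken from the schema in this diff, and all values plus the surrounding context are illustrative assumptions:

```terraform
# Hypothetical fragment — shows the new block names without the old
# destination_redis_* prefixes; values invented for illustration.
configuration = {
  host     = "redis.internal"
  username = "airbyte"
  ssl      = true
  ssl_mode = {
    verify_full = {
      ca_certificate     = "...my_ca_certificate..."
      client_certificate = "...my_client_certificate..."
      client_key         = "...my_client_key..."
    }
  }
  tunnel_method = {
    password_authentication = {
      tunnel_host          = "jump.internal"
      tunnel_user          = "tunnel"
      tunnel_user_password = "...my_tunnel_user_password..."
      # tunnel_port is now optional and defaults to 22
    }
  }
}
```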
diff --git a/docs/resources/destination_redshift.md b/docs/resources/destination_redshift.md
index cf6d52877..e508a2712 100644
--- a/docs/resources/destination_redshift.md
+++ b/docs/resources/destination_redshift.md
@@ -15,41 +15,37 @@ DestinationRedshift Resource
```terraform
resource "airbyte_destination_redshift" "my_destination_redshift" {
configuration = {
- database = "...my_database..."
- destination_type = "redshift"
- host = "...my_host..."
- jdbc_url_params = "...my_jdbc_url_params..."
- password = "...my_password..."
- port = 5439
- schema = "public"
+ database = "...my_database..."
+ host = "...my_host..."
+ jdbc_url_params = "...my_jdbc_url_params..."
+ password = "...my_password..."
+ port = 5439
+ schema = "public"
tunnel_method = {
- destination_redshift_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ destination_redshift_no_tunnel = {}
}
uploading_method = {
- destination_redshift_uploading_method_s3_staging = {
+ s3_staging = {
access_key_id = "...my_access_key_id..."
encryption = {
- destination_redshift_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption = {
- encryption_type = "aes_cbc_envelope"
+ aes_cbc_envelope_encryption = {
key_encrypting_key = "...my_key_encrypting_key..."
}
}
file_buffer_count = 10
- file_name_pattern = "{timestamp}"
- method = "S3 Staging"
- purge_staging_data = false
+ file_name_pattern = "{date:yyyy_MM}"
+ purge_staging_data = true
s3_bucket_name = "airbyte.staging"
s3_bucket_path = "data_sync/test"
- s3_bucket_region = "us-west-2"
+ s3_bucket_region = "eu-west-1"
secret_access_key = "...my_secret_access_key..."
}
}
- username = "Margarette_Rau"
+ username = "Rollin_Ernser87"
}
- name = "Mrs. Geraldine Zulauf"
- workspace_id = "7a60ff2a-54a3-41e9-8764-a3e865e7956f"
+ definition_id = "1f9eaf9a-8e21-457a-8560-c89e77fd0c20"
+ name = "Linda Langworth"
+ workspace_id = "396de60f-942f-4937-a3c5-9508dd11c7ed"
}
```
@@ -59,9 +55,13 @@ resource "airbyte_destination_redshift" "my_destination_redshift" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -73,225 +73,116 @@ resource "airbyte_destination_redshift" "my_destination_redshift" {
Required:
- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["redshift"]
- `host` (String) Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com)
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `schema` (String) The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public".
+- `password` (String, Sensitive) Password associated with the username.
- `username` (String) Username to use to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
+- `port` (Number) Default: 5439
+Port of the database.
+- `schema` (String) Default: "public"
+The default schema tables are written to if the source does not specify a namespace. Unless specifically configured, the usual value for this field is "public".
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
-- `uploading_method` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method))
+- `uploading_method` (Attributes) The way data will be uploaded to Redshift. (see [below for nested schema](#nestedatt--configuration--uploading_method))
### Nested Schema for `configuration.tunnel_method`
Optional:
-- `destination_redshift_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_ssh_tunnel_method_no_tunnel))
-- `destination_redshift_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_ssh_tunnel_method_password_authentication))
-- `destination_redshift_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_ssh_tunnel_method_ssh_key_authentication))
-- `destination_redshift_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_update_ssh_tunnel_method_no_tunnel))
-- `destination_redshift_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_update_ssh_tunnel_method_password_authentication))
-- `destination_redshift_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_redshift_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_ssh_tunnel_method_no_tunnel`
-
-Required:
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_update_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
+Optional:
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_update_ssh_tunnel_method_password_authentication`
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-Required:
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_redshift_update_ssh_tunnel_method_ssh_key_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.uploading_method`
-
Optional:
-- `destination_redshift_update_uploading_method_s3_staging` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging))
-- `destination_redshift_update_uploading_method_standard` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_standard))
-- `destination_redshift_uploading_method_s3_staging` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging))
-- `destination_redshift_uploading_method_standard` (Attributes) The method how the data will be uploaded to the database. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_standard))
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging`
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-Required:
-
-- `access_key_id` (String) This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
-- `method` (String) must be one of ["S3 Staging"]
-- `s3_bucket_name` (String) The name of the staging S3 bucket to use if utilising a COPY strategy. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1"]
-The region of the S3 staging bucket to use if utilising a COPY strategy. See AWS docs for details.
-- `secret_access_key` (String) The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
-Optional:
-- `encryption` (Attributes) How to encrypt the staging data (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging--encryption))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
-- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `purge_staging_data` (Boolean) Whether to delete the staging files from S3 after completing the sync. See docs for details.
-- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging.s3_bucket_path`
+
+### Nested Schema for `configuration.uploading_method`
Optional:
-- `destination_redshift_update_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption` (Attributes) Staging data will be encrypted using AES-CBC envelope encryption. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging--s3_bucket_path--destination_redshift_update_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption))
-- `destination_redshift_update_uploading_method_s3_staging_encryption_no_encryption` (Attributes) Staging data will be stored in plaintext. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_update_uploading_method_s3_staging--s3_bucket_path--destination_redshift_update_uploading_method_s3_staging_encryption_no_encryption))
+- `s3_staging` (Attributes) (recommended) Uploads data to S3 and then uses a COPY to insert the data into Redshift. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details. (see [below for nested schema](#nestedatt--configuration--uploading_method--s3_staging))
+- `standard` (Attributes) (not recommended) Direct loading using SQL INSERT statements. This method is extremely inefficient and provided only for quick testing. In all other cases, you should use S3 uploading. (see [below for nested schema](#nestedatt--configuration--uploading_method--standard))
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging.s3_bucket_path.destination_redshift_update_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption`
+
+### Nested Schema for `configuration.uploading_method.s3_staging`
Required:
-- `encryption_type` (String) must be one of ["aes_cbc_envelope"]
+- `access_key_id` (String, Sensitive) This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
+- `s3_bucket_name` (String) The name of the staging S3 bucket.
+- `secret_access_key` (String, Sensitive) The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
Optional:
-- `key_encrypting_key` (String) The key, base64-encoded. Must be either 128, 192, or 256 bits. Leave blank to have Airbyte generate an ephemeral key for each sync.
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_s3_staging.s3_bucket_path.destination_redshift_update_uploading_method_s3_staging_encryption_no_encryption`
-
-Required:
-
-- `encryption_type` (String) must be one of ["none"]
-
-
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_update_uploading_method_standard`
-
-Required:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging`
-
-Required:
-
-- `access_key_id` (String) This ID grants access to the above S3 staging bucket. Airbyte requires Read and Write permissions to the given bucket. See AWS docs on how to generate an access key ID and secret access key.
-- `method` (String) must be one of ["S3 Staging"]
-- `s3_bucket_name` (String) The name of the staging S3 bucket to use if utilising a COPY strategy. COPY is recommended for production workloads for better speed and scalability. See AWS docs for more details.
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1"]
-The region of the S3 staging bucket to use if utilising a COPY strategy. See AWS docs for details.
-- `secret_access_key` (String) The corresponding secret to the above access key id. See AWS docs on how to generate an access key ID and secret access key.
-
-Optional:
-
-- `encryption` (Attributes) How to encrypt the staging data (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging--encryption))
-- `file_buffer_count` (Number) Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
+- `encryption` (Attributes) How to encrypt the staging data (see [below for nested schema](#nestedatt--configuration--uploading_method--s3_staging--encryption))
+- `file_buffer_count` (Number) Default: 10
+Number of file buffers allocated for writing data. Increasing this number is beneficial for connections using Change Data Capture (CDC) and up to the number of streams within a connection. Increasing the number of file buffers past the maximum number of streams has deteriorating effects
- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `purge_staging_data` (Boolean) Whether to delete the staging files from S3 after completing the sync. See docs for details.
+- `purge_staging_data` (Boolean) Default: true
+Whether to delete the staging files from S3 after completing the sync. See docs for details.
- `s3_bucket_path` (String) The directory under the S3 bucket where data will be written. If not provided, then defaults to the root directory. See path's name recommendations for more details.
+- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1"]; Default: ""
+The region of the S3 staging bucket.
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging.s3_bucket_path`
+
+### Nested Schema for `configuration.uploading_method.s3_staging.s3_bucket_region`
Optional:
-- `destination_redshift_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption` (Attributes) Staging data will be encrypted using AES-CBC envelope encryption. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging--s3_bucket_path--destination_redshift_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption))
-- `destination_redshift_uploading_method_s3_staging_encryption_no_encryption` (Attributes) Staging data will be stored in plaintext. (see [below for nested schema](#nestedatt--configuration--uploading_method--destination_redshift_uploading_method_s3_staging--s3_bucket_path--destination_redshift_uploading_method_s3_staging_encryption_no_encryption))
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging.s3_bucket_path.destination_redshift_uploading_method_s3_staging_encryption_aes_cbc_envelope_encryption`
+- `aescbc_envelope_encryption` (Attributes) Staging data will be encrypted using AES-CBC envelope encryption. (see [below for nested schema](#nestedatt--configuration--uploading_method--s3_staging--s3_bucket_region--aescbc_envelope_encryption))
+- `no_encryption` (Attributes) Staging data will be stored in plaintext. (see [below for nested schema](#nestedatt--configuration--uploading_method--s3_staging--s3_bucket_region--no_encryption))
-Required:
-
-- `encryption_type` (String) must be one of ["aes_cbc_envelope"]
+
+### Nested Schema for `configuration.uploading_method.s3_staging.s3_bucket_region.aescbc_envelope_encryption`
Optional:
-- `key_encrypting_key` (String) The key, base64-encoded. Must be either 128, 192, or 256 bits. Leave blank to have Airbyte generate an ephemeral key for each sync.
-
-
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_s3_staging.s3_bucket_path.destination_redshift_uploading_method_s3_staging_encryption_no_encryption`
+- `key_encrypting_key` (String, Sensitive) The key, base64-encoded. Must be either 128, 192, or 256 bits. Leave blank to have Airbyte generate an ephemeral key for each sync.
-Required:
-
-- `encryption_type` (String) must be one of ["none"]
+
+### Nested Schema for `configuration.uploading_method.s3_staging.s3_bucket_region.no_encryption`
-
-### Nested Schema for `configuration.uploading_method.destination_redshift_uploading_method_standard`
-
-Required:
-- `method` (String) must be one of ["Standard"]
+
+### Nested Schema for `configuration.uploading_method.standard`
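+
+As a minimal sketch, the renamed `s3_staging` and `aescbc_envelope_encryption` schemas above would be wired together like this (only the `uploading_method` block is shown; the bucket, region, and credential values are hypothetical placeholders, not defaults):
+
+```terraform
+# Hypothetical Redshift destination snippet using the renamed nested schemas.
+uploading_method = {
+  s3_staging = {
+    access_key_id     = "A012345678910EXAMPLE"    # placeholder credential
+    secret_access_key = "a012345678910EXAMPLEKEY" # placeholder credential
+    s3_bucket_name    = "redshift-staging-bucket" # hypothetical bucket
+    s3_bucket_region  = "us-east-1"
+    purge_staging_data = true                     # schema default
+    encryption = {
+      aescbc_envelope_encryption = {
+        # Leave key_encrypting_key unset to have Airbyte generate an
+        # ephemeral key for each sync.
+      }
+    }
+  }
+}
+```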
diff --git a/docs/resources/destination_s3.md b/docs/resources/destination_s3.md
index 38141cdd1..c513bb9e5 100644
--- a/docs/resources/destination_s3.md
+++ b/docs/resources/destination_s3.md
@@ -16,12 +16,11 @@ DestinationS3 Resource
resource "airbyte_destination_s3" "my_destination_s3" {
configuration = {
access_key_id = "A012345678910EXAMPLE"
- destination_type = "s3"
- file_name_pattern = "{timestamp}"
+ file_name_pattern = "{date}"
format = {
- destination_s3_output_format_avro_apache_avro = {
+ destination_s3_avro_apache_avro = {
compression_codec = {
- destination_s3_output_format_avro_apache_avro_compression_codec_bzip2 = {
+ destination_s3_bzip2 = {
codec = "bzip2"
}
}
@@ -30,13 +29,14 @@ resource "airbyte_destination_s3" "my_destination_s3" {
}
s3_bucket_name = "airbyte_sync"
s3_bucket_path = "data_sync/test"
- s3_bucket_region = "us-west-1"
+ s3_bucket_region = "ap-southeast-1"
s3_endpoint = "http://localhost:9000"
s3_path_format = "${NAMESPACE}/${STREAM_NAME}/${YEAR}_${MONTH}_${DAY}_${EPOCH}_"
secret_access_key = "a012345678910ABCDEFGH/AbCdEfGhEXAMPLEKEY"
}
- name = "Joyce O'Kon"
- workspace_id = "9da660ff-57bf-4aad-8f9e-fc1b4512c103"
+ definition_id = "b1d5b002-89a0-4dc0-a329-a5cae9f38884"
+ name = "Lloyd Watsica"
+ workspace_id = "20ebb305-f362-44c4-b900-725fa3e33722"
}
```
@@ -46,9 +46,13 @@ resource "airbyte_destination_s3" "my_destination_s3" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination, e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -59,378 +63,201 @@ resource "airbyte_destination_s3" "my_destination_s3" {
Required:
-- `destination_type` (String) must be one of ["s3"]
- `format` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format))
- `s3_bucket_name` (String) The name of the S3 bucket. Read more here.
- `s3_bucket_path` (String) Directory under the S3 bucket where data will be written. Read more here
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 bucket. See here for all region codes.
Optional:
-- `access_key_id` (String) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
+- `access_key_id` (String, Sensitive) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `s3_endpoint` (String) Your S3 endpoint url. Read more here
+- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]; Default: ""
+The region of the S3 bucket. See here for all region codes.
+- `s3_endpoint` (String) Default: ""
+Your S3 endpoint URL. Read more here.
- `s3_path_format` (String) Format string on how data will be organized inside the S3 bucket directory. Read more here
-- `secret_access_key` (String) The corresponding secret to the access key ID. Read more here
+- `secret_access_key` (String, Sensitive) The corresponding secret to the access key ID. Read more here
### Nested Schema for `configuration.format`
Optional:
-- `destination_s3_output_format_avro_apache_avro` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro))
-- `destination_s3_output_format_csv_comma_separated_values` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values))
-- `destination_s3_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json))
-- `destination_s3_output_format_parquet_columnar_storage` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_parquet_columnar_storage))
-- `destination_s3_update_output_format_avro_apache_avro` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro))
-- `destination_s3_update_output_format_csv_comma_separated_values` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values))
-- `destination_s3_update_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json))
-- `destination_s3_update_output_format_parquet_columnar_storage` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_parquet_columnar_storage))
+- `avro_apache_avro` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro))
+- `csv_comma_separated_values` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values))
+- `json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json))
+- `parquet_columnar_storage` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--parquet_columnar_storage))
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro`
+
+### Nested Schema for `configuration.format.avro_apache_avro`
Required:
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type`
+- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--compression_codec))
Optional:
-- `destination_s3_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_s3_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_avro_apache_avro--format_type--destination_s3_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Required:
-
-- `codec` (String) must be one of ["bzip2"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_deflate`
-
-Required:
-
-- `codec` (String) must be one of ["Deflate"]
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Required:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_snappy`
-
-Required:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_xz`
-
-Required:
-
-- `codec` (String) must be one of ["xz"]
-- `compression_level` (Number) See here for details.
+- `format_type` (String) must be one of ["Avro"]; Default: "Avro"
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_avro_apache_avro.format_type.destination_s3_output_format_avro_apache_avro_compression_codec_zstandard`
-
-Required:
-
-- `codec` (String) must be one of ["zstandard"]
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type`
Optional:
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
+- `bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--bzip2))
+- `deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--deflate))
+- `no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--no_compression))
+- `snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--snappy))
+- `xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--xz))
+- `zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--avro_apache_avro--format_type--zstandard))
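+
+The shorter option names above slot directly into the resource example at the top of this page; as an illustrative sketch, selecting `zstandard` instead of `bzip2` would look like this (the level and checksum values shown are the schema defaults):
+
+```terraform
+# Hypothetical format block using the renamed avro_apache_avro schema.
+format = {
+  avro_apache_avro = {
+    compression_codec = {
+      zstandard = {
+        codec             = "zstandard"
+        compression_level = 3     # default; higher trades speed for size
+        include_checksum  = false # default; set true to checksum each block
+      }
+    }
+  }
+}
+```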
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values`
-
-Required:
-
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.bzip2`
Optional:
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values--compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values.compression`
-
-Optional:
+- `codec` (String) must be one of ["bzip2"]; Default: "bzip2"
-- `destination_s3_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values--compression--destination_s3_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_s3_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_csv_comma_separated_values--compression--destination_s3_output_format_csv_comma_separated_values_compression_no_compression))
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values.compression.destination_s3_output_format_csv_comma_separated_values_compression_gzip`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.deflate`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `codec` (String) must be one of ["Deflate"]; Default: "Deflate"
+- `compression_level` (Number) Default: 0
+0: no compression & fastest, 9: best compression & slowest.
-
-### Nested Schema for `configuration.format.destination_s3_output_format_csv_comma_separated_values.compression.destination_s3_output_format_csv_comma_separated_values_compression_no_compression`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.no_compression`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
+- `codec` (String) must be one of ["no compression"]; Default: "no compression"
-
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json`
-
-Required:
-
-- `format_type` (String) must be one of ["JSONL"]
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.snappy`
Optional:
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
-
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json.flattening`
+- `codec` (String) must be one of ["snappy"]; Default: "snappy"
-Optional:
-
-- `destination_s3_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json--flattening--destination_s3_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_output_format_json_lines_newline_delimited_json--flattening--destination_s3_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json.flattening.destination_s3_output_format_json_lines_newline_delimited_json_compression_gzip`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.xz`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `codec` (String) must be one of ["xz"]; Default: "xz"
+- `compression_level` (Number) Default: 6
+See here for details.
-
-### Nested Schema for `configuration.format.destination_s3_output_format_json_lines_newline_delimited_json.flattening.destination_s3_output_format_json_lines_newline_delimited_json_compression_no_compression`
+
+### Nested Schema for `configuration.format.avro_apache_avro.format_type.zstandard`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
+- `codec` (String) must be one of ["zstandard"]; Default: "zstandard"
+- `compression_level` (Number) Default: 3
+Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
+- `include_checksum` (Boolean) Default: false
+If true, include a checksum with each data block.
-
-### Nested Schema for `configuration.format.destination_s3_output_format_parquet_columnar_storage`
-
-Required:
-
-- `format_type` (String) must be one of ["Parquet"]
+
+### Nested Schema for `configuration.format.csv_comma_separated_values`
Optional:
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
-The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro`
-
-Required:
-
-- `compression_codec` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--compression_codec))
-- `format_type` (String) must be one of ["Avro"]
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type`
-
-Optional:
-
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_bzip2` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_bzip2))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_deflate` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_deflate))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_no_compression` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_no_compression))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_snappy` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_snappy))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_xz` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_xz))
-- `destination_s3_update_output_format_avro_apache_avro_compression_codec_zstandard` (Attributes) The compression algorithm used to compress data. Default to no compression. (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_avro_apache_avro--format_type--destination_s3_update_output_format_avro_apache_avro_compression_codec_zstandard))
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_bzip2`
-
-Required:
-
-- `codec` (String) must be one of ["bzip2"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_deflate`
-
-Required:
-
-- `codec` (String) must be one of ["Deflate"]
-- `compression_level` (Number) 0: no compression & fastest, 9: best compression & slowest.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_no_compression`
-
-Required:
-
-- `codec` (String) must be one of ["no compression"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_snappy`
-
-Required:
-
-- `codec` (String) must be one of ["snappy"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_xz`
-
-Required:
-
-- `codec` (String) must be one of ["xz"]
-- `compression_level` (Number) See here for details.
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_avro_apache_avro.format_type.destination_s3_update_output_format_avro_apache_avro_compression_codec_zstandard`
-
-Required:
-
-- `codec` (String) must be one of ["zstandard"]
-- `compression_level` (Number) Negative levels are 'fast' modes akin to lz4 or snappy, levels above 9 are generally for archival purposes, and levels above 18 use a lot of memory.
-
-Optional:
-
-- `include_checksum` (Boolean) If true, include a checksum with each data block.
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values`
-
-Required:
-
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
+- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values--compression))
+- `flattening` (String) must be one of ["No flattening", "Root level flattening"]; Default: "No flattening"
Whether the input json data should be normalized (flattened) in the output CSV. Please refer to docs for details.
-- `format_type` (String) must be one of ["CSV"]
-
-Optional:
+- `format_type` (String) must be one of ["CSV"]; Default: "CSV"
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values--compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values.compression`
+
+### Nested Schema for `configuration.format.csv_comma_separated_values.format_type`
Optional:
-- `destination_s3_update_output_format_csv_comma_separated_values_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values--compression--destination_s3_update_output_format_csv_comma_separated_values_compression_gzip))
-- `destination_s3_update_output_format_csv_comma_separated_values_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_csv_comma_separated_values--compression--destination_s3_update_output_format_csv_comma_separated_values_compression_no_compression))
+- `gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values--format_type--gzip))
+- `no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".csv.gz"). (see [below for nested schema](#nestedatt--configuration--format--csv_comma_separated_values--format_type--no_compression))
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values.compression.destination_s3_update_output_format_csv_comma_separated_values_compression_gzip`
+
+### Nested Schema for `configuration.format.csv_comma_separated_values.format_type.gzip`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `compression_type` (String) must be one of ["GZIP"]; Default: "GZIP"
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_csv_comma_separated_values.compression.destination_s3_update_output_format_csv_comma_separated_values_compression_no_compression`
+
+### Nested Schema for `configuration.format.csv_comma_separated_values.format_type.no_compression`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
-
+- `compression_type` (String) must be one of ["No Compression"]; Default: "No Compression"
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json`
-
-Required:
-- `format_type` (String) must be one of ["JSONL"]
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json`
Optional:
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
+- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--compression))
+- `flattening` (String) must be one of ["No flattening", "Root level flattening"]; Default: "No flattening"
Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
+- `format_type` (String) must be one of ["JSONL"]; Default: "JSONL"
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json.flattening`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type`
Optional:
-- `destination_s3_update_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json--flattening--destination_s3_update_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_update_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_update_output_format_json_lines_newline_delimited_json--flattening--destination_s3_update_output_format_json_lines_newline_delimited_json_compression_no_compression))
+- `gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--format_type--gzip))
+- `no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--format_type--no_compression))
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json.flattening.destination_s3_update_output_format_json_lines_newline_delimited_json_compression_gzip`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type.gzip`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `compression_type` (String) must be one of ["GZIP"]; Default: "GZIP"
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_json_lines_newline_delimited_json.flattening.destination_s3_update_output_format_json_lines_newline_delimited_json_compression_no_compression`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type.no_compression`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
+- `compression_type` (String) must be one of ["No Compression"]; Default: "No Compression"
-
-### Nested Schema for `configuration.format.destination_s3_update_output_format_parquet_columnar_storage`
-
-Required:
-
-- `format_type` (String) must be one of ["Parquet"]
+
+### Nested Schema for `configuration.format.parquet_columnar_storage`
Optional:
-- `block_size_mb` (Number) This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
-- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]
+- `block_size_mb` (Number) Default: 128
+This is the size of a row group being buffered in memory. It limits the memory usage when writing. Larger values will improve the IO when reading, but consume more memory when writing. Default: 128 MB.
+- `compression_codec` (String) must be one of ["UNCOMPRESSED", "SNAPPY", "GZIP", "LZO", "BROTLI", "LZ4", "ZSTD"]; Default: "UNCOMPRESSED"
The compression algorithm used to compress data pages.
-- `dictionary_encoding` (Boolean) Default: true.
-- `dictionary_page_size_kb` (Number) There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
-- `max_padding_size_mb` (Number) Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
-- `page_size_kb` (Number) The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
+- `dictionary_encoding` (Boolean) Default: true
+- `dictionary_page_size_kb` (Number) Default: 1024
+There is one dictionary page per column per row group when dictionary encoding is used. The dictionary page size works like the page size but for dictionary. Default: 1024 KB.
+- `format_type` (String) must be one of ["Parquet"]; Default: "Parquet"
+- `max_padding_size_mb` (Number) Default: 8
+Maximum size allowed as padding to align row groups. This is also the minimum size of a row group. Default: 8 MB.
+- `page_size_kb` (Number) Default: 1024
+The page size is for compression. A block is composed of pages. A page is the smallest unit that must be read fully to access a single record. If this value is too small, the compression will deteriorate. Default: 1024 KB.
diff --git a/docs/resources/destination_s3_glue.md b/docs/resources/destination_s3_glue.md
index 02f7619d4..c9f9b8866 100644
--- a/docs/resources/destination_s3_glue.md
+++ b/docs/resources/destination_s3_glue.md
@@ -16,30 +16,30 @@ DestinationS3Glue Resource
resource "airbyte_destination_s3_glue" "my_destination_s3glue" {
configuration = {
access_key_id = "A012345678910EXAMPLE"
- destination_type = "s3-glue"
- file_name_pattern = "{date}"
+ file_name_pattern = "{sync_id}"
format = {
- destination_s3_glue_output_format_json_lines_newline_delimited_json = {
+ destination_s3_glue_json_lines_newline_delimited_json = {
compression = {
- destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_gzip = {
+ destination_s3_glue_gzip = {
compression_type = "GZIP"
}
}
- flattening = "No flattening"
+ flattening = "Root level flattening"
format_type = "JSONL"
}
}
glue_database = "airbyte_database"
- glue_serialization_library = "org.openx.data.jsonserde.JsonSerDe"
+ glue_serialization_library = "org.apache.hive.hcatalog.data.JsonSerDe"
s3_bucket_name = "airbyte_sync"
s3_bucket_path = "data_sync/test"
- s3_bucket_region = "ca-central-1"
+ s3_bucket_region = "eu-central-1"
s3_endpoint = "http://localhost:9000"
s3_path_format = "${NAMESPACE}/${STREAM_NAME}/${YEAR}_${MONTH}_${DAY}_${EPOCH}_"
secret_access_key = "a012345678910ABCDEFGH/AbCdEfGhEXAMPLEKEY"
}
- name = "Edmund Daugherty"
- workspace_id = "15199ebf-d0e9-4fe6-8632-ca3aed011799"
+ definition_id = "2f8e06ef-6fed-4365-9e7d-5496735da213"
+ name = "Jordan Johnston"
+ workspace_id = "b9fef8f5-3876-4e3d-a30a-86e4df19faac"
}
```
@@ -49,9 +49,13 @@ resource "airbyte_destination_s3_glue" "my_destination_s3glue" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -62,105 +66,62 @@ resource "airbyte_destination_s3_glue" "my_destination_s3glue" {
Required:
-- `destination_type` (String) must be one of ["s3-glue"]
- `format` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format))
- `glue_database` (String) Name of the glue database for creating the tables, leave blank if no integration
-- `glue_serialization_library` (String) must be one of ["org.openx.data.jsonserde.JsonSerDe", "org.apache.hive.hcatalog.data.JsonSerDe"]
-The library that your query engine will use for reading and writing data in your lake.
- `s3_bucket_name` (String) The name of the S3 bucket. Read more here.
- `s3_bucket_path` (String) Directory under the S3 bucket where data will be written. Read more here
-- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
-The region of the S3 bucket. See here for all region codes.
Optional:
-- `access_key_id` (String) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
+- `access_key_id` (String, Sensitive) The access key ID to access the S3 bucket. Airbyte requires Read and Write permissions to the given bucket. Read more here.
- `file_name_pattern` (String) The pattern allows you to set the file-name format for the S3 staging file(s)
-- `s3_endpoint` (String) Your S3 endpoint url. Read more here
+- `glue_serialization_library` (String) must be one of ["org.openx.data.jsonserde.JsonSerDe", "org.apache.hive.hcatalog.data.JsonSerDe"]; Default: "org.openx.data.jsonserde.JsonSerDe"
+The library that your query engine will use for reading and writing data in your lake.
+- `s3_bucket_region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]; Default: ""
+The region of the S3 bucket. See here for all region codes.
+- `s3_endpoint` (String) Default: ""
+Your S3 endpoint URL. Read more here
- `s3_path_format` (String) Format string on how data will be organized inside the S3 bucket directory. Read more here
-- `secret_access_key` (String) The corresponding secret to the access key ID. Read more here
+- `secret_access_key` (String, Sensitive) The corresponding secret to the access key ID. Read more here
### Nested Schema for `configuration.format`
Optional:
-- `destination_s3_glue_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json))
-- `destination_s3_glue_update_output_format_json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json))
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json`
-
-Required:
-
-- `format_type` (String) must be one of ["JSONL"]
-
-Optional:
-
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
-Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json.flattening`
-
-Optional:
-
-- `destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json--flattening--destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_output_format_json_lines_newline_delimited_json--flattening--destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_no_compression))
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json.flattening.destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_gzip`
-
-Optional:
-
-- `compression_type` (String) must be one of ["GZIP"]
-
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_output_format_json_lines_newline_delimited_json.flattening.destination_s3_glue_output_format_json_lines_newline_delimited_json_compression_no_compression`
-
-Optional:
-
-- `compression_type` (String) must be one of ["No Compression"]
-
-
-
-
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json`
-
-Required:
+- `json_lines_newline_delimited_json` (Attributes) Format of the data output. See here for more details (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json))
-- `format_type` (String) must be one of ["JSONL"]
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json`
Optional:
-- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json--compression))
-- `flattening` (String) must be one of ["No flattening", "Root level flattening"]
+- `compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--compression))
+- `flattening` (String) must be one of ["No flattening", "Root level flattening"]; Default: "Root level flattening"
Whether the input json data should be normalized (flattened) in the output JSON Lines. Please refer to docs for details.
+- `format_type` (String) must be one of ["JSONL"]; Default: "JSONL"
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json.flattening`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type`
Optional:
-- `destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json--flattening--destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_gzip))
-- `destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--destination_s3_glue_update_output_format_json_lines_newline_delimited_json--flattening--destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_no_compression))
+- `gzip` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--format_type--gzip))
+- `no_compression` (Attributes) Whether the output files should be compressed. If compression is selected, the output filename will have an extra extension (GZIP: ".jsonl.gz"). (see [below for nested schema](#nestedatt--configuration--format--json_lines_newline_delimited_json--format_type--no_compression))
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json.flattening.destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_gzip`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type.gzip`
Optional:
-- `compression_type` (String) must be one of ["GZIP"]
+- `compression_type` (String) must be one of ["GZIP"]; Default: "GZIP"
-
-### Nested Schema for `configuration.format.destination_s3_glue_update_output_format_json_lines_newline_delimited_json.flattening.destination_s3_glue_update_output_format_json_lines_newline_delimited_json_compression_no_compression`
+
+### Nested Schema for `configuration.format.json_lines_newline_delimited_json.format_type.no_compression`
Optional:
-- `compression_type` (String) must be one of ["No Compression"]
+- `compression_type` (String) must be one of ["No Compression"]; Default: "No Compression"
diff --git a/docs/resources/destination_sftp_json.md b/docs/resources/destination_sftp_json.md
index a7e3c7f07..94c01a37a 100644
--- a/docs/resources/destination_sftp_json.md
+++ b/docs/resources/destination_sftp_json.md
@@ -16,14 +16,14 @@ DestinationSftpJSON Resource
resource "airbyte_destination_sftp_json" "my_destination_sftpjson" {
configuration = {
destination_path = "/json_data"
- destination_type = "sftp-json"
host = "...my_host..."
password = "...my_password..."
port = 22
- username = "Dayton98"
+ username = "Deshawn10"
}
- name = "Terence Beer"
- workspace_id = "71778ff6-1d01-4747-a360-a15db6a66065"
+ definition_id = "846ef364-4196-4a04-bb96-66e7d15e7eed"
+ name = "Frederick Howell"
+ workspace_id = "586b689f-dc13-4c29-afcf-ab73b9ba5d30"
}
```
@@ -33,9 +33,13 @@ resource "airbyte_destination_sftp_json" "my_destination_sftpjson" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -47,13 +51,13 @@ resource "airbyte_destination_sftp_json" "my_destination_sftpjson" {
Required:
- `destination_path` (String) Path to the directory where json files will be written.
-- `destination_type` (String) must be one of ["sftp-json"]
- `host` (String) Hostname of the SFTP server.
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
- `username` (String) Username to use to access the SFTP server.
Optional:
-- `port` (Number) Port of the SFTP server.
+- `port` (Number) Default: 22
+Port of the SFTP server.
diff --git a/docs/resources/destination_snowflake.md b/docs/resources/destination_snowflake.md
index 53a90329c..a2d1478de 100644
--- a/docs/resources/destination_snowflake.md
+++ b/docs/resources/destination_snowflake.md
@@ -16,24 +16,24 @@ DestinationSnowflake Resource
resource "airbyte_destination_snowflake" "my_destination_snowflake" {
configuration = {
credentials = {
- destination_snowflake_authorization_method_key_pair_authentication = {
- auth_type = "Key Pair Authentication"
+ key_pair_authentication = {
private_key = "...my_private_key..."
private_key_password = "...my_private_key_password..."
}
}
- database = "AIRBYTE_DATABASE"
- destination_type = "snowflake"
- host = "accountname.snowflakecomputing.com"
- jdbc_url_params = "...my_jdbc_url_params..."
- raw_data_schema = "...my_raw_data_schema..."
- role = "AIRBYTE_ROLE"
- schema = "AIRBYTE_SCHEMA"
- username = "AIRBYTE_USER"
- warehouse = "AIRBYTE_WAREHOUSE"
+ database = "AIRBYTE_DATABASE"
+ disable_type_dedupe = true
+ host = "accountname.us-east-2.aws.snowflakecomputing.com"
+ jdbc_url_params = "...my_jdbc_url_params..."
+ raw_data_schema = "...my_raw_data_schema..."
+ role = "AIRBYTE_ROLE"
+ schema = "AIRBYTE_SCHEMA"
+ username = "AIRBYTE_USER"
+ warehouse = "AIRBYTE_WAREHOUSE"
}
- name = "Shaun Osinski"
- workspace_id = "851d6c64-5b08-4b61-891b-aa0fe1ade008"
+ definition_id = "d28dce71-d7fd-4713-a64c-8ab088c248e9"
+ name = "Robin Marvin"
+ workspace_id = "3407545d-5006-486d-84e6-08039bc7eb07"
}
```
@@ -43,9 +43,13 @@ resource "airbyte_destination_snowflake" "my_destination_snowflake" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -57,7 +61,6 @@ resource "airbyte_destination_snowflake" "my_destination_snowflake" {
Required:
- `database` (String) Enter the name of the database you want to sync data into
-- `destination_type` (String) must be one of ["snowflake"]
- `host` (String) Enter your Snowflake account's locator (in the format ...snowflakecomputing.com)
- `role` (String) Enter the role that you want to use to access Snowflake
- `schema` (String) Enter the name of the default schema
@@ -67,98 +70,51 @@ Required:
Optional:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
+- `disable_type_dedupe` (Boolean) Default: false
+Disable Writing Final Tables. WARNING! The data format in _airbyte_data is likely stable but there are no guarantees that other metadata columns will remain the same in future versions
- `jdbc_url_params` (String) Enter the additional properties to pass to the JDBC URL string when connecting to the database (formatted as key=value pairs separated by the symbol &). Example: key1=value1&key2=value2&key3=value3
-- `raw_data_schema` (String) The schema to write raw tables into
+- `raw_data_schema` (String) The schema to write raw tables into (default: airbyte_internal)
### Nested Schema for `configuration.credentials`
Optional:
-- `destination_snowflake_authorization_method_key_pair_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_authorization_method_key_pair_authentication))
-- `destination_snowflake_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_authorization_method_o_auth2_0))
-- `destination_snowflake_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_authorization_method_username_and_password))
-- `destination_snowflake_update_authorization_method_key_pair_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_update_authorization_method_key_pair_authentication))
-- `destination_snowflake_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_update_authorization_method_o_auth2_0))
-- `destination_snowflake_update_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--destination_snowflake_update_authorization_method_username_and_password))
+- `key_pair_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--key_pair_authentication))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
+- `username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--username_and_password))
-
-### Nested Schema for `configuration.credentials.destination_snowflake_authorization_method_key_pair_authentication`
+
+### Nested Schema for `configuration.credentials.key_pair_authentication`
Required:
-- `private_key` (String) RSA Private key to use for Snowflake connection. See the docs for more information on how to obtain this key.
+- `private_key` (String, Sensitive) RSA Private key to use for Snowflake connection. See the docs for more information on how to obtain this key.
Optional:
-- `auth_type` (String) must be one of ["Key Pair Authentication"]
-- `private_key_password` (String) Passphrase for private key
+- `private_key_password` (String, Sensitive) Passphrase for private key
-
-### Nested Schema for `configuration.credentials.destination_snowflake_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Enter you application's Access Token
-- `refresh_token` (String) Enter your application's Refresh Token
+- `access_token` (String, Sensitive) Enter your application's Access Token
+- `refresh_token` (String, Sensitive) Enter your application's Refresh Token
Optional:
-- `auth_type` (String) must be one of ["OAuth2.0"]
- `client_id` (String) Enter your application's Client ID
- `client_secret` (String) Enter your application's Client secret
-
-### Nested Schema for `configuration.credentials.destination_snowflake_authorization_method_username_and_password`
-
-Required:
-
-- `password` (String) Enter the password associated with the username.
-
-Optional:
-
-- `auth_type` (String) must be one of ["Username and Password"]
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_update_authorization_method_key_pair_authentication`
+
+### Nested Schema for `configuration.credentials.username_and_password`
Required:
-- `private_key` (String) RSA Private key to use for Snowflake connection. See the docs for more information on how to obtain this key.
-
-Optional:
-
-- `auth_type` (String) must be one of ["Key Pair Authentication"]
-- `private_key_password` (String) Passphrase for private key
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_update_authorization_method_o_auth2_0`
-
-Required:
-
-- `access_token` (String) Enter you application's Access Token
-- `refresh_token` (String) Enter your application's Refresh Token
-
-Optional:
-
-- `auth_type` (String) must be one of ["OAuth2.0"]
-- `client_id` (String) Enter your application's Client ID
-- `client_secret` (String) Enter your application's Client secret
-
-
-
-### Nested Schema for `configuration.credentials.destination_snowflake_update_authorization_method_username_and_password`
-
-Required:
-
-- `password` (String) Enter the password associated with the username.
-
-Optional:
-
-- `auth_type` (String) must be one of ["Username and Password"]
+- `password` (String, Sensitive) Enter the password associated with the username.
diff --git a/docs/resources/destination_timeplus.md b/docs/resources/destination_timeplus.md
index 30302c1a5..4f39216ca 100644
--- a/docs/resources/destination_timeplus.md
+++ b/docs/resources/destination_timeplus.md
@@ -15,12 +15,12 @@ DestinationTimeplus Resource
```terraform
resource "airbyte_destination_timeplus" "my_destination_timeplus" {
configuration = {
- apikey = "...my_apikey..."
- destination_type = "timeplus"
- endpoint = "https://us.timeplus.cloud/workspace_id"
+ apikey = "...my_apikey..."
+ endpoint = "https://us.timeplus.cloud/workspace_id"
}
- name = "Ruben Williamson"
- workspace_id = "5f350d8c-db5a-4341-8143-010421813d52"
+ definition_id = "32a47524-bb49-40aa-b53a-d11902ba1888"
+ name = "Kimberly Cole V"
+ workspace_id = "d193af49-1985-4c92-933c-ae7edb401c23"
}
```
@@ -30,9 +30,13 @@ resource "airbyte_destination_timeplus" "my_destination_timeplus" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -43,8 +47,11 @@ resource "airbyte_destination_timeplus" "my_destination_timeplus" {
Required:
-- `apikey` (String) Personal API key
-- `destination_type` (String) must be one of ["timeplus"]
-- `endpoint` (String) Timeplus workspace endpoint
+- `apikey` (String, Sensitive) Personal API key
+
+Optional:
+
+- `endpoint` (String) Default: "https://us.timeplus.cloud/"
+Timeplus workspace endpoint
diff --git a/docs/resources/destination_typesense.md b/docs/resources/destination_typesense.md
index d6f7345d8..b23be9813 100644
--- a/docs/resources/destination_typesense.md
+++ b/docs/resources/destination_typesense.md
@@ -15,15 +15,15 @@ DestinationTypesense Resource
```terraform
resource "airbyte_destination_typesense" "my_destination_typesense" {
configuration = {
- api_key = "...my_api_key..."
- batch_size = 0
- destination_type = "typesense"
- host = "...my_host..."
- port = "...my_port..."
- protocol = "...my_protocol..."
+ api_key = "...my_api_key..."
+ batch_size = 6
+ host = "...my_host..."
+ port = "...my_port..."
+ protocol = "...my_protocol..."
}
- name = "Conrad Rutherford"
- workspace_id = "e253b668-451c-46c6-a205-e16deab3fec9"
+ definition_id = "e69c6f21-d654-4173-8ccb-bc51a3caa62e"
+ name = "Lorraine Kiehn"
+ workspace_id = "a0d33800-2a57-467f-8f37-9fa4011eae8d"
}
```
@@ -33,9 +33,13 @@ resource "airbyte_destination_typesense" "my_destination_typesense" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -46,8 +50,7 @@ resource "airbyte_destination_typesense" "my_destination_typesense" {
Required:
-- `api_key` (String) Typesense API Key
-- `destination_type` (String) must be one of ["typesense"]
+- `api_key` (String, Sensitive) Typesense API Key
- `host` (String) Hostname of the Typesense instance without protocol.
Optional:
diff --git a/docs/resources/destination_vertica.md b/docs/resources/destination_vertica.md
index f4974a20b..2a21a1a7d 100644
--- a/docs/resources/destination_vertica.md
+++ b/docs/resources/destination_vertica.md
@@ -15,22 +15,20 @@ DestinationVertica Resource
```terraform
resource "airbyte_destination_vertica" "my_destination_vertica" {
configuration = {
- database = "...my_database..."
- destination_type = "vertica"
- host = "...my_host..."
- jdbc_url_params = "...my_jdbc_url_params..."
- password = "...my_password..."
- port = 5433
- schema = "...my_schema..."
+ database = "...my_database..."
+ host = "...my_host..."
+ jdbc_url_params = "...my_jdbc_url_params..."
+ password = "...my_password..."
+ port = 5433
+ schema = "...my_schema..."
tunnel_method = {
- destination_vertica_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ destination_vertica_no_tunnel = {}
}
- username = "Jackson.Kuvalis"
+ username = "Bailey26"
}
- name = "Ida Lubowitz"
- workspace_id = "73a8418d-1623-409f-b092-9921aefb9f58"
+ definition_id = "f7f4dcb2-8108-4584-a7e5-cd333285c7cc"
+ name = "Josefina Sporer"
+ workspace_id = "34f786aa-e3aa-4f52-bfe1-9eb1bf8ee233"
}
```
@@ -40,9 +38,13 @@ resource "airbyte_destination_vertica" "my_destination_vertica" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -54,16 +56,16 @@ resource "airbyte_destination_vertica" "my_destination_vertica" {
Required:
- `database` (String) Name of the database.
-- `destination_type` (String) must be one of ["vertica"]
- `host` (String) Hostname of the database.
-- `port` (Number) Port of the database.
- `schema` (String) Schema for vertica destination
- `username` (String) Username to use to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
+- `port` (Number) Default: 5433
+Port of the database.
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -71,80 +73,41 @@ Optional:
Optional:
-- `destination_vertica_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_ssh_tunnel_method_no_tunnel))
-- `destination_vertica_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_ssh_tunnel_method_password_authentication))
-- `destination_vertica_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_ssh_tunnel_method_ssh_key_authentication))
-- `destination_vertica_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_update_ssh_tunnel_method_no_tunnel))
-- `destination_vertica_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_update_ssh_tunnel_method_password_authentication))
-- `destination_vertica_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--destination_vertica_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_ssh_tunnel_method_no_tunnel`
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
+Optional:
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_update_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.destination_vertica_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
diff --git a/docs/resources/destination_weaviate.md b/docs/resources/destination_weaviate.md
new file mode 100644
index 000000000..daee69f6f
--- /dev/null
+++ b/docs/resources/destination_weaviate.md
@@ -0,0 +1,289 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_destination_weaviate Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ DestinationWeaviate Resource
+---
+
+# airbyte_destination_weaviate (Resource)
+
+DestinationWeaviate Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_destination_weaviate" "my_destination_weaviate" {
+ configuration = {
+ embedding = {
+ destination_weaviate_azure_open_ai = {
+ api_base = "https://your-resource-name.openai.azure.com"
+ deployment = "your-resource-name"
+ openai_key = "...my_openai_key..."
+ }
+ }
+ indexing = {
+ additional_headers = [
+ {
+ header_key = "...my_header_key..."
+ value = "...my_value..."
+ },
+ ]
+ auth = {
+ destination_weaviate_api_token = {
+ token = "...my_token..."
+ }
+ }
+ batch_size = 6
+ default_vectorizer = "text2vec-huggingface"
+ host = "https://my-cluster.weaviate.network"
+ text_field = "...my_text_field..."
+ }
+ processing = {
+ chunk_overlap = 4
+ chunk_size = 5
+ field_name_mappings = [
+ {
+ from_field = "...my_from_field..."
+ to_field = "...my_to_field..."
+ },
+ ]
+ metadata_fields = [
+ "...",
+ ]
+ text_fields = [
+ "...",
+ ]
+ text_splitter = {
+ destination_weaviate_by_markdown_header = {
+ split_level = 4
+ }
+ }
+ }
+ }
+ definition_id = "97e801e6-7689-4a46-b396-c7c6bf737242"
+ name = "Diana Runte Jr."
+ workspace_id = "59f1e303-60fc-40ea-a506-81bc3adb090c"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
+### Read-Only
+
+- `destination_id` (String)
+- `destination_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
+- `indexing` (Attributes) Indexing configuration (see [below for nested schema](#nestedatt--configuration--indexing))
+- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))
+
+
+### Nested Schema for `configuration.embedding`
+
+Optional:
+
+- `azure_open_ai` (Attributes) Use the Azure-hosted OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--azure_open_ai))
+- `cohere` (Attributes) Use the Cohere API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--cohere))
+- `fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--fake))
+- `from_field` (Attributes) Use a field in the record as the embedding. This is useful if you already have an embedding for your data and want to store it in the vector store. (see [below for nested schema](#nestedatt--configuration--embedding--from_field))
+- `no_external_embedding` (Attributes) Do not calculate and pass embeddings to Weaviate. Suitable for clusters with configured vectorizers to calculate embeddings within Weaviate or for classes that should only support regular text search. (see [below for nested schema](#nestedatt--configuration--embedding--no_external_embedding))
+- `open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai))
+- `open_ai_compatible` (Attributes) Use a service that's compatible with the OpenAI API to embed text. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai_compatible))
+
+
+### Nested Schema for `configuration.embedding.azure_open_ai`
+
+Required:
+
+- `api_base` (String) The base URL for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `deployment` (String) The deployment for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+- `openai_key` (String, Sensitive) The API key for your Azure OpenAI resource. You can find this in the Azure portal under your Azure OpenAI resource
+
+
+
+### Nested Schema for `configuration.embedding.cohere`
+
+Required:
+
+- `cohere_key` (String, Sensitive)
+
+
+
+### Nested Schema for `configuration.embedding.fake`
+
+
+
+### Nested Schema for `configuration.embedding.from_field`
+
+Required:
+
+- `dimensions` (Number) The number of dimensions the embedding model is generating
+- `field_name` (String) Name of the field in the record that contains the embedding
+
+
+
+### Nested Schema for `configuration.embedding.no_external_embedding`
+
+
+
+### Nested Schema for `configuration.embedding.open_ai`
+
+Required:
+
+- `openai_key` (String, Sensitive)
+
+
+
+### Nested Schema for `configuration.embedding.open_ai_compatible`
+
+Required:
+
+- `base_url` (String) The base URL for your OpenAI-compatible service
+- `dimensions` (Number) The number of dimensions the embedding model is generating
+
+Optional:
+
+- `api_key` (String, Sensitive) Default: ""
+- `model_name` (String) Default: "text-embedding-ada-002"
+The name of the model to use for embedding
+
+
+
+
+### Nested Schema for `configuration.indexing`
+
+Required:
+
+- `auth` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--indexing--auth))
+- `host` (String) The public endpoint of the Weaviate cluster.
+
+Optional:
+
+- `additional_headers` (Attributes List) Additional HTTP headers to send with every request. (see [below for nested schema](#nestedatt--configuration--indexing--additional_headers))
+- `batch_size` (Number) Default: 128
+The number of records to send to Weaviate in each batch
+- `default_vectorizer` (String) must be one of ["none", "text2vec-cohere", "text2vec-huggingface", "text2vec-openai", "text2vec-palm", "text2vec-contextionary", "text2vec-transformers", "text2vec-gpt4all"]; Default: "none"
+The vectorizer to use if new classes need to be created
+- `text_field` (String) Default: "text"
+The field in the object that contains the embedded text
+
+
+### Nested Schema for `configuration.indexing.auth`
+
+Optional:
+
+- `api_token` (Attributes) Authenticate using an API token (suitable for Weaviate Cloud) (see [below for nested schema](#nestedatt--configuration--indexing--auth--api_token))
+- `no_authentication` (Attributes) Do not authenticate (suitable for locally running test clusters, do not use for clusters with public IP addresses) (see [below for nested schema](#nestedatt--configuration--indexing--auth--no_authentication))
+- `username_password` (Attributes) Authenticate using username and password (suitable for self-managed Weaviate clusters) (see [below for nested schema](#nestedatt--configuration--indexing--auth--username_password))
+
+
+### Nested Schema for `configuration.indexing.auth.api_token`
+
+Required:
+
+- `token` (String, Sensitive) API Token for the Weaviate instance
+
+
+
+### Nested Schema for `configuration.indexing.auth.no_authentication`
+
+
+
+### Nested Schema for `configuration.indexing.auth.username_password`
+
+Required:
+
+- `password` (String, Sensitive) Password for the Weaviate cluster
+- `username` (String) Username for the Weaviate cluster
+
+
+
+
+### Nested Schema for `configuration.indexing.additional_headers`
+
+Required:
+
+- `header_key` (String, Sensitive)
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.processing`
+
+Required:
+
+- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context window of your LLM)
+
+Optional:
+
+- `chunk_overlap` (Number) Default: 0
+Size of overlap between chunks in tokens to store in vector store to better capture relevant context
+- `field_name_mappings` (Attributes List) List of fields to rename. Not applicable for nested fields, but can be used to rename fields already flattened via dot notation. (see [below for nested schema](#nestedatt--configuration--processing--field_name_mappings))
+- `metadata_fields` (List of String) List of fields in the record that should be stored as metadata. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered metadata fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array. When specifying nested paths, all matching values are flattened into an array set to a field named by the path.
+- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `name` fields in all entries of the `users` array.
+- `text_splitter` (Attributes) Split text fields into chunks based on the specified method. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter))
+
+
+### Nested Schema for `configuration.processing.field_name_mappings`
+
+Required:
+
+- `from_field` (String) The field name in the source
+- `to_field` (String) The field name to use in the destination
+
+
+
+### Nested Schema for `configuration.processing.text_splitter`
+
+Optional:
+
+- `by_markdown_header` (Attributes) Split the text by Markdown headers down to the specified header level. If the chunk size fits multiple sections, they will be combined into a single chunk. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_markdown_header))
+- `by_programming_language` (Attributes) Split the text by suitable delimiters based on the programming language. This is useful for splitting code into chunks. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_programming_language))
+- `by_separator` (Attributes) Split the text by the list of separators until the chunk size is reached, using the earlier mentioned separators where possible. This is useful for splitting text fields by paragraphs, sentences, words, etc. (see [below for nested schema](#nestedatt--configuration--processing--text_splitter--by_separator))
+
+
+### Nested Schema for `configuration.processing.text_splitter.by_markdown_header`
+
+Optional:
+
+- `split_level` (Number) Default: 1
+Level of markdown headers to split text fields by. Headings down to the specified level will be used as split points
+
+
+
+### Nested Schema for `configuration.processing.text_splitter.by_programming_language`
+
+Required:
+
+- `language` (String) must be one of ["cpp", "go", "java", "js", "php", "proto", "python", "rst", "ruby", "rust", "scala", "swift", "markdown", "latex", "html", "sol"]
+Split code in suitable places based on the programming language
+
+
+
+### Nested Schema for `configuration.processing.text_splitter.by_separator`
+
+Optional:
+
+- `keep_separator` (Boolean) Default: false
+Whether to keep the separator in the resulting chunks
+- `separators` (List of String) List of separator strings to split text fields by. The separator itself needs to be wrapped in double quotes, e.g. to split by the dot character, use ".". To split by a newline, use "\n".
+
+
diff --git a/docs/resources/destination_xata.md b/docs/resources/destination_xata.md
index a05ab3edd..5aa9f60bd 100644
--- a/docs/resources/destination_xata.md
+++ b/docs/resources/destination_xata.md
@@ -15,12 +15,12 @@ DestinationXata Resource
```terraform
resource "airbyte_destination_xata" "my_destination_xata" {
configuration = {
- api_key = "...my_api_key..."
- db_url = "https://my-workspace-abc123.us-east-1.xata.sh/db/nyc-taxi-fares:main"
- destination_type = "xata"
+ api_key = "...my_api_key..."
+ db_url = "https://my-workspace-abc123.us-east-1.xata.sh/db/nyc-taxi-fares:main"
}
- name = "Oscar Smith"
- workspace_id = "e68e4be0-5601-43f5-9da7-57a59ecfef66"
+ definition_id = "013842c1-01e2-465e-abc2-30b15094cc21"
+ name = "Derrick Green"
+ workspace_id = "b75e7d1c-9ddc-42da-b62f-af1b28fe26cb"
}
```
@@ -30,9 +30,13 @@ resource "airbyte_destination_xata" "my_destination_xata" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided.
+
### Read-Only
- `destination_id` (String)
@@ -43,8 +47,7 @@ resource "airbyte_destination_xata" "my_destination_xata" {
Required:
-- `api_key` (String) API Key to connect.
+- `api_key` (String, Sensitive) API Key to connect.
- `db_url` (String) URL pointing to your workspace.
-- `destination_type` (String) must be one of ["xata"]
diff --git a/docs/resources/source_aha.md b/docs/resources/source_aha.md
index df1b71d29..fc9c21f6e 100644
--- a/docs/resources/source_aha.md
+++ b/docs/resources/source_aha.md
@@ -15,13 +15,13 @@ SourceAha Resource
```terraform
resource "airbyte_source_aha" "my_source_aha" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "aha"
- url = "...my_url..."
+ api_key = "...my_api_key..."
+ url = "...my_url..."
}
- name = "Van Bergnaum"
- secret_id = "...my_secret_id..."
- workspace_id = "a3383c2b-eb47-4737-bc8d-72f64d1db1f2"
+ definition_id = "1bb0550b-4e34-4412-ae7f-29336e237818"
+ name = "Samuel Hammes"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3da8d6ee-f047-4576-b0dd-bc2dbf188dfa"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_aha" "my_source_aha" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_aha" "my_source_aha" {
Required:
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["aha"]
+- `api_key` (String, Sensitive) API Key
- `url` (String) URL
diff --git a/docs/resources/source_aircall.md b/docs/resources/source_aircall.md
index 8576bc2f6..6af18b535 100644
--- a/docs/resources/source_aircall.md
+++ b/docs/resources/source_aircall.md
@@ -15,14 +15,14 @@ SourceAircall Resource
```terraform
resource "airbyte_source_aircall" "my_source_aircall" {
configuration = {
- api_id = "...my_api_id..."
- api_token = "...my_api_token..."
- source_type = "aircall"
- start_date = "2022-03-01T00:00:00.000Z"
+ api_id = "...my_api_id..."
+ api_token = "...my_api_token..."
+ start_date = "2022-03-01T00:00:00.000Z"
}
- name = "Martha Bashirian"
- secret_id = "...my_secret_id..."
- workspace_id = "1e96349e-1cf9-4e06-a3a4-37000ae6b6bc"
+ definition_id = "57111ac6-1dff-4a69-be71-43a3e9a244d7"
+ name = "Lucas Breitenberg"
+ secret_id = "...my_secret_id..."
+ workspace_id = "a6e1cc19-3137-4221-8027-ee71b638bd64"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_aircall" "my_source_aircall" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,8 +51,7 @@ resource "airbyte_source_aircall" "my_source_aircall" {
Required:
- `api_id` (String) App ID found at settings https://dashboard.aircall.io/integrations/api-keys
-- `api_token` (String) App token found at settings (Ref- https://dashboard.aircall.io/integrations/api-keys)
-- `source_type` (String) must be one of ["aircall"]
+- `api_token` (String, Sensitive) App token found at settings (Ref- https://dashboard.aircall.io/integrations/api-keys)
- `start_date` (String) Date time filter for incremental filter, Specify which date to extract from.
diff --git a/docs/resources/source_airtable.md b/docs/resources/source_airtable.md
index 2e257fa5e..803fc74a6 100644
--- a/docs/resources/source_airtable.md
+++ b/docs/resources/source_airtable.md
@@ -16,20 +16,19 @@ SourceAirtable Resource
resource "airbyte_source_airtable" "my_source_airtable" {
configuration = {
credentials = {
- source_airtable_authentication_o_auth2_0 = {
+ source_airtable_o_auth2_0 = {
access_token = "...my_access_token..."
- auth_method = "oauth2.0"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
- token_expiry_date = "2021-08-01T09:41:55.270Z"
+ token_expiry_date = "2021-04-10T21:26:19.630Z"
}
}
- source_type = "airtable"
}
- name = "Tommie Klocko"
- secret_id = "...my_secret_id..."
- workspace_id = "eac55a97-41d3-4113-9296-5bb8a7202611"
+ definition_id = "54814afe-b93d-44bb-9e9f-2bb80cd3fe4a"
+ name = "Todd Lockman"
+ secret_id = "...my_secret_id..."
+ workspace_id = "38c45275-6445-4179-b0ed-8d43c0dabba6"
}
```
@@ -39,11 +38,12 @@ resource "airbyte_source_airtable" "my_source_airtable" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,71 +57,35 @@ resource "airbyte_source_airtable" "my_source_airtable" {
Optional:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["airtable"]
### Nested Schema for `configuration.credentials`
Optional:
-- `source_airtable_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_authentication_o_auth2_0))
-- `source_airtable_authentication_personal_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_authentication_personal_access_token))
-- `source_airtable_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_update_authentication_o_auth2_0))
-- `source_airtable_update_authentication_personal_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_airtable_update_authentication_personal_access_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
+- `personal_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--personal_access_token))
-
-### Nested Schema for `configuration.credentials.source_airtable_authentication_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
- `client_id` (String) The client ID of the Airtable developer application.
- `client_secret` (String) The client secret the Airtable developer application.
-- `refresh_token` (String) The key to refresh the expired access token.
+- `refresh_token` (String, Sensitive) The key to refresh the expired access token.
Optional:
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
+- `token_expiry_date` (String, Sensitive) The date-time when the access token should be refreshed.
-
-### Nested Schema for `configuration.credentials.source_airtable_authentication_personal_access_token`
+
+### Nested Schema for `configuration.credentials.personal_access_token`
Required:
-- `api_key` (String) The Personal Access Token for the Airtable account. See the Support Guide for more information on how to obtain this token.
-
-Optional:
-
-- `auth_method` (String) must be one of ["api_key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_airtable_update_authentication_o_auth2_0`
-
-Required:
-
-- `client_id` (String) The client ID of the Airtable developer application.
-- `client_secret` (String) The client secret the Airtable developer application.
-- `refresh_token` (String) The key to refresh the expired access token.
-
-Optional:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_airtable_update_authentication_personal_access_token`
-
-Required:
-
-- `api_key` (String) The Personal Access Token for the Airtable account. See the Support Guide for more information on how to obtain this token.
-
-Optional:
-
-- `auth_method` (String) must be one of ["api_key"]
+- `api_key` (String, Sensitive) The Personal Access Token for the Airtable account. See the Support Guide for more information on how to obtain this token.
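
For readers tracking the rename above, a minimal configuration using the new `personal_access_token` credentials key might look like the following. This is a sketch only: the `airbyte_source_airtable` resource layout and the placeholder values are assumed from the provider's usual conventions, not shown in this diff.

```terraform
resource "airbyte_source_airtable" "example" {
  configuration = {
    credentials = {
      # Renamed from source_airtable_authentication_personal_access_token;
      # auth_method is no longer set explicitly (implied by the key).
      personal_access_token = {
        api_key = "...my_personal_access_token..."
      }
    }
  }
  name         = "example-airtable"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```

The `o_auth20` block is the alternative credentials key, taking `client_id`, `client_secret`, and `refresh_token` as documented above.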
diff --git a/docs/resources/source_alloydb.md b/docs/resources/source_alloydb.md
index 4d3d54abc..2022b665c 100644
--- a/docs/resources/source_alloydb.md
+++ b/docs/resources/source_alloydb.md
@@ -21,10 +21,10 @@ resource "airbyte_source_alloydb" "my_source_alloydb" {
password = "...my_password..."
port = 5432
replication_method = {
- source_alloydb_replication_method_logical_replication_cdc_ = {
- initial_waiting_seconds = 2
- lsn_commit_behaviour = "While reading Data"
- method = "CDC"
+ logical_replication_cdc = {
+ additional_properties = "{ \"see\": \"documentation\" }"
+ initial_waiting_seconds = 10
+ lsn_commit_behaviour = "After loading Data in the destination"
plugin = "pgoutput"
publication = "...my_publication..."
queue_size = 10
@@ -34,22 +34,20 @@ resource "airbyte_source_alloydb" "my_source_alloydb" {
schemas = [
"...",
]
- source_type = "alloydb"
ssl_mode = {
- source_alloydb_ssl_modes_allow = {
- mode = "allow"
+ source_alloydb_allow = {
+ additional_properties = "{ \"see\": \"documentation\" }"
}
}
tunnel_method = {
- source_alloydb_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ source_alloydb_no_tunnel = {}
}
- username = "Ashlynn_Emard"
+ username = "Olaf.Emard48"
}
- name = "Wilbert Crona"
- secret_id = "...my_secret_id..."
- workspace_id = "9b1abda8-c070-4e10-84cb-0672d1ad879e"
+ definition_id = "44fd252e-57aa-4673-9282-59f0c220e39e"
+ name = "Deborah Stanton"
+ secret_id = "...my_secret_id..."
+ workspace_id = "f09fb849-b0bd-4f3d-9ca9-6c63354ae1d2"
}
```
@@ -59,11 +57,12 @@ resource "airbyte_source_alloydb" "my_source_alloydb" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -78,14 +77,14 @@ Required:
- `database` (String) Name of the database.
- `host` (String) Hostname of the database.
-- `port` (Number) Port of the database.
-- `source_type` (String) must be one of ["alloydb"]
- `username` (String) Username to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (Eg. key1=value1&key2=value2&key3=value3). For more information read about JDBC URL parameters.
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
+- `port` (Number) Default: 5432
+Port of the database.
- `replication_method` (Attributes) Replication method for extracting data from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
- `schemas` (List of String) The list of schemas (case sensitive) to sync from. Defaults to public.
- `ssl_mode` (Attributes) SSL connection modes.
@@ -97,83 +96,37 @@ Optional:
Optional:
-- `source_alloydb_replication_method_logical_replication_cdc` (Attributes) Logical replication uses the Postgres write-ahead log (WAL) to detect inserts, updates, and deletes. This needs to be configured on the source database itself. Only available on Postgres 10 and above. Read the docs. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_replication_method_logical_replication_cdc))
-- `source_alloydb_replication_method_standard` (Attributes) Standard replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_replication_method_standard))
-- `source_alloydb_replication_method_standard_xmin` (Attributes) Xmin replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_replication_method_standard_xmin))
-- `source_alloydb_update_replication_method_logical_replication_cdc` (Attributes) Logical replication uses the Postgres write-ahead log (WAL) to detect inserts, updates, and deletes. This needs to be configured on the source database itself. Only available on Postgres 10 and above. Read the docs. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_update_replication_method_logical_replication_cdc))
-- `source_alloydb_update_replication_method_standard` (Attributes) Standard replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_update_replication_method_standard))
-- `source_alloydb_update_replication_method_standard_xmin` (Attributes) Xmin replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--source_alloydb_update_replication_method_standard_xmin))
+- `logical_replication_cdc` (Attributes) Logical replication uses the Postgres write-ahead log (WAL) to detect inserts, updates, and deletes. This needs to be configured on the source database itself. Only available on Postgres 10 and above. Read the docs. (see [below for nested schema](#nestedatt--configuration--replication_method--logical_replication_cdc))
+- `standard` (Attributes) Standard replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--standard))
+- `standard_xmin` (Attributes) Xmin replication requires no setup on the DB side but will not be able to represent deletions incrementally. (see [below for nested schema](#nestedatt--configuration--replication_method--standard_xmin))
-
-### Nested Schema for `configuration.replication_method.source_alloydb_replication_method_logical_replication_cdc`
+
+### Nested Schema for `configuration.replication_method.logical_replication_cdc`
Required:
-- `method` (String) must be one of ["CDC"]
- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
+- `initial_waiting_seconds` (Number) Default: 300
+The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
+- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]; Default: "After loading Data in the destination"
Determines when Airbyte should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is default. If `While reading Data` is selected, in case of a downstream failure (while loading data into the destination), next sync would result in a full sync.
-- `plugin` (String) must be one of ["pgoutput"]
+- `plugin` (String) must be one of ["pgoutput"]; Default: "pgoutput"
A logical decoding plugin installed on the PostgreSQL server.
-- `queue_size` (Number) The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_replication_method_standard`
-
-Required:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_replication_method_standard_xmin`
+- `queue_size` (Number) Default: 10000
+The size of the internal queue. This may affect the connector's memory consumption and efficiency, so adjust it with care.
-Required:
-
-- `method` (String) must be one of ["Xmin"]
+
+### Nested Schema for `configuration.replication_method.standard`
-
-### Nested Schema for `configuration.replication_method.source_alloydb_update_replication_method_logical_replication_cdc`
-Required:
-
-- `method` (String) must be one of ["CDC"]
-- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
-- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
-Determines when Airbtye should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is default. If `While reading Data` is selected, in case of a downstream failure (while loading data into the destination), next sync would result in a full sync.
-- `plugin` (String) must be one of ["pgoutput"]
-A logical decoding plugin installed on the PostgreSQL server.
-- `queue_size` (Number) The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_update_replication_method_standard`
-
-Required:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_alloydb_update_replication_method_standard_xmin`
-
-Required:
-
-- `method` (String) must be one of ["Xmin"]
+
+### Nested Schema for `configuration.replication_method.standard_xmin`
@@ -182,177 +135,73 @@ Required:
Optional:
-- `source_alloydb_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_allow))
-- `source_alloydb_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_disable))
-- `source_alloydb_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_prefer))
-- `source_alloydb_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_require))
-- `source_alloydb_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_verify_ca))
-- `source_alloydb_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_ssl_modes_verify_full))
-- `source_alloydb_update_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_allow))
-- `source_alloydb_update_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_disable))
-- `source_alloydb_update_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_prefer))
-- `source_alloydb_update_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_require))
-- `source_alloydb_update_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_verify_ca))
-- `source_alloydb_update_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_alloydb_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_allow`
+- `allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--allow))
+- `disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--disable))
+- `prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--prefer))
+- `require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--require))
+- `verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_ca))
+- `verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_full))
-Required:
-
-- `mode` (String) must be one of ["allow"]
+
+### Nested Schema for `configuration.ssl_mode.allow`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_disable`
-
-Required:
-
-- `mode` (String) must be one of ["disable"]
+
+### Nested Schema for `configuration.ssl_mode.disable`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_prefer`
-
-Required:
-
-- `mode` (String) must be one of ["prefer"]
+
+### Nested Schema for `configuration.ssl_mode.prefer`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_require`
-
-Required:
-
-- `mode` (String) must be one of ["require"]
+
+### Nested Schema for `configuration.ssl_mode.require`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_verify_ca`
+
+### Nested Schema for `configuration.ssl_mode.verify_ca`
Required:
- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-ca"]
Optional:
- `additional_properties` (String) Parsed as JSON.
- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
+- `client_key` (String, Sensitive) Client key
+- `client_key_password` (String, Sensitive) Password for the key storage. If you do not provide one, a password will be generated automatically.
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_ssl_modes_verify_full`
+
+### Nested Schema for `configuration.ssl_mode.verify_full`
Required:
- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-full"]
Optional:
- `additional_properties` (String) Parsed as JSON.
- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_allow`
-
-Required:
-
-- `mode` (String) must be one of ["allow"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_disable`
-
-Required:
-
-- `mode` (String) must be one of ["disable"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_prefer`
-
-Required:
-
-- `mode` (String) must be one of ["prefer"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_require`
-
-Required:
-
-- `mode` (String) must be one of ["require"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_verify_ca`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-ca"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_alloydb_update_ssl_modes_verify_full`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-full"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
+- `client_key` (String, Sensitive) Client key
+- `client_key_password` (String, Sensitive) Password for the key storage. If you do not provide one, a password will be generated automatically.
@@ -361,80 +210,41 @@ Optional:
Optional:
-- `source_alloydb_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_ssh_tunnel_method_no_tunnel))
-- `source_alloydb_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_ssh_tunnel_method_password_authentication))
-- `source_alloydb_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_ssh_tunnel_method_ssh_key_authentication))
-- `source_alloydb_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_update_ssh_tunnel_method_no_tunnel))
-- `source_alloydb_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_update_ssh_tunnel_method_password_authentication))
-- `source_alloydb_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_alloydb_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_ssh_tunnel_method_no_tunnel`
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_alloydb_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
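
Putting the renamed `ssl_mode` and `tunnel_method` variants together, a post-change configuration can be sketched as below. Hostnames, credentials, and certificate values are placeholders; only the attribute keys documented in the schema above are assumed.

```terraform
resource "airbyte_source_alloydb" "example" {
  configuration = {
    host     = "alloydb.example.internal"
    database = "postgres"
    username = "airbyte"
    password = "...my_password..."
    ssl_mode = {
      # Renamed from source_alloydb_ssl_modes_verify_ca;
      # the `mode` attribute is dropped, implied by the key.
      verify_ca = {
        ca_certificate = "...my_ca_certificate..."
      }
    }
    tunnel_method = {
      # Renamed from source_alloydb_ssh_tunnel_method_ssh_key_authentication;
      # tunnel_port is now optional (Default: 22).
      ssh_key_authentication = {
        tunnel_host = "bastion.example.internal"
        tunnel_user = "airbyte"
        ssh_key     = "...my_rsa_pem_key..."
      }
    }
  }
  name         = "example-alloydb"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```

Note that `port` is likewise optional now (Default: 5432), so it is omitted here.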
diff --git a/docs/resources/source_amazon_ads.md b/docs/resources/source_amazon_ads.md
index 9313194aa..7b77ad032 100644
--- a/docs/resources/source_amazon_ads.md
+++ b/docs/resources/source_amazon_ads.md
@@ -15,30 +15,29 @@ SourceAmazonAds Resource
```terraform
resource "airbyte_source_amazon_ads" "my_source_amazonads" {
configuration = {
- auth_type = "oauth2.0"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- look_back_window = 10
+ look_back_window = 3
marketplace_ids = [
"...",
]
profiles = [
- 6,
+ 2,
]
refresh_token = "...my_refresh_token..."
- region = "EU"
+ region = "FE"
report_record_types = [
- "asins_targets",
+ "adGroups",
]
- source_type = "amazon-ads"
- start_date = "2022-10-10"
+ start_date = "2022-10-10"
state_filter = [
- "archived",
+ "paused",
]
}
- name = "Dan Towne"
- secret_id = "...my_secret_id..."
- workspace_id = "d02bae0b-e2d7-4822-99e3-ea4b5197f924"
+ definition_id = "34df0d75-6d8b-40d9-8daf-9186ab63a7b2"
+ name = "Chris Littel"
+ secret_id = "...my_secret_id..."
+ workspace_id = "ec566b1d-1d8b-4b57-bf00-1ddb3cf074d6"
}
```
@@ -48,11 +47,12 @@ resource "airbyte_source_amazon_ads" "my_source_amazonads" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -67,16 +67,15 @@ Required:
- `client_id` (String) The client ID of your Amazon Ads developer application. See the docs for more information.
- `client_secret` (String) The client secret of your Amazon Ads developer application. See the docs for more information.
-- `refresh_token` (String) Amazon Ads refresh token. See the docs for more information on how to obtain this token.
-- `source_type` (String) must be one of ["amazon-ads"]
+- `refresh_token` (String, Sensitive) Amazon Ads refresh token. See the docs for more information on how to obtain this token.
Optional:
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `look_back_window` (Number) The amount of days to go back in time to get the updated data from Amazon Ads
+- `look_back_window` (Number) Default: 3
+The number of days to go back in time to get the updated data from Amazon Ads
- `marketplace_ids` (List of String) Marketplace IDs you want to fetch data for. Note: If Profile IDs are also selected, profiles will be selected if they match the Profile ID OR the Marketplace ID.
- `profiles` (List of Number) Profile IDs you want to fetch data for. See docs for more details. Note: If Marketplace IDs are also selected, profiles will be selected if they match the Profile ID OR the Marketplace ID.
-- `region` (String) must be one of ["NA", "EU", "FE"]
+- `region` (String) must be one of ["NA", "EU", "FE"]; Default: "NA"
Region to pull data from (EU/NA/FE). See docs for more details.
- `report_record_types` (List of String) Optional configuration which accepts an array of string of record types. Leave blank for default behaviour to pull all report types. Use this config option only if you want to pull specific report type(s). See docs for more details
- `start_date` (String) The Start date for collecting reports, should not be more than 60 days in the past. In YYYY-MM-DD format
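
Because `auth_type` and `source_type` are removed and `region` and `look_back_window` now carry defaults, a post-change configuration can be quite small. A minimal sketch, with placeholder credential values:

```terraform
resource "airbyte_source_amazon_ads" "example" {
  configuration = {
    client_id     = "...my_client_id..."
    client_secret = "...my_client_secret..."
    refresh_token = "...my_refresh_token..."
    # region defaults to "NA"; look_back_window defaults to 3.
  }
  name         = "example-amazon-ads"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```

If `definition_id` is omitted, `configuration.sourceType` must identify the connector instead, per the schema note above.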
diff --git a/docs/resources/source_amazon_seller_partner.md b/docs/resources/source_amazon_seller_partner.md
index 2a1f4205f..712445fd1 100644
--- a/docs/resources/source_amazon_seller_partner.md
+++ b/docs/resources/source_amazon_seller_partner.md
@@ -15,26 +15,22 @@ SourceAmazonSellerPartner Resource
```terraform
resource "airbyte_source_amazon_seller_partner" "my_source_amazonsellerpartner" {
configuration = {
+ account_type = "Seller"
advanced_stream_options = "{\"GET_SALES_AND_TRAFFIC_REPORT\": {\"availability_sla_days\": 3}}"
- auth_type = "oauth2.0"
- aws_access_key = "...my_aws_access_key..."
- aws_environment = "PRODUCTION"
- aws_secret_key = "...my_aws_secret_key..."
+ aws_environment = "SANDBOX"
lwa_app_id = "...my_lwa_app_id..."
lwa_client_secret = "...my_lwa_client_secret..."
- max_wait_seconds = 1980
- period_in_days = 5
+ period_in_days = 2
refresh_token = "...my_refresh_token..."
- region = "SA"
+ region = "AE"
replication_end_date = "2017-01-25T00:00:00Z"
replication_start_date = "2017-01-25T00:00:00Z"
- report_options = "{\"GET_SOME_REPORT\": {\"custom\": \"true\"}}"
- role_arn = "...my_role_arn..."
- source_type = "amazon-seller-partner"
+ report_options = "{\"GET_BRAND_ANALYTICS_SEARCH_TERMS_REPORT\": {\"reportPeriod\": \"WEEK\"}}"
}
- name = "Phyllis Quitzon"
- secret_id = "...my_secret_id..."
- workspace_id = "5c537c64-54ef-4b0b-b489-6c3ca5acfbe2"
+ definition_id = "69bb26e6-b9f2-45aa-9f8c-7d4107048d9f"
+ name = "Caleb Legros"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9afeef69-ead1-4e5d-b690-efc6e828b1d2"
}
```
@@ -44,11 +40,12 @@ resource "airbyte_source_amazon_seller_partner" "my_source_amazonsellerpartner"
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -61,26 +58,23 @@ resource "airbyte_source_amazon_seller_partner" "my_source_amazonsellerpartner"
Required:
-- `aws_environment` (String) must be one of ["PRODUCTION", "SANDBOX"]
-Select the AWS Environment.
- `lwa_app_id` (String) Your Login with Amazon Client ID.
- `lwa_client_secret` (String) Your Login with Amazon Client Secret.
-- `refresh_token` (String) The Refresh Token obtained via OAuth flow authorization.
-- `region` (String) must be one of ["AE", "AU", "BE", "BR", "CA", "DE", "EG", "ES", "FR", "GB", "IN", "IT", "JP", "MX", "NL", "PL", "SA", "SE", "SG", "TR", "UK", "US"]
-Select the AWS Region.
+- `refresh_token` (String, Sensitive) The Refresh Token obtained via OAuth flow authorization.
- `replication_start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `source_type` (String) must be one of ["amazon-seller-partner"]
Optional:
+- `account_type` (String) must be one of ["Seller", "Vendor"]; Default: "Seller"
+The type of account you will use to authorize the Airbyte application
- `advanced_stream_options` (String) Additional information to configure report options. This varies by report type; not every report implements this kind of feature. Must be a valid JSON string.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `aws_access_key` (String) Specifies the AWS access key used as part of the credentials to authenticate the user.
-- `aws_secret_key` (String) Specifies the AWS secret key used as part of the credentials to authenticate the user.
-- `max_wait_seconds` (Number) Sometimes report can take up to 30 minutes to generate. This will set the limit for how long to wait for a successful report.
-- `period_in_days` (Number) Will be used for stream slicing for initial full_refresh sync when no updated state is present for reports that support sliced incremental sync.
+- `aws_environment` (String) must be one of ["PRODUCTION", "SANDBOX"]; Default: "PRODUCTION"
+Select the AWS Environment.
+- `period_in_days` (Number) Default: 90
+Will be used for stream slicing for initial full_refresh sync when no updated state is present for reports that support sliced incremental sync.
+- `region` (String) must be one of ["AE", "AU", "BE", "BR", "CA", "DE", "EG", "ES", "FR", "GB", "IN", "IT", "JP", "MX", "NL", "PL", "SA", "SE", "SG", "TR", "UK", "US"]; Default: "US"
+Select the AWS Region.
- `replication_end_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data after this date will not be replicated.
- `report_options` (String) Additional information passed to reports. This varies by report type. Must be a valid JSON string.
-- `role_arn` (String) Specifies the Amazon Resource Name (ARN) of an IAM role that you want to use to perform operations requested using this profile. (Needs permission to 'Assume Role' STS).
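
Both `report_options` and `advanced_stream_options` accept a JSON string. Rather than hand-escaping quotes as in the example above, Terraform's built-in `jsonencode()` function can build the string; a minimal sketch (resource name and report names are illustrative, other required attributes elided to variables):

```terraform
resource "airbyte_source_amazon_seller_partner" "example" {
  configuration = {
    lwa_app_id             = var.lwa_app_id
    lwa_client_secret      = var.lwa_client_secret
    refresh_token          = var.refresh_token
    replication_start_date = "2023-01-01T00:00:00Z"

    # jsonencode() emits a valid JSON string and handles quoting/escaping,
    # which is easier to maintain than a manually escaped literal.
    report_options = jsonencode({
      GET_BRAND_ANALYTICS_SEARCH_TERMS_REPORT = { reportPeriod = "WEEK" }
    })
  }
  name         = "seller-partner-example"
  workspace_id = var.workspace_id
}
```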
diff --git a/docs/resources/source_amazon_sqs.md b/docs/resources/source_amazon_sqs.md
index 7ea4bb73b..6a0470a6d 100644
--- a/docs/resources/source_amazon_sqs.md
+++ b/docs/resources/source_amazon_sqs.md
@@ -17,18 +17,18 @@ resource "airbyte_source_amazon_sqs" "my_source_amazonsqs" {
configuration = {
access_key = "xxxxxHRNxxx3TBxxxxxx"
attributes_to_return = "attr1,attr2"
- delete_messages = false
+ delete_messages = true
max_batch_size = 5
max_wait_time = 5
queue_url = "https://sqs.eu-west-1.amazonaws.com/1234567890/my-example-queue"
- region = "ap-southeast-2"
+ region = "ap-northeast-2"
secret_key = "hu+qE5exxxxT6o/ZrKsxxxxxxBhxxXLexxxxxVKz"
- source_type = "amazon-sqs"
visibility_timeout = 15
}
- name = "Cathy Kirlin"
- secret_id = "...my_secret_id..."
- workspace_id = "29177dea-c646-4ecb-9734-09e3eb1e5a2b"
+ definition_id = "aa9ea927-cae7-4b29-885e-6b85628652e0"
+ name = "Emmett Labadie"
+ secret_id = "...my_secret_id..."
+ workspace_id = "21b517b1-6f1f-4884-abcd-5137451945c4"
}
```
@@ -38,11 +38,12 @@ resource "airbyte_source_amazon_sqs" "my_source_amazonsqs" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,19 +56,19 @@ resource "airbyte_source_amazon_sqs" "my_source_amazonsqs" {
Required:
-- `delete_messages` (Boolean) If Enabled, messages will be deleted from the SQS Queue after being read. If Disabled, messages are left in the queue and can be read more than once. WARNING: Enabling this option can result in data loss in cases of failure, use with caution, see documentation for more detail.
- `queue_url` (String) URL of the SQS Queue
- `region` (String) must be one of ["us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
AWS Region of the SQS Queue
-- `source_type` (String) must be one of ["amazon-sqs"]
Optional:
-- `access_key` (String) The Access Key ID of the AWS IAM Role to use for pulling messages
+- `access_key` (String, Sensitive) The Access Key ID of the AWS IAM Role to use for pulling messages
- `attributes_to_return` (String) Comma-separated list of Message Attribute names to return
+- `delete_messages` (Boolean) Default: false
+If Enabled, messages will be deleted from the SQS Queue after being read. If Disabled, messages are left in the queue and can be read more than once. WARNING: Enabling this option can result in data loss in cases of failure; use with caution and see the documentation for more detail.
- `max_batch_size` (Number) Max amount of messages to get in one batch (10 max)
- `max_wait_time` (Number) Max amount of time in seconds to wait for messages in a single poll (20 max)
-- `secret_key` (String) The Secret Key of the AWS IAM Role to use for pulling messages
+- `secret_key` (String, Sensitive) The Secret Key of the AWS IAM Role to use for pulling messages
- `visibility_timeout` (Number) Modify the Visibility Timeout of the individual message from the Queue's default (seconds).
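
Since `access_key` and `secret_key` are marked Sensitive, one way to keep them out of the configuration literal is to pass them in as sensitive input variables; a sketch (variable names are assumptions, not part of the provider schema):

```terraform
# Declaring the IAM credentials as sensitive variables keeps them out of
# plan/apply output and out of the .tf file itself.
variable "sqs_access_key" {
  type      = string
  sensitive = true
}

variable "sqs_secret_key" {
  type      = string
  sensitive = true
}

resource "airbyte_source_amazon_sqs" "example" {
  configuration = {
    queue_url  = "https://sqs.eu-west-1.amazonaws.com/1234567890/my-example-queue"
    region     = "eu-west-1"
    access_key = var.sqs_access_key
    secret_key = var.sqs_secret_key
  }
  name         = "sqs-example"
  workspace_id = var.workspace_id
}
```

The values can then be supplied via `TF_VAR_` environment variables or a `.tfvars` file excluded from version control.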
diff --git a/docs/resources/source_amplitude.md b/docs/resources/source_amplitude.md
index afa37370c..fc2c295ad 100644
--- a/docs/resources/source_amplitude.md
+++ b/docs/resources/source_amplitude.md
@@ -17,14 +17,14 @@ resource "airbyte_source_amplitude" "my_source_amplitude" {
configuration = {
api_key = "...my_api_key..."
data_region = "Standard Server"
- request_time_range = 1
+ request_time_range = 2
secret_key = "...my_secret_key..."
- source_type = "amplitude"
start_date = "2021-01-25T00:00:00Z"
}
- name = "Robin Bednar"
- secret_id = "...my_secret_id..."
- workspace_id = "116db995-45fc-495f-a889-70e189dbb30f"
+ definition_id = "526ae8aa-3c4f-4287-913b-8668105e1180"
+ name = "Dominic Dach"
+ secret_id = "...my_secret_id..."
+ workspace_id = "75a1ca19-0e95-4bd1-982a-17eb0af63def"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_amplitude" "my_source_amplitude" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,15 +52,15 @@ resource "airbyte_source_amplitude" "my_source_amplitude" {
Required:
-- `api_key` (String) Amplitude API Key. See the setup guide for more information on how to obtain this key.
-- `secret_key` (String) Amplitude Secret Key. See the setup guide for more information on how to obtain this key.
-- `source_type` (String) must be one of ["amplitude"]
+- `api_key` (String, Sensitive) Amplitude API Key. See the setup guide for more information on how to obtain this key.
+- `secret_key` (String, Sensitive) Amplitude Secret Key. See the setup guide for more information on how to obtain this key.
- `start_date` (String) UTC date and time in the format 2021-01-25T00:00:00Z. Any data before this date will not be replicated.
Optional:
-- `data_region` (String) must be one of ["Standard Server", "EU Residency Server"]
+- `data_region` (String) must be one of ["Standard Server", "EU Residency Server"]; Default: "Standard Server"
Amplitude data region server
-- `request_time_range` (Number) According to Considerations too big time range in request can cause a timeout error. In this case, set shorter time interval in hours.
+- `request_time_range` (Number) Default: 24
+According to Considerations, a time range that is too large can cause the request to time out. In this case, set a shorter time interval in hours.
diff --git a/docs/resources/source_apify_dataset.md b/docs/resources/source_apify_dataset.md
index dc02b94ef..cbab9974c 100644
--- a/docs/resources/source_apify_dataset.md
+++ b/docs/resources/source_apify_dataset.md
@@ -15,14 +15,13 @@ SourceApifyDataset Resource
```terraform
resource "airbyte_source_apify_dataset" "my_source_apifydataset" {
configuration = {
- clean = true
- dataset_id = "...my_dataset_id..."
- source_type = "apify-dataset"
- token = "Personal API tokens"
+ dataset_id = "rHuMdwm6xCFt6WiGU"
+ token = "apify_api_PbVwb1cBbuvbfg2jRmAIHZKgx3NQyfEMG7uk"
}
- name = "Dale Ferry"
- secret_id = "...my_secret_id..."
- workspace_id = "055b197c-d44e-42f5-ad82-d3513bb6f48b"
+ definition_id = "a73356f3-9bea-45e2-889f-0e8905c8543b"
+ name = "Justin Luettgen"
+ secret_id = "...my_secret_id..."
+ workspace_id = "ac7dcada-d293-48da-9765-e7880f00a30d"
}
```
@@ -32,11 +31,12 @@ resource "airbyte_source_apify_dataset" "my_source_apifydataset" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,12 +49,7 @@ resource "airbyte_source_apify_dataset" "my_source_apifydataset" {
Required:
-- `source_type` (String) must be one of ["apify-dataset"]
-- `token` (String) Your application's Client Secret. You can find this value on the console integrations tab after you login.
-
-Optional:
-
-- `clean` (Boolean) If set to true, only clean items will be downloaded from the dataset. See description of what clean means in Apify API docs. If not sure, set clean to false.
-- `dataset_id` (String) ID of the dataset you would like to load to Airbyte.
+- `dataset_id` (String) ID of the dataset you would like to load to Airbyte. In Apify Console, you can view your datasets in the Storage section under the Datasets tab after you log in. See the Apify Docs for more information.
+- `token` (String, Sensitive) Personal API token of your Apify account. In Apify Console, you can find your API token in the Settings section under the Integrations tab after you log in. See the Apify Docs for more information.
diff --git a/docs/resources/source_appfollow.md b/docs/resources/source_appfollow.md
index 7f6e36ef9..17d017973 100644
--- a/docs/resources/source_appfollow.md
+++ b/docs/resources/source_appfollow.md
@@ -15,12 +15,12 @@ SourceAppfollow Resource
```terraform
resource "airbyte_source_appfollow" "my_source_appfollow" {
configuration = {
- api_secret = "...my_api_secret..."
- source_type = "appfollow"
+ api_secret = "...my_api_secret..."
}
- name = "Regina Huel"
- secret_id = "...my_secret_id..."
- workspace_id = "db35ff2e-4b27-4537-a8cd-9e7319c177d5"
+ definition_id = "def9a90f-a7f8-4f44-9b58-dfc559a0bee1"
+ name = "Maurice Wilderman"
+ secret_id = "...my_secret_id..."
+ workspace_id = "23389204-2261-4684-a73e-f602c915f597"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_appfollow" "my_source_appfollow" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -45,10 +46,6 @@ resource "airbyte_source_appfollow" "my_source_appfollow" {
### Nested Schema for `configuration`
-Required:
-
-- `source_type` (String) must be one of ["appfollow"]
-
Optional:
- `api_secret` (String) API Key provided by Appfollow
diff --git a/docs/resources/source_asana.md b/docs/resources/source_asana.md
index 63b256530..e578bf02d 100644
--- a/docs/resources/source_asana.md
+++ b/docs/resources/source_asana.md
@@ -16,18 +16,21 @@ SourceAsana Resource
resource "airbyte_source_asana" "my_source_asana" {
configuration = {
credentials = {
- source_asana_authentication_mechanism_authenticate_via_asana_oauth_ = {
+ authenticate_via_asana_oauth = {
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- option_title = "OAuth Credentials"
refresh_token = "...my_refresh_token..."
}
}
- source_type = "asana"
+ organization_export_ids = [
+ "{ \"see\": \"documentation\" }",
+ ]
+ test_mode = true
}
- name = "Jill Wintheiser"
- secret_id = "...my_secret_id..."
- workspace_id = "b114eeb5-2ff7-485f-8378-14d4c98e0c2b"
+ definition_id = "f5896557-ce17-4ccd-ab10-d6388d4fdfb9"
+ name = "Ms. Irvin Anderson"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c04191be-b057-4f07-8546-621bdba90354"
}
```
@@ -37,11 +40,12 @@ resource "airbyte_source_asana" "my_source_asana" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,71 +59,32 @@ resource "airbyte_source_asana" "my_source_asana" {
Optional:
- `credentials` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["asana"]
+- `organization_export_ids` (List of String) Globally unique identifiers for the organization exports
+- `test_mode` (Boolean) This flag is used for testing purposes for certain streams that return a lot of data. This flag is not meant to be enabled for prod.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_asana_authentication_mechanism_authenticate_via_asana_oauth` (Attributes) Choose how to authenticate to Github (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_authentication_mechanism_authenticate_via_asana_oauth))
-- `source_asana_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Github (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_authentication_mechanism_authenticate_with_personal_access_token))
-- `source_asana_update_authentication_mechanism_authenticate_via_asana_oauth` (Attributes) Choose how to authenticate to Github (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_update_authentication_mechanism_authenticate_via_asana_oauth))
-- `source_asana_update_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Github (see [below for nested schema](#nestedatt--configuration--credentials--source_asana_update_authentication_mechanism_authenticate_with_personal_access_token))
+- `authenticate_via_asana_oauth` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_asana_oauth))
+- `authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Asana (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_with_personal_access_token))
-
-### Nested Schema for `configuration.credentials.source_asana_authentication_mechanism_authenticate_via_asana_oauth`
+
+### Nested Schema for `configuration.credentials.authenticate_via_asana_oauth`
Required:
- `client_id` (String)
- `client_secret` (String)
-- `refresh_token` (String)
+- `refresh_token` (String, Sensitive)
-Optional:
-
-- `option_title` (String) must be one of ["OAuth Credentials"]
-OAuth Credentials
-
-
-
-### Nested Schema for `configuration.credentials.source_asana_authentication_mechanism_authenticate_with_personal_access_token`
-
-Required:
-
-- `personal_access_token` (String) Asana Personal Access Token (generate yours here).
-
-Optional:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
-PAT Credentials
-
-
-
-### Nested Schema for `configuration.credentials.source_asana_update_authentication_mechanism_authenticate_via_asana_oauth`
-Required:
-
-- `client_id` (String)
-- `client_secret` (String)
-- `refresh_token` (String)
-
-Optional:
-
-- `option_title` (String) must be one of ["OAuth Credentials"]
-OAuth Credentials
-
-
-
-### Nested Schema for `configuration.credentials.source_asana_update_authentication_mechanism_authenticate_with_personal_access_token`
+
+### Nested Schema for `configuration.credentials.authenticate_with_personal_access_token`
Required:
-- `personal_access_token` (String) Asana Personal Access Token (generate yours here).
-
-Optional:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
-PAT Credentials
+- `personal_access_token` (String, Sensitive) Asana Personal Access Token (generate yours here).
diff --git a/docs/resources/source_auth0.md b/docs/resources/source_auth0.md
index 52fc91890..7930c812a 100644
--- a/docs/resources/source_auth0.md
+++ b/docs/resources/source_auth0.md
@@ -17,17 +17,16 @@ resource "airbyte_source_auth0" "my_source_auth0" {
configuration = {
base_url = "https://dev-yourOrg.us.auth0.com/"
credentials = {
- source_auth0_authentication_method_o_auth2_access_token = {
+ o_auth2_access_token = {
access_token = "...my_access_token..."
- auth_type = "oauth2_access_token"
}
}
- source_type = "auth0"
- start_date = "2023-08-05T00:43:59.244Z"
+ start_date = "2023-08-05T00:43:59.244Z"
}
- name = "Willard McLaughlin"
- secret_id = "...my_secret_id..."
- workspace_id = "75dad636-c600-4503-98bb-31180f739ae9"
+ definition_id = "f51ed0a8-181e-46e5-9fd9-ebe7b2f5ca6e"
+ name = "Dallas Wiza"
+ secret_id = "...my_secret_id..."
+ workspace_id = "2b052102-08e0-436b-a68d-758466c963e1"
}
```
@@ -37,11 +36,12 @@ resource "airbyte_source_auth0" "my_source_auth0" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -56,58 +56,34 @@ Required:
- `base_url` (String) The Authentication API is served over HTTPS. All URLs referenced in the documentation have the following base `https://YOUR_DOMAIN`
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["auth0"]
Optional:
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
+- `start_date` (String) Default: "2023-08-05T00:43:59.244Z"
+UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_auth0_authentication_method_o_auth2_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_authentication_method_o_auth2_access_token))
-- `source_auth0_authentication_method_o_auth2_confidential_application` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_authentication_method_o_auth2_confidential_application))
-- `source_auth0_update_authentication_method_o_auth2_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_update_authentication_method_o_auth2_access_token))
-- `source_auth0_update_authentication_method_o_auth2_confidential_application` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_auth0_update_authentication_method_o_auth2_confidential_application))
+- `o_auth2_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth2_access_token))
+- `o_auth2_confidential_application` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth2_confidential_application))
-
-### Nested Schema for `configuration.credentials.source_auth0_authentication_method_o_auth2_access_token`
+
+### Nested Schema for `configuration.credentials.o_auth2_access_token`
Required:
-- `access_token` (String) Also called API Access Token The access token used to call the Auth0 Management API Token. It's a JWT that contains specific grant permissions knowns as scopes.
-- `auth_type` (String) must be one of ["oauth2_access_token"]
+- `access_token` (String, Sensitive) Also called API Access Token. The access token used to call the Auth0 Management API. It's a JWT that contains specific grant permissions known as scopes.
-
-### Nested Schema for `configuration.credentials.source_auth0_authentication_method_o_auth2_confidential_application`
+
+### Nested Schema for `configuration.credentials.o_auth2_confidential_application`
Required:
- `audience` (String) The audience for the token, which is your API. You can find this in the Identifier field on your API's settings tab
-- `auth_type` (String) must be one of ["oauth2_confidential_application"]
-- `client_id` (String) Your application's Client ID. You can find this value on the application's settings tab after you login the admin portal.
-- `client_secret` (String) Your application's Client Secret. You can find this value on the application's settings tab after you login the admin portal.
-
-
-
-### Nested Schema for `configuration.credentials.source_auth0_update_authentication_method_o_auth2_access_token`
-
-Required:
-
-- `access_token` (String) Also called API Access Token The access token used to call the Auth0 Management API Token. It's a JWT that contains specific grant permissions knowns as scopes.
-- `auth_type` (String) must be one of ["oauth2_access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_auth0_update_authentication_method_o_auth2_confidential_application`
-
-Required:
-
-- `audience` (String) The audience for the token, which is your API. You can find this in the Identifier field on your API's settings tab
-- `auth_type` (String) must be one of ["oauth2_confidential_application"]
- `client_id` (String) Your application's Client ID. You can find this value on the application's settings tab after you log in to the admin portal.
- `client_secret` (String) Your application's Client Secret. You can find this value on the application's settings tab after you log in to the admin portal.
diff --git a/docs/resources/source_aws_cloudtrail.md b/docs/resources/source_aws_cloudtrail.md
index ae5eda7dc..1f0eb17d8 100644
--- a/docs/resources/source_aws_cloudtrail.md
+++ b/docs/resources/source_aws_cloudtrail.md
@@ -18,12 +18,12 @@ resource "airbyte_source_aws_cloudtrail" "my_source_awscloudtrail" {
aws_key_id = "...my_aws_key_id..."
aws_region_name = "...my_aws_region_name..."
aws_secret_key = "...my_aws_secret_key..."
- source_type = "aws-cloudtrail"
start_date = "2021-01-01"
}
- name = "Nellie Waters"
- secret_id = "...my_secret_id..."
- workspace_id = "09e28103-31f3-4981-94c7-00b607f3c93c"
+ definition_id = "1b394b84-acdf-48db-aa4f-7e23711b260f"
+ name = "Janis Erdman"
+ secret_id = "...my_secret_id..."
+ workspace_id = "1edcb36c-da3d-451c-bc15-623ec6453ce6"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_aws_cloudtrail" "my_source_awscloudtrail" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,10 +51,13 @@ resource "airbyte_source_aws_cloudtrail" "my_source_awscloudtrail" {
Required:
-- `aws_key_id` (String) AWS CloudTrail Access Key ID. See the docs for more information on how to obtain this key.
+- `aws_key_id` (String, Sensitive) AWS CloudTrail Access Key ID. See the docs for more information on how to obtain this key.
- `aws_region_name` (String) The default AWS Region to use, for example, us-west-1 or us-west-2. When specifying a Region inline during client initialization, this property is named region_name.
-- `aws_secret_key` (String) AWS CloudTrail Access Key ID. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["aws-cloudtrail"]
-- `start_date` (String) The date you would like to replicate data. Data in AWS CloudTrail is available for last 90 days only. Format: YYYY-MM-DD.
+- `aws_secret_key` (String, Sensitive) AWS CloudTrail Secret Access Key. See the docs for more information on how to obtain this key.
+
+Optional:
+
+- `start_date` (String) Default: "1970-01-01"
+The date you would like to replicate data. Data in AWS CloudTrail is available for last 90 days only. Format: YYYY-MM-DD.
diff --git a/docs/resources/source_azure_blob_storage.md b/docs/resources/source_azure_blob_storage.md
index a6896cfbb..8462aa41b 100644
--- a/docs/resources/source_azure_blob_storage.md
+++ b/docs/resources/source_azure_blob_storage.md
@@ -15,22 +15,35 @@ SourceAzureBlobStorage Resource
```terraform
resource "airbyte_source_azure_blob_storage" "my_source_azureblobstorage" {
configuration = {
- azure_blob_storage_account_key = "Z8ZkZpteggFx394vm+PJHnGTvdRncaYS+JhLKdj789YNmD+iyGTnG+PV+POiuYNhBg/ACS+LKjd%4FG3FHGN12Nd=="
- azure_blob_storage_account_name = "airbyte5storage"
- azure_blob_storage_blobs_prefix = "FolderA/FolderB/"
- azure_blob_storage_container_name = "airbytetescontainername"
- azure_blob_storage_endpoint = "blob.core.windows.net"
- azure_blob_storage_schema_inference_limit = 500
- format = {
- source_azure_blob_storage_input_format_json_lines_newline_delimited_json = {
- format_type = "JSONL"
- }
- }
- source_type = "azure-blob-storage"
+ azure_blob_storage_account_key = "Z8ZkZpteggFx394vm+PJHnGTvdRncaYS+JhLKdj789YNmD+iyGTnG+PV+POiuYNhBg/ACS+LKjd%4FG3FHGN12Nd=="
+ azure_blob_storage_account_name = "airbyte5storage"
+ azure_blob_storage_container_name = "airbytetescontainername"
+ azure_blob_storage_endpoint = "blob.core.windows.net"
+ start_date = "2021-01-01T00:00:00.000000Z"
+ streams = [
+ {
+ days_to_sync_if_history_is_full = 8
+ format = {
+ avro_format = {
+ double_as_string = true
+ }
+ }
+ globs = [
+ "...",
+ ]
+ input_schema = "...my_input_schema..."
+ legacy_prefix = "...my_legacy_prefix..."
+ name = "Angelina Armstrong"
+ primary_key = "...my_primary_key..."
+ schemaless = true
+ validation_policy = "Wait for Discover"
+ },
+ ]
}
- name = "Patty Mraz"
- secret_id = "...my_secret_id..."
- workspace_id = "3f2ceda7-e23f-4225-b411-faf4b7544e47"
+ definition_id = "e16b8da7-b814-43f8-91cf-99c7fd70e504"
+ name = "Joy Sipes"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4f64874e-62c5-48d8-b92f-d48887cb19c4"
}
```
@@ -39,12 +52,14 @@ resource "airbyte_source_azure_blob_storage" "my_source_azureblobstorage" {
### Required
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `configuration` (Attributes) NOTE: When this Spec is changed, legacy_config_transformer.py must also be modified to uptake the changes
+because it is responsible for converting legacy Azure Blob Storage v0 configs into v1 configs using the File-Based CDK. (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,39 +72,130 @@ resource "airbyte_source_azure_blob_storage" "my_source_azureblobstorage" {
Required:
-- `azure_blob_storage_account_key` (String) The Azure blob storage account key.
+- `azure_blob_storage_account_key` (String, Sensitive) The Azure blob storage account key.
- `azure_blob_storage_account_name` (String) The account's name of the Azure Blob Storage.
- `azure_blob_storage_container_name` (String) The name of the Azure blob storage container.
-- `format` (Attributes) Input data format (see [below for nested schema](#nestedatt--configuration--format))
-- `source_type` (String) must be one of ["azure-blob-storage"]
+- `streams` (Attributes List) Each instance of this configuration defines a stream. Use this to define which files belong in the stream, their format, and how they should be parsed and validated. When sending data to warehouse destination such as Snowflake or BigQuery, each stream is a separate table. (see [below for nested schema](#nestedatt--configuration--streams))
Optional:
-- `azure_blob_storage_blobs_prefix` (String) The Azure blob storage prefix to be applied
- `azure_blob_storage_endpoint` (String) This is the Azure Blob Storage endpoint domain name. Leave the default value (or leave it empty if running the container from the command line) to use the Microsoft native endpoint.
-- `azure_blob_storage_schema_inference_limit` (Number) The Azure blob storage blobs to scan for inferring the schema, useful on large amounts of data with consistent structure
+- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00.000000Z. Any file modified before this date will not be replicated.
-
-### Nested Schema for `configuration.format`
+
+### Nested Schema for `configuration.streams`
+
+Required:
+
+- `format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format))
+- `name` (String) The name of the stream.
Optional:
-- `source_azure_blob_storage_input_format_json_lines_newline_delimited_json` (Attributes) Input data format (see [below for nested schema](#nestedatt--configuration--format--source_azure_blob_storage_input_format_json_lines_newline_delimited_json))
-- `source_azure_blob_storage_update_input_format_json_lines_newline_delimited_json` (Attributes) Input data format (see [below for nested schema](#nestedatt--configuration--format--source_azure_blob_storage_update_input_format_json_lines_newline_delimited_json))
+- `days_to_sync_if_history_is_full` (Number) Default: 3
+When the state history of the file store is full, syncs will only read files that were last modified in the provided day range.
+- `globs` (List of String) The pattern used to specify which files should be selected from the file system. For more information on glob pattern matching, see the docs.
+- `input_schema` (String) The schema that will be used to validate records extracted from the file. This will override the stream schema that is auto-detected from incoming files.
+- `legacy_prefix` (String) The path prefix configured in v3 versions of the S3 connector. This option is deprecated in favor of a single glob.
+- `primary_key` (String, Sensitive) The column or columns (for a composite key) that serves as the unique identifier of a record.
+- `schemaless` (Boolean) Default: false
+When enabled, syncs will not validate or structure records against the stream's schema.
+- `validation_policy` (String) must be one of ["Emit Record", "Skip Record", "Wait for Discover"]; Default: "Emit Record"
+The name of the validation policy that dictates sync behavior when a record does not adhere to the stream schema.
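+
+Taken together, a stream entry might be sketched as follows (the stream name, glob pattern, and option values are illustrative, not taken from a real configuration):
+
+```terraform
+streams = [
+  {
+    name              = "daily_exports"      # hypothetical stream name
+    globs             = ["exports/**/*.csv"] # select CSV files under exports/
+    validation_policy = "Skip Record"
+    format = {
+      csv_format = {
+        delimiter = ";"
+      }
+    }
+  }
+]
+```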
-
-### Nested Schema for `configuration.format.source_azure_blob_storage_input_format_json_lines_newline_delimited_json`
+
+### Nested Schema for `configuration.streams.format`
-Required:
+Optional:
+
+- `avro_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--avro_format))
+- `csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format))
+- `document_file_type_format_experimental` (Attributes) Extract text from document formats (.pdf, .docx, .md, .pptx) and emit as one record per file. (see [below for nested schema](#nestedatt--configuration--streams--format--document_file_type_format_experimental))
+- `jsonl_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--jsonl_format))
+- `parquet_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--parquet_format))
+
+
+### Nested Schema for `configuration.streams.format.avro_format`
+
+Optional:
+
+- `double_as_string` (Boolean) Default: false
+Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision, because there can be a loss of precision when handling floating point numbers.
+
+
+
+### Nested Schema for `configuration.streams.format.csv_format`
+
+Optional:
+
+- `delimiter` (String) Default: ","
+The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
+- `double_quote` (Boolean) Default: true
+Whether two quotes in a quoted CSV value denote a single quote in the data.
+- `encoding` (String) Default: "utf8"
+The character encoding of the CSV data. Leave blank to default to UTF8. See the list of Python encodings for allowable options.
+- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
+- `false_values` (List of String) A set of case-sensitive strings that should be interpreted as false values.
+- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers as `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header row from the CSV file. To autogenerate or provide column names for a CSV that has headers, skip the header rows. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition))
+- `inference_type` (String) must be one of ["None", "Primitive Types Only"]; Default: "None"
+How to infer the types of the columns. If "None", inference defaults to strings.
+- `null_values` (List of String) A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
+- `quote_char` (String) Default: "\""
+The character used for quoting CSV values. To disallow quoting, leave this field blank.
+- `skip_rows_after_header` (Number) Default: 0
+The number of rows to skip after the header row.
+- `skip_rows_before_header` (Number) Default: 0
+The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
+- `strings_can_be_null` (Boolean) Default: true
+Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
+- `true_values` (List of String) A set of case-sensitive strings that should be interpreted as true values.
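+
+A `csv_format` block combining several of these options might look like this (all values are illustrative):
+
+```terraform
+csv_format = {
+  delimiter               = "\t"           # tab-delimited input
+  null_values             = ["NA", "N/A"]  # treat these strings as null
+  skip_rows_before_header = 2              # header row is on the 3rd row
+  strings_can_be_null     = true
+}
+```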
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition`
+
+Optional:
-- `format_type` (String) must be one of ["JSONL"]
+- `autogenerated` (Attributes) The CSV has no header row; the CDK generates headers as `f{i}`, where `i` is the column index starting from 0. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--autogenerated))
+- `from_csv` (Attributes) Use the header row from the CSV file (the default behavior). (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--from_csv))
+- `user_provided` (Attributes) The CSV has no header row; the headers provided in `column_names` are used. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--user_provided))
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.autogenerated`
-
-### Nested Schema for `configuration.format.source_azure_blob_storage_update_input_format_json_lines_newline_delimited_json`
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.from_csv`
+
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.user_provided`
Required:
-- `format_type` (String) must be one of ["JSONL"]
+- `column_names` (List of String) The column names that will be used while emitting the CSV records
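+
+For example, a user-provided header definition could be sketched as follows (the column names are hypothetical):
+
+```terraform
+header_definition = {
+  user_provided = {
+    column_names = ["id", "name", "created_at"] # illustrative column names
+  }
+}
+```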
+
+
+
+
+
+### Nested Schema for `configuration.streams.format.document_file_type_format_experimental`
+
+Optional:
+
+- `skip_unprocessable_file_types` (Boolean) Default: true
+If true, skip files that cannot be parsed because of their file type and log a warning. If false, fail the sync. Corrupted files with valid file types will still result in a failed sync.
+
+
+
+### Nested Schema for `configuration.streams.format.jsonl_format`
+
+
+
+### Nested Schema for `configuration.streams.format.parquet_format`
+
+Optional:
+
+- `decimal_as_float` (Boolean) Default: false
+Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
diff --git a/docs/resources/source_azure_table.md b/docs/resources/source_azure_table.md
index 608c2aab8..e6077b806 100644
--- a/docs/resources/source_azure_table.md
+++ b/docs/resources/source_azure_table.md
@@ -15,14 +15,14 @@ SourceAzureTable Resource
```terraform
resource "airbyte_source_azure_table" "my_source_azuretable" {
configuration = {
- source_type = "azure-table"
storage_access_key = "...my_storage_access_key..."
storage_account_name = "...my_storage_account_name..."
- storage_endpoint_suffix = "core.windows.net"
+ storage_endpoint_suffix = "core.chinacloudapi.cn"
}
- name = "Ian Baumbach"
- secret_id = "...my_secret_id..."
- workspace_id = "57a5b404-63a7-4d57-9f14-00e764ad7334"
+ definition_id = "ec8b4573-d66d-4007-a52a-2e4396e7403e"
+ name = "Adam Stracke V"
+ secret_id = "...my_secret_id..."
+ workspace_id = "59a4fa50-e807-4c86-bd0c-bf5314eea0fa"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_azure_table" "my_source_azuretable" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,12 +50,12 @@ resource "airbyte_source_azure_table" "my_source_azuretable" {
Required:
-- `source_type` (String) must be one of ["azure-table"]
-- `storage_access_key` (String) Azure Table Storage Access Key. See the docs for more information on how to obtain this key.
+- `storage_access_key` (String, Sensitive) Azure Table Storage Access Key. See the docs for more information on how to obtain this key.
- `storage_account_name` (String) The name of your storage account.
Optional:
-- `storage_endpoint_suffix` (String) Azure Table Storage service account URL suffix. See the docs for more information on how to obtain endpoint suffix
+- `storage_endpoint_suffix` (String) Default: "core.windows.net"
+Azure Table Storage service account URL suffix. See the docs for more information on how to obtain the endpoint suffix.
diff --git a/docs/resources/source_bamboo_hr.md b/docs/resources/source_bamboo_hr.md
index 7e8bce413..382290a09 100644
--- a/docs/resources/source_bamboo_hr.md
+++ b/docs/resources/source_bamboo_hr.md
@@ -18,12 +18,12 @@ resource "airbyte_source_bamboo_hr" "my_source_bamboohr" {
api_key = "...my_api_key..."
custom_reports_fields = "...my_custom_reports_fields..."
custom_reports_include_default_fields = true
- source_type = "bamboo-hr"
subdomain = "...my_subdomain..."
}
- name = "Ralph Rau"
- secret_id = "...my_secret_id..."
- workspace_id = "1b36a080-88d1-400e-bada-200ef0422eb2"
+ definition_id = "1aa37367-271c-478a-9aa9-603df323c7d7"
+ name = "Joel Harber"
+ secret_id = "...my_secret_id..."
+ workspace_id = "f8882a19-738b-4218-b704-94da21b79cfd"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_bamboo_hr" "my_source_bamboohr" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,13 +51,14 @@ resource "airbyte_source_bamboo_hr" "my_source_bamboohr" {
Required:
-- `api_key` (String) Api key of bamboo hr
-- `source_type` (String) must be one of ["bamboo-hr"]
+- `api_key` (String, Sensitive) API key of Bamboo HR
- `subdomain` (String) Subdomain of Bamboo HR
Optional:
-- `custom_reports_fields` (String) Comma-separated list of fields to include in custom reports.
-- `custom_reports_include_default_fields` (Boolean) If true, the custom reports endpoint will include the default fields defined here: https://documentation.bamboohr.com/docs/list-of-field-names.
+- `custom_reports_fields` (String) Default: ""
+Comma-separated list of fields to include in custom reports.
+- `custom_reports_include_default_fields` (Boolean) Default: true
+If true, the custom reports endpoint will include the default fields defined here: https://documentation.bamboohr.com/docs/list-of-field-names.
diff --git a/docs/resources/source_bigcommerce.md b/docs/resources/source_bigcommerce.md
deleted file mode 100644
index e850994f9..000000000
--- a/docs/resources/source_bigcommerce.md
+++ /dev/null
@@ -1,57 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_bigcommerce Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceBigcommerce Resource
----
-
-# airbyte_source_bigcommerce (Resource)
-
-SourceBigcommerce Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_source_bigcommerce" "my_source_bigcommerce" {
- configuration = {
- access_token = "...my_access_token..."
- source_type = "bigcommerce"
- start_date = "2021-01-01"
- store_hash = "...my_store_hash..."
- }
- name = "Beth Gleason"
- secret_id = "...my_secret_id..."
- workspace_id = "9ab8366c-723f-4fda-9e06-bee4825c1fc0"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `source_id` (String)
-- `source_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `source_type` (String) must be one of ["bigcommerce"]
-- `start_date` (String) The date you would like to replicate data. Format: YYYY-MM-DD.
-- `store_hash` (String) The hash code of the store. For https://api.bigcommerce.com/stores/HASH_CODE/v3/, The store's hash code is 'HASH_CODE'.
-
-
diff --git a/docs/resources/source_bigquery.md b/docs/resources/source_bigquery.md
index 5d4224fa6..277efd84a 100644
--- a/docs/resources/source_bigquery.md
+++ b/docs/resources/source_bigquery.md
@@ -18,11 +18,11 @@ resource "airbyte_source_bigquery" "my_source_bigquery" {
credentials_json = "...my_credentials_json..."
dataset_id = "...my_dataset_id..."
project_id = "...my_project_id..."
- source_type = "bigquery"
}
- name = "Joe Bradtke"
- secret_id = "...my_secret_id..."
- workspace_id = "80bff918-544e-4c42-9efc-ce8f1977773e"
+ definition_id = "9baf3821-deb7-4264-9ad9-e5fb53126691"
+ name = "Darrin Rogahn"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b9ea24da-51fb-473f-872f-2e8bbfe18227"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_bigquery" "my_source_bigquery" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,7 +52,6 @@ Required:
- `credentials_json` (String) The contents of your Service Account Key JSON file. See the docs for more information on how to obtain this key.
- `project_id` (String) The GCP project ID for the project containing the target BigQuery dataset.
-- `source_type` (String) must be one of ["bigquery"]
Optional:
diff --git a/docs/resources/source_bing_ads.md b/docs/resources/source_bing_ads.md
index c209d01ae..00fbc9cec 100644
--- a/docs/resources/source_bing_ads.md
+++ b/docs/resources/source_bing_ads.md
@@ -15,19 +15,28 @@ SourceBingAds Resource
```terraform
resource "airbyte_source_bing_ads" "my_source_bingads" {
configuration = {
- auth_method = "oauth2.0"
- client_id = "...my_client_id..."
- client_secret = "...my_client_secret..."
+ client_id = "...my_client_id..."
+ client_secret = "...my_client_secret..."
+ custom_reports = [
+ {
+ name = "AdDynamicTextPerformanceReport"
+ report_aggregation = "...my_report_aggregation..."
+ report_columns = [
+ "...",
+ ]
+ reporting_object = "ShareOfVoiceReportRequest"
+ },
+ ]
developer_token = "...my_developer_token..."
- lookback_window = 4
+ lookback_window = 3
refresh_token = "...my_refresh_token..."
- reports_start_date = "2022-08-23"
- source_type = "bing-ads"
+ reports_start_date = "2022-08-17"
tenant_id = "...my_tenant_id..."
}
- name = "Kathryn Nitzsche"
- secret_id = "...my_secret_id..."
- workspace_id = "408f05e3-d48f-4daf-b13a-1f5fd94259c0"
+ definition_id = "f49be625-99f1-47b5-861c-8d2f7dd6ee9c"
+ name = "Delia Kub Sr."
+ secret_id = "...my_secret_id..."
+ workspace_id = "90282195-430f-4896-8a32-1f431fb3aad0"
}
```
@@ -37,11 +46,12 @@ resource "airbyte_source_bing_ads" "my_source_bingads" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,16 +65,33 @@ resource "airbyte_source_bing_ads" "my_source_bingads" {
Required:
- `client_id` (String) The Client ID of your Microsoft Advertising developer application.
-- `developer_token` (String) Developer token associated with user. See more info in the docs.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
-- `reports_start_date` (String) The start date from which to begin replicating report data. Any data generated before this date will not be replicated in reports. This is a UTC date in YYYY-MM-DD format.
-- `source_type` (String) must be one of ["bing-ads"]
+- `developer_token` (String, Sensitive) Developer token associated with user. See more info in the docs.
+- `refresh_token` (String, Sensitive) Refresh Token to renew the expired Access Token.
Optional:
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_secret` (String) The Client Secret of your Microsoft Advertising developer application.
-- `lookback_window` (Number) Also known as attribution or conversion window. How far into the past to look for records (in days). If your conversion window has an hours/minutes granularity, round it up to the number of days exceeding. Used only for performance report streams in incremental mode.
-- `tenant_id` (String) The Tenant ID of your Microsoft Advertising developer application. Set this to "common" unless you know you need a different value.
+- `client_secret` (String) Default: ""
+The Client Secret of your Microsoft Advertising developer application.
+- `custom_reports` (Attributes List) You can add your Custom Bing Ads report by creating one. (see [below for nested schema](#nestedatt--configuration--custom_reports))
+- `lookback_window` (Number) Default: 0
+Also known as attribution or conversion window. How far into the past to look for records (in days). If your conversion window has an hours/minutes granularity, round it up to the number of days exceeding. Used only for performance report streams in incremental mode without specified Reports Start Date.
+- `reports_start_date` (String) The start date from which to begin replicating report data. Any data generated before this date will not be replicated in reports. This is a UTC date in YYYY-MM-DD format. If not set, data from previous and current calendar year will be replicated.
+- `tenant_id` (String) Default: "common"
+The Tenant ID of your Microsoft Advertising developer application. Set this to "common" unless you know you need a different value.
+
+
+### Nested Schema for `configuration.custom_reports`
+
+Required:
+
+- `name` (String) The name of the custom report; this name will be used as the stream name
+- `report_columns` (List of String) A list of available report object columns. You can find it in description of reporting object that you want to add to custom report.
+- `reporting_object` (String) must be one of ["AccountPerformanceReportRequest", "AdDynamicTextPerformanceReportRequest", "AdExtensionByAdReportRequest", "AdExtensionByKeywordReportRequest", "AdExtensionDetailReportRequest", "AdGroupPerformanceReportRequest", "AdPerformanceReportRequest", "AgeGenderAudienceReportRequest", "AudiencePerformanceReportRequest", "CallDetailReportRequest", "CampaignPerformanceReportRequest", "ConversionPerformanceReportRequest", "DestinationUrlPerformanceReportRequest", "DSAAutoTargetPerformanceReportRequest", "DSACategoryPerformanceReportRequest", "DSASearchQueryPerformanceReportRequest", "GeographicPerformanceReportRequest", "GoalsAndFunnelsReportRequest", "HotelDimensionPerformanceReportRequest", "HotelGroupPerformanceReportRequest", "KeywordPerformanceReportRequest", "NegativeKeywordConflictReportRequest", "ProductDimensionPerformanceReportRequest", "ProductMatchCountReportRequest", "ProductNegativeKeywordConflictReportRequest", "ProductPartitionPerformanceReportRequest", "ProductPartitionUnitPerformanceReportRequest", "ProductSearchQueryPerformanceReportRequest", "ProfessionalDemographicsAudienceReportRequest", "PublisherUsagePerformanceReportRequest", "SearchCampaignChangeHistoryReportRequest", "SearchQueryPerformanceReportRequest", "ShareOfVoiceReportRequest", "UserLocationPerformanceReportRequest"]
+The name of the object derives from the ReportRequest object. You can find it in the Bing Ads API docs - Reporting API - Reporting Data Objects.
+
+Optional:
+
+- `report_aggregation` (String) Default: "[Hourly]"
+A list of available aggregations.
diff --git a/docs/resources/source_braintree.md b/docs/resources/source_braintree.md
index e4b899ddb..e6c557362 100644
--- a/docs/resources/source_braintree.md
+++ b/docs/resources/source_braintree.md
@@ -15,16 +15,16 @@ SourceBraintree Resource
```terraform
resource "airbyte_source_braintree" "my_source_braintree" {
configuration = {
- environment = "Development"
+ environment = "Qa"
merchant_id = "...my_merchant_id..."
private_key = "...my_private_key..."
public_key = "...my_public_key..."
- source_type = "braintree"
start_date = "2020-12-30"
}
- name = "Henrietta Nienow"
- secret_id = "...my_secret_id..."
- workspace_id = "4f3b756c-11f6-4c37-a512-6243835bbc05"
+ definition_id = "12fcb5a7-fdd8-454e-8c39-c22fe17df57a"
+ name = "Ms. Tommie Bins"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5ff7f1a2-7e8f-4d2f-993d-4f9ab29a2f83"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_braintree" "my_source_braintree" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,9 +55,8 @@ Required:
- `environment` (String) must be one of ["Development", "Sandbox", "Qa", "Production"]
Environment specifies where the data will come from.
- `merchant_id` (String) The unique identifier for your entire gateway account. See the docs for more information on how to obtain this ID.
-- `private_key` (String) Braintree Private Key. See the docs for more information on how to obtain this key.
-- `public_key` (String) Braintree Public Key. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["braintree"]
+- `private_key` (String, Sensitive) Braintree Private Key. See the docs for more information on how to obtain this key.
+- `public_key` (String, Sensitive) Braintree Public Key. See the docs for more information on how to obtain this key.
Optional:
diff --git a/docs/resources/source_braze.md b/docs/resources/source_braze.md
index 9b54f5633..4af3aff5d 100644
--- a/docs/resources/source_braze.md
+++ b/docs/resources/source_braze.md
@@ -15,14 +15,14 @@ SourceBraze Resource
```terraform
resource "airbyte_source_braze" "my_source_braze" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "braze"
- start_date = "2022-09-06"
- url = "...my_url..."
+ api_key = "...my_api_key..."
+ start_date = "2022-07-08"
+ url = "...my_url..."
}
- name = "Rosie Glover"
- secret_id = "...my_secret_id..."
- workspace_id = "efc5fde1-0a0c-4e21-a9e5-10019c6dc5e3"
+ definition_id = "dec4e3ea-b02c-4cb9-8852-3df16a0cc499"
+ name = "Margarita Leuschke"
+ secret_id = "...my_secret_id..."
+ workspace_id = "682b0a70-74f0-416f-b212-7f33f8652b25"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_braze" "my_source_braze" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,8 +50,7 @@ resource "airbyte_source_braze" "my_source_braze" {
Required:
-- `api_key` (String) Braze REST API key
-- `source_type` (String) must be one of ["braze"]
+- `api_key` (String, Sensitive) Braze REST API key
- `start_date` (String) Rows after this date will be synced
- `url` (String) Braze REST API endpoint
diff --git a/docs/resources/source_cart.md b/docs/resources/source_cart.md
new file mode 100644
index 000000000..063acf385
--- /dev/null
+++ b/docs/resources/source_cart.md
@@ -0,0 +1,90 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_cart Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceCart Resource
+---
+
+# airbyte_source_cart (Resource)
+
+SourceCart Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_source_cart" "my_source_cart" {
+ configuration = {
+ credentials = {
+ central_api_router = {
+ site_id = "...my_site_id..."
+ user_name = "Ethyl.Bosco18"
+ user_secret = "...my_user_secret..."
+ }
+ }
+ start_date = "2021-01-01T00:00:00Z"
+ }
+ definition_id = "3ec1224a-7ffb-4268-9c18-7087d37ac99f"
+ name = "Jamie Macejkovic III"
+ secret_id = "...my_secret_id..."
+ workspace_id = "12305e0c-1f4b-465d-9ebd-757e5946981c"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the source e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
+- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
+
+### Read-Only
+
+- `source_id` (String)
+- `source_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `start_date` (String) The date from which you'd like to replicate the data
+
+Optional:
+
+- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
+
+
+### Nested Schema for `configuration.credentials`
+
+Optional:
+
+- `central_api_router` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--central_api_router))
+- `single_store_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--single_store_access_token))
+
+
+### Nested Schema for `configuration.credentials.central_api_router`
+
+Required:
+
+- `site_id` (String) You can determine a site's provisioning Site ID by hitting https://site.com/store/sitemonitor.aspx and reading the response param PSID
+- `user_name` (String) Enter your application's User Name
+- `user_secret` (String) Enter your application's User Secret
+
+
+
+### Nested Schema for `configuration.credentials.single_store_access_token`
+
+Required:
+
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
+- `store_name` (String) The name of your Cart.com Online Store. All API URLs start with https://[mystorename.com]/api/v1/, where [mystorename.com] is the domain name of your store.
+
+
diff --git a/docs/resources/source_chargebee.md b/docs/resources/source_chargebee.md
index 441d8aec3..cc5354deb 100644
--- a/docs/resources/source_chargebee.md
+++ b/docs/resources/source_chargebee.md
@@ -15,15 +15,15 @@ SourceChargebee Resource
```terraform
resource "airbyte_source_chargebee" "my_source_chargebee" {
configuration = {
- product_catalog = "1.0"
+ product_catalog = "2.0"
site = "airbyte-test"
site_api_key = "...my_site_api_key..."
- source_type = "chargebee"
start_date = "2021-01-25T00:00:00Z"
}
- name = "Viola Morissette"
- secret_id = "...my_secret_id..."
- workspace_id = "fbbe6949-fb2b-4b4e-8ae6-c3d5db3adebd"
+ definition_id = "08691686-308e-4adb-b3c3-69be0c12ece5"
+ name = "Jean Mann"
+ secret_id = "...my_secret_id..."
+ workspace_id = "aef8e474-9058-48d0-a293-9574a681eea7"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_chargebee" "my_source_chargebee" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,8 +54,7 @@ Required:
- `product_catalog` (String) must be one of ["1.0", "2.0"]
Product Catalog version of your Chargebee site. Instructions on how to find your version you may find here under `API Version` section.
- `site` (String) The site prefix for your Chargebee instance.
-- `site_api_key` (String) Chargebee API Key. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["chargebee"]
+- `site_api_key` (String, Sensitive) Chargebee API Key. See the docs for more information on how to obtain this key.
- `start_date` (String) UTC date and time in the format 2021-01-25T00:00:00Z. Any data before this date will not be replicated.
diff --git a/docs/resources/source_chartmogul.md b/docs/resources/source_chartmogul.md
index eae9586b4..88ecc29c4 100644
--- a/docs/resources/source_chartmogul.md
+++ b/docs/resources/source_chartmogul.md
@@ -15,14 +15,13 @@ SourceChartmogul Resource
```terraform
resource "airbyte_source_chartmogul" "my_source_chartmogul" {
configuration = {
- api_key = "...my_api_key..."
- interval = "week"
- source_type = "chartmogul"
- start_date = "2017-01-25T00:00:00Z"
+ api_key = "...my_api_key..."
+ start_date = "2017-01-25T00:00:00Z"
}
- name = "Neal Gorczany"
- secret_id = "...my_secret_id..."
- workspace_id = "06a8aa94-c026-444c-b5e9-d9a4578adc1a"
+ definition_id = "87a1fb18-7d33-4223-980b-b99362d2f459"
+ name = "Monica Pagac"
+ secret_id = "...my_secret_id..."
+ workspace_id = "bc3680ab-b376-4bce-a6a7-c0ce20da3e9a"
}
```
@@ -32,11 +31,12 @@ resource "airbyte_source_chartmogul" "my_source_chartmogul" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,10 +49,7 @@ resource "airbyte_source_chartmogul" "my_source_chartmogul" {
Required:
-- `api_key` (String) Your Chartmogul API key. See the docs for info on how to obtain this.
-- `interval` (String) must be one of ["day", "week", "month", "quarter"]
-Some APIs such as Metrics require intervals to cluster data.
-- `source_type` (String) must be one of ["chartmogul"]
+- `api_key` (String, Sensitive) Your Chartmogul API key. See the docs for info on how to obtain this.
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. When feasible, any data before this date will not be replicated.
diff --git a/docs/resources/source_clickhouse.md b/docs/resources/source_clickhouse.md
index 0752f6a3e..b8f07ea23 100644
--- a/docs/resources/source_clickhouse.md
+++ b/docs/resources/source_clickhouse.md
@@ -15,21 +15,19 @@ SourceClickhouse Resource
```terraform
resource "airbyte_source_clickhouse" "my_source_clickhouse" {
configuration = {
- database = "default"
- host = "...my_host..."
- password = "...my_password..."
- port = 8123
- source_type = "clickhouse"
+ database = "default"
+ host = "...my_host..."
+ password = "...my_password..."
+ port = 8123
tunnel_method = {
- source_clickhouse_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ source_clickhouse_no_tunnel = {}
}
- username = "Gerry81"
+ username = "Maximus28"
}
- name = "Mr. Simon Altenwerth"
- secret_id = "...my_secret_id..."
- workspace_id = "c802e2ec-09ff-48f0-b816-ff3477c13e90"
+ definition_id = "54cb2418-93e1-4da4-ac4f-685d205011b8"
+ name = "Milton Crooks"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3b757391-0861-48e9-9445-d83c494a849c"
}
```
@@ -39,11 +37,12 @@ resource "airbyte_source_clickhouse" "my_source_clickhouse" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,13 +57,13 @@ Required:
- `database` (String) The name of the database.
- `host` (String) The host endpoint of the Clickhouse cluster.
-- `port` (Number) The port of the database.
-- `source_type` (String) must be one of ["clickhouse"]
- `username` (String) The username which is used to access the database.
Optional:
-- `password` (String) The password associated with this username.
+- `password` (String, Sensitive) The password associated with this username.
+- `port` (Number) Default: 8123
+The port of the database.
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -72,80 +71,41 @@ Optional:
Optional:
-- `source_clickhouse_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_ssh_tunnel_method_no_tunnel))
-- `source_clickhouse_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_ssh_tunnel_method_password_authentication))
-- `source_clickhouse_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_ssh_tunnel_method_ssh_key_authentication))
-- `source_clickhouse_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_update_ssh_tunnel_method_no_tunnel))
-- `source_clickhouse_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_update_ssh_tunnel_method_password_authentication))
-- `source_clickhouse_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_clickhouse_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with ssh-keygen -t rsa -m PEM -f myuser_rsa)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_clickhouse_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
diff --git a/docs/resources/source_clickup_api.md b/docs/resources/source_clickup_api.md
index 3fe26b280..2bc07698e 100644
--- a/docs/resources/source_clickup_api.md
+++ b/docs/resources/source_clickup_api.md
@@ -19,13 +19,13 @@ resource "airbyte_source_clickup_api" "my_source_clickupapi" {
folder_id = "...my_folder_id..."
include_closed_tasks = true
list_id = "...my_list_id..."
- source_type = "clickup-api"
space_id = "...my_space_id..."
team_id = "...my_team_id..."
}
- name = "Mr. Jack Gottlieb"
- secret_id = "...my_secret_id..."
- workspace_id = "b0960a66-8151-4a47-aaf9-23c5949f83f3"
+ definition_id = "517f0e32-c2e3-402e-ade9-2b3e43098446"
+ name = "Freddie Little"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e6422d15-b828-4621-a877-d2e625cdd80b"
}
```
@@ -35,11 +35,12 @@ resource "airbyte_source_clickup_api" "my_source_clickupapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,13 +53,13 @@ resource "airbyte_source_clickup_api" "my_source_clickupapi" {
Required:
-- `api_token` (String) Every ClickUp API call required authentication. This field is your personal API token. See here.
-- `source_type` (String) must be one of ["clickup-api"]
+- `api_token` (String, Sensitive) Every ClickUp API call requires authentication. This field is your personal API token. See here.
Optional:
- `folder_id` (String) The ID of your folder in your space. Retrieve it from the `/space/{space_id}/folder` of the ClickUp API. See here.
-- `include_closed_tasks` (Boolean) Include or exclude closed tasks. By default, they are excluded. See here.
+- `include_closed_tasks` (Boolean) Default: false
+Include or exclude closed tasks. By default, they are excluded. See here.
- `list_id` (String) The ID of your list in your folder. Retrieve it from the `/folder/{folder_id}/list` of the ClickUp API. See here.
- `space_id` (String) The ID of your space in your workspace. Retrieve it from the `/team/{team_id}/space` of the ClickUp API. See here.
- `team_id` (String) The ID of your team in ClickUp. Retrieve it from the `/team` of the ClickUp API. See here.
diff --git a/docs/resources/source_clockify.md b/docs/resources/source_clockify.md
index 113eb7b03..59cfafc53 100644
--- a/docs/resources/source_clockify.md
+++ b/docs/resources/source_clockify.md
@@ -17,12 +17,12 @@ resource "airbyte_source_clockify" "my_source_clockify" {
configuration = {
api_key = "...my_api_key..."
api_url = "...my_api_url..."
- source_type = "clockify"
workspace_id = "...my_workspace_id..."
}
- name = "Angela Schaefer"
- secret_id = "...my_secret_id..."
- workspace_id = "76ffb901-c6ec-4bb4-a243-cf789ffafeda"
+ definition_id = "a5ff53c6-fc10-4ca6-ba82-7c3d349f444d"
+ name = "Julius Lockman"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9d8494dc-faea-4550-8380-1e9f446900c8"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_clockify" "my_source_clockify" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,12 +50,12 @@ resource "airbyte_source_clockify" "my_source_clockify" {
Required:
-- `api_key` (String) You can get your api access_key here This API is Case Sensitive.
-- `source_type` (String) must be one of ["clockify"]
+- `api_key` (String, Sensitive) You can get your API access_key here. This API is case sensitive.
- `workspace_id` (String) WorkSpace Id
Optional:
-- `api_url` (String) The URL for the Clockify API. This should only need to be modified if connecting to an enterprise version of Clockify.
+- `api_url` (String) Default: "https://api.clockify.me"
+The URL for the Clockify API. This should only need to be modified if connecting to an enterprise version of Clockify.
diff --git a/docs/resources/source_close_com.md b/docs/resources/source_close_com.md
index b28cb6c3a..6b9611ad8 100644
--- a/docs/resources/source_close_com.md
+++ b/docs/resources/source_close_com.md
@@ -15,13 +15,13 @@ SourceCloseCom Resource
```terraform
resource "airbyte_source_close_com" "my_source_closecom" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "close-com"
- start_date = "2021-01-01"
+ api_key = "...my_api_key..."
+ start_date = "2021-01-01"
}
- name = "Ronnie Nikolaus"
- secret_id = "...my_secret_id..."
- workspace_id = "e0ac184c-2b9c-4247-8883-73a40e1942f3"
+ definition_id = "ba7b45cf-ea08-4abd-9a32-8f6c373e0666"
+ name = "Miss Eva Collier"
+ secret_id = "...my_secret_id..."
+ workspace_id = "a3ab4d44-755b-4910-a5c9-99e89cbd0e8f"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_close_com" "my_source_closecom" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,11 +49,11 @@ resource "airbyte_source_close_com" "my_source_closecom" {
Required:
-- `api_key` (String) Close.com API key (usually starts with 'api_'; find yours here).
-- `source_type` (String) must be one of ["close-com"]
+- `api_key` (String, Sensitive) Close.com API key (usually starts with 'api_'; find yours here).
Optional:
-- `start_date` (String) The start date to sync data; all data after this date will be replicated. Leave blank to retrieve all the data available in the account. Format: YYYY-MM-DD.
+- `start_date` (String) Default: "2021-01-01"
+The start date to sync data; all data after this date will be replicated. Leave blank to retrieve all the data available in the account. Format: YYYY-MM-DD.
diff --git a/docs/resources/source_coda.md b/docs/resources/source_coda.md
index cc909b5dd..242e36d48 100644
--- a/docs/resources/source_coda.md
+++ b/docs/resources/source_coda.md
@@ -15,12 +15,12 @@ SourceCoda Resource
```terraform
resource "airbyte_source_coda" "my_source_coda" {
configuration = {
- auth_token = "...my_auth_token..."
- source_type = "coda"
+ auth_token = "...my_auth_token..."
}
- name = "Lila Harris II"
- secret_id = "...my_secret_id..."
- workspace_id = "5756f5d5-6d0b-4d0a-b2df-e13db4f62cba"
+ definition_id = "2a37cc1f-bec8-483d-a2fe-cd2cab29e0bc"
+ name = "Lisa Barrows"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3cc123e8-783d-450d-8d2b-80c50dc344f6"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_coda" "my_source_coda" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_coda" "my_source_coda" {
Required:
-- `auth_token` (String) Bearer token
-- `source_type` (String) must be one of ["coda"]
+- `auth_token` (String, Sensitive) Bearer token
diff --git a/docs/resources/source_coin_api.md b/docs/resources/source_coin_api.md
index 3bd2f51bf..6014811bc 100644
--- a/docs/resources/source_coin_api.md
+++ b/docs/resources/source_coin_api.md
@@ -18,15 +18,15 @@ resource "airbyte_source_coin_api" "my_source_coinapi" {
api_key = "...my_api_key..."
end_date = "2019-01-01T00:00:00"
environment = "sandbox"
- limit = 10
+ limit = 8
period = "2MTH"
- source_type = "coin-api"
start_date = "2019-01-01T00:00:00"
symbol_id = "...my_symbol_id..."
}
- name = "Francis Boyle"
- secret_id = "...my_secret_id..."
- workspace_id = "bc0b80a6-924d-43b2-acfc-c8f895010f5d"
+ definition_id = "f0e9a05e-994a-4ce4-9dc5-b42f2a228e88"
+ name = "Rhonda Kunze"
+ secret_id = "...my_secret_id..."
+ workspace_id = "d4275060-42c1-4c65-a61b-2485a060238e"
}
```
@@ -36,11 +36,12 @@ resource "airbyte_source_coin_api" "my_source_coinapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,11 +54,8 @@ resource "airbyte_source_coin_api" "my_source_coinapi" {
Required:
-- `api_key` (String) API Key
-- `environment` (String) must be one of ["sandbox", "production"]
-The environment to use. Either sandbox or production.
+- `api_key` (String, Sensitive) API Key
- `period` (String) The period to use. See the documentation for a list. https://docs.coinapi.io/#list-all-periods-get
-- `source_type` (String) must be one of ["coin-api"]
- `start_date` (String) The start date in ISO 8601 format.
- `symbol_id` (String) The symbol ID to use. See the documentation for a list.
https://docs.coinapi.io/#list-all-symbols-get
@@ -67,7 +65,10 @@ Optional:
- `end_date` (String) The end date in ISO 8601 format. If not supplied, data will be returned
from the start date to the current time, or when the count of result
elements reaches its limit.
-- `limit` (Number) The maximum number of elements to return. If not supplied, the default
+- `environment` (String) must be one of ["sandbox", "production"]; Default: "sandbox"
+The environment to use. Either sandbox or production.
+- `limit` (Number) Default: 100
+The maximum number of elements to return. If not supplied, the default
is 100. For numbers larger than 100, each 100 items is counted as one
request for pricing purposes. Maximum value is 100000.
diff --git a/docs/resources/source_coinmarketcap.md b/docs/resources/source_coinmarketcap.md
index c2cd2f263..672a071ae 100644
--- a/docs/resources/source_coinmarketcap.md
+++ b/docs/resources/source_coinmarketcap.md
@@ -15,16 +15,16 @@ SourceCoinmarketcap Resource
```terraform
resource "airbyte_source_coinmarketcap" "my_source_coinmarketcap" {
configuration = {
- api_key = "...my_api_key..."
- data_type = "historical"
- source_type = "coinmarketcap"
+ api_key = "...my_api_key..."
+ data_type = "historical"
symbols = [
"...",
]
}
- name = "Meredith Kassulke"
- secret_id = "...my_secret_id..."
- workspace_id = "1804e54c-82f1-468a-b63c-8873e484380b"
+ definition_id = "a1361d3c-00cf-4e1b-a68d-340502b96029"
+ name = "Pat Robel"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9e6abf17-c2d5-40cb-ae6f-f332bdf14577"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_coinmarketcap" "my_source_coinmarketcap" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,10 +52,9 @@ resource "airbyte_source_coinmarketcap" "my_source_coinmarketcap" {
Required:
-- `api_key` (String) Your API Key. See here. The token is case sensitive.
+- `api_key` (String, Sensitive) Your API Key. See here. The token is case sensitive.
- `data_type` (String) must be one of ["latest", "historical"]
/latest: Latest market ticker quotes and averages for cryptocurrencies and exchanges. /historical: Intervals of historic market data like OHLCV data or data for use in charting libraries. See here.
-- `source_type` (String) must be one of ["coinmarketcap"]
Optional:
diff --git a/docs/resources/source_configcat.md b/docs/resources/source_configcat.md
index 0664fac49..741ca4a38 100644
--- a/docs/resources/source_configcat.md
+++ b/docs/resources/source_configcat.md
@@ -15,13 +15,13 @@ SourceConfigcat Resource
```terraform
resource "airbyte_source_configcat" "my_source_configcat" {
configuration = {
- password = "...my_password..."
- source_type = "configcat"
- username = "Art_Wiegand"
+ password = "...my_password..."
+ username = "Estrella_Wilkinson70"
}
- name = "Lowell Oberbrunner"
- secret_id = "...my_secret_id..."
- workspace_id = "5a60a04c-495c-4c69-9171-b51c1bdb1cf4"
+ definition_id = "81a7466b-f78b-43b7-9ede-547fc7c1cb53"
+ name = "Ms. Luis Harris"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9ddb3b3d-7401-439d-82cf-2cb416442d85"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_configcat" "my_source_configcat" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_configcat" "my_source_configcat" {
Required:
-- `password` (String) Basic auth password. See here.
-- `source_type` (String) must be one of ["configcat"]
+- `password` (String, Sensitive) Basic auth password. See here.
- `username` (String) Basic auth user name. See here.
diff --git a/docs/resources/source_confluence.md b/docs/resources/source_confluence.md
index 65945d69a..fdf332e89 100644
--- a/docs/resources/source_confluence.md
+++ b/docs/resources/source_confluence.md
@@ -18,11 +18,11 @@ resource "airbyte_source_confluence" "my_source_confluence" {
api_token = "...my_api_token..."
domain_name = "...my_domain_name..."
email = "abc@example.com"
- source_type = "confluence"
}
- name = "Jody Will"
- secret_id = "...my_secret_id..."
- workspace_id = "ccca99bc-7fc0-4b2d-8e10-873e42b006d6"
+ definition_id = "82e70e18-a817-42f9-b227-1c9f9cbaa542"
+ name = "Ms. Nathaniel Walter V"
+ secret_id = "...my_secret_id..."
+ workspace_id = "61d84c3f-bc24-4f86-8fce-85198c116e72"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_confluence" "my_source_confluence" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,9 +50,8 @@ resource "airbyte_source_confluence" "my_source_confluence" {
Required:
-- `api_token` (String) Please follow the Jira confluence for generating an API token: generating an API token.
+- `api_token` (String, Sensitive) Please follow the Confluence instructions for generating an API token.
- `domain_name` (String) Your Confluence domain name
- `email` (String) Your Confluence login email
-- `source_type` (String) must be one of ["confluence"]
diff --git a/docs/resources/source_convex.md b/docs/resources/source_convex.md
index 107ac4ea7..f01b1acfa 100644
--- a/docs/resources/source_convex.md
+++ b/docs/resources/source_convex.md
@@ -17,11 +17,11 @@ resource "airbyte_source_convex" "my_source_convex" {
configuration = {
access_key = "...my_access_key..."
deployment_url = "https://murky-swan-635.convex.cloud"
- source_type = "convex"
}
- name = "Guy Kovacek"
- secret_id = "...my_secret_id..."
- workspace_id = "a8581a58-208c-454f-afa9-c95f2eac5565"
+ definition_id = "581ee677-0fa8-4ec1-ba80-4bd6457a40e8"
+ name = "Corey Braun"
+ secret_id = "...my_secret_id..."
+ workspace_id = "541ba6f5-d90d-45a8-a349-e2072bdff381"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_convex" "my_source_convex" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_convex" "my_source_convex" {
Required:
-- `access_key` (String) API access key used to retrieve data from Convex.
+- `access_key` (String, Sensitive) API access key used to retrieve data from Convex.
- `deployment_url` (String)
-- `source_type` (String) must be one of ["convex"]
diff --git a/docs/resources/source_datascope.md b/docs/resources/source_datascope.md
index bae217d54..49d09813b 100644
--- a/docs/resources/source_datascope.md
+++ b/docs/resources/source_datascope.md
@@ -15,13 +15,13 @@ SourceDatascope Resource
```terraform
resource "airbyte_source_datascope" "my_source_datascope" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "datascope"
- start_date = "dd/mm/YYYY HH:MM"
+ api_key = "...my_api_key..."
+ start_date = "dd/mm/YYYY HH:MM"
}
- name = "Danny Bahringer"
- secret_id = "...my_secret_id..."
- workspace_id = "fee81206-e281-43fa-8a41-c480d3f2132a"
+ definition_id = "8dbe50fc-b32a-4781-b3ab-b82e6a7189e9"
+ name = "Erin Johns"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4638d140-9463-49cf-9dd4-a0c05f536f6b"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_datascope" "my_source_datascope" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_datascope" "my_source_datascope" {
Required:
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["datascope"]
+- `api_key` (String, Sensitive) API Key
- `start_date` (String) Start date for the data to be replicated
diff --git a/docs/resources/source_delighted.md b/docs/resources/source_delighted.md
index b49cbcb20..24120990c 100644
--- a/docs/resources/source_delighted.md
+++ b/docs/resources/source_delighted.md
@@ -15,13 +15,13 @@ SourceDelighted Resource
```terraform
resource "airbyte_source_delighted" "my_source_delighted" {
configuration = {
- api_key = "...my_api_key..."
- since = "2022-05-30 04:50:23"
- source_type = "delighted"
+ api_key = "...my_api_key..."
+ since = "2022-05-30 04:50:23"
}
- name = "Sarah Collier"
- secret_id = "...my_secret_id..."
- workspace_id = "14f4cc6f-18bf-4962-9a6a-4f77a87ee3e4"
+ definition_id = "b8f8f6af-bf36-45d6-87e0-87e3905b6a41"
+ name = "Elsa Osinski"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4f73b7e8-dc37-41ec-bee1-0511b439ed17"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_delighted" "my_source_delighted" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_delighted" "my_source_delighted" {
Required:
-- `api_key` (String) A Delighted API key.
+- `api_key` (String, Sensitive) A Delighted API key.
- `since` (String) The date from which you'd like to replicate the data
-- `source_type` (String) must be one of ["delighted"]
diff --git a/docs/resources/source_dixa.md b/docs/resources/source_dixa.md
index 00d9fb658..8721b0e31 100644
--- a/docs/resources/source_dixa.md
+++ b/docs/resources/source_dixa.md
@@ -15,14 +15,14 @@ SourceDixa Resource
```terraform
resource "airbyte_source_dixa" "my_source_dixa" {
configuration = {
- api_token = "...my_api_token..."
- batch_size = 31
- source_type = "dixa"
- start_date = "YYYY-MM-DD"
+ api_token = "...my_api_token..."
+ batch_size = 1
+ start_date = "YYYY-MM-DD"
}
- name = "Brittany Cole"
- secret_id = "...my_secret_id..."
- workspace_id = "5b34418e-3bb9-41c8-9975-e0e8419d8f84"
+ definition_id = "9f9b4783-ac23-42bf-a41c-80b23345c949"
+ name = "Arturo Hammes"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9f5a34ff-680c-488d-8e9f-7431721e4227"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_dixa" "my_source_dixa" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,12 +50,12 @@ resource "airbyte_source_dixa" "my_source_dixa" {
Required:
-- `api_token` (String) Dixa API token
-- `source_type` (String) must be one of ["dixa"]
+- `api_token` (String, Sensitive) Dixa API token
- `start_date` (String) The connector pulls records updated from this date onwards.
Optional:
-- `batch_size` (Number) Number of days to batch into one request. Max 31.
+- `batch_size` (Number) Default: 31
+Number of days to batch into one request. Max 31.
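The `batch_size` semantics above amount to chunking the replication window into consecutive requests of at most 31 days each. A minimal sketch of that behavior (our own illustration, not the connector's code; function name is ours):

```python
from datetime import date, timedelta

def batch_date_range(start: date, end: date, batch_size: int = 31):
    """Split [start, end] into consecutive request windows of at most
    `batch_size` days, mirroring the documented maximum of 31."""
    if not 1 <= batch_size <= 31:
        raise ValueError("batch_size must be between 1 and 31")
    windows, cursor = [], start
    while cursor <= end:
        window_end = min(cursor + timedelta(days=batch_size - 1), end)
        windows.append((cursor, window_end))
        cursor = window_end + timedelta(days=1)
    return windows

# Q1 2023 (90 days) with the default batch size splits into three requests.
windows = batch_date_range(date(2023, 1, 1), date(2023, 3, 31))
```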
diff --git a/docs/resources/source_dockerhub.md b/docs/resources/source_dockerhub.md
index 45ad7b7aa..1f60a4c26 100644
--- a/docs/resources/source_dockerhub.md
+++ b/docs/resources/source_dockerhub.md
@@ -16,11 +16,11 @@ SourceDockerhub Resource
resource "airbyte_source_dockerhub" "my_source_dockerhub" {
configuration = {
docker_username = "airbyte"
- source_type = "dockerhub"
}
- name = "Joe Haag"
- secret_id = "...my_secret_id..."
- workspace_id = "3e07edcc-4aa5-4f3c-abd9-05a972e05672"
+ definition_id = "fd51b66e-c345-4b5c-9bae-74726a8cd9c5"
+ name = "Ernesto Swaniawski"
+ secret_id = "...my_secret_id..."
+ workspace_id = "afda11e1-0d00-42e1-873f-9ba1e39a63be"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_dockerhub" "my_source_dockerhub" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,6 +49,5 @@ resource "airbyte_source_dockerhub" "my_source_dockerhub" {
Required:
- `docker_username` (String) Username of DockerHub person or organization (for https://hub.docker.com/v2/repositories/USERNAME/ API call)
-- `source_type` (String) must be one of ["dockerhub"]
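The API call the `docker_username` description refers to is plain string substitution into the DockerHub v2 repositories endpoint; a one-liner sketch (the helper name is ours, not the connector's):

```python
def repositories_url(docker_username: str) -> str:
    # Builds the DockerHub v2 repositories endpoint named in the description.
    return f"https://hub.docker.com/v2/repositories/{docker_username}/"

url = repositories_url("airbyte")
```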
diff --git a/docs/resources/source_dremio.md b/docs/resources/source_dremio.md
index ace6581e5..0a00d2f00 100644
--- a/docs/resources/source_dremio.md
+++ b/docs/resources/source_dremio.md
@@ -15,13 +15,13 @@ SourceDremio Resource
```terraform
resource "airbyte_source_dremio" "my_source_dremio" {
configuration = {
- api_key = "...my_api_key..."
- base_url = "...my_base_url..."
- source_type = "dremio"
+ api_key = "...my_api_key..."
+ base_url = "...my_base_url..."
}
- name = "Aaron Connelly"
- secret_id = "...my_secret_id..."
- workspace_id = "2d309470-bf7a-44fa-87cf-535a6fae54eb"
+ definition_id = "209caa59-3eb8-408e-88c0-a1f11671a56d"
+ name = "Jeanne Lebsack"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b5e4c195-9643-43e1-9514-84aac586d055"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_dremio" "my_source_dremio" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,11 @@ resource "airbyte_source_dremio" "my_source_dremio" {
Required:
-- `api_key` (String) API Key that is generated when you authenticate to Dremio API
-- `base_url` (String) URL of your Dremio instance
-- `source_type` (String) must be one of ["dremio"]
+- `api_key` (String, Sensitive) API Key that is generated when you authenticate to Dremio API
+
+Optional:
+
+- `base_url` (String) Default: "https://app.dremio.cloud"
+URL of your Dremio instance
diff --git a/docs/resources/source_dynamodb.md b/docs/resources/source_dynamodb.md
index 6de681c81..ad1249a81 100644
--- a/docs/resources/source_dynamodb.md
+++ b/docs/resources/source_dynamodb.md
@@ -17,14 +17,14 @@ resource "airbyte_source_dynamodb" "my_source_dynamodb" {
configuration = {
access_key_id = "A012345678910EXAMPLE"
endpoint = "https://{aws_dynamo_db_url}.com"
- region = "us-gov-west-1"
+ region = "us-west-1"
reserved_attribute_names = "name, field_name, field-name"
secret_access_key = "a012345678910ABCDEFGH/AbCdEfGhEXAMPLEKEY"
- source_type = "dynamodb"
}
- name = "Sandra Rowe Sr."
- secret_id = "...my_secret_id..."
- workspace_id = "f023b75d-2367-4fe1-a0cc-8df79f0a396d"
+ definition_id = "44c5465b-457a-42c2-a18f-e1b91dcce8e6"
+ name = "Faye Streich"
+ secret_id = "...my_secret_id..."
+ workspace_id = "75fb5812-2af6-4a8a-8655-36a205f1e4d3"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_dynamodb" "my_source_dynamodb" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,14 +52,14 @@ resource "airbyte_source_dynamodb" "my_source_dynamodb" {
Required:
-- `access_key_id` (String) The access key id to access Dynamodb. Airbyte requires read permissions to the database
-- `secret_access_key` (String) The corresponding secret to the access key id.
-- `source_type` (String) must be one of ["dynamodb"]
+- `access_key_id` (String, Sensitive) The access key id to access Dynamodb. Airbyte requires read permissions to the database
+- `secret_access_key` (String, Sensitive) The corresponding secret to the access key id.
Optional:
-- `endpoint` (String) the URL of the Dynamodb database
-- `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]
+- `endpoint` (String) Default: ""
+the URL of the Dynamodb database
+- `region` (String) must be one of ["", "us-east-1", "us-east-2", "us-west-1", "us-west-2", "af-south-1", "ap-east-1", "ap-south-1", "ap-northeast-1", "ap-northeast-2", "ap-northeast-3", "ap-southeast-1", "ap-southeast-2", "ca-central-1", "cn-north-1", "cn-northwest-1", "eu-central-1", "eu-north-1", "eu-south-1", "eu-west-1", "eu-west-2", "eu-west-3", "sa-east-1", "me-south-1", "us-gov-east-1", "us-gov-west-1"]; Default: ""
The region of the Dynamodb database
- `reserved_attribute_names` (String) Comma separated reserved attribute names present in your tables
diff --git a/docs/resources/source_e2e_test_cloud.md b/docs/resources/source_e2e_test_cloud.md
deleted file mode 100644
index f8fb3fe22..000000000
--- a/docs/resources/source_e2e_test_cloud.md
+++ /dev/null
@@ -1,124 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_e2e_test_cloud Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceE2eTestCloud Resource
----
-
-# airbyte_source_e2e_test_cloud (Resource)
-
-SourceE2eTestCloud Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_source_e2e_test_cloud" "my_source_e2etestcloud" {
- configuration = {
- max_messages = 6
- message_interval_ms = 0
- mock_catalog = {
- source_e2e_test_cloud_mock_catalog_multi_schema = {
- stream_schemas = "...my_stream_schemas..."
- type = "MULTI_STREAM"
- }
- }
- seed = 42
- source_type = "e2e-test-cloud"
- type = "CONTINUOUS_FEED"
- }
- name = "Gertrude Grant"
- secret_id = "...my_secret_id..."
- workspace_id = "c15dfbac-e188-4b1c-8ee2-c8c6ce611fee"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `source_id` (String)
-- `source_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `max_messages` (Number) Number of records to emit per stream. Min 1. Max 100 billion.
-- `mock_catalog` (Attributes) (see [below for nested schema](#nestedatt--configuration--mock_catalog))
-- `source_type` (String) must be one of ["e2e-test-cloud"]
-
-Optional:
-
-- `message_interval_ms` (Number) Interval between messages in ms. Min 0 ms. Max 60000 ms (1 minute).
-- `seed` (Number) When the seed is unspecified, the current time millis will be used as the seed. Range: [0, 1000000].
-- `type` (String) must be one of ["CONTINUOUS_FEED"]
-
-
-### Nested Schema for `configuration.mock_catalog`
-
-Optional:
-
-- `source_e2e_test_cloud_mock_catalog_multi_schema` (Attributes) A catalog with multiple data streams, each with a different schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_mock_catalog_multi_schema))
-- `source_e2e_test_cloud_mock_catalog_single_schema` (Attributes) A catalog with one or multiple streams that share the same schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_mock_catalog_single_schema))
-- `source_e2e_test_cloud_update_mock_catalog_multi_schema` (Attributes) A catalog with multiple data streams, each with a different schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_update_mock_catalog_multi_schema))
-- `source_e2e_test_cloud_update_mock_catalog_single_schema` (Attributes) A catalog with one or multiple streams that share the same schema. (see [below for nested schema](#nestedatt--configuration--mock_catalog--source_e2e_test_cloud_update_mock_catalog_single_schema))
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_mock_catalog_multi_schema`
-
-Required:
-
-- `stream_schemas` (String) A Json object specifying multiple data streams and their schemas. Each key in this object is one stream name. Each value is the schema for that stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["MULTI_STREAM"]
-
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_mock_catalog_single_schema`
-
-Required:
-
-- `stream_name` (String) Name of the data stream.
-- `stream_schema` (String) A Json schema for the stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["SINGLE_STREAM"]
-
-Optional:
-
-- `stream_duplication` (Number) Duplicate the stream for easy load testing. Each stream name will have a number suffix. For example, if the stream name is "ds", the duplicated streams will be "ds_0", "ds_1", etc.
-
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_update_mock_catalog_multi_schema`
-
-Required:
-
-- `stream_schemas` (String) A Json object specifying multiple data streams and their schemas. Each key in this object is one stream name. Each value is the schema for that stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["MULTI_STREAM"]
-
-
-
-### Nested Schema for `configuration.mock_catalog.source_e2e_test_cloud_update_mock_catalog_single_schema`
-
-Required:
-
-- `stream_name` (String) Name of the data stream.
-- `stream_schema` (String) A Json schema for the stream. The schema should be compatible with draft-07. See this doc for examples.
-- `type` (String) must be one of ["SINGLE_STREAM"]
-
-Optional:
-
-- `stream_duplication` (Number) Duplicate the stream for easy load testing. Each stream name will have a number suffix. For example, if the stream name is "ds", the duplicated streams will be "ds_0", "ds_1", etc.
-
-
diff --git a/docs/resources/source_emailoctopus.md b/docs/resources/source_emailoctopus.md
index 43ef90e3a..d272fae25 100644
--- a/docs/resources/source_emailoctopus.md
+++ b/docs/resources/source_emailoctopus.md
@@ -15,12 +15,12 @@ SourceEmailoctopus Resource
```terraform
resource "airbyte_source_emailoctopus" "my_source_emailoctopus" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "emailoctopus"
+ api_key = "...my_api_key..."
}
- name = "Gregory Satterfield"
- secret_id = "...my_secret_id..."
- workspace_id = "bdb6eec7-4378-4ba2-9317-747dc915ad2c"
+ definition_id = "09ea5800-594f-4bd8-a631-4cace02f96b8"
+ name = "Annie Hegmann"
+ secret_id = "...my_secret_id..."
+ workspace_id = "f7e4181b-36cf-41af-8f94-e3c79cbeca1c"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_emailoctopus" "my_source_emailoctopus" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_emailoctopus" "my_source_emailoctopus" {
Required:
-- `api_key` (String) EmailOctopus API Key. See the docs for information on how to generate this key.
-- `source_type` (String) must be one of ["emailoctopus"]
+- `api_key` (String, Sensitive) EmailOctopus API Key. See the docs for information on how to generate this key.
diff --git a/docs/resources/source_exchange_rates.md b/docs/resources/source_exchange_rates.md
index ca8d8a6e0..049b80fda 100644
--- a/docs/resources/source_exchange_rates.md
+++ b/docs/resources/source_exchange_rates.md
@@ -16,14 +16,14 @@ SourceExchangeRates Resource
resource "airbyte_source_exchange_rates" "my_source_exchangerates" {
configuration = {
access_key = "...my_access_key..."
- base = "USD"
- ignore_weekends = false
- source_type = "exchange-rates"
+ base = "EUR"
+ ignore_weekends = true
start_date = "YYYY-MM-DD"
}
- name = "Mrs. Leslie Klocko"
- secret_id = "...my_secret_id..."
- workspace_id = "c0f5ae2f-3a6b-4700-8787-56143f5a6c98"
+ definition_id = "a5bbba82-d4c0-4a2c-af78-12475bca9a48"
+ name = "Amber Osinski"
+ secret_id = "...my_secret_id..."
+ workspace_id = "0ddc3156-b2ff-4d5d-ac69-da5497add71f"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_exchange_rates" "my_source_exchangerates" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,13 +51,13 @@ resource "airbyte_source_exchange_rates" "my_source_exchangerates" {
Required:
-- `access_key` (String) Your API Key. See here. The key is case sensitive.
-- `source_type` (String) must be one of ["exchange-rates"]
+- `access_key` (String, Sensitive) Your API Key. See here. The key is case sensitive.
- `start_date` (String) Start getting data from that date.
Optional:
- `base` (String) ISO reference currency. See here. Free plan doesn't support Source Currency Switching, default base currency is EUR
-- `ignore_weekends` (Boolean) Ignore weekends? (Exchanges don't run on weekends)
+- `ignore_weekends` (Boolean) Default: true
+Ignore weekends? (Exchanges don't run on weekends)
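The `ignore_weekends` behavior above can be pictured as filtering Saturdays and Sundays out of the requested dates, since exchanges publish no new rates on weekends. A sketch under that reading (our illustration, not the connector's code):

```python
from datetime import date, timedelta

def sync_dates(start: date, end: date, ignore_weekends: bool = True):
    """Dates to request rates for; with the default ignore_weekends=True,
    Saturdays (weekday 5) and Sundays (weekday 6) are skipped."""
    out, d = [], start
    while d <= end:
        if not (ignore_weekends and d.weekday() >= 5):
            out.append(d)
        d += timedelta(days=1)
    return out

# One full week starting Monday 2023-05-01 yields only the five weekdays.
dates = sync_dates(date(2023, 5, 1), date(2023, 5, 7))
```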
diff --git a/docs/resources/source_facebook_marketing.md b/docs/resources/source_facebook_marketing.md
index f34d17319..e99a07706 100644
--- a/docs/resources/source_facebook_marketing.md
+++ b/docs/resources/source_facebook_marketing.md
@@ -23,35 +23,34 @@ resource "airbyte_source_facebook_marketing" "my_source_facebookmarketing" {
custom_insights = [
{
action_breakdowns = [
- "action_destination",
+ "action_video_sound",
]
- action_report_time = "conversion"
+ action_report_time = "mixed"
breakdowns = [
- "frequency_value",
+ "mmm",
]
end_date = "2017-01-26T00:00:00Z"
fields = [
- "account_name",
+ "cpp",
]
- insights_lookback_window = 6
+ insights_lookback_window = 7
level = "ad"
- name = "Jesus Batz"
+ name = "Julio Beier"
start_date = "2017-01-25T00:00:00Z"
- time_increment = 8
+ time_increment = 9
},
]
end_date = "2017-01-26T00:00:00Z"
fetch_thumbnail_images = false
- include_deleted = true
- insights_lookback_window = 4
- max_batch_size = 7
+ include_deleted = false
+ insights_lookback_window = 2
page_size = 3
- source_type = "facebook-marketing"
start_date = "2017-01-25T00:00:00Z"
}
- name = "Ms. Wilbert McGlynn"
- secret_id = "...my_secret_id..."
- workspace_id = "04f926ba-d255-4381-9b47-4b0ed20e5624"
+ definition_id = "7eb149e6-fe9a-476b-9271-d6f7a77e51b0"
+ name = "Olivia MacGyver"
+ secret_id = "...my_secret_id..."
+ workspace_id = "2e6bc1e2-2381-4cdc-ae96-42f3c2fe19c3"
}
```
@@ -61,11 +60,12 @@ resource "airbyte_source_facebook_marketing" "my_source_facebookmarketing" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -78,23 +78,26 @@ resource "airbyte_source_facebook_marketing" "my_source_facebookmarketing" {
Required:
-- `access_token` (String) The value of the generated access token. From your App’s Dashboard, click on "Marketing API" then "Tools". Select permissions ads_management, ads_read, read_insights, business_management. Then click on "Get token". See the docs for more information.
-- `account_id` (String) The Facebook Ad account ID to use when pulling data from the Facebook Marketing API. Open your Meta Ads Manager. The Ad account ID number is in the account dropdown menu or in your browser's address bar. See the docs for more information.
-- `source_type` (String) must be one of ["facebook-marketing"]
-- `start_date` (String) The date from which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
+- `access_token` (String, Sensitive) The value of the generated access token. From your App’s Dashboard, click on "Marketing API" then "Tools". Select permissions ads_management, ads_read, read_insights, business_management. Then click on "Get token". See the docs for more information.
+- `account_id` (String) The Facebook Ad account ID to use when pulling data from the Facebook Marketing API. The Ad account ID number appears in the account dropdown menu, or in your browser's address bar when you open Meta Ads Manager. See the docs for more information.
Optional:
-- `action_breakdowns_allow_empty` (Boolean) Allows action_breakdowns to be an empty list
+- `action_breakdowns_allow_empty` (Boolean) Default: true
+Allows action_breakdowns to be an empty list
- `client_id` (String) The Client Id for your OAuth app
- `client_secret` (String) The Client Secret for your OAuth app
- `custom_insights` (Attributes List) A list which contains ad statistics entries, each entry must have a name and can contains fields, breakdowns or action_breakdowns. Click on "add" to fill this field. (see [below for nested schema](#nestedatt--configuration--custom_insights))
- `end_date` (String) The date until which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DDT00:00:00Z. All data generated between the start date and this end date will be replicated. Not setting this option will result in always syncing the latest data.
-- `fetch_thumbnail_images` (Boolean) Set to active if you want to fetch the thumbnail_url and store the result in thumbnail_data_url for each Ad Creative.
-- `include_deleted` (Boolean) Set to active if you want to include data from deleted Campaigns, Ads, and AdSets.
-- `insights_lookback_window` (Number) The attribution window. Facebook freezes insight data 28 days after it was generated, which means that all data from the past 28 days may have changed since we last emitted it, so you can retrieve refreshed insights from the past by setting this parameter. If you set a custom lookback window value in Facebook account, please provide the same value here.
-- `max_batch_size` (Number) Maximum batch size used when sending batch requests to Facebook API. Most users do not need to set this field unless they specifically need to tune the connector to address specific issues or use cases.
-- `page_size` (Number) Page size used when sending requests to Facebook API to specify number of records per page when response has pagination. Most users do not need to set this field unless they specifically need to tune the connector to address specific issues or use cases.
+- `fetch_thumbnail_images` (Boolean) Default: false
+Set to active if you want to fetch the thumbnail_url and store the result in thumbnail_data_url for each Ad Creative.
+- `include_deleted` (Boolean) Default: false
+Set to active if you want to include data from deleted Campaigns, Ads, and AdSets.
+- `insights_lookback_window` (Number) Default: 28
+The attribution window. Facebook freezes insight data 28 days after it was generated, which means that all data from the past 28 days may have changed since we last emitted it, so you can retrieve refreshed insights from the past by setting this parameter. If you set a custom lookback window value in your Facebook account, please provide the same value here.
+- `page_size` (Number) Default: 100
+Page size used when sending requests to Facebook API to specify number of records per page when response has pagination. Most users do not need to set this field unless they specifically need to tune the connector to address specific issues or use cases.
+- `start_date` (String) The date from which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DDT00:00:00Z. If not set then all data will be replicated for usual streams and only last 2 years for insight streams.
### Nested Schema for `configuration.custom_insights`
@@ -106,15 +109,17 @@ Required:
Optional:
- `action_breakdowns` (List of String) A list of chosen action_breakdowns for action_breakdowns
-- `action_report_time` (String) must be one of ["conversion", "impression", "mixed"]
+- `action_report_time` (String) must be one of ["conversion", "impression", "mixed"]; Default: "mixed"
Determines the report time of action stats. For example, if a person saw the ad on Jan 1st but converted on Jan 2nd, when you query the API with action_report_time=impression, you see a conversion on Jan 1st. When you query the API with action_report_time=conversion, you see a conversion on Jan 2nd.
- `breakdowns` (List of String) A list of chosen breakdowns for breakdowns
- `end_date` (String) The date until which you'd like to replicate data for this stream, in the format YYYY-MM-DDT00:00:00Z. All data generated between the start date and this end date will be replicated. Not setting this option will result in always syncing the latest data.
- `fields` (List of String) A list of chosen fields for fields parameter
-- `insights_lookback_window` (Number) The attribution window
-- `level` (String) must be one of ["ad", "adset", "campaign", "account"]
+- `insights_lookback_window` (Number) Default: 28
+The attribution window
+- `level` (String) must be one of ["ad", "adset", "campaign", "account"]; Default: "ad"
Chosen level for API
- `start_date` (String) The date from which you'd like to replicate data for this stream, in the format YYYY-MM-DDT00:00:00Z.
-- `time_increment` (Number) Time window in days by which to aggregate statistics. The sync will be chunked into N day intervals, where N is the number of days you specified. For example, if you set this value to 7, then all statistics will be reported as 7-day aggregates by starting from the start_date. If the start and end dates are October 1st and October 30th, then the connector will output 5 records: 01 - 06, 07 - 13, 14 - 20, 21 - 27, and 28 - 30 (3 days only).
+- `time_increment` (Number) Default: 1
+Time window in days by which to aggregate statistics. The sync will be chunked into N day intervals, where N is the number of days you specified. For example, if you set this value to 7, then all statistics will be reported as 7-day aggregates by starting from the start_date. If the start and end dates are October 1st and October 30th, then the connector will output 5 records: 01 - 06, 07 - 13, 14 - 20, 21 - 27, and 28 - 30 (3 days only).
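One way to picture the `insights_lookback_window` behavior described above (a sketch under our reading, not the connector's code): because Facebook may revise insight stats until they freeze 28 days after generation, an incremental sync re-reads from the lookback window before its saved cursor.

```python
from datetime import date, timedelta

def insights_sync_start(saved_cursor: date, lookback_days: int = 28) -> date:
    """Start of the next incremental read: re-pull insights generated within
    the lookback window, since they may have been revised since last sync."""
    return saved_cursor - timedelta(days=lookback_days)

# Cursor saved at 2023-06-30; with the default 28-day window the next
# incremental sync starts re-reading from 2023-06-02.
start = insights_sync_start(date(2023, 6, 30))
```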
diff --git a/docs/resources/source_facebook_pages.md b/docs/resources/source_facebook_pages.md
index 92ca82db3..2b419fbd8 100644
--- a/docs/resources/source_facebook_pages.md
+++ b/docs/resources/source_facebook_pages.md
@@ -17,11 +17,11 @@ resource "airbyte_source_facebook_pages" "my_source_facebookpages" {
configuration = {
access_token = "...my_access_token..."
page_id = "...my_page_id..."
- source_type = "facebook-pages"
}
- name = "Moses Wuckert"
- secret_id = "...my_secret_id..."
- workspace_id = "39a910ab-dcab-4626-b669-6e1ec00221b3"
+ definition_id = "2edfee92-bc33-473a-92c8-87f28ef975a7"
+ name = "Scott Baumbach"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5487915a-2f44-49e5-b0b6-8d5fb4b99e2f"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_facebook_pages" "my_source_facebookpages" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_facebook_pages" "my_source_facebookpages" {
Required:
-- `access_token` (String) Facebook Page Access Token
+- `access_token` (String, Sensitive) Facebook Page Access Token
- `page_id` (String) Page ID
-- `source_type` (String) must be one of ["facebook-pages"]
diff --git a/docs/resources/source_faker.md b/docs/resources/source_faker.md
index 5fa327f17..f2466bb13 100644
--- a/docs/resources/source_faker.md
+++ b/docs/resources/source_faker.md
@@ -16,15 +16,15 @@ SourceFaker Resource
resource "airbyte_source_faker" "my_source_faker" {
configuration = {
always_updated = false
- count = 3
- parallelism = 9
- records_per_slice = 5
- seed = 6
- source_type = "faker"
+ count = 9
+ parallelism = 8
+ records_per_slice = 1
+ seed = 5
}
- name = "Delbert Reynolds"
- secret_id = "...my_secret_id..."
- workspace_id = "cfda8d0c-549e-4f03-8049-78a61fa1cf20"
+ definition_id = "33c76bbd-55f5-466b-8ade-0498ec40fd8a"
+ name = "Kirk Braun MD"
+ secret_id = "...my_secret_id..."
+ workspace_id = "05c5e889-977e-4ae0-86e3-c2d33082ab84"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_faker" "my_source_faker" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,16 +50,17 @@ resource "airbyte_source_faker" "my_source_faker" {
### Nested Schema for `configuration`
-Required:
-
-- `count` (Number) How many users should be generated in total. This setting does not apply to the purchases or products stream.
-- `source_type` (String) must be one of ["faker"]
-
Optional:
-- `always_updated` (Boolean) Should the updated_at values for every record be new each sync? Setting this to false will case the source to stop emitting records after COUNT records have been emitted.
-- `parallelism` (Number) How many parallel workers should we use to generate fake data? Choose a value equal to the number of CPUs you will allocate to this source.
-- `records_per_slice` (Number) How many fake records will be in each page (stream slice), before a state message is emitted?
-- `seed` (Number) Manually control the faker random seed to return the same values on subsequent runs (leave -1 for random)
+- `always_updated` (Boolean) Default: true
+Should the updated_at values for every record be new each sync? Setting this to false will cause the source to stop emitting records after COUNT records have been emitted.
+- `count` (Number) Default: 1000
+How many users should be generated in total. This setting does not apply to the purchases or products stream.
+- `parallelism` (Number) Default: 4
+How many parallel workers should we use to generate fake data? Choose a value equal to the number of CPUs you will allocate to this source.
+- `records_per_slice` (Number) Default: 1000
+How many fake records will be in each page (stream slice), before a state message is emitted?
+- `seed` (Number) Default: -1
+Manually control the faker random seed to return the same values on subsequent runs (leave -1 for random)
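The seed semantics above (-1 means "fall back to current time millis", any other value reproduces the same records) can be sketched with the standard library; `make_rng` is our illustrative name, not part of the connector:

```python
import random
import time

def make_rng(seed: int = -1) -> random.Random:
    """seed == -1 (the default) falls back to current time millis, mirroring
    the documented behavior; any other value gives reproducible output."""
    if seed == -1:
        seed = int(time.time() * 1000)
    return random.Random(seed)

# Fixed seeds reproduce the same fake values on every run.
a = make_rng(42).random()
b = make_rng(42).random()
```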
diff --git a/docs/resources/source_fauna.md b/docs/resources/source_fauna.md
index e08ebd353..f43058a55 100644
--- a/docs/resources/source_fauna.md
+++ b/docs/resources/source_fauna.md
@@ -17,21 +17,19 @@ resource "airbyte_source_fauna" "my_source_fauna" {
configuration = {
collection = {
deletions = {
- source_fauna_collection_deletion_mode_disabled = {
- deletion_mode = "ignore"
- }
+ disabled = {}
}
- page_size = 4
+ page_size = 0
}
- domain = "...my_domain..."
- port = 5
- scheme = "...my_scheme..."
- secret = "...my_secret..."
- source_type = "fauna"
+ domain = "...my_domain..."
+ port = 10
+ scheme = "...my_scheme..."
+ secret = "...my_secret..."
}
- name = "Irvin Klein"
- secret_id = "...my_secret_id..."
- workspace_id = "1ffc71dc-a163-4f2a-bc80-a97ff334cddf"
+ definition_id = "56112c1f-da02-410a-9cfb-ec287654f12b"
+ name = "Mr. Willard Gislason"
+ secret_id = "...my_secret_id..."
+ workspace_id = "fbb0cddc-f802-4e3e-a016-5466352da9b0"
}
```
@@ -41,11 +39,12 @@ resource "airbyte_source_fauna" "my_source_fauna" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,15 +57,17 @@ resource "airbyte_source_fauna" "my_source_fauna" {
Required:
-- `domain` (String) Domain of Fauna to query. Defaults db.fauna.com. See the docs.
-- `port` (Number) Endpoint port.
-- `scheme` (String) URL scheme.
- `secret` (String) Fauna secret, used when authenticating with the database.
-- `source_type` (String) must be one of ["fauna"]
Optional:
- `collection` (Attributes) Settings for the Fauna Collection. (see [below for nested schema](#nestedatt--configuration--collection))
+- `domain` (String) Default: "db.fauna.com"
+Domain of Fauna to query. Defaults to db.fauna.com. See the docs.
+- `port` (Number) Default: 443
+Endpoint port.
+- `scheme` (String) Default: "https"
+URL scheme.
### Nested Schema for `configuration.collection`
@@ -77,7 +78,11 @@ Required:
Enabling deletion mode informs your destination of deleted documents.
Disabled - Leave this feature disabled, and ignore deleted documents.
Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions))
-- `page_size` (Number) The page size used when reading documents from the database. The larger the page size, the faster the connector processes documents. However, if a page is too large, the connector may fail.
+
+Optional:
+
+- `page_size` (Number) Default: 64
+The page size used when reading documents from the database. The larger the page size, the faster the connector processes documents. However, if a page is too large, the connector may fail.
Choose your page size based on how large the documents are.
See the docs.
@@ -86,54 +91,25 @@ See This only applies to incremental syncs.
+Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--disabled))
+- `enabled` (Attributes) This only applies to incremental syncs.
Enabling deletion mode informs your destination of deleted documents.
Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--source_fauna_collection_deletion_mode_enabled))
-- `source_fauna_update_collection_deletion_mode_disabled` (Attributes) This only applies to incremental syncs.
-Enabling deletion mode informs your destination of deleted documents.
-Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--source_fauna_update_collection_deletion_mode_disabled))
-- `source_fauna_update_collection_deletion_mode_enabled` (Attributes) This only applies to incremental syncs.
-Enabling deletion mode informs your destination of deleted documents.
-Disabled - Leave this feature disabled, and ignore deleted documents.
-Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--source_fauna_update_collection_deletion_mode_enabled))
-
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_update_collection_deletion_mode_enabled`
-
-Required:
-
-- `deletion_mode` (String) must be one of ["ignore"]
+Enabled - Enables this feature. When a document is deleted, the connector exports a record with a "deleted at" column containing the time that the document was deleted. (see [below for nested schema](#nestedatt--configuration--collection--deletions--enabled))
+
+### Nested Schema for `configuration.collection.deletions.enabled`
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_update_collection_deletion_mode_enabled`
-Required:
-
-- `column` (String) Name of the "deleted at" column.
-- `deletion_mode` (String) must be one of ["deleted_field"]
-
-
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_update_collection_deletion_mode_enabled`
-
-Required:
+
+### Nested Schema for `configuration.collection.deletions.enabled`
-- `deletion_mode` (String) must be one of ["ignore"]
-
-
-
-### Nested Schema for `configuration.collection.deletions.source_fauna_update_collection_deletion_mode_enabled`
-
-Required:
+Optional:
-- `column` (String) Name of the "deleted at" column.
-- `deletion_mode` (String) must be one of ["deleted_field"]
+- `column` (String) Default: "deleted_at"
+Name of the "deleted at" column.
diff --git a/docs/resources/source_file.md b/docs/resources/source_file.md
new file mode 100644
index 000000000..fbc5be7ca
--- /dev/null
+++ b/docs/resources/source_file.md
@@ -0,0 +1,164 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_file Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceFile Resource
+---
+
+# airbyte_source_file (Resource)
+
+SourceFile Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_source_file" "my_source_file" {
+ configuration = {
+ dataset_name = "...my_dataset_name..."
+ format = "jsonl"
+ provider = {
+ az_blob_azure_blob_storage = {
+ sas_token = "...my_sas_token..."
+ shared_key = "...my_shared_key..."
+ storage_account = "...my_storage_account..."
+ }
+ }
+ reader_options = "{\"sep\": \"\t\", \"header\": 0, \"names\": [\"column1\", \"column2\"] }"
+ url = "https://storage.googleapis.com/covid19-open-data/v2/latest/epidemiology.csv"
+ }
+ definition_id = "6c5d5cf5-0fbf-4713-864e-d5bf6d67306c"
+ name = "Floyd Goyette"
+ secret_id = "...my_secret_id..."
+ workspace_id = "68cfaeff-480d-4f14-bee1-0f8279e427b2"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the source e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
+- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
+
+### Read-Only
+
+- `source_id` (String)
+- `source_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `dataset_name` (String) The Name of the final table to replicate this file into (should include letters, numbers, dashes and underscores only).
+- `provider` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider))
+- `url` (String) The URL path to access the file which should be replicated.
+
+Optional:
+
+- `format` (String) must be one of ["csv", "json", "jsonl", "excel", "excel_binary", "feather", "parquet", "yaml"]; Default: "csv"
+The Format of the file which should be replicated (Warning: some formats may be experimental, please refer to the docs).
+- `reader_options` (String) This should be a string in JSON format. It depends on the chosen file format to provide additional options and tune its behavior.
+
+
+### Nested Schema for `configuration.provider`
+
+Optional:
+
+- `az_blob_azure_blob_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--az_blob_azure_blob_storage))
+- `gcs_google_cloud_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--gcs_google_cloud_storage))
+- `https_public_web` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--https_public_web))
+- `s3_amazon_web_services` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--s3_amazon_web_services))
+- `scp_secure_copy_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--scp_secure_copy_protocol))
+- `sftp_secure_file_transfer_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--sftp_secure_file_transfer_protocol))
+- `ssh_secure_shell` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--ssh_secure_shell))
+
+
+### Nested Schema for `configuration.provider.az_blob_azure_blob_storage`
+
+Required:
+
+- `storage_account` (String) The globally unique name of the storage account that the desired blob sits within. See here for more details.
+
+Optional:
+
+- `sas_token` (String, Sensitive) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a SAS (Shared Access Signature) token. If accessing publicly available data, this field is not necessary.
+- `shared_key` (String, Sensitive) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a storage account shared key (aka account key or access key). If accessing publicly available data, this field is not necessary.
+
+
+
+### Nested Schema for `configuration.provider.gcs_google_cloud_storage`
+
+Optional:
+
+- `service_account_json` (String) In order to access private Buckets stored on Google Cloud, this connector would need service account JSON credentials with the proper permissions as described here. Please generate the credentials.json file and copy/paste its content to this field (expecting JSON format). If accessing publicly available data, this field is not necessary.
+
+
+
+### Nested Schema for `configuration.provider.https_public_web`
+
+Optional:
+
+- `user_agent` (Boolean) Default: false
+Add User-Agent to request
+
+
+
+### Nested Schema for `configuration.provider.s3_amazon_web_services`
+
+Optional:
+
+- `aws_access_key_id` (String, Sensitive) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
+- `aws_secret_access_key` (String, Sensitive) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
+
+
+
+### Nested Schema for `configuration.provider.scp_secure_copy_protocol`
+
+Required:
+
+- `host` (String)
+- `user` (String)
+
+Optional:
+
+- `password` (String, Sensitive)
+- `port` (String) Default: "22"
+
+
+
+### Nested Schema for `configuration.provider.sftp_secure_file_transfer_protocol`
+
+Required:
+
+- `host` (String)
+- `user` (String)
+
+Optional:
+
+- `password` (String, Sensitive)
+- `port` (String) Default: "22"
+
+
+
+### Nested Schema for `configuration.provider.ssh_secure_shell`
+
+Required:
+
+- `host` (String)
+- `user` (String)
+
+Optional:
+
+- `password` (String, Sensitive)
+- `port` (String) Default: "22"
+
+
diff --git a/docs/resources/source_file_secure.md b/docs/resources/source_file_secure.md
deleted file mode 100644
index 79d6ae22a..000000000
--- a/docs/resources/source_file_secure.md
+++ /dev/null
@@ -1,283 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_file_secure Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceFileSecure Resource
----
-
-# airbyte_source_file_secure (Resource)
-
-SourceFileSecure Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_source_file_secure" "my_source_filesecure" {
- configuration = {
- dataset_name = "...my_dataset_name..."
- format = "excel_binary"
- provider = {
- source_file_secure_storage_provider_az_blob_azure_blob_storage = {
- sas_token = "...my_sas_token..."
- shared_key = "...my_shared_key..."
- storage = "AzBlob"
- storage_account = "...my_storage_account..."
- }
- }
- reader_options = "{\"sep\": \" \"}"
- source_type = "file-secure"
- url = "gs://my-google-bucket/data.csv"
- }
- name = "Freddie Von V"
- secret_id = "...my_secret_id..."
- workspace_id = "76c6ab21-d29d-4fc9-8d6f-ecd799390066"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `source_id` (String)
-- `source_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `dataset_name` (String) The Name of the final table to replicate this file into (should include letters, numbers dash and underscores only).
-- `format` (String) must be one of ["csv", "json", "jsonl", "excel", "excel_binary", "feather", "parquet", "yaml"]
-The Format of the file which should be replicated (Warning: some formats may be experimental, please refer to the docs).
-- `provider` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider))
-- `source_type` (String) must be one of ["file-secure"]
-- `url` (String) The URL path to access the file which should be replicated.
-
-Optional:
-
-- `reader_options` (String) This should be a string in JSON format. It depends on the chosen file format to provide additional options and tune its behavior.
-
-
-### Nested Schema for `configuration.provider`
-
-Optional:
-
-- `source_file_secure_storage_provider_az_blob_azure_blob_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_az_blob_azure_blob_storage))
-- `source_file_secure_storage_provider_gcs_google_cloud_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_gcs_google_cloud_storage))
-- `source_file_secure_storage_provider_https_public_web` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_https_public_web))
-- `source_file_secure_storage_provider_s3_amazon_web_services` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_s3_amazon_web_services))
-- `source_file_secure_storage_provider_scp_secure_copy_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_scp_secure_copy_protocol))
-- `source_file_secure_storage_provider_sftp_secure_file_transfer_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_sftp_secure_file_transfer_protocol))
-- `source_file_secure_storage_provider_ssh_secure_shell` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_storage_provider_ssh_secure_shell))
-- `source_file_secure_update_storage_provider_az_blob_azure_blob_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_az_blob_azure_blob_storage))
-- `source_file_secure_update_storage_provider_gcs_google_cloud_storage` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_gcs_google_cloud_storage))
-- `source_file_secure_update_storage_provider_https_public_web` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_https_public_web))
-- `source_file_secure_update_storage_provider_s3_amazon_web_services` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_s3_amazon_web_services))
-- `source_file_secure_update_storage_provider_scp_secure_copy_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_scp_secure_copy_protocol))
-- `source_file_secure_update_storage_provider_sftp_secure_file_transfer_protocol` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_sftp_secure_file_transfer_protocol))
-- `source_file_secure_update_storage_provider_ssh_secure_shell` (Attributes) The storage Provider or Location of the file(s) which should be replicated. (see [below for nested schema](#nestedatt--configuration--provider--source_file_secure_update_storage_provider_ssh_secure_shell))
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_az_blob_azure_blob_storage`
-
-Required:
-
-- `storage` (String) must be one of ["AzBlob"]
-- `storage_account` (String) The globally unique name of the storage account that the desired blob sits within. See here for more details.
-
-Optional:
-
-- `sas_token` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a SAS (Shared Access Signature) token. If accessing publicly available data, this field is not necessary.
-- `shared_key` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a storage account shared key (aka account key or access key). If accessing publicly available data, this field is not necessary.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_gcs_google_cloud_storage`
-
-Required:
-
-- `storage` (String) must be one of ["GCS"]
-
-Optional:
-
-- `service_account_json` (String) In order to access private Buckets stored on Google Cloud, this connector would need a service account json credentials with the proper permissions as described here. Please generate the credentials.json file and copy/paste its content to this field (expecting JSON formats). If accessing publicly available data, this field is not necessary.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_https_public_web`
-
-Required:
-
-- `storage` (String) must be one of ["HTTPS"]
-
-Optional:
-
-- `user_agent` (Boolean) Add User-Agent to request
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_s3_amazon_web_services`
-
-Required:
-
-- `storage` (String) must be one of ["S3"]
-
-Optional:
-
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_scp_secure_copy_protocol`
-
-Required:
-
-- `host` (String)
-- `storage` (String) must be one of ["SCP"]
-- `user` (String)
-
-Optional:
-
-- `password` (String)
-- `port` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_sftp_secure_file_transfer_protocol`
-
-Required:
-
-- `host` (String)
-- `storage` (String) must be one of ["SFTP"]
-- `user` (String)
-
-Optional:
-
-- `password` (String)
-- `port` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_storage_provider_ssh_secure_shell`
-
-Required:
-
-- `host` (String)
-- `storage` (String) must be one of ["SSH"]
-- `user` (String)
-
-Optional:
-
-- `password` (String)
-- `port` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_az_blob_azure_blob_storage`
-
-Required:
-
-- `storage` (String) must be one of ["AzBlob"]
-- `storage_account` (String) The globally unique name of the storage account that the desired blob sits within. See here for more details.
-
-Optional:
-
-- `sas_token` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a SAS (Shared Access Signature) token. If accessing publicly available data, this field is not necessary.
-- `shared_key` (String) To access Azure Blob Storage, this connector would need credentials with the proper permissions. One option is a storage account shared key (aka account key or access key). If accessing publicly available data, this field is not necessary.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_gcs_google_cloud_storage`
-
-Required:
-
-- `storage` (String) must be one of ["GCS"]
-
-Optional:
-
-- `service_account_json` (String) In order to access private Buckets stored on Google Cloud, this connector would need a service account json credentials with the proper permissions as described here. Please generate the credentials.json file and copy/paste its content to this field (expecting JSON formats). If accessing publicly available data, this field is not necessary.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_https_public_web`
-
-Required:
-
-- `storage` (String) must be one of ["HTTPS"]
-
-Optional:
-
-- `user_agent` (Boolean) Add User-Agent to request
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_s3_amazon_web_services`
-
-Required:
-
-- `storage` (String) must be one of ["S3"]
-
-Optional:
-
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector would need credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_scp_secure_copy_protocol`
-
-Required:
-
-- `host` (String)
-- `storage` (String) must be one of ["SCP"]
-- `user` (String)
-
-Optional:
-
-- `password` (String)
-- `port` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_sftp_secure_file_transfer_protocol`
-
-Required:
-
-- `host` (String)
-- `storage` (String) must be one of ["SFTP"]
-- `user` (String)
-
-Optional:
-
-- `password` (String)
-- `port` (String)
-
-
-
-### Nested Schema for `configuration.provider.source_file_secure_update_storage_provider_ssh_secure_shell`
-
-Required:
-
-- `host` (String)
-- `storage` (String) must be one of ["SSH"]
-- `user` (String)
-
-Optional:
-
-- `password` (String)
-- `port` (String)
-
-
diff --git a/docs/resources/source_firebolt.md b/docs/resources/source_firebolt.md
index 6d39095c2..3e34c8954 100644
--- a/docs/resources/source_firebolt.md
+++ b/docs/resources/source_firebolt.md
@@ -15,17 +15,17 @@ SourceFirebolt Resource
```terraform
resource "airbyte_source_firebolt" "my_source_firebolt" {
configuration = {
- account = "...my_account..."
- database = "...my_database..."
- engine = "...my_engine..."
- host = "api.app.firebolt.io"
- password = "...my_password..."
- source_type = "firebolt"
- username = "username@email.com"
+ account = "...my_account..."
+ database = "...my_database..."
+ engine = "...my_engine..."
+ host = "api.app.firebolt.io"
+ password = "...my_password..."
+ username = "username@email.com"
}
- name = "Donna Abshire"
- secret_id = "...my_secret_id..."
- workspace_id = "5338cec0-86fa-421e-9152-cb3119167b8e"
+ definition_id = "e1d4b428-b10c-462a-aeab-6a16bc0f1be5"
+ name = "Laurie Kuhlman"
+ secret_id = "...my_secret_id..."
+ workspace_id = "7324c6ca-7fcd-4ac6-b878-54b69c42e8b9"
}
```
@@ -35,11 +35,12 @@ resource "airbyte_source_firebolt" "my_source_firebolt" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,8 +54,7 @@ resource "airbyte_source_firebolt" "my_source_firebolt" {
Required:
- `database` (String) The database to connect to.
-- `password` (String) Firebolt password.
-- `source_type` (String) must be one of ["firebolt"]
+- `password` (String, Sensitive) Firebolt password.
- `username` (String) Firebolt email address you use to login.
Optional:
diff --git a/docs/resources/source_freshcaller.md b/docs/resources/source_freshcaller.md
index 5201870a0..63b1cc885 100644
--- a/docs/resources/source_freshcaller.md
+++ b/docs/resources/source_freshcaller.md
@@ -17,14 +17,14 @@ resource "airbyte_source_freshcaller" "my_source_freshcaller" {
configuration = {
api_key = "...my_api_key..."
domain = "snaptravel"
- requests_per_minute = 2
- source_type = "freshcaller"
+ requests_per_minute = 7
start_date = "2022-01-01T12:00:00Z"
- sync_lag_minutes = 9
+ sync_lag_minutes = 2
}
- name = "Kenneth Friesen IV"
- secret_id = "...my_secret_id..."
- workspace_id = "d6d364ff-d455-4906-9126-3d48e935c2c9"
+ definition_id = "c06fe5a2-e94e-4ff2-91ad-fc721dd1f802"
+ name = "Margarita Nitzsche"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9660c93e-b114-448c-9cd3-afe5ef85381e"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_freshcaller" "my_source_freshcaller" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,14 +52,13 @@ resource "airbyte_source_freshcaller" "my_source_freshcaller" {
Required:
-- `api_key` (String) Freshcaller API Key. See the docs for more information on how to obtain this key.
+- `api_key` (String, Sensitive) Freshcaller API Key. See the docs for more information on how to obtain this key.
- `domain` (String) Used to construct Base URL for the Freshcaller APIs
-- `source_type` (String) must be one of ["freshcaller"]
-- `start_date` (String) UTC date and time. Any data created after this date will be replicated.
Optional:
- `requests_per_minute` (Number) The number of requests per minute that this source is allowed to use. There is a rate limit of 50 requests per minute per app per account.
+- `start_date` (String) UTC date and time. Any data created after this date will be replicated.
- `sync_lag_minutes` (Number) Lag in minutes for each sync, i.e., at time T, data for the time range [prev_sync_time, T-30] will be fetched
diff --git a/docs/resources/source_freshdesk.md b/docs/resources/source_freshdesk.md
index ac7a22698..34885805c 100644
--- a/docs/resources/source_freshdesk.md
+++ b/docs/resources/source_freshdesk.md
@@ -17,13 +17,13 @@ resource "airbyte_source_freshdesk" "my_source_freshdesk" {
configuration = {
api_key = "...my_api_key..."
domain = "myaccount.freshdesk.com"
- requests_per_minute = 10
- source_type = "freshdesk"
+ requests_per_minute = 1
start_date = "2020-12-01T00:00:00Z"
}
- name = "Dale Altenwerth"
- secret_id = "...my_secret_id..."
- workspace_id = "3e43202d-7216-4576-9066-41870d9d21f9"
+ definition_id = "9fe1bd22-2412-41e6-b15b-e306a4e83994"
+ name = "Frances Farrell"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c75d4c70-b588-42c8-81a0-878bfdf7e2fa"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_freshdesk" "my_source_freshdesk" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,9 +51,8 @@ resource "airbyte_source_freshdesk" "my_source_freshdesk" {
Required:
-- `api_key` (String) Freshdesk API Key. See the docs for more information on how to obtain this key.
+- `api_key` (String, Sensitive) Freshdesk API Key. See the docs for more information on how to obtain this key.
- `domain` (String) Freshdesk domain
-- `source_type` (String) must be one of ["freshdesk"]
Optional:
diff --git a/docs/resources/source_freshsales.md b/docs/resources/source_freshsales.md
index 01138f18f..1a533607c 100644
--- a/docs/resources/source_freshsales.md
+++ b/docs/resources/source_freshsales.md
@@ -17,11 +17,11 @@ resource "airbyte_source_freshsales" "my_source_freshsales" {
configuration = {
api_key = "...my_api_key..."
domain_name = "mydomain.myfreshworks.com"
- source_type = "freshsales"
}
- name = "Gustavo Adams DDS"
- secret_id = "...my_secret_id..."
- workspace_id = "4ecc11a0-8364-4290-a8b8-502a55e7f73b"
+ definition_id = "4a63623e-34bb-4a48-ad6d-0eaf7f54c7c3"
+ name = "Shelly Wolf"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b0a3dd00-07da-4ef7-b0c8-1f95c5b8dd2d"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_freshsales" "my_source_freshsales" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_freshsales" "my_source_freshsales" {
Required:
-- `api_key` (String) Freshsales API Key. See here. The key is case sensitive.
+- `api_key` (String, Sensitive) Freshsales API Key. See here. The key is case sensitive.
- `domain_name` (String) The Name of your Freshsales domain
-- `source_type` (String) must be one of ["freshsales"]
diff --git a/docs/resources/source_gainsight_px.md b/docs/resources/source_gainsight_px.md
index b61fefa57..b50e81e34 100644
--- a/docs/resources/source_gainsight_px.md
+++ b/docs/resources/source_gainsight_px.md
@@ -15,12 +15,12 @@ SourceGainsightPx Resource
```terraform
resource "airbyte_source_gainsight_px" "my_source_gainsightpx" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "gainsight-px"
+ api_key = "...my_api_key..."
}
- name = "Hugh Goodwin"
- secret_id = "...my_secret_id..."
- workspace_id = "320a319f-4bad-4f94-bc9a-867bc4242666"
+ definition_id = "32b37f6f-ec5c-4d0a-8fda-52f69543b862"
+ name = "Cristina McKenzie"
+ secret_id = "...my_secret_id..."
+ workspace_id = "50480aaa-f77a-4e08-bd2c-af83f045910a"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_gainsight_px" "my_source_gainsightpx" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_gainsight_px" "my_source_gainsightpx" {
Required:
-- `api_key` (String) The Aptrinsic API Key which is recieved from the dashboard settings (ref - https://app.aptrinsic.com/settings/api-keys)
-- `source_type` (String) must be one of ["gainsight-px"]
+- `api_key` (String, Sensitive) The Aptrinsic API Key which is received from the dashboard settings (ref - https://app.aptrinsic.com/settings/api-keys)
diff --git a/docs/resources/source_gcs.md b/docs/resources/source_gcs.md
index 36995e4c2..e5add9054 100644
--- a/docs/resources/source_gcs.md
+++ b/docs/resources/source_gcs.md
@@ -15,14 +15,53 @@ SourceGcs Resource
```terraform
resource "airbyte_source_gcs" "my_source_gcs" {
configuration = {
- gcs_bucket = "...my_gcs_bucket..."
- gcs_path = "...my_gcs_path..."
- service_account = "{ \"type\": \"service_account\", \"project_id\": YOUR_PROJECT_ID, \"private_key_id\": YOUR_PRIVATE_KEY, ... }"
- source_type = "gcs"
+ bucket = "...my_bucket..."
+ service_account = "...my_service_account..."
+ start_date = "2021-01-01T00:00:00.000000Z"
+ streams = [
+ {
+ days_to_sync_if_history_is_full = 3
+ format = {
+ source_gcs_csv_format = {
+ delimiter = "...my_delimiter..."
+ double_quote = false
+ encoding = "...my_encoding..."
+ escape_char = "...my_escape_char..."
+ false_values = [
+ "...",
+ ]
+ header_definition = {
+ source_gcs_autogenerated = {}
+ }
+ inference_type = "None"
+ null_values = [
+ "...",
+ ]
+ quote_char = "...my_quote_char..."
+ skip_rows_after_header = 3
+ skip_rows_before_header = 5
+ strings_can_be_null = false
+ true_values = [
+ "...",
+ ]
+ }
+ }
+ globs = [
+ "...",
+ ]
+ input_schema = "...my_input_schema..."
+ legacy_prefix = "...my_legacy_prefix..."
+ name = "Guy Langosh III"
+ primary_key = "...my_primary_key..."
+ schemaless = false
+ validation_policy = "Wait for Discover"
+ },
+ ]
}
- name = "Olga Blanda"
- secret_id = "...my_secret_id..."
- workspace_id = "dca8ef51-fcb4-4c59-bec1-2cdaad0ec7af"
+ definition_id = "a4e6d7c2-fcaa-4386-9a1d-2ddf0351c49c"
+ name = "Leah Jerde Jr."
+ secret_id = "...my_secret_id..."
+ workspace_id = "51741425-e4d3-48a3-8ea5-6cdfa27fbf62"
}
```
@@ -31,12 +70,15 @@ resource "airbyte_source_gcs" "my_source_gcs" {
### Required
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `configuration` (Attributes) NOTE: When this Spec is changed, legacy_config_transformer.py must also be
+modified to uptake the changes because it is responsible for converting
+legacy GCS configs into file-based configs using the File-Based CDK. (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,9 +91,91 @@ resource "airbyte_source_gcs" "my_source_gcs" {
Required:
-- `gcs_bucket` (String) GCS bucket name
-- `gcs_path` (String) GCS path to data
+- `bucket` (String) Name of the GCS bucket where the file(s) exist.
- `service_account` (String) Enter your Google Cloud service account key in JSON format
-- `source_type` (String) must be one of ["gcs"]
+- `streams` (Attributes List) Each instance of this configuration defines a stream. Use this to define which files belong in the stream, their format, and how they should be parsed and validated. When sending data to warehouse destination such as Snowflake or BigQuery, each stream is a separate table. (see [below for nested schema](#nestedatt--configuration--streams))
+
+Optional:
+
+- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00.000000Z. Any file modified before this date will not be replicated.
+
+
+### Nested Schema for `configuration.streams`
+
+Required:
+
+- `format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format))
+- `name` (String) The name of the stream.
+
+Optional:
+
+- `days_to_sync_if_history_is_full` (Number) Default: 3
+When the state history of the file store is full, syncs will only read files that were last modified in the provided day range.
+- `globs` (List of String) The pattern used to specify which files should be selected from the file system. For more information on glob pattern matching look here.
+- `input_schema` (String) The schema that will be used to validate records extracted from the file. This will override the stream schema that is auto-detected from incoming files.
+- `legacy_prefix` (String) The path prefix configured in previous versions of the GCS connector. This option is deprecated in favor of a single glob.
+- `primary_key` (String, Sensitive) The column or columns (for a composite key) that serves as the unique identifier of a record.
+- `schemaless` (Boolean) Default: false
+When enabled, syncs will not validate or structure records against the stream's schema.
+- `validation_policy` (String) must be one of ["Emit Record", "Skip Record", "Wait for Discover"]; Default: "Emit Record"
+The name of the validation policy that dictates sync behavior when a record does not adhere to the stream schema.
+
+
+### Nested Schema for `configuration.streams.format`
+
+Optional:
+
+- `csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format))
+
+
+### Nested Schema for `configuration.streams.format.csv_format`
+
+Optional:
+
+- `delimiter` (String) Default: ","
+The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
+- `double_quote` (Boolean) Default: true
+Whether two quotes in a quoted CSV value denote a single quote in the data.
+- `encoding` (String) Default: "utf8"
+The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
+- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
+- `false_values` (List of String) A set of case-sensitive strings that should be interpreted as false values.
+- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition))
+- `inference_type` (String) must be one of ["None", "Primitive Types Only"]; Default: "None"
+How to infer the types of the columns. If none, inference defaults to strings.
+- `null_values` (List of String) A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
+- `quote_char` (String) Default: "\""
+The character used for quoting CSV values. To disallow quoting, leave this field blank.
+- `skip_rows_after_header` (Number) Default: 0
+The number of rows to skip after the header row.
+- `skip_rows_before_header` (Number) Default: 0
+The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
+- `strings_can_be_null` (Boolean) Default: true
+Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
+- `true_values` (List of String) A set of case-sensitive strings that should be interpreted as true values.
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition`
+
+Optional:
+
+- `autogenerated` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--autogenerated))
+- `from_csv` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--from_csv))
+- `user_provided` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--user_provided))
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.autogenerated`
+
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.from_csv`
+
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.user_provided`
+
+Required:
+
+- `column_names` (List of String) The column names that will be used while emitting the CSV records
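
> Editor's note: as an illustration of the `user_provided` header option documented above, the following is a minimal sketch of a GCS CSV stream that supplies its own column names. The bucket, stream name, globs, and column names are placeholders, and the nested attribute keys (`csv_format`, `user_provided`) follow the nested-schema sections above rather than the generated example at the top of the file.

```terraform
# Hypothetical sketch: a headerless CSV stream in GCS whose column names
# are supplied explicitly via the user_provided header definition
# (column_names is the only required attribute there).
resource "airbyte_source_gcs" "headerless_csv_example" {
  configuration = {
    bucket          = "my-example-bucket" # placeholder
    service_account = "...my_service_account..."
    streams = [
      {
        name  = "orders" # placeholder stream name
        globs = ["orders/*.csv"]
        format = {
          csv_format = {
            header_definition = {
              user_provided = {
                column_names = ["order_id", "amount", "created_at"]
              }
            }
          }
        }
      },
    ]
  }
  name         = "gcs-headerless-csv"
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder
}
```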
diff --git a/docs/resources/source_getlago.md b/docs/resources/source_getlago.md
index ffb27eb2c..d7be0dee3 100644
--- a/docs/resources/source_getlago.md
+++ b/docs/resources/source_getlago.md
@@ -15,12 +15,13 @@ SourceGetlago Resource
```terraform
resource "airbyte_source_getlago" "my_source_getlago" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "getlago"
+ api_key = "...my_api_key..."
+ api_url = "...my_api_url..."
}
- name = "Irving Rohan"
- secret_id = "...my_secret_id..."
- workspace_id = "0df448a4-7f93-490c-9888-0983dabf9ef3"
+ definition_id = "25b4bae6-1112-4211-be87-b490ecc6bf75"
+ name = "Mrs. Willie Bins"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c803c831-1a97-4a1a-a894-9629432a02ce"
}
```
@@ -30,11 +31,12 @@ resource "airbyte_source_getlago" "my_source_getlago" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +49,11 @@ resource "airbyte_source_getlago" "my_source_getlago" {
Required:
-- `api_key` (String) Your API Key. See here.
-- `source_type` (String) must be one of ["getlago"]
+- `api_key` (String, Sensitive) Your API Key. See here.
+
+Optional:
+
+- `api_url` (String) Default: "https://api.getlago.com/api/v1"
+Your Lago API URL
diff --git a/docs/resources/source_github.md b/docs/resources/source_github.md
index fe3191ddf..4272f0335 100644
--- a/docs/resources/source_github.md
+++ b/docs/resources/source_github.md
@@ -15,23 +15,29 @@ SourceGithub Resource
```terraform
resource "airbyte_source_github" "my_source_github" {
configuration = {
- branch = "airbytehq/airbyte/master airbytehq/airbyte/my-branch"
+ api_url = "https://github.company.org"
+ branch = "airbytehq/airbyte/master airbytehq/airbyte/my-branch"
+ branches = [
+ "...",
+ ]
credentials = {
- source_github_authentication_o_auth = {
+ o_auth = {
access_token = "...my_access_token..."
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- option_title = "OAuth Credentials"
}
}
+ repositories = [
+ "...",
+ ]
repository = "airbytehq/airbyte"
requests_per_hour = 10
- source_type = "github"
start_date = "2021-03-01T00:00:00Z"
}
- name = "Van Kuhlman IV"
- secret_id = "...my_secret_id..."
- workspace_id = "9af4d357-24cd-4b0f-8d28-1187d56844ed"
+ definition_id = "e017f905-2f20-440e-8692-82dd6a12cb01"
+ name = "Bennie Stroman"
+ secret_id = "...my_secret_id..."
+ workspace_id = "aeeda058-2852-4791-bedf-cf9c9058e69d"
}
```
@@ -41,11 +47,12 @@ resource "airbyte_source_github" "my_source_github" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,75 +65,45 @@ resource "airbyte_source_github" "my_source_github" {
Required:
-- `repository` (String) Space-delimited list of GitHub organizations/repositories, e.g. `airbytehq/airbyte` for single repository, `airbytehq/*` for get all repositories from organization and `airbytehq/airbyte airbytehq/another-repo` for multiple repositories.
-- `source_type` (String) must be one of ["github"]
-- `start_date` (String) The date from which you'd like to replicate data from GitHub in the format YYYY-MM-DDT00:00:00Z. For the streams which support this configuration, only data generated on or after the start date will be replicated. This field doesn't apply to all streams, see the docs for more info
+- `credentials` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials))
+- `repositories` (List of String) List of GitHub organizations/repositories, e.g. `airbytehq/airbyte` for a single repository, `airbytehq/*` to get all repositories from the organization, and `airbytehq/airbyte airbytehq/another-repo` for multiple repositories.
Optional:
-- `branch` (String) Space-delimited list of GitHub repository branches to pull commits for, e.g. `airbytehq/airbyte/master`. If no branches are specified for a repository, the default branch will be pulled.
-- `credentials` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials))
+- `api_url` (String) Default: "https://api.github.com/"
+Please enter your basic URL from your self-hosted GitHub instance or leave it empty to use GitHub.
+- `branch` (String) (DEPRECATED) Space-delimited list of GitHub repository branches to pull commits for, e.g. `airbytehq/airbyte/master`. If no branches are specified for a repository, the default branch will be pulled.
+- `branches` (List of String) List of GitHub repository branches to pull commits for, e.g. `airbytehq/airbyte/master`. If no branches are specified for a repository, the default branch will be pulled.
+- `repository` (String) (DEPRECATED) Space-delimited list of GitHub organizations/repositories, e.g. `airbytehq/airbyte` for a single repository, `airbytehq/*` to get all repositories from the organization, and `airbytehq/airbyte airbytehq/another-repo` for multiple repositories.
- `requests_per_hour` (Number) The GitHub API allows for a maximum of 5000 requests per hour (15000 for Github Enterprise). You can specify a lower value to limit your use of the API quota.
+- `start_date` (String) The date from which you'd like to replicate data from GitHub in the format YYYY-MM-DDT00:00:00Z. If the date is not set, all data will be replicated. For the streams which support this configuration, only data generated on or after the start date will be replicated. This field doesn't apply to all streams, see the docs for more info
### Nested Schema for `configuration.credentials`
Optional:
-- `source_github_authentication_o_auth` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_authentication_o_auth))
-- `source_github_authentication_personal_access_token` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_authentication_personal_access_token))
-- `source_github_update_authentication_o_auth` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_update_authentication_o_auth))
-- `source_github_update_authentication_personal_access_token` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--source_github_update_authentication_personal_access_token))
+- `o_auth` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--o_auth))
+- `personal_access_token` (Attributes) Choose how to authenticate to GitHub (see [below for nested schema](#nestedatt--configuration--credentials--personal_access_token))
-
-### Nested Schema for `configuration.credentials.source_github_authentication_o_auth`
+
+### Nested Schema for `configuration.credentials.o_auth`
Required:
-- `access_token` (String) OAuth access token
+- `access_token` (String, Sensitive) OAuth access token
Optional:
- `client_id` (String) OAuth Client Id
- `client_secret` (String) OAuth Client secret
-- `option_title` (String) must be one of ["OAuth Credentials"]
-
-
-
-### Nested Schema for `configuration.credentials.source_github_authentication_personal_access_token`
-
-Required:
-- `personal_access_token` (String) Log into GitHub and then generate a personal access token. To load balance your API quota consumption across multiple API tokens, input multiple tokens separated with ","
-
-Optional:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
-
-
-### Nested Schema for `configuration.credentials.source_github_update_authentication_o_auth`
+
+### Nested Schema for `configuration.credentials.personal_access_token`
Required:
-- `access_token` (String) OAuth access token
-
-Optional:
-
-- `client_id` (String) OAuth Client Id
-- `client_secret` (String) OAuth Client secret
-- `option_title` (String) must be one of ["OAuth Credentials"]
-
-
-
-### Nested Schema for `configuration.credentials.source_github_update_authentication_personal_access_token`
-
-Required:
-
-- `personal_access_token` (String) Log into GitHub and then generate a personal access token. To load balance your API quota consumption across multiple API tokens, input multiple tokens separated with ","
-
-Optional:
-
-- `option_title` (String) must be one of ["PAT Credentials"]
+- `personal_access_token` (String, Sensitive) Log into GitHub and then generate a personal access token. To load balance your API quota consumption across multiple API tokens, input multiple tokens separated with ","
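
> Editor's note: the example at the top of this file authenticates via `o_auth`; the sketch below shows the alternative `personal_access_token` credentials branch documented above. The token, repository, and IDs are placeholders.

```terraform
# Hypothetical sketch: authenticating source_github with a personal access
# token instead of OAuth. Multiple tokens may be comma-separated to load
# balance API quota, per the schema description above.
resource "airbyte_source_github" "pat_example" {
  configuration = {
    credentials = {
      personal_access_token = {
        personal_access_token = "...my_pat..." # placeholder
      }
    }
    repositories = ["airbytehq/airbyte"]
    start_date   = "2021-03-01T00:00:00Z"
  }
  name         = "github-pat"
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder
}
```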
diff --git a/docs/resources/source_gitlab.md b/docs/resources/source_gitlab.md
index 169076e81..60cbba234 100644
--- a/docs/resources/source_gitlab.md
+++ b/docs/resources/source_gitlab.md
@@ -15,25 +15,30 @@ SourceGitlab Resource
```terraform
resource "airbyte_source_gitlab" "my_source_gitlab" {
configuration = {
- api_url = "https://gitlab.company.org"
+ api_url = "gitlab.com"
credentials = {
- source_gitlab_authorization_method_o_auth2_0 = {
+ source_gitlab_o_auth2_0 = {
access_token = "...my_access_token..."
- auth_type = "oauth2.0"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
- token_expiry_date = "2021-06-26T03:36:42.239Z"
+ token_expiry_date = "2022-01-24T13:56:19.954Z"
}
}
- groups = "airbyte.io"
- projects = "airbyte.io/documentation"
- source_type = "gitlab"
- start_date = "2021-03-01T00:00:00Z"
+ groups = "airbyte.io"
+ groups_list = [
+ "...",
+ ]
+ projects = "airbyte.io/documentation"
+ projects_list = [
+ "...",
+ ]
+ start_date = "2021-03-01T00:00:00Z"
}
- name = "Frank Keeling"
- secret_id = "...my_secret_id..."
- workspace_id = "628bdfc2-032b-46c8-b992-3b7e13584f7a"
+ definition_id = "e4cb55c6-95e2-4f08-ab76-e351cef20de4"
+ name = "Winston Schroeder"
+ secret_id = "...my_secret_id..."
+ workspace_id = "2b42c84c-d8bc-4607-ae71-4fbf0cfd3aed"
}
```
@@ -43,11 +48,12 @@ resource "airbyte_source_gitlab" "my_source_gitlab" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -61,78 +67,42 @@ resource "airbyte_source_gitlab" "my_source_gitlab" {
Required:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["gitlab"]
-- `start_date` (String) The date from which you'd like to replicate data for GitLab API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
Optional:
-- `api_url` (String) Please enter your basic URL from GitLab instance.
-- `groups` (String) Space-delimited list of groups. e.g. airbyte.io.
-- `projects` (String) Space-delimited list of projects. e.g. airbyte.io/documentation meltano/tap-gitlab.
+- `api_url` (String) Default: "gitlab.com"
+Please enter your basic URL from your GitLab instance.
+- `groups` (String) [DEPRECATED] Space-delimited list of groups. e.g. airbyte.io.
+- `groups_list` (List of String) List of groups. e.g. airbyte.io.
+- `projects` (String) [DEPRECATED] Space-delimited list of projects. e.g. airbyte.io/documentation meltano/tap-gitlab.
+- `projects_list` (List of String) List of projects. e.g. airbyte.io/documentation meltano/tap-gitlab.
+- `start_date` (String) The date from which you'd like to replicate data for GitLab API, in the format YYYY-MM-DDT00:00:00Z. If not set, all data will be replicated. All data generated after this date will be replicated.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_gitlab_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_authorization_method_o_auth2_0))
-- `source_gitlab_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_authorization_method_private_token))
-- `source_gitlab_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_update_authorization_method_o_auth2_0))
-- `source_gitlab_update_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_gitlab_update_authorization_method_private_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
+- `private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--private_token))
-
-### Nested Schema for `configuration.credentials.source_gitlab_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Access Token for making authenticated requests.
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
- `client_id` (String) The API ID of the Gitlab developer application.
- `client_secret` (String) The API Secret the Gitlab developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
+- `refresh_token` (String, Sensitive) The key to refresh the expired access_token.
+- `token_expiry_date` (String, Sensitive) The date-time when the access token should be refreshed.
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_gitlab_authorization_method_private_token`
-
-Required:
-
-- `access_token` (String) Log into your Gitlab account and then generate a personal Access Token.
-
-Optional:
-
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_gitlab_update_authorization_method_o_auth2_0`
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `client_id` (String) The API ID of the Gitlab developer application.
-- `client_secret` (String) The API Secret the Gitlab developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_gitlab_update_authorization_method_private_token`
+
+### Nested Schema for `configuration.credentials.private_token`
Required:
-- `access_token` (String) Log into your Gitlab account and then generate a personal Access Token.
-
-Optional:
-
-- `auth_type` (String) must be one of ["access_token"]
+- `access_token` (String, Sensitive) Log into your Gitlab account and then generate a personal Access Token.
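
> Editor's note: the example at the top of this file uses the OAuth 2.0 credentials branch; the sketch below shows the `private_token` alternative documented above. The token, project list, and IDs are placeholders.

```terraform
# Hypothetical sketch: source_gitlab authenticated with a personal access
# token via the private_token credentials branch (access_token is its only
# required attribute).
resource "airbyte_source_gitlab" "private_token_example" {
  configuration = {
    credentials = {
      private_token = {
        access_token = "...my_personal_access_token..." # placeholder
      }
    }
    projects_list = ["airbyte.io/documentation"]
    start_date    = "2021-03-01T00:00:00Z"
  }
  name         = "gitlab-private-token"
  workspace_id = "00000000-0000-0000-0000-000000000000" # placeholder
}
```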
diff --git a/docs/resources/source_glassfrog.md b/docs/resources/source_glassfrog.md
index 9d864489f..6b2e89ce5 100644
--- a/docs/resources/source_glassfrog.md
+++ b/docs/resources/source_glassfrog.md
@@ -15,12 +15,12 @@ SourceGlassfrog Resource
```terraform
resource "airbyte_source_glassfrog" "my_source_glassfrog" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "glassfrog"
+ api_key = "...my_api_key..."
}
- name = "Carl Davis"
- secret_id = "...my_secret_id..."
- workspace_id = "891f82ce-1157-4172-b053-77dcfa89df97"
+ definition_id = "54ef24d0-de80-4e3d-b905-02015d2de4b8"
+ name = "Jonathon Erdman"
+ secret_id = "...my_secret_id..."
+ workspace_id = "2b3a27b0-b342-4a10-bbc4-7ca706139037"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_glassfrog" "my_source_glassfrog" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_glassfrog" "my_source_glassfrog" {
Required:
-- `api_key` (String) API key provided by Glassfrog
-- `source_type` (String) must be one of ["glassfrog"]
+- `api_key` (String, Sensitive) API key provided by Glassfrog
diff --git a/docs/resources/source_gnews.md b/docs/resources/source_gnews.md
index 43bff1408..e9dbdfa0b 100644
--- a/docs/resources/source_gnews.md
+++ b/docs/resources/source_gnews.md
@@ -16,25 +16,25 @@ SourceGnews Resource
resource "airbyte_source_gnews" "my_source_gnews" {
configuration = {
api_key = "...my_api_key..."
- country = "ie"
+ country = "es"
end_date = "2022-08-21 16:27:09"
in = [
- "content",
+ "description",
]
- language = "fr"
+ language = "ta"
nullable = [
- "description",
+ "content",
]
- query = "Apple AND NOT iPhone"
- sortby = "publishedAt"
- source_type = "gnews"
+ query = "Intel AND (i7 OR i9)"
+ sortby = "relevance"
start_date = "2022-08-21 16:27:09"
top_headlines_query = "Apple AND NOT iPhone"
- top_headlines_topic = "business"
+ top_headlines_topic = "world"
}
- name = "Katrina Considine"
- secret_id = "...my_secret_id..."
- workspace_id = "c3ddc5f1-11de-4a10-a6d5-41a4d190feb2"
+ definition_id = "df3c14a3-49fd-4e89-ab27-6cbad00caee1"
+ name = "Sadie Gleichner"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5b57e54a-27b6-417a-812e-6bf68e1922df"
}
```
@@ -44,11 +44,12 @@ resource "airbyte_source_gnews" "my_source_gnews" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -61,7 +62,7 @@ resource "airbyte_source_gnews" "my_source_gnews" {
Required:
-- `api_key` (String) API Key
+- `api_key` (String, Sensitive) API Key
- `query` (String) This parameter allows you to specify your search keywords to find the news articles you are looking for. The keywords will be used to return the most relevant articles. It is possible to use logical operators with keywords. - Phrase Search Operator: This operator allows you to make an exact search. Keywords surrounded by
quotation marks are used to search for articles with the exact same keyword sequence.
For example the query: "Apple iPhone" will return articles matching at least once this sequence of keywords.
@@ -76,7 +77,6 @@ Required:
specified keywords. To use it, you need to add NOT in front of each word or phrase surrounded by quotes.
For example the query: Apple NOT iPhone will return all articles matching the keyword Apple but not the keyword
iPhone
-- `source_type` (String) must be one of ["gnews"]
Optional:
diff --git a/docs/resources/source_google_ads.md b/docs/resources/source_google_ads.md
index 0d20ad3dc..fd66b76e0 100644
--- a/docs/resources/source_google_ads.md
+++ b/docs/resources/source_google_ads.md
@@ -32,12 +32,12 @@ resource "airbyte_source_google_ads" "my_source_googleads" {
customer_id = "6783948572,5839201945"
end_date = "2017-01-30"
login_customer_id = "7349206847"
- source_type = "google-ads"
start_date = "2017-01-25"
}
- name = "Dr. Forrest Roob"
- secret_id = "...my_secret_id..."
- workspace_id = "bddb4847-08fb-44e3-91e6-bc158c4c4e54"
+ definition_id = "14313a52-3140-431f-97b8-2b3c164c1950"
+ name = "Dr. Matt Feeney"
+ secret_id = "...my_secret_id..."
+ workspace_id = "ecd9b5a7-5a7c-45fc-a1d7-22b310b676fb"
}
```
@@ -47,11 +47,12 @@ resource "airbyte_source_google_ads" "my_source_googleads" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -66,11 +67,11 @@ Required:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
- `customer_id` (String) Comma-separated list of (client) customer IDs. Each customer ID must be specified as a 10-digit number without dashes. For detailed instructions on finding this value, refer to our documentation.
-- `source_type` (String) must be one of ["google-ads"]
Optional:
-- `conversion_window_days` (Number) A conversion window is the number of days after an ad interaction (such as an ad click or video view) during which a conversion, such as a purchase, is recorded in Google Ads. For more information, see Google's documentation.
+- `conversion_window_days` (Number) Default: 14
+A conversion window is the number of days after an ad interaction (such as an ad click or video view) during which a conversion, such as a purchase, is recorded in Google Ads. For more information, see Google's documentation.
- `custom_queries` (Attributes List) (see [below for nested schema](#nestedatt--configuration--custom_queries))
- `end_date` (String) UTC date in the format YYYY-MM-DD. Any data after this date will not be replicated. (Default value of today is used if not set)
- `login_customer_id` (String) If your access to the customer account is through a manager account, this field is required, and must be set to the 10-digit customer ID of the manager account. For more information about this field, refer to Google's documentation.
@@ -83,12 +84,12 @@ Required:
- `client_id` (String) The Client ID of your Google Ads developer application. For detailed instructions on finding this value, refer to our documentation.
- `client_secret` (String) The Client Secret of your Google Ads developer application. For detailed instructions on finding this value, refer to our documentation.
-- `developer_token` (String) The Developer Token granted by Google to use their APIs. For detailed instructions on finding this value, refer to our documentation.
-- `refresh_token` (String) The token used to obtain a new Access Token. For detailed instructions on finding this value, refer to our documentation.
+- `developer_token` (String, Sensitive) The Developer Token granted by Google to use their APIs. For detailed instructions on finding this value, refer to our documentation.
+- `refresh_token` (String, Sensitive) The token used to obtain a new Access Token. For detailed instructions on finding this value, refer to our documentation.
Optional:
-- `access_token` (String) The Access Token for making authenticated requests. For detailed instructions on finding this value, refer to our documentation.
+- `access_token` (String, Sensitive) The Access Token for making authenticated requests. For detailed instructions on finding this value, refer to our documentation.
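
Because `developer_token`, `refresh_token`, and `access_token` are now marked Sensitive, Terraform redacts them from plan output; supplying them through a sensitive variable keeps them out of the configuration file as well. A sketch with placeholder values (the variable name is illustrative):

```terraform
variable "google_ads_refresh_token" {
  type      = string
  sensitive = true
}

resource "airbyte_source_google_ads" "example" {
  name         = "google-ads"
  workspace_id = "...my_workspace_id..."
  configuration = {
    customer_id = "6783948572"
    credentials = {
      client_id       = "...my_client_id..."
      client_secret   = "...my_client_secret..."
      developer_token = "...my_developer_token..."
      refresh_token   = var.google_ads_refresh_token
    }
  }
}
```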
diff --git a/docs/resources/source_google_analytics_data_api.md b/docs/resources/source_google_analytics_data_api.md
index cbd825a38..395884245 100644
--- a/docs/resources/source_google_analytics_data_api.md
+++ b/docs/resources/source_google_analytics_data_api.md
@@ -16,23 +16,80 @@ SourceGoogleAnalyticsDataAPI Resource
resource "airbyte_source_google_analytics_data_api" "my_source_googleanalyticsdataapi" {
configuration = {
credentials = {
- source_google_analytics_data_api_credentials_authenticate_via_google_oauth_ = {
+ authenticate_via_google_oauth = {
access_token = "...my_access_token..."
- auth_type = "Client"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
}
}
- custom_reports = "...my_custom_reports..."
+ custom_reports_array = [
+ {
+ dimension_filter = {
+ and_group = {
+ expressions = [
+ {
+ field_name = "...my_field_name..."
+ filter = {
+ source_google_analytics_data_api_update_schemas_custom_reports_array_between_filter = {
+ from_value = {
+ source_google_analytics_data_api_schemas_custom_reports_array_dimension_filter_dimensions_filter_1_expressions_double_value = {
+ value = 45.05
+ }
+ }
+ to_value = {
+ source_google_analytics_data_api_schemas_custom_reports_array_dimension_filter_dimensions_filter_1_expressions_filter_double_value = {
+ value = 22.65
+ }
+ }
+ }
+ }
+ },
+ ]
+ }
+ }
+ dimensions = [
+ "...",
+ ]
+ metric_filter = {
+ source_google_analytics_data_api_update_and_group = {
+ expressions = [
+ {
+ field_name = "...my_field_name..."
+ filter = {
+ source_google_analytics_data_api_schemas_custom_reports_array_metric_filter_between_filter = {
+ from_value = {
+ source_google_analytics_data_api_schemas_custom_reports_array_metric_filter_metrics_filter_1_expressions_filter_double_value = {
+ value = 8.4
+ }
+ }
+ to_value = {
+ source_google_analytics_data_api_schemas_custom_reports_array_metric_filter_metrics_filter_1_double_value = {
+ value = 77.49
+ }
+ }
+ }
+ }
+ },
+ ]
+ }
+ }
+ metrics = [
+ "...",
+ ]
+ name = "Mrs. Mercedes Herman PhD"
+ },
+ ]
date_ranges_start_date = "2021-01-01"
- property_id = "5729978930"
- source_type = "google-analytics-data-api"
- window_in_days = 364
+ property_ids = [
+ "...",
+ ]
+ window_in_days = 60
}
- name = "Juanita Collier"
- secret_id = "...my_secret_id..."
- workspace_id = "0e9b200c-e78a-41bd-8fb7-a0a116ce723d"
+ definition_id = "d4fc0324-2ccd-4276-ba0d-30eb91c3df25"
+ name = "Rodney Goldner"
+ secret_id = "...my_secret_id..."
+ workspace_id = "52dc8258-f30a-4271-83b0-0ec7045956c0"
}
```
@@ -42,11 +99,12 @@ resource "airbyte_source_google_analytics_data_api" "my_source_googleanalyticsda
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -59,77 +117,1228 @@ resource "airbyte_source_google_analytics_data_api" "my_source_googleanalyticsda
Required:
-- `date_ranges_start_date` (String) The start date from which to replicate report data in the format YYYY-MM-DD. Data generated before this date will not be included in the report. Not applied to custom Cohort reports.
-- `property_id` (String) The Property ID is a unique number assigned to each property in Google Analytics, found in your GA4 property URL. This ID allows the connector to track the specific events associated with your property. Refer to the Google Analytics documentation to locate your property ID.
-- `source_type` (String) must be one of ["google-analytics-data-api"]
+- `property_ids` (List of String) A list of your Property IDs. The Property ID is a unique number assigned to each property in Google Analytics, found in your GA4 property URL. This ID allows the connector to track the specific events associated with your property. Refer to the Google Analytics documentation to locate your property ID.
Optional:
- `credentials` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials))
-- `custom_reports` (String) A JSON array describing the custom reports you want to sync from Google Analytics. See the documentation for more information about the exact format you can use to fill out this field.
-- `window_in_days` (Number) The interval in days for each data request made to the Google Analytics API. A larger value speeds up data sync, but increases the chance of data sampling, which may result in inaccuracies. We recommend a value of 1 to minimize sampling, unless speed is an absolute priority over accuracy. Acceptable values range from 1 to 364. Does not apply to custom Cohort reports. More information is available in the documentation.
+- `custom_reports_array` (Attributes List) You can add your Custom Analytics report by creating one. (see [below for nested schema](#nestedatt--configuration--custom_reports_array))
+- `date_ranges_start_date` (String) The start date from which to replicate report data in the format YYYY-MM-DD. Data generated before this date will not be included in the report. Not applied to custom Cohort reports.
+- `window_in_days` (Number) Default: 1
+The interval in days for each data request made to the Google Analytics API. A larger value speeds up data sync, but increases the chance of data sampling, which may result in inaccuracies. We recommend a value of 1 to minimize sampling, unless speed is an absolute priority over accuracy. Acceptable values range from 1 to 364. Does not apply to custom Cohort reports. More information is available in the documentation.
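
The switch from a single `property_id` to a `property_ids` list means existing configurations wrap the old value in a list; a minimal `configuration` block under the new schema (the ID is a placeholder):

```terraform
configuration = {
  property_ids           = ["123456789"]
  date_ranges_start_date = "2021-01-01"
  window_in_days         = 1 # default; raise only if sync speed outweighs sampling accuracy
}
```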
### Nested Schema for `configuration.credentials`
Optional:
-- `source_google_analytics_data_api_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_data_api_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_credentials_service_account_key_authentication))
-- `source_google_analytics_data_api_update_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_update_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_data_api_update_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_data_api_update_credentials_service_account_key_authentication))
+- `authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_google_oauth))
+- `service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--service_account_key_authentication))
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_credentials_authenticate_via_google_oauth`
+
+### Nested Schema for `configuration.credentials.authenticate_via_google_oauth`
Required:
- `client_id` (String) The Client ID of your Google Analytics developer application.
- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
+- `refresh_token` (String, Sensitive) The token for obtaining a new access token.
Optional:
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_credentials_service_account_key_authentication`
+
+### Nested Schema for `configuration.credentials.service_account_key_authentication`
Required:
- `credentials_json` (String) The JSON key linked to the service account used for authorization. For steps on obtaining this key, refer to the setup guide.
+
+
+
+### Nested Schema for `configuration.custom_reports_array`
+
+Required:
+
+- `dimensions` (List of String) A list of dimensions.
+- `metrics` (List of String) A list of metrics.
+- `name` (String) The name of the custom report, this name would be used as stream name.
+
Optional:
-- `auth_type` (String) must be one of ["Service"]
+- `dimension_filter` (Attributes) Dimensions filter (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter))
+- `metric_filter` (Attributes) Metrics filter (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter))
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter`
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_update_credentials_authenticate_via_google_oauth`
+Optional:
+
+- `and_group` (Attributes) The FilterExpressions in andGroup have an AND relationship. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--and_group))
+- `filter` (Attributes) A primitive filter. In the same FilterExpression, all of the filter's field names need to be dimensions. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--filter))
+- `not_expression` (Attributes) The FilterExpression is NOT of notExpression. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--not_expression))
+- `or_group` (Attributes) The FilterExpressions in orGroup have an OR relationship. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group`
Required:
-- `client_id` (String) The Client ID of your Google Analytics developer application.
-- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
+- `expressions` (Attributes List) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.from_value`
Optional:
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter--from_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter--from_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
-
-### Nested Schema for `configuration.credentials.source_google_analytics_data_api_update_credentials_service_account_key_authentication`
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.from_value.int64_value`
Required:
-- `credentials_json` (String) The JSON key linked to the service account used for authorization. For steps on obtaining this key, refer to the setup guide.
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--numeric_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--numeric_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--numeric_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.string_filter`
+
+Required:
+
+- `value` (String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
+
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.between_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--between_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--between_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.between_filter.from_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--between_filter--from_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--between_filter--from_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.between_filter.from_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--between_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--between_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--numeric_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--numeric_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--filter--numeric_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.filter.string_filter`
+
+Required:
+
+- `value` (String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group`
+
+Optional:
+
+- `expression` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.between_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--between_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--between_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.between_filter.from_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--between_filter--from_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--between_filter--from_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.between_filter.from_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--between_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--between_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--numeric_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--numeric_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expression--filter--numeric_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expression.filter.string_filter`
+
+Required:
+
+- `value` (String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
+
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group`
+
+Required:
+
+- `expressions` (Attributes List) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.string_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.from_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.from_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
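+Pulling the pieces above together (the numbers are placeholders), a `between_filter` pairs a `from_value` and a `to_value`, each carried as either a `double_value` or a string-encoded `int64_value`:
+
+```terraform
+between_filter = {
+  from_value = {
+    int64_value = {
+      value = "10" # int64 values are string-encoded per the schema above
+    }
+  }
+  to_value = {
+    double_value = {
+      value = 99.5
+    }
+  }
+}
+```
+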
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--dimension_filter--or_group--expressions--filter--string_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.dimension_filter.or_group.expressions.filter.string_filter`
+
+Required:
+
+- `value` (String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
+
+
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter`
+
+Optional:
+
+- `and_group` (Attributes) The FilterExpressions in andGroup have an AND relationship. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--and_group))
+- `filter` (Attributes) A primitive filter. In the same FilterExpression, all of the filter's field names need to be all metrics. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--filter))
+- `not_expression` (Attributes) The FilterExpression is the negation (NOT) of notExpression. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--not_expression))
+- `or_group` (Attributes) The FilterExpressions in orGroup have an OR relationship. (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group))
+
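+As a rough sketch only (the metric name and threshold are placeholders, and this assumes the primitive `filter` takes the same `field_name`/`filter` pair as the expression entries documented below), a `metric_filter` that keeps rows where a metric exceeds a threshold might look like:
+
+```terraform
+metric_filter = {
+  filter = {
+    field_name = "sessions" # placeholder metric name
+    filter = {
+      # a numeric filter pairs an operation with a typed value;
+      # int64 values are passed as strings per the schema below
+      numeric_filter = {
+        operation = ["GREATER_THAN"]
+        value = {
+          int64_value = {
+            value = "100"
+          }
+        }
+      }
+    }
+  }
+}
+```
+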
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group`
+
+Required:
+
+- `expressions` (Attributes List) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.from_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.from_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.string_filter`
+
+Required:
+
+- `value` (String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
+
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.between_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.between_filter.from_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.between_filter.from_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--filter--string_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.filter.string_filter`
+
+Required:
+
+- `value` (String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group`
+
+Optional:
+
+- `expression` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.between_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.between_filter.from_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.between_filter.from_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expression--filter--string_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expression.filter.string_filter`
+
+Required:
+
+- `value` (String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
+
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group`
+
+Required:
+
+- `expressions` (Attributes List) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions`
+
+Required:
+
+- `field_name` (String)
+- `filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter`
+
+Optional:
+
+- `between_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--between_filter))
+- `in_list_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--in_list_filter))
+- `numeric_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--numeric_filter))
+- `string_filter` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter`
+
+Required:
+
+- `from_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--from_value))
+- `to_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.from_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.from_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.from_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.to_value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--to_value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.to_value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.between_filter.to_value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.in_list_filter`
+
+Required:
+
+- `values` (List of String)
+
+Optional:
+
+- `case_sensitive` (Boolean)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter`
+
+Required:
+
+- `operation` (List of String)
+- `value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter.value`
+
+Optional:
+
+- `double_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--value--double_value))
+- `int64_value` (Attributes) (see [below for nested schema](#nestedatt--configuration--custom_reports_array--metric_filter--or_group--expressions--filter--string_filter--value--int64_value))
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter.value.double_value`
+
+Required:
+
+- `value` (Number)
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.numeric_filter.value.int64_value`
+
+Required:
+
+- `value` (String)
+
+
+
+
+
+### Nested Schema for `configuration.custom_reports_array.metric_filter.or_group.expressions.filter.string_filter`
+
+Required:
+
+- `value` (String)
Optional:
-- `auth_type` (String) must be one of ["Service"]
+- `case_sensitive` (Boolean)
+- `match_type` (List of String)
diff --git a/docs/resources/source_google_analytics_v4.md b/docs/resources/source_google_analytics_v4.md
deleted file mode 100644
index 1a2d2ab2a..000000000
--- a/docs/resources/source_google_analytics_v4.md
+++ /dev/null
@@ -1,135 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_google_analytics_v4 Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceGoogleAnalyticsV4 Resource
----
-
-# airbyte_source_google_analytics_v4 (Resource)
-
-SourceGoogleAnalyticsV4 Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_source_google_analytics_v4" "my_source_googleanalyticsv4" {
- configuration = {
- credentials = {
- source_google_analytics_v4_credentials_authenticate_via_google_oauth_ = {
- access_token = "...my_access_token..."
- auth_type = "Client"
- client_id = "...my_client_id..."
- client_secret = "...my_client_secret..."
- refresh_token = "...my_refresh_token..."
- }
- }
- custom_reports = "...my_custom_reports..."
- source_type = "google-analytics-v4"
- start_date = "2020-06-01"
- view_id = "...my_view_id..."
- window_in_days = 120
- }
- name = "Dr. Doug Dibbert"
- secret_id = "...my_secret_id..."
- workspace_id = "af725b29-1220-430d-83f5-aeb7799d22e8"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `source_id` (String)
-- `source_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `source_type` (String) must be one of ["google-analytics-v4"]
-- `start_date` (String) The date in the format YYYY-MM-DD. Any data before this date will not be replicated.
-- `view_id` (String) The ID for the Google Analytics View you want to fetch data from. This can be found from the Google Analytics Account Explorer.
-
-Optional:
-
-- `credentials` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials))
-- `custom_reports` (String) A JSON array describing the custom reports you want to sync from Google Analytics. See the docs for more information about the exact format you can use to fill out this field.
-- `window_in_days` (Number) The time increment used by the connector when requesting data from the Google Analytics API. More information is available in the the docs. The bigger this value is, the faster the sync will be, but the more likely that sampling will be applied to your data, potentially causing inaccuracies in the returned results. We recommend setting this to 1 unless you have a hard requirement to make the sync faster at the expense of accuracy. The minimum allowed value for this field is 1, and the maximum is 364.
-
-
-### Nested Schema for `configuration.credentials`
-
-Optional:
-
-- `source_google_analytics_v4_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_v4_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_credentials_service_account_key_authentication))
-- `source_google_analytics_v4_update_credentials_authenticate_via_google_oauth` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_update_credentials_authenticate_via_google_oauth))
-- `source_google_analytics_v4_update_credentials_service_account_key_authentication` (Attributes) Credentials for the service (see [below for nested schema](#nestedatt--configuration--credentials--source_google_analytics_v4_update_credentials_service_account_key_authentication))
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_credentials_authenticate_via_google_oauth`
-
-Required:
-
-- `client_id` (String) The Client ID of your Google Analytics developer application.
-- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-
-Optional:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_credentials_service_account_key_authentication`
-
-Required:
-
-- `credentials_json` (String) The JSON key of the service account to use for authorization
-
-Optional:
-
-- `auth_type` (String) must be one of ["Service"]
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_update_credentials_authenticate_via_google_oauth`
-
-Required:
-
-- `client_id` (String) The Client ID of your Google Analytics developer application.
-- `client_secret` (String) The Client Secret of your Google Analytics developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-
-Optional:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["Client"]
-
-
-
-### Nested Schema for `configuration.credentials.source_google_analytics_v4_update_credentials_service_account_key_authentication`
-
-Required:
-
-- `credentials_json` (String) The JSON key of the service account to use for authorization
-
-Optional:
-
-- `auth_type` (String) must be one of ["Service"]
-
-
diff --git a/docs/resources/source_google_directory.md b/docs/resources/source_google_directory.md
index 277f8a891..0142780e7 100644
--- a/docs/resources/source_google_directory.md
+++ b/docs/resources/source_google_directory.md
@@ -16,17 +16,16 @@ SourceGoogleDirectory Resource
resource "airbyte_source_google_directory" "my_source_googledirectory" {
configuration = {
credentials = {
- source_google_directory_google_credentials_service_account_key = {
- credentials_json = "...my_credentials_json..."
- credentials_title = "Service accounts"
- email = "Ayla.Zulauf@hotmail.com"
+ service_account_key = {
+ credentials_json = "...my_credentials_json..."
+ email = "Sharon_Schmidt@gmail.com"
}
}
- source_type = "google-directory"
}
- name = "Mrs. Allen Lockman"
- secret_id = "...my_secret_id..."
- workspace_id = "dc42c876-c2c2-4dfb-8cfc-1c76230f841f"
+ definition_id = "8b68fdfc-0692-4b4f-9673-f59a8d0acc99"
+ name = "Mr. Mattie Rau"
+ secret_id = "...my_secret_id..."
+ workspace_id = "1059fac1-d6c9-4b0f-8f35-d942704e93eb"
}
```
@@ -36,11 +35,12 @@ resource "airbyte_source_google_directory" "my_source_googledirectory" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,10 +51,6 @@ resource "airbyte_source_google_directory" "my_source_googledirectory" {
### Nested Schema for `configuration`
-Required:
-
-- `source_type` (String) must be one of ["google-directory"]
-
Optional:
- `credentials` (Attributes) Google APIs use the OAuth 2.0 protocol for authentication and authorization. The Source supports Web server application and Service accounts scenarios. (see [below for nested schema](#nestedatt--configuration--credentials))
@@ -64,66 +60,25 @@ Optional:
Optional:
-- `source_google_directory_google_credentials_service_account_key` (Attributes) For these scenario user should obtain service account's credentials from the Google API Console and provide delegated email. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_google_credentials_service_account_key))
-- `source_google_directory_google_credentials_sign_in_via_google_o_auth` (Attributes) For these scenario user only needs to give permission to read Google Directory data. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_google_credentials_sign_in_via_google_o_auth))
-- `source_google_directory_update_google_credentials_service_account_key` (Attributes) For these scenario user should obtain service account's credentials from the Google API Console and provide delegated email. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_update_google_credentials_service_account_key))
-- `source_google_directory_update_google_credentials_sign_in_via_google_o_auth` (Attributes) For these scenario user only needs to give permission to read Google Directory data. (see [below for nested schema](#nestedatt--configuration--credentials--source_google_directory_update_google_credentials_sign_in_via_google_o_auth))
+- `service_account_key` (Attributes) For this scenario, the user should obtain the service account's credentials from the Google API Console and provide a delegated email. (see [below for nested schema](#nestedatt--configuration--credentials--service_account_key))
+- `sign_in_via_google_o_auth` (Attributes) For this scenario, the user only needs to grant permission to read Google Directory data. (see [below for nested schema](#nestedatt--configuration--credentials--sign_in_via_google_o_auth))
-
-### Nested Schema for `configuration.credentials.source_google_directory_google_credentials_service_account_key`
+
+### Nested Schema for `configuration.credentials.service_account_key`
Required:
- `credentials_json` (String) The contents of the JSON service account key. See the docs for more information on how to generate this key.
- `email` (String) The email of the user, which has permissions to access the Google Workspace Admin APIs.
-Optional:
-
-- `credentials_title` (String) must be one of ["Service accounts"]
-Authentication Scenario
-
-
-### Nested Schema for `configuration.credentials.source_google_directory_google_credentials_sign_in_via_google_o_auth`
+
+### Nested Schema for `configuration.credentials.sign_in_via_google_o_auth`
Required:
- `client_id` (String) The Client ID of the developer application.
- `client_secret` (String) The Client Secret of the developer application.
-- `refresh_token` (String) The Token for obtaining a new access token.
-
-Optional:
-
-- `credentials_title` (String) must be one of ["Web server app"]
-Authentication Scenario
-
-
-
-### Nested Schema for `configuration.credentials.source_google_directory_update_google_credentials_service_account_key`
-
-Required:
-
-- `credentials_json` (String) The contents of the JSON service account key. See the docs for more information on how to generate this key.
-- `email` (String) The email of the user, which has permissions to access the Google Workspace Admin APIs.
-
-Optional:
-
-- `credentials_title` (String) must be one of ["Service accounts"]
-Authentication Scenario
-
-
-
-### Nested Schema for `configuration.credentials.source_google_directory_update_google_credentials_sign_in_via_google_o_auth`
-
-Required:
-
-- `client_id` (String) The Client ID of the developer application.
-- `client_secret` (String) The Client Secret of the developer application.
-- `refresh_token` (String) The Token for obtaining a new access token.
-
-Optional:
-
-- `credentials_title` (String) must be one of ["Web server app"]
-Authentication Scenario
+- `refresh_token` (String, Sensitive) The Token for obtaining a new access token.
diff --git a/docs/resources/source_google_drive.md b/docs/resources/source_google_drive.md
new file mode 100644
index 000000000..edbd2b8e2
--- /dev/null
+++ b/docs/resources/source_google_drive.md
@@ -0,0 +1,226 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_google_drive Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceGoogleDrive Resource
+---
+
+# airbyte_source_google_drive (Resource)
+
+SourceGoogleDrive Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_source_google_drive" "my_source_googledrive" {
+ configuration = {
+ credentials = {
+ source_google_drive_authenticate_via_google_o_auth = {
+ client_id = "...my_client_id..."
+ client_secret = "...my_client_secret..."
+ refresh_token = "...my_refresh_token..."
+ }
+ }
+ folder_url = "https://drive.google.com/drive/folders/1Xaz0vXXXX2enKnNYU5qSt9NS70gvMyYn"
+ start_date = "2021-01-01T00:00:00.000000Z"
+ streams = [
+ {
+ days_to_sync_if_history_is_full = 4
+ format = {
+ source_google_drive_avro_format = {
+ double_as_string = false
+ }
+ }
+ globs = [
+ "...",
+ ]
+ input_schema = "...my_input_schema..."
+ name = "Rex Pacocha"
+ primary_key = "...my_primary_key..."
+ schemaless = false
+ validation_policy = "Emit Record"
+ },
+ ]
+ }
+ definition_id = "f0c4c84b-89e6-425b-ae87-6a32dc31e1b4"
+ name = "Lester Kihn"
+ secret_id = "...my_secret_id..."
+ workspace_id = "53bf2def-ea2f-4d14-9f48-d36313985539"
+}
+```
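+
+As a sketch of the alternative credentials option (attribute names are taken from the schema below; the key contents and IDs are placeholders), the same source can instead authenticate with a service account:
+
+```terraform
+resource "airbyte_source_google_drive" "my_source_googledrive_sa" {
+  configuration = {
+    credentials = {
+      service_account_key_authentication = {
+        # Placeholder; supply the full JSON key of your service account.
+        service_account_info = "...my_service_account_json..."
+      }
+    }
+    folder_url = "https://drive.google.com/drive/folders/1Xaz0vXXXX2enKnNYU5qSt9NS70gvMyYn"
+    streams = [
+      {
+        name = "my_stream"
+        format = {
+          jsonl_format = {}
+        }
+      },
+    ]
+  }
+  name         = "gdrive-service-account"
+  workspace_id = "53bf2def-ea2f-4d14-9f48-d36313985539"
+}
+```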
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) Used during spec; allows the developer to configure the cloud provider specific options
+that are needed when users configure a file-based source. (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the source e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
+- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
+
+### Read-Only
+
+- `source_id` (String)
+- `source_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `credentials` (Attributes) Credentials for connecting to the Google Drive API (see [below for nested schema](#nestedatt--configuration--credentials))
+- `folder_url` (String) URL for the folder you want to sync. Using individual streams and glob patterns, it's possible to only sync a subset of all files located in the folder.
+- `streams` (Attributes List) Each instance of this configuration defines a stream. Use this to define which files belong in the stream, their format, and how they should be parsed and validated. When sending data to a warehouse destination such as Snowflake or BigQuery, each stream is a separate table. (see [below for nested schema](#nestedatt--configuration--streams))
+
+Optional:
+
+- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00.000000Z. Any file modified before this date will not be replicated.
+
+
+### Nested Schema for `configuration.credentials`
+
+Optional:
+
+- `authenticate_via_google_o_auth` (Attributes) Credentials for connecting to the Google Drive API (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_google_o_auth))
+- `service_account_key_authentication` (Attributes) Credentials for connecting to the Google Drive API (see [below for nested schema](#nestedatt--configuration--credentials--service_account_key_authentication))
+
+
+### Nested Schema for `configuration.credentials.authenticate_via_google_o_auth`
+
+Required:
+
+- `client_id` (String) Client ID for the Google Drive API
+- `client_secret` (String) Client Secret for the Google Drive API
+- `refresh_token` (String, Sensitive) Refresh Token for the Google Drive API
+
+
+
+### Nested Schema for `configuration.credentials.service_account_key_authentication`
+
+Required:
+
+- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
+
+
+
+
+### Nested Schema for `configuration.streams`
+
+Required:
+
+- `format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format))
+- `name` (String) The name of the stream.
+
+Optional:
+
+- `days_to_sync_if_history_is_full` (Number) Default: 3
+When the state history of the file store is full, syncs will only read files that were last modified in the provided day range.
+- `globs` (List of String) The pattern used to specify which files should be selected from the file system. For more information on glob pattern matching look here.
+- `input_schema` (String) The schema that will be used to validate records extracted from the file. This will override the stream schema that is auto-detected from incoming files.
+- `primary_key` (String, Sensitive) The column or columns (for a composite key) that serves as the unique identifier of a record.
+- `schemaless` (Boolean) Default: false
+When enabled, syncs will not validate or structure records against the stream's schema.
+- `validation_policy` (String) must be one of ["Emit Record", "Skip Record", "Wait for Discover"]; Default: "Emit Record"
+The name of the validation policy that dictates sync behavior when a record does not adhere to the stream schema.
+
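+For instance, glob patterns and the validation policy combine per stream as follows (a hedged fragment; the glob value and stream name are illustrative):
+
+```terraform
+streams = [
+  {
+    name = "reports"
+    # Illustrative glob: select every CSV under the reports/ prefix.
+    globs             = ["reports/**/*.csv"]
+    validation_policy = "Skip Record"
+    format = {
+      csv_format = {}
+    }
+  },
+]
+```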
+
+### Nested Schema for `configuration.streams.format`
+
+Optional:
+
+- `avro_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--avro_format))
+- `csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format))
+- `document_file_type_format_experimental` (Attributes) Extract text from document formats (.pdf, .docx, .md, .pptx) and emit as one record per file. (see [below for nested schema](#nestedatt--configuration--streams--format--document_file_type_format_experimental))
+- `jsonl_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--jsonl_format))
+- `parquet_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--parquet_format))
+
+
+### Nested Schema for `configuration.streams.format.avro_format`
+
+Optional:
+
+- `double_as_string` (Boolean) Default: false
+Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision, because there can be a loss of precision when handling floating point numbers.
+
+
+
+### Nested Schema for `configuration.streams.format.csv_format`
+
+Optional:
+
+- `delimiter` (String) Default: ","
+The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
+- `double_quote` (Boolean) Default: true
+Whether two quotes in a quoted CSV value denote a single quote in the data.
+- `encoding` (String) Default: "utf8"
+The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
+- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
+- `false_values` (List of String) A set of case-sensitive strings that should be interpreted as false values.
+- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided, and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. Otherwise, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition))
+- `null_values` (List of String) A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
+- `quote_char` (String) Default: "\""
+The character used for quoting CSV values. To disallow quoting, make this field blank.
+- `skip_rows_after_header` (Number) Default: 0
+The number of rows to skip after the header row.
+- `skip_rows_before_header` (Number) Default: 0
+The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
+- `strings_can_be_null` (Boolean) Default: true
+Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
+- `true_values` (List of String) A set of case-sensitive strings that should be interpreted as true values.
+
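+As a sketch of the CSV options above (a hedged fragment; the values are illustrative, not defaults):
+
+```terraform
+format = {
+  csv_format = {
+    delimiter               = ";"
+    encoding                = "utf8"
+    skip_rows_before_header = 2
+    null_values             = ["NA", "null"]
+    true_values             = ["yes", "y"]
+    false_values            = ["no", "n"]
+  }
+}
+```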
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition`
+
+Optional:
+
+- `autogenerated` (Attributes) Assumes the CSV does not have a header row; the CDK will generate headers using `f{i}`, where `i` is the index starting from 0. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--autogenerated))
+- `from_csv` (Attributes) The default behavior: use the header row from the CSV file. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--from_csv))
+- `user_provided` (Attributes) Assumes the CSV does not have a header row and uses the headers provided. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format--header_definition--user_provided))
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.autogenerated`
+
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.from_csv`
+
+
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.user_provided`
+
+Required:
+
+- `column_names` (List of String) The column names that will be used while emitting the CSV records
+
+
+
+
+
+### Nested Schema for `configuration.streams.format.document_file_type_format_experimental`
+
+Optional:
+
+- `skip_unprocessable_file_types` (Boolean) Default: true
+If true, skip files that cannot be parsed because of their file type and log a warning. If false, fail the sync. Corrupted files with valid file types will still result in a failed sync.
+
+
+
+### Nested Schema for `configuration.streams.format.jsonl_format`
+
+
+
+### Nested Schema for `configuration.streams.format.parquet_format`
+
+Optional:
+
+- `decimal_as_float` (Boolean) Default: false
+Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
+
+
diff --git a/docs/resources/source_google_pagespeed_insights.md b/docs/resources/source_google_pagespeed_insights.md
index c574bdc92..9229e1f80 100644
--- a/docs/resources/source_google_pagespeed_insights.md
+++ b/docs/resources/source_google_pagespeed_insights.md
@@ -17,9 +17,8 @@ resource "airbyte_source_google_pagespeed_insights" "my_source_googlepagespeedin
configuration = {
api_key = "...my_api_key..."
categories = [
- "pwa",
+ "seo",
]
- source_type = "google-pagespeed-insights"
strategies = [
"desktop",
]
@@ -27,9 +26,10 @@ resource "airbyte_source_google_pagespeed_insights" "my_source_googlepagespeedin
"...",
]
}
- name = "Kristopher Dare"
- secret_id = "...my_secret_id..."
- workspace_id = "db14db6b-e5a6-4859-98e2-2ae20da16fc2"
+ definition_id = "52d3206a-fb3a-4724-a60d-40134e58876c"
+ name = "Miss Ronald Erdman Sr."
+ secret_id = "...my_secret_id..."
+ workspace_id = "8ae06a57-c7c5-477a-b1e5-baddd2747bbc"
}
```
@@ -39,11 +39,12 @@ resource "airbyte_source_google_pagespeed_insights" "my_source_googlepagespeedin
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,12 +58,11 @@ resource "airbyte_source_google_pagespeed_insights" "my_source_googlepagespeedin
Required:
- `categories` (List of String) Defines which Lighthouse category to run. One or many of: "accessibility", "best-practices", "performance", "pwa", "seo".
-- `source_type` (String) must be one of ["google-pagespeed-insights"]
- `strategies` (List of String) The analyses strategy to use. Either "desktop" or "mobile".
- `urls` (List of String) The URLs to retrieve pagespeed information from. The connector will attempt to sync PageSpeed reports for all the defined URLs. Format: https://(www.)url.domain
Optional:
-- `api_key` (String) Google PageSpeed API Key. See here. The key is optional - however the API is heavily rate limited when using without API Key. Creating and using the API key therefore is recommended. The key is case sensitive.
+- `api_key` (String, Sensitive) Google PageSpeed API Key. See here. The key is optional - however the API is heavily rate limited when using without API Key. Creating and using the API key therefore is recommended. The key is case sensitive.
diff --git a/docs/resources/source_google_search_console.md b/docs/resources/source_google_search_console.md
index 5d721b254..79735b33c 100644
--- a/docs/resources/source_google_search_console.md
+++ b/docs/resources/source_google_search_console.md
@@ -16,9 +16,8 @@ SourceGoogleSearchConsole Resource
resource "airbyte_source_google_search_console" "my_source_googlesearchconsole" {
configuration = {
authorization = {
- source_google_search_console_authentication_type_o_auth = {
+ source_google_search_console_o_auth = {
access_token = "...my_access_token..."
- auth_type = "Client"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
@@ -28,9 +27,9 @@ resource "airbyte_source_google_search_console" "my_source_googlesearchconsole"
custom_reports_array = [
{
dimensions = [
- "page",
+ "device",
]
- name = "Heidi Bernier"
+ name = "Ms. Randy Gorczany V"
},
]
data_state = "all"
@@ -38,12 +37,12 @@ resource "airbyte_source_google_search_console" "my_source_googlesearchconsole"
site_urls = [
"...",
]
- source_type = "google-search-console"
- start_date = "2022-07-11"
+ start_date = "2020-03-18"
}
- name = "Jordan Hilll"
- secret_id = "...my_secret_id..."
- workspace_id = "90439d22-2465-4694-a240-7084f7ab37ce"
+ definition_id = "165bc484-0e7f-4b5d-b254-77f370b0ec7c"
+ name = "Wendell Rempel"
+ secret_id = "...my_secret_id..."
+ workspace_id = "0cb9d8df-c27a-48c7-ac3e-b5dc55714db0"
}
```
@@ -53,11 +52,12 @@ resource "airbyte_source_google_search_console" "my_source_googlesearchconsole"
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -72,73 +72,44 @@ Required:
- `authorization` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization))
- `site_urls` (List of String) The URLs of the website property attached to your GSC account. Learn more about properties here.
-- `source_type` (String) must be one of ["google-search-console"]
Optional:
- `custom_reports` (String) (DEPRECATED) A JSON array describing the custom reports you want to sync from Google Search Console. See our documentation for more information on formulating custom reports.
- `custom_reports_array` (Attributes List) You can add your Custom Analytics report by creating one. (see [below for nested schema](#nestedatt--configuration--custom_reports_array))
-- `data_state` (String) must be one of ["final", "all"]
+- `data_state` (String) must be one of ["final", "all"]; Default: "final"
If set to 'final', the returned data will include only finalized, stable data. If set to 'all', fresh data will be included. When using Incremental sync mode, we do not recommend setting this parameter to 'all' as it may cause data loss. More information can be found in our full documentation.
- `end_date` (String) UTC date in the format YYYY-MM-DD. Any data created after this date will not be replicated. Must be greater or equal to the start date field. Leaving this field blank will replicate all data from the start date onward.
-- `start_date` (String) UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated.
+- `start_date` (String) Default: "2021-01-01"
+UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated.
### Nested Schema for `configuration.authorization`
Optional:
-- `source_google_search_console_authentication_type_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_authentication_type_o_auth))
-- `source_google_search_console_authentication_type_service_account_key_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_authentication_type_service_account_key_authentication))
-- `source_google_search_console_update_authentication_type_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_update_authentication_type_o_auth))
-- `source_google_search_console_update_authentication_type_service_account_key_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--source_google_search_console_update_authentication_type_service_account_key_authentication))
+- `o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--o_auth))
+- `service_account_key_authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization--service_account_key_authentication))
-
-### Nested Schema for `configuration.authorization.source_google_search_console_authentication_type_o_auth`
+
+### Nested Schema for `configuration.authorization.o_auth`
Required:
-- `auth_type` (String) must be one of ["Client"]
- `client_id` (String) The client ID of your Google Search Console developer application. Read more here.
- `client_secret` (String) The client secret of your Google Search Console developer application. Read more here.
-- `refresh_token` (String) The token for obtaining a new access token. Read more here.
+- `refresh_token` (String, Sensitive) The token for obtaining a new access token. Read more here.
Optional:
-- `access_token` (String) Access token for making authenticated requests. Read more here.
+- `access_token` (String, Sensitive) Access token for making authenticated requests. Read more here.
-
-### Nested Schema for `configuration.authorization.source_google_search_console_authentication_type_service_account_key_authentication`
+
+### Nested Schema for `configuration.authorization.service_account_key_authentication`
Required:
-- `auth_type` (String) must be one of ["Service"]
-- `email` (String) The email of the user which has permissions to access the Google Workspace Admin APIs.
-- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
-
-
-
-### Nested Schema for `configuration.authorization.source_google_search_console_update_authentication_type_o_auth`
-
-Required:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The client ID of your Google Search Console developer application. Read more here.
-- `client_secret` (String) The client secret of your Google Search Console developer application. Read more here.
-- `refresh_token` (String) The token for obtaining a new access token. Read more here.
-
-Optional:
-
-- `access_token` (String) Access token for making authenticated requests. Read more here.
-
-
-
-### Nested Schema for `configuration.authorization.source_google_search_console_update_authentication_type_service_account_key_authentication`
-
-Required:
-
-- `auth_type` (String) must be one of ["Service"]
- `email` (String) The email of the user which has permissions to access the Google Workspace Admin APIs.
- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
@@ -149,7 +120,7 @@ Required:
Required:
-- `dimensions` (List of String) A list of dimensions (country, date, device, page, query)
+- `dimensions` (List of String) A list of available dimensions. Please note that, for technical reasons, `date` is the default dimension and will be included in your query whether you specify it or not. The primary key will consist of your custom dimensions and the default dimension along with `site_url` and `search_type`.
- `name` (String) The name of the custom report, this name would be used as stream name
diff --git a/docs/resources/source_google_sheets.md b/docs/resources/source_google_sheets.md
index 3ba1744db..79e44ade4 100644
--- a/docs/resources/source_google_sheets.md
+++ b/docs/resources/source_google_sheets.md
@@ -16,20 +16,19 @@ SourceGoogleSheets Resource
resource "airbyte_source_google_sheets" "my_source_googlesheets" {
configuration = {
credentials = {
- source_google_sheets_authentication_authenticate_via_google_o_auth_ = {
- auth_type = "Client"
+ source_google_sheets_authenticate_via_google_o_auth = {
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
}
}
names_conversion = true
- source_type = "google-sheets"
spreadsheet_id = "https://docs.google.com/spreadsheets/d/1hLd9Qqti3UyLXZB2aFfUWDT7BG-arw2xy4HR3D-dwUb/edit"
}
- name = "Irene Davis"
- secret_id = "...my_secret_id..."
- workspace_id = "194db554-10ad-4c66-9af9-0a26c7cdc981"
+ definition_id = "d7698733-386b-453a-879a-0805ff1793bf"
+ name = "Roderick Kutch"
+ secret_id = "...my_secret_id..."
+ workspace_id = "d63199bd-6b46-48c8-9ec2-1a9ab567f13c"
}
```
@@ -39,11 +38,12 @@ resource "airbyte_source_google_sheets" "my_source_googlesheets" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,60 +57,36 @@ resource "airbyte_source_google_sheets" "my_source_googlesheets" {
Required:
- `credentials` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["google-sheets"]
- `spreadsheet_id` (String) Enter the link to the Google spreadsheet you want to sync. To copy the link, click the 'Share' button in the top-right corner of the spreadsheet, then click 'Copy link'.
Optional:
-- `names_conversion` (Boolean) Enables the conversion of column names to a standardized, SQL-compliant format. For example, 'My Name' -> 'my_name'. Enable this option if your destination is SQL-based.
+- `names_conversion` (Boolean) Default: false
+Enables the conversion of column names to a standardized, SQL-compliant format. For example, 'My Name' -> 'my_name'. Enable this option if your destination is SQL-based.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_google_sheets_authentication_authenticate_via_google_o_auth` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_authentication_authenticate_via_google_o_auth))
-- `source_google_sheets_authentication_service_account_key_authentication` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_authentication_service_account_key_authentication))
-- `source_google_sheets_update_authentication_authenticate_via_google_o_auth` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_update_authentication_authenticate_via_google_o_auth))
-- `source_google_sheets_update_authentication_service_account_key_authentication` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--source_google_sheets_update_authentication_service_account_key_authentication))
+- `authenticate_via_google_o_auth` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_google_o_auth))
+- `service_account_key_authentication` (Attributes) Credentials for connecting to the Google Sheets API (see [below for nested schema](#nestedatt--configuration--credentials--service_account_key_authentication))
-
-### Nested Schema for `configuration.credentials.source_google_sheets_authentication_authenticate_via_google_o_auth`
+
+### Nested Schema for `configuration.credentials.authenticate_via_google_o_auth`
Required:
-- `auth_type` (String) must be one of ["Client"]
- `client_id` (String) Enter your Google application's Client ID. See Google's documentation for more information.
- `client_secret` (String) Enter your Google application's Client Secret. See Google's documentation for more information.
-- `refresh_token` (String) Enter your Google application's refresh token. See Google's documentation for more information.
+- `refresh_token` (String, Sensitive) Enter your Google application's refresh token. See Google's documentation for more information.
-
-### Nested Schema for `configuration.credentials.source_google_sheets_authentication_service_account_key_authentication`
+
+### Nested Schema for `configuration.credentials.service_account_key_authentication`
Required:
-- `auth_type` (String) must be one of ["Service"]
-- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_sheets_update_authentication_authenticate_via_google_o_auth`
-
-Required:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) Enter your Google application's Client ID. See Google's documentation for more information.
-- `client_secret` (String) Enter your Google application's Client Secret. See Google's documentation for more information.
-- `refresh_token` (String) Enter your Google application's refresh token. See Google's documentation for more information.
-
-
-
-### Nested Schema for `configuration.credentials.source_google_sheets_update_authentication_service_account_key_authentication`
-
-Required:
-
-- `auth_type` (String) must be one of ["Service"]
- `service_account_info` (String) The JSON key of the service account to use for authorization. Read more here.
diff --git a/docs/resources/source_google_webfonts.md b/docs/resources/source_google_webfonts.md
index 002d3ab6f..68bf04cf3 100644
--- a/docs/resources/source_google_webfonts.md
+++ b/docs/resources/source_google_webfonts.md
@@ -19,11 +19,11 @@ resource "airbyte_source_google_webfonts" "my_source_googlewebfonts" {
api_key = "...my_api_key..."
pretty_print = "...my_pretty_print..."
sort = "...my_sort..."
- source_type = "google-webfonts"
}
- name = "Donald Hyatt"
- secret_id = "...my_secret_id..."
- workspace_id = "81d6bb33-cfaa-4348-831b-f407ee4fcf0c"
+ definition_id = "77e51fa7-73fc-4f1a-8306-e082909d97bf"
+ name = "Kerry Reinger"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3671a9ca-1d9c-4174-bee4-145562d27576"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_google_webfonts" "my_source_googlewebfonts" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,8 +51,7 @@ resource "airbyte_source_google_webfonts" "my_source_googlewebfonts" {
Required:
-- `api_key` (String) API key is required to access google apis, For getting your's goto google console and generate api key for Webfonts
-- `source_type` (String) must be one of ["google-webfonts"]
+- `api_key` (String, Sensitive) API key is required to access Google APIs. To get yours, go to the Google console and generate an API key for Webfonts.
Optional:
diff --git a/docs/resources/source_google_workspace_admin_reports.md b/docs/resources/source_google_workspace_admin_reports.md
index 9c81767bf..473dc000a 100644
--- a/docs/resources/source_google_workspace_admin_reports.md
+++ b/docs/resources/source_google_workspace_admin_reports.md
@@ -16,13 +16,13 @@ SourceGoogleWorkspaceAdminReports Resource
resource "airbyte_source_google_workspace_admin_reports" "my_source_googleworkspaceadminreports" {
configuration = {
credentials_json = "...my_credentials_json..."
- email = "Bridgette_Rohan@gmail.com"
- lookback = 10
- source_type = "google-workspace-admin-reports"
+ email = "Daisha.Halvorson12@gmail.com"
+ lookback = 8
}
- name = "Samantha Huels"
- secret_id = "...my_secret_id..."
- workspace_id = "398a0dc7-6632-44cc-b06c-8ca12d025292"
+ definition_id = "b8adc8fd-2a7f-4940-9ec4-4e216dff8929"
+ name = "Francisco Swaniawski"
+ secret_id = "...my_secret_id..."
+ workspace_id = "a00b494f-7d68-4d64-a810-b2959587ed0c"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_google_workspace_admin_reports" "my_source_googleworksp
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,7 +52,6 @@ Required:
- `credentials_json` (String) The contents of the JSON service account key. See the docs for more information on how to generate this key.
- `email` (String) The email of the user, which has permissions to access the Google Workspace Admin APIs.
-- `source_type` (String) must be one of ["google-workspace-admin-reports"]
Optional:
diff --git a/docs/resources/source_greenhouse.md b/docs/resources/source_greenhouse.md
index 1a78be76f..cf7c580db 100644
--- a/docs/resources/source_greenhouse.md
+++ b/docs/resources/source_greenhouse.md
@@ -15,12 +15,12 @@ SourceGreenhouse Resource
```terraform
resource "airbyte_source_greenhouse" "my_source_greenhouse" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "greenhouse"
+ api_key = "...my_api_key..."
}
- name = "Patricia Pouros"
- secret_id = "...my_secret_id..."
- workspace_id = "5722dd89-5b8b-4cf2-8db9-59693352f745"
+ definition_id = "47c0f9ce-33c0-4f29-8c11-e4e993d29474"
+ name = "Cassandra Carroll"
+ secret_id = "...my_secret_id..."
+ workspace_id = "54dff6cf-9b79-4e23-a888-b6bde25154a5"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_greenhouse" "my_source_greenhouse" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_greenhouse" "my_source_greenhouse" {
Required:
-- `api_key` (String) Greenhouse API Key. See the docs for more information on how to generate this key.
-- `source_type` (String) must be one of ["greenhouse"]
+- `api_key` (String, Sensitive) Greenhouse API Key. See the docs for more information on how to generate this key.
diff --git a/docs/resources/source_gridly.md b/docs/resources/source_gridly.md
index 01d5a0080..da177273d 100644
--- a/docs/resources/source_gridly.md
+++ b/docs/resources/source_gridly.md
@@ -15,13 +15,13 @@ SourceGridly Resource
```terraform
resource "airbyte_source_gridly" "my_source_gridly" {
configuration = {
- api_key = "...my_api_key..."
- grid_id = "...my_grid_id..."
- source_type = "gridly"
+ api_key = "...my_api_key..."
+ grid_id = "...my_grid_id..."
}
- name = "Josephine McCullough"
- secret_id = "...my_secret_id..."
- workspace_id = "d78de3b6-e938-49f5-abb7-f662550a2838"
+ definition_id = "2da80f2b-fa49-4853-a695-0935ad536c50"
+ name = "Megan Kshlerin"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e30b46b9-59e4-4e75-8ac0-9227119b95b6"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_gridly" "my_source_gridly" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_gridly" "my_source_gridly" {
Required:
-- `api_key` (String)
+- `api_key` (String, Sensitive)
- `grid_id` (String) ID of a grid, or can be ID of a branch
-- `source_type` (String) must be one of ["gridly"]
diff --git a/docs/resources/source_harvest.md b/docs/resources/source_harvest.md
index 1eebc2ea3..1d335ea56 100644
--- a/docs/resources/source_harvest.md
+++ b/docs/resources/source_harvest.md
@@ -17,20 +17,20 @@ resource "airbyte_source_harvest" "my_source_harvest" {
configuration = {
account_id = "...my_account_id..."
credentials = {
- source_harvest_authentication_mechanism_authenticate_via_harvest_o_auth_ = {
- auth_type = "Client"
- client_id = "...my_client_id..."
- client_secret = "...my_client_secret..."
- refresh_token = "...my_refresh_token..."
+ authenticate_via_harvest_o_auth = {
+ additional_properties = "{ \"see\": \"documentation\" }"
+ client_id = "...my_client_id..."
+ client_secret = "...my_client_secret..."
+ refresh_token = "...my_refresh_token..."
}
}
replication_end_date = "2017-01-25T00:00:00Z"
replication_start_date = "2017-01-25T00:00:00Z"
- source_type = "harvest"
}
- name = "Rodney Orn"
- secret_id = "...my_secret_id..."
- workspace_id = "2315bba6-5016-44e0-af5b-f6ae591bc8bd"
+ definition_id = "bb7037ab-5561-4ce1-bb1c-adaa0e328a3b"
+ name = "Jorge Heathcote"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e04de287-b752-465f-9ff2-deb8cbf2674a"
}
```
@@ -40,11 +40,12 @@ resource "airbyte_source_harvest" "my_source_harvest" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -59,7 +60,6 @@ Required:
- `account_id` (String) Harvest account ID. Required for all Harvest requests in pair with Personal Access Token
- `replication_start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `source_type` (String) must be one of ["harvest"]
Optional:
@@ -71,64 +71,32 @@ Optional:
Optional:
-- `source_harvest_authentication_mechanism_authenticate_via_harvest_o_auth` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_authentication_mechanism_authenticate_via_harvest_o_auth))
-- `source_harvest_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_authentication_mechanism_authenticate_with_personal_access_token))
-- `source_harvest_update_authentication_mechanism_authenticate_via_harvest_o_auth` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_update_authentication_mechanism_authenticate_via_harvest_o_auth))
-- `source_harvest_update_authentication_mechanism_authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--source_harvest_update_authentication_mechanism_authenticate_with_personal_access_token))
+- `authenticate_via_harvest_o_auth` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_harvest_o_auth))
+- `authenticate_with_personal_access_token` (Attributes) Choose how to authenticate to Harvest. (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_with_personal_access_token))
-
-### Nested Schema for `configuration.credentials.source_harvest_authentication_mechanism_authenticate_via_harvest_o_auth`
+
+### Nested Schema for `configuration.credentials.authenticate_via_harvest_o_auth`
Required:
- `client_id` (String) The Client ID of your Harvest developer application.
- `client_secret` (String) The Client Secret of your Harvest developer application.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
+- `refresh_token` (String, Sensitive) Refresh Token to renew the expired Access Token.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Client"]
-
-### Nested Schema for `configuration.credentials.source_harvest_authentication_mechanism_authenticate_with_personal_access_token`
+
+### Nested Schema for `configuration.credentials.authenticate_with_personal_access_token`
Required:
-- `api_token` (String) Log into Harvest and then create new personal access token.
+- `api_token` (String, Sensitive) Log into Harvest and then create a new personal access token.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_harvest_update_authentication_mechanism_authenticate_via_harvest_o_auth`
-
-Required:
-
-- `client_id` (String) The Client ID of your Harvest developer application.
-- `client_secret` (String) The Client Secret of your Harvest developer application.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Client"]
-
-
-
-### Nested Schema for `configuration.credentials.source_harvest_update_authentication_mechanism_authenticate_with_personal_access_token`
-
-Required:
-
-- `api_token` (String) Log into Harvest and then create new personal access token.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Token"]
diff --git a/docs/resources/source_hubplanner.md b/docs/resources/source_hubplanner.md
index f20025573..4dbbecfd9 100644
--- a/docs/resources/source_hubplanner.md
+++ b/docs/resources/source_hubplanner.md
@@ -15,12 +15,12 @@ SourceHubplanner Resource
```terraform
resource "airbyte_source_hubplanner" "my_source_hubplanner" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "hubplanner"
+ api_key = "...my_api_key..."
}
- name = "Cary Emmerich Sr."
- secret_id = "...my_secret_id..."
- workspace_id = "b63c205f-da84-4077-8a68-a9a35d086b6f"
+ definition_id = "92033b17-bfcc-4526-af10-da401fb0fc52"
+ name = "Gladys Adams"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9994a41e-4a89-485c-b8fa-7d86bdf5bf91"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_hubplanner" "my_source_hubplanner" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_hubplanner" "my_source_hubplanner" {
Required:
-- `api_key` (String) Hubplanner API key. See https://github.com/hubplanner/API#authentication for more details.
-- `source_type` (String) must be one of ["hubplanner"]
+- `api_key` (String, Sensitive) Hubplanner API key. See https://github.com/hubplanner/API#authentication for more details.
diff --git a/docs/resources/source_hubspot.md b/docs/resources/source_hubspot.md
index a6f8e891a..2de005fba 100644
--- a/docs/resources/source_hubspot.md
+++ b/docs/resources/source_hubspot.md
@@ -16,19 +16,18 @@ SourceHubspot Resource
resource "airbyte_source_hubspot" "my_source_hubspot" {
configuration = {
credentials = {
- source_hubspot_authentication_o_auth = {
- client_id = "123456789000"
- client_secret = "secret"
- credentials_title = "OAuth Credentials"
- refresh_token = "refresh_token"
+ source_hubspot_o_auth = {
+ client_id = "123456789000"
+ client_secret = "secret"
+ refresh_token = "refresh_token"
}
}
- source_type = "hubspot"
- start_date = "2017-01-25T00:00:00Z"
+ start_date = "2017-01-25T00:00:00Z"
}
- name = "Mr. Tomas Wisozk DVM"
- secret_id = "...my_secret_id..."
- workspace_id = "9f443b42-57b9-492c-8dbd-a6a61efa2198"
+ definition_id = "b1210837-28d8-49e3-91e8-68df1f2c5ad8"
+ name = "Amelia Gulgowski II"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3eb240d6-26d4-4887-8caa-f58e0f5c1159"
}
```
@@ -38,11 +37,12 @@ resource "airbyte_source_hubspot" "my_source_hubspot" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -56,7 +56,6 @@ resource "airbyte_source_hubspot" "my_source_hubspot" {
Required:
- `credentials` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["hubspot"]
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
@@ -64,52 +63,24 @@ Required:
Optional:
-- `source_hubspot_authentication_o_auth` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_authentication_o_auth))
-- `source_hubspot_authentication_private_app` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_authentication_private_app))
-- `source_hubspot_update_authentication_o_auth` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_update_authentication_o_auth))
-- `source_hubspot_update_authentication_private_app` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--source_hubspot_update_authentication_private_app))
+- `o_auth` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--o_auth))
+- `private_app` (Attributes) Choose how to authenticate to HubSpot. (see [below for nested schema](#nestedatt--configuration--credentials--private_app))
-
-### Nested Schema for `configuration.credentials.source_hubspot_authentication_o_auth`
+
+### Nested Schema for `configuration.credentials.o_auth`
Required:
- `client_id` (String) The Client ID of your HubSpot developer application. See the Hubspot docs if you need help finding this ID.
- `client_secret` (String) The client secret for your HubSpot developer application. See the Hubspot docs if you need help finding this secret.
-- `credentials_title` (String) must be one of ["OAuth Credentials"]
-Name of the credentials
-- `refresh_token` (String) Refresh token to renew an expired access token. See the Hubspot docs if you need help finding this token.
+- `refresh_token` (String, Sensitive) Refresh token to renew an expired access token. See the Hubspot docs if you need help finding this token.
-
-### Nested Schema for `configuration.credentials.source_hubspot_authentication_private_app`
+
+### Nested Schema for `configuration.credentials.private_app`
Required:
-- `access_token` (String) HubSpot Access token. See the Hubspot docs if you need help finding this token.
-- `credentials_title` (String) must be one of ["Private App Credentials"]
-Name of the credentials set
-
-
-
-### Nested Schema for `configuration.credentials.source_hubspot_update_authentication_o_auth`
-
-Required:
-
-- `client_id` (String) The Client ID of your HubSpot developer application. See the Hubspot docs if you need help finding this ID.
-- `client_secret` (String) The client secret for your HubSpot developer application. See the Hubspot docs if you need help finding this secret.
-- `credentials_title` (String) must be one of ["OAuth Credentials"]
-Name of the credentials
-- `refresh_token` (String) Refresh token to renew an expired access token. See the Hubspot docs if you need help finding this token.
-
-
-
-### Nested Schema for `configuration.credentials.source_hubspot_update_authentication_private_app`
-
-Required:
-
-- `access_token` (String) HubSpot Access token. See the Hubspot docs if you need help finding this token.
-- `credentials_title` (String) must be one of ["Private App Credentials"]
-Name of the credentials set
+- `access_token` (String, Sensitive) HubSpot Access token. See the Hubspot docs if you need help finding this token.
diff --git a/docs/resources/source_insightly.md b/docs/resources/source_insightly.md
index 576daa4ed..dbc734766 100644
--- a/docs/resources/source_insightly.md
+++ b/docs/resources/source_insightly.md
@@ -15,13 +15,13 @@ SourceInsightly Resource
```terraform
resource "airbyte_source_insightly" "my_source_insightly" {
configuration = {
- source_type = "insightly"
- start_date = "2021-03-01T00:00:00Z"
- token = "...my_token..."
+ start_date = "2021-03-01T00:00:00Z"
+ token = "...my_token..."
}
- name = "Dana Lindgren"
- secret_id = "...my_secret_id..."
- workspace_id = "0a9eba47-f7d3-4ef0-8964-0d6a1831c87a"
+ definition_id = "d6014991-0eec-4fc7-b384-ec604057d045"
+ name = "Geneva Bogan"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b5cccbbb-db31-4196-8f99-d67745afb65f"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_insightly" "my_source_insightly" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_insightly" "my_source_insightly" {
Required:
-- `source_type` (String) must be one of ["insightly"]
- `start_date` (String) The date from which you'd like to replicate data for Insightly in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated. Note that it will be used only for incremental streams.
-- `token` (String) Your Insightly API token.
+- `token` (String, Sensitive) Your Insightly API token.
diff --git a/docs/resources/source_instagram.md b/docs/resources/source_instagram.md
index 926ad00ad..1d40aa72d 100644
--- a/docs/resources/source_instagram.md
+++ b/docs/resources/source_instagram.md
@@ -18,12 +18,12 @@ resource "airbyte_source_instagram" "my_source_instagram" {
access_token = "...my_access_token..."
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- source_type = "instagram"
start_date = "2017-01-25T00:00:00Z"
}
- name = "Mae Hoppe"
- secret_id = "...my_secret_id..."
- workspace_id = "f1ad837a-e80c-41c1-9c95-ba998678fa3f"
+ definition_id = "20bd7a7e-c191-4626-87e6-80e4417c6f4b"
+ name = "Margaret Maggio"
+ secret_id = "...my_secret_id..."
+ workspace_id = "206a4b04-3ef0-49e6-9b75-b726765eab1a"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_instagram" "my_source_instagram" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,13 +51,12 @@ resource "airbyte_source_instagram" "my_source_instagram" {
Required:
-- `access_token` (String) The value of the access token generated with instagram_basic, instagram_manage_insights, pages_show_list, pages_read_engagement, Instagram Public Content Access permissions. See the docs for more information
-- `source_type` (String) must be one of ["instagram"]
-- `start_date` (String) The date from which you'd like to replicate data for User Insights, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
+- `access_token` (String, Sensitive) The value of the access token generated with instagram_basic, instagram_manage_insights, pages_show_list, pages_read_engagement, Instagram Public Content Access permissions. See the docs for more information.
Optional:
- `client_id` (String) The Client ID for your Oauth application
- `client_secret` (String) The Client Secret for your Oauth application
+- `start_date` (String) The date from which you'd like to replicate data for User Insights, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated. If left blank, the start date will be set to 2 years before the present date.
diff --git a/docs/resources/source_instatus.md b/docs/resources/source_instatus.md
index 830c152ab..165c64999 100644
--- a/docs/resources/source_instatus.md
+++ b/docs/resources/source_instatus.md
@@ -15,12 +15,12 @@ SourceInstatus Resource
```terraform
resource "airbyte_source_instatus" "my_source_instatus" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "instatus"
+ api_key = "...my_api_key..."
}
- name = "Bobbie Johnston"
- secret_id = "...my_secret_id..."
- workspace_id = "1af388ce-0361-4444-8c79-77a0ef2f5360"
+ definition_id = "d842954b-d759-4bdc-8b93-f80b7f557094"
+ name = "Enrique Kovacek"
+ secret_id = "...my_secret_id..."
+ workspace_id = "356d5339-1630-4fd2-b131-d4fbef253f33"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_instatus" "my_source_instatus" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_instatus" "my_source_instatus" {
Required:
-- `api_key` (String) Instatus REST API key
-- `source_type` (String) must be one of ["instatus"]
+- `api_key` (String, Sensitive) Instatus REST API key
diff --git a/docs/resources/source_intercom.md b/docs/resources/source_intercom.md
index d2ad37413..a674190f6 100644
--- a/docs/resources/source_intercom.md
+++ b/docs/resources/source_intercom.md
@@ -18,12 +18,12 @@ resource "airbyte_source_intercom" "my_source_intercom" {
access_token = "...my_access_token..."
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- source_type = "intercom"
start_date = "2020-11-16T00:00:00Z"
}
- name = "Darnell Watsica"
- secret_id = "...my_secret_id..."
- workspace_id = "934152ed-7e25-43f4-8157-deaa7170f445"
+ definition_id = "135dc90f-6379-44a9-bd5a-cf56253a66e5"
+ name = "Clint Douglas V"
+ secret_id = "...my_secret_id..."
+ workspace_id = "29314c65-ed70-4eb1-bcb4-fc24002ca0d0"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_intercom" "my_source_intercom" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,8 +51,7 @@ resource "airbyte_source_intercom" "my_source_intercom" {
Required:
-- `access_token` (String) Access token for making authenticated requests. See the Intercom docs for more information.
-- `source_type` (String) must be one of ["intercom"]
+- `access_token` (String, Sensitive) Access token for making authenticated requests. See the Intercom docs for more information.
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
Optional:
diff --git a/docs/resources/source_ip2whois.md b/docs/resources/source_ip2whois.md
index 623f30ce1..76dd61de9 100644
--- a/docs/resources/source_ip2whois.md
+++ b/docs/resources/source_ip2whois.md
@@ -15,13 +15,13 @@ SourceIp2whois Resource
```terraform
resource "airbyte_source_ip2whois" "my_source_ip2whois" {
configuration = {
- api_key = "...my_api_key..."
- domain = "www.facebook.com"
- source_type = "ip2whois"
+ api_key = "...my_api_key..."
+ domain = "www.google.com"
}
- name = "Leland Wisoky"
- secret_id = "...my_secret_id..."
- workspace_id = "7aaf9bba-d185-4fe4-b1d6-bf5c838fbb8c"
+ definition_id = "711f25a2-8dde-404a-9ce3-be57bfa46127"
+ name = "Monica Champlin"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5ed08074-e17a-4648-8571-1ab94fe75a51"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_ip2whois" "my_source_ip2whois" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_ip2whois" "my_source_ip2whois" {
Optional:
-- `api_key` (String) Your API Key. See here.
+- `api_key` (String, Sensitive) Your API Key. See here.
- `domain` (String) Domain name. See here.
-- `source_type` (String) must be one of ["ip2whois"]
diff --git a/docs/resources/source_iterable.md b/docs/resources/source_iterable.md
index 4c6f49f35..f8b63d58a 100644
--- a/docs/resources/source_iterable.md
+++ b/docs/resources/source_iterable.md
@@ -15,13 +15,13 @@ SourceIterable Resource
```terraform
resource "airbyte_source_iterable" "my_source_iterable" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "iterable"
- start_date = "2021-04-01T00:00:00Z"
+ api_key = "...my_api_key..."
+ start_date = "2021-04-01T00:00:00Z"
}
- name = "Archie Jaskolski"
- secret_id = "...my_secret_id..."
- workspace_id = "c4b425e9-9e62-434c-9f7b-79dfeb77a5c3"
+ definition_id = "00977793-827c-406d-986b-4fbde6ae5395"
+ name = "Katherine Bashirian"
+ secret_id = "...my_secret_id..."
+ workspace_id = "d8df8fdd-acae-4826-9af8-b9bb4850d654"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_iterable" "my_source_iterable" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_iterable" "my_source_iterable" {
Required:
-- `api_key` (String) Iterable API Key. See the docs for more information on how to obtain this key.
-- `source_type` (String) must be one of ["iterable"]
+- `api_key` (String, Sensitive) Iterable API Key. See the docs for more information on how to obtain this key.
- `start_date` (String) The date from which you'd like to replicate data for Iterable, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
diff --git a/docs/resources/source_jira.md b/docs/resources/source_jira.md
index cccfae197..5e6b89d2f 100644
--- a/docs/resources/source_jira.md
+++ b/docs/resources/source_jira.md
@@ -16,20 +16,24 @@ SourceJira Resource
resource "airbyte_source_jira" "my_source_jira" {
configuration = {
api_token = "...my_api_token..."
- domain = ".jira.com"
- email = "Eldridge_Reichert@hotmail.com"
+      domain                      = "airbyteio.jira.com"
+ email = "Benton_Tromp@hotmail.com"
enable_experimental_streams = false
expand_issue_changelog = false
+ expand_issue_transition = true
+ issues_stream_expand_with = [
+ "transitions",
+ ]
projects = [
"...",
]
- render_fields = false
- source_type = "jira"
+ render_fields = true
start_date = "2021-03-01T00:00:00Z"
}
- name = "Olive Windler"
- secret_id = "...my_secret_id..."
- workspace_id = "0a54b475-f16f-456d-b85a-3c4ac631b99e"
+ definition_id = "7e778751-26eb-4569-8431-2d5d5e6a2d83"
+ name = "Kenneth Runte"
+ secret_id = "...my_secret_id..."
+ workspace_id = "8dd54122-5651-4393-a1b0-488926ab9cfe"
}
```
@@ -39,11 +43,12 @@ resource "airbyte_source_jira" "my_source_jira" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -56,17 +61,22 @@ resource "airbyte_source_jira" "my_source_jira" {
Required:
-- `api_token` (String) Jira API Token. See the docs for more information on how to generate this key. API Token is used for Authorization to your account by BasicAuth.
+- `api_token` (String, Sensitive) Jira API Token. See the docs for more information on how to generate this key. API Token is used for Authorization to your account by BasicAuth.
- `domain` (String) The Domain for your Jira account, e.g. airbyteio.atlassian.net, airbyteio.jira.com, jira.your-domain.com
- `email` (String) The user email for your Jira account which you used to generate the API token. This field is used for Authorization to your account by BasicAuth.
-- `source_type` (String) must be one of ["jira"]
Optional:
-- `enable_experimental_streams` (Boolean) Allow the use of experimental streams which rely on undocumented Jira API endpoints. See https://docs.airbyte.com/integrations/sources/jira#experimental-tables for more info.
-- `expand_issue_changelog` (Boolean) Expand the changelog when replicating issues.
+- `enable_experimental_streams` (Boolean) Default: false
+Allow the use of experimental streams which rely on undocumented Jira API endpoints. See https://docs.airbyte.com/integrations/sources/jira#experimental-tables for more info.
+- `expand_issue_changelog` (Boolean) Default: false
+(DEPRECATED) Expand the changelog when replicating issues.
+- `expand_issue_transition` (Boolean) Default: false
+(DEPRECATED) Expand the transitions when replicating issues.
+- `issues_stream_expand_with` (List of String) Select the fields to expand the `Issues` stream with when replicating:
- `projects` (List of String) List of Jira project keys to replicate data for, or leave it empty if you want to replicate data for all projects.
-- `render_fields` (Boolean) Render issue fields in HTML format in addition to Jira JSON-like format.
+- `render_fields` (Boolean) Default: false
+(DEPRECATED) Render issue fields in HTML format in addition to Jira JSON-like format.
- `start_date` (String) The date from which you want to replicate data from Jira, use the format YYYY-MM-DDT00:00:00Z. Note that this field only applies to certain streams, and only data generated on or after the start date will be replicated. Or leave it empty if you want to replicate all data. For more information, refer to the documentation.
diff --git a/docs/resources/source_k6_cloud.md b/docs/resources/source_k6_cloud.md
index 7259f593b..a04b668c9 100644
--- a/docs/resources/source_k6_cloud.md
+++ b/docs/resources/source_k6_cloud.md
@@ -15,12 +15,12 @@ SourceK6Cloud Resource
```terraform
resource "airbyte_source_k6_cloud" "my_source_k6cloud" {
configuration = {
- api_token = "...my_api_token..."
- source_type = "k6-cloud"
+ api_token = "...my_api_token..."
}
- name = "Ella Runolfsdottir"
- secret_id = "...my_secret_id..."
- workspace_id = "8f9fdb94-10f6-43bb-b817-837b01afdd78"
+ definition_id = "2e85afcc-9acc-46e7-a95c-9a7c9f197511"
+ name = "Franklin D'Amore"
+ secret_id = "...my_secret_id..."
+ workspace_id = "96585095-001a-4ad5-a5f9-cfb0d1e8d3ac"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_k6_cloud" "my_source_k6cloud" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_k6_cloud" "my_source_k6cloud" {
Required:
-- `api_token` (String) Your API Token. See here. The key is case sensitive.
-- `source_type` (String) must be one of ["k6-cloud"]
+- `api_token` (String, Sensitive) Your API Token. See here. The key is case sensitive.
diff --git a/docs/resources/source_klarna.md b/docs/resources/source_klarna.md
index 3ecc96a20..b54a7ade5 100644
--- a/docs/resources/source_klarna.md
+++ b/docs/resources/source_klarna.md
@@ -15,15 +15,15 @@ SourceKlarna Resource
```terraform
resource "airbyte_source_klarna" "my_source_klarna" {
configuration = {
- password = "...my_password..."
- playground = true
- region = "us"
- source_type = "klarna"
- username = "Chase50"
+ password = "...my_password..."
+ playground = true
+ region = "oc"
+ username = "Lessie_Beatty"
}
- name = "Caleb Rau"
- secret_id = "...my_secret_id..."
- workspace_id = "873f5033-f19d-4bf1-a5ce-4152eab9cd7e"
+ definition_id = "ed1087b9-882d-454c-a598-cc59eb952f06"
+ name = "Carmen Bins"
+ secret_id = "...my_secret_id..."
+ workspace_id = "7fd8f9d1-baac-46e0-9b1e-50c14468d231"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_klarna" "my_source_klarna" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,11 +51,14 @@ resource "airbyte_source_klarna" "my_source_klarna" {
Required:
-- `password` (String) A string which is associated with your Merchant ID and is used to authorize use of Klarna's APIs (https://developers.klarna.com/api/#authentication)
-- `playground` (Boolean) Propertie defining if connector is used against playground or production environment
+- `password` (String, Sensitive) A string which is associated with your Merchant ID and is used to authorize use of Klarna's APIs (https://developers.klarna.com/api/#authentication)
- `region` (String) must be one of ["eu", "us", "oc"]
Base URL region (for the playground eu region, see https://docs.klarna.com/klarna-payments/api/payments-api/#tag/API-URLs). Supported: 'eu', 'us', 'oc'
-- `source_type` (String) must be one of ["klarna"]
- `username` (String) Consists of your Merchant ID (eid) - a unique number that identifies your e-store, combined with a random string (https://developers.klarna.com/api/#authentication)
+Optional:
+
+- `playground` (Boolean) Default: false
+Property defining whether the connector is used against the playground or production environment
+
diff --git a/docs/resources/source_klaviyo.md b/docs/resources/source_klaviyo.md
index 34b618c3f..1b485cfb0 100644
--- a/docs/resources/source_klaviyo.md
+++ b/docs/resources/source_klaviyo.md
@@ -15,13 +15,13 @@ SourceKlaviyo Resource
```terraform
resource "airbyte_source_klaviyo" "my_source_klaviyo" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "klaviyo"
- start_date = "2017-01-25T00:00:00Z"
+ api_key = "...my_api_key..."
+ start_date = "2017-01-25T00:00:00Z"
}
- name = "Charlotte Muller"
- secret_id = "...my_secret_id..."
- workspace_id = "0e123b78-47ec-459e-9f67-f3c4cce4b6d7"
+ definition_id = "d98f81ed-eee1-4be4-a723-eeaf419bc59e"
+ name = "Joanne Murray"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9e9d149f-3b04-4e32-9c64-9b6bc8e2c7d0"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_klaviyo" "my_source_klaviyo" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,10 @@ resource "airbyte_source_klaviyo" "my_source_klaviyo" {
Required:
-- `api_key` (String) Klaviyo API Key. See our docs if you need help finding this key.
-- `source_type` (String) must be one of ["klaviyo"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
+- `api_key` (String, Sensitive) Klaviyo API Key. See our docs if you need help finding this key.
+
+Optional:
+
+- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated. This field is optional - if not provided, all data will be replicated.
diff --git a/docs/resources/source_kustomer_singer.md b/docs/resources/source_kustomer_singer.md
index 1fdcb97e2..1bd2dabea 100644
--- a/docs/resources/source_kustomer_singer.md
+++ b/docs/resources/source_kustomer_singer.md
@@ -15,13 +15,13 @@ SourceKustomerSinger Resource
```terraform
resource "airbyte_source_kustomer_singer" "my_source_kustomersinger" {
configuration = {
- api_token = "...my_api_token..."
- source_type = "kustomer-singer"
- start_date = "2019-01-01T00:00:00Z"
+ api_token = "...my_api_token..."
+ start_date = "2019-01-01T00:00:00Z"
}
- name = "Bobbie Jacobs"
- secret_id = "...my_secret_id..."
- workspace_id = "3c574750-1357-4e44-b51f-8b084c3197e1"
+ definition_id = "de0f8a2b-57ad-4de2-8e75-111fd0612ffd"
+ name = "Mr. Antonia Yost"
+ secret_id = "...my_secret_id..."
+ workspace_id = "78b38595-7e3c-4921-8c92-84a21155c549"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_kustomer_singer" "my_source_kustomersinger" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_kustomer_singer" "my_source_kustomersinger" {
Required:
-- `api_token` (String) Kustomer API Token. See the docs on how to obtain this
-- `source_type` (String) must be one of ["kustomer-singer"]
+- `api_token` (String, Sensitive) Kustomer API Token. See the docs on how to obtain this.
- `start_date` (String) The date from which you'd like to replicate the data
diff --git a/docs/resources/source_kyve.md b/docs/resources/source_kyve.md
index 2ce36824e..62187a2e6 100644
--- a/docs/resources/source_kyve.md
+++ b/docs/resources/source_kyve.md
@@ -15,16 +15,16 @@ SourceKyve Resource
```terraform
resource "airbyte_source_kyve" "my_source_kyve" {
configuration = {
- max_pages = 6
- page_size = 2
- pool_ids = "0,1"
- source_type = "kyve"
- start_ids = "0"
- url_base = "https://api.korellia.kyve.network/"
+ max_pages = 0
+ page_size = 0
+ pool_ids = "0"
+ start_ids = "0"
+ url_base = "https://api.beta.kyve.network/"
}
- name = "Gail Homenick"
- secret_id = "...my_secret_id..."
- workspace_id = "94874c2d-5cc4-4972-a33e-66bd8fe5d00b"
+ definition_id = "be9a984e-4b07-4bca-b13e-d5606ac59e7c"
+ name = "Wilbur Turcotte"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b09ffd37-53fe-446a-9403-ba1bd8103cfb"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_kyve" "my_source_kyve" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,13 +53,14 @@ resource "airbyte_source_kyve" "my_source_kyve" {
Required:
- `pool_ids` (String) The IDs of the KYVE storage pool you want to archive. (Comma separated)
-- `source_type` (String) must be one of ["kyve"]
- `start_ids` (String) The start-id defines from which bundle id the pipeline should start to extract the data (Comma separated)
Optional:
- `max_pages` (Number) The maximum number of pages to go through. Set to 'null' for all pages.
-- `page_size` (Number) The pagesize for pagination, smaller numbers are used in integration tests.
-- `url_base` (String) URL to the KYVE Chain API.
+- `page_size` (Number) Default: 100
+The page size for pagination; smaller numbers are used in integration tests.
+- `url_base` (String) Default: "https://api.korellia.kyve.network"
+URL to the KYVE Chain API.
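Given the defaults documented above, a KYVE source can omit `page_size` and `url_base` entirely. A minimal sketch showing only the required fields (the resource label, pool/bundle IDs, and `var.workspace_id` are placeholders):

```terraform
resource "airbyte_source_kyve" "minimal" {
  configuration = {
    pool_ids  = "0,1"
    start_ids = "0,0"
    # page_size = 100                                 # default
    # url_base  = "https://api.korellia.kyve.network" # default
  }
  name         = "kyve-archive"
  workspace_id = var.workspace_id
}
```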
diff --git a/docs/resources/source_launchdarkly.md b/docs/resources/source_launchdarkly.md
index aac121903..0fd734655 100644
--- a/docs/resources/source_launchdarkly.md
+++ b/docs/resources/source_launchdarkly.md
@@ -16,11 +16,11 @@ SourceLaunchdarkly Resource
resource "airbyte_source_launchdarkly" "my_source_launchdarkly" {
configuration = {
access_token = "...my_access_token..."
- source_type = "launchdarkly"
}
- name = "Darren Monahan"
- secret_id = "...my_secret_id..."
- workspace_id = "20387320-590c-4cc1-8964-00313b3e5044"
+ definition_id = "422849b5-8575-49fd-b9d7-4aa20ea69f1b"
+ name = "Jodi Marquardt"
+ secret_id = "...my_secret_id..."
+ workspace_id = "dd1b5a02-95b1-497b-bb02-27d625c3155f"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_launchdarkly" "my_source_launchdarkly" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_launchdarkly" "my_source_launchdarkly" {
Required:
-- `access_token` (String) Your Access token. See here.
-- `source_type` (String) must be one of ["launchdarkly"]
+- `access_token` (String, Sensitive) Your Access token. See here.
diff --git a/docs/resources/source_lemlist.md b/docs/resources/source_lemlist.md
index 0b231b6ab..0169f3f7a 100644
--- a/docs/resources/source_lemlist.md
+++ b/docs/resources/source_lemlist.md
@@ -15,12 +15,12 @@ SourceLemlist Resource
```terraform
resource "airbyte_source_lemlist" "my_source_lemlist" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "lemlist"
+ api_key = "...my_api_key..."
}
- name = "Gene Herman"
- secret_id = "...my_secret_id..."
- workspace_id = "72dc4077-d0cc-43f4-88ef-c15ceb4d6e1e"
+ definition_id = "731c6e6b-c1ca-4f16-aaee-78925477f387"
+ name = "Mr. Clyde Dibbert"
+ secret_id = "...my_secret_id..."
+ workspace_id = "ba4aed29-95c6-463b-ad13-c6e3bbb93bd4"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_lemlist" "my_source_lemlist" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_lemlist" "my_source_lemlist" {
Required:
-- `api_key` (String) Lemlist API key,
-- `source_type` (String) must be one of ["lemlist"]
+- `api_key` (String, Sensitive) Lemlist API key.
diff --git a/docs/resources/source_lever_hiring.md b/docs/resources/source_lever_hiring.md
index 3e3d97ffa..456b7f93e 100644
--- a/docs/resources/source_lever_hiring.md
+++ b/docs/resources/source_lever_hiring.md
@@ -16,18 +16,17 @@ SourceLeverHiring Resource
resource "airbyte_source_lever_hiring" "my_source_leverhiring" {
configuration = {
credentials = {
- source_lever_hiring_authentication_mechanism_authenticate_via_lever_api_key_ = {
- api_key = "...my_api_key..."
- auth_type = "Api Key"
+ authenticate_via_lever_api_key = {
+ api_key = "...my_api_key..."
}
}
- environment = "Sandbox"
- source_type = "lever-hiring"
+ environment = "Production"
start_date = "2021-03-01T00:00:00Z"
}
- name = "Donald Wuckert"
- secret_id = "...my_secret_id..."
- workspace_id = "aedf2aca-b58b-4991-8926-ddb589461e74"
+ definition_id = "3d75c669-3a6b-492e-b166-50e4c3120d77"
+ name = "Bill Howell"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c796fdac-1f48-4b8f-8670-1054c1db1ce4"
}
```
@@ -37,11 +36,12 @@ resource "airbyte_source_lever_hiring" "my_source_leverhiring" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,13 +54,12 @@ resource "airbyte_source_lever_hiring" "my_source_leverhiring" {
Required:
-- `source_type` (String) must be one of ["lever-hiring"]
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated. Note that it will be used only in the following incremental streams: comments, commits, and issues.
Optional:
- `credentials` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `environment` (String) must be one of ["Production", "Sandbox"]
+- `environment` (String) must be one of ["Production", "Sandbox"]; Default: "Sandbox"
The environment in which you'd like to replicate data for Lever. This is used to determine which Lever API endpoint to use.
@@ -68,59 +67,26 @@ The environment in which you'd like to replicate data for Lever. This is used to
Optional:
-- `source_lever_hiring_authentication_mechanism_authenticate_via_lever_api_key` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_authentication_mechanism_authenticate_via_lever_api_key))
-- `source_lever_hiring_authentication_mechanism_authenticate_via_lever_o_auth` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_authentication_mechanism_authenticate_via_lever_o_auth))
-- `source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_api_key` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_api_key))
-- `source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_o_auth` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_o_auth))
+- `authenticate_via_lever_api_key` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_lever_api_key))
+- `authenticate_via_lever_o_auth` (Attributes) Choose how to authenticate to Lever Hiring. (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_lever_o_auth))
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_authentication_mechanism_authenticate_via_lever_api_key`
+
+### Nested Schema for `configuration.credentials.authenticate_via_lever_api_key`
Required:
-- `api_key` (String) The Api Key of your Lever Hiring account.
-
-Optional:
-
-- `auth_type` (String) must be one of ["Api Key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_authentication_mechanism_authenticate_via_lever_o_auth`
-
-Required:
-
-- `refresh_token` (String) The token for obtaining new access token.
-
-Optional:
-
-- `auth_type` (String) must be one of ["Client"]
-- `client_id` (String) The Client ID of your Lever Hiring developer application.
-- `client_secret` (String) The Client Secret of your Lever Hiring developer application.
-
-
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_api_key`
-
-Required:
-
-- `api_key` (String) The Api Key of your Lever Hiring account.
-
-Optional:
-
-- `auth_type` (String) must be one of ["Api Key"]
+- `api_key` (String, Sensitive) The Api Key of your Lever Hiring account.
-
-### Nested Schema for `configuration.credentials.source_lever_hiring_update_authentication_mechanism_authenticate_via_lever_o_auth`
+
+### Nested Schema for `configuration.credentials.authenticate_via_lever_o_auth`
Required:
-- `refresh_token` (String) The token for obtaining new access token.
+- `refresh_token` (String, Sensitive) The token used to obtain a new access token.
Optional:
-- `auth_type` (String) must be one of ["Client"]
- `client_id` (String) The Client ID of your Lever Hiring developer application.
- `client_secret` (String) The Client Secret of your Lever Hiring developer application.
diff --git a/docs/resources/source_linkedin_ads.md b/docs/resources/source_linkedin_ads.md
index 958334248..2473ef4dc 100644
--- a/docs/resources/source_linkedin_ads.md
+++ b/docs/resources/source_linkedin_ads.md
@@ -16,27 +16,26 @@ SourceLinkedinAds Resource
resource "airbyte_source_linkedin_ads" "my_source_linkedinads" {
configuration = {
account_ids = [
- 1,
+ 6,
]
ad_analytics_reports = [
{
- name = "Kara Rohan"
- pivot_by = "MEMBER_REGION_V2"
+ name = "Dwayne Zboncak"
+ pivot_by = "IMPRESSION_DEVICE_TYPE"
time_granularity = "MONTHLY"
},
]
credentials = {
- source_linkedin_ads_authentication_access_token = {
+ access_token = {
access_token = "...my_access_token..."
- auth_method = "access_token"
}
}
- source_type = "linkedin-ads"
- start_date = "2021-05-17"
+ start_date = "2021-05-17"
}
- name = "Elsa Adams"
- secret_id = "...my_secret_id..."
- workspace_id = "930b69f7-ac2f-472f-8850-090491160820"
+ definition_id = "4672645c-fb24-449e-af87-64eb4b875ea1"
+ name = "Blake Howell"
+ secret_id = "...my_secret_id..."
+ workspace_id = "6c0fac14-03cf-4d91-9cc5-3ae1f1c37b35"
}
```
@@ -46,11 +45,12 @@ resource "airbyte_source_linkedin_ads" "my_source_linkedinads" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -63,7 +63,6 @@ resource "airbyte_source_linkedin_ads" "my_source_linkedinads" {
Required:
-- `source_type` (String) must be one of ["linkedin-ads"]
- `start_date` (String) UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated.
Optional:
@@ -89,60 +88,24 @@ Choose how to group the data in your report by time. The options are:
- 'ALL'
Optional:
-- `source_linkedin_ads_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_authentication_access_token))
-- `source_linkedin_ads_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_authentication_o_auth2_0))
-- `source_linkedin_ads_update_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_update_authentication_access_token))
-- `source_linkedin_ads_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_ads_update_authentication_o_auth2_0))
+- `access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--access_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_authentication_access_token`
+
+### Nested Schema for `configuration.credentials.access_token`
Required:
-- `access_token` (String) The access token generated for your developer application. Refer to our documentation for more information.
+- `access_token` (String, Sensitive) The access token generated for your developer application. Refer to our documentation for more information.
-Optional:
-
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_authentication_o_auth2_0`
-
-Required:
-
-- `client_id` (String) The client ID of your developer application. Refer to our documentation for more information.
-- `client_secret` (String) The client secret of your developer application. Refer to our documentation for more information.
-- `refresh_token` (String) The key to refresh the expired access token. Refer to our documentation for more information.
-
-Optional:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_update_authentication_access_token`
-
-Required:
-
-- `access_token` (String) The access token generated for your developer application. Refer to our documentation for more information.
-
-Optional:
-
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_ads_update_authentication_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
- `client_id` (String) The client ID of your developer application. Refer to our documentation for more information.
- `client_secret` (String) The client secret of your developer application. Refer to our documentation for more information.
-- `refresh_token` (String) The key to refresh the expired access token. Refer to our documentation for more information.
-
-Optional:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
+- `refresh_token` (String, Sensitive) The key to refresh the expired access token. Refer to our documentation for more information.
diff --git a/docs/resources/source_linkedin_pages.md b/docs/resources/source_linkedin_pages.md
index 14ab6fbf6..ae6c85e25 100644
--- a/docs/resources/source_linkedin_pages.md
+++ b/docs/resources/source_linkedin_pages.md
@@ -16,17 +16,16 @@ SourceLinkedinPages Resource
resource "airbyte_source_linkedin_pages" "my_source_linkedinpages" {
configuration = {
credentials = {
- source_linkedin_pages_authentication_access_token = {
+ source_linkedin_pages_access_token = {
access_token = "...my_access_token..."
- auth_method = "access_token"
}
}
- org_id = "123456789"
- source_type = "linkedin-pages"
+ org_id = "123456789"
}
- name = "Tracey Kutch"
- secret_id = "...my_secret_id..."
- workspace_id = "c66183bf-e965-49eb-80ec-16faf75b0b53"
+ definition_id = "0ebb3981-c89f-4963-b1e6-164cc8788ff7"
+ name = "Kayla Haley"
+ secret_id = "...my_secret_id..."
+ workspace_id = "33f7738d-63dc-47b7-b8b1-6c6167f1e8f0"
}
```
@@ -36,11 +35,12 @@ resource "airbyte_source_linkedin_pages" "my_source_linkedinpages" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,7 +54,6 @@ resource "airbyte_source_linkedin_pages" "my_source_linkedinpages" {
Required:
- `org_id` (String) Specify the Organization ID
-- `source_type` (String) must be one of ["linkedin-pages"]
Optional:
@@ -65,60 +64,24 @@ Optional:
Optional:
-- `source_linkedin_pages_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_authentication_access_token))
-- `source_linkedin_pages_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_authentication_o_auth2_0))
-- `source_linkedin_pages_update_authentication_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_update_authentication_access_token))
-- `source_linkedin_pages_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_linkedin_pages_update_authentication_o_auth2_0))
+- `access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--access_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_authentication_access_token`
+
+### Nested Schema for `configuration.credentials.access_token`
Required:
-- `access_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
+- `access_token` (String, Sensitive) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-Optional:
-
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_authentication_o_auth2_0`
-
-Required:
-
-- `client_id` (String) The client ID of the LinkedIn developer application.
-- `client_secret` (String) The client secret of the LinkedIn developer application.
-- `refresh_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-
-Optional:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_update_authentication_access_token`
-
-Required:
-
-- `access_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-
-Optional:
-
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_linkedin_pages_update_authentication_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
- `client_id` (String) The client ID of the LinkedIn developer application.
- `client_secret` (String) The client secret of the LinkedIn developer application.
-- `refresh_token` (String) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
-
-Optional:
-
-- `auth_method` (String) must be one of ["oAuth2.0"]
+- `refresh_token` (String, Sensitive) The token value generated using the LinkedIn Developers OAuth Token Tools. See the docs to obtain yours.
diff --git a/docs/resources/source_linnworks.md b/docs/resources/source_linnworks.md
index 80be8629f..b23b69a5d 100644
--- a/docs/resources/source_linnworks.md
+++ b/docs/resources/source_linnworks.md
@@ -17,13 +17,13 @@ resource "airbyte_source_linnworks" "my_source_linnworks" {
configuration = {
application_id = "...my_application_id..."
application_secret = "...my_application_secret..."
- source_type = "linnworks"
- start_date = "2022-05-04T07:21:12.859Z"
+ start_date = "2022-09-13T03:04:12.490Z"
token = "...my_token..."
}
- name = "Antonia Muller"
- secret_id = "...my_secret_id..."
- workspace_id = "cbaaf445-2c48-442c-9b2a-d32dafe81a88"
+ definition_id = "2f92210b-5c8f-4204-a6a7-75647eb6babc"
+ name = "Melba McDermott IV"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b2eafdc4-53fb-46a0-992c-447712b4a020"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_linnworks" "my_source_linnworks" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,8 +53,7 @@ Required:
- `application_id` (String) Linnworks Application ID
- `application_secret` (String) Linnworks Application Secret
-- `source_type` (String) must be one of ["linnworks"]
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `token` (String)
+- `token` (String, Sensitive)
diff --git a/docs/resources/source_lokalise.md b/docs/resources/source_lokalise.md
index c887b9bc6..607cbc39a 100644
--- a/docs/resources/source_lokalise.md
+++ b/docs/resources/source_lokalise.md
@@ -15,13 +15,13 @@ SourceLokalise Resource
```terraform
resource "airbyte_source_lokalise" "my_source_lokalise" {
configuration = {
- api_key = "...my_api_key..."
- project_id = "...my_project_id..."
- source_type = "lokalise"
+ api_key = "...my_api_key..."
+ project_id = "...my_project_id..."
}
- name = "Bernard Gottlieb"
- secret_id = "...my_secret_id..."
- workspace_id = "573fecd4-7353-4f63-8820-9379aa69cd5f"
+ definition_id = "8830aabe-ffb8-4d97-a510-59b440a5f2f6"
+ name = "Inez Gottlieb"
+ secret_id = "...my_secret_id..."
+ workspace_id = "66849f7b-beaa-4ef5-a404-3cb4c473e8c7"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_lokalise" "my_source_lokalise" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_lokalise" "my_source_lokalise" {
Required:
-- `api_key` (String) Lokalise API Key with read-access. Available at Profile settings > API tokens. See here.
+- `api_key` (String, Sensitive) Lokalise API Key with read-access. Available at Profile settings > API tokens. See here.
- `project_id` (String) Lokalise project ID. Available at Project Settings > General.
-- `source_type` (String) must be one of ["lokalise"]
diff --git a/docs/resources/source_mailchimp.md b/docs/resources/source_mailchimp.md
index b80c36148..f02344ca1 100644
--- a/docs/resources/source_mailchimp.md
+++ b/docs/resources/source_mailchimp.md
@@ -17,16 +17,15 @@ resource "airbyte_source_mailchimp" "my_source_mailchimp" {
configuration = {
campaign_id = "...my_campaign_id..."
credentials = {
- source_mailchimp_authentication_api_key = {
- apikey = "...my_apikey..."
- auth_type = "apikey"
+ api_key = {
+ apikey = "...my_apikey..."
}
}
- source_type = "mailchimp"
}
- name = "Benny Williamson"
- secret_id = "...my_secret_id..."
- workspace_id = "da18a782-2bf9-4589-8e68-61adb55f9e5d"
+ definition_id = "bd591517-4a55-43fd-a41d-af7626ef51c5"
+ name = "Lyle Haley"
+ secret_id = "...my_secret_id..."
+ workspace_id = "0c6c0cc9-3e76-4e9f-9ef5-41f06ca13b1e"
}
```
@@ -36,11 +35,12 @@ resource "airbyte_source_mailchimp" "my_source_mailchimp" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,10 +51,6 @@ resource "airbyte_source_mailchimp" "my_source_mailchimp" {
### Nested Schema for `configuration`
-Required:
-
-- `source_type` (String) must be one of ["mailchimp"]
-
Optional:
- `campaign_id` (String)
@@ -65,50 +61,23 @@ Optional:
Optional:
-- `source_mailchimp_authentication_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_authentication_api_key))
-- `source_mailchimp_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_authentication_o_auth2_0))
-- `source_mailchimp_update_authentication_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_update_authentication_api_key))
-- `source_mailchimp_update_authentication_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_mailchimp_update_authentication_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_mailchimp_authentication_api_key`
-
-Required:
-
-- `apikey` (String) Mailchimp API Key. See the docs for information on how to generate this key.
-- `auth_type` (String) must be one of ["apikey"]
-
-
-
-### Nested Schema for `configuration.credentials.source_mailchimp_authentication_o_auth2_0`
-
-Required:
-
-- `access_token` (String) An access token generated using the above client ID and secret.
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-Optional:
-
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-
+- `api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--api_key))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_mailchimp_update_authentication_api_key`
+
+### Nested Schema for `configuration.credentials.api_key`
Required:
-- `apikey` (String) Mailchimp API Key. See the docs for information on how to generate this key.
-- `auth_type` (String) must be one of ["apikey"]
+- `apikey` (String, Sensitive) Mailchimp API Key. See the docs for information on how to generate this key.
-
-### Nested Schema for `configuration.credentials.source_mailchimp_update_authentication_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) An access token generated using the above client ID and secret.
-- `auth_type` (String) must be one of ["oauth2.0"]
+- `access_token` (String, Sensitive) An access token generated using the above client ID and secret.
Optional:
diff --git a/docs/resources/source_mailgun.md b/docs/resources/source_mailgun.md
index c771986a8..f51dc6e47 100644
--- a/docs/resources/source_mailgun.md
+++ b/docs/resources/source_mailgun.md
@@ -17,12 +17,12 @@ resource "airbyte_source_mailgun" "my_source_mailgun" {
configuration = {
domain_region = "...my_domain_region..."
private_key = "...my_private_key..."
- source_type = "mailgun"
start_date = "2023-08-01T00:00:00Z"
}
- name = "Sheri Mayert"
- secret_id = "...my_secret_id..."
- workspace_id = "8f7502bf-dc34-4508-81f1-764456379f3f"
+ definition_id = "c1488faa-411d-49d9-a226-9c9d648f0bcc"
+ name = "Ervin Deckow"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5af6ed3c-47c1-4416-8113-c2d3cb5eaa64"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_mailgun" "my_source_mailgun" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,12 +50,12 @@ resource "airbyte_source_mailgun" "my_source_mailgun" {
Required:
-- `private_key` (String) Primary account API key to access your Mailgun data.
-- `source_type` (String) must be one of ["mailgun"]
+- `private_key` (String, Sensitive) Primary account API key to access your Mailgun data.
Optional:
-- `domain_region` (String) Domain region code. 'EU' or 'US' are possible values. The default is 'US'.
+- `domain_region` (String) Default: "US"
+Domain region code. 'EU' or 'US' are possible values. The default is 'US'.
- `start_date` (String) UTC date and time in the format 2020-10-01 00:00:00. Any data before this date will not be replicated. If omitted, defaults to 3 days ago.
diff --git a/docs/resources/source_mailjet_sms.md b/docs/resources/source_mailjet_sms.md
index 80e5ef086..ac354ca3b 100644
--- a/docs/resources/source_mailjet_sms.md
+++ b/docs/resources/source_mailjet_sms.md
@@ -15,14 +15,14 @@ SourceMailjetSms Resource
```terraform
resource "airbyte_source_mailjet_sms" "my_source_mailjetsms" {
configuration = {
- end_date = 1666281656
- source_type = "mailjet-sms"
- start_date = 1666261656
- token = "...my_token..."
+ end_date = 1666281656
+ start_date = 1666261656
+ token = "...my_token..."
}
- name = "Dr. Eloise Cronin"
- secret_id = "...my_secret_id..."
- workspace_id = "62657b36-fc6b-49f5-87ce-525c67641a83"
+ definition_id = "6a42dbbb-853e-4c4b-9e6a-18b0d79003de"
+ name = "Gilberto Pagac"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3bfaadd2-9a6d-4ff6-8b6b-f32faf825bea"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_mailjet_sms" "my_source_mailjetsms" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,8 +50,7 @@ resource "airbyte_source_mailjet_sms" "my_source_mailjetsms" {
Required:
-- `source_type` (String) must be one of ["mailjet-sms"]
-- `token` (String) Your access token. See here.
+- `token` (String, Sensitive) Your access token. See here.
Optional:
diff --git a/docs/resources/source_marketo.md b/docs/resources/source_marketo.md
index f59260c4d..33e9cd346 100644
--- a/docs/resources/source_marketo.md
+++ b/docs/resources/source_marketo.md
@@ -18,12 +18,12 @@ resource "airbyte_source_marketo" "my_source_marketo" {
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
domain_url = "https://000-AAA-000.mktorest.com"
- source_type = "marketo"
start_date = "2020-09-25T00:00:00Z"
}
- name = "Jerome Berge"
- secret_id = "...my_secret_id..."
- workspace_id = "b4c21ccb-423a-4bcd-891f-aabdd88e71f6"
+ definition_id = "c87aaffe-b9ea-4290-b7e9-f4166b42b69c"
+ name = "Doris Steuber"
+ secret_id = "...my_secret_id..."
+ workspace_id = "bbad3f0b-f8ca-4743-bfb1-506e5d6deb8b"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_marketo" "my_source_marketo" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,7 +54,6 @@ Required:
- `client_id` (String) The Client ID of your Marketo developer application. See the docs for info on how to obtain this.
- `client_secret` (String) The Client Secret of your Marketo developer application. See the docs for info on how to obtain this.
- `domain_url` (String) Your Marketo Base URL. See the docs for info on how to obtain this.
-- `source_type` (String) must be one of ["marketo"]
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
diff --git a/docs/resources/source_metabase.md b/docs/resources/source_metabase.md
index f6b805f97..544da9702 100644
--- a/docs/resources/source_metabase.md
+++ b/docs/resources/source_metabase.md
@@ -18,12 +18,12 @@ resource "airbyte_source_metabase" "my_source_metabase" {
instance_api_url = "https://localhost:3000/api/"
password = "...my_password..."
session_token = "...my_session_token..."
- source_type = "metabase"
- username = "Peyton.Green"
+ username = "Efren_Mante15"
}
- name = "Tammy Sporer"
- secret_id = "...my_secret_id..."
- workspace_id = "71e7fd07-4009-4ef8-929d-e1dd7097b5da"
+ definition_id = "f283fdf1-b362-4a3e-b9ca-cc879ba7ac01"
+ name = "Gail Kirlin"
+ secret_id = "...my_secret_id..."
+ workspace_id = "7c271c50-44a2-45a4-b7e4-eabe3a97768e"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_metabase" "my_source_metabase" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,12 +52,11 @@ resource "airbyte_source_metabase" "my_source_metabase" {
Required:
- `instance_api_url` (String) URL to your metabase instance API
-- `source_type` (String) must be one of ["metabase"]
Optional:
-- `password` (String)
-- `session_token` (String) To generate your session token, you need to run the following command: ``` curl -X POST \
+- `password` (String, Sensitive)
+- `session_token` (String, Sensitive) To generate your session token, you need to run the following command: ``` curl -X POST \
-H "Content-Type: application/json" \
-d '{"username": "person@metabase.com", "password": "fakepassword"}' \
http://localhost:3000/api/session
diff --git a/docs/resources/source_microsoft_teams.md b/docs/resources/source_microsoft_teams.md
index f7977bd25..9eb3bcf43 100644
--- a/docs/resources/source_microsoft_teams.md
+++ b/docs/resources/source_microsoft_teams.md
@@ -16,19 +16,18 @@ SourceMicrosoftTeams Resource
resource "airbyte_source_microsoft_teams" "my_source_microsoftteams" {
configuration = {
credentials = {
- source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft = {
- auth_type = "Token"
+ authenticate_via_microsoft = {
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
tenant_id = "...my_tenant_id..."
}
}
- period = "D7"
- source_type = "microsoft-teams"
+ period = "D7"
}
- name = "Brandy Ryan"
- secret_id = "...my_secret_id..."
- workspace_id = "fa6c78a2-16e1-49ba-beca-6191498140b6"
+ definition_id = "79345d14-4630-4331-8f29-cf10b0742b93"
+ name = "Jesus Marquardt Sr."
+ secret_id = "...my_secret_id..."
+ workspace_id = "1a320cca-d5ad-4c13-b0ef-57488395b5ae"
}
```
@@ -38,11 +37,12 @@ resource "airbyte_source_microsoft_teams" "my_source_microsoftteams" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -56,7 +56,6 @@ resource "airbyte_source_microsoft_teams" "my_source_microsoftteams" {
Required:
- `period` (String) Specifies the length of time over which the Team Device Report stream is aggregated. The supported values are: D7, D30, D90, and D180.
-- `source_type` (String) must be one of ["microsoft-teams"]
Optional:
@@ -67,13 +66,11 @@ Optional:
Optional:
-- `source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft))
-- `source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0))
-- `source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft))
-- `source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0))
+- `authenticate_via_microsoft` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_microsoft))
+- `authenticate_via_microsoft_o_auth20` (Attributes) Choose how to authenticate to Microsoft (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_microsoft_o_auth20))
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft`
+
+### Nested Schema for `configuration.credentials.authenticate_via_microsoft`
Required:
@@ -81,52 +78,15 @@ Required:
- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
- `tenant_id` (String) A globally unique identifier (GUID) that is different from your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID from the URL
-Optional:
-
-- `auth_type` (String) must be one of ["Token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0`
-
-Required:
-
-- `client_id` (String) The Client ID of your Microsoft Teams developer application.
-- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
-- `refresh_token` (String) A Refresh Token to renew the expired Access Token.
-- `tenant_id` (String) A globally unique identifier (GUID) that is different than your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID form the URL
-
-Optional:
-
-- `auth_type` (String) must be one of ["Client"]
-
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft`
+
+### Nested Schema for `configuration.credentials.authenticate_via_microsoft_o_auth20`
Required:
- `client_id` (String) The Client ID of your Microsoft Teams developer application.
- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
+- `refresh_token` (String, Sensitive) A Refresh Token to renew the expired Access Token.
- `tenant_id` (String) A globally unique identifier (GUID) that is different from your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID from the URL
-Optional:
-
-- `auth_type` (String) must be one of ["Token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_microsoft_teams_update_authentication_mechanism_authenticate_via_microsoft_o_auth_2_0`
-
-Required:
-
-- `client_id` (String) The Client ID of your Microsoft Teams developer application.
-- `client_secret` (String) The Client Secret of your Microsoft Teams developer application.
-- `refresh_token` (String) A Refresh Token to renew the expired Access Token.
-- `tenant_id` (String) A globally unique identifier (GUID) that is different than your organization name or domain. Follow these steps to obtain: open one of the Teams where you belong inside the Teams Application -> Click on the … next to the Team title -> Click on Get link to team -> Copy the link to the team and grab the tenant ID form the URL
-
-Optional:
-
-- `auth_type` (String) must be one of ["Client"]
-
diff --git a/docs/resources/source_mixpanel.md b/docs/resources/source_mixpanel.md
index 4b56bde43..f7d91625d 100644
--- a/docs/resources/source_mixpanel.md
+++ b/docs/resources/source_mixpanel.md
@@ -15,25 +15,23 @@ SourceMixpanel Resource
```terraform
resource "airbyte_source_mixpanel" "my_source_mixpanel" {
configuration = {
- attribution_window = 2
+ attribution_window = 0
credentials = {
- source_mixpanel_authentication_wildcard_project_secret = {
- api_secret = "...my_api_secret..."
- option_title = "Project Secret"
+ project_secret = {
+ api_secret = "...my_api_secret..."
}
}
- date_window_size = 10
+ date_window_size = 3
end_date = "2021-11-16"
- project_id = 7
project_timezone = "UTC"
region = "US"
select_properties_by_default = true
- source_type = "mixpanel"
start_date = "2021-11-16"
}
- name = "Donald Ernser"
- secret_id = "...my_secret_id..."
- workspace_id = "f37e4aa8-6855-4596-a732-aa5dcb6682cb"
+ definition_id = "a514955f-a2ea-425a-91d7-622e389cc420"
+ name = "Cecilia Gerlach"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b3299a61-1cc7-4be3-a8ba-7188dc05c92c"
}
```
@@ -43,11 +41,12 @@ resource "airbyte_source_mixpanel" "my_source_mixpanel" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,18 +57,23 @@ resource "airbyte_source_mixpanel" "my_source_mixpanel" {
### Nested Schema for `configuration`
-Optional:
+Required:
-- `attribution_window` (Number) A period of time for attributing results to ads and the lookback period after those actions occur during which ad results are counted. Default attribution window is 5 days.
- `credentials` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials))
-- `date_window_size` (Number) Defines window size in days, that used to slice through data. You can reduce it, if amount of data in each window is too big for your environment.
+
+Optional:
+
+- `attribution_window` (Number) Default: 5
+A period of time for attributing results to ads and the lookback period after those actions occur during which ad results are counted. Default attribution window is 5 days. (This value should be a non-negative integer)
+- `date_window_size` (Number) Default: 30
+Defines the window size in days that is used to slice through data. You can reduce it if the amount of data in each window is too big for your environment. (This value should be a positive integer)
- `end_date` (String) The date in the format YYYY-MM-DD. Any data after this date will not be replicated. Left empty to always sync to most recent date
-- `project_id` (Number) Your project ID number. See the docs for more information on how to obtain this.
-- `project_timezone` (String) Time zone in which integer date times are stored. The project timezone may be found in the project settings in the Mixpanel console.
-- `region` (String) must be one of ["US", "EU"]
+- `project_timezone` (String) Default: "US/Pacific"
+Time zone in which integer date times are stored. The project timezone may be found in the project settings in the Mixpanel console.
+- `region` (String) must be one of ["US", "EU"]; Default: "US"
The region of mixpanel domain instance either US or EU.
-- `select_properties_by_default` (Boolean) Setting this config parameter to TRUE ensures that new properties on events and engage records are captured. Otherwise new properties will be ignored.
-- `source_type` (String) must be one of ["mixpanel"]
+- `select_properties_by_default` (Boolean) Default: true
+Setting this config parameter to TRUE ensures that new properties on events and engage records are captured. Otherwise new properties will be ignored.
- `start_date` (String) The date in the format YYYY-MM-DD. Any data before this date will not be replicated. If this option is not set, the connector will replicate data from up to one year ago by default.
@@ -77,58 +81,24 @@ The region of mixpanel domain instance either US or EU.
Optional:
-- `source_mixpanel_authentication_wildcard_project_secret` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_authentication_wildcard_project_secret))
-- `source_mixpanel_authentication_wildcard_service_account` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_authentication_wildcard_service_account))
-- `source_mixpanel_update_authentication_wildcard_project_secret` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_update_authentication_wildcard_project_secret))
-- `source_mixpanel_update_authentication_wildcard_service_account` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--source_mixpanel_update_authentication_wildcard_service_account))
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_authentication_wildcard_project_secret`
-
-Required:
-
-- `api_secret` (String) Mixpanel project secret. See the docs for more information on how to obtain this.
-
-Optional:
-
-- `option_title` (String) must be one of ["Project Secret"]
-
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_authentication_wildcard_service_account`
-
-Required:
-
-- `secret` (String) Mixpanel Service Account Secret. See the docs for more information on how to obtain this.
-- `username` (String) Mixpanel Service Account Username. See the docs for more information on how to obtain this.
-
-Optional:
-
-- `option_title` (String) must be one of ["Service Account"]
+- `project_secret` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--project_secret))
+- `service_account` (Attributes) Choose how to authenticate to Mixpanel (see [below for nested schema](#nestedatt--configuration--credentials--service_account))
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_update_authentication_wildcard_project_secret`
+
+### Nested Schema for `configuration.credentials.project_secret`
Required:
- `api_secret` (String) Mixpanel project secret. See the docs for more information on how to obtain this.
-Optional:
-
-- `option_title` (String) must be one of ["Project Secret"]
-
-
-### Nested Schema for `configuration.credentials.source_mixpanel_update_authentication_wildcard_service_account`
+
+### Nested Schema for `configuration.credentials.service_account`
Required:
+- `project_id` (Number) Your project ID number. See the docs for more information on how to obtain this.
- `secret` (String) Mixpanel Service Account Secret. See the docs for more information on how to obtain this.
- `username` (String) Mixpanel Service Account Username. See the docs for more information on how to obtain this.
-Optional:
-
-- `option_title` (String) must be one of ["Service Account"]
-
diff --git a/docs/resources/source_monday.md b/docs/resources/source_monday.md
index ef5c56f9b..e4e97b8b7 100644
--- a/docs/resources/source_monday.md
+++ b/docs/resources/source_monday.md
@@ -16,16 +16,15 @@ SourceMonday Resource
resource "airbyte_source_monday" "my_source_monday" {
configuration = {
credentials = {
- source_monday_authorization_method_api_token = {
+ api_token = {
api_token = "...my_api_token..."
- auth_type = "api_token"
}
}
- source_type = "monday"
}
- name = "Shirley Wisoky"
- secret_id = "...my_secret_id..."
- workspace_id = "fd5fb6e9-1b9a-49f7-8846-e2c3309db053"
+ definition_id = "2050fdf2-ba7d-443d-a0d3-384e15ed5352"
+ name = "Stella Lubowitz"
+ secret_id = "...my_secret_id..."
+ workspace_id = "aeabadeb-93c7-4728-b9b6-069b6a28df31"
}
```
@@ -35,11 +34,12 @@ resource "airbyte_source_monday" "my_source_monday" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,10 +50,6 @@ resource "airbyte_source_monday" "my_source_monday" {
### Nested Schema for `configuration`
-Required:
-
-- `source_type` (String) must be one of ["monday"]
-
Optional:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
@@ -63,56 +59,29 @@ Optional:
Optional:
-- `source_monday_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_authorization_method_api_token))
-- `source_monday_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_authorization_method_o_auth2_0))
-- `source_monday_update_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_update_authorization_method_api_token))
-- `source_monday_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_monday_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_monday_authorization_method_api_token`
-
-Required:
-
-- `api_token` (String) API Token for making authenticated requests.
-- `auth_type` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_monday_authorization_method_o_auth2_0`
-
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-
-Optional:
-
-- `subdomain` (String) Slug/subdomain of the account, or the first part of the URL that comes before .monday.com
-
+- `api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--api_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_monday_update_authorization_method_api_token`
+
+### Nested Schema for `configuration.credentials.api_token`
Required:
-- `api_token` (String) API Token for making authenticated requests.
-- `auth_type` (String) must be one of ["api_token"]
+- `api_token` (String, Sensitive) API Token for making authenticated requests.
-
-### Nested Schema for `configuration.credentials.source_monday_update_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
- `client_id` (String) The Client ID of your OAuth application.
- `client_secret` (String) The Client Secret of your OAuth application.
Optional:
-- `subdomain` (String) Slug/subdomain of the account, or the first part of the URL that comes before .monday.com
+- `subdomain` (String) Default: ""
+Slug/subdomain of the account, or the first part of the URL that comes before .monday.com
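The Example Usage above only shows the `api_token` variant; a hedged sketch of the renamed `o_auth20` variant (not generated output, all values are placeholders) might look like:

```terraform
# Hedged sketch: OAuth 2.0 credentials for the monday source after
# the variant rename. All values below are placeholders.
resource "airbyte_source_monday" "my_source_monday_oauth" {
  configuration = {
    credentials = {
      o_auth20 = {
        access_token  = "...my_access_token..."
        client_id     = "...my_client_id..."
        client_secret = "...my_client_secret..."
        subdomain     = "my-workspace" # first part of <subdomain>.monday.com; Default: ""
      }
    }
  }
  name         = "monday-oauth-example"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```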
diff --git a/docs/resources/source_mongodb.md b/docs/resources/source_mongodb.md
deleted file mode 100644
index 9b8bca0ff..000000000
--- a/docs/resources/source_mongodb.md
+++ /dev/null
@@ -1,152 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_mongodb Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceMongodb Resource
----
-
-# airbyte_source_mongodb (Resource)
-
-SourceMongodb Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_source_mongodb" "my_source_mongodb" {
- configuration = {
- auth_source = "admin"
- database = "...my_database..."
- instance_type = {
- source_mongodb_mongo_db_instance_type_mongo_db_atlas = {
- cluster_url = "...my_cluster_url..."
- instance = "atlas"
- }
- }
- password = "...my_password..."
- source_type = "mongodb"
- user = "...my_user..."
- }
- name = "Doreen Mayer"
- secret_id = "...my_secret_id..."
- workspace_id = "5ca006f5-392c-411a-a5a8-bf92f97428ad"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `source_id` (String)
-- `source_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `database` (String) The database you want to replicate.
-- `source_type` (String) must be one of ["mongodb"]
-
-Optional:
-
-- `auth_source` (String) The authentication source where the user information is stored.
-- `instance_type` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type))
-- `password` (String) The password associated with this username.
-- `user` (String) The username which is used to access the database.
-
-
-### Nested Schema for `configuration.instance_type`
-
-Optional:
-
-- `source_mongodb_mongo_db_instance_type_mongo_db_atlas` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_mongo_db_instance_type_mongo_db_atlas))
-- `source_mongodb_mongo_db_instance_type_replica_set` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_mongo_db_instance_type_replica_set))
-- `source_mongodb_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_mongo_db_instance_type_standalone_mongo_db_instance))
-- `source_mongodb_update_mongo_db_instance_type_mongo_db_atlas` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_update_mongo_db_instance_type_mongo_db_atlas))
-- `source_mongodb_update_mongo_db_instance_type_replica_set` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_update_mongo_db_instance_type_replica_set))
-- `source_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance` (Attributes) The MongoDb instance to connect to. For MongoDB Atlas and Replica Set TLS connection is used by default. (see [below for nested schema](#nestedatt--configuration--instance_type--source_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance))
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_mongo_db_instance_type_mongo_db_atlas`
-
-Required:
-
-- `cluster_url` (String) The URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_mongo_db_instance_type_replica_set`
-
-Required:
-
-- `instance` (String) must be one of ["replica"]
-- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member separated by comma.
-
-Optional:
-
-- `replica_set` (String) A replica set in MongoDB is a group of mongod processes that maintain the same data set.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_mongo_db_instance_type_standalone_mongo_db_instance`
-
-Required:
-
-- `host` (String) The host name of the Mongo database.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The port of the Mongo database.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_update_mongo_db_instance_type_mongo_db_atlas`
-
-Required:
-
-- `cluster_url` (String) The URL of a cluster to connect to.
-- `instance` (String) must be one of ["atlas"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_update_mongo_db_instance_type_replica_set`
-
-Required:
-
-- `instance` (String) must be one of ["replica"]
-- `server_addresses` (String) The members of a replica set. Please specify `host`:`port` of each member separated by comma.
-
-Optional:
-
-- `replica_set` (String) A replica set in MongoDB is a group of mongod processes that maintain the same data set.
-
-
-
-### Nested Schema for `configuration.instance_type.source_mongodb_update_mongo_db_instance_type_standalone_mongo_db_instance`
-
-Required:
-
-- `host` (String) The host name of the Mongo database.
-- `instance` (String) must be one of ["standalone"]
-- `port` (Number) The port of the Mongo database.
-
-
diff --git a/docs/resources/source_mongodb_internal_poc.md b/docs/resources/source_mongodb_internal_poc.md
index c7044fa78..f7f47d95c 100644
--- a/docs/resources/source_mongodb_internal_poc.md
+++ b/docs/resources/source_mongodb_internal_poc.md
@@ -19,12 +19,12 @@ resource "airbyte_source_mongodb_internal_poc" "my_source_mongodbinternalpoc" {
connection_string = "mongodb://example1.host.com:27017,example2.host.com:27017,example3.host.com:27017"
password = "...my_password..."
replica_set = "...my_replica_set..."
- source_type = "mongodb-internal-poc"
user = "...my_user..."
}
- name = "Eduardo Weissnat"
- secret_id = "...my_secret_id..."
- workspace_id = "f8221125-359d-4983-87f7-a79cd72cd248"
+ definition_id = "6ea9203c-b787-46e7-9a53-1f3b4802a3b9"
+ name = "Hector Kuhic"
+ secret_id = "...my_secret_id..."
+ workspace_id = "76dbe116-c781-416c-b0bf-b32667c47d50"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_mongodb_internal_poc" "my_source_mongodbinternalpoc" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,15 +50,12 @@ resource "airbyte_source_mongodb_internal_poc" "my_source_mongodbinternalpoc" {
### Nested Schema for `configuration`
-Required:
-
-- `source_type` (String) must be one of ["mongodb-internal-poc"]
-
Optional:
-- `auth_source` (String) The authentication source where the user information is stored.
+- `auth_source` (String) Default: "admin"
+The authentication source where the user information is stored.
- `connection_string` (String) The connection string of the database that you want to replicate.
-- `password` (String) The password associated with this username.
+- `password` (String, Sensitive) The password associated with this username.
- `replica_set` (String) The name of the replica set to be replicated.
- `user` (String) The username which is used to access the database.
diff --git a/docs/resources/source_mongodb_v2.md b/docs/resources/source_mongodb_v2.md
new file mode 100644
index 000000000..920722deb
--- /dev/null
+++ b/docs/resources/source_mongodb_v2.md
@@ -0,0 +1,115 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_mongodb_v2 Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceMongodbV2 Resource
+---
+
+# airbyte_source_mongodb_v2 (Resource)
+
+SourceMongodbV2 Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_source_mongodb_v2" "my_source_mongodbv2" {
+ configuration = {
+ database_config = {
+ mongo_db_atlas_replica_set = {
+ additional_properties = "{ \"see\": \"documentation\" }"
+ auth_source = "admin"
+ connection_string = "mongodb+srv://cluster0.abcd1.mongodb.net/"
+ database = "...my_database..."
+ password = "...my_password..."
+ username = "Curtis38"
+ }
+ }
+ discover_sample_size = 1
+ initial_waiting_seconds = 0
+ queue_size = 5
+ }
+ definition_id = "c03f8392-0634-4c9d-b1c4-26709282f0b3"
+ name = "Nora Waelchi"
+ secret_id = "...my_secret_id..."
+ workspace_id = "729ff502-4b69-40b2-b36f-2f7a3b95d4ab"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the source e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
+- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
+
+### Read-Only
+
+- `source_id` (String)
+- `source_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `database_config` (Attributes) Configures the MongoDB cluster type. (see [below for nested schema](#nestedatt--configuration--database_config))
+
+Optional:
+
+- `discover_sample_size` (Number) Default: 10000
+The maximum number of documents to sample when attempting to discover the unique fields for a collection.
+- `initial_waiting_seconds` (Number) Default: 300
+The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds.
+- `queue_size` (Number) Default: 10000
+The size of the internal queue. This may affect the memory consumption and efficiency of the connector, so please be careful.
+
+
+### Nested Schema for `configuration.database_config`
+
+Optional:
+
+- `mongo_db_atlas_replica_set` (Attributes) MongoDB Atlas-hosted cluster configured as a replica set (see [below for nested schema](#nestedatt--configuration--database_config--mongo_db_atlas_replica_set))
+- `self_managed_replica_set` (Attributes) MongoDB self-hosted cluster configured as a replica set (see [below for nested schema](#nestedatt--configuration--database_config--self_managed_replica_set))
+
+
+### Nested Schema for `configuration.database_config.mongo_db_atlas_replica_set`
+
+Required:
+
+- `connection_string` (String) The connection string of the cluster that you want to replicate.
+- `database` (String) The name of the MongoDB database that contains the collection(s) to replicate.
+- `password` (String, Sensitive) The password associated with this username.
+- `username` (String) The username which is used to access the database.
+
+Optional:
+
+- `additional_properties` (String) Parsed as JSON.
+- `auth_source` (String) Default: "admin"
+The authentication source where the user information is stored. See https://www.mongodb.com/docs/manual/reference/connection-string/#mongodb-urioption-urioption.authSource for more details.
+
+
+
+### Nested Schema for `configuration.database_config.self_managed_replica_set`
+
+Required:
+
+- `connection_string` (String) The connection string of the cluster that you want to replicate. See https://www.mongodb.com/docs/manual/reference/connection-string/#find-your-self-hosted-deployment-s-connection-string for more information.
+- `database` (String) The name of the MongoDB database that contains the collection(s) to replicate.
+
+Optional:
+
+- `additional_properties` (String) Parsed as JSON.
+- `auth_source` (String) Default: "admin"
+The authentication source where the user information is stored.
+- `password` (String, Sensitive) The password associated with this username.
+- `username` (String) The username which is used to access the database.
+
+
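The Example Usage above only covers `mongo_db_atlas_replica_set`; a hedged sketch of the `self_managed_replica_set` variant (placeholder values, not generated output) might look like:

```terraform
# Hedged sketch: self-hosted replica set config for source_mongodb_v2.
# All values below are placeholders.
resource "airbyte_source_mongodb_v2" "my_self_managed_mongodb" {
  configuration = {
    database_config = {
      self_managed_replica_set = {
        connection_string = "mongodb://host1.example.com:27017,host2.example.com:27017/?replicaSet=rs0"
        database          = "...my_database..."
        # username, password, and auth_source are optional for this variant
        username    = "...my_username..."
        password    = "...my_password..."
        auth_source = "admin" # Default: "admin"
      }
    }
  }
  name         = "mongodb-v2-self-managed-example"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```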
diff --git a/docs/resources/source_mssql.md b/docs/resources/source_mssql.md
index c8c1d5f2b..18b570fc2 100644
--- a/docs/resources/source_mssql.md
+++ b/docs/resources/source_mssql.md
@@ -21,32 +21,27 @@ resource "airbyte_source_mssql" "my_source_mssql" {
password = "...my_password..."
port = 1433
replication_method = {
- source_mssql_update_method_read_changes_using_change_data_capture_cdc_ = {
+ read_changes_using_change_data_capture_cdc = {
data_to_sync = "New Changes Only"
- initial_waiting_seconds = 7
- method = "CDC"
- snapshot_isolation = "Snapshot"
+ initial_waiting_seconds = 2
+ snapshot_isolation = "Read Committed"
}
}
schemas = [
"...",
]
- source_type = "mssql"
ssl_method = {
- source_mssql_ssl_method_encrypted_trust_server_certificate_ = {
- ssl_method = "encrypted_trust_server_certificate"
- }
+ source_mssql_encrypted_trust_server_certificate = {}
}
tunnel_method = {
- source_mssql_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ source_mssql_no_tunnel = {}
}
- username = "Bobbie60"
+ username = "Salvatore_Weissnat66"
}
- name = "Clarence Murazik"
- secret_id = "...my_secret_id..."
- workspace_id = "1ef5725f-1169-4ac1-a41d-8a23c23e34f2"
+ definition_id = "b6ad0e44-a4dc-4970-8078-573a20ac990f"
+ name = "Wm Corkery"
+ secret_id = "...my_secret_id..."
+ workspace_id = "7a67a851-50ea-4861-a0cd-618d74280681"
}
```
@@ -56,11 +51,12 @@ resource "airbyte_source_mssql" "my_source_mssql" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -76,13 +72,12 @@ Required:
- `database` (String) The name of the database.
- `host` (String) The hostname of the database.
- `port` (Number) The port of the database.
-- `source_type` (String) must be one of ["mssql"]
- `username` (String) The username which is used to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with the username.
+- `password` (String, Sensitive) The password associated with the username.
- `replication_method` (Attributes) Configures how data is extracted from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
- `schemas` (List of String) The list of schemas to sync from. Defaults to user. Case sensitive.
- `ssl_method` (Attributes) The encryption method which is used when communicating with the database. (see [below for nested schema](#nestedatt--configuration--ssl_method))
@@ -93,57 +88,24 @@ Optional:
Optional:
-- `source_mssql_update_method_read_changes_using_change_data_capture_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the SQL Server's change data capture feature. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_method_read_changes_using_change_data_capture_cdc))
-- `source_mssql_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_method_scan_changes_with_user_defined_cursor))
-- `source_mssql_update_update_method_read_changes_using_change_data_capture_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the SQL Server's change data capture feature. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_update_method_read_changes_using_change_data_capture_cdc))
-- `source_mssql_update_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mssql_update_update_method_scan_changes_with_user_defined_cursor))
-
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_method_read_changes_using_change_data_capture_cdc`
-
-Required:
+- `read_changes_using_change_data_capture_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the SQL Server's change data capture feature. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--read_changes_using_change_data_capture_cdc))
+- `scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--scan_changes_with_user_defined_cursor))
-- `method` (String) must be one of ["CDC"]
+
+### Nested Schema for `configuration.replication_method.read_changes_using_change_data_capture_cdc`
Optional:
-- `data_to_sync` (String) must be one of ["Existing and New", "New Changes Only"]
+- `data_to_sync` (String) must be one of ["Existing and New", "New Changes Only"]; Default: "Existing and New"
What data should be synced under the CDC. "Existing and New" will read existing data as a snapshot, and sync new changes through CDC. "New Changes Only" will skip the initial snapshot, and only sync new changes through CDC.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `snapshot_isolation` (String) must be one of ["Snapshot", "Read Committed"]
+- `initial_waiting_seconds` (Number) Default: 300
+The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
+- `snapshot_isolation` (String) must be one of ["Snapshot", "Read Committed"]; Default: "Snapshot"
Existing data in the database are synced through an initial snapshot. This parameter controls the isolation level that will be used during the initial snapshotting. If you choose the "Snapshot" level, you must enable the snapshot isolation mode on the database.
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_method_scan_changes_with_user_defined_cursor`
-
-Required:
-
-- `method` (String) must be one of ["STANDARD"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_update_method_read_changes_using_change_data_capture_cdc`
-
-Required:
-
-- `method` (String) must be one of ["CDC"]
-
-Optional:
-
-- `data_to_sync` (String) must be one of ["Existing and New", "New Changes Only"]
-What data should be synced under the CDC. "Existing and New" will read existing data as a snapshot, and sync new changes through CDC. "New Changes Only" will skip the initial snapshot, and only sync new changes through CDC.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `snapshot_isolation` (String) must be one of ["Snapshot", "Read Committed"]
-Existing data in the database are synced through an initial snapshot. This parameter controls the isolation level that will be used during the initial snapshotting. If you choose the "Snapshot" level, you must enable the snapshot isolation mode on the database.
-
-
-
-### Nested Schema for `configuration.replication_method.source_mssql_update_update_method_scan_changes_with_user_defined_cursor`
-
-Required:
-
-- `method` (String) must be one of ["STANDARD"]
+
+### Nested Schema for `configuration.replication_method.scan_changes_with_user_defined_cursor`
@@ -152,45 +114,15 @@ Required:
Optional:
-- `source_mssql_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_ssl_method_encrypted_trust_server_certificate))
-- `source_mssql_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_ssl_method_encrypted_verify_certificate))
-- `source_mssql_update_ssl_method_encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_update_ssl_method_encrypted_trust_server_certificate))
-- `source_mssql_update_ssl_method_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--source_mssql_update_ssl_method_encrypted_verify_certificate))
-
-
-### Nested Schema for `configuration.ssl_method.source_mssql_ssl_method_encrypted_trust_server_certificate`
-
-Required:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
+- `encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see [below for nested schema](#nestedatt--configuration--ssl_method--encrypted_trust_server_certificate))
+- `encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--ssl_method--encrypted_verify_certificate))
+
+### Nested Schema for `configuration.ssl_method.encrypted_trust_server_certificate`
-
-### Nested Schema for `configuration.ssl_method.source_mssql_ssl_method_encrypted_verify_certificate`
-
-Required:
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
-
-Optional:
-
-- `host_name_in_certificate` (String) Specifies the host name of the server. The value of this property must match the subject property of the certificate.
-
-
-
-### Nested Schema for `configuration.ssl_method.source_mssql_update_ssl_method_encrypted_trust_server_certificate`
-
-Required:
-
-- `ssl_method` (String) must be one of ["encrypted_trust_server_certificate"]
-
-
-
-### Nested Schema for `configuration.ssl_method.source_mssql_update_ssl_method_encrypted_verify_certificate`
-
-Required:
-
-- `ssl_method` (String) must be one of ["encrypted_verify_certificate"]
+
+### Nested Schema for `configuration.ssl_method.encrypted_verify_certificate`
Optional:
@@ -203,80 +135,41 @@ Optional:
Optional:
-- `source_mssql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_ssh_tunnel_method_no_tunnel))
-- `source_mssql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_ssh_tunnel_method_password_authentication))
-- `source_mssql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_ssh_tunnel_method_ssh_key_authentication))
-- `source_mssql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_update_ssh_tunnel_method_no_tunnel))
-- `source_mssql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_update_ssh_tunnel_method_password_authentication))
-- `source_mssql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mssql_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_ssh_tunnel_method_no_tunnel`
-
-Required:
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
+Optional:
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_update_ssh_tunnel_method_no_tunnel`
-
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mssql_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
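As a reading aid for the renamed `tunnel_method` variants documented above, a minimal HCL sketch (all values are illustrative placeholders, not taken from the generated examples):

```terraform
# Hypothetical sketch: tunnel through a jump server with SSH key auth.
# Per the schema above, tunnel_port is optional and defaults to 22.
tunnel_method = {
  ssh_key_authentication = {
    ssh_key     = "...PEM-encoded RSA private key..."
    tunnel_host = "bastion.example.com"
    tunnel_user = "airbyte"
    # tunnel_port = 22  # optional; Default: 22
  }
}
```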
diff --git a/docs/resources/source_my_hours.md b/docs/resources/source_my_hours.md
index 0085bed7f..bb769889f 100644
--- a/docs/resources/source_my_hours.md
+++ b/docs/resources/source_my_hours.md
@@ -18,12 +18,12 @@ resource "airbyte_source_my_hours" "my_source_myhours" {
email = "john@doe.com"
logs_batch_size = 30
password = "...my_password..."
- source_type = "my-hours"
- start_date = "2016-01-01"
+    start_date      = "2021-01-01"
}
- name = "Elsa Kerluke"
- secret_id = "...my_secret_id..."
- workspace_id = "922151fe-1712-4099-853e-9f543d854439"
+ definition_id = "95261555-3a71-4349-8a3f-9799a12d6e33"
+ name = "Franklin Jerde"
+ secret_id = "...my_secret_id..."
+ workspace_id = "00d47724-56d0-4d26-9914-7bb3566ca647"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_my_hours" "my_source_myhours" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,12 +52,12 @@ resource "airbyte_source_my_hours" "my_source_myhours" {
Required:
- `email` (String) Your My Hours username
-- `password` (String) The password associated to the username
-- `source_type` (String) must be one of ["my-hours"]
+- `password` (String, Sensitive) The password associated with the username
- `start_date` (String) Start date for collecting time logs
Optional:
-- `logs_batch_size` (Number) Pagination size used for retrieving logs in days
+- `logs_batch_size` (Number) Default: 30
+Pagination size used for retrieving logs in days
diff --git a/docs/resources/source_mysql.md b/docs/resources/source_mysql.md
index b945dfda0..2c4cc4385 100644
--- a/docs/resources/source_mysql.md
+++ b/docs/resources/source_mysql.md
@@ -21,28 +21,23 @@ resource "airbyte_source_mysql" "my_source_mysql" {
password = "...my_password..."
port = 3306
replication_method = {
- source_mysql_update_method_read_changes_using_binary_log_cdc_ = {
- initial_waiting_seconds = 10
- method = "CDC"
+ read_changes_using_binary_log_cdc = {
+ initial_waiting_seconds = 7
server_time_zone = "...my_server_time_zone..."
}
}
- source_type = "mysql"
ssl_mode = {
- source_mysql_ssl_modes_preferred = {
- mode = "preferred"
- }
+ preferred = {}
}
tunnel_method = {
- source_mysql_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+      no_tunnel = {}
}
- username = "Carley25"
+ username = "Eino_White"
}
- name = "Ruth Goodwin"
- secret_id = "...my_secret_id..."
- workspace_id = "bc154188-c2f5-46e8-9da7-832eabd617c3"
+ definition_id = "aba25784-141a-421c-8938-ad6fcbb78bed"
+ name = "Mr. Ross Cole"
+ secret_id = "...my_secret_id..."
+ workspace_id = "704ae193-8752-47d5-a3ef-7246d0c0b796"
}
```
@@ -52,11 +47,12 @@ resource "airbyte_source_mysql" "my_source_mysql" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -71,15 +67,15 @@ Required:
- `database` (String) The database name.
- `host` (String) The host name of the database.
-- `port` (Number) The port to connect to.
- `replication_method` (Attributes) Configures how data is extracted from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
-- `source_type` (String) must be one of ["mysql"]
- `username` (String) The username which is used to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3). For more information read about JDBC URL parameters.
-- `password` (String) The password associated with the username.
+- `password` (String, Sensitive) The password associated with the username.
+- `port` (Number) Default: 3306
+The port to connect to.
- `ssl_mode` (Attributes) SSL connection modes. Read more in the docs. (see [below for nested schema](#nestedatt--configuration--ssl_mode))
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -88,51 +84,21 @@ Optional:
Optional:
-- `source_mysql_update_method_read_changes_using_binary_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the MySQL binary log. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_method_read_changes_using_binary_log_cdc))
-- `source_mysql_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_method_scan_changes_with_user_defined_cursor))
-- `source_mysql_update_update_method_read_changes_using_binary_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the MySQL binary log. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_update_method_read_changes_using_binary_log_cdc))
-- `source_mysql_update_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_mysql_update_update_method_scan_changes_with_user_defined_cursor))
-
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_method_read_changes_using_binary_log_cdc`
-
-Required:
-
-- `method` (String) must be one of ["CDC"]
-
-Optional:
-
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `server_time_zone` (String) Enter the configured MySQL server timezone. This should only be done if the configured timezone in your MySQL instance does not conform to IANNA standard.
-
-
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_method_scan_changes_with_user_defined_cursor`
-
-Required:
-
-- `method` (String) must be one of ["STANDARD"]
+- `read_changes_using_binary_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the MySQL binary log. This must be enabled on your database. (see [below for nested schema](#nestedatt--configuration--replication_method--read_changes_using_binary_log_cdc))
+- `scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--scan_changes_with_user_defined_cursor))
-
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_update_method_read_changes_using_binary_log_cdc`
-
-Required:
-
-- `method` (String) must be one of ["CDC"]
+
+### Nested Schema for `configuration.replication_method.read_changes_using_binary_log_cdc`
Optional:
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
+- `initial_waiting_seconds` (Number) Default: 300
+The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
- `server_time_zone` (String) Enter the configured MySQL server timezone. This should only be done if the configured timezone in your MySQL instance does not conform to the IANA standard.
-
-### Nested Schema for `configuration.replication_method.source_mysql_update_update_method_scan_changes_with_user_defined_cursor`
-
-Required:
-
-- `method` (String) must be one of ["STANDARD"]
+
+### Nested Schema for `configuration.replication_method.scan_changes_with_user_defined_cursor`
@@ -141,105 +107,45 @@ Required:
Optional:
-- `source_mysql_ssl_modes_preferred` (Attributes) Automatically attempt SSL connection. If the MySQL server does not support SSL, continue with a regular connection. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_preferred))
-- `source_mysql_ssl_modes_required` (Attributes) Always connect with SSL. If the MySQL server doesn’t support SSL, the connection will not be established. Certificate Authority (CA) and Hostname are not verified. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_required))
-- `source_mysql_ssl_modes_verify_ca` (Attributes) Always connect with SSL. Verifies CA, but allows connection even if Hostname does not match. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_verify_ca))
-- `source_mysql_ssl_modes_verify_identity` (Attributes) Always connect with SSL. Verify both CA and Hostname. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_ssl_modes_verify_identity))
-- `source_mysql_update_ssl_modes_preferred` (Attributes) Automatically attempt SSL connection. If the MySQL server does not support SSL, continue with a regular connection. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_preferred))
-- `source_mysql_update_ssl_modes_required` (Attributes) Always connect with SSL. If the MySQL server doesn’t support SSL, the connection will not be established. Certificate Authority (CA) and Hostname are not verified. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_required))
-- `source_mysql_update_ssl_modes_verify_ca` (Attributes) Always connect with SSL. Verifies CA, but allows connection even if Hostname does not match. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_verify_ca))
-- `source_mysql_update_ssl_modes_verify_identity` (Attributes) Always connect with SSL. Verify both CA and Hostname. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_mysql_update_ssl_modes_verify_identity))
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_preferred`
-
-Required:
-
-- `mode` (String) must be one of ["preferred"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_required`
-
-Required:
-
-- `mode` (String) must be one of ["required"]
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_verify_ca`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify_ca"]
-
-Optional:
-
-- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_ssl_modes_verify_identity`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify_identity"]
-
-Optional:
-
-- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_preferred`
-
-Required:
+- `preferred` (Attributes) Automatically attempt SSL connection. If the MySQL server does not support SSL, continue with a regular connection. (see [below for nested schema](#nestedatt--configuration--ssl_mode--preferred))
+- `required` (Attributes) Always connect with SSL. If the MySQL server doesn’t support SSL, the connection will not be established. Certificate Authority (CA) and Hostname are not verified. (see [below for nested schema](#nestedatt--configuration--ssl_mode--required))
+- `verify_ca` (Attributes) Always connect with SSL. Verifies CA, but allows connection even if Hostname does not match. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_ca))
+- `verify_identity` (Attributes) Always connect with SSL. Verify both CA and Hostname. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_identity))
-- `mode` (String) must be one of ["preferred"]
+
+### Nested Schema for `configuration.ssl_mode.preferred`
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_required`
-
-Required:
-
-- `mode` (String) must be one of ["required"]
+
+### Nested Schema for `configuration.ssl_mode.required`
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_verify_ca`
+
+### Nested Schema for `configuration.ssl_mode.verify_ca`
Required:
- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify_ca"]
Optional:
- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
+- `client_key` (String, Sensitive) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
+- `client_key_password` (String, Sensitive) Password for the keystore. This field is optional; if you do not provide it, a password will be generated automatically.
-
-### Nested Schema for `configuration.ssl_mode.source_mysql_update_ssl_modes_verify_identity`
+
+### Nested Schema for `configuration.ssl_mode.verify_identity`
Required:
- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify_identity"]
Optional:
- `client_certificate` (String) Client certificate (this is not a required field, but if you want to use it, you will need to add the Client key as well)
-- `client_key` (String) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
-- `client_key_password` (String) Password for keystorage. This field is optional. If you do not add it - the password will be generated automatically.
+- `client_key` (String, Sensitive) Client key (this is not a required field, but if you want to use it, you will need to add the Client certificate as well)
+- `client_key_password` (String, Sensitive) Password for the keystore. This field is optional; if you do not provide it, a password will be generated automatically.
@@ -248,80 +154,41 @@ Optional:
Optional:
-- `source_mysql_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_ssh_tunnel_method_no_tunnel))
-- `source_mysql_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_ssh_tunnel_method_password_authentication))
-- `source_mysql_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_ssh_tunnel_method_ssh_key_authentication))
-- `source_mysql_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_update_ssh_tunnel_method_no_tunnel))
-- `source_mysql_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_update_ssh_tunnel_method_password_authentication))
-- `source_mysql_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_mysql_update_ssh_tunnel_method_ssh_key_authentication))
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_ssh_tunnel_method_no_tunnel`
-
-Required:
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_update_ssh_tunnel_method_no_tunnel`
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_mysql_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
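The renamed `ssl_mode` variants for the MySQL source can likewise be sketched in HCL (placeholder values; per the schema above, the client certificate and key are optional but only useful together):

```terraform
# Hypothetical sketch: verify_ca mode requires the CA certificate;
# client_certificate/client_key enable mutual TLS if both are set.
ssl_mode = {
  verify_ca = {
    ca_certificate     = "...PEM CA certificate..."
    client_certificate = "...PEM client certificate..." # optional
    client_key         = "...PEM client key..."         # optional
  }
}
```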
diff --git a/docs/resources/source_netsuite.md b/docs/resources/source_netsuite.md
index 5d9f80f52..e10220080 100644
--- a/docs/resources/source_netsuite.md
+++ b/docs/resources/source_netsuite.md
@@ -21,15 +21,15 @@ resource "airbyte_source_netsuite" "my_source_netsuite" {
"...",
]
realm = "...my_realm..."
- source_type = "netsuite"
start_datetime = "2017-01-25T00:00:00Z"
token_key = "...my_token_key..."
token_secret = "...my_token_secret..."
- window_in_days = 7
+ window_in_days = 5
}
- name = "Miss Meredith Hand"
- secret_id = "...my_secret_id..."
- workspace_id = "4bf01bad-8706-4d46-882b-fbdc41ff5d4e"
+ definition_id = "b7242137-fe2e-49e2-ac4c-104f1dbe3b1f"
+ name = "Ramona Bahringer"
+ secret_id = "...my_secret_id..."
+ workspace_id = "77573847-65c7-4741-8014-d1f263651b77"
}
```
@@ -39,11 +39,12 @@ resource "airbyte_source_netsuite" "my_source_netsuite" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -56,17 +57,17 @@ resource "airbyte_source_netsuite" "my_source_netsuite" {
Required:
-- `consumer_key` (String) Consumer key associated with your integration
+- `consumer_key` (String, Sensitive) Consumer key associated with your integration
- `consumer_secret` (String) Consumer secret associated with your integration
- `realm` (String) Netsuite realm e.g. 2344535, as for `production` or 2344535_SB1, as for the `sandbox`
-- `source_type` (String) must be one of ["netsuite"]
- `start_datetime` (String) Starting point for your data replication, in format of "YYYY-MM-DDTHH:mm:ssZ"
-- `token_key` (String) Access token key
-- `token_secret` (String) Access token secret
+- `token_key` (String, Sensitive) Access token key
+- `token_secret` (String, Sensitive) Access token secret
Optional:
- `object_types` (List of String) The API names of the Netsuite objects you want to sync. Setting this speeds up the connection setup process by limiting the number of schemas that need to be retrieved from Netsuite.
-- `window_in_days` (Number) The amount of days used to query the data with date chunks. Set smaller value, if you have lots of data.
+- `window_in_days` (Number) Default: 30
+The number of days used to query the data in date chunks. Set a smaller value if you have a large amount of data.
diff --git a/docs/resources/source_notion.md b/docs/resources/source_notion.md
index 4bcf04c38..2408d49b4 100644
--- a/docs/resources/source_notion.md
+++ b/docs/resources/source_notion.md
@@ -16,17 +16,16 @@ SourceNotion Resource
resource "airbyte_source_notion" "my_source_notion" {
configuration = {
credentials = {
- source_notion_authenticate_using_access_token = {
- auth_type = "token"
- token = "...my_token..."
+      access_token = {
+ token = "...my_token..."
}
}
- source_type = "notion"
- start_date = "2020-11-16T00:00:00.000Z"
+ start_date = "2020-11-16T00:00:00.000Z"
}
- name = "Francisco Yost"
- secret_id = "...my_secret_id..."
- workspace_id = "cb35d176-38f1-4edb-b835-9ecc5cb860f8"
+ definition_id = "fe0e5e5f-386d-40ac-9af3-c6558d9b03d2"
+ name = "Jeannette Ward"
+ secret_id = "...my_secret_id..."
+ workspace_id = "dbadc477-cb62-4b59-b9f1-ee4249578a5b"
}
```
@@ -36,11 +35,12 @@ resource "airbyte_source_notion" "my_source_notion" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,60 +53,35 @@ resource "airbyte_source_notion" "my_source_notion" {
Required:
-- `source_type` (String) must be one of ["notion"]
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00.000Z. Any data before this date will not be replicated.
+- `credentials` (Attributes) Choose either OAuth (recommended for Airbyte Cloud) or Access Token. See our docs for more information. (see [below for nested schema](#nestedatt--configuration--credentials))
Optional:
-- `credentials` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials))
+- `start_date` (String) UTC date and time in the format YYYY-MM-DDTHH:MM:SS.000Z. During incremental sync, any data generated before this date will not be replicated. If left blank, the start date will be set to 2 years before the present date.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_notion_authenticate_using_access_token` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_authenticate_using_access_token))
-- `source_notion_authenticate_using_o_auth2_0` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_authenticate_using_o_auth2_0))
-- `source_notion_update_authenticate_using_access_token` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_update_authenticate_using_access_token))
-- `source_notion_update_authenticate_using_o_auth2_0` (Attributes) Pick an authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--source_notion_update_authenticate_using_o_auth2_0))
+- `access_token` (Attributes) Choose either OAuth (recommended for Airbyte Cloud) or Access Token. See our docs for more information. (see [below for nested schema](#nestedatt--configuration--credentials--access_token))
+- `o_auth20` (Attributes) Choose either OAuth (recommended for Airbyte Cloud) or Access Token. See our docs for more information. (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_notion_authenticate_using_access_token`
+
+### Nested Schema for `configuration.credentials.access_token`
Required:
-- `auth_type` (String) must be one of ["token"]
-- `token` (String) Notion API access token, see the docs for more information on how to obtain this token.
+- `token` (String, Sensitive) The Access Token for your private Notion integration. See the docs for more information on how to obtain this token.
-
-### Nested Schema for `configuration.credentials.source_notion_authenticate_using_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Access Token is a token you received by complete the OauthWebFlow of Notion.
-- `auth_type` (String) must be one of ["OAuth2.0"]
-- `client_id` (String) The ClientID of your Notion integration.
-- `client_secret` (String) The ClientSecret of your Notion integration.
-
-
-
-### Nested Schema for `configuration.credentials.source_notion_update_authenticate_using_access_token`
-
-Required:
-
-- `auth_type` (String) must be one of ["token"]
-- `token` (String) Notion API access token, see the docs for more information on how to obtain this token.
-
-
-
-### Nested Schema for `configuration.credentials.source_notion_update_authenticate_using_o_auth2_0`
-
-Required:
-
-- `access_token` (String) Access Token is a token you received by complete the OauthWebFlow of Notion.
-- `auth_type` (String) must be one of ["OAuth2.0"]
-- `client_id` (String) The ClientID of your Notion integration.
-- `client_secret` (String) The ClientSecret of your Notion integration.
+- `access_token` (String, Sensitive) The Access Token received by completing the OAuth flow for your Notion integration. See our docs for more information.
+- `client_id` (String) The Client ID of your Notion integration. See our docs for more information.
+- `client_secret` (String) The Client Secret of your Notion integration. See our docs for more information.
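The renamed Notion credential blocks above can be sketched in HCL as follows. This is a hypothetical sketch, not part of the generated docs: the resource name, token, and workspace values are placeholders, and it assumes `airbyte_source_notion` follows the same shape as the other source resources in this diff.

```terraform
# Sketch only: o_auth20 replaces the old source_notion_authenticate_using_o_auth2_0
# block name; auth_type is no longer set explicitly. All values are placeholders.
resource "airbyte_source_notion" "example" {
  configuration = {
    credentials = {
      o_auth20 = {
        access_token  = "...my_access_token..."
        client_id     = "...my_client_id..."
        client_secret = "...my_client_secret..."
      }
    }
  }
  name         = "notion-example"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```

To use a private-integration token instead, the `access_token` variant with its single required `token` attribute would replace the `o_auth20` block.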
diff --git a/docs/resources/source_nytimes.md b/docs/resources/source_nytimes.md
index ff83b2189..e8a01c83c 100644
--- a/docs/resources/source_nytimes.md
+++ b/docs/resources/source_nytimes.md
@@ -15,16 +15,16 @@ SourceNytimes Resource
```terraform
resource "airbyte_source_nytimes" "my_source_nytimes" {
configuration = {
- api_key = "...my_api_key..."
- end_date = "1851-01"
- period = "7"
- share_type = "facebook"
- source_type = "nytimes"
- start_date = "2022-08"
+ api_key = "...my_api_key..."
+ end_date = "1851-01"
+ period = "30"
+ share_type = "facebook"
+ start_date = "2022-08"
}
- name = "Mr. Emily Macejkovic"
- secret_id = "...my_secret_id..."
- workspace_id = "4fe44472-97cd-43b1-9d3b-bce247b7684e"
+ definition_id = "83b2c4dd-4d42-4907-b41e-e0bbab0457d9"
+ name = "Sue Durgan"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e6ecd841-e72a-4766-a686-faa512d8044b"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_nytimes" "my_source_nytimes" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,10 +52,9 @@ resource "airbyte_source_nytimes" "my_source_nytimes" {
Required:
-- `api_key` (String) API Key
+- `api_key` (String, Sensitive) API Key
- `period` (Number) must be one of ["1", "7", "30"]
Period of time (in days)
-- `source_type` (String) must be one of ["nytimes"]
- `start_date` (String) Start date to begin the article retrieval (format YYYY-MM)
Optional:
diff --git a/docs/resources/source_okta.md b/docs/resources/source_okta.md
index 818e96444..b61cd09fe 100644
--- a/docs/resources/source_okta.md
+++ b/docs/resources/source_okta.md
@@ -16,18 +16,17 @@ SourceOkta Resource
resource "airbyte_source_okta" "my_source_okta" {
configuration = {
credentials = {
- source_okta_authorization_method_api_token = {
+ source_okta_api_token = {
api_token = "...my_api_token..."
- auth_type = "api_token"
}
}
- domain = "...my_domain..."
- source_type = "okta"
- start_date = "2022-07-22T00:00:00Z"
+ domain = "...my_domain..."
+ start_date = "2022-07-22T00:00:00Z"
}
- name = "Mr. Emmett Heidenreich"
- secret_id = "...my_secret_id..."
- workspace_id = "6d71cffb-d0eb-474b-8421-953b44bd3c43"
+ definition_id = "05c5b711-2361-4f26-947b-86cdec1a2bc2"
+ name = "Isaac Bruen"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5e3ceb6c-910d-4c95-a96c-b5f3bc4b3253"
}
```
@@ -37,11 +36,12 @@ resource "airbyte_source_okta" "my_source_okta" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,10 +52,6 @@ resource "airbyte_source_okta" "my_source_okta" {
### Nested Schema for `configuration`
-Required:
-
-- `source_type` (String) must be one of ["okta"]
-
Optional:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
@@ -67,48 +63,24 @@ Optional:
Optional:
-- `source_okta_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_authorization_method_api_token))
-- `source_okta_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_authorization_method_o_auth2_0))
-- `source_okta_update_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_update_authorization_method_api_token))
-- `source_okta_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_okta_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_okta_authorization_method_api_token`
-
-Required:
-
-- `api_token` (String) An Okta token. See the docs for instructions on how to generate it.
-- `auth_type` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_okta_authorization_method_o_auth2_0`
-
-Required:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
+- `api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--api_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_okta_update_authorization_method_api_token`
+
+### Nested Schema for `configuration.credentials.api_token`
Required:
-- `api_token` (String) An Okta token. See the docs for instructions on how to generate it.
-- `auth_type` (String) must be one of ["api_token"]
+- `api_token` (String, Sensitive) An Okta token. See the docs for instructions on how to generate it.
-
-### Nested Schema for `configuration.credentials.source_okta_update_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `auth_type` (String) must be one of ["oauth2.0"]
- `client_id` (String) The Client ID of your OAuth application.
- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
+- `refresh_token` (String, Sensitive) Refresh Token used to obtain a new Access Token when it expires.
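For the OAuth variant of the Okta credentials above (the example in this file only shows the API-token variant), a configuration might look like the following. This is a hedged sketch with placeholder values, assuming the flattened `o_auth20` block name from the nested schema above.

```terraform
# Sketch only: all identifiers and secrets are placeholders.
# auth_type is no longer set explicitly on the credential block.
resource "airbyte_source_okta" "oauth_example" {
  configuration = {
    credentials = {
      o_auth20 = {
        client_id     = "...my_client_id..."
        client_secret = "...my_client_secret..."
        refresh_token = "...my_refresh_token..."
      }
    }
    domain     = "...my_domain..."
    start_date = "2022-07-22T00:00:00Z"
  }
  name         = "okta-example"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```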
diff --git a/docs/resources/source_omnisend.md b/docs/resources/source_omnisend.md
index f0b9461ad..20f97491c 100644
--- a/docs/resources/source_omnisend.md
+++ b/docs/resources/source_omnisend.md
@@ -15,12 +15,12 @@ SourceOmnisend Resource
```terraform
resource "airbyte_source_omnisend" "my_source_omnisend" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "omnisend"
+ api_key = "...my_api_key..."
}
- name = "Lynn Miller"
- secret_id = "...my_secret_id..."
- workspace_id = "3e5953c0-0113-4986-baa4-1e6c31cc2f1f"
+ definition_id = "e6bd591e-2544-44d2-a34f-d1d8ea1c7d43"
+ name = "Rachel Ankunding"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c9c1a8da-b7e7-43a5-9718-14e4dc1f633a"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_omnisend" "my_source_omnisend" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_omnisend" "my_source_omnisend" {
Required:
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["omnisend"]
+- `api_key` (String, Sensitive) API Key
diff --git a/docs/resources/source_onesignal.md b/docs/resources/source_onesignal.md
index e0b682980..6bafb676e 100644
--- a/docs/resources/source_onesignal.md
+++ b/docs/resources/source_onesignal.md
@@ -23,13 +23,13 @@ resource "airbyte_source_onesignal" "my_source_onesignal" {
},
]
outcome_names = "os__session_duration.count,os__click.count,CustomOutcomeName.sum"
- source_type = "onesignal"
start_date = "2020-11-16T00:00:00Z"
user_auth_key = "...my_user_auth_key..."
}
- name = "Joan Schaefer"
- secret_id = "...my_secret_id..."
- workspace_id = "41ffbe9c-bd79-45ee-a5e0-76cc7abf616e"
+ definition_id = "58a542d5-17fc-488b-8499-8d75efedea33"
+ name = "Krystal Hamill"
+ secret_id = "...my_secret_id..."
+ workspace_id = "15598db9-2c72-4d54-9f53-8928a50561c1"
}
```
@@ -39,11 +39,12 @@ resource "airbyte_source_onesignal" "my_source_onesignal" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,16 +59,15 @@ Required:
- `applications` (Attributes List) Applications keys, see the docs for more information on how to obtain this data (see [below for nested schema](#nestedatt--configuration--applications))
- `outcome_names` (String) Comma-separated list of names and the value (sum/count) for the returned outcome data. See the docs for more details
-- `source_type` (String) must be one of ["onesignal"]
- `start_date` (String) The date from which you'd like to replicate data for OneSignal API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
-- `user_auth_key` (String) OneSignal User Auth Key, see the docs for more information on how to obtain this key.
+- `user_auth_key` (String, Sensitive) OneSignal User Auth Key, see the docs for more information on how to obtain this key.
### Nested Schema for `configuration.applications`
Required:
-- `app_api_key` (String)
+- `app_api_key` (String, Sensitive)
- `app_id` (String)
Optional:
diff --git a/docs/resources/source_oracle.md b/docs/resources/source_oracle.md
index 093d12f88..db3b9e1b5 100644
--- a/docs/resources/source_oracle.md
+++ b/docs/resources/source_oracle.md
@@ -16,35 +16,31 @@ SourceOracle Resource
resource "airbyte_source_oracle" "my_source_oracle" {
configuration = {
connection_data = {
- source_oracle_connect_by_service_name = {
- connection_type = "service_name"
- service_name = "...my_service_name..."
+ service_name = {
+ service_name = "...my_service_name..."
}
}
encryption = {
- source_oracle_encryption_native_network_encryption_nne_ = {
- encryption_algorithm = "RC4_56"
- encryption_method = "client_nne"
+ native_network_encryption_nne = {
+ encryption_algorithm = "3DES168"
}
}
host = "...my_host..."
jdbc_url_params = "...my_jdbc_url_params..."
password = "...my_password..."
- port = 4
+ port = 8
schemas = [
"...",
]
- source_type = "oracle"
tunnel_method = {
- source_oracle_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ source_oracle_no_tunnel = {}
}
- username = "Oswaldo42"
+ username = "Hellen.Champlin"
}
- name = "Cheryl McKenzie"
- secret_id = "...my_secret_id..."
- workspace_id = "b90f2e09-d19d-42fc-af9e-2e105944b935"
+ definition_id = "a1ad7b3d-761e-429e-b26a-e07d2b59ab56"
+ name = "Jake Pfeffer"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c000ccde-ed12-4bd5-ab73-d022a608737f"
}
```
@@ -54,11 +50,12 @@ resource "airbyte_source_oracle" "my_source_oracle" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -73,18 +70,18 @@ Required:
- `encryption` (Attributes) The encryption method with is used when communicating with the database. (see [below for nested schema](#nestedatt--configuration--encryption))
- `host` (String) Hostname of the database.
-- `port` (Number) Port of the database.
-Oracle Corporations recommends the following port numbers:
-1521 - Default listening port for client connections to the listener.
-2484 - Recommended and officially registered listening port for client connections to the listener using TCP/IP with SSL
-- `source_type` (String) must be one of ["oracle"]
- `username` (String) The username which is used to access the database.
Optional:
- `connection_data` (Attributes) Connect data that will be used for DB connection (see [below for nested schema](#nestedatt--configuration--connection_data))
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
-- `password` (String) The password associated with the username.
+- `password` (String, Sensitive) The password associated with the username.
+- `port` (Number) Default: 1521
+Port of the database.
+Oracle Corporation recommends the following port numbers:
+1521 - Default listening port for client connections to the listener.
+2484 - Recommended and officially registered listening port for client connections to the listener using TCP/IP with SSL
- `schemas` (List of String) The list of schemas to sync from. Defaults to user. Case sensitive.
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method))
@@ -93,52 +90,23 @@ Optional:
Optional:
-- `source_oracle_encryption_native_network_encryption_nne` (Attributes) The native network encryption gives you the ability to encrypt database connections, without the configuration overhead of TCP/IP and SSL/TLS and without the need to open and listen on different ports. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_encryption_native_network_encryption_nne))
-- `source_oracle_encryption_tls_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_encryption_tls_encrypted_verify_certificate))
-- `source_oracle_update_encryption_native_network_encryption_nne` (Attributes) The native network encryption gives you the ability to encrypt database connections, without the configuration overhead of TCP/IP and SSL/TLS and without the need to open and listen on different ports. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_update_encryption_native_network_encryption_nne))
-- `source_oracle_update_encryption_tls_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--encryption--source_oracle_update_encryption_tls_encrypted_verify_certificate))
-
-
-### Nested Schema for `configuration.encryption.source_oracle_encryption_native_network_encryption_nne`
-
-Required:
-
-- `encryption_method` (String) must be one of ["client_nne"]
-
-Optional:
-
-- `encryption_algorithm` (String) must be one of ["AES256", "RC4_56", "3DES168"]
-This parameter defines what encryption algorithm is used.
-
-
-
-### Nested Schema for `configuration.encryption.source_oracle_encryption_tls_encrypted_verify_certificate`
-
-Required:
-
-- `encryption_method` (String) must be one of ["encrypted_verify_certificate"]
-- `ssl_certificate` (String) Privacy Enhanced Mail (PEM) files are concatenated certificate containers frequently used in certificate installations.
+- `native_network_encryption_nne` (Attributes) The native network encryption gives you the ability to encrypt database connections, without the configuration overhead of TCP/IP and SSL/TLS and without the need to open and listen on different ports. (see [below for nested schema](#nestedatt--configuration--encryption--native_network_encryption_nne))
+- `tls_encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see [below for nested schema](#nestedatt--configuration--encryption--tls_encrypted_verify_certificate))
-
-
-### Nested Schema for `configuration.encryption.source_oracle_update_encryption_native_network_encryption_nne`
-
-Required:
-
-- `encryption_method` (String) must be one of ["client_nne"]
+
+### Nested Schema for `configuration.encryption.native_network_encryption_nne`
Optional:
-- `encryption_algorithm` (String) must be one of ["AES256", "RC4_56", "3DES168"]
+- `encryption_algorithm` (String) must be one of ["AES256", "RC4_56", "3DES168"]; Default: "AES256"
This parameter defines what encryption algorithm is used.
-
-### Nested Schema for `configuration.encryption.source_oracle_update_encryption_tls_encrypted_verify_certificate`
+
+### Nested Schema for `configuration.encryption.tls_encrypted_verify_certificate`
Required:
-- `encryption_method` (String) must be one of ["encrypted_verify_certificate"]
- `ssl_certificate` (String) Privacy Enhanced Mail (PEM) files are concatenated certificate containers frequently used in certificate installations.
@@ -148,58 +116,24 @@ Required:
Optional:
-- `source_oracle_connect_by_service_name` (Attributes) Use service name (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_connect_by_service_name))
-- `source_oracle_connect_by_system_id_sid` (Attributes) Use SID (Oracle System Identifier) (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_connect_by_system_id_sid))
-- `source_oracle_update_connect_by_service_name` (Attributes) Use service name (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_update_connect_by_service_name))
-- `source_oracle_update_connect_by_system_id_sid` (Attributes) Use SID (Oracle System Identifier) (see [below for nested schema](#nestedatt--configuration--connection_data--source_oracle_update_connect_by_system_id_sid))
+- `service_name` (Attributes) Use service name (see [below for nested schema](#nestedatt--configuration--connection_data--service_name))
+- `system_idsid` (Attributes) Use SID (Oracle System Identifier) (see [below for nested schema](#nestedatt--configuration--connection_data--system_idsid))
-
-### Nested Schema for `configuration.connection_data.source_oracle_connect_by_service_name`
+
+### Nested Schema for `configuration.connection_data.service_name`
Required:
- `service_name` (String)
-Optional:
-
-- `connection_type` (String) must be one of ["service_name"]
-
-
-### Nested Schema for `configuration.connection_data.source_oracle_connect_by_system_id_sid`
+
+### Nested Schema for `configuration.connection_data.system_idsid`
Required:
- `sid` (String)
-Optional:
-
-- `connection_type` (String) must be one of ["sid"]
-
-
-
-### Nested Schema for `configuration.connection_data.source_oracle_update_connect_by_service_name`
-
-Required:
-
-- `service_name` (String)
-
-Optional:
-
-- `connection_type` (String) must be one of ["service_name"]
-
-
-
-### Nested Schema for `configuration.connection_data.source_oracle_update_connect_by_system_id_sid`
-
-Required:
-
-- `sid` (String)
-
-Optional:
-
-- `connection_type` (String) must be one of ["sid"]
-
@@ -207,80 +141,41 @@ Optional:
Optional:
-- `source_oracle_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_ssh_tunnel_method_no_tunnel))
-- `source_oracle_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_ssh_tunnel_method_password_authentication))
-- `source_oracle_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_ssh_tunnel_method_ssh_key_authentication))
-- `source_oracle_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_update_ssh_tunnel_method_no_tunnel))
-- `source_oracle_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_update_ssh_tunnel_method_password_authentication))
-- `source_oracle_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_oracle_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_ssh_tunnel_method_no_tunnel`
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-Required:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-Required:
-
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_oracle_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
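An SSH-tunneled Oracle source using the flattened `password_authentication` block might be sketched as below. This is an illustrative sketch only: it uses the nested-schema block names documented above (note the resource example in this file uses the `source_oracle_no_tunnel` key, so the exact key naming should be confirmed against the provider), and all host, user, and secret values are placeholders.

```terraform
# Sketch only: placeholder values throughout. tunnel_port and port are
# omitted and fall back to their documented defaults (22 and 1521).
resource "airbyte_source_oracle" "tunneled" {
  configuration = {
    host     = "...my_host..."
    username = "...my_username..."
    password = "...my_password..."
    encryption = {
      native_network_encryption_nne = {}
    }
    tunnel_method = {
      password_authentication = {
        tunnel_host          = "...my_tunnel_host..."
        tunnel_user          = "...my_tunnel_user..."
        tunnel_user_password = "...my_tunnel_user_password..."
      }
    }
  }
  name         = "oracle-example"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```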
diff --git a/docs/resources/source_orb.md b/docs/resources/source_orb.md
index d6144b287..a9c9dce8b 100644
--- a/docs/resources/source_orb.md
+++ b/docs/resources/source_orb.md
@@ -16,21 +16,21 @@ SourceOrb Resource
resource "airbyte_source_orb" "my_source_orb" {
configuration = {
api_key = "...my_api_key..."
- lookback_window_days = 9
+ lookback_window_days = 6
numeric_event_properties_keys = [
"...",
]
- plan_id = "...my_plan_id..."
- source_type = "orb"
- start_date = "2022-03-01T00:00:00Z"
+ plan_id = "...my_plan_id..."
+ start_date = "2022-03-01T00:00:00Z"
string_event_properties_keys = [
"...",
]
subscription_usage_grouping_key = "...my_subscription_usage_grouping_key..."
}
- name = "Josephine Kilback"
- secret_id = "...my_secret_id..."
- workspace_id = "2f90849d-6aed-44ae-8b75-37cd9222c9ff"
+ definition_id = "f9cf17c9-c1c9-4188-a190-0dfc35041fcd"
+ name = "Shaun Schimmel"
+ secret_id = "...my_secret_id..."
+ workspace_id = "262ef24d-9236-49b1-bf5a-7ba288f10a06"
}
```
@@ -40,11 +40,12 @@ resource "airbyte_source_orb" "my_source_orb" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,16 +58,16 @@ resource "airbyte_source_orb" "my_source_orb" {
Required:
-- `api_key` (String) Orb API Key, issued from the Orb admin console.
-- `source_type` (String) must be one of ["orb"]
+- `api_key` (String, Sensitive) Orb API Key, issued from the Orb admin console.
- `start_date` (String) UTC date and time in the format 2022-03-01T00:00:00Z. Any data with created_at before this data will not be synced. For Subscription Usage, this becomes the `timeframe_start` API parameter.
Optional:
-- `lookback_window_days` (Number) When set to N, the connector will always refresh resources created within the past N days. By default, updated objects that are not newly created are not incrementally synced.
-- `numeric_event_properties_keys` (List of String) Property key names to extract from all events, in order to enrich ledger entries corresponding to an event deduction.
+- `lookback_window_days` (Number) Default: 0
+When set to N, the connector will always refresh resources created within the past N days. By default, updated objects that are not newly created are not incrementally synced.
+- `numeric_event_properties_keys` (List of String, Sensitive) Property key names to extract from all events, in order to enrich ledger entries corresponding to an event deduction.
- `plan_id` (String) Orb Plan ID to filter subscriptions that should have usage fetched.
-- `string_event_properties_keys` (List of String) Property key names to extract from all events, in order to enrich ledger entries corresponding to an event deduction.
-- `subscription_usage_grouping_key` (String) Property key name to group subscription usage by.
+- `string_event_properties_keys` (List of String, Sensitive) Property key names to extract from all events, in order to enrich ledger entries corresponding to an event deduction.
+- `subscription_usage_grouping_key` (String, Sensitive) Property key name to group subscription usage by.
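A minimal Orb configuration exercising the optional lookback window described above can be sketched as follows; this is a hypothetical example with placeholder values, assuming only the documented required attributes plus `lookback_window_days`.

```terraform
# Sketch only: api_key and workspace_id are placeholders. When
# lookback_window_days is omitted it defaults to 0, meaning updated
# (but not newly created) objects are not incrementally synced.
resource "airbyte_source_orb" "example" {
  configuration = {
    api_key              = "...my_api_key..."
    start_date           = "2022-03-01T00:00:00Z"
    lookback_window_days = 7
  }
  name         = "orb-example"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```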
diff --git a/docs/resources/source_orbit.md b/docs/resources/source_orbit.md
index 6f9cbf315..46f3c242d 100644
--- a/docs/resources/source_orbit.md
+++ b/docs/resources/source_orbit.md
@@ -15,14 +15,14 @@ SourceOrbit Resource
```terraform
resource "airbyte_source_orbit" "my_source_orbit" {
configuration = {
- api_token = "...my_api_token..."
- source_type = "orbit"
- start_date = "...my_start_date..."
- workspace = "...my_workspace..."
+ api_token = "...my_api_token..."
+ start_date = "...my_start_date..."
+ workspace = "...my_workspace..."
}
- name = "Jo Greenholt V"
- secret_id = "...my_secret_id..."
- workspace_id = "abfa2e76-1f0c-4a4d-856e-f1031e6899f0"
+ definition_id = "35ff19f3-8868-45d8-941e-7db0723f9473"
+ name = "Salvatore Schmitt DVM"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e5b71225-778f-47a0-a3c1-e08d80f694c4"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_orbit" "my_source_orbit" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,8 +50,7 @@ resource "airbyte_source_orbit" "my_source_orbit" {
Required:
-- `api_token` (String) Authorizes you to work with Orbit workspaces associated with the token.
-- `source_type` (String) must be one of ["orbit"]
+- `api_token` (String, Sensitive) Authorizes you to work with Orbit workspaces associated with the token.
- `workspace` (String) The unique name of the workspace that your API token is associated with.
Optional:
diff --git a/docs/resources/source_outbrain_amplify.md b/docs/resources/source_outbrain_amplify.md
index 18a19739c..2fa20a753 100644
--- a/docs/resources/source_outbrain_amplify.md
+++ b/docs/resources/source_outbrain_amplify.md
@@ -16,20 +16,19 @@ SourceOutbrainAmplify Resource
resource "airbyte_source_outbrain_amplify" "my_source_outbrainamplify" {
configuration = {
credentials = {
- source_outbrain_amplify_authentication_method_access_token = {
+ source_outbrain_amplify_access_token = {
access_token = "...my_access_token..."
- type = "access_token"
}
}
end_date = "...my_end_date..."
- geo_location_breakdown = "subregion"
- report_granularity = "daily"
- source_type = "outbrain-amplify"
+ geo_location_breakdown = "region"
+ report_granularity = "monthly"
start_date = "...my_start_date..."
}
- name = "Cynthia Boyer"
- secret_id = "...my_secret_id..."
- workspace_id = "2cd55cc0-584a-4184-976d-971fc820c65b"
+ definition_id = "9d0f84cc-bad7-41da-b038-014a124b6e7b"
+ name = "Donna Leannon"
+ secret_id = "...my_secret_id..."
+ workspace_id = "37b0c992-762a-438a-a73d-79a85cb72465"
}
```
@@ -39,11 +38,12 @@ resource "airbyte_source_outbrain_amplify" "my_source_outbrainamplify" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,7 +57,6 @@ resource "airbyte_source_outbrain_amplify" "my_source_outbrainamplify" {
Required:
- `credentials` (Attributes) Credentials for making authenticated requests requires either username/password or access_token. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["outbrain-amplify"]
- `start_date` (String) Date in the format YYYY-MM-DD eg. 2017-01-25. Any data before this date will not be replicated.
Optional:
@@ -73,46 +72,23 @@ The granularity used for periodic data in reports. See
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_authentication_method_access_token`
+
+### Nested Schema for `configuration.credentials.access_token`
Required:
-- `access_token` (String) Access Token for making authenticated requests.
-- `type` (String) must be one of ["access_token"]
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
-
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_authentication_method_username_password`
+
+### Nested Schema for `configuration.credentials.username_password`
Required:
-- `password` (String) Add Password for authentication.
-- `type` (String) must be one of ["username_password"]
-- `username` (String) Add Username for authentication.
-
-
-
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_update_authentication_method_access_token`
-
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_outbrain_amplify_update_authentication_method_username_password`
-
-Required:
-
-- `password` (String) Add Password for authentication.
-- `type` (String) must be one of ["username_password"]
+- `password` (String, Sensitive) Add Password for authentication.
- `username` (String) Add Username for authentication.
diff --git a/docs/resources/source_outreach.md b/docs/resources/source_outreach.md
index 585c858dc..ca19326a4 100644
--- a/docs/resources/source_outreach.md
+++ b/docs/resources/source_outreach.md
@@ -19,12 +19,12 @@ resource "airbyte_source_outreach" "my_source_outreach" {
client_secret = "...my_client_secret..."
redirect_uri = "...my_redirect_uri..."
refresh_token = "...my_refresh_token..."
- source_type = "outreach"
start_date = "2020-11-16T00:00:00Z"
}
- name = "Kim Kirlin"
- secret_id = "...my_secret_id..."
- workspace_id = "8e0cc885-187e-44de-84af-28c5dddb46aa"
+ definition_id = "18021619-8723-463e-89a2-aae62d9d7702"
+ name = "Tanya Hand"
+ secret_id = "...my_secret_id..."
+ workspace_id = "6995c576-52df-4199-822b-3629976b741d"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_outreach" "my_source_outreach" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,8 +55,7 @@ Required:
- `client_id` (String) The Client ID of your Outreach developer application.
- `client_secret` (String) The Client Secret of your Outreach developer application.
- `redirect_uri` (String) A Redirect URI is the location where the authorization server sends the user once the app has been successfully authorized and granted an authorization code or access token.
-- `refresh_token` (String) The token for obtaining the new access token.
-- `source_type` (String) must be one of ["outreach"]
+- `refresh_token` (String, Sensitive) The token for obtaining the new access token.
- `start_date` (String) The date from which you'd like to replicate data for Outreach API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
diff --git a/docs/resources/source_paypal_transaction.md b/docs/resources/source_paypal_transaction.md
index d78081001..f2a0e03fc 100644
--- a/docs/resources/source_paypal_transaction.md
+++ b/docs/resources/source_paypal_transaction.md
@@ -19,12 +19,13 @@ resource "airbyte_source_paypal_transaction" "my_source_paypaltransaction" {
client_secret = "...my_client_secret..."
is_sandbox = false
refresh_token = "...my_refresh_token..."
- source_type = "paypal-transaction"
start_date = "2021-06-11T23:59:59+00:00"
+ time_window = 7
}
- name = "Ernestine Little"
- secret_id = "...my_secret_id..."
- workspace_id = "da013191-1296-4466-85c1-d81f29042f56"
+ definition_id = "dd349afd-0cd9-45bc-be33-42dc402aef61"
+ name = "Edna Hamill"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9f94f985-aa22-4e67-bc77-be4e4244a41c"
}
```
@@ -34,11 +35,12 @@ resource "airbyte_source_paypal_transaction" "my_source_paypaltransaction" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,12 +55,14 @@ Required:
- `client_id` (String) The Client ID of your Paypal developer application.
- `client_secret` (String) The Client Secret of your Paypal developer application.
-- `is_sandbox` (Boolean) Determines whether to use the sandbox or production environment.
-- `source_type` (String) must be one of ["paypal-transaction"]
-- `start_date` (String) Start Date for data extraction in ISO format. Date must be in range from 3 years till 12 hrs before present time.
+- `start_date` (String) Start Date for data extraction in ISO format. The date must be within the range from 3 years ago to 12 hours before the present time.
Optional:
-- `refresh_token` (String) The key to refresh the expired access token.
+- `is_sandbox` (Boolean) Default: false
+Determines whether to use the sandbox or production environment.
+- `refresh_token` (String, Sensitive) The key to refresh the expired access token.
+- `time_window` (Number) Default: 7
+The number of days per request. Must be a number between 1 and 31.
diff --git a/docs/resources/source_paystack.md b/docs/resources/source_paystack.md
index 68355ea78..929eda1d7 100644
--- a/docs/resources/source_paystack.md
+++ b/docs/resources/source_paystack.md
@@ -15,14 +15,14 @@ SourcePaystack Resource
```terraform
resource "airbyte_source_paystack" "my_source_paystack" {
configuration = {
- lookback_window_days = 6
+ lookback_window_days = 9
secret_key = "...my_secret_key..."
- source_type = "paystack"
start_date = "2017-01-25T00:00:00Z"
}
- name = "Dr. Boyd Wilderman"
- secret_id = "...my_secret_id..."
- workspace_id = "2216cbe0-71bc-4163-a279-a3b084da9925"
+ definition_id = "5b489304-8e9c-41af-9961-b1c883a57271"
+ name = "Kari Lemke"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b6433cb8-2b32-4ad0-bfd9-a9d8ba9b0df8"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_paystack" "my_source_paystack" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,12 +50,12 @@ resource "airbyte_source_paystack" "my_source_paystack" {
Required:
-- `secret_key` (String) The Paystack API key (usually starts with 'sk_live_'; find yours here).
-- `source_type` (String) must be one of ["paystack"]
+- `secret_key` (String, Sensitive) The Paystack API key (usually starts with 'sk_live_'; find yours here).
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
Optional:
-- `lookback_window_days` (Number) When set, the connector will always reload data from the past N days, where N is the value set here. This is useful if your data is updated after creation.
+- `lookback_window_days` (Number) Default: 0
+When set, the connector will always reload data from the past N days, where N is the value set here. This is useful if your data is updated after creation.
diff --git a/docs/resources/source_pendo.md b/docs/resources/source_pendo.md
index 8f1f63acb..4c98cd8a0 100644
--- a/docs/resources/source_pendo.md
+++ b/docs/resources/source_pendo.md
@@ -15,12 +15,12 @@ SourcePendo Resource
```terraform
resource "airbyte_source_pendo" "my_source_pendo" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "pendo"
+ api_key = "...my_api_key..."
}
- name = "Estelle Bechtelar"
- secret_id = "...my_secret_id..."
- workspace_id = "40847a74-2d84-4496-8bde-ecf6b99bc635"
+ definition_id = "6503c474-3ee7-49bd-93e2-04659bbdc56c"
+ name = "Mandy Conroy"
+ secret_id = "...my_secret_id..."
+ workspace_id = "0259c6b1-3998-4d3f-8543-0ae066d4a91b"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_pendo" "my_source_pendo" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_pendo" "my_source_pendo" {
Required:
-- `api_key` (String)
-- `source_type` (String) must be one of ["pendo"]
+- `api_key` (String, Sensitive)
diff --git a/docs/resources/source_persistiq.md b/docs/resources/source_persistiq.md
index aaa226c27..bf06782d4 100644
--- a/docs/resources/source_persistiq.md
+++ b/docs/resources/source_persistiq.md
@@ -15,12 +15,12 @@ SourcePersistiq Resource
```terraform
resource "airbyte_source_persistiq" "my_source_persistiq" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "persistiq"
+ api_key = "...my_api_key..."
}
- name = "Nicole Vandervort"
- secret_id = "...my_secret_id..."
- workspace_id = "df55c294-c060-4b06-a128-7764eef6d0c6"
+ definition_id = "bbc35ba8-92b6-4d58-85ab-7b9331a5ddaf"
+ name = "Taylor Keeling"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5ec8caac-d8d2-4abf-9c0f-33811ddad7d7"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_persistiq" "my_source_persistiq" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_persistiq" "my_source_persistiq" {
Required:
-- `api_key` (String) PersistIq API Key. See the docs for more information on where to find that key.
-- `source_type` (String) must be one of ["persistiq"]
+- `api_key` (String, Sensitive) PersistIq API Key. See the docs for more information on where to find that key.
diff --git a/docs/resources/source_pexels_api.md b/docs/resources/source_pexels_api.md
index 3f0f8b35a..67bdde69e 100644
--- a/docs/resources/source_pexels_api.md
+++ b/docs/resources/source_pexels_api.md
@@ -17,15 +17,15 @@ resource "airbyte_source_pexels_api" "my_source_pexelsapi" {
configuration = {
api_key = "...my_api_key..."
color = "orange"
- locale = "en-US"
+ locale = "pt-BR"
orientation = "landscape"
- query = "oceans"
+ query = "people"
size = "small"
- source_type = "pexels-api"
}
- name = "Arnold Dooley"
- secret_id = "...my_secret_id..."
- workspace_id = "63457150-9a8e-4870-93c5-a1f9c242c7b6"
+ definition_id = "f68e00dc-dadd-4479-a116-8b4fa7262d2a"
+ name = "Brandy Weimann"
+ secret_id = "...my_secret_id..."
+ workspace_id = "6dd11df0-9849-4375-b622-7890d41f1391"
}
```
@@ -35,11 +35,12 @@ resource "airbyte_source_pexels_api" "my_source_pexelsapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,9 +53,8 @@ resource "airbyte_source_pexels_api" "my_source_pexelsapi" {
Required:
-- `api_key` (String) API key is required to access pexels api, For getting your's goto https://www.pexels.com/api/documentation and create account for free.
+- `api_key` (String, Sensitive) API key is required to access the Pexels API. To get yours, go to https://www.pexels.com/api/documentation and create an account for free.
- `query` (String) Optional, the search query, Example Ocean, Tigers, Pears, etc.
-- `source_type` (String) must be one of ["pexels-api"]
Optional:
diff --git a/docs/resources/source_pinterest.md b/docs/resources/source_pinterest.md
index 6805e0477..2b394a4c6 100644
--- a/docs/resources/source_pinterest.md
+++ b/docs/resources/source_pinterest.md
@@ -16,20 +16,39 @@ SourcePinterest Resource
resource "airbyte_source_pinterest" "my_source_pinterest" {
configuration = {
credentials = {
- source_pinterest_authorization_method_access_token = {
- access_token = "...my_access_token..."
- auth_method = "access_token"
+ source_pinterest_o_auth2_0 = {
+ client_id = "...my_client_id..."
+ client_secret = "...my_client_secret..."
+ refresh_token = "...my_refresh_token..."
}
}
- source_type = "pinterest"
- start_date = "2022-07-28"
+ custom_reports = [
+ {
+ attribution_types = [
+ "HOUSEHOLD",
+ ]
+ click_window_days = "30"
+ columns = [
+ "TOTAL_IDEA_PIN_PRODUCT_TAG_VISIT",
+ ]
+ conversion_report_time = "TIME_OF_AD_ACTION"
+ engagement_window_days = "7"
+ granularity = "MONTH"
+ level = "CAMPAIGN"
+ name = "Ms. Edgar Halvorson"
+ start_date = "2022-07-28"
+ view_window_days = "0"
+ },
+ ]
+ start_date = "2022-07-28"
status = [
"ACTIVE",
]
}
- name = "Nathan Bauch"
- secret_id = "...my_secret_id..."
- workspace_id = "3df5b671-9890-4f42-a4bb-438d85b26059"
+ definition_id = "66a5ec46-f2bc-4e2e-b7bb-ccef588ac548"
+ name = "Lamar Lakin"
+ secret_id = "...my_secret_id..."
+ workspace_id = "a9dbf52c-7929-43e2-8aa8-1903348b38fe"
}
```
@@ -39,11 +58,12 @@ resource "airbyte_source_pinterest" "my_source_pinterest" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,14 +74,11 @@ resource "airbyte_source_pinterest" "my_source_pinterest" {
### Nested Schema for `configuration`
-Required:
-
-- `source_type` (String) must be one of ["pinterest"]
-- `start_date` (String) A date in the format YYYY-MM-DD. If you have not set a date, it would be defaulted to latest allowed date by api (89 days from today).
-
Optional:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
+- `custom_reports` (Attributes List) A list which contains ad statistics entries; each entry must have a name and can contain fields, breakdowns or action_breakdowns. Click on "add" to fill this field. (see [below for nested schema](#nestedatt--configuration--custom_reports))
+- `start_date` (String) A date in the format YYYY-MM-DD. If you have not set a date, it will default to the latest date allowed by the API (89 days from today).
- `status` (List of String) Entity statuses based off of campaigns, ad_groups, and ads. If you do not have a status set, it will be ignored completely.
@@ -69,27 +86,14 @@ Optional:
Optional:
-- `source_pinterest_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_authorization_method_access_token))
-- `source_pinterest_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_authorization_method_o_auth2_0))
-- `source_pinterest_update_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_update_authorization_method_access_token))
-- `source_pinterest_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_pinterest_update_authorization_method_o_auth2_0))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_pinterest_authorization_method_access_token`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) The Access Token to make authenticated requests.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_pinterest_authorization_method_o_auth2_0`
-
-Required:
-
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
+- `refresh_token` (String, Sensitive) Refresh Token to obtain a new Access Token when it expires.
Optional:
@@ -97,26 +101,30 @@ Optional:
- `client_secret` (String) The Client Secret of your OAuth application.
-
-### Nested Schema for `configuration.credentials.source_pinterest_update_authorization_method_access_token`
-
-Required:
-
-- `access_token` (String) The Access Token to make authenticated requests.
-- `auth_method` (String) must be one of ["access_token"]
-
-
-### Nested Schema for `configuration.credentials.source_pinterest_update_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.custom_reports`
Required:
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
+- `columns` (List of String) A list of chosen columns
+- `name` (String) The name value of the report
Optional:
-- `client_id` (String) The Client ID of your OAuth application
-- `client_secret` (String) The Client Secret of your OAuth application.
+- `attribution_types` (List of String) List of types of attribution for the conversion report
+- `click_window_days` (Number) must be one of ["0", "1", "7", "14", "30", "60"]; Default: 30
+Number of days to use as the conversion attribution window for a pin click action.
+- `conversion_report_time` (String) must be one of ["TIME_OF_AD_ACTION", "TIME_OF_CONVERSION"]; Default: "TIME_OF_AD_ACTION"
+The date by which the conversion metrics returned from this endpoint will be reported. There are two dates associated with a conversion event: the date that the user interacted with the ad, and the date that the user completed a conversion event.
+- `engagement_window_days` (Number) must be one of ["0", "1", "7", "14", "30", "60"]; Default: [30]
+Number of days to use as the conversion attribution window for an engagement action.
+- `granularity` (String) must be one of ["TOTAL", "DAY", "HOUR", "WEEK", "MONTH"]; Default: "TOTAL"
+Chosen granularity for API
+- `level` (String) must be one of ["ADVERTISER", "ADVERTISER_TARGETING", "CAMPAIGN", "CAMPAIGN_TARGETING", "AD_GROUP", "AD_GROUP_TARGETING", "PIN_PROMOTION", "PIN_PROMOTION_TARGETING", "KEYWORD", "PRODUCT_GROUP", "PRODUCT_GROUP_TARGETING", "PRODUCT_ITEM"]; Default: "ADVERTISER"
+Chosen level for API
+- `start_date` (String) A date in the format YYYY-MM-DD. If you have not set a date, it will default to the latest date allowed by the report API (913 days from today).
+- `view_window_days` (Number) must be one of ["0", "1", "7", "14", "30", "60"]; Default: [30]
+Number of days to use as the conversion attribution window for a view action.
diff --git a/docs/resources/source_pipedrive.md b/docs/resources/source_pipedrive.md
index 0d844c962..26dd89f2c 100644
--- a/docs/resources/source_pipedrive.md
+++ b/docs/resources/source_pipedrive.md
@@ -15,16 +15,13 @@ SourcePipedrive Resource
```terraform
resource "airbyte_source_pipedrive" "my_source_pipedrive" {
configuration = {
- authorization = {
- api_token = "...my_api_token..."
- auth_type = "Token"
- }
- replication_start_date = "2017-01-25T00:00:00Z"
- source_type = "pipedrive"
+ api_token = "...my_api_token..."
+ replication_start_date = "2017-01-25 00:00:00Z"
}
- name = "Rhonda Hammes"
- secret_id = "...my_secret_id..."
- workspace_id = "c2059c9c-3f56-47e0-a252-765b1d62fcda"
+ definition_id = "3b520112-5b29-4252-a784-d2d0f1707475"
+ name = "Sean Swaniawski"
+ secret_id = "...my_secret_id..."
+ workspace_id = "49780ba1-d6a2-48c6-aefe-59b72db22407"
}
```
@@ -34,11 +31,12 @@ resource "airbyte_source_pipedrive" "my_source_pipedrive" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,19 +49,7 @@ resource "airbyte_source_pipedrive" "my_source_pipedrive" {
Required:
+- `api_token` (String, Sensitive) The Pipedrive API Token.
- `replication_start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated. When specified and not None, then stream will behave as incremental
-- `source_type` (String) must be one of ["pipedrive"]
-
-Optional:
-
-- `authorization` (Attributes) (see [below for nested schema](#nestedatt--configuration--authorization))
-
-
-### Nested Schema for `configuration.authorization`
-
-Required:
-
-- `api_token` (String) The Pipedrive API Token.
-- `auth_type` (String) must be one of ["Token"]
diff --git a/docs/resources/source_pocket.md b/docs/resources/source_pocket.md
index 740aec074..267414337 100644
--- a/docs/resources/source_pocket.md
+++ b/docs/resources/source_pocket.md
@@ -23,14 +23,14 @@ resource "airbyte_source_pocket" "my_source_pocket" {
favorite = true
search = "...my_search..."
since = "2022-10-20 14:14:14"
- sort = "site"
- source_type = "pocket"
+ sort = "newest"
state = "unread"
tag = "...my_tag..."
}
- name = "Christina Bode"
- secret_id = "...my_secret_id..."
- workspace_id = "e2239e8f-25cd-40d1-9d95-9f439e39266c"
+ definition_id = "da763315-0acf-4ec2-81f7-3646e1c87958"
+ name = "Brandi Hane"
+ secret_id = "...my_secret_id..."
+ workspace_id = "82553101-4017-4845-aa4c-1173de2c277a"
}
```
@@ -40,11 +40,12 @@ resource "airbyte_source_pocket" "my_source_pocket" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,9 +58,8 @@ resource "airbyte_source_pocket" "my_source_pocket" {
Required:
-- `access_token` (String) The user's Pocket access token.
-- `consumer_key` (String) Your application's Consumer Key.
-- `source_type` (String) must be one of ["pocket"]
+- `access_token` (String, Sensitive) The user's Pocket access token.
+- `consumer_key` (String, Sensitive) Your application's Consumer Key.
Optional:
@@ -68,7 +68,8 @@ Select the content type of the items to retrieve.
- `detail_type` (String) must be one of ["simple", "complete"]
Select the granularity of the information about each item.
- `domain` (String) Only return items from a particular `domain`.
-- `favorite` (Boolean) Retrieve only favorited items.
+- `favorite` (Boolean) Default: false
+Retrieve only favorited items.
- `search` (String) Only return items whose title or url contain the `search` string.
- `since` (String) Only return items modified since the given timestamp.
- `sort` (String) must be one of ["newest", "oldest", "title", "site"]
diff --git a/docs/resources/source_pokeapi.md b/docs/resources/source_pokeapi.md
index 209cba1b0..fba642a83 100644
--- a/docs/resources/source_pokeapi.md
+++ b/docs/resources/source_pokeapi.md
@@ -15,12 +15,12 @@ SourcePokeapi Resource
```terraform
resource "airbyte_source_pokeapi" "my_source_pokeapi" {
configuration = {
- pokemon_name = "snorlax"
- source_type = "pokeapi"
+ pokemon_name = "luxray"
}
- name = "Jeremiah Hahn"
- secret_id = "...my_secret_id..."
- workspace_id = "aa2b2411-3695-4d1e-a698-fcc4596217c2"
+ definition_id = "e2388fd0-120f-462c-91a2-676b4d9282ad"
+ name = "Ramona Stiedemann"
+ secret_id = "...my_secret_id..."
+ workspace_id = "d5253fa0-2ef0-408f-918d-81572f724d1e"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_pokeapi" "my_source_pokeapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,7 @@ resource "airbyte_source_pokeapi" "my_source_pokeapi" {
Required:
-- `pokemon_name` (String) Pokemon requested from the API.
-- `source_type` (String) must be one of ["pokeapi"]
+- `pokemon_name` (String) must be one of ["bulbasaur", "ivysaur", "venusaur", "charmander", "charmeleon", "charizard", "squirtle", "wartortle", "blastoise", "caterpie", "metapod", "butterfree", "weedle", "kakuna", "beedrill", "pidgey", "pidgeotto", "pidgeot", "rattata", "raticate", "spearow", "fearow", "ekans", "arbok", "pikachu", "raichu", "sandshrew", "sandslash", "nidoranf", "nidorina", "nidoqueen", "nidoranm", "nidorino", "nidoking", "clefairy", "clefable", "vulpix", "ninetales", "jigglypuff", "wigglytuff", "zubat", "golbat", "oddish", "gloom", "vileplume", "paras", "parasect", "venonat", "venomoth", "diglett", "dugtrio", "meowth", "persian", "psyduck", "golduck", "mankey", "primeape", "growlithe", "arcanine", "poliwag", "poliwhirl", "poliwrath", "abra", "kadabra", "alakazam", "machop", "machoke", "machamp", "bellsprout", "weepinbell", "victreebel", "tentacool", "tentacruel", "geodude", "graveler", "golem", "ponyta", "rapidash", "slowpoke", "slowbro", "magnemite", "magneton", "farfetchd", "doduo", "dodrio", "seel", "dewgong", "grimer", "muk", "shellder", "cloyster", "gastly", "haunter", "gengar", "onix", "drowzee", "hypno", "krabby", "kingler", "voltorb", "electrode", "exeggcute", "exeggutor", "cubone", "marowak", "hitmonlee", "hitmonchan", "lickitung", "koffing", "weezing", "rhyhorn", "rhydon", "chansey", "tangela", "kangaskhan", "horsea", "seadra", "goldeen", "seaking", "staryu", "starmie", "mrmime", "scyther", "jynx", "electabuzz", "magmar", "pinsir", "tauros", "magikarp", "gyarados", "lapras", "ditto", "eevee", "vaporeon", "jolteon", "flareon", "porygon", "omanyte", "omastar", "kabuto", "kabutops", "aerodactyl", "snorlax", "articuno", "zapdos", "moltres", "dratini", "dragonair", "dragonite", "mewtwo", "mew", "chikorita", "bayleef", "meganium", "cyndaquil", "quilava", "typhlosion", "totodile", "croconaw", "feraligatr", "sentret", "furret", "hoothoot", "noctowl", "ledyba", "ledian", "spinarak", "ariados", "crobat", "chinchou", "lanturn", "pichu", "cleffa", 
"igglybuff", "togepi", "togetic", "natu", "xatu", "mareep", "flaaffy", "ampharos", "bellossom", "marill", "azumarill", "sudowoodo", "politoed", "hoppip", "skiploom", "jumpluff", "aipom", "sunkern", "sunflora", "yanma", "wooper", "quagsire", "espeon", "umbreon", "murkrow", "slowking", "misdreavus", "unown", "wobbuffet", "girafarig", "pineco", "forretress", "dunsparce", "gligar", "steelix", "snubbull", "granbull", "qwilfish", "scizor", "shuckle", "heracross", "sneasel", "teddiursa", "ursaring", "slugma", "magcargo", "swinub", "piloswine", "corsola", "remoraid", "octillery", "delibird", "mantine", "skarmory", "houndour", "houndoom", "kingdra", "phanpy", "donphan", "porygon2", "stantler", "smeargle", "tyrogue", "hitmontop", "smoochum", "elekid", "magby", "miltank", "blissey", "raikou", "entei", "suicune", "larvitar", "pupitar", "tyranitar", "lugia", "ho-oh", "celebi", "treecko", "grovyle", "sceptile", "torchic", "combusken", "blaziken", "mudkip", "marshtomp", "swampert", "poochyena", "mightyena", "zigzagoon", "linoone", "wurmple", "silcoon", "beautifly", "cascoon", "dustox", "lotad", "lombre", "ludicolo", "seedot", "nuzleaf", "shiftry", "taillow", "swellow", "wingull", "pelipper", "ralts", "kirlia", "gardevoir", "surskit", "masquerain", "shroomish", "breloom", "slakoth", "vigoroth", "slaking", "nincada", "ninjask", "shedinja", "whismur", "loudred", "exploud", "makuhita", "hariyama", "azurill", "nosepass", "skitty", "delcatty", "sableye", "mawile", "aron", "lairon", "aggron", "meditite", "medicham", "electrike", "manectric", "plusle", "minun", "volbeat", "illumise", "roselia", "gulpin", "swalot", "carvanha", "sharpedo", "wailmer", "wailord", "numel", "camerupt", "torkoal", "spoink", "grumpig", "spinda", "trapinch", "vibrava", "flygon", "cacnea", "cacturne", "swablu", "altaria", "zangoose", "seviper", "lunatone", "solrock", "barboach", "whiscash", "corphish", "crawdaunt", "baltoy", "claydol", "lileep", "cradily", "anorith", "armaldo", "feebas", "milotic", "castform", 
"kecleon", "shuppet", "banette", "duskull", "dusclops", "tropius", "chimecho", "absol", "wynaut", "snorunt", "glalie", "spheal", "sealeo", "walrein", "clamperl", "huntail", "gorebyss", "relicanth", "luvdisc", "bagon", "shelgon", "salamence", "beldum", "metang", "metagross", "regirock", "regice", "registeel", "latias", "latios", "kyogre", "groudon", "rayquaza", "jirachi", "deoxys", "turtwig", "grotle", "torterra", "chimchar", "monferno", "infernape", "piplup", "prinplup", "empoleon", "starly", "staravia", "staraptor", "bidoof", "bibarel", "kricketot", "kricketune", "shinx", "luxio", "luxray", "budew", "roserade", "cranidos", "rampardos", "shieldon", "bastiodon", "burmy", "wormadam", "mothim", "combee", "vespiquen", "pachirisu", "buizel", "floatzel", "cherubi", "cherrim", "shellos", "gastrodon", "ambipom", "drifloon", "drifblim", "buneary", "lopunny", "mismagius", "honchkrow", "glameow", "purugly", "chingling", "stunky", "skuntank", "bronzor", "bronzong", "bonsly", "mimejr", "happiny", "chatot", "spiritomb", "gible", "gabite", "garchomp", "munchlax", "riolu", "lucario", "hippopotas", "hippowdon", "skorupi", "drapion", "croagunk", "toxicroak", "carnivine", "finneon", "lumineon", "mantyke", "snover", "abomasnow", "weavile", "magnezone", "lickilicky", "rhyperior", "tangrowth", "electivire", "magmortar", "togekiss", "yanmega", "leafeon", "glaceon", "gliscor", "mamoswine", "porygon-z", "gallade", "probopass", "dusknoir", "froslass", "rotom", "uxie", "mesprit", "azelf", "dialga", "palkia", "heatran", "regigigas", "giratina", "cresselia", "phione", "manaphy", "darkrai", "shaymin", "arceus", "victini", "snivy", "servine", "serperior", "tepig", "pignite", "emboar", "oshawott", "dewott", "samurott", "patrat", "watchog", "lillipup", "herdier", "stoutland", "purrloin", "liepard", "pansage", "simisage", "pansear", "simisear", "panpour", "simipour", "munna", "musharna", "pidove", "tranquill", "unfezant", "blitzle", "zebstrika", "roggenrola", "boldore", "gigalith", "woobat", 
"swoobat", "drilbur", "excadrill", "audino", "timburr", "gurdurr", "conkeldurr", "tympole", "palpitoad", "seismitoad", "throh", "sawk", "sewaddle", "swadloon", "leavanny", "venipede", "whirlipede", "scolipede", "cottonee", "whimsicott", "petilil", "lilligant", "basculin", "sandile", "krokorok", "krookodile", "darumaka", "darmanitan", "maractus", "dwebble", "crustle", "scraggy", "scrafty", "sigilyph", "yamask", "cofagrigus", "tirtouga", "carracosta", "archen", "archeops", "trubbish", "garbodor", "zorua", "zoroark", "minccino", "cinccino", "gothita", "gothorita", "gothitelle", "solosis", "duosion", "reuniclus", "ducklett", "swanna", "vanillite", "vanillish", "vanilluxe", "deerling", "sawsbuck", "emolga", "karrablast", "escavalier", "foongus", "amoonguss", "frillish", "jellicent", "alomomola", "joltik", "galvantula", "ferroseed", "ferrothorn", "klink", "klang", "klinklang", "tynamo", "eelektrik", "eelektross", "elgyem", "beheeyem", "litwick", "lampent", "chandelure", "axew", "fraxure", "haxorus", "cubchoo", "beartic", "cryogonal", "shelmet", "accelgor", "stunfisk", "mienfoo", "mienshao", "druddigon", "golett", "golurk", "pawniard", "bisharp", "bouffalant", "rufflet", "braviary", "vullaby", "mandibuzz", "heatmor", "durant", "deino", "zweilous", "hydreigon", "larvesta", "volcarona", "cobalion", "terrakion", "virizion", "tornadus", "thundurus", "reshiram", "zekrom", "landorus", "kyurem", "keldeo", "meloetta", "genesect", "chespin", "quilladin", "chesnaught", "fennekin", "braixen", "delphox", "froakie", "frogadier", "greninja", "bunnelby", "diggersby", "fletchling", "fletchinder", "talonflame", "scatterbug", "spewpa", "vivillon", "litleo", "pyroar", "flabebe", "floette", "florges", "skiddo", "gogoat", "pancham", "pangoro", "furfrou", "espurr", "meowstic", "honedge", "doublade", "aegislash", "spritzee", "aromatisse", "swirlix", "slurpuff", "inkay", "malamar", "binacle", "barbaracle", "skrelp", "dragalge", "clauncher", "clawitzer", "helioptile", "heliolisk", "tyrunt", 
"tyrantrum", "amaura", "aurorus", "sylveon", "hawlucha", "dedenne", "carbink", "goomy", "sliggoo", "goodra", "klefki", "phantump", "trevenant", "pumpkaboo", "gourgeist", "bergmite", "avalugg", "noibat", "noivern", "xerneas", "yveltal", "zygarde", "diancie", "hoopa", "volcanion", "rowlet", "dartrix", "decidueye", "litten", "torracat", "incineroar", "popplio", "brionne", "primarina", "pikipek", "trumbeak", "toucannon", "yungoos", "gumshoos", "grubbin", "charjabug", "vikavolt", "crabrawler", "crabominable", "oricorio", "cutiefly", "ribombee", "rockruff", "lycanroc", "wishiwashi", "mareanie", "toxapex", "mudbray", "mudsdale", "dewpider", "araquanid", "fomantis", "lurantis", "morelull", "shiinotic", "salandit", "salazzle", "stufful", "bewear", "bounsweet", "steenee", "tsareena", "comfey", "oranguru", "passimian", "wimpod", "golisopod", "sandygast", "palossand", "pyukumuku", "typenull", "silvally", "minior", "komala", "turtonator", "togedemaru", "mimikyu", "bruxish", "drampa", "dhelmise", "jangmo-o", "hakamo-o", "kommo-o", "tapukoko", "tapulele", "tapubulu", "tapufini", "cosmog", "cosmoem", "solgaleo", "lunala", "nihilego", "buzzwole", "pheromosa", "xurkitree", "celesteela", "kartana", "guzzlord", "necrozma", "magearna", "marshadow", "poipole", "naganadel", "stakataka", "blacephalon", "zeraora", "meltan", "melmetal", "grookey", "thwackey", "rillaboom", "scorbunny", "raboot", "cinderace", "sobble", "drizzile", "inteleon", "skwovet", "greedent", "rookidee", "corvisquire", "corviknight", "blipbug", "dottler", "orbeetle", "nickit", "thievul", "gossifleur", "eldegoss", "wooloo", "dubwool", "chewtle", "drednaw", "yamper", "boltund", "rolycoly", "carkol", "coalossal", "applin", "flapple", "appletun", "silicobra", "sandaconda", "cramorant", "arrokuda", "barraskewda", "toxel", "toxtricity", "sizzlipede", "centiskorch", "clobbopus", "grapploct", "sinistea", "polteageist", "hatenna", "hattrem", "hatterene", "impidimp", "morgrem", "grimmsnarl", "obstagoon", "perrserker", "cursola", 
"sirfetchd", "mrrime", "runerigus", "milcery", "alcremie", "falinks", "pincurchin", "snom", "frosmoth", "stonjourner", "eiscue", "indeedee", "morpeko", "cufant", "copperajah", "dracozolt", "arctozolt", "dracovish", "arctovish", "duraludon", "dreepy", "drakloak", "dragapult", "zacian", "zamazenta", "eternatus", "kubfu", "urshifu", "zarude", "regieleki", "regidrago", "glastrier", "spectrier", "calyrex"]
+Pokemon requested from the API.
diff --git a/docs/resources/source_polygon_stock_api.md b/docs/resources/source_polygon_stock_api.md
index 5a80bed58..116b42ce4 100644
--- a/docs/resources/source_polygon_stock_api.md
+++ b/docs/resources/source_polygon_stock_api.md
@@ -15,20 +15,20 @@ SourcePolygonStockAPI Resource
```terraform
resource "airbyte_source_polygon_stock_api" "my_source_polygonstockapi" {
configuration = {
- adjusted = "false"
+ adjusted = "true"
api_key = "...my_api_key..."
end_date = "2020-10-14"
- limit = 100
+ limit = 120
multiplier = 1
- sort = "asc"
- source_type = "polygon-stock-api"
+ sort = "desc"
start_date = "2020-10-14"
- stocks_ticker = "IBM"
+ stocks_ticker = "MSFT"
timespan = "day"
}
- name = "Mary Fisher"
- secret_id = "...my_secret_id..."
- workspace_id = "fb5971e9-8190-4557-b89c-edbac7fda395"
+ definition_id = "15bf9f13-70c2-48b2-b8d2-5e4ee4a51abe"
+ name = "Antoinette Rempel"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e8da5f86-7ba5-4cf8-9b48-a2cc4047b120"
}
```
@@ -38,11 +38,12 @@ resource "airbyte_source_polygon_stock_api" "my_source_polygonstockapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,7 +59,6 @@ Required:
- `api_key` (String) Your API ACCESS Key
- `end_date` (String) The target date for the aggregate window.
- `multiplier` (Number) The size of the timespan multiplier.
-- `source_type` (String) must be one of ["polygon-stock-api"]
- `start_date` (String) The beginning date for the aggregate window.
- `stocks_ticker` (String) The exchange symbol that this item is traded under.
- `timespan` (String) The size of the time window.
diff --git a/docs/resources/source_postgres.md b/docs/resources/source_postgres.md
index 304d0512c..0e23a3f3c 100644
--- a/docs/resources/source_postgres.md
+++ b/docs/resources/source_postgres.md
@@ -21,29 +21,25 @@ resource "airbyte_source_postgres" "my_source_postgres" {
password = "...my_password..."
port = 5432
replication_method = {
- source_postgres_update_method_detect_changes_with_xmin_system_column = {
- method = "Xmin"
- }
+ detect_changes_with_xmin_system_column = {}
}
schemas = [
"...",
]
- source_type = "postgres"
ssl_mode = {
- source_postgres_ssl_modes_allow = {
- mode = "allow"
+ source_postgres_allow = {
+ additional_properties = "{ \"see\": \"documentation\" }"
}
}
tunnel_method = {
- source_postgres_ssh_tunnel_method_no_tunnel = {
- tunnel_method = "NO_TUNNEL"
- }
+ source_postgres_no_tunnel = {}
}
- username = "Edwardo.Streich"
+ username = "Dagmar_Towne8"
}
- name = "Roosevelt Cummings"
- secret_id = "...my_secret_id..."
- workspace_id = "480632b9-954b-46fa-a206-369828553cb1"
+ definition_id = "558e983f-33bb-4c2f-8e75-b95ee5dd11c7"
+ name = "Brandi Gerhold"
+ secret_id = "...my_secret_id..."
+ workspace_id = "aa4d1c74-fcd7-4d93-9b8b-6b2c0920aa8b"
}
```
@@ -53,11 +49,12 @@ resource "airbyte_source_postgres" "my_source_postgres" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -72,14 +69,14 @@ Required:
- `database` (String) Name of the database.
- `host` (String) Hostname of the database.
-- `port` (Number) Port of the database.
-- `source_type` (String) must be one of ["postgres"]
- `username` (String) Username to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (Eg. key1=value1&key2=value2&key3=value3). For more information read about JDBC URL parameters.
-- `password` (String) Password associated with the username.
+- `password` (String, Sensitive) Password associated with the username.
+- `port` (Number) Default: 5432
+Port of the database.
- `replication_method` (Attributes) Configures how data is extracted from the database. (see [below for nested schema](#nestedatt--configuration--replication_method))
- `schemas` (List of String) The list of schemas (case sensitive) to sync from. Defaults to public.
- `ssl_mode` (Attributes) SSL connection modes.
@@ -91,83 +88,37 @@ Optional:
Optional:
-- `source_postgres_update_method_detect_changes_with_xmin_system_column` (Attributes) Recommended - Incrementally reads new inserts and updates via Postgres Xmin system column. Only recommended for tables up to 500GB. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_method_detect_changes_with_xmin_system_column))
-- `source_postgres_update_method_read_changes_using_write_ahead_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the Postgres write-ahead log (WAL). This needs to be configured on the source database itself. Recommended for tables of any size. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_method_read_changes_using_write_ahead_log_cdc))
-- `source_postgres_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_method_scan_changes_with_user_defined_cursor))
-- `source_postgres_update_update_method_detect_changes_with_xmin_system_column` (Attributes) Recommended - Incrementally reads new inserts and updates via Postgres Xmin system column. Only recommended for tables up to 500GB. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_update_method_detect_changes_with_xmin_system_column))
-- `source_postgres_update_update_method_read_changes_using_write_ahead_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the Postgres write-ahead log (WAL). This needs to be configured on the source database itself. Recommended for tables of any size. (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_update_method_read_changes_using_write_ahead_log_cdc))
-- `source_postgres_update_update_method_scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--source_postgres_update_update_method_scan_changes_with_user_defined_cursor))
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_method_detect_changes_with_xmin_system_column`
-
-Required:
+- `detect_changes_with_xmin_system_column` (Attributes) Recommended - Incrementally reads new inserts and updates via Postgres Xmin system column. Only recommended for tables up to 500GB. (see [below for nested schema](#nestedatt--configuration--replication_method--detect_changes_with_xmin_system_column))
+- `read_changes_using_write_ahead_log_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using the Postgres write-ahead log (WAL). This needs to be configured on the source database itself. Recommended for tables of any size. (see [below for nested schema](#nestedatt--configuration--replication_method--read_changes_using_write_ahead_log_cdc))
+- `scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see [below for nested schema](#nestedatt--configuration--replication_method--scan_changes_with_user_defined_cursor))
-- `method` (String) must be one of ["Xmin"]
+
+### Nested Schema for `configuration.replication_method.detect_changes_with_xmin_system_column`
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_method_read_changes_using_write_ahead_log_cdc`
+
+### Nested Schema for `configuration.replication_method.read_changes_using_write_ahead_log_cdc`
Required:
-- `method` (String) must be one of ["CDC"]
- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
-Determines when Airbtye should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is default. If `While reading Data` is selected, in case of a downstream failure (while loading data into the destination), next sync would result in a full sync.
-- `plugin` (String) must be one of ["pgoutput"]
+- `initial_waiting_seconds` (Number) Default: 300
+The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
+- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]; Default: "After loading Data in the destination"
+Determines when Airbyte should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is the default. If `While reading Data` is selected, then in the event of a downstream failure (while loading data into the destination), the next sync would result in a full sync.
+- `plugin` (String) must be one of ["pgoutput"]; Default: "pgoutput"
A logical decoding plugin installed on the PostgreSQL server.
-- `queue_size` (Number) The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_method_scan_changes_with_user_defined_cursor`
-
-Required:
-
-- `method` (String) must be one of ["Standard"]
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_update_method_detect_changes_with_xmin_system_column`
+- `queue_size` (Number) Default: 10000
+The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.
-Required:
-
-- `method` (String) must be one of ["Xmin"]
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_update_method_read_changes_using_write_ahead_log_cdc`
-
-Required:
-
-- `method` (String) must be one of ["CDC"]
-- `publication` (String) A Postgres publication used for consuming changes. Read about publications and replication identities.
-- `replication_slot` (String) A plugin logical replication slot. Read about replication slots.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 1200 seconds. Read about initial waiting time.
-- `lsn_commit_behaviour` (String) must be one of ["While reading Data", "After loading Data in the destination"]
-Determines when Airbtye should flush the LSN of processed WAL logs in the source database. `After loading Data in the destination` is default. If `While reading Data` is selected, in case of a downstream failure (while loading data into the destination), next sync would result in a full sync.
-- `plugin` (String) must be one of ["pgoutput"]
-A logical decoding plugin installed on the PostgreSQL server.
-- `queue_size` (Number) The size of the internal queue. This may interfere with memory consumption and efficiency of the connector, please be careful.
-
-
-
-### Nested Schema for `configuration.replication_method.source_postgres_update_update_method_scan_changes_with_user_defined_cursor`
-
-Required:
-
-- `method` (String) must be one of ["Standard"]
+
+### Nested Schema for `configuration.replication_method.scan_changes_with_user_defined_cursor`
@@ -176,177 +127,73 @@ Required:
Optional:
-- `source_postgres_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_allow))
-- `source_postgres_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_disable))
-- `source_postgres_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_prefer))
-- `source_postgres_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_require))
-- `source_postgres_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_verify_ca))
-- `source_postgres_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_ssl_modes_verify_full))
-- `source_postgres_update_ssl_modes_allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_allow))
-- `source_postgres_update_ssl_modes_disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_disable))
-- `source_postgres_update_ssl_modes_prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_prefer))
-- `source_postgres_update_ssl_modes_require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_require))
-- `source_postgres_update_ssl_modes_verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_verify_ca))
-- `source_postgres_update_ssl_modes_verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--source_postgres_update_ssl_modes_verify_full))
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_allow`
+- `allow` (Attributes) Enables encryption only when required by the source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--allow))
+- `disable` (Attributes) Disables encryption of communication between Airbyte and source database. (see [below for nested schema](#nestedatt--configuration--ssl_mode--disable))
+- `prefer` (Attributes) Allows unencrypted connection only if the source database does not support encryption. (see [below for nested schema](#nestedatt--configuration--ssl_mode--prefer))
+- `require` (Attributes) Always require encryption. If the source database server does not support encryption, connection will fail. (see [below for nested schema](#nestedatt--configuration--ssl_mode--require))
+- `verify_ca` (Attributes) Always require encryption and verifies that the source database server has a valid SSL certificate. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_ca))
+- `verify_full` (Attributes) This is the most secure mode. Always require encryption and verifies the identity of the source database server. (see [below for nested schema](#nestedatt--configuration--ssl_mode--verify_full))
-Required:
-
-- `mode` (String) must be one of ["allow"]
+
+### Nested Schema for `configuration.ssl_mode.allow`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_disable`
-
-Required:
-
-- `mode` (String) must be one of ["disable"]
+
+### Nested Schema for `configuration.ssl_mode.disable`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_prefer`
-
-Required:
-
-- `mode` (String) must be one of ["prefer"]
+
+### Nested Schema for `configuration.ssl_mode.prefer`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_require`
-
-Required:
-
-- `mode` (String) must be one of ["require"]
+
+### Nested Schema for `configuration.ssl_mode.require`
Optional:
- `additional_properties` (String) Parsed as JSON.
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_verify_ca`
+
+### Nested Schema for `configuration.ssl_mode.verify_ca`
Required:
- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-ca"]
Optional:
- `additional_properties` (String) Parsed as JSON.
- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
+- `client_key` (String, Sensitive) Client key
+- `client_key_password` (String, Sensitive) Password for the keystore. If you do not provide one, a password will be generated automatically.
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_ssl_modes_verify_full`
+
+### Nested Schema for `configuration.ssl_mode.verify_full`
Required:
- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-full"]
Optional:
- `additional_properties` (String) Parsed as JSON.
- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_allow`
-
-Required:
-
-- `mode` (String) must be one of ["allow"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_disable`
-
-Required:
-
-- `mode` (String) must be one of ["disable"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_prefer`
-
-Required:
-
-- `mode` (String) must be one of ["prefer"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_require`
-
-Required:
-
-- `mode` (String) must be one of ["require"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_verify_ca`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-ca"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
-
-
-
-### Nested Schema for `configuration.ssl_mode.source_postgres_update_ssl_modes_verify_full`
-
-Required:
-
-- `ca_certificate` (String) CA certificate
-- `mode` (String) must be one of ["verify-full"]
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `client_certificate` (String) Client certificate
-- `client_key` (String) Client key
-- `client_key_password` (String) Password for keystorage. If you do not add it - the password will be generated automatically.
+- `client_key` (String, Sensitive) Client key
+- `client_key_password` (String, Sensitive) Password for the keystore. If you do not provide one, a password will be generated automatically.
@@ -355,80 +202,41 @@ Optional:
Optional:
-- `source_postgres_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_ssh_tunnel_method_no_tunnel))
-- `source_postgres_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_ssh_tunnel_method_password_authentication))
-- `source_postgres_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_ssh_tunnel_method_ssh_key_authentication))
-- `source_postgres_update_ssh_tunnel_method_no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_update_ssh_tunnel_method_no_tunnel))
-- `source_postgres_update_ssh_tunnel_method_password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_update_ssh_tunnel_method_password_authentication))
-- `source_postgres_update_ssh_tunnel_method_ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--source_postgres_update_ssh_tunnel_method_ssh_key_authentication))
+- `no_tunnel` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--no_tunnel))
+- `password_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--password_authentication))
+- `ssh_key_authentication` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see [below for nested schema](#nestedatt--configuration--tunnel_method--ssh_key_authentication))
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_ssh_tunnel_method_no_tunnel`
+
+### Nested Schema for `configuration.tunnel_method.no_tunnel`
-Required:
-
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.password_authentication`
Required:
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_ssh_tunnel_method_ssh_key_authentication`
-
-Required:
+- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_update_ssh_tunnel_method_no_tunnel`
-
-Required:
+Optional:
-- `tunnel_method` (String) must be one of ["NO_TUNNEL"]
-No ssh tunnel needed to connect to database
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_update_ssh_tunnel_method_password_authentication`
+
+### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`
Required:
+- `ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`)
- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through a jump server tunnel host using username and password authentication
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host
-- `tunnel_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.tunnel_method.source_postgres_update_ssh_tunnel_method_ssh_key_authentication`
+- `tunnel_user` (String) OS-level username for logging into the jump server host.
-Required:
+Optional:
-- `ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-- `tunnel_host` (String) Hostname of the jump server host that allows inbound ssh tunnel.
-- `tunnel_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through a jump server tunnel host using username and ssh key
-- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound ssh connections.
-- `tunnel_user` (String) OS-level username for logging into the jump server host.
+- `tunnel_port` (Number) Default: 22
+Port on the proxy/jump server that accepts inbound ssh connections.
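+
+A minimal sketch of selecting one of the tunnel methods above, assuming this is the `airbyte_source_postgres` resource and using illustrative placeholder values only:
+
+```terraform
+resource "airbyte_source_postgres" "example" {
+  configuration = {
+    # ... host, database, username, password, etc. ...
+    tunnel_method = {
+      # Exactly one of no_tunnel, password_authentication, or
+      # ssh_key_authentication should be set.
+      ssh_key_authentication = {
+        tunnel_host = "jump.example.com"
+        tunnel_user = "airbyte"
+        ssh_key     = "...my_rsa_pem_key..."
+        # tunnel_port is optional and defaults to 22
+      }
+    }
+  }
+}
+```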
diff --git a/docs/resources/source_posthog.md b/docs/resources/source_posthog.md
index 27fbc1d87..28848ec98 100644
--- a/docs/resources/source_posthog.md
+++ b/docs/resources/source_posthog.md
@@ -17,13 +17,13 @@ resource "airbyte_source_posthog" "my_source_posthog" {
configuration = {
api_key = "...my_api_key..."
base_url = "https://posthog.example.com"
- events_time_step = 30
- source_type = "posthog"
+ events_time_step = 5
start_date = "2021-01-01T00:00:00Z"
}
- name = "Terence Wisozk"
- secret_id = "...my_secret_id..."
- workspace_id = "21ec2053-b749-4366-ac8e-e0f2bf19588d"
+ definition_id = "07521b21-ea9b-4c9d-9c88-f1ee12f8a7db"
+ name = "Daisy Ledner"
+ secret_id = "...my_secret_id..."
+ workspace_id = "41266a87-d389-4094-afa6-7bbea9f5a35d"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_posthog" "my_source_posthog" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,13 +51,14 @@ resource "airbyte_source_posthog" "my_source_posthog" {
Required:
-- `api_key` (String) API Key. See the docs for information on how to generate this key.
-- `source_type` (String) must be one of ["posthog"]
+- `api_key` (String, Sensitive) API Key. See the docs for information on how to generate this key.
- `start_date` (String) The date from which you'd like to replicate the data. Any data before this date will not be replicated.
Optional:
-- `base_url` (String) Base PostHog url. Defaults to PostHog Cloud (https://app.posthog.com).
-- `events_time_step` (Number) Set lower value in case of failing long running sync of events stream.
+- `base_url` (String) Default: "https://app.posthog.com"
+Base PostHog URL. Defaults to PostHog Cloud (https://app.posthog.com).
+- `events_time_step` (Number) Default: 30
+Set a lower value if long-running syncs of the events stream fail.
diff --git a/docs/resources/source_postmarkapp.md b/docs/resources/source_postmarkapp.md
index e9925ed62..6a265a1dd 100644
--- a/docs/resources/source_postmarkapp.md
+++ b/docs/resources/source_postmarkapp.md
@@ -15,13 +15,13 @@ SourcePostmarkapp Resource
```terraform
resource "airbyte_source_postmarkapp" "my_source_postmarkapp" {
configuration = {
- source_type = "postmarkapp"
x_postmark_account_token = "...my_x_postmark_account_token..."
x_postmark_server_token = "...my_x_postmark_server_token..."
}
- name = "Mr. Sharon Swift"
- secret_id = "...my_secret_id..."
- workspace_id = "3deba297-be3e-490b-840d-f868fd52405c"
+ definition_id = "1bd0fb63-21f6-4b4c-a647-2a5f8aec8fed"
+ name = "Felix Wisoky"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5367bfee-523e-436b-b4e8-f7b837d76b02"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_postmarkapp" "my_source_postmarkapp" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,7 +49,6 @@ resource "airbyte_source_postmarkapp" "my_source_postmarkapp" {
Required:
-- `source_type` (String) must be one of ["postmarkapp"]
- `x_postmark_account_token` (String) API Key for account
- `x_postmark_server_token` (String) API Key for server
diff --git a/docs/resources/source_prestashop.md b/docs/resources/source_prestashop.md
index bfcc11bc1..28e07a56d 100644
--- a/docs/resources/source_prestashop.md
+++ b/docs/resources/source_prestashop.md
@@ -15,14 +15,14 @@ SourcePrestashop Resource
```terraform
resource "airbyte_source_prestashop" "my_source_prestashop" {
configuration = {
- access_key = "...my_access_key..."
- source_type = "prestashop"
- start_date = "2022-01-01"
- url = "...my_url..."
+ access_key = "...my_access_key..."
+ start_date = "2022-01-01"
+ url = "...my_url..."
}
- name = "Evelyn Stracke"
- secret_id = "...my_secret_id..."
- workspace_id = "2f4f127f-b0e0-4bf1-b821-7978d0acca77"
+ definition_id = "d797c2fd-0239-4507-97b2-06b8fda8b48b"
+ name = "Dr. Jeffery Wuckert"
+ secret_id = "...my_secret_id..."
+ workspace_id = "631ebcaf-aa2e-4e7a-9e0c-b6197095b91e"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_prestashop" "my_source_prestashop" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,8 +50,7 @@ resource "airbyte_source_prestashop" "my_source_prestashop" {
Required:
-- `access_key` (String) Your PrestaShop access key. See the docs for info on how to obtain this.
-- `source_type` (String) must be one of ["prestashop"]
+- `access_key` (String, Sensitive) Your PrestaShop access key. See the docs for info on how to obtain this.
- `start_date` (String) The Start date in the format YYYY-MM-DD.
- `url` (String) Shop URL without trailing slash.
diff --git a/docs/resources/source_punk_api.md b/docs/resources/source_punk_api.md
index 33fb6c703..93a09f464 100644
--- a/docs/resources/source_punk_api.md
+++ b/docs/resources/source_punk_api.md
@@ -17,12 +17,12 @@ resource "airbyte_source_punk_api" "my_source_punkapi" {
configuration = {
brewed_after = "MM-YYYY"
brewed_before = "MM-YYYY"
- id = 22
- source_type = "punk-api"
+ id = 1
}
- name = "Darnell Turcotte"
- secret_id = "...my_secret_id..."
- workspace_id = "540ef53a-34a1-4b8f-a997-31adc05d85ae"
+ definition_id = "0c173d4d-6113-43dd-b2a9-5937ced0062e"
+ name = "Shelia Hettinger"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4e78152c-bd26-46e4-812d-05e7f58d4a06"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_punk_api" "my_source_punkapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,7 +52,6 @@ Required:
- `brewed_after` (String) To extract specific data with Unique ID
- `brewed_before` (String) To extract specific data with Unique ID
-- `source_type` (String) must be one of ["punk-api"]
Optional:
diff --git a/docs/resources/source_pypi.md b/docs/resources/source_pypi.md
index a6b79c0f9..3ca2bf16c 100644
--- a/docs/resources/source_pypi.md
+++ b/docs/resources/source_pypi.md
@@ -16,12 +16,12 @@ SourcePypi Resource
resource "airbyte_source_pypi" "my_source_pypi" {
configuration = {
project_name = "sampleproject"
- source_type = "pypi"
version = "1.2.0"
}
- name = "Antonia Wintheiser"
- secret_id = "...my_secret_id..."
- workspace_id = "0fb38742-90d3-4365-a1ec-a16ef89451bd"
+ definition_id = "25cbff5b-31f2-4b93-84d3-ebf32902de61"
+ name = "Ann Blanda"
+ secret_id = "...my_secret_id..."
+ workspace_id = "882924ee-80aa-4298-8d84-713ebef014dd"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_pypi" "my_source_pypi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,7 +50,6 @@ resource "airbyte_source_pypi" "my_source_pypi" {
Required:
- `project_name` (String) Name of the project/package. Can only be in lowercase with hyphen. This is the name used using pip command for installing the package.
-- `source_type` (String) must be one of ["pypi"]
Optional:
diff --git a/docs/resources/source_qualaroo.md b/docs/resources/source_qualaroo.md
index 6659a5d39..de27dbeca 100644
--- a/docs/resources/source_qualaroo.md
+++ b/docs/resources/source_qualaroo.md
@@ -15,17 +15,17 @@ SourceQualaroo Resource
```terraform
resource "airbyte_source_qualaroo" "my_source_qualaroo" {
configuration = {
- key = "...my_key..."
- source_type = "qualaroo"
- start_date = "2021-03-01T00:00:00.000Z"
+ key = "...my_key..."
+ start_date = "2021-03-01T00:00:00.000Z"
survey_ids = [
"...",
]
token = "...my_token..."
}
- name = "Sue Thompson"
- secret_id = "...my_secret_id..."
- workspace_id = "b518c4da-1fad-4355-92f0-6d4e5b72f0f5"
+ definition_id = "9af7c7e9-c462-409e-a52c-707cb05c4a8d"
+ name = "Cheryl Schmitt"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4658e520-f854-4a56-b309-cc0ee4bba7fa"
}
```
@@ -35,11 +35,12 @@ resource "airbyte_source_qualaroo" "my_source_qualaroo" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,10 +53,9 @@ resource "airbyte_source_qualaroo" "my_source_qualaroo" {
Required:
-- `key` (String) A Qualaroo token. See the docs for instructions on how to generate it.
-- `source_type` (String) must be one of ["qualaroo"]
+- `key` (String, Sensitive) A Qualaroo token. See the docs for instructions on how to generate it.
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `token` (String) A Qualaroo token. See the docs for instructions on how to generate it.
+- `token` (String, Sensitive) A Qualaroo token. See the docs for instructions on how to generate it.
Optional:
diff --git a/docs/resources/source_quickbooks.md b/docs/resources/source_quickbooks.md
index c23c86ec2..bf297e598 100644
--- a/docs/resources/source_quickbooks.md
+++ b/docs/resources/source_quickbooks.md
@@ -16,23 +16,22 @@ SourceQuickbooks Resource
resource "airbyte_source_quickbooks" "my_source_quickbooks" {
configuration = {
credentials = {
- source_quickbooks_authorization_method_o_auth2_0 = {
+ source_quickbooks_o_auth2_0 = {
access_token = "...my_access_token..."
- auth_type = "oauth2.0"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
realm_id = "...my_realm_id..."
refresh_token = "...my_refresh_token..."
- token_expiry_date = "2022-06-15T23:02:57.447Z"
+ token_expiry_date = "2020-06-15T02:42:19.793Z"
}
}
- sandbox = false
- source_type = "quickbooks"
- start_date = "2021-03-20T00:00:00Z"
+ sandbox = true
+ start_date = "2021-03-20T00:00:00Z"
}
- name = "William Gottlieb"
- secret_id = "...my_secret_id..."
- workspace_id = "e00a1d6e-b943-4464-9d03-084fbba5ccef"
+ definition_id = "054daa84-a4e2-48fe-a10a-8a64b77a4fe6"
+ name = "Patricia Dickens"
+ secret_id = "...my_secret_id..."
+ workspace_id = "88c95001-e515-4b2e-b405-22a67dad65e8"
}
```
@@ -42,11 +41,12 @@ resource "airbyte_source_quickbooks" "my_source_quickbooks" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -60,49 +60,30 @@ resource "airbyte_source_quickbooks" "my_source_quickbooks" {
Required:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `sandbox` (Boolean) Determines whether to use the sandbox or production environment.
-- `source_type` (String) must be one of ["quickbooks"]
- `start_date` (String) The default value to use if no bookmark exists for an endpoint (rfc3339 date string). E.g, 2021-03-20T00:00:00Z. Any data before this date will not be replicated.
-
-### Nested Schema for `configuration.credentials`
-
Optional:
-- `source_quickbooks_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_quickbooks_authorization_method_o_auth2_0))
-- `source_quickbooks_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_quickbooks_update_authorization_method_o_auth2_0))
-
-
-### Nested Schema for `configuration.credentials.source_quickbooks_authorization_method_o_auth2_0`
+- `sandbox` (Boolean) Default: false
+Determines whether to use the sandbox or production environment.
-Required:
-
-- `access_token` (String) Access token fot making authenticated requests.
-- `client_id` (String) Identifies which app is making the request. Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
-- `client_secret` (String) Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
-- `realm_id` (String) Labeled Company ID. The Make API Calls panel is populated with the realm id and the current access token.
-- `refresh_token` (String) A token used when refreshing the access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
+
+### Nested Schema for `configuration.credentials`
Optional:
-- `auth_type` (String) must be one of ["oauth2.0"]
-
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_quickbooks_update_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Access token fot making authenticated requests.
+- `access_token` (String, Sensitive) Access token for making authenticated requests.
- `client_id` (String) Identifies which app is making the request. Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
- `client_secret` (String) Obtain this value from the Keys tab on the app profile via My Apps on the developer site. There are two versions of this key: development and production.
- `realm_id` (String) Labeled Company ID. The Make API Calls panel is populated with the realm id and the current access token.
-- `refresh_token` (String) A token used when refreshing the access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
+- `refresh_token` (String, Sensitive) A token used when refreshing the access token.
+- `token_expiry_date` (String, Sensitive) The date-time when the access token should be refreshed.
diff --git a/docs/resources/source_railz.md b/docs/resources/source_railz.md
index 09a7c18fa..b353dc15e 100644
--- a/docs/resources/source_railz.md
+++ b/docs/resources/source_railz.md
@@ -15,14 +15,14 @@ SourceRailz Resource
```terraform
resource "airbyte_source_railz" "my_source_railz" {
configuration = {
- client_id = "...my_client_id..."
- secret_key = "...my_secret_key..."
- source_type = "railz"
- start_date = "...my_start_date..."
+ client_id = "...my_client_id..."
+ secret_key = "...my_secret_key..."
+ start_date = "...my_start_date..."
}
- name = "Clyde Schmeler Jr."
- secret_id = "...my_secret_id..."
- workspace_id = "fe51e528-a45a-4c82-b85f-8bc2caba8da4"
+ definition_id = "ae1d217c-0fcb-4e7d-ad34-33ea862799ca"
+ name = "Alvin Roob"
+ secret_id = "...my_secret_id..."
+ workspace_id = "833469d3-410e-4395-a0aa-c55dc9d09788"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_railz" "my_source_railz" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,8 +51,7 @@ resource "airbyte_source_railz" "my_source_railz" {
Required:
- `client_id` (String) Client ID (client_id)
-- `secret_key` (String) Secret key (secret_key)
-- `source_type` (String) must be one of ["railz"]
+- `secret_key` (String, Sensitive) Secret key (secret_key)
- `start_date` (String) Start date
diff --git a/docs/resources/source_recharge.md b/docs/resources/source_recharge.md
index c3ab7a31b..a867ef726 100644
--- a/docs/resources/source_recharge.md
+++ b/docs/resources/source_recharge.md
@@ -16,12 +16,12 @@ SourceRecharge Resource
resource "airbyte_source_recharge" "my_source_recharge" {
configuration = {
access_token = "...my_access_token..."
- source_type = "recharge"
start_date = "2021-05-14T00:00:00Z"
}
- name = "Angel Stokes"
- secret_id = "...my_secret_id..."
- workspace_id = "7ff4711a-a1bc-474b-86ce-cc74f77b4848"
+ definition_id = "427992f6-5a71-405f-ae57-0ad372ede129"
+ name = "Hugo Hagenes"
+ secret_id = "...my_secret_id..."
+ workspace_id = "1410fd6e-7ec4-4881-ab0c-62b8975147c3"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_recharge" "my_source_recharge" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_recharge" "my_source_recharge" {
Required:
-- `access_token` (String) The value of the Access Token generated. See the docs for more information.
-- `source_type` (String) must be one of ["recharge"]
+- `access_token` (String, Sensitive) The value of the Access Token generated. See the docs for more information.
- `start_date` (String) The date from which you'd like to replicate data for Recharge API, in the format YYYY-MM-DDT00:00:00Z. Any data before this date will not be replicated.
diff --git a/docs/resources/source_recreation.md b/docs/resources/source_recreation.md
index 9b923b0f5..67edd7294 100644
--- a/docs/resources/source_recreation.md
+++ b/docs/resources/source_recreation.md
@@ -17,11 +17,11 @@ resource "airbyte_source_recreation" "my_source_recreation" {
configuration = {
apikey = "...my_apikey..."
query_campsites = "...my_query_campsites..."
- source_type = "recreation"
}
- name = "Taylor Kertzmann"
- secret_id = "...my_secret_id..."
- workspace_id = "f0441d2c-3b80-4809-8373-e060459bebba"
+ definition_id = "e6c8bd1c-ccad-43b1-8406-5293193648ca"
+ name = "Naomi Dietrich"
+ secret_id = "...my_secret_id..."
+ workspace_id = "8652384b-db82-41f9-88ef-a40dc207c50e"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_recreation" "my_source_recreation" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_recreation" "my_source_recreation" {
Required:
-- `apikey` (String) API Key
-- `source_type` (String) must be one of ["recreation"]
+- `apikey` (String, Sensitive) API Key
Optional:
diff --git a/docs/resources/source_recruitee.md b/docs/resources/source_recruitee.md
index ff172c647..5c11e508f 100644
--- a/docs/resources/source_recruitee.md
+++ b/docs/resources/source_recruitee.md
@@ -15,13 +15,13 @@ SourceRecruitee Resource
```terraform
resource "airbyte_source_recruitee" "my_source_recruitee" {
configuration = {
- api_key = "...my_api_key..."
- company_id = 9
- source_type = "recruitee"
+ api_key = "...my_api_key..."
+ company_id = 4
}
- name = "Mrs. Tina White"
- secret_id = "...my_secret_id..."
- workspace_id = "6bcf1525-58da-4a95-be6c-d02756c354aa"
+ definition_id = "f1211e1f-cb26-4b90-8c0d-f941919892a2"
+ name = "Mrs. Sherri Rosenbaum"
+ secret_id = "...my_secret_id..."
+ workspace_id = "af7bc34c-463b-4838-9c5f-976535f73a45"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_recruitee" "my_source_recruitee" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_recruitee" "my_source_recruitee" {
Required:
-- `api_key` (String) Recruitee API Key. See here.
+- `api_key` (String, Sensitive) Recruitee API Key. See here.
- `company_id` (Number) Recruitee Company ID. You can also find this ID on the Recruitee API tokens page.
-- `source_type` (String) must be one of ["recruitee"]
diff --git a/docs/resources/source_recurly.md b/docs/resources/source_recurly.md
index 4d6e8253e..4f6d49ab3 100644
--- a/docs/resources/source_recurly.md
+++ b/docs/resources/source_recurly.md
@@ -15,14 +15,14 @@ SourceRecurly Resource
```terraform
resource "airbyte_source_recurly" "my_source_recurly" {
configuration = {
- api_key = "...my_api_key..."
- begin_time = "2021-12-01T00:00:00"
- end_time = "2021-12-01T00:00:00"
- source_type = "recurly"
+ api_key = "...my_api_key..."
+ begin_time = "2021-12-01T00:00:00"
+ end_time = "2021-12-01T00:00:00"
}
- name = "Josephine Dibbert"
- secret_id = "...my_secret_id..."
- workspace_id = "7e1763c5-208c-423e-9802-d82f0d45eb4a"
+ definition_id = "535fff5d-1d34-4f0c-8e54-86a3a161dc53"
+ name = "Mrs. Glen Gottlieb"
+ secret_id = "...my_secret_id..."
+ workspace_id = "acb8b41d-5bf9-44a0-9397-d3dfd90aff66"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_recurly" "my_source_recurly" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,8 +50,7 @@ resource "airbyte_source_recurly" "my_source_recurly" {
Required:
-- `api_key` (String) Recurly API Key. See the docs for more information on how to generate this key.
-- `source_type` (String) must be one of ["recurly"]
+- `api_key` (String, Sensitive) Recurly API Key. See the docs for more information on how to generate this key.
Optional:
diff --git a/docs/resources/source_redshift.md b/docs/resources/source_redshift.md
index 8a38f454e..32d4b1b2b 100644
--- a/docs/resources/source_redshift.md
+++ b/docs/resources/source_redshift.md
@@ -23,12 +23,12 @@ resource "airbyte_source_redshift" "my_source_redshift" {
schemas = [
"...",
]
- source_type = "redshift"
- username = "Nelda.Jaskolski"
+ username = "Elton_Morissette"
}
- name = "Clay Hintz"
- secret_id = "...my_secret_id..."
- workspace_id = "c18edc7f-787e-432e-84b3-d3ed0c5670ef"
+ definition_id = "b974a7d8-001c-4be4-b7da-a2d7b021550a"
+ name = "Jake Ondricka"
+ secret_id = "...my_secret_id..."
+ workspace_id = "f01cf56e-e294-4adb-85bd-340789cf0b8d"
}
```
@@ -38,11 +38,12 @@ resource "airbyte_source_redshift" "my_source_redshift" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,14 +58,14 @@ Required:
- `database` (String) Name of the database.
- `host` (String) Host Endpoint of the Redshift Cluster (must include the cluster-id, region and end with .redshift.amazonaws.com).
-- `password` (String) Password associated with the username.
-- `port` (Number) Port of the database.
-- `source_type` (String) must be one of ["redshift"]
+- `password` (String, Sensitive) Password associated with the username.
- `username` (String) Username to use to access the database.
Optional:
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
+- `port` (Number) Default: 5439
+Port of the database.
- `schemas` (List of String) The list of schemas to sync from. Specify one or more explicitly or keep empty to process all schemas. Schema names are case sensitive.
diff --git a/docs/resources/source_retently.md b/docs/resources/source_retently.md
index 6cf7842b1..d2e98692e 100644
--- a/docs/resources/source_retently.md
+++ b/docs/resources/source_retently.md
@@ -16,18 +16,18 @@ SourceRetently Resource
resource "airbyte_source_retently" "my_source_retently" {
configuration = {
credentials = {
- source_retently_authentication_mechanism_authenticate_via_retently_o_auth_ = {
- auth_type = "Client"
- client_id = "...my_client_id..."
- client_secret = "...my_client_secret..."
- refresh_token = "...my_refresh_token..."
+ authenticate_via_retently_o_auth = {
+ additional_properties = "{ \"see\": \"documentation\" }"
+ client_id = "...my_client_id..."
+ client_secret = "...my_client_secret..."
+ refresh_token = "...my_refresh_token..."
}
}
- source_type = "retently"
}
- name = "Kelly Pfeffer"
- secret_id = "...my_secret_id..."
- workspace_id = "c9f1cc50-3f6c-439b-8d0a-6290f957f385"
+ definition_id = "2c041244-3656-49fd-a4cd-2bcf08a635d7"
+ name = "Dave Schinner"
+ secret_id = "...my_secret_id..."
+ workspace_id = "6ceccfae-93f7-4f0f-8c4b-4f8d4f6833e1"
}
```
@@ -37,11 +37,12 @@ resource "airbyte_source_retently" "my_source_retently" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,71 +56,38 @@ resource "airbyte_source_retently" "my_source_retently" {
Optional:
- `credentials` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["retently"]
### Nested Schema for `configuration.credentials`
Optional:
-- `source_retently_authentication_mechanism_authenticate_via_retently_o_auth` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_authentication_mechanism_authenticate_via_retently_o_auth))
-- `source_retently_authentication_mechanism_authenticate_with_api_token` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_authentication_mechanism_authenticate_with_api_token))
-- `source_retently_update_authentication_mechanism_authenticate_via_retently_o_auth` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_update_authentication_mechanism_authenticate_via_retently_o_auth))
-- `source_retently_update_authentication_mechanism_authenticate_with_api_token` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--source_retently_update_authentication_mechanism_authenticate_with_api_token))
+- `authenticate_via_retently_o_auth` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_retently_o_auth))
+- `authenticate_with_api_token` (Attributes) Choose how to authenticate to Retently (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_with_api_token))
-
-### Nested Schema for `configuration.credentials.source_retently_authentication_mechanism_authenticate_via_retently_o_auth`
+
+### Nested Schema for `configuration.credentials.authenticate_via_retently_o_auth`
Required:
- `client_id` (String) The Client ID of your Retently developer application.
- `client_secret` (String) The Client Secret of your Retently developer application.
-- `refresh_token` (String) Retently Refresh Token which can be used to fetch new Bearer Tokens when the current one expires.
+- `refresh_token` (String, Sensitive) Retently Refresh Token which can be used to fetch new Bearer Tokens when the current one expires.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Client"]
-
-### Nested Schema for `configuration.credentials.source_retently_authentication_mechanism_authenticate_with_api_token`
+
+### Nested Schema for `configuration.credentials.authenticate_with_api_token`
Required:
-- `api_key` (String) Retently API Token. See the docs for more information on how to obtain this key.
+- `api_key` (String, Sensitive) Retently API Token. See the docs for more information on how to obtain this key.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_retently_update_authentication_mechanism_authenticate_via_retently_o_auth`
-
-Required:
-
-- `client_id` (String) The Client ID of your Retently developer application.
-- `client_secret` (String) The Client Secret of your Retently developer application.
-- `refresh_token` (String) Retently Refresh Token which can be used to fetch new Bearer Tokens when the current one expires.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Client"]
-
-
-
-### Nested Schema for `configuration.credentials.source_retently_update_authentication_mechanism_authenticate_with_api_token`
-
-Required:
-
-- `api_key` (String) Retently API Token. See the docs for more information on how to obtain this key.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["Token"]
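
The renamed credential variants above can be sketched in HCL (a minimal illustration using the new `authenticate_with_api_token` key from this change; the resource name and all values are placeholders):

```terraform
resource "airbyte_source_retently" "example" {
  configuration = {
    credentials = {
      # API-token variant; the OAuth variant would use
      # authenticate_via_retently_o_auth instead.
      authenticate_with_api_token = {
        api_key = "...my_api_token..."
      }
    }
  }
  name         = "retently-prod"
  workspace_id = "00000000-0000-0000-0000-000000000000"
}
```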
diff --git a/docs/resources/source_rki_covid.md b/docs/resources/source_rki_covid.md
index 9f0c9d8c2..7fa499c52 100644
--- a/docs/resources/source_rki_covid.md
+++ b/docs/resources/source_rki_covid.md
@@ -15,12 +15,12 @@ SourceRkiCovid Resource
```terraform
resource "airbyte_source_rki_covid" "my_source_rkicovid" {
configuration = {
- source_type = "rki-covid"
- start_date = "...my_start_date..."
+ start_date = "...my_start_date..."
}
- name = "Penny Morissette"
- secret_id = "...my_secret_id..."
- workspace_id = "7ef807aa-e03f-433c-a79f-b9de4032ba26"
+ definition_id = "f3303ab0-45c8-491f-a9c8-dcb6cc1cd73d"
+ name = "Leticia Zieme Sr."
+ secret_id = "...my_secret_id..."
+ workspace_id = "36d5989e-7dba-4ce4-805a-6307276c58b5"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_rki_covid" "my_source_rkicovid" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_rki_covid" "my_source_rkicovid" {
Required:
-- `source_type` (String) must be one of ["rki-covid"]
- `start_date` (String) UTC date in the format 2017-01-25. Any data before this date will not be replicated.
diff --git a/docs/resources/source_rss.md b/docs/resources/source_rss.md
index dfb2fe4b5..82fd10a38 100644
--- a/docs/resources/source_rss.md
+++ b/docs/resources/source_rss.md
@@ -15,12 +15,12 @@ SourceRss Resource
```terraform
resource "airbyte_source_rss" "my_source_rss" {
configuration = {
- source_type = "rss"
- url = "...my_url..."
+ url = "...my_url..."
}
- name = "Gustavo Donnelly"
- secret_id = "...my_secret_id..."
- workspace_id = "ba9216bc-b415-4835-8736-41723133edc0"
+ definition_id = "da21f739-86a7-41e9-92c2-b81056bc977a"
+ name = "Alison Wunsch"
+ secret_id = "...my_secret_id..."
+ workspace_id = "ff8dd835-d804-427d-a3a4-e1d8c723c8e5"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_rss" "my_source_rss" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_rss" "my_source_rss" {
Required:
-- `source_type` (String) must be one of ["rss"]
- `url` (String) RSS Feed URL
diff --git a/docs/resources/source_s3.md b/docs/resources/source_s3.md
index de1bd4527..d4403ae72 100644
--- a/docs/resources/source_s3.md
+++ b/docs/resources/source_s3.md
@@ -19,11 +19,9 @@ resource "airbyte_source_s3" "my_source_s3" {
aws_secret_access_key = "...my_aws_secret_access_key..."
bucket = "...my_bucket..."
dataset = "...my_dataset..."
- endpoint = "...my_endpoint..."
+ endpoint = "https://my-s3-endpoint.com"
format = {
- source_s3_file_format_avro = {
- filetype = "avro"
- }
+ avro = {}
}
path_pattern = "**"
provider = {
@@ -34,17 +32,14 @@ resource "airbyte_source_s3" "my_source_s3" {
path_prefix = "...my_path_prefix..."
start_date = "2021-01-01T00:00:00Z"
}
- schema = "{\"column_1\": \"number\", \"column_2\": \"string\", \"column_3\": \"array\", \"column_4\": \"object\", \"column_5\": \"boolean\"}"
- source_type = "s3"
- start_date = "2021-01-01T00:00:00.000000Z"
+ schema = "{\"column_1\": \"number\", \"column_2\": \"string\", \"column_3\": \"array\", \"column_4\": \"object\", \"column_5\": \"boolean\"}"
+ start_date = "2021-01-01T00:00:00.000000Z"
streams = [
{
- days_to_sync_if_history_is_full = 1
- file_type = "...my_file_type..."
+ days_to_sync_if_history_is_full = 3
format = {
- source_s3_file_based_stream_config_format_avro_format = {
- double_as_string = true
- filetype = "avro"
+ source_s3_avro_format = {
+ double_as_string = false
}
}
globs = [
@@ -52,16 +47,17 @@ resource "airbyte_source_s3" "my_source_s3" {
]
input_schema = "...my_input_schema..."
legacy_prefix = "...my_legacy_prefix..."
- name = "Flora Rempel"
+ name = "Tyler Grimes"
primary_key = "...my_primary_key..."
schemaless = false
validation_policy = "Skip Record"
},
]
}
- name = "Jacqueline Kiehn"
- secret_id = "...my_secret_id..."
- workspace_id = "2c22c553-5049-45c5-9bb3-c57c1e4981e8"
+ definition_id = "5b5a324c-6128-4aab-bad0-730782c3e822"
+ name = "Mr. Phillip Hermann DVM"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3e25c699-48d0-4388-851e-c06fd3b8cc64"
}
```
@@ -72,11 +68,12 @@ resource "airbyte_source_s3" "my_source_s3" {
- `configuration` (Attributes) NOTE: When this Spec is changed, legacy_config_transformer.py must also be modified to uptake the changes
because it is responsible for converting legacy S3 v3 configs into v4 configs using the File-Based CDK. (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -90,19 +87,20 @@ because it is responsible for converting legacy S3 v3 configs into v4 configs us
Required:
- `bucket` (String) Name of the S3 bucket where the file(s) exist.
-- `source_type` (String) must be one of ["s3"]
- `streams` (Attributes List) Each instance of this configuration defines a stream. Use this to define which files belong in the stream, their format, and how they should be parsed and validated. When sending data to warehouse destination such as Snowflake or BigQuery, each stream is a separate table. (see [below for nested schema](#nestedatt--configuration--streams))
Optional:
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
+- `aws_access_key_id` (String, Sensitive) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
+- `aws_secret_access_key` (String, Sensitive) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
- `dataset` (String) Deprecated and will be removed soon. Please do not use this field anymore and use streams.name instead. The name of the stream you would like this source to output. Can contain letters, numbers, or underscores.
-- `endpoint` (String) Endpoint to an S3 compatible service. Leave empty to use AWS.
+- `endpoint` (String) Default: ""
+Endpoint to an S3 compatible service. Leave empty to use AWS. The custom endpoint must be secure, but the 'https' prefix is not required.
- `format` (Attributes) Deprecated and will be removed soon. Please do not use this field anymore and use streams.format instead. The format of the files you'd like to replicate (see [below for nested schema](#nestedatt--configuration--format))
- `path_pattern` (String) Deprecated and will be removed soon. Please do not use this field anymore and use streams.globs instead. A regular expression which tells the connector which files to replicate. All files which match this pattern will be replicated. Use | to separate multiple patterns. See this page to understand pattern syntax (GLOBSTAR and SPLIT flags are enabled). Use pattern ** to pick up all files.
- `provider` (Attributes) Deprecated and will be removed soon. Please do not use this field anymore and use bucket, aws_access_key_id, aws_secret_access_key and endpoint instead. Use this to load files from S3 or S3-compatible services (see [below for nested schema](#nestedatt--configuration--provider))
-- `schema` (String) Deprecated and will be removed soon. Please do not use this field anymore and use streams.input_schema instead. Optionally provide a schema to enforce, as a valid JSON string. Ensure this is a mapping of { "column" : "type" }, where types are valid JSON Schema datatypes. Leave as {} to auto-infer the schema.
+- `schema` (String) Default: "{}"
+Deprecated and will be removed soon. Please do not use this field anymore and use streams.input_schema instead. Optionally provide a schema to enforce, as a valid JSON string. Ensure this is a mapping of { "column" : "type" }, where types are valid JSON Schema datatypes. Leave as {} to auto-infer the schema.
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00.000000Z. Any file modified before this date will not be replicated.
@@ -110,19 +108,20 @@ Optional:
Required:
-- `file_type` (String) The data file type that is being extracted for a stream.
+- `format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format))
- `name` (String) The name of the stream.
Optional:
-- `days_to_sync_if_history_is_full` (Number) When the state history of the file store is full, syncs will only read files that were last modified in the provided day range.
-- `format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format))
+- `days_to_sync_if_history_is_full` (Number) Default: 3
+When the state history of the file store is full, syncs will only read files that were last modified in the provided day range.
- `globs` (List of String) The pattern used to specify which files should be selected from the file system. For more information on glob pattern matching look here.
- `input_schema` (String) The schema that will be used to validate records extracted from the file. This will override the stream schema that is auto-detected from incoming files.
- `legacy_prefix` (String) The path prefix configured in v3 versions of the S3 connector. This option is deprecated in favor of a single glob.
-- `primary_key` (String) The column or columns (for a composite key) that serves as the unique identifier of a record.
-- `schemaless` (Boolean) When enabled, syncs will not validate or structure records against the stream's schema.
-- `validation_policy` (String) must be one of ["Emit Record", "Skip Record", "Wait for Discover"]
+- `primary_key` (String, Sensitive) The column or columns (for a composite key) that serves as the unique identifier of a record.
+- `schemaless` (Boolean) Default: false
+When enabled, syncs will not validate or structure records against the stream's schema.
+- `validation_policy` (String) must be one of ["Emit Record", "Skip Record", "Wait for Discover"]; Default: "Emit Record"
The name of the validation policy that dictates sync behavior when a record does not adhere to the stream schema.
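
The stream options above can be combined into a single `streams` entry, sketched here as an illustrative fragment that relies on the documented defaults (`days_to_sync_if_history_is_full` = 3, `validation_policy` = "Emit Record"); the stream name and glob values are placeholders:

```terraform
streams = [
  {
    name = "projects"
    # One format variant per stream; an empty block accepts the defaults.
    format = {
      jsonl_format = {}
    }
    globs             = ["data/**/*.jsonl"]
    validation_policy = "Skip Record"
  },
]
```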
@@ -130,185 +129,95 @@ The name of the validation policy that dictates sync behavior when a record does
Optional:
-- `source_s3_file_based_stream_config_format_avro_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_avro_format))
-- `source_s3_file_based_stream_config_format_csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_csv_format))
-- `source_s3_file_based_stream_config_format_jsonl_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_jsonl_format))
-- `source_s3_file_based_stream_config_format_parquet_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_file_based_stream_config_format_parquet_format))
-- `source_s3_update_file_based_stream_config_format_avro_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_avro_format))
-- `source_s3_update_file_based_stream_config_format_csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_csv_format))
-- `source_s3_update_file_based_stream_config_format_jsonl_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_jsonl_format))
-- `source_s3_update_file_based_stream_config_format_parquet_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format))
+- `avro_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--avro_format))
+- `csv_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--csv_format))
+- `document_file_type_format_experimental` (Attributes) Extract text from document formats (.pdf, .docx, .md, .pptx) and emit as one record per file. (see [below for nested schema](#nestedatt--configuration--streams--format--document_file_type_format_experimental))
+- `jsonl_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--jsonl_format))
+- `parquet_format` (Attributes) The configuration options that are used to alter how to read incoming files that deviate from the standard formatting. (see [below for nested schema](#nestedatt--configuration--streams--format--parquet_format))
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
+
+### Nested Schema for `configuration.streams.format.avro_format`
Optional:
-- `double_as_string` (Boolean) Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision because there can be a loss precision when handling floating point numbers.
-- `filetype` (String) must be one of ["avro"]
+- `double_as_string` (Boolean) Default: false
+Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision because there can be a loss of precision when handling floating point numbers.
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
+
+### Nested Schema for `configuration.streams.format.csv_format`
Optional:
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
+- `delimiter` (String) Default: ","
+The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
+- `double_quote` (Boolean) Default: true
+Whether two quotes in a quoted CSV value denote a single quote in the data.
+- `encoding` (String) Default: "utf8"
+The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
- `false_values` (List of String) A set of case-sensitive strings that should be interpreted as false values.
-- `filetype` (String) must be one of ["csv"]
-- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition))
-- `inference_type` (String) must be one of ["None", "Primitive Types Only"]
+- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--parquet_format--header_definition))
+- `inference_type` (String) must be one of ["None", "Primitive Types Only"]; Default: "None"
How to infer the types of the columns. If none, inference defaults to strings.
- `null_values` (List of String) A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
-- `skip_rows_after_header` (Number) The number of rows to skip after the header row.
-- `skip_rows_before_header` (Number) The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
-- `strings_can_be_null` (Boolean) Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
+- `quote_char` (String) Default: "\""
+The character used for quoting CSV values. To disallow quoting, make this field blank.
+- `skip_rows_after_header` (Number) Default: 0
+The number of rows to skip after the header row.
+- `skip_rows_before_header` (Number) Default: 0
+The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
+- `strings_can_be_null` (Boolean) Default: true
+Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
- `true_values` (List of String) A set of case-sensitive strings that should be interpreted as true values.
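
As a sketch of how these CSV options compose (assuming the renamed `csv_format` and `user_provided` keys introduced in this change; all values are placeholders):

```terraform
format = {
  csv_format = {
    # Tab-delimited input with one junk row before the data.
    delimiter               = "\t"
    skip_rows_before_header = 1
    header_definition = {
      user_provided = {
        column_names = ["id", "name", "created_at"]
      }
    }
  }
}
```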
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition`
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition`
Optional:
-- `source_s3_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated))
-- `source_s3_file_based_stream_config_format_csv_format_csv_header_definition_from_csv` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_file_based_stream_config_format_csv_format_csv_header_definition_from_csv))
-- `source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided))
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition.source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
+- `autogenerated` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--parquet_format--header_definition--autogenerated))
+- `from_csv` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--parquet_format--header_definition--from_csv))
+- `user_provided` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--parquet_format--header_definition--user_provided))
-Optional:
-
-- `header_definition_type` (String) must be one of ["Autogenerated"]
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.autogenerated`
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition.source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.from_csv`
-Optional:
-- `header_definition_type` (String) must be one of ["From CSV"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition.source_s3_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
+
+### Nested Schema for `configuration.streams.format.csv_format.header_definition.user_provided`
Required:
- `column_names` (List of String) The column names that will be used while emitting the CSV records
-Optional:
-
-- `header_definition_type` (String) must be one of ["User Provided"]
-
-
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
-
-Optional:
-
-- `filetype` (String) must be one of ["jsonl"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
-
-Optional:
-
-- `decimal_as_float` (Boolean) Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
-- `filetype` (String) must be one of ["parquet"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
-
-Optional:
-
-- `double_as_string` (Boolean) Whether to convert double fields to strings. This is recommended if you have decimal numbers with a high degree of precision because there can be a loss precision when handling floating point numbers.
-- `filetype` (String) must be one of ["avro"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
-
-Optional:
-
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
-- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
-- `false_values` (List of String) A set of case-sensitive strings that should be interpreted as false values.
-- `filetype` (String) must be one of ["csv"]
-- `header_definition` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition))
-- `inference_type` (String) must be one of ["None", "Primitive Types Only"]
-How to infer the types of the columns. If none, inference default to strings.
-- `null_values` (List of String) A set of case-sensitive strings that should be interpreted as null values. For example, if the value 'NA' should be interpreted as null, enter 'NA' in this field.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
-- `skip_rows_after_header` (Number) The number of rows to skip after the header row.
-- `skip_rows_before_header` (Number) The number of rows to skip before the header row. For example, if the header row is on the 3rd row, enter 2 in this field.
-- `strings_can_be_null` (Boolean) Whether strings can be interpreted as null values. If true, strings that match the null_values set will be interpreted as null. If false, strings that match the null_values set will be interpreted as the string itself.
-- `true_values` (List of String) A set of case-sensitive strings that should be interpreted as true values.
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition`
-
-Optional:
-
-- `source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_autogenerated))
-- `source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_from_csv` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_from_csv))
-- `source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided` (Attributes) How headers will be defined. `User Provided` assumes the CSV does not have a header row and uses the headers provided and `Autogenerated` assumes the CSV does not have a header row and the CDK will generate headers using for `f{i}` where `i` is the index starting from 0. Else, the default behavior is to use the header from the CSV file. If a user wants to autogenerate or provide column names for a CSV having headers, they can skip rows. (see [below for nested schema](#nestedatt--configuration--streams--format--source_s3_update_file_based_stream_config_format_parquet_format--header_definition--source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided))
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition.source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
-
-Optional:
-- `header_definition_type` (String) must be one of ["Autogenerated"]
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition.source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
+
+### Nested Schema for `configuration.streams.format.parquet_format`
Optional:
-- `header_definition_type` (String) must be one of ["From CSV"]
-
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format.header_definition.source_s3_update_file_based_stream_config_format_csv_format_csv_header_definition_user_provided`
-
-Required:
-
-- `column_names` (List of String) The column names that will be used while emitting the CSV records
-
-Optional:
-
-- `header_definition_type` (String) must be one of ["User Provided"]
-
+- `skip_unprocessable_file_types` (Boolean) Default: true
+If true, skip files that cannot be parsed because of their file type and log a warning. If false, fail the sync. Corrupted files with valid file types will still result in a failed sync.
+
+### Nested Schema for `configuration.streams.format.parquet_format`
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
-
-Optional:
-
-- `filetype` (String) must be one of ["jsonl"]
-
-
-### Nested Schema for `configuration.streams.format.source_s3_update_file_based_stream_config_format_parquet_format`
+
+### Nested Schema for `configuration.streams.format.parquet_format`
Optional:
-- `decimal_as_float` (Boolean) Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
-- `filetype` (String) must be one of ["parquet"]
+- `decimal_as_float` (Boolean) Default: false
+Whether to convert decimal fields to floats. There is a loss of precision when converting decimals to floats, so this is not recommended.
@@ -318,111 +227,62 @@ Optional:
Optional:
-- `source_s3_file_format_avro` (Attributes) This connector utilises fastavro for Avro parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_avro))
-- `source_s3_file_format_csv` (Attributes) This connector utilises PyArrow (Apache Arrow) for CSV parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_csv))
-- `source_s3_file_format_jsonl` (Attributes) This connector uses PyArrow for JSON Lines (jsonl) file parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_jsonl))
-- `source_s3_file_format_parquet` (Attributes) This connector utilises PyArrow (Apache Arrow) for Parquet parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_file_format_parquet))
-- `source_s3_update_file_format_avro` (Attributes) This connector utilises fastavro for Avro parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_avro))
-- `source_s3_update_file_format_csv` (Attributes) This connector utilises PyArrow (Apache Arrow) for CSV parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_csv))
-- `source_s3_update_file_format_jsonl` (Attributes) This connector uses PyArrow for JSON Lines (jsonl) file parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_jsonl))
-- `source_s3_update_file_format_parquet` (Attributes) This connector utilises PyArrow (Apache Arrow) for Parquet parsing. (see [below for nested schema](#nestedatt--configuration--format--source_s3_update_file_format_parquet))
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_avro`
-
-Optional:
-
-- `filetype` (String) must be one of ["avro"]
-
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_csv`
-
-Optional:
-
-- `additional_reader_options` (String) Optionally add a valid JSON string here to provide additional options to the csv reader. Mappings must correspond to options detailed here. 'column_types' is used internally to handle schema so overriding that would likely cause problems.
-- `advanced_options` (String) Optionally add a valid JSON string here to provide additional Pyarrow ReadOptions. Specify 'column_names' here if your CSV doesn't have header, or if you want to use custom column names. 'block_size' and 'encoding' are already used above, specify them again here will override the values above.
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
-- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
-- `filetype` (String) must be one of ["csv"]
-- `infer_datatypes` (Boolean) Configures whether a schema for the source should be inferred from the current data or not. If set to false and a custom schema is set, then the manually enforced schema is used. If a schema is not manually set, and this is set to false, then all fields will be read as strings
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in CSV values. Turning this on may affect performance. Leave blank to default to False.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
-
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_jsonl`
-
-Optional:
-
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `filetype` (String) must be one of ["jsonl"]
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in JSON values. Turning this on may affect performance. Leave blank to default to False.
-- `unexpected_field_behavior` (String) must be one of ["ignore", "infer", "error"]
-How JSON fields outside of explicit_schema (if given) are treated. Check PyArrow documentation for details
-
-
-
-### Nested Schema for `configuration.format.source_s3_file_format_parquet`
-
-Optional:
-
-- `batch_size` (Number) Maximum number of records per batch read from the input files. Batches may be smaller if there aren’t enough rows in the file. This option can help avoid out-of-memory errors if your data is particularly wide.
-- `buffer_size` (Number) Perform read buffering when deserializing individual column chunks. By default every group column will be loaded fully to memory. This option can help avoid out-of-memory errors if your data is particularly wide.
-- `columns` (List of String) If you only want to sync a subset of the columns from the file(s), add the columns you want here as a comma-delimited list. Leave it empty to sync all columns.
-- `filetype` (String) must be one of ["parquet"]
-
-
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_avro`
-
-Optional:
+- `avro` (Attributes) This connector utilises fastavro for Avro parsing. (see [below for nested schema](#nestedatt--configuration--format--avro))
+- `csv` (Attributes) This connector utilises PyArrow (Apache Arrow) for CSV parsing. (see [below for nested schema](#nestedatt--configuration--format--csv))
+- `jsonl` (Attributes) This connector uses PyArrow for JSON Lines (jsonl) file parsing. (see [below for nested schema](#nestedatt--configuration--format--jsonl))
+- `parquet` (Attributes) This connector utilises PyArrow (Apache Arrow) for Parquet parsing. (see [below for nested schema](#nestedatt--configuration--format--parquet))
-- `filetype` (String) must be one of ["avro"]
+
+### Nested Schema for `configuration.format.avro`
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_csv`
+
+### Nested Schema for `configuration.format.csv`
Optional:
- `additional_reader_options` (String) Optionally add a valid JSON string here to provide additional options to the csv reader. Mappings must correspond to options detailed here. 'column_types' is used internally to handle schema so overriding that would likely cause problems.
- `advanced_options` (String) Optionally add a valid JSON string here to provide additional Pyarrow ReadOptions. Specify 'column_names' here if your CSV doesn't have a header, or if you want to use custom column names. 'block_size' and 'encoding' are already used above; specifying them again here will override the values above.
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `delimiter` (String) The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
-- `double_quote` (Boolean) Whether two quotes in a quoted CSV value denote a single quote in the data.
-- `encoding` (String) The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
+- `block_size` (Number) Default: 10000
+The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
+- `delimiter` (String) Default: ","
+The character delimiting individual cells in the CSV data. This may only be a 1-character string. For tab-delimited data enter '\t'.
+- `double_quote` (Boolean) Default: true
+Whether two quotes in a quoted CSV value denote a single quote in the data.
+- `encoding` (String) Default: "utf8"
+The character encoding of the CSV data. Leave blank to default to UTF8. See list of python encodings for allowable options.
- `escape_char` (String) The character used for escaping special characters. To disallow escaping, leave this field blank.
-- `filetype` (String) must be one of ["csv"]
-- `infer_datatypes` (Boolean) Configures whether a schema for the source should be inferred from the current data or not. If set to false and a custom schema is set, then the manually enforced schema is used. If a schema is not manually set, and this is set to false, then all fields will be read as strings
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in CSV values. Turning this on may affect performance. Leave blank to default to False.
-- `quote_char` (String) The character used for quoting CSV values. To disallow quoting, make this field blank.
+- `infer_datatypes` (Boolean) Default: true
+Configures whether a schema for the source should be inferred from the current data or not. If set to false and a custom schema is set, then the manually enforced schema is used. If a schema is not manually set, and this is set to false, then all fields will be read as strings.
+- `newlines_in_values` (Boolean) Default: false
+Whether newline characters are allowed in CSV values. Turning this on may affect performance. Leave blank to default to False.
+- `quote_char` (String) Default: "\""
+The character used for quoting CSV values. To disallow quoting, make this field blank.
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_jsonl`
+
+### Nested Schema for `configuration.format.jsonl`
Optional:
-- `block_size` (Number) The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
-- `filetype` (String) must be one of ["jsonl"]
-- `newlines_in_values` (Boolean) Whether newline characters are allowed in JSON values. Turning this on may affect performance. Leave blank to default to False.
-- `unexpected_field_behavior` (String) must be one of ["ignore", "infer", "error"]
+- `block_size` (Number) Default: 0
+The chunk size in bytes to process at a time in memory from each file. If your data is particularly wide and failing during schema detection, increasing this should solve it. Beware of raising this too high as you could hit OOM errors.
+- `newlines_in_values` (Boolean) Default: false
+Whether newline characters are allowed in JSON values. Turning this on may affect performance. Leave blank to default to False.
+- `unexpected_field_behavior` (String) must be one of ["ignore", "infer", "error"]; Default: "infer"
How JSON fields outside of explicit_schema (if given) are treated. Check PyArrow documentation for details
-
-### Nested Schema for `configuration.format.source_s3_update_file_format_parquet`
+
+### Nested Schema for `configuration.format.parquet`
Optional:
-- `batch_size` (Number) Maximum number of records per batch read from the input files. Batches may be smaller if there aren’t enough rows in the file. This option can help avoid out-of-memory errors if your data is particularly wide.
-- `buffer_size` (Number) Perform read buffering when deserializing individual column chunks. By default every group column will be loaded fully to memory. This option can help avoid out-of-memory errors if your data is particularly wide.
+- `batch_size` (Number) Default: 65536
+Maximum number of records per batch read from the input files. Batches may be smaller if there aren’t enough rows in the file. This option can help avoid out-of-memory errors if your data is particularly wide.
+- `buffer_size` (Number) Default: 2
+Perform read buffering when deserializing individual column chunks. By default every group column will be loaded fully to memory. This option can help avoid out-of-memory errors if your data is particularly wide.
- `columns` (List of String) If you only want to sync a subset of the columns from the file(s), add the columns you want here as a comma-delimited list. Leave it empty to sync all columns.
-- `filetype` (String) must be one of ["parquet"]
@@ -431,11 +291,13 @@ Optional:
Optional:
-- `aws_access_key_id` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
-- `aws_secret_access_key` (String) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
+- `aws_access_key_id` (String, Sensitive) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
+- `aws_secret_access_key` (String, Sensitive) In order to access private Buckets stored on AWS S3, this connector requires credentials with the proper permissions. If accessing publicly available data, this field is not necessary.
- `bucket` (String) Name of the S3 bucket where the file(s) exist.
-- `endpoint` (String) Endpoint to an S3 compatible service. Leave empty to use AWS.
-- `path_prefix` (String) By providing a path-like prefix (e.g. myFolder/thisTable/) under which all the relevant files sit, we can optimize finding these in S3. This is optional but recommended if your bucket contains many folders/files which you don't need to replicate.
+- `endpoint` (String) Default: ""
+Endpoint to an S3 compatible service. Leave empty to use AWS.
+- `path_prefix` (String) Default: ""
+By providing a path-like prefix (e.g. myFolder/thisTable/) under which all the relevant files sit, we can optimize finding these in S3. This is optional but recommended if your bucket contains many folders/files which you don't need to replicate.
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any file modified before this date will not be replicated.
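
The renamed S3 provider fields above (flattened `format` variants plus the new `""` defaults for `endpoint` and `path_prefix`) can be sketched in context. This is an illustrative fragment, not the canonical example from the docs: the bucket, prefix, name, and workspace values are placeholders, and the nesting is assumed from the schema described above.

```terraform
# Hypothetical sketch of the simplified post-rename schema
# (e.g. "csv" instead of "source_s3_file_format_csv").
resource "airbyte_source_s3" "example" {
  configuration = {
    bucket      = "my-example-bucket"    # placeholder
    endpoint    = ""                     # Default: "" — leave empty to use AWS
    path_prefix = "myFolder/thisTable/"  # placeholder prefix to narrow file listing
    format = {
      csv = {
        delimiter    = ","    # Default: ","
        double_quote = true   # Default: true
        encoding     = "utf8" # Default: "utf8"
        block_size   = 10000  # Default: 10000
      }
    }
  }
  name         = "my-s3-source"                          # placeholder
  workspace_id = "00000000-0000-0000-0000-000000000000"  # placeholder
}
```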
diff --git a/docs/resources/source_salesforce.md b/docs/resources/source_salesforce.md
index d3d243efe..4fe37c42e 100644
--- a/docs/resources/source_salesforce.md
+++ b/docs/resources/source_salesforce.md
@@ -15,24 +15,23 @@ SourceSalesforce Resource
```terraform
resource "airbyte_source_salesforce" "my_source_salesforce" {
configuration = {
- auth_type = "Client"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- force_use_bulk_api = true
+ force_use_bulk_api = false
is_sandbox = false
refresh_token = "...my_refresh_token..."
- source_type = "salesforce"
start_date = "2021-07-25"
streams_criteria = [
{
- criteria = "not contains"
+ criteria = "ends not with"
value = "...my_value..."
},
]
}
- name = "Gregg Boyer Sr."
- secret_id = "...my_secret_id..."
- workspace_id = "ebde64bf-cc54-469d-8015-dfa796206bef"
+ definition_id = "3692db06-d3b4-499d-8bda-e34afcb06318"
+ name = "Ms. Donna Krajcik"
+ secret_id = "...my_secret_id..."
+ workspace_id = "44d2b896-5caa-4bab-ae9d-6378e7243c02"
}
```
@@ -42,11 +41,12 @@ resource "airbyte_source_salesforce" "my_source_salesforce" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -61,14 +61,14 @@ Required:
- `client_id` (String) Enter your Salesforce developer application's Client ID
- `client_secret` (String) Enter your Salesforce developer application's Client secret
-- `refresh_token` (String) Enter your application's Salesforce Refresh Token used for Airbyte to access your Salesforce account.
-- `source_type` (String) must be one of ["salesforce"]
+- `refresh_token` (String, Sensitive) Enter your application's Salesforce Refresh Token used for Airbyte to access your Salesforce account.
Optional:
-- `auth_type` (String) must be one of ["Client"]
-- `force_use_bulk_api` (Boolean) Toggle to use Bulk API (this might cause empty fields for some streams)
-- `is_sandbox` (Boolean) Toggle if you're using a Salesforce Sandbox
+- `force_use_bulk_api` (Boolean) Default: false
+Toggle to use Bulk API (this might cause empty fields for some streams)
+- `is_sandbox` (Boolean) Default: false
+Toggle if you're using a Salesforce Sandbox
- `start_date` (String) Enter the date (or date-time) in the YYYY-MM-DD or YYYY-MM-DDTHH:mm:ssZ format. Airbyte will replicate the data updated on and after this date. If this field is blank, Airbyte will replicate the data for the last two years.
- `streams_criteria` (Attributes List) Add filters to select only required stream based on `SObject` name. Use this field to filter which tables are displayed by this connector. This is useful if your Salesforce account has a large number of tables (>1000), in which case you may find it easier to navigate the UI and speed up the connector's performance if you restrict the tables displayed by this connector. (see [below for nested schema](#nestedatt--configuration--streams_criteria))
@@ -77,7 +77,10 @@ Optional:
Required:
-- `criteria` (String) must be one of ["starts with", "ends with", "contains", "exacts", "starts not with", "ends not with", "not contains", "not exacts"]
- `value` (String)
+Optional:
+
+- `criteria` (String) must be one of ["starts with", "ends with", "contains", "exacts", "starts not with", "ends not with", "not contains", "not exacts"]; Default: "contains"
+
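The `streams_criteria` change above (where `criteria` moves from required to optional with Default: "contains") can be illustrated with a minimal fragment. This is a sketch assuming the schema described above; the SObject value is a placeholder:

```terraform
# Hypothetical sketch: with "contains" as the default, criteria can be omitted.
streams_criteria = [
  {
    value = "Account" # placeholder SObject name; criteria defaults to "contains"
  },
  {
    criteria = "ends not with" # explicit override of the default
    value    = "__History"     # placeholder suffix to exclude
  },
]
```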
diff --git a/docs/resources/source_salesloft.md b/docs/resources/source_salesloft.md
index a90fdd6c6..f2d1d4574 100644
--- a/docs/resources/source_salesloft.md
+++ b/docs/resources/source_salesloft.md
@@ -16,17 +16,16 @@ SourceSalesloft Resource
resource "airbyte_source_salesloft" "my_source_salesloft" {
configuration = {
credentials = {
- source_salesloft_credentials_authenticate_via_api_key = {
- api_key = "...my_api_key..."
- auth_type = "api_key"
+ authenticate_via_api_key = {
+ api_key = "...my_api_key..."
}
}
- source_type = "salesloft"
- start_date = "2020-11-16T00:00:00Z"
+ start_date = "2020-11-16T00:00:00Z"
}
- name = "Lynda Dicki"
- secret_id = "...my_secret_id..."
- workspace_id = "2c1aa010-e9aa-4c2e-9135-586d18f9f97a"
+ definition_id = "c073abf4-dfeb-4d41-8e5a-603e6b3fca03"
+ name = "Terrance Corwin"
+ secret_id = "...my_secret_id..."
+ workspace_id = "14510264-179a-4403-81bb-87b13a43b1ea"
}
```
@@ -36,11 +35,12 @@ resource "airbyte_source_salesloft" "my_source_salesloft" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,7 +54,6 @@ resource "airbyte_source_salesloft" "my_source_salesloft" {
Required:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["salesloft"]
- `start_date` (String) The date from which you'd like to replicate data for Salesloft API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
@@ -62,52 +61,26 @@ Required:
Optional:
-- `source_salesloft_credentials_authenticate_via_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_credentials_authenticate_via_api_key))
-- `source_salesloft_credentials_authenticate_via_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_credentials_authenticate_via_o_auth))
-- `source_salesloft_update_credentials_authenticate_via_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_update_credentials_authenticate_via_api_key))
-- `source_salesloft_update_credentials_authenticate_via_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_salesloft_update_credentials_authenticate_via_o_auth))
+- `authenticate_via_api_key` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_api_key))
+- `authenticate_via_o_auth` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--authenticate_via_o_auth))
-
-### Nested Schema for `configuration.credentials.source_salesloft_credentials_authenticate_via_api_key`
+
+### Nested Schema for `configuration.credentials.authenticate_via_api_key`
Required:
-- `api_key` (String) API Key for making authenticated requests. More instruction on how to find this value in our docs
-- `auth_type` (String) must be one of ["api_key"]
+- `api_key` (String, Sensitive) API Key for making authenticated requests. More instructions on how to find this value can be found in our docs
-
-### Nested Schema for `configuration.credentials.source_salesloft_credentials_authenticate_via_o_auth`
+
+### Nested Schema for `configuration.credentials.authenticate_via_o_auth`
Required:
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
- `client_id` (String) The Client ID of your Salesloft developer application.
- `client_secret` (String) The Client Secret of your Salesloft developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-
-
-### Nested Schema for `configuration.credentials.source_salesloft_update_credentials_authenticate_via_api_key`
-
-Required:
-
-- `api_key` (String) API Key for making authenticated requests. More instruction on how to find this value in our docs
-- `auth_type` (String) must be one of ["api_key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_salesloft_update_credentials_authenticate_via_o_auth`
-
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your Salesloft developer application.
-- `client_secret` (String) The Client Secret of your Salesloft developer application.
-- `refresh_token` (String) The token for obtaining a new access token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
+- `refresh_token` (String, Sensitive) The token for obtaining a new access token.
+- `token_expiry_date` (String, Sensitive) The date-time when the access token should be refreshed.
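
The collapsed Salesloft credential variants above (`authenticate_via_api_key` / `authenticate_via_o_auth`, with `auth_type` dropped) can be sketched as follows. This is an illustrative fragment under the renamed schema; all credential values are placeholders:

```terraform
# Hypothetical sketch of the renamed OAuth credentials block
# (note: no auth_type discriminator field in the new schema).
credentials = {
  authenticate_via_o_auth = {
    access_token      = "...my_access_token..."   # placeholder, Sensitive
    client_id         = "...my_client_id..."      # placeholder
    client_secret     = "...my_client_secret..."  # placeholder
    refresh_token     = "...my_refresh_token..."  # placeholder, Sensitive
    token_expiry_date = "2020-11-16T00:00:00Z"    # placeholder, Sensitive
  }
}
```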
diff --git a/docs/resources/source_sap_fieldglass.md b/docs/resources/source_sap_fieldglass.md
index 5e2a052ca..da686b68a 100644
--- a/docs/resources/source_sap_fieldglass.md
+++ b/docs/resources/source_sap_fieldglass.md
@@ -15,12 +15,12 @@ SourceSapFieldglass Resource
```terraform
resource "airbyte_source_sap_fieldglass" "my_source_sapfieldglass" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "sap-fieldglass"
+ api_key = "...my_api_key..."
}
- name = "Juana Williamson"
- secret_id = "...my_secret_id..."
- workspace_id = "2bf7d67c-a84a-4d99-b41d-61243531870c"
+ definition_id = "d703a4ee-b23f-4e55-b942-b58b6d0d2093"
+ name = "Krystal Krajcik"
+ secret_id = "...my_secret_id..."
+ workspace_id = "8d8619ec-3981-4178-ae44-e5272c20971d"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_sap_fieldglass" "my_source_sapfieldglass" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_sap_fieldglass" "my_source_sapfieldglass" {
Required:
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["sap-fieldglass"]
+- `api_key` (String, Sensitive) API Key
diff --git a/docs/resources/source_secoda.md b/docs/resources/source_secoda.md
index 14e946432..e24a1d03b 100644
--- a/docs/resources/source_secoda.md
+++ b/docs/resources/source_secoda.md
@@ -15,12 +15,12 @@ SourceSecoda Resource
```terraform
resource "airbyte_source_secoda" "my_source_secoda" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "secoda"
+ api_key = "...my_api_key..."
}
- name = "Brett Leannon I"
- secret_id = "...my_secret_id..."
- workspace_id = "ad421bd4-3d1f-40cb-8a00-03eb22d9b3a7"
+ definition_id = "544a65a7-d2b4-4609-94ec-6467c968cce9"
+ name = "Edna Mitchell"
+ secret_id = "...my_secret_id..."
+ workspace_id = "8a35db32-f900-4f8c-be73-78a587702297"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_secoda" "my_source_secoda" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_secoda" "my_source_secoda" {
Required:
-- `api_key` (String) Your API Access Key. See here. The key is case sensitive.
-- `source_type` (String) must be one of ["secoda"]
+- `api_key` (String, Sensitive) Your API Access Key. See here. The key is case sensitive.
diff --git a/docs/resources/source_sendgrid.md b/docs/resources/source_sendgrid.md
index 8ec44ae9f..4348d3692 100644
--- a/docs/resources/source_sendgrid.md
+++ b/docs/resources/source_sendgrid.md
@@ -15,13 +15,13 @@ SourceSendgrid Resource
```terraform
resource "airbyte_source_sendgrid" "my_source_sendgrid" {
configuration = {
- apikey = "...my_apikey..."
- source_type = "sendgrid"
- start_time = "2020-01-01T01:01:01Z"
+ apikey = "...my_apikey..."
+ start_time = "2020-01-01T01:01:01Z"
}
- name = "Shari Pfannerstill"
- secret_id = "...my_secret_id..."
- workspace_id = "41c57d1f-edc2-4050-938d-c3ce185472f9"
+ definition_id = "37ec3d2a-b419-48d2-afe5-e34c931e7a72"
+ name = "Toby McGlynn"
+ secret_id = "...my_secret_id..."
+ workspace_id = "22c4d080-cde0-439d-95e8-c5778ddd1091"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_sendgrid" "my_source_sendgrid" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_sendgrid" "my_source_sendgrid" {
Required:
-- `apikey` (String) API Key, use admin to generate this key.
-- `source_type` (String) must be one of ["sendgrid"]
+- `apikey` (String, Sensitive) API Key, use admin to generate this key.
Optional:
diff --git a/docs/resources/source_sendinblue.md b/docs/resources/source_sendinblue.md
index 0d69e89da..e72437c22 100644
--- a/docs/resources/source_sendinblue.md
+++ b/docs/resources/source_sendinblue.md
@@ -15,12 +15,12 @@ SourceSendinblue Resource
```terraform
resource "airbyte_source_sendinblue" "my_source_sendinblue" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "sendinblue"
+ api_key = "...my_api_key..."
}
- name = "Terence Kassulke III"
- secret_id = "...my_secret_id..."
- workspace_id = "6a8be344-4eac-48b3-a287-5c6c1fe606d0"
+ definition_id = "0de87dfe-701e-4dbd-8d10-cf57eb672b8a"
+ name = "Derek Heller"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3fb2a63d-a091-47a6-951f-ac3e8ec69bab"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_sendinblue" "my_source_sendinblue" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_sendinblue" "my_source_sendinblue" {
Required:
-- `api_key` (String) Your API Key. See here.
-- `source_type` (String) must be one of ["sendinblue"]
+- `api_key` (String, Sensitive) Your API Key. See here.
diff --git a/docs/resources/source_senseforce.md b/docs/resources/source_senseforce.md
index 17f4207e8..dd2c5d97f 100644
--- a/docs/resources/source_senseforce.md
+++ b/docs/resources/source_senseforce.md
@@ -18,13 +18,13 @@ resource "airbyte_source_senseforce" "my_source_senseforce" {
access_token = "...my_access_token..."
backend_url = "https://galaxyapi.senseforce.io"
dataset_id = "8f418098-ca28-4df5-9498-0df9fe78eda7"
- slice_range = 10
- source_type = "senseforce"
+ slice_range = 180
start_date = "2017-01-25"
}
- name = "Rodolfo Langworth"
- secret_id = "...my_secret_id..."
- workspace_id = "e50c1666-1a1d-4913-aa7e-8d53213f3f65"
+ definition_id = "974cd0d5-39af-4231-9a6f-8898d74d7cd0"
+ name = "Lillie Anderson"
+ secret_id = "...my_secret_id..."
+ workspace_id = "3c633751-f6c5-444c-a0e7-3f23dc46e62d"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_senseforce" "my_source_senseforce" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,14 +52,14 @@ resource "airbyte_source_senseforce" "my_source_senseforce" {
Required:
-- `access_token` (String) Your API access token. See here. The toke is case sensitive.
+- `access_token` (String, Sensitive) Your API access token. See here. The token is case sensitive.
- `backend_url` (String) Your Senseforce API backend URL. This is the URL shown during the Login screen. See here for more details. (Note: Most Senseforce backend APIs have the term 'galaxy' in their ULR)
- `dataset_id` (String) The ID of the dataset you want to synchronize. The ID can be found in the URL when opening the dataset. See here for more details. (Note: As the Senseforce API only allows to synchronize a specific dataset, each dataset you want to synchronize needs to be implemented as a separate airbyte source).
-- `source_type` (String) must be one of ["senseforce"]
- `start_date` (String) UTC date and time in the format 2017-01-25. Only data with "Timestamp" after this date will be replicated. Important note: This start date must be set to the first day of where your dataset provides data. If your dataset has data from 2020-10-10 10:21:10, set the start_date to 2020-10-10 or later
Optional:
-- `slice_range` (Number) The time increment used by the connector when requesting data from the Senseforce API. The bigger the value is, the less requests will be made and faster the sync will be. On the other hand, the more seldom the state is persisted and the more likely one could run into rate limites. Furthermore, consider that large chunks of time might take a long time for the Senseforce query to return data - meaning it could take in effect longer than with more smaller time slices. If there are a lot of data per day, set this setting to 1. If there is only very little data per day, you might change the setting to 10 or more.
+- `slice_range` (Number) Default: 10
+The time increment used by the connector when requesting data from the Senseforce API. The bigger the value, the fewer requests will be made and the faster the sync will be. On the other hand, the less often the state is persisted and the more likely one could run into rate limits. Furthermore, consider that large chunks of time might take a long time for the Senseforce query to return data - meaning it could in effect take longer than with smaller time slices. If there is a lot of data per day, set this setting to 1. If there is only very little data per day, you might change the setting to 10 or more.
diff --git a/docs/resources/source_sentry.md b/docs/resources/source_sentry.md
index dfd8b7e3c..3d317537f 100644
--- a/docs/resources/source_sentry.md
+++ b/docs/resources/source_sentry.md
@@ -19,14 +19,14 @@ resource "airbyte_source_sentry" "my_source_sentry" {
discover_fields = [
"{ \"see\": \"documentation\" }",
]
- hostname = "muted-ingredient.biz"
+ hostname = "impressionable-honesty.org"
organization = "...my_organization..."
project = "...my_project..."
- source_type = "sentry"
}
- name = "Krystal Quitzon"
- secret_id = "...my_secret_id..."
- workspace_id = "4c59f0a5-6ceb-4cad-a29c-a79181c95671"
+ definition_id = "72778d5d-b92d-416e-9dcb-06fc1f7a171f"
+ name = "Brooke Breitenberg"
+ secret_id = "...my_secret_id..."
+ workspace_id = "bfddb09b-9a90-43f6-8eb4-a54b7cf533c5"
}
```
@@ -36,11 +36,12 @@ resource "airbyte_source_sentry" "my_source_sentry" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,14 +54,14 @@ resource "airbyte_source_sentry" "my_source_sentry" {
Required:
-- `auth_token` (String) Log into Sentry and then create authentication tokens.For self-hosted, you can find or create authentication tokens by visiting "{instance_url_prefix}/settings/account/api/auth-tokens/"
+- `auth_token` (String, Sensitive) Log into Sentry and then create authentication tokens. For self-hosted, you can find or create authentication tokens by visiting "{instance_url_prefix}/settings/account/api/auth-tokens/"
- `organization` (String) The slug of the organization the groups belong to.
- `project` (String) The name (slug) of the Project you want to sync.
-- `source_type` (String) must be one of ["sentry"]
Optional:
- `discover_fields` (List of String) Fields to retrieve when fetching discover events
-- `hostname` (String) Host name of Sentry API server.For self-hosted, specify your host name here. Otherwise, leave it empty.
+- `hostname` (String) Default: "sentry.io"
+Host name of Sentry API server. For self-hosted, specify your host name here. Otherwise, leave it empty.
diff --git a/docs/resources/source_sftp.md b/docs/resources/source_sftp.md
index c922caf87..8bf287270 100644
--- a/docs/resources/source_sftp.md
+++ b/docs/resources/source_sftp.md
@@ -16,22 +16,21 @@ SourceSftp Resource
resource "airbyte_source_sftp" "my_source_sftp" {
configuration = {
credentials = {
- source_sftp_authentication_wildcard_password_authentication = {
- auth_method = "SSH_PASSWORD_AUTH"
+ source_sftp_password_authentication = {
auth_user_password = "...my_auth_user_password..."
}
}
file_pattern = "log-([0-9]{4})([0-9]{2})([0-9]{2}) - This will filter files which `log-yearmmdd`"
file_types = "csv,json"
folder_path = "/logs/2022"
- host = "www.host.com"
+ host = "192.0.2.1"
port = 22
- source_type = "sftp"
user = "...my_user..."
}
- name = "Miss Tommy Emard"
- secret_id = "...my_secret_id..."
- workspace_id = "665163a3-6385-412a-b252-1b9f2e072467"
+ definition_id = "8a56e1f7-b10c-46dd-9e62-eb5fcf365dcc"
+ name = "Rogelio Schoen"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e41cbe1d-2ecd-4015-81d5-2f6c56d3cf89"
}
```
@@ -41,11 +40,12 @@ resource "airbyte_source_sftp" "my_source_sftp" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -59,64 +59,41 @@ resource "airbyte_source_sftp" "my_source_sftp" {
Required:
- `host` (String) The server host address
-- `port` (Number) The server port
-- `source_type` (String) must be one of ["sftp"]
- `user` (String) The server user
Optional:
- `credentials` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials))
-- `file_pattern` (String) The regular expression to specify files for sync in a chosen Folder Path
-- `file_types` (String) Coma separated file types. Currently only 'csv' and 'json' types are supported.
-- `folder_path` (String) The directory to search files for sync
+- `file_pattern` (String) Default: ""
+The regular expression to specify files for sync in a chosen Folder Path
+- `file_types` (String) Default: "csv,json"
+Comma separated file types. Currently only 'csv' and 'json' types are supported.
+- `folder_path` (String) Default: ""
+The directory to search files for sync
+- `port` (Number) Default: 22
+The server port
### Nested Schema for `configuration.credentials`
Optional:
-- `source_sftp_authentication_wildcard_password_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_authentication_wildcard_password_authentication))
-- `source_sftp_authentication_wildcard_ssh_key_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_authentication_wildcard_ssh_key_authentication))
-- `source_sftp_update_authentication_wildcard_password_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_update_authentication_wildcard_password_authentication))
-- `source_sftp_update_authentication_wildcard_ssh_key_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_sftp_update_authentication_wildcard_ssh_key_authentication))
+- `password_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--password_authentication))
+- `ssh_key_authentication` (Attributes) The server authentication method (see [below for nested schema](#nestedatt--configuration--credentials--ssh_key_authentication))
-
-### Nested Schema for `configuration.credentials.source_sftp_authentication_wildcard_password_authentication`
+
+### Nested Schema for `configuration.credentials.password_authentication`
Required:
-- `auth_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through password authentication
-- `auth_user_password` (String) OS-level password for logging into the jump server host
+- `auth_user_password` (String, Sensitive) OS-level password for logging into the jump server host
-
-### Nested Schema for `configuration.credentials.source_sftp_authentication_wildcard_ssh_key_authentication`
+
+### Nested Schema for `configuration.credentials.ssh_key_authentication`
Required:
-- `auth_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through ssh key
-- `auth_ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
-
-
-
-### Nested Schema for `configuration.credentials.source_sftp_update_authentication_wildcard_password_authentication`
-
-Required:
-
-- `auth_method` (String) must be one of ["SSH_PASSWORD_AUTH"]
-Connect through password authentication
-- `auth_user_password` (String) OS-level password for logging into the jump server host
-
-
-
-### Nested Schema for `configuration.credentials.source_sftp_update_authentication_wildcard_ssh_key_authentication`
-
-Required:
-
-- `auth_method` (String) must be one of ["SSH_KEY_AUTH"]
-Connect through ssh key
-- `auth_ssh_key` (String) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
+- `auth_ssh_key` (String, Sensitive) OS-level user account ssh key credentials in RSA PEM format ( created with ssh-keygen -t rsa -m PEM -f myuser_rsa )
diff --git a/docs/resources/source_sftp_bulk.md b/docs/resources/source_sftp_bulk.md
index 0fc7a449a..2dea85d83 100644
--- a/docs/resources/source_sftp_bulk.md
+++ b/docs/resources/source_sftp_bulk.md
@@ -17,21 +17,21 @@ resource "airbyte_source_sftp_bulk" "my_source_sftpbulk" {
configuration = {
file_most_recent = false
file_pattern = "log-([0-9]{4})([0-9]{2})([0-9]{2}) - This will filter files which `log-yearmmdd`"
- file_type = "json"
+ file_type = "csv"
folder_path = "/logs/2022"
host = "192.0.2.1"
password = "...my_password..."
port = 22
private_key = "...my_private_key..."
separator = ","
- source_type = "sftp-bulk"
start_date = "2017-01-25T00:00:00Z"
stream_name = "ftp_contacts"
- username = "Pearline_Bailey"
+ username = "Serena.Beer65"
}
- name = "Wm Bartoletti"
- secret_id = "...my_secret_id..."
- workspace_id = "50edf22a-94d2-40ec-90ea-41d1f465e851"
+ definition_id = "6ecf0509-1d90-48d9-9001-753384297337"
+ name = "Dr. Jasmine Grimes"
+ secret_id = "...my_secret_id..."
+ workspace_id = "9291353f-9549-4bcc-b4d3-89bbf5d24f5b"
}
```
@@ -41,11 +41,12 @@ resource "airbyte_source_sftp_bulk" "my_source_sftpbulk" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,22 +59,26 @@ resource "airbyte_source_sftp_bulk" "my_source_sftpbulk" {
Required:
-- `folder_path` (String) The directory to search files for sync
- `host` (String) The server host address
-- `port` (Number) The server port
-- `source_type` (String) must be one of ["sftp-bulk"]
- `start_date` (String) The date from which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
- `stream_name` (String) The name of the stream or table you want to create
- `username` (String) The server user
Optional:
-- `file_most_recent` (Boolean) Sync only the most recent file for the configured folder path and file pattern
-- `file_pattern` (String) The regular expression to specify files for sync in a chosen Folder Path
-- `file_type` (String) must be one of ["csv", "json"]
+- `file_most_recent` (Boolean) Default: false
+Sync only the most recent file for the configured folder path and file pattern
+- `file_pattern` (String) Default: ""
+The regular expression to specify files for sync in a chosen Folder Path
+- `file_type` (String) must be one of ["csv", "json"]; Default: "csv"
The file type you want to sync. Currently only 'csv' and 'json' files are supported.
-- `password` (String) OS-level password for logging into the jump server host
-- `private_key` (String) The private key
-- `separator` (String) The separator used in the CSV files. Define None if you want to use the Sniffer functionality
+- `folder_path` (String) Default: ""
+The directory to search files for sync
+- `password` (String, Sensitive) OS-level password for logging into the jump server host
+- `port` (Number) Default: 22
+The server port
+- `private_key` (String, Sensitive) The private key
+- `separator` (String) Default: ","
+The separator used in the CSV files. Define None if you want to use the Sniffer functionality
diff --git a/docs/resources/source_shopify.md b/docs/resources/source_shopify.md
index 3fe62e4f9..d6611b183 100644
--- a/docs/resources/source_shopify.md
+++ b/docs/resources/source_shopify.md
@@ -16,18 +16,17 @@ SourceShopify Resource
resource "airbyte_source_shopify" "my_source_shopify" {
configuration = {
credentials = {
- source_shopify_shopify_authorization_method_api_password = {
+ api_password = {
api_password = "...my_api_password..."
- auth_method = "api_password"
}
}
- shop = "my-store"
- source_type = "shopify"
- start_date = "2022-01-02"
+ shop = "my-store"
+ start_date = "2022-08-02"
}
- name = "Randal Kris"
- secret_id = "...my_secret_id..."
- workspace_id = "df54fdd5-ea95-4433-98da-fb42a8d63388"
+ definition_id = "4e1dc4a0-1d44-4fb9-b610-a4d0de91eaa4"
+ name = "Clinton Baumbach"
+ secret_id = "...my_secret_id..."
+ workspace_id = "cb870eb9-8050-4c39-a745-0657bfd1cb4d"
}
```
@@ -37,11 +36,12 @@ resource "airbyte_source_shopify" "my_source_shopify" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,65 +55,35 @@ resource "airbyte_source_shopify" "my_source_shopify" {
Required:
- `shop` (String) The name of your Shopify store found in the URL. For example, if your URL was https://NAME.myshopify.com, then the name would be 'NAME' or 'NAME.myshopify.com'.
-- `source_type` (String) must be one of ["shopify"]
Optional:
- `credentials` (Attributes) The authorization method to use to retrieve data from Shopify (see [below for nested schema](#nestedatt--configuration--credentials))
-- `start_date` (String) The date you would like to replicate data from. Format: YYYY-MM-DD. Any data before this date will not be replicated.
+- `start_date` (String) Default: "2020-01-01"
+The date you would like to replicate data from. Format: YYYY-MM-DD. Any data before this date will not be replicated.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_shopify_shopify_authorization_method_api_password` (Attributes) API Password Auth (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_shopify_authorization_method_api_password))
-- `source_shopify_shopify_authorization_method_o_auth2_0` (Attributes) OAuth2.0 (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_shopify_authorization_method_o_auth2_0))
-- `source_shopify_update_shopify_authorization_method_api_password` (Attributes) API Password Auth (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_update_shopify_authorization_method_api_password))
-- `source_shopify_update_shopify_authorization_method_o_auth2_0` (Attributes) OAuth2.0 (see [below for nested schema](#nestedatt--configuration--credentials--source_shopify_update_shopify_authorization_method_o_auth2_0))
+- `api_password` (Attributes) API Password Auth (see [below for nested schema](#nestedatt--configuration--credentials--api_password))
+- `o_auth20` (Attributes) OAuth2.0 (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_shopify_shopify_authorization_method_api_password`
+
+### Nested Schema for `configuration.credentials.api_password`
Required:
-- `api_password` (String) The API Password for your private application in the `Shopify` store.
-- `auth_method` (String) must be one of ["api_password"]
+- `api_password` (String, Sensitive) The API Password for your private application in the `Shopify` store.
-
-### Nested Schema for `configuration.credentials.source_shopify_shopify_authorization_method_o_auth2_0`
-
-Required:
-
-- `auth_method` (String) must be one of ["oauth2.0"]
-
-Optional:
-
-- `access_token` (String) The Access Token for making authenticated requests.
-- `client_id` (String) The Client ID of the Shopify developer application.
-- `client_secret` (String) The Client Secret of the Shopify developer application.
-
-
-
-### Nested Schema for `configuration.credentials.source_shopify_update_shopify_authorization_method_api_password`
-
-Required:
-
-- `api_password` (String) The API Password for your private application in the `Shopify` store.
-- `auth_method` (String) must be one of ["api_password"]
-
-
-
-### Nested Schema for `configuration.credentials.source_shopify_update_shopify_authorization_method_o_auth2_0`
-
-Required:
-
-- `auth_method` (String) must be one of ["oauth2.0"]
+
+### Nested Schema for `configuration.credentials.o_auth20`
Optional:
-- `access_token` (String) The Access Token for making authenticated requests.
+- `access_token` (String, Sensitive) The Access Token for making authenticated requests.
- `client_id` (String) The Client ID of the Shopify developer application.
- `client_secret` (String) The Client Secret of the Shopify developer application.
diff --git a/docs/resources/source_shortio.md b/docs/resources/source_shortio.md
index 7ece64e90..844f4ae7a 100644
--- a/docs/resources/source_shortio.md
+++ b/docs/resources/source_shortio.md
@@ -15,14 +15,14 @@ SourceShortio Resource
```terraform
resource "airbyte_source_shortio" "my_source_shortio" {
configuration = {
- domain_id = "...my_domain_id..."
- secret_key = "...my_secret_key..."
- source_type = "shortio"
- start_date = "2023-07-30T03:43:59.244Z"
+ domain_id = "...my_domain_id..."
+ secret_key = "...my_secret_key..."
+ start_date = "2023-07-30T03:43:59.244Z"
}
- name = "Troy Streich I"
- secret_id = "...my_secret_id..."
- workspace_id = "9ea5f9b1-8a24-44fd-a190-39dacd38ed0d"
+ definition_id = "b2aae6c2-0ac9-4c19-9b3e-1c883c55acce"
+ name = "Bethany Donnelly"
+ secret_id = "...my_secret_id..."
+ workspace_id = "29a15c36-062a-463f-9716-d2b265f2af56"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_shortio" "my_source_shortio" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,8 +51,7 @@ resource "airbyte_source_shortio" "my_source_shortio" {
Required:
- `domain_id` (String)
-- `secret_key` (String) Short.io Secret Key
-- `source_type` (String) must be one of ["shortio"]
+- `secret_key` (String, Sensitive) Short.io Secret Key
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
diff --git a/docs/resources/source_slack.md b/docs/resources/source_slack.md
index 3b9897835..c15e7f3a8 100644
--- a/docs/resources/source_slack.md
+++ b/docs/resources/source_slack.md
@@ -19,19 +19,18 @@ resource "airbyte_source_slack" "my_source_slack" {
"...",
]
credentials = {
- source_slack_authentication_mechanism_api_token = {
- api_token = "...my_api_token..."
- option_title = "API Token Credentials"
+ source_slack_api_token = {
+ api_token = "...my_api_token..."
}
}
- join_channels = false
- lookback_window = 7
- source_type = "slack"
+ join_channels = true
+ lookback_window = 14
start_date = "2017-01-25T00:00:00Z"
}
- name = "Dr. Jamie Wintheiser"
- secret_id = "...my_secret_id..."
- workspace_id = "af15920c-90d1-4b49-81f2-bd89c8a32639"
+ definition_id = "dd581ac6-4878-476f-8ad6-15bcace687b3"
+ name = "Ms. Marian Bergstrom"
+ secret_id = "...my_secret_id..."
+ workspace_id = "986a7b02-fd25-4c77-a7b3-6354281d3e7f"
}
```
@@ -41,11 +40,12 @@ resource "airbyte_source_slack" "my_source_slack" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,63 +58,40 @@ resource "airbyte_source_slack" "my_source_slack" {
Required:
-- `join_channels` (Boolean) Whether to join all channels or to sync data only from channels the bot is already in. If false, you'll need to manually add the bot to all the channels from which you'd like to sync messages.
-- `lookback_window` (Number) How far into the past to look for messages in threads, default is 0 days
-- `source_type` (String) must be one of ["slack"]
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
Optional:
- `channel_filter` (List of String) A channel name list (without leading '#' char) which limit the channels from which you'd like to sync. Empty list means no filter.
- `credentials` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials))
+- `join_channels` (Boolean) Default: true
+Whether to join all channels or to sync data only from channels the bot is already in. If false, you'll need to manually add the bot to all the channels from which you'd like to sync messages.
+- `lookback_window` (Number) Default: 0
+How far into the past to look for messages in threads, default is 0 days
### Nested Schema for `configuration.credentials`
Optional:
-- `source_slack_authentication_mechanism_api_token` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_authentication_mechanism_api_token))
-- `source_slack_authentication_mechanism_sign_in_via_slack_o_auth` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_authentication_mechanism_sign_in_via_slack_o_auth))
-- `source_slack_update_authentication_mechanism_api_token` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_update_authentication_mechanism_api_token))
-- `source_slack_update_authentication_mechanism_sign_in_via_slack_o_auth` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--source_slack_update_authentication_mechanism_sign_in_via_slack_o_auth))
+- `api_token` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--api_token))
+- `sign_in_via_slack_o_auth` (Attributes) Choose how to authenticate into Slack (see [below for nested schema](#nestedatt--configuration--credentials--sign_in_via_slack_o_auth))
-
-### Nested Schema for `configuration.credentials.source_slack_authentication_mechanism_api_token`
+
+### Nested Schema for `configuration.credentials.api_token`
Required:
-- `api_token` (String) A Slack bot token. See the docs for instructions on how to generate it.
-- `option_title` (String) must be one of ["API Token Credentials"]
+- `api_token` (String, Sensitive) A Slack bot token. See the docs for instructions on how to generate it.
-
-### Nested Schema for `configuration.credentials.source_slack_authentication_mechanism_sign_in_via_slack_o_auth`
+
+### Nested Schema for `configuration.credentials.sign_in_via_slack_o_auth`
Required:
-- `access_token` (String) Slack access_token. See our docs if you need help generating the token.
+- `access_token` (String, Sensitive) Slack access_token. See our docs if you need help generating the token.
- `client_id` (String) Slack client_id. See our docs if you need help finding this id.
- `client_secret` (String) Slack client_secret. See our docs if you need help finding this secret.
-- `option_title` (String) must be one of ["Default OAuth2.0 authorization"]
-
-
-
-### Nested Schema for `configuration.credentials.source_slack_update_authentication_mechanism_api_token`
-
-Required:
-
-- `api_token` (String) A Slack bot token. See the docs for instructions on how to generate it.
-- `option_title` (String) must be one of ["API Token Credentials"]
-
-
-
-### Nested Schema for `configuration.credentials.source_slack_update_authentication_mechanism_sign_in_via_slack_o_auth`
-
-Required:
-
-- `access_token` (String) Slack access_token. See our docs if you need help generating the token.
-- `client_id` (String) Slack client_id. See our docs if you need help finding this id.
-- `client_secret` (String) Slack client_secret. See our docs if you need help finding this secret.
-- `option_title` (String) must be one of ["Default OAuth2.0 authorization"]
diff --git a/docs/resources/source_smaily.md b/docs/resources/source_smaily.md
index 9094ce421..a7612fa9b 100644
--- a/docs/resources/source_smaily.md
+++ b/docs/resources/source_smaily.md
@@ -18,11 +18,11 @@ resource "airbyte_source_smaily" "my_source_smaily" {
api_password = "...my_api_password..."
api_subdomain = "...my_api_subdomain..."
api_username = "...my_api_username..."
- source_type = "smaily"
}
- name = "Donnie Hauck"
- secret_id = "...my_secret_id..."
- workspace_id = "b6902b88-1a94-4f64-b664-a8f0af8c691d"
+ definition_id = "0bc649fe-5b08-4c82-9c40-ca1ab7663971"
+ name = "Ebony Carroll"
+ secret_id = "...my_secret_id..."
+ workspace_id = "331df025-a154-4586-87cd-fb558f87809d"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_smaily" "my_source_smaily" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,9 +50,8 @@ resource "airbyte_source_smaily" "my_source_smaily" {
Required:
-- `api_password` (String) API user password. See https://smaily.com/help/api/general/create-api-user/
+- `api_password` (String, Sensitive) API user password. See https://smaily.com/help/api/general/create-api-user/
- `api_subdomain` (String) API Subdomain. See https://smaily.com/help/api/general/create-api-user/
- `api_username` (String) API user username. See https://smaily.com/help/api/general/create-api-user/
-- `source_type` (String) must be one of ["smaily"]
diff --git a/docs/resources/source_smartengage.md b/docs/resources/source_smartengage.md
index c654b4525..eca027d34 100644
--- a/docs/resources/source_smartengage.md
+++ b/docs/resources/source_smartengage.md
@@ -15,12 +15,12 @@ SourceSmartengage Resource
```terraform
resource "airbyte_source_smartengage" "my_source_smartengage" {
configuration = {
- api_key = "...my_api_key..."
- source_type = "smartengage"
+ api_key = "...my_api_key..."
}
- name = "Carmen Crist"
- secret_id = "...my_secret_id..."
- workspace_id = "fbaf9476-a2ae-48dc-850c-8a3512c73784"
+ definition_id = "3d1fcf2b-6755-4110-90ec-6c18f2017e88"
+ name = "Neil Pagac"
+ secret_id = "...my_secret_id..."
+ workspace_id = "64f95e84-efb6-4a93-9326-1882dc6ea377"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_smartengage" "my_source_smartengage" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_smartengage" "my_source_smartengage" {
Required:
-- `api_key` (String) API Key
-- `source_type` (String) must be one of ["smartengage"]
+- `api_key` (String, Sensitive) API Key
diff --git a/docs/resources/source_smartsheets.md b/docs/resources/source_smartsheets.md
index ecdad7c4a..07e2dfb28 100644
--- a/docs/resources/source_smartsheets.md
+++ b/docs/resources/source_smartsheets.md
@@ -16,21 +16,20 @@ SourceSmartsheets Resource
resource "airbyte_source_smartsheets" "my_source_smartsheets" {
configuration = {
credentials = {
- source_smartsheets_authorization_method_api_access_token = {
+ api_access_token = {
access_token = "...my_access_token..."
- auth_type = "access_token"
}
}
metadata_fields = [
- "row_access_level",
+ "row_number",
]
- source_type = "smartsheets"
spreadsheet_id = "...my_spreadsheet_id..."
- start_datetime = "2000-01-01T13:00:00-07:00"
+ start_datetime = "2000-01-01T13:00:00"
}
- name = "Joann Bechtelar Jr."
- secret_id = "...my_secret_id..."
- workspace_id = "e966ec73-6d43-4194-b98c-783c92398ed3"
+ definition_id = "a6744848-ac2b-404b-aae9-e175304065f6"
+ name = "Tara King"
+ secret_id = "...my_secret_id..."
+ workspace_id = "901f87c9-df1a-4f8f-9013-d5d0cf403b28"
}
```
@@ -40,11 +39,12 @@ resource "airbyte_source_smartsheets" "my_source_smartsheets" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,77 +58,39 @@ resource "airbyte_source_smartsheets" "my_source_smartsheets" {
Required:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["smartsheets"]
- `spreadsheet_id` (String) The spreadsheet ID. Find it by opening the spreadsheet then navigating to File > Properties
Optional:
- `metadata_fields` (List of String) A List of available columns which metadata can be pulled from.
-- `start_datetime` (String) Only rows modified after this date/time will be replicated. This should be an ISO 8601 string, for instance: `2000-01-01T13:00:00`
+- `start_datetime` (String) Default: "2020-01-01T00:00:00+00:00"
+Only rows modified after this date/time will be replicated. This should be an ISO 8601 string, for instance: `2000-01-01T13:00:00`
### Nested Schema for `configuration.credentials`
Optional:
-- `source_smartsheets_authorization_method_api_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_authorization_method_api_access_token))
-- `source_smartsheets_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_authorization_method_o_auth2_0))
-- `source_smartsheets_update_authorization_method_api_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_update_authorization_method_api_access_token))
-- `source_smartsheets_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_smartsheets_update_authorization_method_o_auth2_0))
+- `api_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--api_access_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_smartsheets_authorization_method_api_access_token`
+
+### Nested Schema for `configuration.credentials.api_access_token`
Required:
-- `access_token` (String) The access token to use for accessing your data from Smartsheets. This access token must be generated by a user with at least read access to the data you'd like to replicate. Generate an access token in the Smartsheets main menu by clicking Account > Apps & Integrations > API Access. See the setup guide for information on how to obtain this token.
+- `access_token` (String, Sensitive) The access token to use for accessing your data from Smartsheets. This access token must be generated by a user with at least read access to the data you'd like to replicate. Generate an access token in the Smartsheets main menu by clicking Account > Apps & Integrations > API Access. See the setup guide for information on how to obtain this token.
-Optional:
-
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_smartsheets_authorization_method_o_auth2_0`
-
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `client_id` (String) The API ID of the SmartSheets developer application.
-- `client_secret` (String) The API Secret the SmartSheets developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-
-### Nested Schema for `configuration.credentials.source_smartsheets_update_authorization_method_api_access_token`
-
-Required:
-
-- `access_token` (String) The access token to use for accessing your data from Smartsheets. This access token must be generated by a user with at least read access to the data you'd like to replicate. Generate an access token in the Smartsheets main menu by clicking Account > Apps & Integrations > API Access. See the setup guide for information on how to obtain this token.
-
-Optional:
-
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_smartsheets_update_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Access Token for making authenticated requests.
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
- `client_id` (String) The API ID of the SmartSheets developer application.
- `client_secret` (String) The API Secret the SmartSheets developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
+- `refresh_token` (String, Sensitive) The key to refresh the expired access_token.
+- `token_expiry_date` (String, Sensitive) The date-time when the access token should be refreshed.
diff --git a/docs/resources/source_snapchat_marketing.md b/docs/resources/source_snapchat_marketing.md
index 058f57af5..ebf90371b 100644
--- a/docs/resources/source_snapchat_marketing.md
+++ b/docs/resources/source_snapchat_marketing.md
@@ -19,12 +19,12 @@ resource "airbyte_source_snapchat_marketing" "my_source_snapchatmarketing" {
client_secret = "...my_client_secret..."
end_date = "2022-01-30"
refresh_token = "...my_refresh_token..."
- source_type = "snapchat-marketing"
start_date = "2022-01-01"
}
- name = "Chelsea Ortiz"
- secret_id = "...my_secret_id..."
- workspace_id = "5ca8649a-70cf-4d5d-a989-b7206451077d"
+ definition_id = "8a6950f0-007e-4330-87d9-5358a56819d2"
+ name = "Rudy Toy"
+ secret_id = "...my_secret_id..."
+ workspace_id = "1d7e3d24-dfd3-4d51-a342-f997d059d38a"
}
```
@@ -34,11 +34,12 @@ resource "airbyte_source_snapchat_marketing" "my_source_snapchatmarketing" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,12 +54,12 @@ Required:
- `client_id` (String) The Client ID of your Snapchat developer application.
- `client_secret` (String) The Client Secret of your Snapchat developer application.
-- `refresh_token` (String) Refresh Token to renew the expired Access Token.
-- `source_type` (String) must be one of ["snapchat-marketing"]
+- `refresh_token` (String, Sensitive) Refresh Token to renew the expired Access Token.
Optional:
- `end_date` (String) Date in the format 2017-01-25. Any data after this date will not be replicated.
-- `start_date` (String) Date in the format 2022-01-01. Any data before this date will not be replicated.
+- `start_date` (String) Default: "2022-01-01"
+Date in the format 2022-01-01. Any data before this date will not be replicated.
diff --git a/docs/resources/source_snowflake.md b/docs/resources/source_snowflake.md
index e2189edc1..eb62d910d 100644
--- a/docs/resources/source_snowflake.md
+++ b/docs/resources/source_snowflake.md
@@ -16,9 +16,8 @@ SourceSnowflake Resource
resource "airbyte_source_snowflake" "my_source_snowflake" {
configuration = {
credentials = {
- source_snowflake_authorization_method_o_auth2_0 = {
+ source_snowflake_o_auth2_0 = {
access_token = "...my_access_token..."
- auth_type = "OAuth"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
@@ -29,12 +28,12 @@ resource "airbyte_source_snowflake" "my_source_snowflake" {
jdbc_url_params = "...my_jdbc_url_params..."
role = "AIRBYTE_ROLE"
schema = "AIRBYTE_SCHEMA"
- source_type = "snowflake"
warehouse = "AIRBYTE_WAREHOUSE"
}
- name = "Katrina Tillman"
- secret_id = "...my_secret_id..."
- workspace_id = "3d492ed1-4b8a-42c1-9545-45e955dcc185"
+ definition_id = "2e5fcf99-c418-476f-a0cb-c1b99ee1e960"
+ name = "Mrs. Jeanette Howell"
+ secret_id = "...my_secret_id..."
+ workspace_id = "0d51b311-4e9e-4d57-941c-3612b0e8c8cf"
}
```
@@ -44,11 +43,12 @@ resource "airbyte_source_snowflake" "my_source_snowflake" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -64,7 +64,6 @@ Required:
- `database` (String) The database you created for Airbyte to access data.
- `host` (String) The host domain of the snowflake instance (must include the account, region, cloud environment, and end with snowflakecomputing.com).
- `role` (String) The role you created for Airbyte to access Snowflake.
-- `source_type` (String) must be one of ["snowflake"]
- `warehouse` (String) The warehouse you created for Airbyte to access data.
Optional:
@@ -78,58 +77,29 @@ Optional:
Optional:
-- `source_snowflake_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_authorization_method_o_auth2_0))
-- `source_snowflake_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_authorization_method_username_and_password))
-- `source_snowflake_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_update_authorization_method_o_auth2_0))
-- `source_snowflake_update_authorization_method_username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_snowflake_update_authorization_method_username_and_password))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
+- `username_and_password` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--username_and_password))
-
-### Nested Schema for `configuration.credentials.source_snowflake_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `auth_type` (String) must be one of ["OAuth"]
- `client_id` (String) The Client ID of your Snowflake developer application.
- `client_secret` (String) The Client Secret of your Snowflake developer application.
Optional:
-- `access_token` (String) Access Token for making authenticated requests.
-- `refresh_token` (String) Refresh Token for making authenticated requests.
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
+- `refresh_token` (String, Sensitive) Refresh Token for making authenticated requests.
-
-### Nested Schema for `configuration.credentials.source_snowflake_authorization_method_username_and_password`
+
+### Nested Schema for `configuration.credentials.username_and_password`
Required:
-- `auth_type` (String) must be one of ["username/password"]
-- `password` (String) The password associated with the username.
-- `username` (String) The username you created to allow Airbyte to access the database.
-
-
-
-### Nested Schema for `configuration.credentials.source_snowflake_update_authorization_method_o_auth2_0`
-
-Required:
-
-- `auth_type` (String) must be one of ["OAuth"]
-- `client_id` (String) The Client ID of your Snowflake developer application.
-- `client_secret` (String) The Client Secret of your Snowflake developer application.
-
-Optional:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `refresh_token` (String) Refresh Token for making authenticated requests.
-
-
-
-### Nested Schema for `configuration.credentials.source_snowflake_update_authorization_method_username_and_password`
-
-Required:
-
-- `auth_type` (String) must be one of ["username/password"]
-- `password` (String) The password associated with the username.
+- `password` (String, Sensitive) The password associated with the username.
- `username` (String) The username you created to allow Airbyte to access the database.
diff --git a/docs/resources/source_sonar_cloud.md b/docs/resources/source_sonar_cloud.md
index 4ef4699fa..681ea4162 100644
--- a/docs/resources/source_sonar_cloud.md
+++ b/docs/resources/source_sonar_cloud.md
@@ -20,13 +20,13 @@ resource "airbyte_source_sonar_cloud" "my_source_sonarcloud" {
]
end_date = "YYYY-MM-DD"
organization = "airbyte"
- source_type = "sonar-cloud"
start_date = "YYYY-MM-DD"
user_token = "...my_user_token..."
}
- name = "Mildred Rosenbaum"
- secret_id = "...my_secret_id..."
- workspace_id = "43ad2daa-784a-4ba3-9230-edf73811a115"
+ definition_id = "d259943d-fa52-4a9e-875a-bffba2c1e7b6"
+ name = "Jose Lindgren"
+ secret_id = "...my_secret_id..."
+ workspace_id = "d761f19b-60aa-4080-8c97-1e60235dc09f"
}
```
@@ -36,11 +36,12 @@ resource "airbyte_source_sonar_cloud" "my_source_sonarcloud" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,10 +54,9 @@ resource "airbyte_source_sonar_cloud" "my_source_sonarcloud" {
Required:
-- `component_keys` (List of String) Comma-separated list of component keys.
+- `component_keys` (List of String, Sensitive) Comma-separated list of component keys.
- `organization` (String) Organization key. See here.
-- `source_type` (String) must be one of ["sonar-cloud"]
-- `user_token` (String) Your User Token. See here. The token is case sensitive.
+- `user_token` (String, Sensitive) Your User Token. See here. The token is case sensitive.
Optional:
diff --git a/docs/resources/source_spacex_api.md b/docs/resources/source_spacex_api.md
index 4fa19f065..342d1eaa1 100644
--- a/docs/resources/source_spacex_api.md
+++ b/docs/resources/source_spacex_api.md
@@ -15,13 +15,13 @@ SourceSpacexAPI Resource
```terraform
resource "airbyte_source_spacex_api" "my_source_spacexapi" {
configuration = {
- id = "382bd7ed-5650-4762-9c58-f4d7396564c2"
- options = "...my_options..."
- source_type = "spacex-api"
+ id = "adad73b7-9d20-4b48-acfd-c6fb504a12b7"
+ options = "...my_options..."
}
- name = "Lee Batz Jr."
- secret_id = "...my_secret_id..."
- workspace_id = "a961d24a-7dbb-48f5-b2d8-92cf7812cb51"
+ definition_id = "723cbf02-23ae-4822-a532-7d8cbc0547dc"
+ name = "Chad Swaniawski"
+ secret_id = "...my_secret_id..."
+ workspace_id = "7628c478-1358-42a6-b537-d9dfc7f45856"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_spacex_api" "my_source_spacexapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,6 +51,5 @@ Optional:
- `id` (String)
- `options` (String)
-- `source_type` (String) must be one of ["spacex-api"]
diff --git a/docs/resources/source_square.md b/docs/resources/source_square.md
index ff8865d50..b0505074e 100644
--- a/docs/resources/source_square.md
+++ b/docs/resources/source_square.md
@@ -16,19 +16,18 @@ SourceSquare Resource
resource "airbyte_source_square" "my_source_square" {
configuration = {
credentials = {
- source_square_authentication_api_key = {
- api_key = "...my_api_key..."
- auth_type = "API Key"
+ source_square_api_key = {
+ api_key = "...my_api_key..."
}
}
- include_deleted_objects = true
+ include_deleted_objects = false
is_sandbox = false
- source_type = "square"
- start_date = "2022-02-01"
+ start_date = "2022-11-22"
}
- name = "Miss Bruce Gibson"
- secret_id = "...my_secret_id..."
- workspace_id = "548f88f8-f1bf-40bc-8e1f-206d5d831d00"
+ definition_id = "55c9f06b-5482-4c9e-b770-03d0337f10a6"
+ name = "Connie Homenick"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4ee32ccb-4d52-4da6-928f-2436a122e394"
}
```
@@ -38,11 +37,12 @@ resource "airbyte_source_square" "my_source_square" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,64 +53,39 @@ resource "airbyte_source_square" "my_source_square" {
### Nested Schema for `configuration`
-Required:
-
-- `is_sandbox` (Boolean) Determines whether to use the sandbox or production environment.
-- `source_type` (String) must be one of ["square"]
-
Optional:
- `credentials` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `include_deleted_objects` (Boolean) In some streams there is an option to include deleted objects (Items, Categories, Discounts, Taxes)
-- `start_date` (String) UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated. If not set, all data will be replicated.
+- `include_deleted_objects` (Boolean) Default: false
+In some streams there is an option to include deleted objects (Items, Categories, Discounts, Taxes).
+- `is_sandbox` (Boolean) Default: false
+Determines whether to use the sandbox or production environment.
+- `start_date` (String) Default: "2021-01-01"
+UTC date in the format YYYY-MM-DD. Any data before this date will not be replicated. If not set, all data will be replicated.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_square_authentication_api_key` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_authentication_api_key))
-- `source_square_authentication_oauth_authentication` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_authentication_oauth_authentication))
-- `source_square_update_authentication_api_key` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_update_authentication_api_key))
-- `source_square_update_authentication_oauth_authentication` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--source_square_update_authentication_oauth_authentication))
-
-
-### Nested Schema for `configuration.credentials.source_square_authentication_api_key`
-
-Required:
-
-- `api_key` (String) The API key for a Square application
-- `auth_type` (String) must be one of ["API Key"]
-
-
-
-### Nested Schema for `configuration.credentials.source_square_authentication_oauth_authentication`
-
-Required:
-
-- `auth_type` (String) must be one of ["OAuth"]
-- `client_id` (String) The Square-issued ID of your application
-- `client_secret` (String) The Square-issued application secret for your application
-- `refresh_token` (String) A refresh token generated using the above client ID and secret
-
+- `api_key` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--api_key))
+- `oauth_authentication` (Attributes) Choose how to authenticate to Square. (see [below for nested schema](#nestedatt--configuration--credentials--oauth_authentication))
-
-### Nested Schema for `configuration.credentials.source_square_update_authentication_api_key`
+
+### Nested Schema for `configuration.credentials.api_key`
Required:
-- `api_key` (String) The API key for a Square application
-- `auth_type` (String) must be one of ["API Key"]
+- `api_key` (String, Sensitive) The API key for a Square application
-
-### Nested Schema for `configuration.credentials.source_square_update_authentication_oauth_authentication`
+
+### Nested Schema for `configuration.credentials.oauth_authentication`
Required:
-- `auth_type` (String) must be one of ["OAuth"]
- `client_id` (String) The Square-issued ID of your application
- `client_secret` (String) The Square-issued application secret for your application
-- `refresh_token` (String) A refresh token generated using the above client ID and secret
+- `refresh_token` (String, Sensitive) A refresh token generated using the above client ID and secret
diff --git a/docs/resources/source_strava.md b/docs/resources/source_strava.md
index 0c6532b7e..615c70092 100644
--- a/docs/resources/source_strava.md
+++ b/docs/resources/source_strava.md
@@ -16,16 +16,15 @@ SourceStrava Resource
resource "airbyte_source_strava" "my_source_strava" {
configuration = {
athlete_id = 17831421
- auth_type = "Client"
client_id = "12345"
client_secret = "fc6243f283e51f6ca989aab298b17da125496f50"
refresh_token = "fc6243f283e51f6ca989aab298b17da125496f50"
- source_type = "strava"
start_date = "2021-03-01T00:00:00Z"
}
- name = "Jeffrey Wintheiser"
- secret_id = "...my_secret_id..."
- workspace_id = "06673f3a-681c-4576-8dce-742409a215e0"
+ definition_id = "198a6bf6-f1cb-4db3-9a96-cd0e48f1e4b3"
+ name = "Elaine Johnson"
+ secret_id = "...my_secret_id..."
+ workspace_id = "6ca0b303-cf01-47cd-9783-63f1be7e9b4a"
}
```
@@ -35,11 +34,12 @@ resource "airbyte_source_strava" "my_source_strava" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,12 +55,7 @@ Required:
- `athlete_id` (Number) The Athlete ID of your Strava developer application.
- `client_id` (String) The Client ID of your Strava developer application.
- `client_secret` (String) The Client Secret of your Strava developer application.
-- `refresh_token` (String) The Refresh Token with the activity: read_all permissions.
-- `source_type` (String) must be one of ["strava"]
+- `refresh_token` (String, Sensitive) The Refresh Token with the activity: read_all permissions.
- `start_date` (String) UTC date and time. Any data before this date will not be replicated.
-Optional:
-
-- `auth_type` (String) must be one of ["Client"]
-
diff --git a/docs/resources/source_stripe.md b/docs/resources/source_stripe.md
index 78a2b3f52..e5bae528c 100644
--- a/docs/resources/source_stripe.md
+++ b/docs/resources/source_stripe.md
@@ -16,15 +16,17 @@ SourceStripe Resource
resource "airbyte_source_stripe" "my_source_stripe" {
configuration = {
account_id = "...my_account_id..."
+ call_rate_limit = 100
client_secret = "...my_client_secret..."
- lookback_window_days = 5
- slice_range = 10
- source_type = "stripe"
+ lookback_window_days = 10
+ num_workers = 3
+ slice_range = 360
start_date = "2017-01-25T00:00:00Z"
}
- name = "Seth Nitzsche"
- secret_id = "...my_secret_id..."
- workspace_id = "63e3af3d-d9dd-4a33-9cd6-3483e4a7a98e"
+ definition_id = "46c36bb7-337b-4f0b-aca9-3a8ae78e1e53"
+ name = "Marcella Muller"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b6d5dc1e-250f-480f-bc59-5c3777bccfe7"
}
```
@@ -34,11 +36,12 @@ resource "airbyte_source_stripe" "my_source_stripe" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -53,12 +56,17 @@ Required:
- `account_id` (String) Your Stripe account ID (starts with 'acct_', find yours here).
- `client_secret` (String) Stripe API key (usually starts with 'sk_live_'; find yours here).
-- `source_type` (String) must be one of ["stripe"]
Optional:
-- `lookback_window_days` (Number) When set, the connector will always re-export data from the past N days, where N is the value set here. This is useful if your data is frequently updated after creation. Applies only to streams that do not support event-based incremental syncs: CheckoutSessionLineItems, Events, SetupAttempts, ShippingRates, BalanceTransactions, Files, FileLinks. More info here
-- `slice_range` (Number) The time increment used by the connector when requesting data from the Stripe API. The bigger the value is, the less requests will be made and faster the sync will be. On the other hand, the more seldom the state is persisted.
-- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Only data generated after this date will be replicated.
+- `call_rate_limit` (Number) The number of API calls per second that you allow the connector to make. This value cannot be higher than the real API call rate limit (https://stripe.com/docs/rate-limits). If not specified, the default maximum is 25 and 100 calls per second for test and production tokens respectively.
+- `lookback_window_days` (Number) Default: 0
+When set, the connector will always re-export data from the past N days, where N is the value set here. This is useful if your data is frequently updated after creation. The Lookback Window only applies to streams that do not support event-based incremental syncs: Events, SetupAttempts, ShippingRates, BalanceTransactions, Files, FileLinks, Refunds. More info here
+- `num_workers` (Number) Default: 10
+The number of worker threads to use for the sync. The performance upper boundary depends on the call_rate_limit setting and the type of account.
+- `slice_range` (Number) Default: 365
+The time increment used by the connector when requesting data from the Stripe API. The bigger the value, the fewer requests are made and the faster the sync will be. On the other hand, the less often the state is persisted.
+- `start_date` (String) Default: "2017-01-25T00:00:00Z"
+UTC date and time in the format 2017-01-25T00:00:00Z. Only data generated after this date will be replicated.
diff --git a/docs/resources/source_survey_sparrow.md b/docs/resources/source_survey_sparrow.md
index 4b759c695..b0a931d1b 100644
--- a/docs/resources/source_survey_sparrow.md
+++ b/docs/resources/source_survey_sparrow.md
@@ -17,18 +17,16 @@ resource "airbyte_source_survey_sparrow" "my_source_surveysparrow" {
configuration = {
access_token = "...my_access_token..."
region = {
- source_survey_sparrow_base_url_eu_based_account = {
- url_base = "https://eu-api.surveysparrow.com/v3"
- }
+ eu_based_account = {}
}
- source_type = "survey-sparrow"
survey_id = [
"{ \"see\": \"documentation\" }",
]
}
- name = "Hugo Kovacek"
- secret_id = "...my_secret_id..."
- workspace_id = "f02449d8-6f4b-4b20-be5d-911cbfe749ca"
+ definition_id = "4b91c615-d128-4040-ba03-eb3c0afcc3c8"
+ name = "Gerard Kerluke"
+ secret_id = "...my_secret_id..."
+ workspace_id = "fbbc8e3e-7db5-4a3e-846f-c1e0fa91f7ef"
}
```
@@ -38,11 +36,12 @@ resource "airbyte_source_survey_sparrow" "my_source_surveysparrow" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,8 +54,7 @@ resource "airbyte_source_survey_sparrow" "my_source_surveysparrow" {
Required:
-- `access_token` (String) Your access token. See here. The key is case sensitive.
-- `source_type` (String) must be one of ["survey-sparrow"]
+- `access_token` (String, Sensitive) Your access token. See here. The key is case sensitive.
Optional:
@@ -68,40 +66,14 @@ Optional:
Optional:
-- `source_survey_sparrow_base_url_eu_based_account` (Attributes) Is your account location is EU based? If yes, the base url to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_base_url_eu_based_account))
-- `source_survey_sparrow_base_url_global_account` (Attributes) Is your account location is EU based? If yes, the base url to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_base_url_global_account))
-- `source_survey_sparrow_update_base_url_eu_based_account` (Attributes) Is your account location is EU based? If yes, the base url to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_update_base_url_eu_based_account))
-- `source_survey_sparrow_update_base_url_global_account` (Attributes) Is your account location is EU based? If yes, the base url to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--source_survey_sparrow_update_base_url_global_account))
+- `eu_based_account` (Attributes) Is your account location EU based? If yes, the base URL used to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--eu_based_account))
+- `global_account` (Attributes) Is your account location EU based? If yes, the base URL used to retrieve data will be different. (see [below for nested schema](#nestedatt--configuration--region--global_account))
-
-### Nested Schema for `configuration.region.source_survey_sparrow_base_url_eu_based_account`
+
+### Nested Schema for `configuration.region.eu_based_account`
-Optional:
-
-- `url_base` (String) must be one of ["https://eu-api.surveysparrow.com/v3"]
-
-
-
-### Nested Schema for `configuration.region.source_survey_sparrow_base_url_global_account`
-
-Optional:
-
-- `url_base` (String) must be one of ["https://api.surveysparrow.com/v3"]
-
-
-
-### Nested Schema for `configuration.region.source_survey_sparrow_update_base_url_eu_based_account`
-
-Optional:
-
-- `url_base` (String) must be one of ["https://eu-api.surveysparrow.com/v3"]
-
-
-
-### Nested Schema for `configuration.region.source_survey_sparrow_update_base_url_global_account`
-
-Optional:
-- `url_base` (String) must be one of ["https://api.surveysparrow.com/v3"]
+
+### Nested Schema for `configuration.region.global_account`
diff --git a/docs/resources/source_surveymonkey.md b/docs/resources/source_surveymonkey.md
index 3d3fd7a8b..4e9c45519 100644
--- a/docs/resources/source_surveymonkey.md
+++ b/docs/resources/source_surveymonkey.md
@@ -17,20 +17,19 @@ resource "airbyte_source_surveymonkey" "my_source_surveymonkey" {
configuration = {
credentials = {
access_token = "...my_access_token..."
- auth_method = "oauth2.0"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
}
- origin = "USA"
- source_type = "surveymonkey"
- start_date = "2021-01-01T00:00:00Z"
+ origin = "USA"
+ start_date = "2021-01-01T00:00:00Z"
survey_ids = [
"...",
]
}
- name = "Pearl Trantow"
- secret_id = "...my_secret_id..."
- workspace_id = "b8955d41-3e13-4a48-a310-907bd354c092"
+ definition_id = "147e293c-7a4b-42d7-bbc2-90ef00ad5372"
+ name = "Renee Howe"
+ secret_id = "...my_secret_id..."
+ workspace_id = "50a2e7cf-e6f3-44ac-865c-56f5fa6778e4"
}
```
@@ -40,11 +39,12 @@ resource "airbyte_source_surveymonkey" "my_source_surveymonkey" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,13 +57,12 @@ resource "airbyte_source_surveymonkey" "my_source_surveymonkey" {
Required:
-- `source_type` (String) must be one of ["surveymonkey"]
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
Optional:
- `credentials` (Attributes) The authorization method to use to retrieve data from SurveyMonkey (see [below for nested schema](#nestedatt--configuration--credentials))
-- `origin` (String) must be one of ["USA", "Europe", "Canada"]
+- `origin` (String) must be one of ["USA", "Europe", "Canada"]; Default: "USA"
Depending on the originating datacenter of the SurveyMonkey account, the API access URL may be different.
- `survey_ids` (List of String) IDs of the surveys from which you'd like to replicate data. If left empty, data from all boards to which you have access will be replicated.
@@ -72,8 +71,7 @@ Depending on the originating datacenter of the SurveyMonkey account, the API acc
Required:
-- `access_token` (String) Access Token for making authenticated requests. See the docs for information on how to generate this key.
-- `auth_method` (String) must be one of ["oauth2.0"]
+- `access_token` (String, Sensitive) Access Token for making authenticated requests. See the docs for information on how to generate this key.
Optional:
diff --git a/docs/resources/source_tempo.md b/docs/resources/source_tempo.md
index cf95bc67f..867d94b33 100644
--- a/docs/resources/source_tempo.md
+++ b/docs/resources/source_tempo.md
@@ -15,12 +15,12 @@ SourceTempo Resource
```terraform
resource "airbyte_source_tempo" "my_source_tempo" {
configuration = {
- api_token = "...my_api_token..."
- source_type = "tempo"
+ api_token = "...my_api_token..."
}
- name = "Edwin Haley"
- secret_id = "...my_secret_id..."
- workspace_id = "7f69e2c9-e6d1-40e9-9b3a-d4c6b03108d9"
+ definition_id = "5f462d7c-8446-4197-ba1b-271a5b009f29"
+ name = "Karen Kemmer"
+ secret_id = "...my_secret_id..."
+ workspace_id = "6dac9959-2aae-4b21-989b-3db558d4aa17"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_tempo" "my_source_tempo" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_tempo" "my_source_tempo" {
Required:
-- `api_token` (String) Tempo API Token. Go to Tempo>Settings, scroll down to Data Access and select API integration.
-- `source_type` (String) must be one of ["tempo"]
+- `api_token` (String, Sensitive) Tempo API Token. Go to Tempo>Settings, scroll down to Data Access and select API integration.
diff --git a/docs/resources/source_the_guardian_api.md b/docs/resources/source_the_guardian_api.md
index 15c04bafb..8855b450a 100644
--- a/docs/resources/source_the_guardian_api.md
+++ b/docs/resources/source_the_guardian_api.md
@@ -15,17 +15,17 @@ SourceTheGuardianAPI Resource
```terraform
resource "airbyte_source_the_guardian_api" "my_source_theguardianapi" {
configuration = {
- api_key = "...my_api_key..."
- end_date = "YYYY-MM-DD"
- query = "political"
- section = "media"
- source_type = "the-guardian-api"
- start_date = "YYYY-MM-DD"
- tag = "environment/recycling"
+ api_key = "...my_api_key..."
+ end_date = "YYYY-MM-DD"
+ query = "environment AND political"
+ section = "media"
+ start_date = "YYYY-MM-DD"
+ tag = "environment/energyefficiency"
}
- name = "Pauline Kozey IV"
- secret_id = "...my_secret_id..."
- workspace_id = "2b94f2ab-1fd5-4671-a9c3-26350a467143"
+ definition_id = "e21a7b03-b315-4af1-9bc4-a1418c27e2e4"
+ name = "Toby Rempel"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4753d48e-30cc-4cb1-939d-dfc649b7a58a"
}
```
@@ -35,11 +35,12 @@ resource "airbyte_source_the_guardian_api" "my_source_theguardianapi" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,8 +53,7 @@ resource "airbyte_source_the_guardian_api" "my_source_theguardianapi" {
Required:
-- `api_key` (String) Your API Key. See here. The key is case sensitive.
-- `source_type` (String) must be one of ["the-guardian-api"]
+- `api_key` (String, Sensitive) Your API Key. See here. The key is case sensitive.
- `start_date` (String) Use this to set the minimum date (YYYY-MM-DD) of the results. Results older than the start_date will not be shown.
Optional:
diff --git a/docs/resources/source_tiktok_marketing.md b/docs/resources/source_tiktok_marketing.md
index 3929518fe..aa14fc583 100644
--- a/docs/resources/source_tiktok_marketing.md
+++ b/docs/resources/source_tiktok_marketing.md
@@ -15,24 +15,23 @@ SourceTiktokMarketing Resource
```terraform
resource "airbyte_source_tiktok_marketing" "my_source_tiktokmarketing" {
configuration = {
- attribution_window = 5
+ attribution_window = 3
credentials = {
- source_tiktok_marketing_authentication_method_o_auth2_0 = {
+ source_tiktok_marketing_o_auth2_0 = {
access_token = "...my_access_token..."
advertiser_id = "...my_advertiser_id..."
app_id = "...my_app_id..."
- auth_type = "oauth2.0"
secret = "...my_secret..."
}
}
- end_date = "2021-10-08"
+ end_date = "2022-10-15"
include_deleted = false
- source_type = "tiktok-marketing"
- start_date = "2022-12-21"
+ start_date = "2022-12-08"
}
- name = "Mrs. Joey Mueller"
- secret_id = "...my_secret_id..."
- workspace_id = "4d93a74c-0252-4fe3-b4b4-db8b778ebb6e"
+ definition_id = "fd338f32-2856-4cd8-8e7e-494b9e5830e9"
+ name = "Elijah Prosacco"
+ secret_id = "...my_secret_id..."
+ workspace_id = "12cdcae9-f85c-4701-b380-526f8856cdf3"
}
```
@@ -42,11 +41,12 @@ resource "airbyte_source_tiktok_marketing" "my_source_tiktokmarketing" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -59,76 +59,43 @@ resource "airbyte_source_tiktok_marketing" "my_source_tiktokmarketing" {
Optional:
-- `attribution_window` (Number) The attribution window in days.
+- `attribution_window` (Number) Default: 3
+The attribution window in days.
- `credentials` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials))
- `end_date` (String) The date until which you'd like to replicate data for all incremental streams, in the format YYYY-MM-DD. All data generated between start_date and this date will be replicated. Not setting this option will result in always syncing the data till the current date.
-- `include_deleted` (Boolean) Set to active if you want to include deleted data in reports.
-- `source_type` (String) must be one of ["tiktok-marketing"]
-- `start_date` (String) The Start Date in format: YYYY-MM-DD. Any data before this date will not be replicated. If this parameter is not set, all data will be replicated.
+- `include_deleted` (Boolean) Default: false
+Set to true if you want to include deleted data in reports.
+- `start_date` (String) Default: "2016-09-01"
+The Start Date in format: YYYY-MM-DD. Any data before this date will not be replicated. If this parameter is not set, all data will be replicated.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_tiktok_marketing_authentication_method_o_auth2_0` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_authentication_method_o_auth2_0))
-- `source_tiktok_marketing_authentication_method_sandbox_access_token` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_authentication_method_sandbox_access_token))
-- `source_tiktok_marketing_update_authentication_method_o_auth2_0` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_update_authentication_method_o_auth2_0))
-- `source_tiktok_marketing_update_authentication_method_sandbox_access_token` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--source_tiktok_marketing_update_authentication_method_sandbox_access_token))
+- `o_auth20` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
+- `sandbox_access_token` (Attributes) Authentication method (see [below for nested schema](#nestedatt--configuration--credentials--sandbox_access_token))
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_authentication_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Long-term Authorized Access Token.
+- `access_token` (String, Sensitive) Long-term Authorized Access Token.
- `app_id` (String) The Developer Application App ID.
- `secret` (String) The Developer Application Secret.
Optional:
- `advertiser_id` (String) The Advertiser ID to filter reports and streams. Let this empty to retrieve all.
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_authentication_method_sandbox_access_token`
+
+### Nested Schema for `configuration.credentials.sandbox_access_token`
Required:
-- `access_token` (String) The long-term authorized access token.
+- `access_token` (String, Sensitive) The long-term authorized access token.
- `advertiser_id` (String) The Advertiser ID which generated for the developer's Sandbox application.
-Optional:
-
-- `auth_type` (String) must be one of ["sandbox_access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_update_authentication_method_o_auth2_0`
-
-Required:
-
-- `access_token` (String) Long-term Authorized Access Token.
-- `app_id` (String) The Developer Application App ID.
-- `secret` (String) The Developer Application Secret.
-
-Optional:
-
-- `advertiser_id` (String) The Advertiser ID to filter reports and streams. Let this empty to retrieve all.
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_tiktok_marketing_update_authentication_method_sandbox_access_token`
-
-Required:
-
-- `access_token` (String) The long-term authorized access token.
-- `advertiser_id` (String) The Advertiser ID which generated for the developer's Sandbox application.
-
-Optional:
-
-- `auth_type` (String) must be one of ["sandbox_access_token"]
-
diff --git a/docs/resources/source_todoist.md b/docs/resources/source_todoist.md
index a9ec4940e..67409062b 100644
--- a/docs/resources/source_todoist.md
+++ b/docs/resources/source_todoist.md
@@ -15,12 +15,12 @@ SourceTodoist Resource
```terraform
resource "airbyte_source_todoist" "my_source_todoist" {
configuration = {
- source_type = "todoist"
- token = "...my_token..."
+ token = "...my_token..."
}
- name = "Hope Collins"
- secret_id = "...my_secret_id..."
- workspace_id = "502bafb2-cbc4-4635-95e6-5da028c3e951"
+ definition_id = "fdefbe19-9921-44f3-bfa4-8acadc06400b"
+ name = "Kristy Hilpert"
+ secret_id = "...my_secret_id..."
+ workspace_id = "13a2ccf2-b1ad-4e2f-8984-bfb0e1b3d2b8"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_todoist" "my_source_todoist" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_todoist" "my_source_todoist" {
Required:
-- `source_type` (String) must be one of ["todoist"]
-- `token` (String) Your API Token. See here. The token is case sensitive.
+- `token` (String, Sensitive) Your API Token. See here. The token is case sensitive.
diff --git a/docs/resources/source_trello.md b/docs/resources/source_trello.md
index 73e81dbc3..96e49b431 100644
--- a/docs/resources/source_trello.md
+++ b/docs/resources/source_trello.md
@@ -18,14 +18,14 @@ resource "airbyte_source_trello" "my_source_trello" {
board_ids = [
"...",
]
- key = "...my_key..."
- source_type = "trello"
- start_date = "2021-03-01T00:00:00Z"
- token = "...my_token..."
+ key = "...my_key..."
+ start_date = "2021-03-01T00:00:00Z"
+ token = "...my_token..."
}
- name = "Philip Armstrong"
- secret_id = "...my_secret_id..."
- workspace_id = "a966489d-7b78-4673-a13a-12a6b9924945"
+ definition_id = "26a8838c-f8d2-427f-b18d-4240654f4782"
+ name = "Esther Abshire"
+ secret_id = "...my_secret_id..."
+ workspace_id = "b5a46242-8ebc-45c7-bead-f0c9ce16ebe8"
}
```
@@ -35,11 +35,12 @@ resource "airbyte_source_trello" "my_source_trello" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,13 +53,12 @@ resource "airbyte_source_trello" "my_source_trello" {
Required:
-- `key` (String) Trello API key. See the docs for instructions on how to generate it.
-- `source_type` (String) must be one of ["trello"]
+- `key` (String, Sensitive) Trello API key. See the docs for instructions on how to generate it.
- `start_date` (String) UTC date and time in the format 2017-01-25T00:00:00Z. Any data before this date will not be replicated.
-- `token` (String) Trello API token. See the docs for instructions on how to generate it.
+- `token` (String, Sensitive) Trello API token. See the docs for instructions on how to generate it.
Optional:
-- `board_ids` (List of String) IDs of the boards to replicate data from. If left empty, data from all boards to which you have access will be replicated.
+- `board_ids` (List of String) IDs of the boards to replicate data from. If left empty, data from all boards to which you have access will be replicated. Please note that this is not the 8-character ID in the board's shortLink (URL of the board). Rather, what is required here is the 24-character ID usually returned by the API.
diff --git a/docs/resources/source_trustpilot.md b/docs/resources/source_trustpilot.md
index 6187d6e0e..583983bdc 100644
--- a/docs/resources/source_trustpilot.md
+++ b/docs/resources/source_trustpilot.md
@@ -19,17 +19,16 @@ resource "airbyte_source_trustpilot" "my_source_trustpilot" {
"...",
]
credentials = {
- source_trustpilot_authorization_method_api_key = {
- auth_type = "apikey"
+ source_trustpilot_api_key = {
client_id = "...my_client_id..."
}
}
- source_type = "trustpilot"
- start_date = "%Y-%m-%dT%H:%M:%S"
+ start_date = "%Y-%m-%dT%H:%M:%S"
}
- name = "Bradley Goodwin"
- secret_id = "...my_secret_id..."
- workspace_id = "f5c84383-6b86-4b3c-9f64-15b0449f9df1"
+ definition_id = "5fa64aee-8d2b-4de4-8eef-ceb9e0d54b08"
+ name = "Clifford Quigley"
+ secret_id = "...my_secret_id..."
+ workspace_id = "98fe3f92-c06a-49aa-b270-2875abb88c39"
}
```
@@ -39,11 +38,12 @@ resource "airbyte_source_trustpilot" "my_source_trustpilot" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -58,7 +58,6 @@ Required:
- `business_units` (List of String) The names of business units which shall be synchronized. Some streams e.g. configured_business_units or private_reviews use this configuration.
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["trustpilot"]
- `start_date` (String) For streams with sync. method incremental the start date time to be used
@@ -66,64 +65,26 @@ Required:
Optional:
-- `source_trustpilot_authorization_method_api_key` (Attributes) The API key authentication method gives you access to only the streams which are part of the Public API. When you want to get streams available via the Consumer API (e.g. the private reviews) you need to use authentication method OAuth 2.0. (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_authorization_method_api_key))
-- `source_trustpilot_authorization_method_o_auth_2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_authorization_method_o_auth_2_0))
-- `source_trustpilot_update_authorization_method_api_key` (Attributes) The API key authentication method gives you access to only the streams which are part of the Public API. When you want to get streams available via the Consumer API (e.g. the private reviews) you need to use authentication method OAuth 2.0. (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_update_authorization_method_api_key))
-- `source_trustpilot_update_authorization_method_o_auth_2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_trustpilot_update_authorization_method_o_auth_2_0))
+- `api_key` (Attributes) The API key authentication method gives you access to only the streams which are part of the Public API. When you want to get streams available via the Consumer API (e.g. the private reviews), you need to use the OAuth 2.0 authentication method. (see [below for nested schema](#nestedatt--configuration--credentials--api_key))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_trustpilot_authorization_method_api_key`
+
+### Nested Schema for `configuration.credentials.api_key`
Required:
- `client_id` (String) The API key of the Trustpilot API application.
-Optional:
-
-- `auth_type` (String) must be one of ["apikey"]
-
-
-### Nested Schema for `configuration.credentials.source_trustpilot_authorization_method_o_auth_2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Access Token for making authenticated requests.
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
- `client_id` (String) The API key of the Trustpilot API application. (represents the OAuth Client ID)
- `client_secret` (String) The Secret of the Trustpilot API application. (represents the OAuth Client Secret)
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_trustpilot_update_authorization_method_api_key`
-
-Required:
-
-- `client_id` (String) The API key of the Trustpilot API application.
-
-Optional:
-
-- `auth_type` (String) must be one of ["apikey"]
-
-
-
-### Nested Schema for `configuration.credentials.source_trustpilot_update_authorization_method_o_auth_2_0`
-
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `client_id` (String) The API key of the Trustpilot API application. (represents the OAuth Client ID)
-- `client_secret` (String) The Secret of the Trustpilot API application. (represents the OAuth Client Secret)
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
+- `refresh_token` (String, Sensitive) The key to refresh the expired access_token.
+- `token_expiry_date` (String, Sensitive) The date-time when the access token should be refreshed.
diff --git a/docs/resources/source_tvmaze_schedule.md b/docs/resources/source_tvmaze_schedule.md
index 8c7e0937d..d2dd6bdb0 100644
--- a/docs/resources/source_tvmaze_schedule.md
+++ b/docs/resources/source_tvmaze_schedule.md
@@ -15,15 +15,15 @@ SourceTvmazeSchedule Resource
```terraform
resource "airbyte_source_tvmaze_schedule" "my_source_tvmazeschedule" {
configuration = {
- domestic_schedule_country_code = "US"
+ domestic_schedule_country_code = "GB"
end_date = "...my_end_date..."
- source_type = "tvmaze-schedule"
start_date = "...my_start_date..."
web_schedule_country_code = "global"
}
- name = "Gretchen Waters"
- secret_id = "...my_secret_id..."
- workspace_id = "e78bf606-8258-494e-a763-d5c72795b785"
+ definition_id = "79666080-f3ec-4ae3-8b49-1ea7992cd63d"
+ name = "Dr. Victoria Lemke"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e3f7d5a4-33d3-40ca-8aa9-f684d9ab345e"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_tvmaze_schedule" "my_source_tvmazeschedule" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,7 +52,6 @@ resource "airbyte_source_tvmaze_schedule" "my_source_tvmazeschedule" {
Required:
- `domestic_schedule_country_code` (String) Country code for domestic TV schedule retrieval.
-- `source_type` (String) must be one of ["tvmaze-schedule"]
- `start_date` (String) Start date for TV schedule retrieval. May be in the future.
Optional:
diff --git a/docs/resources/source_twilio.md b/docs/resources/source_twilio.md
index e4b40a3ad..a23b47981 100644
--- a/docs/resources/source_twilio.md
+++ b/docs/resources/source_twilio.md
@@ -18,12 +18,12 @@ resource "airbyte_source_twilio" "my_source_twilio" {
account_sid = "...my_account_sid..."
auth_token = "...my_auth_token..."
lookback_window = 60
- source_type = "twilio"
start_date = "2020-10-01T00:00:00Z"
}
- name = "Andre Sporer"
- secret_id = "...my_secret_id..."
- workspace_id = "9e5635b3-3bc0-4f97-8c42-fc9f4844225e"
+ definition_id = "83cb2e52-a86a-4dbb-97c5-cbe7ccff9d07"
+ name = "Leslie Kihn"
+ secret_id = "...my_secret_id..."
+ workspace_id = "a4b37eb2-05dd-4b7f-9b71-195e07e10364"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_twilio" "my_source_twilio" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -51,12 +52,12 @@ resource "airbyte_source_twilio" "my_source_twilio" {
Required:
- `account_sid` (String) Twilio account SID
-- `auth_token` (String) Twilio Auth Token.
-- `source_type` (String) must be one of ["twilio"]
+- `auth_token` (String, Sensitive) Twilio Auth Token.
- `start_date` (String) UTC date and time in the format 2020-10-01T00:00:00Z. Any data before this date will not be replicated.
Optional:
-- `lookback_window` (Number) How far into the past to look for records. (in minutes)
+- `lookback_window` (Number) Default: 0
+How far into the past to look for records (in minutes).
diff --git a/docs/resources/source_twilio_taskrouter.md b/docs/resources/source_twilio_taskrouter.md
index a0dddb445..25c7b1541 100644
--- a/docs/resources/source_twilio_taskrouter.md
+++ b/docs/resources/source_twilio_taskrouter.md
@@ -17,11 +17,11 @@ resource "airbyte_source_twilio_taskrouter" "my_source_twiliotaskrouter" {
configuration = {
account_sid = "...my_account_sid..."
auth_token = "...my_auth_token..."
- source_type = "twilio-taskrouter"
}
- name = "Cathy Ratke"
- secret_id = "...my_secret_id..."
- workspace_id = "6065c0ef-a6f9-43b9-8a1b-8c95be1254b7"
+ definition_id = "3a6dfd2a-6022-45b2-ac62-eb10f1a0d51f"
+ name = "Guy Rath II"
+ secret_id = "...my_secret_id..."
+ workspace_id = "16cb49da-06c2-439e-baf3-ca2cc2a5392d"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_twilio_taskrouter" "my_source_twiliotaskrouter" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,7 +50,6 @@ resource "airbyte_source_twilio_taskrouter" "my_source_twiliotaskrouter" {
Required:
- `account_sid` (String) Twilio Account ID
-- `auth_token` (String) Twilio Auth Token
-- `source_type` (String) must be one of ["twilio-taskrouter"]
+- `auth_token` (String, Sensitive) Twilio Auth Token
diff --git a/docs/resources/source_twitter.md b/docs/resources/source_twitter.md
index 00876aa99..130f7b97d 100644
--- a/docs/resources/source_twitter.md
+++ b/docs/resources/source_twitter.md
@@ -15,15 +15,15 @@ SourceTwitter Resource
```terraform
resource "airbyte_source_twitter" "my_source_twitter" {
configuration = {
- api_key = "...my_api_key..."
- end_date = "2022-05-29T22:05:47.839Z"
- query = "...my_query..."
- source_type = "twitter"
- start_date = "2022-02-11T15:55:53.597Z"
+ api_key = "...my_api_key..."
+ end_date = "2022-09-12T14:25:08.896Z"
+ query = "...my_query..."
+ start_date = "2022-06-24T22:46:50.628Z"
}
- name = "Elbert Kuhic"
- secret_id = "...my_secret_id..."
- workspace_id = "10d1f655-8c99-4c72-ad2b-c0f94087d9ca"
+ definition_id = "89040904-7267-4ce8-aa32-2e02b7e6dd49"
+ name = "Domingo Heller"
+ secret_id = "...my_secret_id..."
+ workspace_id = "592a5dd7-ddbd-4797-92eb-894fd682a677"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_twitter" "my_source_twitter" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,9 +51,8 @@ resource "airbyte_source_twitter" "my_source_twitter" {
Required:
-- `api_key` (String) App only Bearer Token. See the docs for more information on how to obtain this token.
+- `api_key` (String, Sensitive) App only Bearer Token. See the docs for more information on how to obtain this token.
- `query` (String) Query for matching Tweets. You can learn how to build this query by reading build a query guide .
-- `source_type` (String) must be one of ["twitter"]
Optional:
diff --git a/docs/resources/source_typeform.md b/docs/resources/source_typeform.md
index 0f4c59a15..603d1d4e0 100644
--- a/docs/resources/source_typeform.md
+++ b/docs/resources/source_typeform.md
@@ -16,24 +16,23 @@ SourceTypeform Resource
resource "airbyte_source_typeform" "my_source_typeform" {
configuration = {
credentials = {
- source_typeform_authorization_method_o_auth2_0 = {
+ source_typeform_o_auth2_0 = {
access_token = "...my_access_token..."
- auth_type = "oauth2.0"
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
refresh_token = "...my_refresh_token..."
- token_expiry_date = "2021-02-23T09:05:08.511Z"
+ token_expiry_date = "2022-10-02T21:15:25.365Z"
}
}
form_ids = [
"...",
]
- source_type = "typeform"
- start_date = "2021-03-01T00:00:00Z"
+ start_date = "2021-03-01T00:00:00Z"
}
- name = "Rosemarie Spencer"
- secret_id = "...my_secret_id..."
- workspace_id = "aac9b4ca-a1cf-4e9e-95df-903907f37831"
+ definition_id = "dbbaeb9b-5c2e-42ee-8b85-f41cf2efd5ed"
+ name = "Nancy Hansen"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e4deda30-dd3c-4fb0-aa2f-ad0584130837"
}
```
@@ -43,11 +42,12 @@ resource "airbyte_source_typeform" "my_source_typeform" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -61,7 +61,6 @@ resource "airbyte_source_typeform" "my_source_typeform" {
Required:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["typeform"]
Optional:
@@ -73,64 +72,26 @@ Optional:
Optional:
-- `source_typeform_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_authorization_method_o_auth2_0))
-- `source_typeform_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_authorization_method_private_token))
-- `source_typeform_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_update_authorization_method_o_auth2_0))
-- `source_typeform_update_authorization_method_private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_typeform_update_authorization_method_private_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
+- `private_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--private_token))
-
-### Nested Schema for `configuration.credentials.source_typeform_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Access Token for making authenticated requests.
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
- `client_id` (String) The Client ID of the Typeform developer application.
- `client_secret` (String) The Client Secret the Typeform developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
+- `refresh_token` (String, Sensitive) The key to refresh the expired access_token.
+- `token_expiry_date` (String, Sensitive) The date-time when the access token should be refreshed.
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_typeform_authorization_method_private_token`
-
-Required:
-
-- `access_token` (String) Log into your Typeform account and then generate a personal Access Token.
-
-Optional:
-
-- `auth_type` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_typeform_update_authorization_method_o_auth2_0`
-Required:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `client_id` (String) The Client ID of the Typeform developer application.
-- `client_secret` (String) The Client Secret the Typeform developer application.
-- `refresh_token` (String) The key to refresh the expired access_token.
-- `token_expiry_date` (String) The date-time when the access token should be refreshed.
-
-Optional:
-
-- `auth_type` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_typeform_update_authorization_method_private_token`
+
+### Nested Schema for `configuration.credentials.private_token`
Required:
-- `access_token` (String) Log into your Typeform account and then generate a personal Access Token.
-
-Optional:
-
-- `auth_type` (String) must be one of ["access_token"]
+- `access_token` (String, Sensitive) Log into your Typeform account and then generate a personal Access Token.
diff --git a/docs/resources/source_us_census.md b/docs/resources/source_us_census.md
index 3a3f1131b..4e2170b7f 100644
--- a/docs/resources/source_us_census.md
+++ b/docs/resources/source_us_census.md
@@ -17,12 +17,12 @@ resource "airbyte_source_us_census" "my_source_uscensus" {
configuration = {
api_key = "...my_api_key..."
query_params = "get=MOVEDIN,GEOID1,GEOID2,MOVEDOUT,FULL1_NAME,FULL2_NAME,MOVEDNET&for=county:*"
- query_path = "data/2018/acs"
- source_type = "us-census"
+ query_path = "data/2019/cbp"
}
- name = "Ginger Gislason"
- secret_id = "...my_secret_id..."
- workspace_id = "54a85466-597c-4502-b3c1-471d51aaa6dd"
+ definition_id = "e5de43c9-07f6-43cc-82bc-2f7f5dfb2c26"
+ name = "Kyle McKenzie"
+ secret_id = "...my_secret_id..."
+ workspace_id = "915d3324-b481-49ff-b934-29d3165dd859"
}
```
@@ -32,11 +32,12 @@ resource "airbyte_source_us_census" "my_source_uscensus" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -49,9 +50,8 @@ resource "airbyte_source_us_census" "my_source_uscensus" {
Required:
-- `api_key` (String) Your API Key. Get your key here.
+- `api_key` (String, Sensitive) Your API Key. Get your key here.
- `query_path` (String) The path portion of the GET request
-- `source_type` (String) must be one of ["us-census"]
Optional:
diff --git a/docs/resources/source_vantage.md b/docs/resources/source_vantage.md
index fcbfd4b8a..4e45996be 100644
--- a/docs/resources/source_vantage.md
+++ b/docs/resources/source_vantage.md
@@ -16,11 +16,11 @@ SourceVantage Resource
resource "airbyte_source_vantage" "my_source_vantage" {
configuration = {
access_token = "...my_access_token..."
- source_type = "vantage"
}
- name = "Corey Pacocha"
- secret_id = "...my_secret_id..."
- workspace_id = "6487c5fc-2b86-42a0-8bef-69e100157630"
+ definition_id = "5e9c61e2-0db5-4f4b-b11c-60c3a7ba3362"
+ name = "Tracey Rippin"
+ secret_id = "...my_secret_id..."
+ workspace_id = "5dfad932-4f6a-4b9f-8334-526eae71eb75"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_vantage" "my_source_vantage" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_vantage" "my_source_vantage" {
Required:
-- `access_token` (String) Your API Access token. See here.
-- `source_type` (String) must be one of ["vantage"]
+- `access_token` (String, Sensitive) Your API Access token. See here.
diff --git a/docs/resources/source_webflow.md b/docs/resources/source_webflow.md
index 929ea88bc..959747437 100644
--- a/docs/resources/source_webflow.md
+++ b/docs/resources/source_webflow.md
@@ -15,13 +15,13 @@ SourceWebflow Resource
```terraform
resource "airbyte_source_webflow" "my_source_webflow" {
configuration = {
- api_key = "a very long hex sequence"
- site_id = "a relatively long hex sequence"
- source_type = "webflow"
+ api_key = "a very long hex sequence"
+ site_id = "a relatively long hex sequence"
}
- name = "Taylor Paucek"
- secret_id = "...my_secret_id..."
- workspace_id = "fded84a3-5a41-4238-a1a7-35ac26ae33be"
+ definition_id = "9d7dd0bf-2f57-4219-978f-bbe9226a954f"
+ name = "Cary Mitchell"
+ secret_id = "...my_secret_id..."
+ workspace_id = "12e392ce-90b9-4169-bb30-db2efb21ef2b"
}
```
@@ -31,11 +31,12 @@ resource "airbyte_source_webflow" "my_source_webflow" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -48,8 +49,7 @@ resource "airbyte_source_webflow" "my_source_webflow" {
Required:
-- `api_key` (String) The API token for authenticating to Webflow. See https://university.webflow.com/lesson/intro-to-the-webflow-api
+- `api_key` (String, Sensitive) The API token for authenticating to Webflow. See https://university.webflow.com/lesson/intro-to-the-webflow-api
- `site_id` (String) The id of the Webflow site you are requesting data from. See https://developers.webflow.com/#sites
-- `source_type` (String) must be one of ["webflow"]
diff --git a/docs/resources/source_whisky_hunter.md b/docs/resources/source_whisky_hunter.md
index f78b5ecad..cd905e371 100644
--- a/docs/resources/source_whisky_hunter.md
+++ b/docs/resources/source_whisky_hunter.md
@@ -14,12 +14,11 @@ SourceWhiskyHunter Resource
```terraform
resource "airbyte_source_whisky_hunter" "my_source_whiskyhunter" {
- configuration = {
- source_type = "whisky-hunter"
- }
- name = "Miss Terrence Kulas"
- secret_id = "...my_secret_id..."
- workspace_id = "f46bca11-06fe-4965-b711-d08cf88ec9f7"
+ configuration = {}
+ definition_id = "c48bf07f-2e77-4213-a664-6fa9b2db7532"
+ name = "Jeremy Kutch"
+ secret_id = "...my_secret_id..."
+ workspace_id = "785b8d4a-d9bb-44c2-904c-6ceb0e440965"
}
```
@@ -29,11 +28,12 @@ resource "airbyte_source_whisky_hunter" "my_source_whiskyhunter" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -44,8 +44,4 @@ resource "airbyte_source_whisky_hunter" "my_source_whiskyhunter" {
### Nested Schema for `configuration`
-Optional:
-
-- `source_type` (String) must be one of ["whisky-hunter"]
-
diff --git a/docs/resources/source_wikipedia_pageviews.md b/docs/resources/source_wikipedia_pageviews.md
index fb8f9d007..197dca7c5 100644
--- a/docs/resources/source_wikipedia_pageviews.md
+++ b/docs/resources/source_wikipedia_pageviews.md
@@ -15,18 +15,18 @@ SourceWikipediaPageviews Resource
```terraform
resource "airbyte_source_wikipedia_pageviews" "my_source_wikipediapageviews" {
configuration = {
- access = "mobile-app"
- agent = "spider"
- article = "Are_You_the_One%3F"
- country = "IN"
- end = "...my_end..."
- project = "www.mediawiki.org"
- source_type = "wikipedia-pageviews"
- start = "...my_start..."
+ access = "mobile-app"
+ agent = "automated"
+ article = "Are_You_the_One%3F"
+ country = "IN"
+ end = "...my_end..."
+ project = "www.mediawiki.org"
+ start = "...my_start..."
}
- name = "Laura Murray"
- secret_id = "...my_secret_id..."
- workspace_id = "6ed333bb-0ce8-4aa6-9432-a986eb7e14ca"
+ definition_id = "ecaf35c1-5b37-479d-be3d-ccb9fd6e1ad7"
+ name = "Stella Balistreri"
+ secret_id = "...my_secret_id..."
+ workspace_id = "320ef50a-8ca7-46b0-83ea-280df1804a67"
}
```
@@ -36,11 +36,12 @@ resource "airbyte_source_wikipedia_pageviews" "my_source_wikipediapageviews" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -59,7 +60,6 @@ Required:
- `country` (String) The ISO 3166-1 alpha-2 code of a country for which to retrieve top articles.
- `end` (String) The date of the last day to include, in YYYYMMDD or YYYYMMDDHH format.
- `project` (String) If you want to filter by project, use the domain of any Wikimedia project.
-- `source_type` (String) must be one of ["wikipedia-pageviews"]
- `start` (String) The date of the first day to include, in YYYYMMDD or YYYYMMDDHH format.
diff --git a/docs/resources/source_woocommerce.md b/docs/resources/source_woocommerce.md
index 6affe5809..5063e9cd8 100644
--- a/docs/resources/source_woocommerce.md
+++ b/docs/resources/source_woocommerce.md
@@ -15,15 +15,15 @@ SourceWoocommerce Resource
```terraform
resource "airbyte_source_woocommerce" "my_source_woocommerce" {
configuration = {
- api_key = "...my_api_key..."
- api_secret = "...my_api_secret..."
- shop = "...my_shop..."
- source_type = "woocommerce"
- start_date = "2021-01-01"
+ api_key = "...my_api_key..."
+ api_secret = "...my_api_secret..."
+ shop = "...my_shop..."
+ start_date = "2021-01-01"
}
- name = "Laura Lindgren III"
- secret_id = "...my_secret_id..."
- workspace_id = "0097019a-48f8-48ec-a7bf-904e01105d38"
+ definition_id = "f3e58149-5129-457c-a986-96756fe05881"
+ name = "Julia Cole"
+ secret_id = "...my_secret_id..."
+ workspace_id = "ad45dc07-8875-4452-bf36-dab5122890f3"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_woocommerce" "my_source_woocommerce" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,10 +51,9 @@ resource "airbyte_source_woocommerce" "my_source_woocommerce" {
Required:
-- `api_key` (String) Customer Key for API in WooCommerce shop
+- `api_key` (String, Sensitive) Customer Key for API in WooCommerce shop
- `api_secret` (String) Customer Secret for API in WooCommerce shop
- `shop` (String) The name of the store. For https://EXAMPLE.com, the shop name is 'EXAMPLE.com'.
-- `source_type` (String) must be one of ["woocommerce"]
- `start_date` (String) The date you would like to replicate data from. Format: YYYY-MM-DD
diff --git a/docs/resources/source_xero.md b/docs/resources/source_xero.md
deleted file mode 100644
index 4026095cf..000000000
--- a/docs/resources/source_xero.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_xero Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceXero Resource
----
-
-# airbyte_source_xero (Resource)
-
-SourceXero Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_source_xero" "my_source_xero" {
- configuration = {
- authentication = {
- access_token = "...my_access_token..."
- client_id = "...my_client_id..."
- client_secret = "...my_client_secret..."
- refresh_token = "...my_refresh_token..."
- token_expiry_date = "...my_token_expiry_date..."
- }
- source_type = "xero"
- start_date = "2022-03-01T00:00:00Z"
- tenant_id = "...my_tenant_id..."
- }
- name = "Roger Hudson"
- secret_id = "...my_secret_id..."
- workspace_id = "6beb68a0-f657-4b7d-83a1-480f8de30f06"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `source_id` (String)
-- `source_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `authentication` (Attributes) (see [below for nested schema](#nestedatt--configuration--authentication))
-- `source_type` (String) must be one of ["xero"]
-- `start_date` (String) UTC date and time in the format YYYY-MM-DDTHH:mm:ssZ. Any data with created_at before this data will not be synced.
-- `tenant_id` (String) Enter your Xero organization's Tenant ID
-
-
-### Nested Schema for `configuration.authentication`
-
-Required:
-
-- `access_token` (String) Enter your Xero application's access token
-- `client_id` (String) Enter your Xero application's Client ID
-- `client_secret` (String) Enter your Xero application's Client Secret
-- `refresh_token` (String) Enter your Xero application's refresh token
-- `token_expiry_date` (String) The date-time when the access token should be refreshed
-
-
diff --git a/docs/resources/source_xkcd.md b/docs/resources/source_xkcd.md
index 521fbe492..ae97297c5 100644
--- a/docs/resources/source_xkcd.md
+++ b/docs/resources/source_xkcd.md
@@ -14,12 +14,11 @@ SourceXkcd Resource
```terraform
resource "airbyte_source_xkcd" "my_source_xkcd" {
- configuration = {
- source_type = "xkcd"
- }
- name = "Mr. Laurence Littel"
- secret_id = "...my_secret_id..."
- workspace_id = "18d97e15-2297-4510-9a80-312292cc61c2"
+ configuration = {}
+ definition_id = "e992c2a3-f4c8-4fc0-a6c7-cc4eafdab4c1"
+ name = "Wilbert Ortiz"
+ secret_id = "...my_secret_id..."
+ workspace_id = "6c12869f-984d-4613-8285-42bb37a458fa"
}
```
@@ -29,11 +28,12 @@ resource "airbyte_source_xkcd" "my_source_xkcd" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -44,8 +44,4 @@ resource "airbyte_source_xkcd" "my_source_xkcd" {
### Nested Schema for `configuration`
-Optional:
-
-- `source_type` (String) must be one of ["xkcd"]
-
diff --git a/docs/resources/source_yandex_metrica.md b/docs/resources/source_yandex_metrica.md
index c51ceb07f..8e8532719 100644
--- a/docs/resources/source_yandex_metrica.md
+++ b/docs/resources/source_yandex_metrica.md
@@ -15,15 +15,15 @@ SourceYandexMetrica Resource
```terraform
resource "airbyte_source_yandex_metrica" "my_source_yandexmetrica" {
configuration = {
- auth_token = "...my_auth_token..."
- counter_id = "...my_counter_id..."
- end_date = "2022-01-01"
- source_type = "yandex-metrica"
- start_date = "2022-01-01"
+ auth_token = "...my_auth_token..."
+ counter_id = "...my_counter_id..."
+ end_date = "2022-01-01"
+ start_date = "2022-01-01"
}
- name = "Dominic Marvin"
- secret_id = "...my_secret_id..."
- workspace_id = "e102da2d-e35f-48e0-9bf3-3eaab45402ac"
+ definition_id = "71a16fff-1f04-4aee-bc30-6c4f3397c204"
+ name = "June Williamson"
+ secret_id = "...my_secret_id..."
+ workspace_id = "deba481e-413d-4d76-8cc3-ae1d775ee978"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_yandex_metrica" "my_source_yandexmetrica" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,9 +51,8 @@ resource "airbyte_source_yandex_metrica" "my_source_yandexmetrica" {
Required:
-- `auth_token` (String) Your Yandex Metrica API access token
+- `auth_token` (String, Sensitive) Your Yandex Metrica API access token
- `counter_id` (String) Counter ID
-- `source_type` (String) must be one of ["yandex-metrica"]
- `start_date` (String) Starting point for your data replication, in format of "YYYY-MM-DD".
Optional:
diff --git a/docs/resources/source_yotpo.md b/docs/resources/source_yotpo.md
index 0b4f4c155..c78e1321e 100644
--- a/docs/resources/source_yotpo.md
+++ b/docs/resources/source_yotpo.md
@@ -17,13 +17,13 @@ resource "airbyte_source_yotpo" "my_source_yotpo" {
configuration = {
access_token = "...my_access_token..."
app_key = "...my_app_key..."
- email = "Ibrahim74@gmail.com"
- source_type = "yotpo"
+ email = "Bradley96@hotmail.com"
start_date = "2022-03-01T00:00:00.000Z"
}
- name = "Clark McGlynn"
- secret_id = "...my_secret_id..."
- workspace_id = "61aae5eb-5f0c-4492-b574-4d08a2267aae"
+ definition_id = "746ac11e-b024-4372-8c2f-a90b3fc58aed"
+ name = "Reginald Howell"
+ secret_id = "...my_secret_id..."
+ workspace_id = "07de9609-725c-46d5-a5da-35039f4e4098"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_yotpo" "my_source_yotpo" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,10 +51,13 @@ resource "airbyte_source_yotpo" "my_source_yotpo" {
Required:
-- `access_token` (String) Access token recieved as a result of API call to https://api.yotpo.com/oauth/token (Ref- https://apidocs.yotpo.com/reference/yotpo-authentication)
-- `app_key` (String) App key found at settings (Ref- https://settings.yotpo.com/#/general_settings)
-- `email` (String) Email address registered with yotpo.
-- `source_type` (String) must be one of ["yotpo"]
+- `access_token` (String, Sensitive) Access token received as a result of API call to https://api.yotpo.com/oauth/token (Ref- https://apidocs.yotpo.com/reference/yotpo-authentication)
+- `app_key` (String, Sensitive) App key found at settings (Ref- https://settings.yotpo.com/#/general_settings)
- `start_date` (String) Date time filter for incremental filter, Specify which date to extract from.
+Optional:
+
+- `email` (String) Default: "example@gmail.com"
+Email address registered with yotpo.
+
diff --git a/docs/resources/source_younium.md b/docs/resources/source_younium.md
deleted file mode 100644
index 187ae007c..000000000
--- a/docs/resources/source_younium.md
+++ /dev/null
@@ -1,62 +0,0 @@
----
-# generated by https://github.com/hashicorp/terraform-plugin-docs
-page_title: "airbyte_source_younium Resource - terraform-provider-airbyte"
-subcategory: ""
-description: |-
- SourceYounium Resource
----
-
-# airbyte_source_younium (Resource)
-
-SourceYounium Resource
-
-## Example Usage
-
-```terraform
-resource "airbyte_source_younium" "my_source_younium" {
- configuration = {
- legal_entity = "...my_legal_entity..."
- password = "...my_password..."
- playground = true
- source_type = "younium"
- username = "Jairo.Monahan79"
- }
- name = "Martha Orn"
- secret_id = "...my_secret_id..."
- workspace_id = "1becb83d-2378-4ae3-bfc2-3d9450a986a4"
-}
-```
-
-
-## Schema
-
-### Required
-
-- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
-- `workspace_id` (String)
-
-### Optional
-
-- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
-
-### Read-Only
-
-- `source_id` (String)
-- `source_type` (String)
-
-
-### Nested Schema for `configuration`
-
-Required:
-
-- `legal_entity` (String) Legal Entity that data should be pulled from
-- `password` (String) Account password for younium account API key
-- `source_type` (String) must be one of ["younium"]
-- `username` (String) Username for Younium account
-
-Optional:
-
-- `playground` (Boolean) Property defining if connector is used against playground or production environment
-
-
diff --git a/docs/resources/source_youtube_analytics.md b/docs/resources/source_youtube_analytics.md
index c3833c756..a65431cc8 100644
--- a/docs/resources/source_youtube_analytics.md
+++ b/docs/resources/source_youtube_analytics.md
@@ -16,15 +16,16 @@ SourceYoutubeAnalytics Resource
resource "airbyte_source_youtube_analytics" "my_source_youtubeanalytics" {
configuration = {
credentials = {
- client_id = "...my_client_id..."
- client_secret = "...my_client_secret..."
- refresh_token = "...my_refresh_token..."
+ additional_properties = "{ \"see\": \"documentation\" }"
+ client_id = "...my_client_id..."
+ client_secret = "...my_client_secret..."
+ refresh_token = "...my_refresh_token..."
}
- source_type = "youtube-analytics"
}
- name = "Tommy Rippin"
- secret_id = "...my_secret_id..."
- workspace_id = "707f06b2-8ecc-4864-9238-6f62c969c4cc"
+ definition_id = "bb8c2a23-b3c0-4134-a218-66cf518dbd5e"
+ name = "Mr. Clay Terry"
+ secret_id = "...my_secret_id..."
+ workspace_id = "e07eadc6-f53d-4253-9b8b-1e39d437be8f"
}
```
@@ -34,11 +35,12 @@ resource "airbyte_source_youtube_analytics" "my_source_youtubeanalytics" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -52,7 +54,6 @@ resource "airbyte_source_youtube_analytics" "my_source_youtubeanalytics" {
Required:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `source_type` (String) must be one of ["youtube-analytics"]
### Nested Schema for `configuration.credentials`
@@ -61,7 +62,7 @@ Required:
- `client_id` (String) The Client ID of your developer application
- `client_secret` (String) The client secret of your developer application
-- `refresh_token` (String) A refresh token generated using the above client ID and secret
+- `refresh_token` (String, Sensitive) A refresh token generated using the above client ID and secret
Optional:
diff --git a/docs/resources/source_zendesk_chat.md b/docs/resources/source_zendesk_chat.md
index 45c2cc0b7..548db31e6 100644
--- a/docs/resources/source_zendesk_chat.md
+++ b/docs/resources/source_zendesk_chat.md
@@ -16,18 +16,17 @@ SourceZendeskChat Resource
resource "airbyte_source_zendesk_chat" "my_source_zendeskchat" {
configuration = {
credentials = {
- source_zendesk_chat_authorization_method_access_token = {
+ source_zendesk_chat_access_token = {
access_token = "...my_access_token..."
- credentials = "access_token"
}
}
- source_type = "zendesk-chat"
- start_date = "2021-02-01T00:00:00Z"
- subdomain = "...my_subdomain..."
+ start_date = "2021-02-01T00:00:00Z"
+ subdomain = "...my_subdomain..."
}
- name = "Mabel Lebsack MD"
- secret_id = "...my_secret_id..."
- workspace_id = "3fd3c81d-a10f-48c2-bdf9-31da3edb51fa"
+ definition_id = "f797fa8a-e012-4beb-a22c-99641ef630f5"
+ name = "Julian Kuhic"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c0e34b35-2ddb-404c-9bce-387d66444a18"
}
```
@@ -37,11 +36,12 @@ resource "airbyte_source_zendesk_chat" "my_source_zendeskchat" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,69 +54,38 @@ resource "airbyte_source_zendesk_chat" "my_source_zendeskchat" {
Required:
-- `source_type` (String) must be one of ["zendesk-chat"]
- `start_date` (String) The date from which you'd like to replicate data for Zendesk Chat API, in the format YYYY-MM-DDT00:00:00Z.
Optional:
- `credentials` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials))
-- `subdomain` (String) Required if you access Zendesk Chat from a Zendesk Support subdomain.
+- `subdomain` (String) Default: ""
+Required if you access Zendesk Chat from a Zendesk Support subdomain.
### Nested Schema for `configuration.credentials`
Optional:
-- `source_zendesk_chat_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_authorization_method_access_token))
-- `source_zendesk_chat_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_authorization_method_o_auth2_0))
-- `source_zendesk_chat_update_authorization_method_access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_update_authorization_method_access_token))
-- `source_zendesk_chat_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_chat_update_authorization_method_o_auth2_0))
+- `access_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--access_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_authorization_method_access_token`
+
+### Nested Schema for `configuration.credentials.access_token`
Required:
-- `access_token` (String) The Access Token to make authenticated requests.
-- `credentials` (String) must be one of ["access_token"]
+- `access_token` (String, Sensitive) The Access Token to make authenticated requests.
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_authorization_method_o_auth2_0`
-
-Required:
-
-- `credentials` (String) must be one of ["oauth2.0"]
-
-Optional:
-
-- `access_token` (String) Access Token for making authenticated requests.
-- `client_id` (String) The Client ID of your OAuth application
-- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_update_authorization_method_access_token`
-
-Required:
-
-- `access_token` (String) The Access Token to make authenticated requests.
-- `credentials` (String) must be one of ["access_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_chat_update_authorization_method_o_auth2_0`
-
-Required:
-
-- `credentials` (String) must be one of ["oauth2.0"]
+
+### Nested Schema for `configuration.credentials.o_auth20`
Optional:
-- `access_token` (String) Access Token for making authenticated requests.
+- `access_token` (String, Sensitive) Access Token for making authenticated requests.
- `client_id` (String) The Client ID of your OAuth application
- `client_secret` (String) The Client Secret of your OAuth application.
-- `refresh_token` (String) Refresh Token to obtain new Access Token, when it's expired.
+- `refresh_token` (String, Sensitive) Refresh Token to obtain new Access Token, when it's expired.
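With the credential blocks collapsed into `access_token` and `o_auth20`, and the token fields now marked Sensitive, a minimal sketch of the new OAuth shape might look like this (all values and `var.*` names are placeholders, assumed declared elsewhere):

```terraform
resource "airbyte_source_zendesk_chat" "example" {
  configuration = {
    credentials = {
      o_auth20 = {
        # Sensitive fields: source them from variables, not literals
        access_token  = var.zendesk_chat_access_token
        refresh_token = var.zendesk_chat_refresh_token
        client_id     = var.zendesk_chat_client_id
        client_secret = var.zendesk_chat_client_secret
      }
    }
    start_date = "2021-02-01T00:00:00Z"
    subdomain  = "my-subdomain"
  }
  name         = "zendesk-chat"
  workspace_id = var.workspace_id
}
```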
diff --git a/docs/resources/source_zendesk_sell.md b/docs/resources/source_zendesk_sell.md
new file mode 100644
index 000000000..92d86a955
--- /dev/null
+++ b/docs/resources/source_zendesk_sell.md
@@ -0,0 +1,53 @@
+---
+# generated by https://github.com/hashicorp/terraform-plugin-docs
+page_title: "airbyte_source_zendesk_sell Resource - terraform-provider-airbyte"
+subcategory: ""
+description: |-
+ SourceZendeskSell Resource
+---
+
+# airbyte_source_zendesk_sell (Resource)
+
+SourceZendeskSell Resource
+
+## Example Usage
+
+```terraform
+resource "airbyte_source_zendesk_sell" "my_source_zendesksell" {
+ configuration = {
+ api_token = "f23yhd630otl94y85a8bf384958473pto95847fd006da49382716or937ruw059"
+ }
+ definition_id = "6797a763-e10f-499e-8087-9e49484a7485"
+ name = "Jane Batz"
+ secret_id = "...my_secret_id..."
+ workspace_id = "4aee427f-93df-49bf-84b7-84edaaf2f424"
+}
+```
+
+
+## Schema
+
+### Required
+
+- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
+- `name` (String) Name of the source e.g. dev-mysql-instance.
+- `workspace_id` (String)
+
+### Optional
+
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
+- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
+
+### Read-Only
+
+- `source_id` (String)
+- `source_type` (String)
+
+
+### Nested Schema for `configuration`
+
+Required:
+
+- `api_token` (String, Sensitive) The API token for authenticating to Zendesk Sell
+
+
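Since `api_token` is Sensitive, a sketch of wiring it through a sensitive Terraform variable rather than a hardcoded literal (variable names are illustrative):

```terraform
variable "zendesk_sell_api_token" {
  type      = string
  sensitive = true
}

variable "workspace_id" {
  type = string
}

resource "airbyte_source_zendesk_sell" "example" {
  configuration = {
    # Marked Sensitive in the schema; Terraform redacts it in plan output
    api_token = var.zendesk_sell_api_token
  }
  name         = "zendesk-sell"
  workspace_id = var.workspace_id
}
```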
diff --git a/docs/resources/source_zendesk_sunshine.md b/docs/resources/source_zendesk_sunshine.md
index 625122394..f3fade55e 100644
--- a/docs/resources/source_zendesk_sunshine.md
+++ b/docs/resources/source_zendesk_sunshine.md
@@ -16,19 +16,18 @@ SourceZendeskSunshine Resource
resource "airbyte_source_zendesk_sunshine" "my_source_zendesksunshine" {
configuration = {
credentials = {
- source_zendesk_sunshine_authorization_method_api_token = {
- api_token = "...my_api_token..."
- auth_method = "api_token"
- email = "Leonor_Funk@hotmail.com"
+ source_zendesk_sunshine_api_token = {
+ api_token = "...my_api_token..."
+ email = "Robbie51@hotmail.com"
}
}
- source_type = "zendesk-sunshine"
- start_date = "2021-01-01T00:00:00Z"
- subdomain = "...my_subdomain..."
+ start_date = "2021-01-01T00:00:00Z"
+ subdomain = "...my_subdomain..."
}
- name = "Mrs. Edith Hermiston"
- secret_id = "...my_secret_id..."
- workspace_id = "726d1532-1b83-42a5-ad69-180ff60eb9a6"
+ definition_id = "6f099262-2de7-4b1a-93e5-915fe5844c8d"
+ name = "Kristie Moen"
+ secret_id = "...my_secret_id..."
+ workspace_id = "7badf74d-23a8-47a4-aabf-6ae57802daa8"
}
```
@@ -38,11 +37,12 @@ resource "airbyte_source_zendesk_sunshine" "my_source_zendesksunshine" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,7 +55,6 @@ resource "airbyte_source_zendesk_sunshine" "my_source_zendesksunshine" {
Required:
-- `source_type` (String) must be one of ["zendesk-sunshine"]
- `start_date` (String) The date from which you'd like to replicate data for Zendesk Sunshine API, in the format YYYY-MM-DDT00:00:00Z.
- `subdomain` (String) The subdomain for your Zendesk Account.
@@ -68,49 +67,24 @@ Optional:
Optional:
-- `source_zendesk_sunshine_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_authorization_method_api_token))
-- `source_zendesk_sunshine_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_authorization_method_o_auth2_0))
-- `source_zendesk_sunshine_update_authorization_method_api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_update_authorization_method_api_token))
-- `source_zendesk_sunshine_update_authorization_method_o_auth2_0` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_sunshine_update_authorization_method_o_auth2_0))
+- `api_token` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--api_token))
+- `o_auth20` (Attributes) (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_authorization_method_api_token`
+
+### Nested Schema for `configuration.credentials.api_token`
Required:
-- `api_token` (String) API Token. See the docs for information on how to generate this key.
-- `auth_method` (String) must be one of ["api_token"]
+- `api_token` (String, Sensitive) API Token. See the docs for information on how to generate this key.
- `email` (String) The user email for your Zendesk account
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_authorization_method_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) Long-term access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
-- `client_id` (String) The Client ID of your OAuth application.
-- `client_secret` (String) The Client Secret of your OAuth application.
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_update_authorization_method_api_token`
-
-Required:
-
-- `api_token` (String) API Token. See the docs for information on how to generate this key.
-- `auth_method` (String) must be one of ["api_token"]
-- `email` (String) The user email for your Zendesk account
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_sunshine_update_authorization_method_o_auth2_0`
-
-Required:
-
-- `access_token` (String) Long-term access Token for making authenticated requests.
-- `auth_method` (String) must be one of ["oauth2.0"]
+- `access_token` (String, Sensitive) Long-term access Token for making authenticated requests.
- `client_id` (String) The Client ID of your OAuth application.
- `client_secret` (String) The Client Secret of your OAuth application.
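Under the renamed nested schema (`o_auth20` instead of `source_zendesk_sunshine_authorization_method_o_auth2_0`), a configuration using the new key might look like this (all values and `var.*` names are placeholders, assumed declared elsewhere):

```terraform
resource "airbyte_source_zendesk_sunshine" "example" {
  configuration = {
    credentials = {
      o_auth20 = {
        # All three fields are Required for this auth method
        access_token  = var.zendesk_access_token
        client_id     = var.zendesk_client_id
        client_secret = var.zendesk_client_secret
      }
    }
    start_date = "2021-01-01T00:00:00Z"
    subdomain  = "my-subdomain"
  }
  name         = "zendesk-sunshine"
  workspace_id = var.workspace_id
}
```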
diff --git a/docs/resources/source_zendesk_support.md b/docs/resources/source_zendesk_support.md
index 88d61152d..d014d91f0 100644
--- a/docs/resources/source_zendesk_support.md
+++ b/docs/resources/source_zendesk_support.md
@@ -16,20 +16,20 @@ SourceZendeskSupport Resource
resource "airbyte_source_zendesk_support" "my_source_zendesksupport" {
configuration = {
credentials = {
- source_zendesk_support_authentication_api_token = {
- api_token = "...my_api_token..."
- credentials = "api_token"
- email = "Ezequiel.Lindgren56@yahoo.com"
+ source_zendesk_support_api_token = {
+ additional_properties = "{ \"see\": \"documentation\" }"
+ api_token = "...my_api_token..."
+ email = "Ansel_McLaughlin@gmail.com"
}
}
- ignore_pagination = true
- source_type = "zendesk-support"
+ ignore_pagination = false
start_date = "2020-10-15T00:00:00Z"
subdomain = "...my_subdomain..."
}
- name = "Alexander Friesen"
- secret_id = "...my_secret_id..."
- workspace_id = "82dbec75-c68c-4606-9946-8ce304d8849b"
+ definition_id = "7526c0e6-8d41-4f29-878b-d831a4caf6a0"
+ name = "Linda Weissnat"
+ secret_id = "...my_secret_id..."
+ workspace_id = "20a84c82-feed-435f-9471-260525978122"
}
```
@@ -39,11 +39,12 @@ resource "airbyte_source_zendesk_support" "my_source_zendesksupport" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -56,13 +57,13 @@ resource "airbyte_source_zendesk_support" "my_source_zendesksupport" {
Required:
-- `source_type` (String) must be one of ["zendesk-support"]
- `subdomain` (String) This is your unique Zendesk subdomain that can be found in your account URL. For example, in https://MY_SUBDOMAIN.zendesk.com/, MY_SUBDOMAIN is the value of your subdomain.
Optional:
- `credentials` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials))
-- `ignore_pagination` (Boolean) Makes each stream read a single page of data.
+- `ignore_pagination` (Boolean) Default: false
+Makes each stream read a single page of data.
- `start_date` (String) The UTC date and time from which you'd like to replicate data, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
@@ -70,66 +71,33 @@ Optional:
Optional:
-- `source_zendesk_support_authentication_api_token` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_authentication_api_token))
-- `source_zendesk_support_authentication_o_auth2_0` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_authentication_o_auth2_0))
-- `source_zendesk_support_update_authentication_api_token` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_update_authentication_api_token))
-- `source_zendesk_support_update_authentication_o_auth2_0` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_support_update_authentication_o_auth2_0))
+- `api_token` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--api_token))
+- `o_auth20` (Attributes) Zendesk allows two authentication methods. We recommend using `OAuth2.0` for Airbyte Cloud users and `API token` for Airbyte Open Source users. (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_authentication_api_token`
+
+### Nested Schema for `configuration.credentials.api_token`
Required:
-- `api_token` (String) The value of the API token generated. See our full documentation for more information on generating this token.
+- `api_token` (String, Sensitive) The value of the API token generated. See our full documentation for more information on generating this token.
- `email` (String) The user email for your Zendesk account.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `credentials` (String) must be one of ["api_token"]
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_authentication_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) The OAuth access token. See the Zendesk docs for more information on generating this token.
+- `access_token` (String, Sensitive) The OAuth access token. See the Zendesk docs for more information on generating this token.
Optional:
- `additional_properties` (String) Parsed as JSON.
- `client_id` (String) The OAuth client's ID. See this guide for more information.
- `client_secret` (String) The OAuth client secret. See this guide for more information.
-- `credentials` (String) must be one of ["oauth2.0"]
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_update_authentication_api_token`
-
-Required:
-
-- `api_token` (String) The value of the API token generated. See our full documentation for more information on generating this token.
-- `email` (String) The user email for your Zendesk account.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `credentials` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_support_update_authentication_o_auth2_0`
-
-Required:
-
-- `access_token` (String) The OAuth access token. See the Zendesk docs for more information on generating this token.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `client_id` (String) The OAuth client's ID. See this guide for more information.
-- `client_secret` (String) The OAuth client secret. See this guide for more information.
-- `credentials` (String) must be one of ["oauth2.0"]
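For the renamed `api_token` block, whose `additional_properties` attribute is parsed as JSON, a sketch using Terraform's built-in `jsonencode` to keep the string well-formed (values and `var.*` names are placeholders, assumed declared elsewhere):

```terraform
resource "airbyte_source_zendesk_support" "example" {
  configuration = {
    credentials = {
      api_token = {
        api_token = var.zendesk_api_token
        email     = "ops@example.com"
        # Parsed as JSON by the provider; jsonencode avoids escaping by hand
        additional_properties = jsonencode({ note = "extra fields" })
      }
    }
    subdomain  = "my-subdomain"
    start_date = "2020-10-15T00:00:00Z"
  }
  name         = "zendesk-support"
  workspace_id = var.workspace_id
}
```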
diff --git a/docs/resources/source_zendesk_talk.md b/docs/resources/source_zendesk_talk.md
index e75284df5..a4b0ae93b 100644
--- a/docs/resources/source_zendesk_talk.md
+++ b/docs/resources/source_zendesk_talk.md
@@ -16,19 +16,19 @@ SourceZendeskTalk Resource
resource "airbyte_source_zendesk_talk" "my_source_zendesktalk" {
configuration = {
credentials = {
- source_zendesk_talk_authentication_api_token = {
- api_token = "...my_api_token..."
- auth_type = "api_token"
- email = "Kacie27@hotmail.com"
+ source_zendesk_talk_api_token = {
+ additional_properties = "{ \"see\": \"documentation\" }"
+ api_token = "...my_api_token..."
+ email = "Brain88@gmail.com"
}
}
- source_type = "zendesk-talk"
- start_date = "2020-10-15T00:00:00Z"
- subdomain = "...my_subdomain..."
+ start_date = "2020-10-15T00:00:00Z"
+ subdomain = "...my_subdomain..."
}
- name = "Jackie Welch"
- secret_id = "...my_secret_id..."
- workspace_id = "bb0c69e3-72db-4134-8ba9-f78a5c0ed7aa"
+ definition_id = "9a97873e-c6ec-423f-8936-834bb7f256aa"
+ name = "Gwen Towne"
+ secret_id = "...my_secret_id..."
+ workspace_id = "7a7ac93c-e210-41f6-92ef-f8de56504728"
}
```
@@ -38,11 +38,12 @@ resource "airbyte_source_zendesk_talk" "my_source_zendesktalk" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -55,7 +56,6 @@ resource "airbyte_source_zendesk_talk" "my_source_zendesktalk" {
Required:
-- `source_type` (String) must be one of ["zendesk-talk"]
- `start_date` (String) The date from which you'd like to replicate data for Zendesk Talk API, in the format YYYY-MM-DDT00:00:00Z. All data generated after this date will be replicated.
- `subdomain` (String) This is your Zendesk subdomain that can be found in your account URL. For example, in https://{MY_SUBDOMAIN}.zendesk.com/, where MY_SUBDOMAIN is the value of your subdomain.
@@ -68,65 +68,32 @@ Optional:
Optional:
-- `source_zendesk_talk_authentication_api_token` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_authentication_api_token))
-- `source_zendesk_talk_authentication_o_auth2_0` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_authentication_o_auth2_0))
-- `source_zendesk_talk_update_authentication_api_token` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_update_authentication_api_token))
-- `source_zendesk_talk_update_authentication_o_auth2_0` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--source_zendesk_talk_update_authentication_o_auth2_0))
+- `api_token` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--api_token))
+- `o_auth20` (Attributes) Zendesk service provides two authentication methods. Choose between: `OAuth2.0` or `API token`. (see [below for nested schema](#nestedatt--configuration--credentials--o_auth20))
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_authentication_api_token`
+
+### Nested Schema for `configuration.credentials.api_token`
Required:
-- `api_token` (String) The value of the API token generated. See the docs for more information.
+- `api_token` (String, Sensitive) The value of the API token generated. See the docs for more information.
- `email` (String) The user email for your Zendesk account.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["api_token"]
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_authentication_o_auth2_0`
+
+### Nested Schema for `configuration.credentials.o_auth20`
Required:
-- `access_token` (String) The value of the API token generated. See the docs for more information.
+- `access_token` (String, Sensitive) The value of the API token generated. See the docs for more information.
Optional:
- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["oauth2.0"]
-- `client_id` (String) Client ID
-- `client_secret` (String) Client Secret
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_update_authentication_api_token`
-
-Required:
-
-- `api_token` (String) The value of the API token generated. See the docs for more information.
-- `email` (String) The user email for your Zendesk account.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["api_token"]
-
-
-
-### Nested Schema for `configuration.credentials.source_zendesk_talk_update_authentication_o_auth2_0`
-
-Required:
-
-- `access_token` (String) The value of the API token generated. See the docs for more information.
-
-Optional:
-
-- `additional_properties` (String) Parsed as JSON.
-- `auth_type` (String) must be one of ["oauth2.0"]
- `client_id` (String) Client ID
- `client_secret` (String) Client Secret
diff --git a/docs/resources/source_zenloop.md b/docs/resources/source_zenloop.md
index 46ba116d7..1e3ba7fca 100644
--- a/docs/resources/source_zenloop.md
+++ b/docs/resources/source_zenloop.md
@@ -17,13 +17,13 @@ resource "airbyte_source_zenloop" "my_source_zenloop" {
configuration = {
api_token = "...my_api_token..."
date_from = "2021-10-24T03:30:30Z"
- source_type = "zenloop"
survey_group_id = "...my_survey_group_id..."
survey_id = "...my_survey_id..."
}
- name = "Ricardo Champlin"
- secret_id = "...my_secret_id..."
- workspace_id = "7261fb0c-58d2-47b5-9996-b5b4b50eef71"
+ definition_id = "30aace29-0d7b-43b3-98af-f5206e7c6651"
+ name = "Colleen Hodkiewicz"
+ secret_id = "...my_secret_id..."
+ workspace_id = "de9cd819-ecc3-47ba-9700-ba64daf2cd7c"
}
```
@@ -33,11 +33,12 @@ resource "airbyte_source_zenloop" "my_source_zenloop" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -50,8 +51,7 @@ resource "airbyte_source_zenloop" "my_source_zenloop" {
Required:
-- `api_token` (String) Zenloop API Token. You can get the API token in settings page here
-- `source_type` (String) must be one of ["zenloop"]
+- `api_token` (String, Sensitive) Zenloop API Token. You can get the API token on the settings page here
Optional:
diff --git a/docs/resources/source_zoho_crm.md b/docs/resources/source_zoho_crm.md
index 3bc56397a..aac734e04 100644
--- a/docs/resources/source_zoho_crm.md
+++ b/docs/resources/source_zoho_crm.md
@@ -17,16 +17,16 @@ resource "airbyte_source_zoho_crm" "my_source_zohocrm" {
configuration = {
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- dc_region = "US"
- edition = "Enterprise"
- environment = "Developer"
+ dc_region = "IN"
+ edition = "Ultimate"
+ environment = "Sandbox"
refresh_token = "...my_refresh_token..."
- source_type = "zoho-crm"
- start_datetime = "2000-01-01T13:00+00:00"
+ start_datetime = "2000-01-01 13:00"
}
- name = "Kenneth Fisher"
- secret_id = "...my_secret_id..."
- workspace_id = "b1710688-deeb-4ef8-97f3-dd0ccd33f11b"
+ definition_id = "7a306443-a75b-4cf4-a2e1-378db01d76f7"
+ name = "Jody Collins"
+ secret_id = "...my_secret_id..."
+ workspace_id = "a6e51f0c-20e4-4312-90cb-fe39df03e297"
}
```
@@ -36,11 +36,12 @@ resource "airbyte_source_zoho_crm" "my_source_zohocrm" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -57,15 +58,14 @@ Required:
- `client_secret` (String) OAuth2.0 Client Secret
- `dc_region` (String) must be one of ["US", "AU", "EU", "IN", "CN", "JP"]
Please choose the region of your Data Center location. More info by this Link
-- `edition` (String) must be one of ["Free", "Standard", "Professional", "Enterprise", "Ultimate"]
-Choose your Edition of Zoho CRM to determine API Concurrency Limits
- `environment` (String) must be one of ["Production", "Developer", "Sandbox"]
Please choose the environment
-- `refresh_token` (String) OAuth2.0 Refresh Token
-- `source_type` (String) must be one of ["zoho-crm"]
+- `refresh_token` (String, Sensitive) OAuth2.0 Refresh Token
Optional:
+- `edition` (String) must be one of ["Free", "Standard", "Professional", "Enterprise", "Ultimate"]; Default: "Free"
+Choose your Edition of Zoho CRM to determine API Concurrency Limits
- `start_datetime` (String) ISO 8601, for instance: `YYYY-MM-DD`, `YYYY-MM-DD HH:MM:SS+HH:MM`
diff --git a/docs/resources/source_zoom.md b/docs/resources/source_zoom.md
index 188b05b34..a055f4994 100644
--- a/docs/resources/source_zoom.md
+++ b/docs/resources/source_zoom.md
@@ -15,12 +15,12 @@ SourceZoom Resource
```terraform
resource "airbyte_source_zoom" "my_source_zoom" {
configuration = {
- jwt_token = "...my_jwt_token..."
- source_type = "zoom"
+ jwt_token = "...my_jwt_token..."
}
- name = "Alexis Gutmann IV"
- secret_id = "...my_secret_id..."
- workspace_id = "0aa10418-6ec7-459e-82f3-702c5c8e2d30"
+ definition_id = "d6f5cf39-b34f-4958-9f42-198f32822b82"
+ name = "Gregory Hirthe"
+ secret_id = "...my_secret_id..."
+ workspace_id = "bc2b7c1d-3540-4fbb-a2d8-a9d0010028d1"
}
```
@@ -30,11 +30,12 @@ resource "airbyte_source_zoom" "my_source_zoom" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -47,7 +48,6 @@ resource "airbyte_source_zoom" "my_source_zoom" {
Required:
-- `jwt_token` (String) JWT Token
-- `source_type` (String) must be one of ["zoom"]
+- `jwt_token` (String, Sensitive) JWT Token
diff --git a/docs/resources/source_zuora.md b/docs/resources/source_zuora.md
index d1174e7d4..44682f1ec 100644
--- a/docs/resources/source_zuora.md
+++ b/docs/resources/source_zuora.md
@@ -17,15 +17,15 @@ resource "airbyte_source_zuora" "my_source_zuora" {
configuration = {
client_id = "...my_client_id..."
client_secret = "...my_client_secret..."
- data_query = "Unlimited"
- source_type = "zuora"
+ data_query = "Live"
start_date = "...my_start_date..."
- tenant_endpoint = "US Performance Test"
- window_in_days = "200"
+ tenant_endpoint = "EU Production"
+ window_in_days = "0.5"
}
- name = "Joan Bednar"
- secret_id = "...my_secret_id..."
- workspace_id = "a44707bf-375b-4442-8282-1fdb2f69e592"
+ definition_id = "280d807c-dd8e-4b8c-b5c4-610938eb2433"
+ name = "Anne Funk"
+ secret_id = "...my_secret_id..."
+ workspace_id = "c5c5aa0b-5368-4b26-a568-aa6dc340bb15"
}
```
@@ -35,11 +35,12 @@ resource "airbyte_source_zuora" "my_source_zuora" {
### Required
- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
-- `name` (String)
+- `name` (String) Name of the source e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional
+- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow.
### Read-Only
@@ -54,15 +55,15 @@ Required:
- `client_id` (String) Your OAuth user Client ID
- `client_secret` (String) Your OAuth user Client Secret
-- `data_query` (String) must be one of ["Live", "Unlimited"]
-Choose between `Live`, or `Unlimited` - the optimized, replicated database at 12 hours freshness for high volume extraction Link
-- `source_type` (String) must be one of ["zuora"]
- `start_date` (String) Start Date in format: YYYY-MM-DD
- `tenant_endpoint` (String) must be one of ["US Production", "US Cloud Production", "US API Sandbox", "US Cloud API Sandbox", "US Central Sandbox", "US Performance Test", "EU Production", "EU API Sandbox", "EU Central Sandbox"]
Please choose the right endpoint where your Tenant is located. More info by this Link
Optional:
-- `window_in_days` (String) The amount of days for each data-chunk begining from start_date. Bigger the value - faster the fetch. (0.1 - as for couple of hours, 1 - as for a Day; 364 - as for a Year).
+- `data_query` (String) must be one of ["Live", "Unlimited"]; Default: "Live"
+Choose between `Live` and `Unlimited`, the optimized, replicated database at 12-hour freshness for high-volume extraction. More info at this link
+- `window_in_days` (String) Default: "90"
+The number of days in each data chunk, beginning from start_date. The bigger the value, the faster the fetch. (0.1 is roughly a couple of hours; 1 is a day; 364 is a year.)
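Because `data_query` and `window_in_days` moved to Optional with defaults (`"Live"` and `"90"`), both can now be omitted; a minimal sketch (values and `var.*` names are placeholders, assumed declared elsewhere):

```terraform
resource "airbyte_source_zuora" "example" {
  configuration = {
    client_id       = var.zuora_client_id
    client_secret   = var.zuora_client_secret
    start_date      = "2021-01-01"
    tenant_endpoint = "US Production"
    # data_query defaults to "Live"; window_in_days defaults to "90"
  }
  name         = "zuora"
  workspace_id = var.workspace_id
}
```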
diff --git a/docs/resources/workspace.md b/docs/resources/workspace.md
index a3c6ca178..d007f3ced 100644
--- a/docs/resources/workspace.md
+++ b/docs/resources/workspace.md
@@ -14,7 +14,7 @@ Workspace Resource
```terraform
resource "airbyte_workspace" "my_workspace" {
- name = "Glenda Schiller DDS"
+ name = "Jessie Moen"
}
```
@@ -27,7 +27,7 @@ resource "airbyte_workspace" "my_workspace" {
### Read-Only
-- `data_residency` (String) must be one of ["auto", "us", "eu"]
+- `data_residency` (String) must be one of ["auto", "us", "eu"]; Default: "auto"
- `workspace_id` (String)