Merge pull request #131 from airbytehq/speakeasy-sdk-regen-1723505373
chore: 🐝 Update SDK - Generate
malikdiarra authored Aug 13, 2024
2 parents 37b4918 + 48a39ec commit cdfa5be
Showing 618 changed files with 7,697 additions and 6,984 deletions.
125 changes: 62 additions & 63 deletions .speakeasy/gen.lock

Large diffs are not rendered by default.

8 changes: 4 additions & 4 deletions .speakeasy/workflow.lock
@@ -2,17 +2,17 @@ speakeasyVersion: 1.335.2
sources:
my-source:
sourceNamespace: my-source
- sourceRevisionDigest: sha256:11bc6c5cce07b8124f88147cb7e73b56ed406eefd4374dce9f478c06a1fd3d7d
- sourceBlobDigest: sha256:20143bd569af5fe53d9cc6d46c85db6b8bdfa07d2f1adfd286931ba24c8cb1a6
+ sourceRevisionDigest: sha256:1ca6e2fd1c68d5cd3338dac401965c77156ba3782f2d17320b50f6db906ff262
+ sourceBlobDigest: sha256:b399d4769fc1c9af512eafc9c6ac24c67e7e3bd76ebcf3ea0720fca33693ed6d
tags:
- latest
- main
targets:
terraform-provider-airbyte:
source: my-source
sourceNamespace: my-source
- sourceRevisionDigest: sha256:11bc6c5cce07b8124f88147cb7e73b56ed406eefd4374dce9f478c06a1fd3d7d
- sourceBlobDigest: sha256:20143bd569af5fe53d9cc6d46c85db6b8bdfa07d2f1adfd286931ba24c8cb1a6
+ sourceRevisionDigest: sha256:1ca6e2fd1c68d5cd3338dac401965c77156ba3782f2d17320b50f6db906ff262
+ sourceBlobDigest: sha256:b399d4769fc1c9af512eafc9c6ac24c67e7e3bd76ebcf3ea0720fca33693ed6d
outLocation: /github/workspace/repo
workflow:
workflowVersion: 1.0.0
2 changes: 1 addition & 1 deletion README.md
@@ -24,7 +24,7 @@ terraform {
required_providers {
airbyte = {
source = "airbytehq/airbyte"
- version = "0.7.1"
+ version = "0.9.0"
}
}
}
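If exact pinning is not desired, Terraform's pessimistic constraint operator can track compatible releases instead of a fixed version; a hypothetical variant of the block above:

```terraform
terraform {
  required_providers {
    airbyte = {
      source = "airbytehq/airbyte"
      # "~> 0.9" accepts 0.9.x and later 0.x releases, but not 1.0
      version = "~> 0.9"
    }
  }
}
```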
12 changes: 11 additions & 1 deletion RELEASES.md
@@ -48,4 +48,14 @@ Based on:
### Generated
- [terraform v0.7.1] .
### Releases
- [Terraform v0.7.1] https://registry.terraform.io/providers/airbytehq/airbyte/0.7.1 - .

## 2024-08-12 23:29:25
### Changes
Based on:
- OpenAPI Doc
- Speakeasy CLI 1.363.1 (2.396.0) https://github.com/speakeasy-api/speakeasy
### Generated
- [terraform v0.9.0] .
### Releases
- [Terraform v0.9.0] https://registry.terraform.io/providers/airbytehq/airbyte/0.9.0 - .
35 changes: 35 additions & 0 deletions docs/data-sources/destination_langchain.md
@@ -0,0 +1,35 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "airbyte_destination_langchain Data Source - terraform-provider-airbyte"
subcategory: ""
description: |-
DestinationLangchain DataSource
---

# airbyte_destination_langchain (Data Source)

DestinationLangchain DataSource

## Example Usage

```terraform
data "airbyte_destination_langchain" "my_destination_langchain" {
destination_id = "...my_destination_id..."
}
```

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `destination_id` (String)

### Read-Only

- `configuration` (String) The values required to configure the destination. Parsed as JSON.
- `destination_type` (String)
- `name` (String)
- `workspace_id` (String)
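Because `configuration` is exposed as a JSON-encoded string, its fields can be decoded with Terraform's built-in `jsondecode` function before being referenced elsewhere. A minimal sketch (the output name is hypothetical):

```terraform
data "airbyte_destination_langchain" "my_destination_langchain" {
  destination_id = "...my_destination_id..."
}

# Decode the JSON-encoded configuration string into a Terraform object
# so that individual keys can be referenced in other expressions.
output "langchain_configuration" {
  value = jsondecode(data.airbyte_destination_langchain.my_destination_langchain.configuration)
}
```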


35 changes: 0 additions & 35 deletions docs/data-sources/source_datadog.md

This file was deleted.

@@ -1,19 +1,19 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
- page_title: "airbyte_source_clazar Data Source - terraform-provider-airbyte"
+ page_title: "airbyte_source_google_analytics_v4_service_account_only Data Source - terraform-provider-airbyte"
subcategory: ""
description: |-
- SourceClazar DataSource
+ SourceGoogleAnalyticsV4ServiceAccountOnly DataSource
---

- # airbyte_source_clazar (Data Source)
+ # airbyte_source_google_analytics_v4_service_account_only (Data Source)

- SourceClazar DataSource
+ SourceGoogleAnalyticsV4ServiceAccountOnly DataSource

## Example Usage

```terraform
- data "airbyte_source_clazar" "my_source_clazar" {
+ data "airbyte_source_google_analytics_v4_service_account_only" "my_source_googleanalyticsv4serviceaccountonly" {
source_id = "...my_source_id..."
}
```
@@ -1,19 +1,19 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
- page_title: "airbyte_source_goldcast Data Source - terraform-provider-airbyte"
+ page_title: "airbyte_source_punk_api Data Source - terraform-provider-airbyte"
subcategory: ""
description: |-
- SourceGoldcast DataSource
+ SourcePunkAPI DataSource
---

- # airbyte_source_goldcast (Data Source)
+ # airbyte_source_punk_api (Data Source)

- SourceGoldcast DataSource
+ SourcePunkAPI DataSource

## Example Usage

```terraform
- data "airbyte_source_goldcast" "my_source_goldcast" {
+ data "airbyte_source_punk_api" "my_source_punkapi" {
source_id = "...my_source_id..."
}
```
2 changes: 1 addition & 1 deletion docs/index.md
@@ -17,7 +17,7 @@ terraform {
required_providers {
airbyte = {
source = "airbytehq/airbyte"
- version = "0.7.1"
+ version = "0.9.0"
}
}
}
147 changes: 147 additions & 0 deletions docs/resources/destination_langchain.md
@@ -0,0 +1,147 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "airbyte_destination_langchain Resource - terraform-provider-airbyte"
subcategory: ""
description: |-
DestinationLangchain Resource
---

# airbyte_destination_langchain (Resource)

DestinationLangchain Resource

## Example Usage

```terraform
resource "airbyte_destination_langchain" "my_destination_langchain" {
configuration = {
embedding = {
fake = {}
}
indexing = {
chroma_local_persistance = {
collection_name = "...my_collection_name..."
destination_path = "/local/my_chroma_db"
}
}
processing = {
chunk_overlap = 7
chunk_size = 3
text_fields = [
"...",
]
}
}
definition_id = "a735a4e1-8012-43f0-976f-b78bf74fa22d"
name = "Jack Christiansen"
workspace_id = "1b5f134d-0007-4497-b4ae-87c30892ffb0"
}
```

<!-- schema generated by tfplugindocs -->
## Schema

### Required

- `configuration` (Attributes) (see [below for nested schema](#nestedatt--configuration))
- `name` (String) Name of the destination e.g. dev-mysql-instance.
- `workspace_id` (String)

### Optional

- `definition_id` (String) The UUID of the connector definition. One of configuration.destinationType or definitionId must be provided. Requires replacement if changed.

### Read-Only

- `destination_id` (String)
- `destination_type` (String)

<a id="nestedatt--configuration"></a>
### Nested Schema for `configuration`

Required:

- `embedding` (Attributes) Embedding configuration (see [below for nested schema](#nestedatt--configuration--embedding))
- `indexing` (Attributes) Indexing configuration (see [below for nested schema](#nestedatt--configuration--indexing))
- `processing` (Attributes) (see [below for nested schema](#nestedatt--configuration--processing))

<a id="nestedatt--configuration--embedding"></a>
### Nested Schema for `configuration.embedding`

Optional:

- `fake` (Attributes) Use a fake embedding made out of random vectors with 1536 embedding dimensions. This is useful for testing the data pipeline without incurring any costs. (see [below for nested schema](#nestedatt--configuration--embedding--fake))
- `open_ai` (Attributes) Use the OpenAI API to embed text. This option is using the text-embedding-ada-002 model with 1536 embedding dimensions. (see [below for nested schema](#nestedatt--configuration--embedding--open_ai))

<a id="nestedatt--configuration--embedding--fake"></a>
### Nested Schema for `configuration.embedding.fake`


<a id="nestedatt--configuration--embedding--open_ai"></a>
### Nested Schema for `configuration.embedding.open_ai`

Required:

- `openai_key` (String, Sensitive)



<a id="nestedatt--configuration--indexing"></a>
### Nested Schema for `configuration.indexing`

Optional:

- `chroma_local_persistance` (Attributes) Chroma is a popular vector store that can be used to store and retrieve embeddings. It will build its index in memory and persist it to disk by the end of the sync. (see [below for nested schema](#nestedatt--configuration--indexing--chroma_local_persistance))
- `doc_array_hnsw_search` (Attributes) DocArrayHnswSearch is a lightweight Document Index implementation provided by Docarray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite. (see [below for nested schema](#nestedatt--configuration--indexing--doc_array_hnsw_search))
- `pinecone` (Attributes) Pinecone is a popular vector store that can be used to store and retrieve embeddings. It is a managed service and can also be queried from outside of langchain. (see [below for nested schema](#nestedatt--configuration--indexing--pinecone))

<a id="nestedatt--configuration--indexing--chroma_local_persistance"></a>
### Nested Schema for `configuration.indexing.chroma_local_persistance`

Required:

- `destination_path` (String) Path to the directory where chroma files will be written. The files will be placed inside that local mount.

Optional:

- `collection_name` (String) Name of the collection to use. Default: "langchain"


<a id="nestedatt--configuration--indexing--doc_array_hnsw_search"></a>
### Nested Schema for `configuration.indexing.doc_array_hnsw_search`

Required:

- `destination_path` (String) Path to the directory where hnswlib and meta data files will be written. The files will be placed inside that local mount. All files in the specified destination directory will be deleted on each run.


<a id="nestedatt--configuration--indexing--pinecone"></a>
### Nested Schema for `configuration.indexing.pinecone`

Required:

- `index` (String) Pinecone index to use
- `pinecone_environment` (String) Pinecone environment to use
- `pinecone_key` (String, Sensitive)



<a id="nestedatt--configuration--processing"></a>
### Nested Schema for `configuration.processing`

Required:

- `chunk_size` (Number) Size of chunks in tokens to store in vector store (make sure it is not too big for the context if your LLM)
- `text_fields` (List of String) List of fields in the record that should be used to calculate the embedding. All other fields are passed along as meta fields. The field list is applied to all streams in the same way and non-existing fields are ignored. If none are defined, all fields are considered text fields. When specifying text fields, you can access nested fields in the record by using dot notation, e.g. `user.name` will access the `name` field in the `user` object. It's also possible to use wildcards to access all fields in an object, e.g. `users.*.name` will access all `names` fields in all entries of the `users` array.

Optional:

- `chunk_overlap` (Number) Size of overlap between chunks in tokens to store in vector store to better capture relevant context. Default: 0
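The dot-notation and wildcard rules for `text_fields` can be illustrated with a hypothetical `processing` block (the values are illustrative, not defaults):

```terraform
processing = {
  chunk_size    = 1000 # tokens per chunk; keep below the LLM context window
  chunk_overlap = 100  # tokens shared between adjacent chunks
  text_fields = [
    "title",        # top-level field
    "user.name",    # nested field via dot notation
    "users.*.name", # wildcard: the name field of every entry in the users array
  ]
}
```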

## Import

Import is supported using the following syntax:

```shell
terraform import airbyte_destination_langchain.my_airbyte_destination_langchain ""
```
16 changes: 8 additions & 8 deletions docs/resources/destination_milvus.md
@@ -30,14 +30,14 @@ resource "airbyte_destination_milvus" "my_destination_milvus" {
}
collection = "...my_collection..."
db = "...my_db..."
- host = "tcp://host.docker.internal:19530"
+ host = "tcp://my-local-milvus:19530"
text_field = "...my_text_field..."
vector_field = "...my_vector_field..."
}
- omit_raw_text = true
+ omit_raw_text = false
processing = {
- chunk_overlap = 6
- chunk_size = 4
+ chunk_overlap = 1
+ chunk_size = 10
field_name_mappings = [
{
from_field = "...my_from_field..."
@@ -52,14 +52,14 @@
]
text_splitter = {
by_markdown_header = {
- split_level = 2
+ split_level = 6
}
}
}
}
- definition_id = "5a4e1801-23f0-4d76-bb78-bf74fa22de12"
- name = "Jenny Braun"
- workspace_id = "f134d000-7497-474a-a87c-30892ffb0f41"
+ definition_id = "2248d601-2833-484b-987b-5cce36148543"
+ name = "Sylvia Smitham"
+ workspace_id = "3c5e509f-4525-421a-8478-78c254cd184f"
}
```

8 changes: 4 additions & 4 deletions docs/resources/destination_mongodb.md
@@ -18,7 +18,7 @@ resource "airbyte_destination_mongodb" "my_destination_mongodb" {
auth_type = {
login_password = {
password = "...my_password..."
- username = "Ubaldo12"
+ username = "Ryleigh43"
}
}
database = "...my_database..."
@@ -32,9 +32,9 @@
no_tunnel = {}
}
}
- definition_id = "48d60128-3384-4bd8-bb5c-ce3614854333"
- name = "Courtney Considine"
- workspace_id = "5e509f45-2521-4a04-b878-c254cd184fd1"
+ definition_id = "e75f1c50-c9ec-4767-87b0-6cf86fe4a6f8"
+ name = "Mr. Malcolm Lubowitz"
+ workspace_id = "d646f802-e7b2-4183-b2bc-4f6db7afdaca"
}
```


0 comments on commit cdfa5be
