From 2ae2914c19c6e17d7015de802082beb4f3ab8587 Mon Sep 17 00:00:00 2001 From: Jonathan Wihl Date: Wed, 28 Feb 2024 14:22:36 -0600 Subject: [PATCH] Wihl/docs reorg (#1395) * Added folder structure for DB capture docs * Merged Postgres docs for RDS and GCP; removed Postgres heroku docs * Removed duplicate MySQL docs * Removed duplicate SQL Server docs, fixed links * Altered folder structure for DB materializations * Reorganized materialization docs for consistency * Doc cleanup * Salesforce doc updates, updated relative paths to absolute * Fixing links * Updated relative links * Alloydb links updated * AlloyDb materialization links updated * Timescale links updated * Fixing broken relative links * Fixing broken links --- .../tutorials/continuous-materialized-view.md | 2 +- site/docs/guides/connect-network.md | 4 +- .../capture-connectors/MariaDB/MariaDB.md | 187 +++++------------- .../MariaDB/amazon-rds-mariadb.md | 8 +- .../capture-connectors/MySQL/MySQL.md | 19 +- .../MySQL/amazon-rds-mysql.md | 11 +- .../MySQL/google-cloud-sql-mysql.md | 11 +- .../PostgreSQL/PostgreSQL.md | 28 +-- .../PostgreSQL/amazon-rds-postgres.md | 20 +- .../Connectors/capture-connectors/README.md | 14 +- .../SQLServer/amazon-rds-sqlserver.md | 16 +- .../SQLServer/google-cloud-sql-sqlserver.md | 16 +- .../capture-connectors/SQLServer/sqlserver.md | 20 +- .../salesforce-historical-data.md} | 4 +- .../{ => Salesforce}/salesforce-real-time.md | 6 +- .../Salesforce/salesforce.md | 17 ++ .../Connectors/capture-connectors/alloydb.md | 2 +- .../MySQL/amazon-rds-mysql.md | 6 +- .../MySQL/google-cloud-sql-mysql.md | 8 +- .../materialization-connectors/MySQL/mysql.md | 19 +- .../PostgreSQL/PostgreSQL.md | 48 ++--- .../PostgreSQL/amazon-rds-postgres.md | 2 +- .../PostgreSQL/google-cloud-sql-postgres.md | 2 +- .../materialization-connectors/README.md | 7 +- .../SQLServer/sqlserver.md | 10 +- .../materialization-connectors/alloydb.md | 2 +- .../materialization-connectors/timescaledb.md | 2 +- 27 files changed, 207 insertions(+), 284 deletions(-) rename site/docs/reference/Connectors/capture-connectors/{salesforce.md => Salesforce/salesforce-historical-data.md} (97%) rename site/docs/reference/Connectors/capture-connectors/{ => Salesforce}/salesforce-real-time.md (94%) create mode 100644 site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce.md diff --git a/site/docs/getting-started/tutorials/continuous-materialized-view.md b/site/docs/getting-started/tutorials/continuous-materialized-view.md index 251da4ad15..7cb136d17e 100644 --- a/site/docs/getting-started/tutorials/continuous-materialized-view.md +++ b/site/docs/getting-started/tutorials/continuous-materialized-view.md @@ -22,7 +22,7 @@ a materialized view that updates continuously based on a real-time data feed. In that case, you'll need to [install flowctl locally](../../getting-started/installation.mdx#get-started-with-the-flow-cli). Note that the steps you'll need to take will be different. Refer to this [guide](../../guides/flowctl/create-derivation.md#create-a-derivation-locally) for help. -* A Postgres database set up to [allow connections from Flow](../../reference/Connectors/materialization-connectors/PostgreSQL.md#setup). +* A Postgres database set up to [allow connections from Flow](/reference/Connectors/materialization-connectors/PostgreSQL/#setup). Amazon RDS, Amazon Aurora, Google Cloud SQL, Azure Database for PostgreSQL, and self-hosted databases are supported. 
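+
+For a self-hosted database, a minimal sketch of this preparation might look like the following. The role name, password, and schema are placeholders, and the linked setup section remains the authoritative guide for each hosting type:
+
+```sql
+-- Hypothetical role for the Flow materialization connector to connect as.
+CREATE USER flow_materialize WITH PASSWORD 'secret';
+-- Allow the role to create and write tables in the target schema.
+GRANT CREATE, USAGE ON SCHEMA public TO flow_materialize;
+```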
## Introduction
 
diff --git a/site/docs/guides/connect-network.md b/site/docs/guides/connect-network.md
index d88fadd27f..71bf829e54 100644
--- a/site/docs/guides/connect-network.md
+++ b/site/docs/guides/connect-network.md
@@ -59,9 +59,7 @@ basic configuration options.
 
5. Configure your internal network to allow the SSH server to access your capture or materialization endpoint.
 
-6. Configure your network to expose the SSH server endpoint to external traffic. The method you use
-   depends on your organization's IT policies. Currently, Estuary doesn't provide a list of static IPs for
-   whitelisting purposes, but if you require one, [contact Estuary support](mailto:support@estuary.dev).
+6. Configure your network to expose the SSH server endpoint to external traffic. The method you use depends on your organization's IT policies. At a minimum, whitelist the Estuary IP address, `34.121.207.128`, so that connections from that address are allowed through your firewall or other security controls.
 
## Setup for AWS
 
diff --git a/site/docs/reference/Connectors/capture-connectors/MariaDB/MariaDB.md b/site/docs/reference/Connectors/capture-connectors/MariaDB/MariaDB.md
index 8dff6023c2..e7e2767685 100644
--- a/site/docs/reference/Connectors/capture-connectors/MariaDB/MariaDB.md
+++ b/site/docs/reference/Connectors/capture-connectors/MariaDB/MariaDB.md
@@ -29,10 +29,13 @@ To use this connector, you'll need a MariaDB database setup with the following.
must be set to an IANA zone name or numerical offset or the capture configured with a `timezone` to use by default.
 
:::tip Configuration Tip
-To configure this connector to capture data from databases hosted on your internal network, you must set up SSH tunneling. For more specific instructions on setup, see [configure connections with SSH tunneling](../../../../guides/connect-network/).
+To configure this connector to capture data from databases hosted on your internal network, you must set up SSH tunneling. For more specific instructions on setup, see [configure connections with SSH tunneling](/guides/connect-network/).
:::
 
-### Setup
+## Setup
+
+### Self-Hosted MariaDB
+
To meet these requirements, do the following:
 
1. Create the watermarks table. This table can have any name and be in any database, so long as the capture's `config.json` file is modified accordingly.
@@ -60,6 +63,53 @@ SET PERSIST binlog_expire_logs_seconds = 2592000;
SET PERSIST time_zone = '-05:00'
```
 
+### Azure Database for MariaDB
+
+You can use this connector for Azure Database for MariaDB instances using the following setup instructions.
+
+1. Allow connections to the database from the Estuary Flow IP address.
+
+   1. Create a new [firewall rule](https://learn.microsoft.com/en-us/azure/mariadb/howto-manage-firewall-portal)
+   that grants access to the IP address `34.121.207.128`.
+
+   :::info
+   Alternatively, you can allow secure connections via SSH tunneling. To do so:
+   * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/)
+   * When you configure your connector as described in the [configuration](#configuration) section above,
+   include the additional `networkTunnel` configuration to enable the SSH tunnel.
+   See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks)
+   for additional details and a sample.
+   :::
+
+2. Set the `binlog_expire_logs_seconds` [server parameter](https://learn.microsoft.com/en-us/azure/mariadb/howto-server-parameters#configure-server-parameters)
+to `2592000`.
+
+3. Using your preferred MariaDB client, create the watermarks table.
+
+:::tip
+Your username must be specified in the format `username@servername`.
+:::
+
+```sql
+CREATE DATABASE IF NOT EXISTS flow;
+CREATE TABLE IF NOT EXISTS flow.watermarks (slot INTEGER PRIMARY KEY, watermark TEXT);
+```
+
+4. Create the `flow_capture` user with replication permission, the ability to read all tables, and the ability to read and write the watermarks table.
+
+   The `SELECT` permission can be restricted to just the tables that need to be
+   captured, but automatic discovery requires `information_schema` access as well.
+```sql
+CREATE USER IF NOT EXISTS flow_capture
+  IDENTIFIED BY 'secret';
+GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'flow_capture';
+GRANT SELECT ON *.* TO 'flow_capture';
+GRANT INSERT, UPDATE, DELETE ON flow.watermarks TO 'flow_capture';
+```
+
+5. Note the instance's host under Server name, and the port under Connection Strings (usually `3306`).
+Together, you'll use the host:port as the `address` property when you configure the connector.
+
### Setting the MariaDB time zone
 
MariaDB's [`time_zone` server system variable](https://mariadb.com/kb/en/server-system-variables/#system_time_zone) is set to `SYSTEM` by default.
@@ -97,7 +147,7 @@ In this case, you may turn of backfilling on a per-table basis. See [properties]
 
## Configuration
 
You configure connectors either in the Flow web app, or by directly editing the catalog specification file.
-See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MariaDB source connector.
+See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MariaDB source connector.
 
### Properties
 
@@ -154,136 +204,7 @@ captures:
 
Your capture definition will likely be more complex, with additional bindings for each table in the source database.
 
-[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures)
-
-## MariaDB on managed cloud platforms
-
-In addition to standard MariaDB, this connector supports cloud-based MariaDB instances on certain platforms.
-
-### Amazon RDS
-
-You can use this connector for MariaDB instances on Amazon RDS using the following setup instructions.
-
-Estuary recommends creating a [read replica](https://aws.amazon.com/rds/features/read-replicas/)
-in RDS for use with Flow; however, it's not required.
-You're able to apply the connector directly to the primary instance if you'd like.
-
-#### Setup
-
-1. Allow connections to the database from the Estuary Flow IP address.
-
-   1. [Modify the database](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html), setting **Public accessibility** to **Yes**.
-
-   2. Edit the VPC security group associated with your database, or create a new VPC security group and associate it with the database.
-   Refer to the [steps in the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create).
-   Create a new inbound rule and a new outbound rule that allow all traffic from the IP address `34.121.207.128`.
- - :::info - Alternatively, you can allow secure connections via SSH tunneling. To do so: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, - including the additional `networkTunnel` configuration to enable the SSH tunnel. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) - for additional details and a sample. - ::: - -2. Create a RDS parameter group to enable replication in MariaDB. - - 1. [Create a parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Creating). - Create a unique name and description and set the following properties: - * **Family**: mariadb10.6 - * **Type**: DB Parameter group - - 2. [Modify the new parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Modifying) and update the following parameters: - * binlog_format: ROW - * binlog_row_metadata: FULL - * read_only: 0 - - 3. If using the primary instance (not recommended), [associate the parameter group](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithDBInstanceParamGroups.html#USER_WorkingWithParamGroups.Associating) - with the database and set [Backup Retention Period](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithAutomatedBackups.html#USER_WorkingWithAutomatedBackups.Enabling) to 7 days. - Reboot the database to allow the changes to take effect. - -3. Create a read replica with the new parameter group applied (recommended). - - 1. [Create a read replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create) - of your MariaDB database. - - 2. [Modify the replica](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html) - and set the following: - * **DB parameter group**: choose the parameter group you created previously - * **Backup retention period**: 7 days - * **Public access**: Publicly accessible - - 3. Reboot the replica to allow the changes to take effect. - -4. Switch to your MariaDB client. Run the following commands to create a new user for the capture with appropriate permissions, -and set up the watermarks table: - -```sql -CREATE DATABASE IF NOT EXISTS flow; -CREATE TABLE IF NOT EXISTS flow.watermarks (slot INTEGER PRIMARY KEY, watermark TEXT); -CREATE USER IF NOT EXISTS flow_capture - IDENTIFIED BY 'secret' -GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'flow_capture'; -GRANT SELECT ON *.* TO 'flow_capture'; -GRANT INSERT, UPDATE, DELETE ON flow.watermarks TO 'flow_capture'; -``` - -5. Run the following command to set the binary log retention to 7 days, the maximum value which RDS MariaDB permits: -```sql -CALL mysql.rds_set_configuration('binlog retention hours', 168); -``` - -6. In the [RDS console](https://console.aws.amazon.com/rds/), note the instance's Endpoint and Port. You'll need these for the `address` property when you configure the connector. - -### Azure Database for MariaDB - -You can use this connector for MariaDB instances on Azure Database for MariaDB using the following setup instructions. - -#### Setup - -1. Allow connections to the database from the Estuary Flow IP address. - - 1. 
Create a new [firewall rule](https://learn.microsoft.com/en-us/azure/mariadb/howto-manage-firewall-portal) - that grants access to the IP address `34.121.207.128`. - - :::info - Alternatively, you can allow secure connections via SSH tunneling. To do so: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, - including the additional `networkTunnel` configuration to enable the SSH tunnel. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) - for additional details and a sample. - ::: - -2. Set the `binlog_expire_logs_seconds` [server perameter](https://learn.microsoft.com/en-us/azure/mariadb/howto-server-parameters#configure-server-parameters) -to `2592000`. - -3. Using your preferred MariaDB client, create the watermarks table. - -:::tip -Your username must be specified in the format `username@servername`. -::: - -```sql -CREATE DATABASE IF NOT EXISTS flow; -CREATE TABLE IF NOT EXISTS flow.watermarks (slot INTEGER PRIMARY KEY, watermark TEXT); -``` - -4. Create the `flow_capture` user with replication permission, the ability to read all tables, and the ability to read and write the watermarks table. - - The `SELECT` permission can be restricted to just the tables that need to be - captured, but automatic discovery requires `information_schema` access as well. -```sql -CREATE USER IF NOT EXISTS flow_capture - IDENTIFIED BY 'secret' -GRANT REPLICATION CLIENT, REPLICATION SLAVE ON *.* TO 'flow_capture'; -GRANT SELECT ON *.* TO 'flow_capture'; -GRANT INSERT, UPDATE, DELETE ON flow.watermarks TO 'flow_capture'; -``` - -4. Note the instance's host under Server name, and the port under Connection Strings (usually `3306`). -Together, you'll use the host:port as the `address` property when you configure the connector. +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## Troubleshooting Capture Errors diff --git a/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md b/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md index a05fa52623..e5556c61eb 100644 --- a/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md +++ b/site/docs/reference/Connectors/capture-connectors/MariaDB/amazon-rds-mariadb.md @@ -40,10 +40,10 @@ To use this connector, you'll need a MariaDB database setup with the following. :::info Alternatively, you can allow secure connections via SSH tunneling. To do so: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) + See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. ::: @@ -109,7 +109,7 @@ In this case, you may turn of backfilling on a per-table basis. See [properties] ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. 
-See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MariaDB source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MariaDB source connector. ### Properties @@ -166,7 +166,7 @@ captures: Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## Troubleshooting Capture Errors diff --git a/site/docs/reference/Connectors/capture-connectors/MySQL/MySQL.md b/site/docs/reference/Connectors/capture-connectors/MySQL/MySQL.md index ed1284e755..54307781b2 100644 --- a/site/docs/reference/Connectors/capture-connectors/MySQL/MySQL.md +++ b/site/docs/reference/Connectors/capture-connectors/MySQL/MySQL.md @@ -38,7 +38,7 @@ To use this connector, you'll need a MySQL database setup with the following. must be set to an IANA zone name or numerical offset or the capture configured with a `timezone` to use by default. :::tip Configuration Tip -To configure this connector to capture data from databases hosted on your internal network, you must set up SSH tunneling. For more specific instructions on setup, see [configure connections with SSH tunneling](../../../../guides/connect-network/). +To configure this connector to capture data from databases hosted on your internal network, you must set up SSH tunneling. For more specific instructions on setup, see [configure connections with SSH tunneling](/guides/connect-network/). ::: ## Setup @@ -91,8 +91,8 @@ For each step, take note of which entity you're working with. * Edit the VPC security group associated with your instance, or create a new VPC security group and associate it with the instance as described in [the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create). Create a new inbound rule and a new outbound rule that allow all traffic from the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. Create a RDS parameter group to enable replication on your Aurora DB cluster. 
@@ -142,8 +142,8 @@ CALL mysql.rds_set_configuration('binlog retention hours', 168); * Create a new [firewall rule](https://docs.microsoft.com/en-us/azure/mysql/flexible-server/how-to-manage-firewall-portal#create-a-firewall-rule-after-server-is-created) that grants access to the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. Set the `binlog_expire_logs_seconds` [server perameter](https://docs.microsoft.com/en-us/azure/mysql/single-server/concepts-server-parameters#configurable-server-parameters) to `2592000`. @@ -218,7 +218,8 @@ In this case, you may turn of backfilling on a per-table basis. See [properties] ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MySQL source connector. + +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MySQL source connector. ### Properties @@ -275,7 +276,8 @@ captures: Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) + ## Troubleshooting Capture Errors @@ -317,4 +319,5 @@ The `"binlog retention period is too short"` error should normally be fixed by s ### Empty Collection Key -Every Flow collection must declare a [key](../../../../concepts/collections.md#keys) which is used to group its documents. When testing your capture, if you encounter an error indicating collection key cannot be empty, you will need to either add a key to the table in your source, or manually edit the generated specification and specify keys for the collection before publishing to the catalog as documented [here](../../../../concepts/collections.md#empty-keys). +Every Flow collection must declare a [key](/concepts/collections.md#keys) which is used to group its documents. When testing your capture, if you encounter an error indicating collection key cannot be empty, you will need to either add a key to the table in your source, or manually edit the generated specification and specify keys for the collection before publishing to the catalog as documented [here](/concepts/collections.md#empty-keys). 
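+
+If you choose to add a key in the source database instead, a minimal sketch (assuming a hypothetical table `app.events` whose `id` column is already unique and non-null) looks like this:
+
+```sql
+-- Promote the existing unique, non-null id column to a primary key
+-- so discovery can use it as the collection key.
+ALTER TABLE app.events ADD PRIMARY KEY (id);
+```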
+ diff --git a/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md b/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md index 9bc5c392f0..56d74d2dc0 100644 --- a/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md +++ b/site/docs/reference/Connectors/capture-connectors/MySQL/amazon-rds-mysql.md @@ -34,8 +34,8 @@ To use this connector, you'll need a MySQL database setup with the following. * Edit the VPC security group associated with your database, or create a new VPC security group and associate it with the database as described in [the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create). Create a new inbound rule and a new outbound rule that allow all traffic from the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. Create a RDS parameter group to enable replication in MySQL. @@ -127,7 +127,7 @@ In this case, you may turn of backfilling on a per-table basis. See [properties] ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MySQL source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MySQL source connector. ### Properties @@ -184,7 +184,7 @@ captures: Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## Troubleshooting Capture Errors @@ -226,4 +226,5 @@ The `"binlog retention period is too short"` error should normally be fixed by s ### Empty Collection Key -Every Flow collection must declare a [key](../../../../concepts/collections.md#keys) which is used to group its documents. When testing your capture, if you encounter an error indicating collection key cannot be empty, you will need to either add a key to the table in your source, or manually edit the generated specification and specify keys for the collection before publishing to the catalog as documented [here](../../../../concepts/collections.md#empty-keys). 
+Every Flow collection must declare a [key](/concepts/collections.md#keys) which is used to group its documents. When testing your capture, if you encounter an error indicating collection key cannot be empty, you will need to either add a key to the table in your source, or manually edit the generated specification and specify keys for the collection before publishing to the catalog as documented [here](/concepts/collections.md#empty-keys). + diff --git a/site/docs/reference/Connectors/capture-connectors/MySQL/google-cloud-sql-mysql.md b/site/docs/reference/Connectors/capture-connectors/MySQL/google-cloud-sql-mysql.md index e942626e35..b49d36a237 100644 --- a/site/docs/reference/Connectors/capture-connectors/MySQL/google-cloud-sql-mysql.md +++ b/site/docs/reference/Connectors/capture-connectors/MySQL/google-cloud-sql-mysql.md @@ -33,8 +33,8 @@ To use this connector, you'll need a MySQL database setup with the following. * [Enable public IP on your database](https://cloud.google.com/sql/docs/mysql/configure-ip#add) and add `34.121.207.128` as an authorized IP address. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. Set the instance's `binlog_expire_logs_seconds` [flag](https://cloud.google.com/sql/docs/mysql/flags?_ga=2.8077298.-1359189752.1655241239&_gac=1.226418280.1655849730.Cj0KCQjw2MWVBhCQARIsAIjbwoOczKklaVaykkUiCMZ4n3_jVtsInpmlugWN92zx6rL5i7zTxm3AALIaAv6nEALw_wcB) to `2592000`. @@ -102,7 +102,7 @@ In this case, you may turn of backfilling on a per-table basis. See [properties] ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MySQL source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the MySQL source connector. ### Properties @@ -159,7 +159,7 @@ captures: Your capture definition will likely be more complex, with additional bindings for each table in the source database. 
-[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## Troubleshooting Capture Errors @@ -201,4 +201,5 @@ The `"binlog retention period is too short"` error should normally be fixed by s ### Empty Collection Key -Every Flow collection must declare a [key](../../../../concepts/collections.md#keys) which is used to group its documents. When testing your capture, if you encounter an error indicating collection key cannot be empty, you will need to either add a key to the table in your source, or manually edit the generated specification and specify keys for the collection before publishing to the catalog as documented [here](../../../../concepts/collections.md#empty-keys). +Every Flow collection must declare a [key](/concepts/collections.md#keys) which is used to group its documents. When testing your capture, if you encounter an error indicating collection key cannot be empty, you will need to either add a key to the table in your source, or manually edit the generated specification and specify keys for the collection before publishing to the catalog as documented [here](/concepts/collections.md#empty-keys). + diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md index c70ef5c057..c0d1ea6cf1 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/PostgreSQL.md @@ -36,7 +36,7 @@ You'll need a PostgreSQL database setup with the following: * In more restricted setups, this must be created manually, but can be created automatically if the connector has suitable permissions. :::tip Configuration Tip -To configure this connector to capture data from databases hosted on your internal network, you must set up SSH tunneling. For more specific instructions on setup, see [configure connections with SSH tunneling](../../../../guides/connect-network/). +To configure this connector to capture data from databases hosted on your internal network, you must set up SSH tunneling. For more specific instructions on setup, see [configure connections with SSH tunneling](/guides/connect-network/). ::: ## Setup @@ -115,8 +115,8 @@ For each step, take note of which entity you're working with. * Edit the VPC security group associated with your instance, or create a new VPC security group and associate it with the instance as described in [the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create). Create a new inbound rule and a new outbound rule that allow all traffic from the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 
+ * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. Enable logical replication on your Aurora DB cluster. @@ -161,8 +161,8 @@ and set up the watermarks table and publication. * Create a new [firewall rule](https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-manage-firewall-portal#create-a-firewall-rule-after-server-is-created) that grants access to the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. In your Azure PostgreSQL instance's support parameters, [set replication to logical](https://docs.microsoft.com/en-us/azure/postgresql/single-server/concepts-logical#set-up-your-server) to enable logical replication. @@ -223,7 +223,8 @@ In this case, you may turn of backfilling on a per-table basis. See [properties] ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the PostgreSQL source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the PostgreSQL source connector. + ### Properties @@ -279,7 +280,8 @@ captures: ``` Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) + ## TOASTed values @@ -293,21 +295,21 @@ If a change event occurs on a row that contains a TOASTed value, _but the TOASTe As a result, the connector emits a row update with the a value omitted, which might cause unexpected results in downstream catalog tasks if adjustments are not made. -The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](../../../../concepts/connectors.md#flowctl-discover) -or use the [Flow UI](../../../../concepts/connectors.md#flow-ui) to create your capture. 
-It uses [merge](../../../reduction-strategies/merge.md) [reductions](../../../../concepts/schemas.md#reductions) +The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](/concepts/connectors.md#flowctl-discover) +or use the [Flow UI](/concepts/connectors.md#flow-ui) to create your capture. +It uses [merge](/reference/reduction-strategies/merge.md) [reductions](/concepts/schemas.md#reductions) to fill in the previous known TOASTed value in cases when that value is omitted from a row update. However, due to the event-driven nature of certain tasks in Flow, it's still possible to see unexpected results in your data flow, specifically: -- When you materialize the captured data to another system using a connector that requires [delta updates](../../../../concepts/materialization.md#delta-updates) -- When you perform a [derivation](../../../../concepts/derivations.md) that uses TOASTed values +- When you materialize the captured data to another system using a connector that requires [delta updates](/concepts/materialization.md#delta-updates) +- When you perform a [derivation](/concepts/derivations.md) that uses TOASTed values ### Troubleshooting If you encounter an issue that you suspect is due to TOASTed values, try the following: -- Ensure your collection's schema is using the merge [reduction strategy](../../../../concepts/schemas.md#reduce-annotations). +- Ensure your collection's schema is using the merge [reduction strategy](/concepts/schemas.md#reduce-annotations). - [Set REPLICA IDENTITY to FULL](https://www.postgresql.org/docs/9.4/sql-altertable.html) for the table. This circumvents the problem by forcing the WAL to record all values regardless of size. However, this can have performance impacts on your database and must be carefully evaluated. - [Contact Estuary support](mailto:support@estuary.dev) for assistance. diff --git a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md index a4b9612fd4..e77b9d9157 100644 --- a/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md +++ b/site/docs/reference/Connectors/capture-connectors/PostgreSQL/amazon-rds-postgres.md @@ -38,8 +38,8 @@ You'll need a PostgreSQL database setup with the following: * Edit the VPC security group associated with your database, or create a new VPC security group and associate it with the database as described in [the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create).Create a new inbound rule and a new outbound rule that allow all traffic from the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 
+ * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. Enable logical replication on your RDS PostgreSQL instance. @@ -87,7 +87,7 @@ In this case, you may turn of backfilling on a per-table basis. See [properties] ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the PostgreSQL source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the PostgreSQL source connector. ### Properties @@ -143,7 +143,7 @@ captures: ``` Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## TOASTed values @@ -157,21 +157,21 @@ If a change event occurs on a row that contains a TOASTed value, _but the TOASTe As a result, the connector emits a row update with the a value omitted, which might cause unexpected results in downstream catalog tasks if adjustments are not made. -The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](../../../../concepts/connectors.md#flowctl-discover) -or use the [Flow UI](../../../../concepts/connectors.md#flow-ui) to create your capture. -It uses [merge](../../../reduction-strategies/merge.md) [reductions](../../../../concepts/schemas.md#reductions) +The PostgreSQL connector handles TOASTed values for you when you follow the [standard discovery workflow](/concepts/connectors.md#flowctl-discover) +or use the [Flow UI](/concepts/connectors.md#flow-ui) to create your capture. +It uses [merge](/reference/reduction-strategies/merge.md) [reductions](/concepts/schemas.md#reductions) to fill in the previous known TOASTed value in cases when that value is omitted from a row update. However, due to the event-driven nature of certain tasks in Flow, it's still possible to see unexpected results in your data flow, specifically: -- When you materialize the captured data to another system using a connector that requires [delta updates](../../../../concepts/materialization.md#delta-updates) -- When you perform a [derivation](../../../../concepts/derivations.md) that uses TOASTed values +- When you materialize the captured data to another system using a connector that requires [delta updates](/concepts/materialization.md#delta-updates) +- When you perform a [derivation](/concepts/derivations.md) that uses TOASTed values ### Troubleshooting If you encounter an issue that you suspect is due to TOASTed values, try the following: -- Ensure your collection's schema is using the merge [reduction strategy](../../../../concepts/schemas.md#reduce-annotations). 
+- Ensure your collection's schema is using the merge [reduction strategy](/concepts/schemas.md#reduce-annotations). - [Set REPLICA IDENTITY to FULL](https://www.postgresql.org/docs/9.4/sql-altertable.html) for the table. This circumvents the problem by forcing the WAL to record all values regardless of size. However, this can have performance impacts on your database and must be carefully evaluated. - [Contact Estuary support](mailto:support@estuary.dev) for assistance. diff --git a/site/docs/reference/Connectors/capture-connectors/README.md b/site/docs/reference/Connectors/capture-connectors/README.md index 64db87c6c9..8f17486b0b 100644 --- a/site/docs/reference/Connectors/capture-connectors/README.md +++ b/site/docs/reference/Connectors/capture-connectors/README.md @@ -48,22 +48,22 @@ All Estuary connectors capture data in real time, as it appears in the source sy * [Configuration](./http-ingest.md) * Package - ghcr.io/estuary/source-http-ingest:dev * MariaDB - * [Configuration](./mariadb.md) + * [Configuration](./MariaDB/) * Package - ghcr.io/estuary/source-mariadb:dev * Microsoft SQL Server - * [Configuration](./sqlserver.md) + * [Configuration](./SQLServer/) * Package - ghcr.io/estuary/source-sqlserver:dev * MongoDB - * [Configuration](./mongodb.md) + * [Configuration](./mongodb/) * Package - ghcr.io/estuary/source-mongodb:dev * MySQL - * [Configuration](./MySQL.md) + * [Configuration](./MySQL/) * Package - ghcr.io/estuary/source-mysql:dev * PostgreSQL - * [Configuration](./PostgreSQL.md) + * [Configuration](./PostgreSQL/) * Package — ghcr.io/estuary/source-postgres:dev * Salesforce (for real-time data) - * [Configuration](./salesforce-real-time.md) + * [Configuration](./Salesforce/) * Package - ghcr.io/estuary/source-salesforce-next:dev * SFTP * [Configuration](./sftp.md) @@ -181,7 +181,7 @@ The versions made available in Flow have been adapted for compatibility. * [Configuration](./recharge.md) * Package - ghcr.io/estuary/source-recharge:dev * Salesforce (For historical data) - * [Configuration](./salesforce.md) + * [Configuration](./Salesforce/) * Package - ghcr.io/estuary/source-salesforce:dev * SendGrid * [Configuration](./sendgrid.md) diff --git a/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md b/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md index d1d2a48a46..951f504240 100644 --- a/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md +++ b/site/docs/reference/Connectors/capture-connectors/SQLServer/amazon-rds-sqlserver.md @@ -48,8 +48,8 @@ To meet these requirements, follow the steps for your hosting type. * Edit the VPC security group associated with your database, or create a new VPC security group and associate it with the database as described in [the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create).Create a new inbound rule and a new outbound rule that allow all traffic from the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. 
See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. In your SQL client, connect to your instance as the default `sqlserver` user and issue the following commands. @@ -77,7 +77,7 @@ EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'flow_waterm ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the SQL Server source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the SQL Server source connector. ### Properties @@ -124,16 +124,16 @@ captures: ``` Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## Specifying Flow collection keys -Every Flow collection must have a [key](../../../../concepts/collections.md#keys). +Every Flow collection must have a [key](/concepts/collections.md#keys). As long as your SQL Server tables have a primary key specified, the connector will set the corresponding collection's key accordingly. In cases where a SQL Server table you want to capture doesn't have a primary key, -you can manually add it to the collection definition during the [capture creation workflow](../../../../guides/create-dataflow.md#create-a-capture). +you can manually add it to the collection definition during the [capture creation workflow](/guides/create-dataflow.md#create-a-capture). 1. After you input the endpoint configuration and click **Next**, the tables in your database have been mapped to Flow collections. @@ -141,8 +141,8 @@ Click each collection's **Specification** tab and identify a collection where `" 2. Click inside the empty key value in the editor and input the name of column in the table to use as the key, formatted as a JSON pointer. For example `"key": ["/foo"],` - Make sure the key field is required, not nullable, and of an [allowed type](../../../../concepts/collections.md#schema-restrictions). - Make any other necessary changes to the [collection specification](../../../../concepts/collections.md#specification) to accommodate this. + Make sure the key field is required, not nullable, and of an [allowed type](/concepts/collections.md#schema-restrictions). + Make any other necessary changes to the [collection specification](/concepts/collections.md#specification) to accommodate this. 3. Repeat with other missing collection keys, if necessary. 
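+
+Alternatively, you can add a primary key to the source table itself and let discovery derive the collection key. A minimal sketch, assuming a hypothetical table `dbo.events` whose `id` column is already unique and `NOT NULL`:
+
+```sql
+-- Make the existing id column the table's primary key so the
+-- connector can infer the collection key automatically.
+ALTER TABLE dbo.events
+  ADD CONSTRAINT PK_events PRIMARY KEY CLUSTERED (id);
+```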
diff --git a/site/docs/reference/Connectors/capture-connectors/SQLServer/google-cloud-sql-sqlserver.md b/site/docs/reference/Connectors/capture-connectors/SQLServer/google-cloud-sql-sqlserver.md index dc59f7317d..5e3fb15073 100644 --- a/site/docs/reference/Connectors/capture-connectors/SQLServer/google-cloud-sql-sqlserver.md +++ b/site/docs/reference/Connectors/capture-connectors/SQLServer/google-cloud-sql-sqlserver.md @@ -38,8 +38,8 @@ on the database and the individual tables to be captured. * [Enable public IP on your database](https://cloud.google.com/sql/docs/sqlserver/configure-ip#add) and add `34.121.207.128` as an authorized IP address. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. In your SQL client, connect to your instance as the default `sqlserver` user and issue the following commands. @@ -69,7 +69,7 @@ Together, you'll use the host:port as the `address` property when you configure ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the SQL Server source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the SQL Server source connector. ### Properties @@ -116,16 +116,16 @@ captures: ``` Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## Specifying Flow collection keys -Every Flow collection must have a [key](../../../../concepts/collections.md#keys). +Every Flow collection must have a [key](/concepts/collections.md#keys). As long as your SQL Server tables have a primary key specified, the connector will set the corresponding collection's key accordingly. In cases where a SQL Server table you want to capture doesn't have a primary key, -you can manually add it to the collection definition during the [capture creation workflow](../../../../guides/create-dataflow.md#create-a-capture). +you can manually add it to the collection definition during the [capture creation workflow](/guides/create-dataflow.md#create-a-capture). 1. After you input the endpoint configuration and click **Next**, the tables in your database have been mapped to Flow collections. 
@@ -133,8 +133,8 @@ Click each collection's **Specification** tab and identify a collection where `" 2. Click inside the empty key value in the editor and input the name of column in the table to use as the key, formatted as a JSON pointer. For example `"key": ["/foo"],` - Make sure the key field is required, not nullable, and of an [allowed type](../../../../concepts/collections.md#schema-restrictions). - Make any other necessary changes to the [collection specification](../../../../concepts/collections.md#specification) to accommodate this. + Make sure the key field is required, not nullable, and of an [allowed type](/concepts/collections.md#schema-restrictions). + Make any other necessary changes to the [collection specification](/concepts/collections.md#specification) to accommodate this. 3. Repeat with other missing collection keys, if necessary. diff --git a/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md b/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md index 0b4bf65080..57f19b9fc8 100644 --- a/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md +++ b/site/docs/reference/Connectors/capture-connectors/SQLServer/sqlserver.md @@ -71,11 +71,11 @@ EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'flow_waterm ``` 2. Allow secure connection to Estuary Flow from your hosting environment. Either: - * Set up an [SSH server for tunneling](../../../../../guides/connect-network/). + * Set up an [SSH server for tunneling](/guides/connect-network/). When you fill out the [endpoint configuration](#endpoint), include the additional `networkTunnel` configuration to enable the SSH tunnel. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) + See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. * Whitelist the Estuary IP address, `34.121.207.128` in your firewall rules. @@ -88,8 +88,8 @@ EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'flow_waterm * Create a new [firewall rule](https://learn.microsoft.com/en-us/azure/azure-sql/database/firewall-configure?view=azuresql#use-the-azure-portal-to-manage-server-level-ip-firewall-rules) that grants access to the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. In your SQL client, connect to your instance as the default `sqlserver` user and issue the following commands. 
@@ -122,7 +122,7 @@ EXEC sys.sp_cdc_enable_table @source_schema = 'dbo', @source_name = 'flow_waterm ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the SQL Server source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the SQL Server source connector. ### Properties @@ -169,16 +169,16 @@ captures: ``` Your capture definition will likely be more complex, with additional bindings for each table in the source database. -[Learn more about capture definitions.](../../../../concepts/captures.md#pull-captures) +[Learn more about capture definitions.](/concepts/captures.md#pull-captures) ## Specifying Flow collection keys -Every Flow collection must have a [key](../../../../concepts/collections.md#keys). +Every Flow collection must have a [key](/concepts/collections.md#keys). As long as your SQL Server tables have a primary key specified, the connector will set the corresponding collection's key accordingly. In cases where a SQL Server table you want to capture doesn't have a primary key, -you can manually add it to the collection definition during the [capture creation workflow](../../../../guides/create-dataflow.md#create-a-capture). +you can manually add it to the collection definition during the [capture creation workflow](/guides/create-dataflow.md#create-a-capture). 1. After you input the endpoint configuration and click **Next**, the tables in your database have been mapped to Flow collections. @@ -186,8 +186,8 @@ Click each collection's **Specification** tab and identify a collection where `" 2. Click inside the empty key value in the editor and input the name of the column in the table to use as the key, formatted as a JSON pointer. For example `"key": ["/foo"],` - Make sure the key field is required, not nullable, and of an [allowed type](../../../../concepts/collections.md#schema-restrictions). - Make any other necessary changes to the [collection specification](../../../../concepts/collections.md#specification) to accommodate this. + Make sure the key field is required, not nullable, and of an [allowed type](/concepts/collections.md#schema-restrictions). + Make any other necessary changes to the [collection specification](/concepts/collections.md#specification) to accommodate this. 3. Repeat with other missing collection keys, if necessary. diff --git a/site/docs/reference/Connectors/capture-connectors/salesforce.md b/site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce-historical-data.md similarity index 97% rename from site/docs/reference/Connectors/capture-connectors/salesforce.md rename to site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce-historical-data.md index fbe08416ae..62e6f775c6 100644 --- a/site/docs/reference/Connectors/capture-connectors/salesforce.md +++ b/site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce-historical-data.md @@ -48,7 +48,7 @@ There are several ways to control this: * Apply a filter when you [configure](#endpoint) the connector. If you don't apply a filter, the connector captures all objects available to the user.
-* During [capture creation in the web application](../../../guides/create-dataflow.md#create-a-capture), +* During [capture creation in the web application](/guides/create-dataflow.md#create-a-capture), remove the bindings for objects you don't want to capture. ## Prerequisites @@ -102,7 +102,7 @@ Through this process, you'll obtain the client ID, client secret, and refresh to ## Configuration You configure connectors either in the Flow web app, or by directly editing the Flow specification file. -See [connectors](../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the batch Salesforce source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the batch Salesforce source connector. ### Formula Fields diff --git a/site/docs/reference/Connectors/capture-connectors/salesforce-real-time.md b/site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce-real-time.md similarity index 94% rename from site/docs/reference/Connectors/capture-connectors/salesforce-real-time.md rename to site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce-real-time.md index 589664c422..3fcfabed9d 100644 --- a/site/docs/reference/Connectors/capture-connectors/salesforce-real-time.md +++ b/site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce-real-time.md @@ -2,7 +2,7 @@ This connector captures data from Salesforce objects into Flow collections in real time via the [Salesforce PushTopic API](https://developer.salesforce.com/docs/atlas.en-us.api_streaming.meta/api_streaming/pushtopic_events_intro.htm). -[A separate connector is available for syncing historical Salesforce data](./salesforce.md). +[A separate connector is available for syncing historical Salesforce data](./salesforce-historical-data.md). For help using both connectors in parallel, [contact your Estuary account manager](mailto:info@estuary.dev). This connector is available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/source-salesforce-next:dev`](https://ghcr.io/estuary/source-salesforce-next:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. @@ -38,7 +38,7 @@ There are several ways to control this: * Create a [dedicated Salesforce user](#create-a-read-only-salesforce-user) with access only to the objects you'd like to capture. -* During [capture creation in the web application](../../../guides/create-dataflow.md#create-a-capture), +* During [capture creation in the web application](/guides/create-dataflow.md#create-a-capture), remove the bindings for objects you don't want to capture. ## Prerequisites @@ -92,7 +92,7 @@ Through this process, you'll obtain the client ID, client secret, and refresh to ## Configuration You configure connectors either in the Flow web app, or by directly editing the catalog specification file. -See [connectors](../../../concepts/connectors.md#using-connectors) to learn more about using connectors. The values and specification sample below provide configuration details specific to the real-time Salesforce source connector. +See [connectors](/concepts/connectors.md#using-connectors) to learn more about using connectors. 
The values and specification sample below provide configuration details specific to the real-time Salesforce source connector. ### Properties diff --git a/site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce.md b/site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce.md new file mode 100644 index 0000000000..4a7ceffc4c --- /dev/null +++ b/site/docs/reference/Connectors/capture-connectors/Salesforce/salesforce.md @@ -0,0 +1,17 @@ +# Salesforce + +## Overview +Estuary offers two Salesforce capture connectors that sync data from Salesforce objects into Flow collections: a historical connector and a real-time connector. + +## Salesforce Historical Data +The [historical data connector](./salesforce-historical-data.md) captures data from Salesforce objects into Flow collections in batches, making it well suited to retrieving and syncing large volumes of existing Salesforce data. + +## Salesforce Real Time Data +The [real-time connector](./salesforce-real-time.md) captures data from Salesforce objects into Flow collections as changes occur. It uses the Salesforce PushTopic API to stream data changes from Salesforce, so updates and modifications to Salesforce objects are promptly reflected in the corresponding Flow collections. + +## Running Both Connectors in Parallel +To combine the capabilities of both connectors, create two separate captures: one using the historical connector to capture historical data, and the other using the real-time connector to capture real-time updates. Both captures can be configured to point to the same Flow collection, merging historical and real-time data in the same destination. + +This approach keeps an up-to-date representation of your Salesforce data while preserving its historical context, pairing batch backfill with real-time streaming in one dataflow. + +For help using both connectors in parallel, [contact Estuary's support team](mailto:info@estuary.dev). \ No newline at end of file diff --git a/site/docs/reference/Connectors/capture-connectors/alloydb.md b/site/docs/reference/Connectors/capture-connectors/alloydb.md index 7b457e4bcc..92920f7906 100644 --- a/site/docs/reference/Connectors/capture-connectors/alloydb.md +++ b/site/docs/reference/Connectors/capture-connectors/alloydb.md @@ -6,7 +6,7 @@ sidebar_position: 1 This connector uses change data capture (CDC) to continuously capture table updates in an AlloyDB database into one or more Flow collections. AlloyDB is a fully managed, PostgreSQL-compatible database available in the Google Cloud platform.
-This connector is derived from the [PostgreSQL capture connector](./PostgreSQL.md), +This connector is derived from the [PostgreSQL capture connector](/reference/Connectors/capture-connectors/PostgreSQL/), so the same configuration applies, but the setup steps look somewhat different. It's available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/source-alloydb:dev`](https://github.com/estuary/connectors/pkgs/container/source-alloydb) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions. diff --git a/site/docs/reference/Connectors/materialization-connectors/MySQL/amazon-rds-mysql.md b/site/docs/reference/Connectors/materialization-connectors/MySQL/amazon-rds-mysql.md index a1c7fb3ccc..9a643d0e38 100644 --- a/site/docs/reference/Connectors/materialization-connectors/MySQL/amazon-rds-mysql.md +++ b/site/docs/reference/Connectors/materialization-connectors/MySQL/amazon-rds-mysql.md @@ -61,11 +61,11 @@ setting the user name to `ec2-user`. 5. Find and note the [instance's public DNS](https://docs.aws.amazon.com/vpc/latest/userguide/vpc-dns.html#vpc-dns-viewing). This will be formatted like: `ec2-198-21-98-1.compute-1.amazonaws.com`. * **Connect with SSH tunneling** - 1. Refer to the [guide](../../../../../guides/connect-network/) to configure an SSH server on the cloud platform of your choice. + 1. Refer to the [guide](/guides/connect-network/) to configure an SSH server on the cloud platform of your choice. 2. Configure your connector as described in the [configuration](#configuration) section above, with the addition of the `networkTunnel` stanza to enable the SSH tunnel, if using. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) + See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. :::tip Configuration Tip @@ -179,7 +179,7 @@ materializations: ## Delta updates -This connector supports both standard (merge) and [delta updates](../../../../concepts/materialization.md#delta-updates). +This connector supports both standard (merge) and [delta updates](/concepts/materialization.md#delta-updates). The default is to use standard updates. ## Date & times diff --git a/site/docs/reference/Connectors/materialization-connectors/MySQL/google-cloud-sql-mysql.md b/site/docs/reference/Connectors/materialization-connectors/MySQL/google-cloud-sql-mysql.md index 0692611dbe..775a01ba30 100644 --- a/site/docs/reference/Connectors/materialization-connectors/MySQL/google-cloud-sql-mysql.md +++ b/site/docs/reference/Connectors/materialization-connectors/MySQL/google-cloud-sql-mysql.md @@ -164,13 +164,13 @@ materializations: * [Enable public IP on your database](https://cloud.google.com/sql/docs/mysql/configure-ip#add) and add `34.121.207.128` as an authorized IP address. 2. To allow secure connections via SSH tunneling:
+ * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, include the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. Configure your connector as described in the [configuration](#configuration) section above, with the addition of the `networkTunnel` stanza to enable the SSH tunnel, if using. -See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) +See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. :::tip Configuration Tip @@ -192,7 +192,7 @@ Together, you'll use the host:port as the `address` property when you configure ## Delta updates -This connector supports both standard (merge) and [delta updates](../../../../concepts/materialization.md#delta-updates). +This connector supports both standard (merge) and [delta updates](/concepts/materialization.md#delta-updates). The default is to use standard updates. ## Date & times diff --git a/site/docs/reference/Connectors/materialization-connectors/MySQL/mysql.md b/site/docs/reference/Connectors/materialization-connectors/MySQL/mysql.md index 9d524d5e81..84308b5956 100644 --- a/site/docs/reference/Connectors/materialization-connectors/MySQL/mysql.md +++ b/site/docs/reference/Connectors/materialization-connectors/MySQL/mysql.md @@ -22,23 +22,22 @@ To use this connector, you'll need: To meet these requirements, follow the steps for your hosting type. -* [Amazon RDS](./amazon-rds-postgres/) -* [Google Cloud SQL](./google-cloud-sql-postgres/) -* [Azure Database for PostgreSQL](#azure-database-for-mysql) +* [Amazon RDS](./amazon-rds-mysql/) +* [Google Cloud SQL](./google-cloud-sql-mysql/) +* [Azure Database for MySQL](#azure-database-for-mysql) -In addition to standard PostgreSQL, this connector supports cloud-based PostgreSQL instances. Google Cloud Platform, Amazon Web Service, and Microsoft Azure are currently supported. You may use other cloud platforms, but Estuary doesn't guarantee performance. +In addition to standard MySQL, this connector supports cloud-based MySQL instances. Google Cloud Platform, Amazon Web Services, and Microsoft Azure are currently supported. You may use other cloud platforms, but Estuary doesn't guarantee performance. To connect securely, you can either enable direct access for Flow's IP or use an SSH tunnel. -### Azure Database for PostgreSQL +### Azure Database for MySQL You must configure your database to allow connections from Estuary. There are two ways to do this: by granting direct access to Flow's IP or by creating an SSH tunnel. * **Connect Directly With Azure Database For MySQL**: Create a new [firewall rule](https://learn.microsoft.com/en-us/azure/mysql/single-server/how-to-manage-firewall-using-portal) that grants access to the IP address `34.121.207.128` -* **Connect With SSH Tunneling**: Follow the instructions for setting up an SSH connection to [Azure Database](../../../../guides/connect-network/#setup-for-azure). - +* **Connect With SSH Tunneling**: Follow the instructions for setting up an SSH connection to [Azure Database](/guides/connect-network/#setup-for-azure).
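As a quick orientation before the Configuration section that follows, a minimal materialization spec for this connector might look like the following sketch. All names, addresses, and credentials here are placeholders; the properties tables below define the full schema:

```yaml
materializations:
  ${PREFIX}/${MATERIALIZATION_NAME}:
    endpoint:
      connector:
        image: ghcr.io/estuary/materialize-mysql:dev
        config:
          address: "db.example.com:3306"   # host:port of your MySQL instance
          database: "flow"
          user: "flow_materialize"
          password: "secret"
    bindings:
      - resource:
          table: ${TABLE_NAME}
        source: ${PREFIX}/${COLLECTION_NAME}
```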
## Configuration @@ -166,11 +165,11 @@ There are two ways to do this: by granting direct access to Flow's IP or by crea * **Connect with SSH tunneling** - 1. Refer to the [guide](../../../../../guides/connect-network/) to configure an SSH server on the cloud platform of your choice. + 1. Refer to the [guide](/guides/connect-network/) to configure an SSH server on the cloud platform of your choice. 2. Configure your connector as described in the [configuration](#configuration) section above, with the addition of the `networkTunnel` stanza to enable the SSH tunnel, if using. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) + See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. :::tip Configuration Tip @@ -184,7 +183,7 @@ You can find the host and port in the following locations in each platform's con ## Delta updates -This connector supports both standard (merge) and [delta updates](../../../../concepts/materialization.md#delta-updates). +This connector supports both standard (merge) and [delta updates](/concepts/materialization.md#delta-updates). The default is to use standard updates. ## Date & times diff --git a/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/PostgreSQL.md b/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/PostgreSQL.md index 1d0fbc1b74..c6e5c4f4ca 100644 --- a/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/PostgreSQL.md +++ b/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/PostgreSQL.md @@ -23,10 +23,20 @@ In addition to standard PostgreSQL, this connector supports cloud-based PostgreS To connect securely, you can either enable direct access for Flow's IP or use an SSH tunnel. +:::tip Configuration Tip +To configure the connector, you must specify the database address in the format `host:port`. (You can also supply `host` only; the connector will use the port `5432` by default, which is correct in many cases.) +You can find the host and port in the following locations in each platform's console: +* Amazon RDS and Amazon Aurora: host as Endpoint; port as Port. +* Google Cloud SQL: host as Private IP Address; port is always `5432`. You may need to [configure private IP](https://cloud.google.com/sql/docs/postgres/configure-private-ip) on your database. +* Azure Database: host as Server Name; port under Connection Strings (usually `5432`). +* TimescaleDB: host as Host; port as Port. +::: ### Azure Database for PostgreSQL -* Create a new [firewall rule](https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-manage-firewall-portal#create-a-firewall-rule-after-server-is-created) that grants access to the IP address `34.121.207.128`. +* **Connect Directly With Azure Database For PostgreSQL**: Create a new [firewall rule](https://docs.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-manage-firewall-portal#create-a-firewall-rule-after-server-is-created) that grants access to the IP address `34.121.207.128`. + +* **Connect With SSH Tunneling**: Follow the instructions for setting up an SSH connection to [Azure Database](/guides/connect-network/#setup-for-azure). ## Configuration @@ -79,42 +89,10 @@ materializations: source: ${PREFIX}/${COLLECTION_NAME} ``` -### Setup - -You must configure your database to allow connections from Estuary.
-There are two ways to do this: by granting direct access to Flow's IP or by creating an SSH tunnel. - -* **Connect directly with Amazon RDS or Amazon Aurora**: Edit the VPC security group associated with your database instance, or create a new VPC security group and associate it with the database instance. - * [Modify the instance](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Modifying.html), choosing **Publicly accessible** in the **Connectivity** settings. See the instructions below to use SSH Tunneling instead of enabling public access. - - * Refer to the [steps in the Amazon documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.RDSSecurityGroups.html#Overview.RDSSecurityGroups.Create). - Create a new inbound rule and a new outbound rule that allow all traffic from the IP address `34.121.207.128`. - -* **Connect directly with Google Cloud SQL**: [Enable public IP on your database](https://cloud.google.com/sql/docs/mysql/configure-ip#add) and add `34.121.207.128` as an authorized IP address. See the instructions below to use SSH Tunneling instead of enabling public access. - - - -* **Connect with SSH tunneling** - 1. Refer to the [guide](../../../../../guides/connect-network/) to configure an SSH server on the cloud platform of your choice. - - 2. Configure your connector as described in the [configuration](#configuration) section above, - with the additional of the `networkTunnel` stanza to enable the SSH tunnel, if using. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) - for additional details and a sample. - - -:::tip Configuration Tip -To configure the connector, you must specify the database address in the format `host:port`. (You can also supply `host` only; the connector will use the port `5432` by default, which is correct in many cases.) -You can find the host and port in the following locations in each platform's console: -* Amazon RDS and Amazon Aurora: host as Endpoint; port as Port. -* Google Cloud SQL: host as Private IP Address; port is always `5432`. You may need to [configure private IP](https://cloud.google.com/sql/docs/postgres/configure-private-ip) on your database. -* Azure Database: host as Server Name; port under Connection Strings (usually `5432`). -* TimescaleDB: host as Host; port as Port. -::: - ## Delta updates -This connector supports both standard (merge) and [delta updates](../../../../concepts/materialization.md#delta-updates). +This connector supports both standard (merge) and [delta updates](/concepts/materialization.md#delta-updates). + The default is to use standard updates. ## Reserved words diff --git a/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/amazon-rds-postgres.md b/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/amazon-rds-postgres.md index 18317edde3..e23be65587 100644 --- a/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/amazon-rds-postgres.md +++ b/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/amazon-rds-postgres.md @@ -112,7 +112,7 @@ materializations: ## Delta updates -This connector supports both standard (merge) and [delta updates](../../../../concepts/materialization.md#delta-updates). +This connector supports both standard (merge) and [delta updates](/concepts/materialization.md#delta-updates). The default is to use standard updates. 
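Where delta updates are wanted, they're enabled per binding. A sketch, assuming the `delta_updates` resource property this family of SQL materializations exposes (check the connector's resource properties table to confirm; the table and collection names are placeholders):

```yaml
bindings:
  - resource:
      table: flow_events
      delta_updates: true   # apply documents directly, skipping the merge/load step
    source: ${PREFIX}/${COLLECTION_NAME}
```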
## Reserved words diff --git a/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/google-cloud-sql-postgres.md b/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/google-cloud-sql-postgres.md index 223cb7089f..d5ba6be8b1 100644 --- a/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/google-cloud-sql-postgres.md +++ b/site/docs/reference/Connectors/materialization-connectors/PostgreSQL/google-cloud-sql-postgres.md @@ -105,7 +105,7 @@ materializations: ## Delta updates -This connector supports both standard (merge) and [delta updates](../../../../concepts/materialization.md#delta-updates). +This connector supports both standard (merge) and [delta updates](/concepts/materialization.md#delta-updates). The default is to use standard updates. ## Reserved words diff --git a/site/docs/reference/Connectors/materialization-connectors/README.md b/site/docs/reference/Connectors/materialization-connectors/README.md index ff2c432309..e1231650fb 100644 --- a/site/docs/reference/Connectors/materialization-connectors/README.md +++ b/site/docs/reference/Connectors/materialization-connectors/README.md @@ -45,13 +45,13 @@ In the future, other open-source materialization connectors from third parties c * [Configuration](./motherduck.md) * Package - ghcr.io/estuary/materialize-motherduck:dev * MySQL - * [Configuration](./mysql.md) + * [Configuration](./MySQL/) * Package - ghcr.io/estuary/materialize-mysql:dev * Pinecone * [Configuration](./pinecone.md) * Package — ghcr.io/estuary/materialize-pinecone:dev * PostgreSQL - * [Configuration](./PostgreSQL.md) + * [Configuration](./PostgreSQL/) * Package — ghcr.io/estuary/materialize-postgres:dev * Rockset * [Configuration](./Rockset.md) @@ -62,6 +62,9 @@ In the future, other open-source materialization connectors from third parties c * SQLite * [Configuration](./SQLite.md) * Package — ghcr.io/estuary/materialize-sqlite:dev +* SQL Server + * [Configuration](./SQLServer/) + * Package - ghcr.io/estuary/materialize-sqlserver:dev * TimescaleDB * [Configuration](./timescaledb.md) * Package - ghcr.io/estuary/materialize-timescaledb:dev diff --git a/site/docs/reference/Connectors/materialization-connectors/SQLServer/sqlserver.md b/site/docs/reference/Connectors/materialization-connectors/SQLServer/sqlserver.md index 3ec7d74648..995f7884d1 100644 --- a/site/docs/reference/Connectors/materialization-connectors/SQLServer/sqlserver.md +++ b/site/docs/reference/Connectors/materialization-connectors/SQLServer/sqlserver.md @@ -39,11 +39,11 @@ GRANT CONTROL ON DATABASE:: TO flow_materialize; ``` 2. Allow secure connection to Estuary Flow from your hosting environment. Either: - * Set up an [SSH server for tunneling](../../../../../guides/connect-network/). + * Set up an [SSH server for tunneling](/guides/connect-network/). When you fill out the [endpoint configuration](#endpoint), include the additional `networkTunnel` configuration to enable the SSH tunnel. - See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) + See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. * Whitelist the Estuary IP address, `34.121.207.128` in your firewall rules. 
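Continuing the setup above, these pieces come together in the endpoint configuration. A sketch for the whitelist path (address, database, and password are placeholders; the user matches the `flow_materialize` login granted access above):

```yaml
endpoint:
  connector:
    image: ghcr.io/estuary/materialize-sqlserver:dev
    config:
      address: "sqlserver.example.com:1433"   # host:port of your SQL Server instance
      database: "my_db"
      user: "flow_materialize"
      password: "secret"
```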
@@ -56,8 +56,8 @@ GRANT CONTROL ON DATABASE:: TO flow_materialize; * Create a new [firewall rule](https://learn.microsoft.com/en-us/azure/azure-sql/database/firewall-configure?view=azuresql#use-the-azure-portal-to-manage-server-level-ip-firewall-rules) that grants access to the IP address `34.121.207.128`. 2. To allow secure connections via SSH tunneling: - * Follow the guide to [configure an SSH server for tunneling](../../../../../guides/connect-network/) - * When you configure your connector as described in the [configuration](#configuration) section above, including the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](../../../../concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. + * Follow the guide to [configure an SSH server for tunneling](/guides/connect-network/) + * When you configure your connector as described in the [configuration](#configuration) section above, include the additional `networkTunnel` configuration to enable the SSH tunnel. See [Connecting to endpoints on secure networks](/concepts/connectors.md#connecting-to-endpoints-on-secure-networks) for additional details and a sample. 2. In your SQL client, connect to your instance as the default `sqlserver` user and issue the following commands. @@ -121,7 +121,7 @@ materializations: ## Delta updates -This connector supports both standard (merge) and [delta updates](../../../../concepts/materialization.md#delta-updates). +This connector supports both standard (merge) and [delta updates](/concepts/materialization.md#delta-updates). The default is to use standard updates. ## Reserved words diff --git a/site/docs/reference/Connectors/materialization-connectors/alloydb.md b/site/docs/reference/Connectors/materialization-connectors/alloydb.md index 3ac5fd3edc..7cf792f5fe 100644 --- a/site/docs/reference/Connectors/materialization-connectors/alloydb.md +++ b/site/docs/reference/Connectors/materialization-connectors/alloydb.md @@ -6,7 +6,7 @@ sidebar_position: 1 This connector materializes Flow collections into tables in an AlloyDB database. AlloyDB is a fully managed, PostgreSQL-compatible database available in the Google Cloud platform. -This connector is derived from the [PostgreSQL materialization connector](./PostgreSQL.md), +This connector is derived from the [PostgreSQL materialization connector](/reference/Connectors/materialization-connectors/PostgreSQL/), so the same configuration applies, but the setup steps look somewhat different. It's available for use in the Flow web application. For local development or open-source workflows, [`ghcr.io/estuary/materialize-alloydb:dev`](https://ghcr.io/estuary/materialize-alloydb:dev) provides the latest version of the connector as a Docker image. You can also follow the link in your browser to see past image versions.
-The connector is derived from the main [PostgreSQL](./PostgreSQL.md) materialization connector +The connector is derived from the main [PostgreSQL](/reference/Connectors/materialization-connectors/PostgreSQL/) materialization connector and has the same configuration. By default, the connector only materializes regular PostgreSQL tables in TimescaleDB.
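Because these derived connectors share the PostgreSQL configuration, switching targets is mostly a matter of the image name and the address. A sketch with placeholder values — shown here for TimescaleDB, where the host and port come from your Timescale service's connection info; substituting `ghcr.io/estuary/materialize-alloydb:dev` and an AlloyDB address works the same way:

```yaml
endpoint:
  connector:
    image: ghcr.io/estuary/materialize-timescaledb:dev
    config:
      address: "your-service.a1b2c3.tsdb.cloud.timescale.com:31234"   # Host and Port from the Timescale console
      database: "tsdb"
      user: "tsdbadmin"
      password: "secret"
```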