(DOCSP-43962) [Kafka] Kafka connector nested components (#167) (#169)
* (DOCSP-43962) Unnested 1 example in tip.

* (DOCSP-43962) Removing the subjunctive should

* (DOCSP-42962) Unnested 1 important admonition in table.

* (DOCSP-42962) Typo fix.

* (DOCSP-42962) Unnested 1 important admonition in table.

* (DOCSP-42962) Unnested 1 tip in table.

* (DOCSP-42962) Fixing indentation error and unnesting 1 important admonition in table.

* (DOCSP-42962) Spacing.

* (DOCSP-42962) Build errors.

* (DOCSP-42962) Unnested 1 example and 1 note in table.

* (DOCSP-42962) Unnested 2 examples and 1 note in include.

* (DOCSP-42962) Replace quotation marks with monospace.

* (DOCSP-42962) Unnested 2 notes in table.

* (DOCSP-42962) Consistency.

* (DOCSP-42962) Edited for consistency and unnested 2 tips.

* (DOCSP-42962) Removing extra line breaks.

* (DOCSP-42962) Unnested 2 important admonitions in table.

* (DOCSP-42962) Fix rendering error.

* (DOCSP-42962) Fixing rendering issues.

* (DOCSP-42962) Unnested 2 notes and 1 example in table.

* (DOCSP-42962) Unnested 3 important admonitions

* (DOCSP-42962) Spacing fix.

* (DOCSP-42962) Unnested 1 example and 3 tips.

* (DOCSP-42962) Unnested 1 note, 2 tips, 1 important admonition.

* (DOCSP-42962) Spacing.

* (DOCSP-42962) Spacing.

* (DOCSP-42962) Normalizing spacing and adding reference to the Data Formats page.

* (DOCSP-42962) Unnested 3 tips, 1 example and 1 important admonition in table.

* (DOCSP-42962) Regularizing usage of pipeline operator line breaks.

* (DOCSP-42962) Spacing.

* (DOCSP-42962) Spacing.

* (DOCSP-42962) Normalizing spacing across pages.

* Update source/source-connector/configuration-properties/kafka-topic.txt



* Update source/source-connector/configuration-properties/kafka-topic.txt



---------

Co-authored-by: carriecwk <[email protected]>
elyse-mdb and carriecwk authored Oct 4, 2024
1 parent e5b823d commit 1d2b87c
Showing 20 changed files with 260 additions and 275 deletions.
12 changes: 5 additions & 7 deletions source/includes/copy-existing-admonition.rst
Original file line number Diff line number Diff line change
@@ -1,7 +1,5 @@
.. note:: Data Copy Can Produce Duplicate Events

If any system changes the data in the database while the source connector
converts existing data from it, MongoDB may produce duplicate change
stream events to reflect the latest changes. Since the change stream
events on which the data copy relies are idempotent, the copied data is
eventually consistent.
If any system changes the data in the database while the source connector
converts existing data from it, MongoDB may produce duplicate change
stream events to reflect the latest changes. Since the change stream
events on which the data copy relies are idempotent, the copied data is
eventually consistent.
10 changes: 4 additions & 6 deletions source/includes/externalize-secrets.rst
@@ -1,7 +1,5 @@
.. important:: Avoid Exposing Your Authentication Credentials

To avoid exposing your authentication credentials in your
``connection.uri`` setting, use a
`ConfigProvider <https://docs.confluent.io/current/connect/security.html#externalizing-secrets>`__
and set the appropriate configuration parameters.
:gold:`IMPORTANT:` To avoid exposing your authentication credentials in your
``connection.uri`` setting, use a
`ConfigProvider <https://docs.confluent.io/current/connect/security.html#externalizing-secrets>`__
and set the appropriate configuration parameters.
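
As a sketch of the pattern this include recommends, the worker and connector settings below use Kafka's built-in ``FileConfigProvider``; the secrets file path and property key are illustrative assumptions, not values from this commit:

```properties
# Connect worker configuration: register the built-in FileConfigProvider.
config.providers=file
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider

# Connector configuration: reference the secret instead of embedding it.
# Assumes a hypothetical /etc/kafka/secrets.properties containing a line like:
#   mongodb.uri=mongodb+srv://user:password@cluster0.example.net
connection.uri=${file:/etc/kafka/secrets.properties:mongodb.uri}
```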

2 changes: 2 additions & 0 deletions source/introduction/data-formats.txt
@@ -1,3 +1,5 @@
.. _kafka-data-formats:

============
Data Formats
============
@@ -75,8 +75,9 @@ Settings
When set to ``true``, the default value, the
connector writes a batch of records as an ordered bulk write
operation.
| To learn more about bulk write operations, see the :ref:`Write
Model Strategies page <sink-connector-bulk-write-ops>`.
|
| To learn more about bulk write operations, see
:ref:`Bulk Write Operations <sink-connector-bulk-write-ops>`.
|
| **Default**: ``true``
| **Accepted Values**: ``true`` or ``false``
@@ -111,15 +112,13 @@ Settings
| The maximum number of tasks to create for this connector. The
connector may create fewer than the maximum tasks specified if it
cannot handle the level of parallelism you specify.

.. important:: Multiple Tasks May Process Messages Out of Order

If you specify a value greater than ``1``, the connector enables
parallel processing of the tasks. If your topic has multiple
partition logs, which enables the connector to read from the
topic in parallel, the tasks may process the messages out of
order.

|
| :gold:`IMPORTANT:` If you specify a value greater than ``1``,
the connector enables parallel processing of the tasks.
If your topic has multiple partition logs, which enables
the connector to read from the topic in parallel,
the tasks may process the messages out of order.
|
| **Default**: ``1``
| **Accepted Values**: An integer

40 changes: 23 additions & 17 deletions source/sink-connector/configuration-properties/error-handling.txt
@@ -41,17 +41,21 @@ Settings
| Whether to continue processing messages if the connector encounters
an error. Allows the connector to override the ``errors.tolerance``
Kafka cluster setting.
|
| When set to ``none``, the connector reports any error and
blocks further processing of the rest of the messages.
|
| When set to ``all``, the connector ignores any problematic messages.
|
| When set to ``data``, the connector tolerates only data errors and
fails on all other errors.
|
| To learn more about error handling strategies, see the
:ref:`<kafka-sink-handle-errors>` page.

.. note::

This property overrides the `errors.tolerance <https://docs.confluent.io/platform/current/installation/configuration/connect/sink-connect-configs.html#errors-tolerance>`__
property of the Connect Framework.

|
| This property overrides the `errors.tolerance <https://docs.confluent.io/platform/current/installation/configuration/connect/sink-connect-configs.html#errors-tolerance>`__
| property of the Connect Framework.
|
| **Default:** Inherits the value from the ``errors.tolerance``
setting.
| **Accepted Values**: ``"none"`` or ``"all"``
@@ -64,16 +68,14 @@ Settings
failed operations to the log file. The connector classifies
errors as "tolerated" or "not tolerated" using the
``errors.tolerance`` or ``mongo.errors.tolerance`` settings.

|
| When set to ``true``, the connector logs both "tolerated" and
"not tolerated" errors.
| When set to ``false``, the connector logs "not tolerated" errors.

.. note::

This property overrides the `errors.log.enable <https://docs.confluent.io/platform/current/installation/configuration/connect/sink-connect-configs.html#errors-log-enable>`__
property of the Connect Framework.

|
| This property overrides the `errors.log.enable <https://docs.confluent.io/platform/current/installation/configuration/connect/sink-connect-configs.html#errors-log-enable>`__
| property of the Connect Framework.
|
| **Default:** ``false``
| **Accepted Values**: ``true`` or ``false``

@@ -95,7 +97,8 @@ Settings
| Name of topic to use as the dead letter queue. If blank, the
connector does not send any invalid messages to the dead letter
queue.
| For more information about the dead letter queue, see the
|
| To learn more about the dead letter queue, see the
:ref:`Dead Letter Queue Configuration Example <sink-dead-letter-queue-configuration-example>`.
|
| **Default:** ``""``
@@ -107,11 +110,13 @@ Settings
| **Description:**
| Whether the connector should include context headers when it
writes messages to the dead letter queue.
|
| To learn more about the dead letter queue, see the
:ref:`Dead Letter Queue Configuration Example <sink-dead-letter-queue-configuration-example>`.
|
| To learn about the exceptions the connector defines and
reports through context headers, see the
:ref:`<sink-configuration-error-handling-dlq-errors>` section.
reports through context headers, see
:ref:`<sink-configuration-error-handling-dlq-errors>`.
|
| **Default:** ``false``
| **Accepted Values**: ``true`` or ``false``
@@ -123,7 +128,8 @@ Settings
| The number of nodes on which to replicate the dead letter queue
topic. If you are running a single-node Kafka cluster, you must
set this to ``1``.
| For more information about the dead letter queue, see the
|
| To learn more about the dead letter queue, see the
:ref:`Dead Letter Queue Configuration Example <sink-dead-letter-queue-configuration-example>`.
|
| **Default:** ``3``
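
Taken together, the error-handling settings above can be sketched as the following sink connector fragment; the DLQ topic name is a made-up example, and the ``errors.deadletterqueue.*`` property names are the standard Connect framework ones, assumed rather than taken from this diff:

```properties
# Tolerate problematic messages instead of stopping the connector,
# and log the errors that were tolerated.
mongo.errors.tolerance=all
mongo.errors.log.enable=true

# Route invalid messages, with context headers, to a dead letter queue.
errors.deadletterqueue.topic.name=mongodb-sink.dlq
errors.deadletterqueue.context.headers.enable=true
# Must be 1 on a single-node Kafka cluster.
errors.deadletterqueue.topic.replication.factor=1
```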
@@ -76,8 +76,9 @@ Settings
|
| **Description:**
| Whether the connector should delete documents when the key value
matches a document in MongoDB and the value field is null. This
setting applies when you specify an id generation strategy that
matches a document in MongoDB and the value field is null.
|
| This setting applies when you specify an id generation strategy that
operates on the key document such as ``FullKeyStrategy``,
``PartialKeyStrategy``, and ``ProvidedInKeyStrategy``.
|
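
A minimal sketch of the delete-on-null behavior described above; the ``delete.on.null.values`` and ``document.id.strategy`` property names and the strategy class path are recalled from the connector docs rather than shown in this diff, so verify them before use:

```properties
# Treat a record whose value is null as a delete of the matching document.
delete.on.null.values=true

# Match on the full record key as the document id (assumed class path).
document.id.strategy=com.mongodb.kafka.connect.sink.processor.id.strategy.FullKeyStrategy
```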
24 changes: 9 additions & 15 deletions source/sink-connector/configuration-properties/kafka-topic.txt
@@ -42,10 +42,8 @@ Settings
| **Description:**
| A list of Kafka topics that the sink connector watches.

.. note::

You can define either the ``topics`` or the ``topics.regex``
setting, but not both.
You can define either the ``topics`` or the ``topics.regex``
setting, but not both.

| **Accepted Values**: A comma-separated list of valid Kafka topics

@@ -58,20 +56,16 @@ Settings
| A regular expression that matches the Kafka topics that the sink
connector watches.

.. example::

.. code-block:: properties

topics.regex=activity\\.\\w+\\.clicks$
For example, the following regex matches topic names such as
"activity.landing.clicks" and "activity.support.clicks".
It does not match the topic names "activity.landing.views" and "activity.clicks".

This regex matches topic names such as "activity.landing.clicks"
and "activity.support.clicks". It does not match the topic names
"activity.landing.views" and "activity.clicks".
.. code-block:: properties

.. note::
topics.regex=activity\\.\\w+\\.clicks$

You can define either the ``topics`` or the ``topics.regex``
setting, but not both.
You can define either the ``topics`` or the ``topics.regex``
setting, but not both.

| **Accepted Values**: A valid regular expression pattern using ``java.util.regex.Pattern``.
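
The matching behavior described for ``topics.regex`` can be sanity-checked outside Java; for this particular pattern, Python's ``re.fullmatch`` mirrors how ``java.util.regex.Pattern`` is applied to whole topic names (a convenient approximation, not the connector's actual code path):

```python
import re

# The properties value activity\\.\\w+\\.clicks$ unescapes to this pattern.
topic_pattern = re.compile(r"activity\.\w+\.clicks$")

matching = ["activity.landing.clicks", "activity.support.clicks"]
non_matching = ["activity.landing.views", "activity.clicks"]

# fullmatch approximates Java's whole-string Pattern.matches semantics.
assert all(topic_pattern.fullmatch(t) for t in matching)
assert not any(topic_pattern.fullmatch(t) for t in non_matching)
```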

@@ -68,6 +68,7 @@ Settings
| When set to ``true``, if the connector calls a command on your
MongoDB instance that's deprecated in the declared {+stable-api+}
version, it raises an exception.
|
| You can set the API version with the ``server.api.version``
configuration option. For more information on the {+stable-api+}, see
the MongoDB manual entry on the
@@ -83,6 +84,7 @@ Settings
| When set to ``true``, if the connector calls a command on your
MongoDB instance that's not covered in the declared {+stable-api+}
version, it raises an exception.
|
| You can set the API version with the ``server.api.version``
configuration option. For more information on the {+stable-api+}, see
the MongoDB manual entry on the
@@ -43,14 +43,12 @@ Settings
database or collection in which to sink the data. The default
``DefaultNamespaceMapper`` uses values specified in the
``database`` and ``collection`` properties.

.. seealso::

The connector includes an alternative class for specifying the
database and collection called ``FieldPathNamespaceMapper``. See
the :ref:`FieldPathNamespaceMapper settings <fieldpathnamespacemapper-settings>`
for more information.

|
| The connector includes an alternative class for specifying the
| database and collection called ``FieldPathNamespaceMapper``. See
| :ref:`FieldPathNamespaceMapper Settings <fieldpathnamespacemapper-settings>`
| for more information.
|
| **Default**:

.. code-block:: none
@@ -149,10 +147,12 @@ You can use the following settings to customize the behavior of the
| **Description**:
| Whether to throw an exception when either the document is missing the
mapped field or it has an invalid BSON type.
|
| When set to ``true``, the connector does not process documents
missing the mapped field or that contain an invalid BSON type.
The connector may halt or skip processing depending on the related
error-handling configuration settings.
|
| When set to ``false``, if a document is missing the mapped field or
if it has an invalid BSON type, the connector defaults to
writing to the specified ``database`` and ``collection`` settings.
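
A sketch of a ``FieldPathNamespaceMapper`` configuration tying the settings above together; the ``namespace.mapper.*`` property names, the class path, and the field names are recalled from the connector documentation rather than shown in this diff, so treat them as assumptions:

```properties
# Route each record by fields inside its value document (assumed class path).
namespace.mapper=com.mongodb.kafka.connect.sink.namespace.mapping.FieldPathNamespaceMapper

# Hypothetical field names holding the target database and collection.
namespace.mapper.value.database.field=db
namespace.mapper.value.collection.field=coll

# Refuse documents missing the mapped fields or carrying a bad BSON type.
namespace.mapper.error.if.invalid=true
```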
20 changes: 7 additions & 13 deletions source/sink-connector/configuration-properties/post-processors.txt
@@ -40,13 +40,10 @@ Settings
| **Description:**
| A list of post-processor classes the connector should apply to
process the data before saving it to MongoDB.

.. seealso::

For more information on post-processors and examples of
their usage, see the section on
:doc:`Post-processors </sink-connector/fundamentals/post-processors>`.

|
| To learn more about post-processors and see examples of
| their usage, see
| :doc:`Sink Connector Post Processors </sink-connector/fundamentals/post-processors>`.
|
| **Default**:

@@ -129,12 +126,9 @@ Settings
| **Description:**
| The class that specifies the ``WriteModelStrategy`` the connector should
use for :manual:`Bulk Writes </core/bulk-write-operations/index.html>`.

.. seealso::

For information on how to create your own strategy, see
:ref:`<kafka-sink-write-model-create-custom-strategy>`.

|
| To learn more about how to create your own strategy, see
| :ref:`<kafka-sink-write-model-create-custom-strategy>`.
|
| **Default**:

32 changes: 18 additions & 14 deletions source/sink-connector/configuration-properties/time-series.txt
@@ -53,9 +53,11 @@ Settings
| The date format pattern the connector should use to convert the
source data contained in the field specified by the
``timeseries.timefield`` setting.
The connector passes the date format pattern to the Java
|
| The connector passes the date format pattern to the Java
`DateTimeFormatter.ofPattern(pattern, locale) <https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html#ofPattern-java.lang.String-java.util.Locale->`__
method to perform date and time conversions on the time field.
|
| If the date value from the source data only contains date information,
the connector sets the time information to the start of the specified
day. If the date value does not contain the timezone offset, the
@@ -75,6 +77,7 @@ Settings
| **Description:**
| Whether to convert the data in the field into the BSON ``Date``
format.
|
| When set to ``true``, the connector uses the milliseconds
after epoch and discards fractional parts if the value is
a number. If the value is a string, the connector uses the
@@ -95,8 +98,9 @@ Settings
|
| **Description:**
| Which ``DateTimeFormatter`` locale language tag to use with the date
format pattern (e.g. ``"en-US"``). For more information on
locales, see the Java SE documentation of `Locale <https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html>`__.
format pattern (e.g. ``"en-US"``).
|
| To learn more about locales, see the Java SE documentation of `Locale <https://docs.oracle.com/javase/8/docs/api/java/util/Locale.html>`__.
|
| **Default**: ``ROOT``
| **Accepted Values**: A valid ``Locale`` language tag format
@@ -107,12 +111,10 @@ Settings
| **Description:**
| Which top-level field to read from the source data to describe
a group of related time series documents.

.. important::

This field must not be the ``_id`` field nor the field you specified
in the ``timeseries.timefield`` setting.

|
| :gold:`IMPORTANT:` This field must not be the ``_id`` field nor the field you specified
in the ``timeseries.timefield`` setting.
|
| **Default**: ``""``
| **Accepted Values**: An empty string or the name of a field
that contains any BSON type except ``BsonArray``.
@@ -124,8 +126,9 @@ Settings
| The number of seconds MongoDB should wait before automatically
removing the time series collection data. The connector disables
timed expiry when the setting value is less than ``1``.
For more information on this collection setting, see the MongoDB
Server Manual page on :manual:`Automatic Removal for Time Series Collections </core/timeseries/timeseries-automatic-removal/>`.
|
| To learn more, see :manual:`Set up Automatic Removal for Time Series Collections </core/timeseries/timeseries-automatic-removal/>`
in the MongoDB manual.
|
| **Default**: ``0``
| **Accepted Values**: An integer
@@ -136,9 +139,10 @@ Settings
|
| **Description:**
| The expected interval between subsequent measurements of your
source data. For more information on this setting, see the
MongoDB Server Manual page on :manual:`Granularity for Time
Series Data </core/timeseries/timeseries-granularity/>`.
source data.
|
| To learn more, see :manual:`Set Granularity for Time Series Data
</core/timeseries/timeseries-granularity/>` in the MongoDB manual.
|
| *Optional*
| **Default**: ``""``
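
The time series settings above can be combined into a sketch like the following; the field names and values are invented for illustration, and the property names (``timeseries.timefield``, ``timeseries.metafield``, and friends) are recalled from the connector docs rather than all shown in this diff:

```properties
# Sink into a time series collection keyed on a hypothetical "ts" field.
timeseries.timefield=ts
timeseries.timefield.auto.convert=true
timeseries.timefield.auto.convert.date.format=yyyy-MM-dd'T'HH:mm:ss'Z'

# Group measurements by a hypothetical device identifier field.
timeseries.metafield=deviceId

# Expire data after 30 days; granularity hints at the measurement interval.
timeseries.expire.after.seconds=2592000
timeseries.granularity=minutes
```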
22 changes: 9 additions & 13 deletions source/sink-connector/configuration-properties/topic-override.txt
@@ -40,19 +40,15 @@ Settings
| **Description:**
| Specify a topic and property name to override the corresponding
global or default property setting.

.. example::

The ``topic.override.foo.collection=bar`` setting instructs the
sink connector to store data from the ``foo`` topic in the ``bar``
collection.

.. note::

You can specify any valid configuration setting in the
``<propertyName>`` segment on a per-topic basis except
``connection.uri`` and ``topics``.

|
| For example, the ``topic.override.foo.collection=bar`` setting instructs
| the sink connector to store data from the ``foo`` topic in the ``bar``
| collection.
|
| You can specify any valid configuration setting in the
| ``<propertyName>`` segment on a per-topic basis except
| ``connection.uri`` and ``topics``.
|
| **Default**: ``""``
| **Accepted Values**: Accepted values specific to the overridden property
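
The override mechanism reads naturally as a layered configuration; the ``database`` and ``collection`` values below are invented defaults wrapped around the ``topic.override.foo.collection=bar`` example from the text:

```properties
# Global defaults applied to every topic (values are illustrative).
database=analytics
collection=events

# Messages from the "foo" topic go to the "bar" collection instead;
# every other topic still uses the global "events" collection.
topic.override.foo.collection=bar
```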
