The flink-tidb-connector doesn't work as expected when I use upsert mode to sink data to TiDB; the reason is as stated in the title. Flink version: 1.13.x. Is there any reason for making the unique key part of the primary key?
The issue: flink-tidb-connector uses the official JDBC connector, flink-jdbc-connector, to write data to TiDB. In upsert mode, the records in flink-jdbc-connector's buffer are deduplicated by the configured keyFields (here the unique key, not the primary key), and the executeBatch call that flushes the buffer is unordered, because the buffer is a HashMap; refer to TableBufferReducedStatementExecutor.java. Together, these can put multiple records with the same primary key into one batch and write them to TiDB out of order (see the sketch below).
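To make the failure mode concrete, here is a minimal Java sketch (not the connector's actual code; `ReduceBufferSketch`, `Row`, and the sample values are illustrative) of a reduce buffer keyed by a unique key instead of the primary key:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the reduce-buffer behavior described above:
// records are deduplicated by the configured key fields, and flushing
// iterates a HashMap, so output order is arbitrary.
public class ReduceBufferSketch {

    // A row with a primary key (id) and a separate unique key (email).
    record Row(long id, String email, String payload) {}

    public static void main(String[] args) {
        // Buffer keyed by the unique key (email), mirroring the case where
        // keyFields is the unique key rather than the primary key.
        Map<String, Row> buffer = new HashMap<>();

        // Two updates to the SAME primary key (id=1) with DIFFERENT
        // unique-key values: both survive deduplication.
        buffer.put("a@x.com", new Row(1, "a@x.com", "v1"));
        buffer.put("b@x.com", new Row(1, "b@x.com", "v2"));

        // Flushing iterates the HashMap: two rows with primary key 1 land in
        // one batch, in no guaranteed order, so which write wins is random.
        buffer.values().forEach(row ->
                System.out.println("executeBatch row: " + row));
    }
}
```

Had the buffer been keyed by the primary key `id`, the second update would have replaced the first and only one row per primary key would reach `executeBatch`.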
@itinycheng Sorry for missing this issue.
You are right; it's better to use the primary key as keyFields.
And we can add an SQL hint if someone needs to customize keyFields.
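For illustration, a per-query keyFields override could be passed through Flink's dynamic table options hint (`/*+ OPTIONS(...) */`, which is standard Flink SQL). The option name `sink.key-fields` below is hypothetical, not an existing connector option:

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

// Hedged sketch: passing a custom keyFields setting per query via Flink's
// dynamic table options hint. 'sink.key-fields' is an assumed option name;
// the hint syntax itself is standard Flink SQL.
public class KeyFieldsHintSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv = TableEnvironment.create(
                EnvironmentSettings.newInstance().inStreamingMode().build());

        // Dynamic table options are disabled by default in Flink 1.13.
        tEnv.getConfig().getConfiguration()
                .setBoolean("table.dynamic-table-options.enabled", true);

        // Force deduplication on the primary key column 'id' instead of the
        // table's unique key (table and column names are illustrative).
        tEnv.executeSql(
                "INSERT INTO tidb_sink /*+ OPTIONS('sink.key-fields'='id') */ "
                        + "SELECT id, email, payload FROM source_table");
    }
}
```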