Secondary indexes on in-memory tables are currently not very efficient: they're stored as a sorted list of rows, and don't get re-sorted until the end of a batch table edit operation.
This makes detecting duplicate values in a unique virtual column slow and nontrivial.
In order to detect duplicates as rows are inserted, we would need to either:
- Scan every row of the table for each insert operation, computing the virtual column for each row, or
- Use a data structure for indexes that can be updated in place (see the sketch after this list).
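
For illustration, here is a minimal Go sketch of the second option: an in-place-updatable unique index keyed on a computed virtual column value. The names here (`uniqueVirtualIndex`, `VirtualColumnFn`, etc.) are hypothetical and not part of the engine's actual API, and a map only covers uniqueness checks; a real replacement for the sorted-list index would need an ordered structure (e.g. a balanced tree or skip list) to also support range scans.

```go
package main

import "fmt"

// Row is a simplified stand-in for a table row; the real engine's row type differs.
type Row []interface{}

// VirtualColumnFn computes a virtual column's value from a row.
type VirtualColumnFn func(Row) (interface{}, error)

// uniqueVirtualIndex is a hypothetical in-place-updatable unique index keyed on a
// computed virtual column value. A map gives O(1) duplicate detection per insert,
// unlike a sorted slice that is only re-sorted at the end of a batch edit.
type uniqueVirtualIndex struct {
	compute VirtualColumnFn
	entries map[interface{}]int // virtual column value -> row position
}

func newUniqueVirtualIndex(compute VirtualColumnFn) *uniqueVirtualIndex {
	return &uniqueVirtualIndex{compute: compute, entries: make(map[interface{}]int)}
}

// Insert computes the virtual column for the new row and rejects duplicates
// immediately, without scanning the whole table.
func (idx *uniqueVirtualIndex) Insert(rowPos int, row Row) error {
	key, err := idx.compute(row)
	if err != nil {
		return err
	}
	if existing, ok := idx.entries[key]; ok {
		return fmt.Errorf("unique constraint violation: value %v already present at row %d", key, existing)
	}
	idx.entries[key] = rowPos
	return nil
}

// Delete removes a row's entry so the index stays consistent during batch edits.
func (idx *uniqueVirtualIndex) Delete(row Row) error {
	key, err := idx.compute(row)
	if err != nil {
		return err
	}
	delete(idx.entries, key)
	return nil
}

func main() {
	// Hypothetical virtual column: the length of the string in column 0.
	strLen := func(r Row) (interface{}, error) {
		s, _ := r[0].(string)
		return len(s), nil
	}
	idx := newUniqueVirtualIndex(strLen)
	fmt.Println(idx.Insert(0, Row{"abc"})) // <nil>
	fmt.Println(idx.Insert(1, Row{"xyz"})) // duplicate: computed value 3 already indexed
}
```

The point of the sketch is that duplicate detection becomes a constant-time lookup per insert, rather than a full-table scan or a check deferred until the index is re-sorted at the end of the batch.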
I added disabled tests for this in #2641