Initialize all shards on index creation to avoid mapping conflicts #799
Conversation
Signed-off-by: Daniel Widdis <[email protected]>
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@             Coverage Diff             @@
##               main     #799    +/-   ##
============================================
+ Coverage     75.76%   75.77%   +0.01%
  Complexity      833      833
============================================
  Files            88       88
  Lines          4035     4037       +2
  Branches        371      371
============================================
+ Hits           3057     3059       +2
  Misses          824      824
  Partials        154      154
```

View full report in Codecov by Sentry.
Left a comment to discuss the approach further.

```java
- CreateIndexRequest request = new CreateIndexRequest(indexName).mapping(mapping).settings(indexSettings);
+ CreateIndexRequest request = new CreateIndexRequest(indexName).mapping(mapping)
+     .settings(indexSettings)
+     .waitForActiveShards(ActiveShardCount.ALL);
```
This would wait for all shards, both primaries and replicas, to become active, causing a performance hit just for creating an index, especially in large clusters or for indices with a high number of shards.
One option I can think of here:
- We can register a cluster state listener to monitor cluster state changes, which would tell us when the index has been created (pseudocode below):
```java
ClusterStateListener listener = new ClusterStateListener() {
    @Override
    public void clusterChanged(ClusterChangedEvent event) {
        if (event.state().metadata().hasIndex(indexName)) {
            // Index is created, proceed with mapping update
            updateIndexMapping(indexName, mapping);
            // Deregister so we only react once
            clusterService.removeListener(this);
        }
    }
};
clusterService.addListener(listener);
```
- We can separate out updating the mapping to the index (completing the pseudocode with generic success/failure handlers):

```java
void updateIndexMapping(String indexName, String mapping) {
    PutMappingRequest request = new PutMappingRequest(indexName)
        .source(mapping);
    client.admin().indices().putMapping(request, ActionListener.wrap(
        response -> { /* mapping applied, safe to index documents */ },
        exception -> { /* handle failure */ }
    ));
}
```

- [Optional] We can also add retries for index creation if required
> This would wait for all shards be it primary and replicas to become active and cause a performance hit just for creating an index especially in large clusters or when dealing with indices with a high number of shards.

Good concern, but we control this index, and we are creating the settings at the same time as we are creating the mappings. We have either zero replicas in a single-node cluster (in which case this PR changes nothing) or exactly one replica, which is the very replica creating this race condition.
flow-framework/src/main/java/org/opensearch/flowframework/indices/FlowFrameworkIndicesHandler.java (line 80 in f3a9e99):

```java
private static final Map<String, Object> indexSettings = Map.of("index.auto_expand_replicas", "0-1");
```
> We can register a cluster state listener to monitor a cluster state that would tell us if the index has been created

This is unneeded: the existing method doesn't return until the index has been created and assigned to the primary shard, and we already check the cluster state before indexing the document.
> We can separate out updating the mapping to the index

Fair enough, but assuming you do an immediate refresh of this, it's no different from waiting for the one replica to have the mapping created.
> [Optional] We can also add retries creating index if required

Unfortunately, in this case the retries will always fail. If you create an index with a `text` field mapping, it is impossible to change it to `keyword` without deleting the index.
The only other approach I can see that could possibly work is doing a GetMapping call before the first index request. This seems reasonable for the config index, as we know we'll only initialize it once. But for creating a template we would need to check the mapping on every subsequent template request, not just the first one.
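That pre-check could look roughly like the sketch below. This is pseudocode against the OpenSearch admin client, not part of the PR; `expectedMapping` and `updateIndexMapping` are assumed helpers from the surrounding handler, and the exact shape of `response.getMappings()` varies between OpenSearch versions.

```java
// Sketch: check whether the index already has a non-empty mapping before the
// first index request, and only apply the expected mapping if it is missing.
void checkMappingBeforeFirstIndex(String indexName) {
    GetMappingsRequest request = new GetMappingsRequest().indices(indexName);
    client.admin().indices().getMappings(request, ActionListener.wrap(response -> {
        MappingMetadata mapping = response.getMappings().get(indexName);
        if (mapping == null || mapping.sourceAsMap().isEmpty()) {
            // No mapping yet: apply the expected mapping before indexing
            updateIndexMapping(indexName, expectedMapping);
        }
        // Otherwise the mapping exists; safe to index the document
    }, exception -> { /* handle failure, e.g. index not found */ }));
}
```

The cost is an extra round trip on every template request, which is why it only seems attractive for the once-initialized config index.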
Note the performance/latency hit is a one-time cost for the very first template creation.
Other possibilities for performance improvements that are well beyond the scope of a quick bug fix past code freeze:
- we could pre-create these indices on startup, somewhat like ml-commons does with their config index on a cron job
- we could create all the system indices in parallel
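The parallel-creation idea might be shaped like this sketch (pseudocode only, not part of the PR; `SYSTEM_INDEX_NAMES` and `createIndexIfAbsent` are hypothetical helpers):

```java
// Sketch: fan out creation of all system indices at startup and gather the
// results. GroupedActionListener fires its delegate once all calls complete.
List<String> systemIndices = SYSTEM_INDEX_NAMES; // hypothetical constant
GroupedActionListener<CreateIndexResponse> grouped = new GroupedActionListener<>(
    ActionListener.wrap(
        responses -> logger.info("All system indices ready"),
        e -> logger.error("System index creation failed", e)
    ),
    systemIndices.size()
);
for (String index : systemIndices) {
    // hypothetical helper wrapping a CreateIndexRequest per index
    createIndexIfAbsent(index, grouped);
}
```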
> we could pre-create these indices on startup, somewhat like ml commons does with their config index on a cron job

This can be a good solution to this problem. I am still not aligned with using `ActiveShardCount.ALL` even as a quick fix; probably need a second opinion. @amitgalitz @joshpalis thoughts?
For now we can at least separate out the mapping update; that way we will be sure the index is created before we update the mapping.
> For now we can at least separate out update mapping, that way we will be sure that the index is created before we update the mapping.

Wouldn't separating out the mapping update still lead to the same possible problem, if we create the index and then index the document before we update the mapping?

> I am still not aligned with using ActiveShardCount.ALL even as a quick fix.

I think the perf hit shouldn't be too big here because we have 0-1 replicas and a maximum of 5 shards for our system indices, so we are never waiting for hundreds of shards across dozens of nodes.
I just want to confirm the problem again, though. It looks like `ActiveShardCount` waits for 1 primary shard by default; are we saying a replica shard on node 2 may not be initialized yet, but a document gets inserted before shard creation, which leads to the wrong mapping?
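For reference, the knob under discussion contrasts like this (a sketch; the request setup is assumed from the PR diff above):

```java
// Default behavior: the create-index call returns once each primary is active;
// replicas may still be initializing when the first document arrives.
new CreateIndexRequest(indexName).waitForActiveShards(ActiveShardCount.DEFAULT);

// The PR's change: also wait for replicas, so the mapping is present on the
// replica before any document can reach it.
new CreateIndexRequest(indexName).waitForActiveShards(ActiveShardCount.ALL);
```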
> I just want to confirm problem again though

I believe the sequence is:
- one node gets the CreateIndex request with the mapping, updates that node, and returns an acknowledged response
- the workflow sees the index created and moves on to create the first document
- that step checks that the index exists (it does), but not whether the mapping has been updated across all nodes, and then inserts a document
- the inserted document doesn't have a mapping and uses a dynamic mapping of `text`
- the initial index creation fails because it can't update/overwrite the mapping

I may be wrong on the diagnosis here, but I see no other way for this to fail.
@reta @andrross @kaituo @sohami @saratvemulapalli any ideas here?
Description
Ensures index mappings are applied to all shards on system index creation, to avoid mapping type conflicts from a race condition.
Related Issues
Fixes #798
Check List
- Commits are signed per the DCO using `--signoff`
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.