Removing bias from shard allocation.
We have noticed that after running for a long time in airmail, the distribution of shard counts among ingesters seems uniform, but one or two indexers are getting most of the throughput.

This could be caused by an indirect bias in the allocation of shards to
ingesters. For instance, in airmail, most indexes are very small, but a
few of them are much larger. Small indexes have 1 shard with a very low
throughput. Large indexes, on the other hand, have several shards with
typically >2MB of throughput.

Larger indexes are also more subject to scaling up/down, since other
indexes tend to stick to having 1 shard (we don't scale down to 0).

This PR tries to remove any possible bias when assigning / removing
shards during:
- scale up
- scale down
- rebalance.

Scale up
---------------------------

This is the most important change/bias: in the presence of a tie, we
were picking the ingester with the lowest ingester id.

Also, on replication, the logic for picking a follower was broken
(for a given leader, we were always picking the same follower).

The `max_num_shards_to_allocate_per_node` computation was also wrong
(integer division instead of a ceiling division), which is probably minor.
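
To illustrate the tie-break, here is a minimal sketch of picking uniformly at random among the least-loaded ingesters, together with a ceiling division for the per-node cap. It is not the code in this commit: the names (`pick_ingester`, `IngesterId`, `ceil_div`) are made up, and it assumes the `rand` crate.

```rust
use std::collections::HashMap;

use rand::seq::SliceRandom;

type IngesterId = String;

/// Ceiling division: a per-node allocation cap should round up,
/// not truncate.
fn ceil_div(numerator: usize, denominator: usize) -> usize {
    (numerator + denominator - 1) / denominator
}

/// Picks the ingester with the fewest shards, breaking ties at random
/// instead of always favoring the lowest ingester id.
fn pick_ingester(shard_counts: &HashMap<IngesterId, usize>) -> Option<IngesterId> {
    let min_count = *shard_counts.values().min()?;
    let candidates: Vec<&IngesterId> = shard_counts
        .iter()
        .filter(|(_, &count)| count == min_count)
        .map(|(ingester_id, _)| ingester_id)
        .collect();
    candidates
        .choose(&mut rand::thread_rng())
        .map(|ingester_id| (*ingester_id).clone())
}

fn main() {
    let shard_counts: HashMap<IngesterId, usize> = [
        ("ingester-1".to_string(), 2),
        ("ingester-2".to_string(), 2),
        ("ingester-3".to_string(), 3),
    ]
    .into_iter()
    .collect();
    // With 7 shards over 3 nodes, the cap is ceil(7 / 3) = 3, not 2.
    assert_eq!(ceil_div(7, 3), 3);
    // Either ingester-1 or ingester-2 may be returned.
    println!("{:?}", pick_ingester(&shard_counts));
}
```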

Scale down
----------------------------

The code was relying on the long-term ingestion rate, and ties were
then broken by the HashMap iterator order. That order is quite random,
so this was probably not a problem.
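
An explicit way to break such ties, instead of relying on HashMap iteration order, is to shuffle the candidates and then stable-sort them by long-term ingestion rate, so equal rates keep a random relative order. The sketch below is hypothetical (`ShardCandidate` and `pick_shard_to_remove` are made-up names), not the code in this commit, and it also assumes the `rand` crate.

```rust
use rand::seq::SliceRandom;

#[derive(Debug, Clone)]
struct ShardCandidate {
    shard_id: u64,
    /// Long-term ingestion rate, in bytes per second.
    long_term_rate_bytes_per_sec: u64,
}

/// Returns the shard to scale down: the one with the lowest long-term
/// ingestion rate, with ties broken at random.
fn pick_shard_to_remove(mut candidates: Vec<ShardCandidate>) -> Option<ShardCandidate> {
    let mut rng = rand::thread_rng();
    // Randomize first...
    candidates.shuffle(&mut rng);
    // ...then stable-sort: equal rates keep their (now random) relative order.
    candidates.sort_by_key(|candidate| candidate.long_term_rate_bytes_per_sec);
    candidates.into_iter().next()
}

fn main() {
    let candidates = vec![
        ShardCandidate { shard_id: 1, long_term_rate_bytes_per_sec: 1_000 },
        ShardCandidate { shard_id: 2, long_term_rate_bytes_per_sec: 1_000 },
        ShardCandidate { shard_id: 3, long_term_rate_bytes_per_sec: 2_500_000 },
    ];
    // Shard 1 or shard 2 is picked with equal probability.
    println!("{:?}", pick_shard_to_remove(candidates));
}
```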

Rebalance
----------------------------

The arithmetic used to compute the target number of shards was slightly
inaccurate.

The shards that are rebalanced are now picked at random (instead of
picking the oldest shards in the model).
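
As an illustration of both points, the hypothetical sketch below computes a per-ingester target by rounding up (one reasonable choice, not necessarily the exact arithmetic in this commit) and picks the surplus shards to move uniformly at random. `target_shards_per_ingester` and `pick_shards_to_move` are made-up names, and the `rand` crate is assumed.

```rust
use rand::seq::SliceRandom;

/// Target number of shards per ingester, rounded up so the sum of the
/// per-node targets covers every shard.
fn target_shards_per_ingester(num_shards: usize, num_ingesters: usize) -> usize {
    num_shards.div_ceil(num_ingesters)
}

/// Picks which shards to move off an ingester that holds more than
/// `target` shards, choosing the surplus shards uniformly at random
/// rather than always moving the oldest ones.
fn pick_shards_to_move(mut shard_ids: Vec<u64>, target: usize) -> Vec<u64> {
    let surplus = shard_ids.len().saturating_sub(target);
    shard_ids.shuffle(&mut rand::thread_rng());
    shard_ids.truncate(surplus);
    shard_ids
}

fn main() {
    // 10 shards over 4 ingesters: the target is ceil(10 / 4) = 3, not 2.
    assert_eq!(target_shards_per_ingester(10, 4), 3);
    // An ingester holding 5 shards with a target of 3 gives up 2 random shards.
    let to_move = pick_shards_to_move(vec![11, 12, 13, 14, 15], 3);
    assert_eq!(to_move.len(), 2);
    println!("{to_move:?}");
}
```
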
fulmicoton committed Jul 19, 2024
1 parent feb0fe7 commit 475cb8b
Showing 2 changed files with 327 additions and 122 deletions.
