Fix additional edge cases in fast sync process #5663
base: master
Conversation
…re sent

Previously, the state of `allowed_requests` would always be reset to the default value even if no new block requests were produced in the end. This could cause an edge case where `peer_block_request()` returns early the next time it is called, even though there are in fact no ongoing block requests.
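For illustration, a minimal sketch of the pattern this commit describes: reset `allowed_requests` only when some requests were actually built. `AllowedRequests`, `ChainSyncSketch`, and the peer-id representation here are simplified stand-ins, not the real `ChainSync` types.

```rust
use std::collections::HashSet;

/// Simplified stand-in for the real bookkeeping of which peers may be asked
/// for blocks; a plain set of peer ids is enough to illustrate the reset logic.
#[derive(Default)]
struct AllowedRequests {
    peers: HashSet<u64>,
}

struct ChainSyncSketch {
    allowed_requests: AllowedRequests,
}

impl ChainSyncSketch {
    /// Build block requests for this round: one per allowed peer that is ready.
    fn block_requests(&mut self, ready_peers: &[u64]) -> Vec<u64> {
        let requests: Vec<u64> = ready_peers
            .iter()
            .copied()
            .filter(|peer| self.allowed_requests.peers.contains(peer))
            .collect();

        // Before the fix, this reset ran unconditionally, so a round that built
        // no requests still cleared `allowed_requests`, and the next
        // `peer_block_request()`-style call returned early even though no
        // requests were actually in flight.
        if !requests.is_empty() {
            self.allowed_requests = AllowedRequests::default();
        }

        requests
    }
}

fn main() {
    let mut sync = ChainSyncSketch {
        allowed_requests: AllowedRequests { peers: [1, 2].into_iter().collect() },
    };

    // A round with no ready peers builds nothing and must not wipe the state.
    assert!(sync.block_requests(&[]).is_empty());

    // The allowed peers are still there for the next round.
    assert_eq!(sync.block_requests(&[1, 2]), vec![1, 2]);
}
```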
…p sync

When starting gap sync, the starting point is set to the last finalized block (`finalized_number`), and `best_queued_number` is updated to this block as well, see https://github.com/paritytech/polkadot-sdk/blob/9079f36/substrate/client/network/sync/src/strategy/chain_sync.rs#L1396-L1399. This results in a situation where the `best_queued_number` in chain sync could be smaller than `client.info().best_number`. Consequently, when `peer_block_request()` is invoked, blocks between `finalized_number` and `client.info().best_number` are redundantly requested.

While re-downloading a few blocks is usually not problematic, it triggers an edge case in gap sync: when these re-downloaded blocks are imported, gap sync checks for completion by comparing `gap.end` with the `imported_block_number`, see https://github.com/paritytech/polkadot-sdk/blob/9079f36/substrate/client/network/sync/src/strategy/chain_sync.rs#L1844-L1845.

For example, if the best block is 124845 and the finalized block is 123838, gap sync starts at 123838 with the range [1, 124845]. Blocks in the range [124839, 124845] will be re-downloaded. Once block 124845 is imported, gap sync will incorrectly consider the sync complete, causing the block history download to fail.

This patch prevents re-downloading duplicate blocks, ensuring that gap sync is not stopped prematurely and block history is downloaded as expected.
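To make the example concrete, here is a small self-contained sketch of the premature-completion check with the numbers above. The variable names mirror the linked code, but the snippet is illustrative, not the actual implementation.

```rust
fn main() {
    // Values from the example in the commit message.
    let finalized_number: u64 = 123_838; // gap sync starts here
    let best_number: u64 = 124_845;      // client.info().best_number
    let gap_end = best_number;           // stands in for `gap.end`

    // A duplicate of an already-imported block near the tip gets re-requested,
    // downloaded again and imported.
    let imported_block_number = best_number;

    // Gap sync decides completion by comparing `gap.end` with the imported
    // number, so the duplicate import makes the gap look finished even though
    // the historic blocks below the finalized block were never downloaded.
    if gap_end <= imported_block_number {
        println!(
            "gap sync wrongly considered complete after importing #{imported_block_number} \
             (finalized: #{finalized_number}, gap end: #{gap_end})"
        );
    }
}
```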
LGTM! The changes are subtle indeed, I'll feel more comfortable once Dmitry has another look at this
Thanks for contributing 🙏
```rust
) -> Option<(Range<NumberFor<B>>, BlockRequest<B>)> {
	if best_num >= peer.best_number {
		// Nothing to download from the peer via normal block requests.
		if best_number == peer.best_number && best_hash == peer.best_hash {
```
What if `peer.best_number > best_number`? The fix won't work and the duplicate block request will be issued?
Good point. The fix is incomplete in this case: the block requests would include both duplicate and new blocks, so gap sync would still be interrupted unexpectedly. I'll need some more time to think about it.
I'm running out of ideas on how to completely avoid duplicate block requests. The challenge lies in how duplicate block processing interferes with the gap sync state.
Why duplicate blocks pollute the gap sync state
- Block request confusion: The main issue in the current chain sync is that multiple types of block requests are made, but the handler can't distinguish between them. As a result, blocks requested by one type of request can interfere with another sync component. For example, when duplicate blocks in the range [`finalized_number`, `best_number`] are requested via `peer_block_request()`, once these blocks are received and enqueued, `gap_sync.best_queued_number` is updated to `best_number`:

  `polkadot-sdk/substrate/client/network/sync/src/strategy/chain_sync.rs`, lines 1295 to 1299 in `03f6e42`:

  ```rust
  if let Some(gap_sync) = &mut self.gap_sync {
      if number > gap_sync.best_queued_number && number <= gap_sync.target {
          gap_sync.best_queued_number = number;
      }
  }
  ```

  This causes a problem because no further blocks will be requested from `peer_gap_block_request()`, as `gap_sync.best_queued_number == peer.best_number`, leading to a failure in downloading the block history.

- Impact on `gap_sync_complete` detection: The detection of whether gap sync is complete can be affected by importing duplicate blocks, as I explained in the second commit message.
The second point can be updated to `let gap_sync_complete = self.gap_sync.is_some() && self.client.info().block_gap.is_none();`, which is more reliable too. However, I still haven't found a solution for the issue with `gap_sync.best_queued_number` outlined in the first point.
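As a rough, runnable sketch of that alternative check (the `GapSync` and `BlockchainInfo` types below are simplified stand-ins for the strategy's gap-sync state and the client backend's info, not the real API):

```rust
// Simplified stand-ins for the pieces involved in the check.
struct GapSync;                    // the strategy's gap-sync state, if any
struct BlockchainInfo {
    block_gap: Option<(u64, u64)>, // the backend's record of a missing block range
}

/// The alternative completion check discussed above: gap sync is complete only
/// when gap sync is active *and* the backend no longer reports a block gap,
/// independent of which block number was just imported.
fn gap_sync_complete(gap_sync: &Option<GapSync>, info: &BlockchainInfo) -> bool {
    gap_sync.is_some() && info.block_gap.is_none()
}

fn main() {
    let gap_sync = Some(GapSync);

    // While the backend still reports a gap (e.g. [1, 123_838]), importing a
    // duplicate block near the tip does not mark gap sync as complete.
    let during = BlockchainInfo { block_gap: Some((1, 123_838)) };
    assert!(!gap_sync_complete(&gap_sync, &during));

    // Only once the backend clears the gap does the check return true.
    let after = BlockchainInfo { block_gap: None };
    assert!(gap_sync_complete(&gap_sync, &after));
}
```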
To ensure gap sync functions as expected, I suggest detecting whether a block response contains duplicate blocks in gap sync and preventing those duplicates from being processed further. This would prevent pollution of the gap sync state and allow block history to be downloaded correctly. Thoughts? @dmitry-markin
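For illustration, a minimal sketch of that suggestion, under the assumption that a duplicate is any response entry the client has already imported; the `BlockData` stand-in and the helper name are hypothetical, not the real validation code:

```rust
// Simplified stand-in for the real block type; only the number matters here.
struct BlockData {
    number: u64,
}

/// One possible duplicate check: drop response entries at or below the client's
/// current best block, so they never reach the gap-sync bookkeeping.
fn strip_already_imported(response: Vec<BlockData>, client_best_number: u64) -> Vec<BlockData> {
    response
        .into_iter()
        .filter(|block| block.number > client_best_number)
        .collect()
}

fn main() {
    // Using the numbers from the earlier example: best block 124_845, with a
    // response that re-delivers already-imported blocks.
    let response = vec![
        BlockData { number: 124_844 },
        BlockData { number: 124_845 },
        BlockData { number: 124_846 },
    ];
    let fresh = strip_already_imported(response, 124_845);
    assert_eq!(fresh.len(), 1); // only #124_846 is genuinely new
}
```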
…uests are sent (#5774) This PR is cherry-picked from #5663 so that I can maintain a smaller polkadot-sdk diff downstream sooner than later. cc @lexnv @dmitry-markin --------- Co-authored-by: Alexandru Vasile <[email protected]> Co-authored-by: Dmitry Markin <[email protected]>
…nto more-fast-sync-edge-case-fixes
During local testing for issue #5406, I encountered additional edge cases that required fixing. These should be the final adjustments before I submit part 2 at #5406 (comment).
While the code changes themselves are relatively minor, the rationale behind them is more complex and involves careful handling of specific sync scenarios. I recommend going through the PR commit by commit. Each commit includes detailed explanations in the messages to provide context for the necessity of these changes. cc @dmitry-markin