Cache the start key for message pruning and use latest confirmed #1757

Merged: 5 commits into master, Jul 17, 2023

Conversation

PlasmaPower (Collaborator):

While shutting down my node, I noticed it would often be taking a while in the message pruner:

github.com/syndtr/goleveldb/leveldb.(*dbIter).Next(0xc07fdd3200)
        /home/lee/go/pkg/mod/github.com/syndtr/[email protected]/leveldb/db_iter.go:254 +0xc9
github.com/offchainlabs/nitro/arbnode.deleteFromLastPrunedUptoEndKey({0x49e8ba8, 0xc011722f50}, {0x4a18aa0?, 0xc00e07fd40?}, {0x64c8f4a, 0x1, 0x1}, 0x5222c9c)
        /home/lee/programming/go/nitro/arbnode/message_pruner.go:135 +0xe9
github.com/offchainlabs/nitro/arbnode.deleteOldMessageFromDB({0x49e8ba8, 0xc011722f50}, 0x2319edf3f9f2778d?, 0x50b852464f739f16?, {0x4a18aa0, 0xc00e07fd40}, {0x4a18aa0?, 0xc00e07fd40?})
        /home/lee/programming/go/nitro/arbnode/message_pruner.go:115 +0x8a
github.com/offchainlabs/nitro/arbnode.(*MessagePruner).prune(0xc000e9e140, {0x49e8ba8, 0xc011722f50}, 0x0?, {{0xb2, 0x8f, 0x52, 0x9, 0x8b, 0xcd, ...}, ...})
        /home/lee/programming/go/nitro/arbnode/message_pruner.go:111 +0x196

It would appear leveldb iteration isn't as efficient as I expected. Perhaps the fact that we'd just deleted a lot of keys in the db that we'd otherwise be iterating over is slowing it down. Regardless, in this PR I've added an in-memory cache of where we should start pruning, so we don't pay this cost repeatedly. This shouldn't risk us missing a key that was re-inserted, because we only prune confirmed messages, which are at least a week old and can't be reorg'd.

I added a second commit to this PR (I'd recommend reviewing the commits individually) that switches (back?) to using the latest confirmed rollup node instead of the latest staked one for message pruning. We can't prune up to what we're staked on, because we might be challenged about something before that; but we can't be challenged about something already confirmed, so we're safe to prune there.

The cla-bot added the s label (automatically added when the PR creator has signed the CLA) on Jul 12, 2023.
@tsahee (Contributor) left a comment:

Some optional comments; they can be implemented in a different PR.

@@ -434,6 +434,10 @@ func (s *HeaderReader) Client() arbutil.L1Interface {
return s.client
}

func (s *HeaderReader) UseFinalityData() bool {
Contributor:

Unless I'm missing something, it seems like callopts in both l1Validator and Staker are initialized without params. What if HeaderReader instead implemented a "getcallopts" function that returns callopts requesting the appropriate block? This could simplify some things.

Contributor:

Such a function could take a param that optionally ignores the finality treatment, for cases where you always want the latest block.

PlasmaPower (Collaborator, author):

In that case, I think we'll just want to wait until the new challenge protocol replaces a lot of this code.

staker/staker.go (outdated; resolved)
staker/staker.go (outdated):
@@ -280,8 +289,42 @@ func (s *Staker) Initialize(ctx context.Context) error {
return nil
}

func (s *Staker) latestNodeDetailsForUpdate(ctx context.Context, description string, node uint64) (arbutil.MessageIndex, *validator.GoGlobalState, error) {
stakedInfo, err := s.rollup.LookupNode(ctx, node)
Contributor:

You could merge more code by using the fact that LatestConfirmed is identical to LatestStaked when given the zero address. You could even test whether walletOrZero is zero and, in that case, skip checking latestStaked (instead, use what you already have for latestConfirmed).

PlasmaPower (Collaborator, author):

I've unified the functions. Let me know what you think. I spent a while thinking about how to do this and my current approach is the best I came up with, but there might be a slightly cleaner way.

PlasmaPower (Collaborator, author):

One thing I didn't want to do is make LatestStakedNotifier and LatestConfirmedNotifier the same interface, because I could imagine some components wanting both, and I don't want them to be confusable: looking at a component, it should be clear whether it receives the latest staked or the latest confirmed info.

staker/staker.go (outdated; resolved)
@tsahee (Contributor) left a comment:

LGTM

@tsahee merged commit 676bef3 into master on Jul 17, 2023 (7 checks passed).
@PlasmaPower deleted the message-pruner-cache-start branch on July 17, 2023 at 23:44.