I have set up a local node for testing, following the deployment instructions in the official "running_local.md" document. During testing I ran into several issues (my English is not very good, so I'm using translation software; I hope the meaning comes across clearly).
I am using go-ethereum to send transactions and recording the send time, marked as "time1" in the graph. I am also listening for new blocks on L2 and taking the block timestamp of the block that contains the transaction as "time2". I noticed that "time2" is a few milliseconds earlier than "time1", and I would like to understand why.
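Here is roughly how I record "time1" and "time2". This is only a sketch, not my exact code: it assumes a WebSocket RPC endpoint for the L2 node and an already-signed transaction `signedTx`; "time2" is read from the block header timestamp.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// measure records time1 when the transaction is handed to the node and time2
// as the timestamp of the L2 block that includes it.
func measure(client *ethclient.Client, signedTx *types.Transaction) {
	ctx := context.Background()

	// Subscribe to new L2 block headers before sending, so no block is missed.
	heads := make(chan *types.Header)
	sub, err := client.SubscribeNewHead(ctx, heads)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	// time1: local wall-clock time when the transaction is submitted.
	time1 := time.Now()
	if err := client.SendTransaction(ctx, signedTx); err != nil {
		log.Fatal(err)
	}

	for header := range heads {
		block, err := client.BlockByHash(ctx, header.Hash())
		if err != nil {
			log.Fatal(err)
		}
		for _, tx := range block.Transactions() {
			if tx.Hash() == signedTx.Hash() {
				// time2: the timestamp from the L2 block header.
				time2 := time.Unix(int64(block.Time()), 0)
				fmt.Printf("time1=%v time2=%v diff=%v\n", time1, time2, time2.Sub(time1))
				return
			}
		}
	}
}
```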
I would like to calculate the latency on L2. Is it reasonable to use "time2 - time1", or should I use "time3 - time1"? (Here "time3" is the time at which the sequencer submits the user transactions as a batch to the L1 contract.)
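For "time3" I watch L1 for the sequencer's batch submissions, roughly like the sketch below. This is only an illustration of what I mean: `batchInbox` is a placeholder for whichever inbox address or contract my local deployment actually uses, and I take the L1 block timestamp of the batch transaction as "time3".

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// watchBatches reports the L1 timestamp of every transaction sent to the
// (placeholder) batch inbox address, which I treat as "time3".
func watchBatches(l1 *ethclient.Client, batchInbox common.Address) {
	ctx := context.Background()

	heads := make(chan *types.Header)
	sub, err := l1.SubscribeNewHead(ctx, heads)
	if err != nil {
		log.Fatal(err)
	}
	defer sub.Unsubscribe()

	for header := range heads {
		block, err := l1.BlockByHash(ctx, header.Hash())
		if err != nil {
			log.Fatal(err)
		}
		for _, tx := range block.Transactions() {
			// A transaction addressed to the batch inbox is treated as a batch
			// submission; time3 is that L1 block's timestamp.
			if tx.To() != nil && *tx.To() == batchInbox {
				time3 := time.Unix(int64(block.Time()), 0)
				fmt.Printf("batch tx %s, time3=%v\n", tx.Hash().Hex(), time3)
			}
		}
	}
}
```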
When I send many transactions, even though I make sure they are sent in order with the assigned nonces, I still get "nonce too high" errors, and these errors become more frequent as the number of transactions increases. Is this mainly caused by network latency? (Roughly how I send the transactions is sketched below.)
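For reference, this is approximately how the transactions are sent: the starting nonce is fetched once and then incremented locally, and each transaction is signed and submitted strictly in nonce order. The key, recipient, and chain ID here are placeholders for my local test setup.

```go
package main

import (
	"context"
	"log"
	"math/big"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
)

// sendMany submits `count` simple value transfers with sequential nonces.
func sendMany(client *ethclient.Client, hexKey string, to common.Address, chainID *big.Int, count int) {
	ctx := context.Background()

	key, err := crypto.HexToECDSA(hexKey)
	if err != nil {
		log.Fatal(err)
	}
	from := crypto.PubkeyToAddress(key.PublicKey)

	// Fetch the starting nonce once, then assign nonces locally in sequence.
	nonce, err := client.PendingNonceAt(ctx, from)
	if err != nil {
		log.Fatal(err)
	}

	gasPrice, err := client.SuggestGasPrice(ctx)
	if err != nil {
		log.Fatal(err)
	}

	signer := types.LatestSignerForChainID(chainID)
	for i := 0; i < count; i++ {
		tx := types.NewTransaction(nonce+uint64(i), to, big.NewInt(1), 21000, gasPrice, nil)
		signedTx, err := types.SignTx(tx, signer, key)
		if err != nil {
			log.Fatal(err)
		}
		// Transactions go out strictly in nonce order, yet "nonce too high"
		// errors still show up as count grows.
		if err := client.SendTransaction(ctx, signedTx); err != nil {
			log.Printf("tx %d (nonce %d): %v", i, nonce+uint64(i), err)
		}
	}
}
```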