
Sending a request too rapidly after node startup makes it crash #2917

Closed
rbauduin opened this issue Aug 27, 2024 · 3 comments · Fixed by #2983

Comments

@rbauduin

I'm running my tests with a Makefile, which starts the local moonbeam node with docker if it is not yet running.
However, a query (to deploy the contract under test) sent to the freshly started node regularly, though not always, results in a crash with this log:

====================

Version: 0.39.1-dcb449dafaf

   0: sp_panic_handler::set::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::option::expect_failed
   7: <moonbeam_service::PendingConsensusDataProvider<RuntimeApi> as fc_rpc::eth::pending::ConsensusDataProvider<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::OpaqueExtrinsic>>>::create_digest
   8: fc_rpc::eth::pending::<impl fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC>>::pending_runtime_api::{{closure}}
   9: <fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC> as fc_rpc_core::eth::EthApiServer>::estimate_gas::{{closure}}
  10: jsonrpsee_core::server::rpc_module::RpcModule<Context>::register_async_method::{{closure}}::{{closure}}
  11: <sc_rpc_server::middleware::Middleware<S> as jsonrpsee_server::middleware::rpc::RpcServiceT>::call::{{closure}}
  12: jsonrpsee_server::server::handle_rpc_call::{{closure}}
  13: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
  14: <jsonrpsee_server::middleware::http::proxy_get_request::ProxyGetRequest<S> as tower_service::Service<http::request::Request<hyper::body::body::Body>>>::call::{{closure}}
  15: <tower::util::either::Either<A,B> as core::future::future::Future>::poll
  16: sc_rpc_server::start_server::{{closure}}::{{closure}}::{{closure}}::{{closure}}::{{closure}}
  17: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
  18: <hyper::server::server::new_svc::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
  19: tokio::runtime::task::raw::poll
  20: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  21: tokio::runtime::scheduler::multi_thread::worker::run
  22: tokio::runtime::task::raw::poll
  23: std::sys_common::backtrace::__rust_begin_short_backtrace
  24: core::ops::function::FnOnce::call_once{{vtable.shim}}
  25: std::sys::pal::unix::thread::Thread::new::thread_start
  26: <unknown>
  27: clone


Thread 'tokio-runtime-worker' panicked at 'Nimbus pre-runtime digest should be present', /moonbeam/node/service/src/lib.rs:1818

I had this happen with a query sent after the log message Development Service Ready, but before the first log line of this kind:

 💤 Idle (0 peers), best: #0 (0xa2f3…54fa), finalized #0 (0xa2f3…54fa), ⬇ 0 ⬆ 0

My workaround is to wait for the first occurrence of the Idle log message.
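
Concretely, the wait can be a small polling loop on the container's logs; something along these lines (the container name moonbeam-dev is only an example) is what I mean:

# Block until the node has printed its first "💤 Idle" line before sending any request
until docker logs moonbeam-dev 2>&1 | grep -q "Idle ("; do
  sleep 1
done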

I'm happy to help reproduce it with our code if you want.

@noandrea
Collaborator

Thanks for reporting, we'll look into it.

@gullomagico

gullomagico commented Sep 5, 2024

Hi, I get a similar error even when I wait for the "Idle" log.
This happens when I try to deploy a contract with Hardhat.
I'll share the logs and the docker run command:

docker run -d -p 9944:9944 -p 30333:30333 -p 30334:30334 --name dev-A moonbeamfoundation/moonbeam:v0.39.1 --dev --rpc-external --discover-local --log libp2p=trace,debug --sealing instant

2024-09-05 09:07:54.886 DEBUG tokio-runtime-worker sync: Propagating transactions    
2024-09-05 09:07:56.701  INFO tokio-runtime-worker substrate: 💤 Idle (0 peers), best: #0 (0xa2f3…54fa), finalized #0 (0xa2f3…54fa), ⬇ 0 ⬆ 0    
2024-09-05 09:07:57.786 DEBUG tokio-runtime-worker sync: Propagating transactions    
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker jsonrpsee-server: Accepting new connection 1/100
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker wasm_overrides: Neither WASM override nor substitute available for block 0xa2f3…54fa, using onchain code    
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Local node trie cache dropped: empty
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Local value trie cache dropped: local hit rate = 0% [0/2], shared hit rate = 100% [2/2]
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Updated the shared node cache: 0 accesses, 0 new values, 0/1003 evicted (length = 331, inline size=406032/25985040, heap size=2077682/27702051)
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Updated the shared value cache: 2 accesses, 0 new values, 0/821 evicted (length = 271, known_storage_keys = 271, inline size=61968/7929872, heap size=15404/5491901)
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker wasm_overrides: Neither WASM override nor substitute available for block 0xa2f3…54fa, using onchain code    
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker wasm-heap: allocator dropped: AllocationStats { bytes_allocated: 32, bytes_allocated_peak: 40, bytes_allocated_sum: 56, address_space_used: 56 }    
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Local node trie cache dropped: empty
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Local value trie cache dropped: local hit rate = 0% [0/3], shared hit rate = 100% [3/3]
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Updated the shared node cache: 0 accesses, 0 new values, 0/1003 evicted (length = 331, inline size=406032/25985040, heap size=2077682/27702051)
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Updated the shared value cache: 3 accesses, 0 new values, 0/821 evicted (length = 271, known_storage_keys = 271, inline size=61968/7929872, heap size=15404/5491901)
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker rpc_metrics: [http] eth_chainId call took 509 μs    

====================

Version: 0.39.1-dcb449dafaf

   0: sp_panic_handler::set::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::option::expect_failed
   7: <moonbeam_service::PendingConsensusDataProvider<RuntimeApi> as fc_rpc::eth::pending::ConsensusDataProvider<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::OpaqueExtrinsic>>>::create_digest
   8: fc_rpc::eth::pending::<impl fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC>>::pending_runtime_api::{{closure}}
   9: <fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC> as fc_rpc_core::eth::EthApiServer>::estimate_gas::{{closure}}
  10: jsonrpsee_core::server::rpc_module::RpcModule<Context>::register_async_method::{{closure}}::{{closure}}
  11: <sc_rpc_server::middleware::Middleware<S> as jsonrpsee_server::middleware::rpc::RpcServiceT>::call::{{closure}}
  12: jsonrpsee_server::server::handle_rpc_call::{{closure}}
  13: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
  14: <jsonrpsee_server::middleware::http::proxy_get_request::ProxyGetRequest<S> as tower_service::Service<http::request::Request<hyper::body::body::Body>>>::call::{{closure}}
  15: <tower::util::either::Either<A,B> as core::future::future::Future>::poll
  16: sc_rpc_server::start_server::{{closure}}::{{closure}}::{{closure}}::{{closure}}::{{closure}}
  17: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
  18: <hyper::server::server::new_svc::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
  19: tokio::runtime::task::raw::poll
  20: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  21: tokio::runtime::scheduler::multi_thread::worker::run
  22: tokio::runtime::task::raw::poll
  23: std::sys_common::backtrace::__rust_begin_short_backtrace
  24: core::ops::function::FnOnce::call_once{{vtable.shim}}
  25: std::sys::pal::unix::thread::Thread::new::thread_start
  26: <unknown>
  27: <unknown>


Thread 'tokio-runtime-worker' panicked at 'Nimbus pre-runtime digest should be present', /moonbeam/node/service/src/lib.rs:1818

This is a bug. Please report it at:

        https://github.com/moonbeam-foundation/moonbeam/issues/new

2024-09-05 09:07:58.796 DEBUG tokio-runtime-worker jsonrpsee-server: Accepting new connection 1/100
2024-09-05 09:07:58.797 DEBUG tokio-runtime-worker rpc_metrics: [http] eth_blockNumber call took 50 μs    
2024-09-05 09:07:58.809 DEBUG tokio-runtime-worker jsonrpsee-server: Accepting new connection 1/100
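
Judging from the stack trace, the panic is reached through eth_estimateGas's pending-state path, so it can presumably be reproduced without Hardhat by a single estimate call right after startup (the call object below is a minimal illustrative one):

curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_estimateGas","params":[{"to":"0x0000000000000000000000000000000000000000"}]}' \
  http://127.0.0.1:9944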

@RomarQ
Contributor

RomarQ commented Sep 27, 2024

Thanks for the reports @rbauduin @gullomagico

PR #2983 will fix the issue. You will still need to wait for the node to be ready, as you are currently doing, but the node will no longer crash; instead, the error will be propagated and included in the RPC response.
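
In practice, that means a too-early call comes back as a normal JSON-RPC error that clients can retry. A minimal readiness probe along these lines (endpoint and payload are illustrative, and the exact error message may differ) would then be enough:

# Retry until eth_estimateGas no longer returns an "error" member in its JSON-RPC response
until curl -sf -H 'Content-Type: application/json' \
      -d '{"jsonrpc":"2.0","id":1,"method":"eth_estimateGas","params":[{}]}' \
      http://127.0.0.1:9944 | grep -vq '"error"'; do
  sleep 1
done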
