[validators] confirmation of correct flags and procedures #981
@sameh-farouk any news on this? Do you need more info from @coesensbert? Thanks!
The procedures for adding a new validator remain unchanged, but the referenced documentation is inaccurate. The author_rotateKeys RPC call is a simpler alternative to generating the keys with subkey generate and inserting them into the node's keystore with key insert; executing both sequentially is incorrect. Adjustments are also needed where the documentation says the sudo module is required: the Council module should be used instead. I will review the docs here and test the flow.
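For reference, author_rotateKeys is a single JSON-RPC request against the node's own RPC endpoint. A minimal sketch, assuming the node exposes its RPC on localhost (the port is an assumption; it may be 9933 or 9944 depending on the node version):

```shell
# Ask the node to generate a fresh set of session keys in its own keystore.
# The call returns the concatenated public keys as one hex string, which is
# then registered on-chain with session.setKeys from the controller account.
# Host and port are assumptions; adjust to your node's RPC address.
curl -s -H "Content-Type: application/json" \
  -d '{"id": 1, "jsonrpc": "2.0", "method": "author_rotateKeys", "params": []}' \
  http://localhost:9944
```

Because the node generates and stores the keys itself, no separate subkey generate or key insert step is needed afterwards.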
Here are my comments regarding the mentioned flags:
Great, once the flow is tested and the docs are updated I can continue finishing the validator for the guardian stack. Thanks for the flag suggestions, resolved: threefoldtech/grid_deployment@e4de06b
We use the tfchain public RPC snapshot data to speed up syncing a validator with the chain. This snapshot is generated by a node with these flags: https://github.com/threefoldtech/grid_deployment/blob/development/grid-snapshots/devnet/docker-compose.yml#L10-L45
No, they won't be compatible.
Successfully synced a devnet node from 0 with the new pruning flags. It took about 17h on an i5-12500 with NVMe SSDs. The stored data size is around 13 GB, while a public RPC node holds 110 GB, so we can lower the storage requirements by a lot. We need to do the same for mainnet to get the size there. While having snapshots seems obvious, this would mean 4 new nodes to create the snapshots and more maintenance for ops. Since validators will only be added on mainnet, does it make sense to only have a snapshot creator for mainnet in this case?
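For context, the storage reduction comes from running the node in pruned rather than archive mode. A sketch of such an invocation, where the binary name, chain spec, and base path are assumptions for illustration, and exact flag names vary between Substrate releases (older releases use a single --pruning flag instead of the split flags below):

```shell
# Illustrative only: keeping state and block bodies for a bounded window
# instead of archiving everything is what shrinks the database from
# ~110 GB (archive RPC node) to ~13 GB (pruned validator).
tfchain \
  --chain devnet \
  --base-path /storage/tfchain \
  --state-pruning 1000 \
  --blocks-pruning 1000
```

A pruned node can still validate, but it cannot serve historical state queries, which is why the public RPC nodes stay in archive mode.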
Nice work! What is the bandwidth of the machine? Curious to know. Is the bottleneck at the network or the disk speed?
Excellent question. I agree with you; we can go with a mainnet-only snapshot for now. It can be discussed with the team in the following days. I will let you know if I have more info on my end.
As you already know, snapshots are primarily used to speed up syncing new nodes when necessary, whether for adding new validators or migrating them to another machine. From a development perspective, I have no advice here; it's better to check with team leads on the trade-offs you want to make, since time can be more precious in some instances. But I have a question: why do we need an extra node for snapshot creation? Couldn't we just utilize one of the boot nodes for that as well?
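For illustration, creating a snapshot on an existing boot node would amount to archiving the node's database directory while the service is stopped. A sketch with hypothetical service and path names (the actual deployment's names will differ):

```shell
# Hypothetical service name and database path: adjust to the real deployment.
# The node is stopped first so the database files are in a consistent state
# when archived; new nodes then extract this archive before first start.
systemctl stop tfchain-node
tar -czf "tfchain-snapshot-$(date +%F).tar.gz" \
  -C /storage/tfchain/chains db
systemctl start tfchain-node
```

The trade-off of reusing a boot node this way is the downtime window during archiving, which is why a dedicated snapshot node is sometimes preferred.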
@sabrinasadik is checking this with @coesensbert in a couple of days (after September 19). This issue will then be updated.
Update:
It has been a long time since we added or removed validators on tfchain, for any net, so our docs and procedures are probably outdated. That was definitely the case for validator keys, but it is resolved now here: https://docs.grid.tf/threefold/itenv_threefold_main/src/branch/master/grid_operations/grid_tfchain#re-inserting-re-setting-session-aura-gran-keys-to-same-as-controller-account
These are some of our old docs on adding/removing validators:
https://docs.grid.tf/threefold/itenv_threefold_main/src/branch/master/kubernetes_clusters/hagrid-prod2/applications/tfchainmainnet/Adding-validators.md
https://docs.grid.tf/threefold/itenv_threefold_main/src/branch/master/kubernetes_clusters/hagrid-prod2/applications/tfchainmainnet/Removing-validators.md
This is related to:
Can dev confirm: