Backing up the blockchain for the next time the database is corrupted (100% guaranteed to happen) #9900
Isanderthul started this conversation in Show and tell
tldr 🥇
sqlite3 /Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite "VACUUM main INTO '/tmp/blockchain_v1_mainnet.sqlite.$(date -u +'%Y%m%d%H%M%S')'"
I would like to eventually make a script to back up the blockchain on a 6-hourly basis (a sketch of such a script is at the end of this post). I realise that because it is SQLite, copying the db file while it is being accessed by the chia process (an online database) will produce a bad backup.
Looking into
https://docs.kanboard.org/en/latest/admin_guide/sqlite.html
it seems sqlite3 is required to guarantee that the backup is consistent while the farmer is using the db. Online wisdom recommends four ways to back up the database; the three sqlite3-based ones are tested below.
Results of experimenting with different types of backups
.backup command
sqlite3 /Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite ".backup /tmp/blockchain_v1_mainnet.sqlite.$(date -u +'%Y%m%d%H%M%S')"
Backing up stops at 1.5 GB because concurrent chia transactions cause locking errors. Not an option.
.dump command
sqlite3 /Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite .dump > "/tmp/blockchain_v1_mainnet.sql.$(date -u +'%Y%m%d%H%M%S')"
and later
sqlite3 /Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite.restore < /tmp/blockchain_v1_mainnet.sql.blockchain_v1_mainnet.sqlite.20220121072032
Dumping is very slow (55 minutes), the output is a list of SQL commands, and the backup needs to be re-imported using sqlite3.
VACUUM INTO
Background reading
main is the database (schema) name used by chia, as can be confirmed with this command
sqlite3 /Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite "PRAGMA database_list;"
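With a stock sqlite3 shell (default .mode list, headers off), the output should look roughly like the line below; the exact layout depends on your .mode and .headers settings.
0|main|/Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite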
Then, to back up with sqlite3:
sqlite3 /Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite "VACUUM main INTO '/tmp/blockchain_v1_mainnet.sqlite.$(date -u +'%Y%m%d%H%M%S')'"
VACUUM INTO is reasonably fast (40 minutes), is intended to be used on an online database, defragments the resulting copy (41 GB down to 36 GB), and the output is a regular SQLite database file.
All in all, it seems to be the winner so far.
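Following up on the 6-hourly goal above, here is a minimal sketch of a backup script built around VACUUM INTO. The database path and the VACUUM/date invocations are the ones used in this post; the backup directory, retention policy, integrity check, script name, and crontab line are assumptions, not something I have tested long-term.

#!/usr/bin/env bash
# chia_db_backup.sh -- hypothetical 6-hourly blockchain DB backup (sketch).
set -euo pipefail

DB="/Users/dev/.chia/mainnet/db/blockchain_v1_mainnet.sqlite"
BACKUP_DIR="/tmp"                                   # assumed destination
STAMP="$(date -u +'%Y%m%d%H%M%S')"
OUT="${BACKUP_DIR}/blockchain_v1_mainnet.sqlite.${STAMP}"

# VACUUM INTO can run against the online database while the farmer is up.
sqlite3 "$DB" "VACUUM main INTO '${OUT}'"

# Sanity-check the copy before trusting it as a backup.
sqlite3 "$OUT" "PRAGMA integrity_check;"

# Keep only the four newest backups (assumed retention policy).
ls -1t "${BACKUP_DIR}"/blockchain_v1_mainnet.sqlite.* | tail -n +5 | while read -r old; do
  rm -f "$old"
done

# Assumed crontab entry to run it every 6 hours:
# 0 */6 * * * /usr/local/bin/chia_db_backup.sh

To restore, stop the chia services, replace blockchain_v1_mainnet.sqlite with the backup file, and start them again; the VACUUM INTO output is already a regular SQLite database, so no re-import is needed.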