
Commit

update
Signed-off-by: Aolin <[email protected]>
Oreoxmt committed Sep 30, 2024
1 parent 5f42df3 commit 371d4c4
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions br/br-checkpoint-restore.md
@@ -67,18 +67,18 @@ Checkpoint restore operations are divided into two parts: snapshot restore and PITR restore.

### Snapshot restore

During the first restore, `br` creates a `__TiDB_BR_Temporary_Snapshot_Restore_Checkpoint` database in the target cluster to store checkpoint data, and records the upstream cluster ID and BackupTS of the backup data.
During the initial restore, `br` creates a `__TiDB_BR_Temporary_Snapshot_Restore_Checkpoint` database in the target cluster. This database records checkpoint data, the upstream cluster ID, and the BackupTS of the backup data.

If the restore fails, you can retry it using the same command, and `br` will automatically read the checkpoint information from the `__TiDB_BR_Temporary_Snapshot_Restore_Checkpoint` database and resume from the last restore point.
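For example, a failed snapshot restore can be resumed by re-running the original `tiup br restore full` command unchanged. A minimal sketch; the PD address and backup storage path are placeholders:

```shell
# Retry the failed snapshot restore with exactly the same command.
# "${PD_ADDR}" and the S3 path are placeholders; reuse the values from the
# original restore so that br can match the existing checkpoint data.
tiup br restore full \
    --pd "${PD_ADDR}:2379" \
    --storage "s3://backup-bucket/snapshot-backup"
# br reads __TiDB_BR_Temporary_Snapshot_Restore_Checkpoint in the target
# cluster and resumes from the last recorded restore point.
```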

When the restore fails, if you try to restore to the same cluster using different backup data, `br` will report an error if the current upstream cluster ID or BackupTS is different from the checkpoint record. If the restore cluster has been cleaned, you can manually delete the `__TiDB_BR_Temporary_Snapshot_Restore_Checkpoint` database and retry with a different backup.
If the restore fails and you try to restore backup data with different checkpoint information to the same cluster, `br` reports an error. It indicates that the current upstream cluster ID or BackupTS is different from the checkpoint record. If the restore cluster has been cleaned, you can manually delete the `__TiDB_BR_Temporary_Snapshot_Restore_Checkpoint` database and retry with a different backup.
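If the target cluster has been cleaned and a different backup will be restored, the leftover checkpoint database can be dropped manually. A minimal sketch, assuming a default TiDB port and a root connection (host, port, and user are placeholders):

```shell
# Drop the snapshot restore checkpoint database so that a retry with a
# different backup no longer conflicts with the old checkpoint record.
mysql -h "${TIDB_HOST}" -P 4000 -u root \
    -e "DROP DATABASE __TiDB_BR_Temporary_Snapshot_Restore_Checkpoint;"
```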

### PITR restore

[PITR (Point-in-time recovery)](/br/br-pitr-guide.md) consists of snapshot restore and log restore phases.

During the first restore, `br` first enters the snapshot restore phase, which follows the same process as the preceding snapshot restore. When `br` enters the snapshot restore stage, it records the upstream cluster ID and BackupTS (the start time point `start-ts` of log restore) of the backup data in the checkpoint. If restore fails during this phase, you cannot adjust the snapshot backup path (equal to `start-ts` of log restore) when continuing checkpoint restore.
During the initial restore, `br` first enters the snapshot restore phase, which follows the same process as the preceding [snapshot restore](#snapshot-restore-1). When `br` enters the snapshot restore stage, it records the upstream cluster ID and BackupTS (the start time point `start-ts` of log restore) of the backup data in the checkpoint. If restore fails during this phase, you cannot adjust the snapshot backup path (equal to `start-ts` of log restore) when continuing checkpoint restore.

When entering the log restore phase during the first restore, `br` creates a `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database in the target cluster to store checkpoint data, and records the upstream cluster ID and the restore time range (`start-ts` and `restored-ts`). If restore fails during this phase, you need to specify the same log backup path and `restored-ts` as recorded in the checkpoint. Otherwise, `br` will report an error and prompt that the current specified restore time range or upstream cluster ID is different from the checkpoint record. If the restore cluster has been cleaned, you can manually delete the `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database and retry with a different backup.
When entering the log restore phase during the initial restore, `br` creates a `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database in the target cluster. This database records checkpoint data, the upstream cluster ID, and the restore time range (`start-ts` and `restored-ts`). If restore fails during this phase, you need to specify the same log backup path and `restored-ts` as recorded in the checkpoint when retrying. Otherwise, `br` will report an error and prompt that the current specified restore time range or upstream cluster ID is different from the checkpoint record. If the restore cluster has been cleaned, you can manually delete the `__TiDB_BR_Temporary_Log_Restore_Checkpoint` database and retry with a different backup.
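As a sketch of the retry, the `tiup br restore point` command must reuse the same log backup path and `restored-ts` recorded in the checkpoint; the PD address, storage paths, and timestamp below are placeholders:

```shell
# Retry the failed PITR restore. --storage (the log backup path) and
# --restored-ts must match the values recorded in
# __TiDB_BR_Temporary_Log_Restore_Checkpoint.
tiup br restore point \
    --pd "${PD_ADDR}:2379" \
    --storage "s3://backup-bucket/log-backup" \
    --full-backup-storage "s3://backup-bucket/snapshot-backup" \
    --restored-ts "2024-09-30 12:00:00+0800"

# If the cluster has been cleaned and a different backup will be restored,
# drop the leftover log restore checkpoint database first.
mysql -h "${TIDB_HOST}" -P 4000 -u root \
    -e "DROP DATABASE __TiDB_BR_Temporary_Log_Restore_Checkpoint;"
```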

Before restoring database and table data during the first log restore phase, `br` constructs a mapping of upstream and downstream cluster database and table IDs at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. Deleting data from `mysql.tidb_pitr_id_map` might lead to inconsistent PITR restore data.
Before entering the log restore phase during the initial restore, `br` constructs a mapping of upstream and downstream cluster database and table IDs at the `restored-ts` time point. This mapping is persisted in the system table `mysql.tidb_pitr_id_map` to prevent duplicate allocation of database and table IDs. Deleting data from `mysql.tidb_pitr_id_map` might lead to inconsistent PITR restore data.
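For illustration, the persisted mapping can be read (but should not be modified) through the system table; the connection parameters are placeholders:

```shell
# Check that the PITR ID mapping has been persisted. Do not delete rows from
# this table: removing them can make the restored data inconsistent.
mysql -h "${TIDB_HOST}" -P 4000 -u root \
    -e "SELECT COUNT(*) FROM mysql.tidb_pitr_id_map;"
```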
