Use tiup for BR, Lightning and Dumpling. (#17392)
dveeden authored May 10, 2024
1 parent d7e73db commit 82751be
Showing 18 changed files with 149 additions and 149 deletions.
2 changes: 1 addition & 1 deletion best-practices/readonly-nodes.md
@@ -127,5 +127,5 @@ spark.tispark.replica_read learner
To read data from read-only nodes when backing up cluster data, you can specify the `--replica-read-label` option in the br command line. Note that when running the following command in shell, you need to use single quotes to wrap the label to prevent `$` from being parsed.
```shell
-br backup full ... --replica-read-label '$mode:readonly'
+tiup br backup full ... --replica-read-label '$mode:readonly'
```
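To see why the single quotes matter, here is a small shell sketch (illustrative only; `mode` is deliberately unset) showing how double quotes would let the shell expand `$mode` before `br` ever sees the label:

```shell
# `$mode` is not meant to be a shell variable; unset it to make the point explicit.
unset mode
single=$(echo '$mode:readonly')   # single quotes: the literal label survives
double=$(echo "$mode:readonly")   # double quotes: the shell expands $mode to ""
echo "$single"
echo "$double"
```

With double quotes the label silently degrades to `:readonly`, which no longer matches the `$mode` store label.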
18 changes: 9 additions & 9 deletions br/backup-and-restore-storages.md
@@ -19,7 +19,7 @@ By default, BR sends a credential to each TiKV node when using Amazon S3, GCS, o
Note that this operation is not applicable to cloud environments. If you use IAM Role authorization, each node has its own role and permissions. In this case, you need to configure `--send-credentials-to-tikv=false` (or `-c=0` in short) to disable sending credentials:

```bash
-./br backup full -c=0 -u pd-service:2379 --storage 's3://bucket-name/prefix'
+tiup br backup full -c=0 -u pd-service:2379 --storage 's3://bucket-name/prefix'
```

If you back up or restore data using the [`BACKUP`](/sql-statements/sql-statement-backup.md) and [`RESTORE`](/sql-statements/sql-statement-restore.md) statements, you can add the `SEND_CREDENTIALS_TO_TIKV = FALSE` option:
@@ -50,14 +50,14 @@ This section provides some URI examples by using `external` as the `host` parame
**Back up snapshot data to Amazon S3**

```shell
-./br backup full -u "${PD_IP}:2379" \
+tiup br backup full -u "${PD_IP}:2379" \
--storage "s3://external/backup-20220915?access-key=${access-key}&secret-access-key=${secret-access-key}"
```

**Restore snapshot data from Amazon S3**

```shell
-./br restore full -u "${PD_IP}:2379" \
+tiup br restore full -u "${PD_IP}:2379" \
--storage "s3://external/backup-20220915?access-key=${access-key}&secret-access-key=${secret-access-key}"
```

@@ -67,14 +67,14 @@ This section provides some URI examples by using `external` as the `host` parame
**Back up snapshot data to GCS**

```shell
-./br backup full --pd "${PD_IP}:2379" \
+tiup br backup full --pd "${PD_IP}:2379" \
--storage "gcs://external/backup-20220915?credentials-file=${credentials-file-path}"
```

**Restore snapshot data from GCS**

```shell
-./br restore full --pd "${PD_IP}:2379" \
+tiup br restore full --pd "${PD_IP}:2379" \
--storage "gcs://external/backup-20220915?credentials-file=${credentials-file-path}"
```

@@ -84,14 +84,14 @@ This section provides some URI examples by using `external` as the `host` parame
**Back up snapshot data to Azure Blob Storage**

```shell
-./br backup full -u "${PD_IP}:2379" \
+tiup br backup full -u "${PD_IP}:2379" \
--storage "azure://external/backup-20220915?account-name=${account-name}&account-key=${account-key}"
```

**Restore the `test` database from snapshot backup data in Azure Blob Storage**

```shell
-./br restore db --db test -u "${PD_IP}:2379" \
+tiup br restore db --db test -u "${PD_IP}:2379" \
--storage "azure://external/backup-20220915?account-name=${account-name}&account-key=${account-key}"
```

@@ -128,7 +128,7 @@ It is recommended that you configure access to S3 using either of the following
Associate an IAM role that can access S3 with EC2 instances where the TiKV and BR nodes run. After the association, BR can directly access the backup directories in S3 without additional settings.

```shell
-br backup full --pd "${PD_IP}:2379" \
+tiup br backup full --pd "${PD_IP}:2379" \
--storage "s3://${host}/${path}"
```

@@ -195,7 +195,7 @@ You can configure the account used to access GCS by specifying the access key. I
- Use BR to back up data to Azure Blob Storage:

```shell
-./br backup full -u "${PD_IP}:2379" \
+tiup br backup full -u "${PD_IP}:2379" \
--storage "azure://external/backup-20220915?account-name=${account-name}"
```

2 changes: 1 addition & 1 deletion br/br-batch-create-table.md
@@ -27,7 +27,7 @@ BR enables the Batch Create Table feature by default, with the default configura
To disable this feature, you can set `--ddl-batch-size` to `1`. See the following example command:

```shell
-br restore full \
+tiup br restore full \
--storage local:///br_data/ --pd "${PD_IP}:2379" --log-file restore.log \
--ddl-batch-size=1
```
2 changes: 1 addition & 1 deletion br/br-checkpoint-backup.md
@@ -35,7 +35,7 @@ To avoid this situation, `br` keeps the `gc-safepoint` for about one hour by def
The following example sets `gcttl` to 15 hours (54000 seconds) to extend the retention period of `gc-safepoint`:

```shell
-br backup full \
+tiup br backup full \
--storage local:///br_data/ --pd "${PD_IP}:2379" \
--gcttl 54000
```
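The `--gcttl` value is given in seconds; as a quick sanity check of the conversion stated above (15 hours):

```shell
# 15 hours expressed in seconds, matching the --gcttl 54000 value above
gcttl=$(( 15 * 3600 ))
echo "$gcttl"
```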
4 changes: 2 additions & 2 deletions br/br-incremental-guide.md
@@ -1,6 +1,6 @@
---
title: TiDB Incremental Backup and Restore Guide
-summary: Incremental data is the differentiated data between starting and end snapshots, along with DDLs. It reduces backup volume and requires setting `tidb_gc_life_time` for incremental backup. Use `br backup` with `--lastbackupts` for incremental backup and ensure all previous data is restored before restoring incremental data.
+summary: Incremental data is the differentiated data between starting and end snapshots, along with DDLs. It reduces backup volume and requires setting `tidb_gc_life_time` for incremental backup. Use `tiup br backup` with `--lastbackupts` for incremental backup and ensure all previous data is restored before restoring incremental data.
---

# TiDB Incremental Backup and Restore Guide
@@ -13,7 +13,7 @@ Incremental data of a TiDB cluster is differentiated data between the starting s
## Back up incremental data

-To back up incremental data, run the `br backup` command with **the last backup timestamp** `--lastbackupts` specified. In this way, br command-line tool automatically backs up incremental data generated between `lastbackupts` and the current time. To get `--lastbackupts`, run the `validate` command. The following is an example:
+To back up incremental data, run the `tiup br backup` command with **the last backup timestamp** `--lastbackupts` specified. In this way, the br command-line tool automatically backs up incremental data generated between `lastbackupts` and the current time. To get `--lastbackupts`, run the `validate` command. The following is an example:

```shell
LAST_BACKUP_TS=`tiup br validate decode --field="end-version" --storage "s3://backup-101/snapshot-202209081330?access-key=${access-key}&secret-access-key=${secret-access-key}"| tail -n1`
```
12 changes: 6 additions & 6 deletions br/br-pitr-guide.md
@@ -19,7 +19,7 @@ Before you back up or restore data using the br command-line tool (hereinafter r
> - The following examples assume that Amazon S3 access keys and secret keys are used to authorize permissions. If IAM roles are used to authorize permissions, you need to set `--send-credentials-to-tikv` to `false`.
> - If other storage systems or authorization methods are used to authorize permissions, adjust the parameter settings according to [Backup Storages](/br/backup-and-restore-storages.md).
-To start a log backup, run `br log start`. A cluster can only run one log backup task each time.
+To start a log backup, run `tiup br log start`. A cluster can only run one log backup task each time.

```shell
tiup br log start --task-name=pitr --pd "${PD_IP}:2379" \
```

@@ -48,7 +48,7 @@ checkpoint[global]: 2022-05-13 11:31:47.2 +0800; gap=4m53s

### Run full backup regularly

-The snapshot backup can be used as a method of full backup. You can run `br backup full` to back up the cluster snapshot to the backup storage according to a fixed schedule (for example, every 2 days).
+The snapshot backup can be used as a method of full backup. You can run `tiup br backup full` to back up the cluster snapshot to the backup storage according to a fixed schedule (for example, every 2 days).

```shell
tiup br backup full --pd "${PD_IP}:2379" \
```

@@ -57,10 +57,10 @@ tiup br backup full --pd "${PD_IP}:2379" \

## Run PITR

-To restore the cluster to any point in time within the backup retention period, you can use `br restore point`. When you run this command, you need to specify the **time point you want to restore**, **the latest snapshot backup data before the time point**, and the **log backup data**. BR will automatically determine and read data needed for the restore, and then restore these data to the specified cluster in order.
+To restore the cluster to any point in time within the backup retention period, you can use `tiup br restore point`. When you run this command, you need to specify the **time point you want to restore**, **the latest snapshot backup data before the time point**, and the **log backup data**. BR will automatically determine and read data needed for the restore, and then restore these data to the specified cluster in order.

```shell
-br restore point --pd "${PD_IP}:2379" \
+tiup br restore point --pd "${PD_IP}:2379" \
--storage='s3://backup-101/logbackup?access-key=${access-key}&secret-access-key=${secret-access-key}' \
--full-backup-storage='s3://backup-101/snapshot-${date}?access-key=${access-key}&secret-access-key=${secret-access-key}' \
--restored-ts '2022-05-15 18:00:00+0800'
```

@@ -80,7 +80,7 @@ Restore KV Files <--------------------------------------------------------------

As described in the [Usage Overview of TiDB Backup and Restore](/br/br-use-overview.md):

-To perform PITR, you need to restore the full backup before the restore point, and the log backup between the full backup point and the restore point. Therefore, for log backups that exceed the backup retention period, you can use `br log truncate` to delete the backup before the specified time point. **It is recommended to only delete the log backup before the full snapshot**.
+To perform PITR, you need to restore the full backup before the restore point, and the log backup between the full backup point and the restore point. Therefore, for log backups that exceed the backup retention period, you can use `tiup br log truncate` to delete the backup before the specified time point. **It is recommended to only delete the log backup before the full snapshot**.

The following steps describe how to clean up backup data that exceeds the backup retention period:

@@ -100,7 +100,7 @@ The following steps describe how to clean up backup data that exceeds the backup
4. Delete snapshot data earlier than the snapshot backup `FULL_BACKUP_TS`:

```shell
-rm -rf s3://backup-101/snapshot-${date}
+aws s3 rm --recursive s3://backup-101/snapshot-${date}
```
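The `${date}` placeholder in the snapshot paths above is not produced by `br` itself; one way to generate it (an assumption for illustration, since any naming scheme works) is a simple date stamp:

```shell
# Hypothetical naming scheme: date-stamp the snapshot prefix, e.g. snapshot-20220915
date=$(date +%Y%m%d)
echo "s3://backup-101/snapshot-${date}"
```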

## Performance capabilities of PITR
