PXB-3269 Reduce the time the Server is locked by xtrabackup #1603

Open
wants to merge 59 commits into base: trunk
Conversation

@aybek (Contributor) commented Aug 27, 2024

No description provided.

altmannmarcelo and others added 30 commits August 23, 2024 16:39
https://jira.percona.com/browse/PXB-3034

Changed the lock-ddl option to be an enum. Possible values are:

ON - same as true
OFF - same as false
REDUCED - enable REDUCED lock mode. The first attempt to copy IBD
files is done without locking, and changed tables (affected by DDL)
are re-copied (if needed) under the backup lock.
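A minimal invocation sketch for the new mode; the user, socket, and target directory below are placeholders, not values from this PR:

```
xtrabackup --backup \
  --lock-ddl=REDUCED \
  --user=root --socket=/tmp/mysql.sock \
  --target-dir=/backups/full
```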
https://jira.percona.com/browse/PXB-3034

Add DDL tracking to xtrabackup. This new object is responsible for
tracking DDLs while the backup is running.
Later those changes are handled at the end of the backup and during
prepare.
https://jira.percona.com/browse/PXB-3034

Adjusted DDL tracking to produce correct files at the end of backup and
handle those files during prepare.
https://jira.percona.com/browse/PXB-3034

Added parallel copy capability to the second phase copy of .ibd files.
https://jira.percona.com/browse/PXB-3034

Added test cases under suite/lockless
Fixed test cases using --lock-ddl=false/true
Adjusted fil_open_for_xtrabackup to tolerate the file being gone and
re-attempt to open the file 10 times.
2. PXB-3220: Allow tables deleted between discovery and file open; save them in the missing tables list
3. PXB-3227: Rename table and then drop table; make sure that the original table was deleted
…ed and loaded to cache;

moving ddl_tracker->add_table to the correct spot
…ume threads

https://perconadev.atlassian.net/browse/PXB-3113

The current debug-sync option in PXB completely suspends the PXB process, and the user can resume it by sending a SIGCONT signal.
This is useful for scenarios where PXB is paused, certain operations are done on the server, and then PXB is resumed to completion.

But many bugs we found during testing involve multiple threads in PXB. The goal of this work is to be able to
pause and resume an individual thread.

Since many tests use the existing debug-sync option, I don't want to disturb these tests. We can convert them to
the new mechanism later.

How to use?
-----------
The new mechanism is used with the option --debug-sync-thread="sync_point_name"

In the code, place a debug_sync_thread("debug_point_1") call to stop a thread at this place.

You can pass the debug_sync point via the command line: --debug-sync-thread="debug_sync_point1"

PXB will create a file named after the debug_sync point in the backup directory, suffixed with a thread number.
Please ensure that no two debug_sync points use the same name (it doesn't make sense to have two sync points with the same name).

```
2024-03-28T15:58:23.310386-00:00 0 [Note] [MY-011825] [Xtrabackup] DEBUG_SYNC_THREAD: sleeping 1sec.  Resume this thread by deleting file /home/satya/WORK/pxb/bld/backup//xb_before_file_copy_4860396430306702017
```
In the test, after activating the sync point, you can use wait_for_debug_sync_thread_point <syncpoint_name>.

Do some stuff now; this thread is sleeping.

Once you are done, and you want the thread to resume, you can do so by deleting the file: `rm backup_dir/sync_point_name_*`
Preferably use resume_debug_sync_thread_point <syncpoint_name> <backup_dir>. It deletes the sync point file and additionally checks that the sync point is
indeed resumed.
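A sketch of this single-thread flow, assuming the test helpers above and the example sync point name debug_sync_point1; the backup directory is a placeholder:

```
# Illustrative only: sync point name and paths are placeholders.
backup_dir=/backups/full
xtrabackup --backup --lock-ddl=REDUCED \
  --debug-sync-thread="debug_sync_point1" \
  --target-dir=$backup_dir &

# Wait until the thread reports it is sleeping on the sync point.
wait_for_debug_sync_thread_point debug_sync_point1

# ... run DDL or other operations on the server here ...

# Resume the thread: deletes the sync point file and verifies the thread resumed.
resume_debug_sync_thread_point debug_sync_point1 $backup_dir
```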

More common/complicated scenario:
----------------------------------
The scenario is to signal another thread to stop after reaching the first sync point. To achieve this, do steps 1 to 3 (above), then:

4. Echo the debug_sync point name into a file named "xb_debug_sync_thread". For example:
   echo "xtrabackup_copy_logfile_pause" > backup/xb_debug_sync_thread

5. Send the SIGUSR1 signal to the PXB process: kill -SIGUSR1 496102

6. Wait for the sync point to be reached: wait_for_debug_sync_thread_point <syncpoint_name>

PXB acknowledges it:
2024-03-28T16:05:07.849926-00:00 0 [Note] [MY-011825] [Xtrabackup] SIGUSR1 received. Reading debug_sync point from xb_debug_sync_thread file in backup directory
2024-03-28T16:05:07.850004-00:00 0 [Note] [MY-011825] [Xtrabackup] DEBUG_SYNC_THREAD: Deleting  file/home/satya/WORK/pxb/bld/backup//xb_debug_sync_thread

and then prints this once the sync point is reached.
2024-03-28T16:05:08.508830-00:00 1 [Note] [MY-011825] [Xtrabackup] DEBUG_SYNC_THREAD: sleeping 1sec.  Resume this thread by deleting file /home/satya/WORK/pxb/bld/backup//xb_xtrabackup_copy_logfile_pause_10389933572825668634

At this point, we have two threads sleeping at two sync points. Either of them can be resumed by deleting the files mentioned in the error log
(or by using resume_debug_sync_thread_point).
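A sketch of this scenario, assuming a PXB process already paused at a first sync point; the PID and backup directory are placeholders, and the sync point name is the one from the log above:

```
# Illustrative only: assumes PXB is already paused at a first sync point.
backup_dir=/backups/full
pxb_pid=496102   # placeholder PID of the running xtrabackup process

# Ask another thread to stop at a second sync point.
echo "xtrabackup_copy_logfile_pause" > $backup_dir/xb_debug_sync_thread
kill -SIGUSR1 $pxb_pid

# Wait until the second thread reports it is sleeping.
wait_for_debug_sync_thread_point xtrabackup_copy_logfile_pause

# Resume it independently once done.
resume_debug_sync_thread_point xtrabackup_copy_logfile_pause $backup_dir
```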
…sql.ibd seems to be corrupted.

https://perconadev.atlassian.net/browse/PXB-3252

Problem:
--------
With lock-ddl=REDUCED, ALTER ENCRYPTION='Y'/'N' can happen during the backup. On general tablespaces, this is done in place,
i.e. the space_id of the tablespace does not change and the pages are encrypted or decrypted.

For file-per-table tablespaces, a new tablespace is created with the encryption key and data is copied from
the old tablespace to the new tablespace.

In xtrabackup, the files are discovered and then they are copied. Between these two operations, the encrypted
tablespace can change. For example, PXB saw that ts1.ibd is encrypted with key1 and loaded it into the cache.

Then the server did ENCRYPTION='N' and then back to ENCRYPTION='Y'; now the tablespace is encrypted with a different key.

Now a PXB copy thread tries to copy this tablespace and cannot decrypt a page. Page 0 is always unencrypted, so the
problem is typically detected at page 1, but it can happen on any page.

Since PXB cannot decrypt the page, it reports corruption and aborts the backup.

Fix:
----
On decryption errors, we track such tablespaces in a separate corrupted list. We also add them to the recopy tables list.
Under lock, these tablespaces are copied again; a .new extension is used.
Then we process the corrupted list under lock and create .corrupt files for the tablespaces from the corrupted list.
For example, if the encrypted tablespace is ts1.ibd, the file will be ts1.ibd.corrupt.

On prepare, we delete the corresponding ts1.ibd if ts1.ibd.corrupt is present. This has to be done before the
*.ibd scan because tablespace loading aborts when processing such half-written tablespaces.
If the .corrupt file is present in the incremental directory, delete the ts1.ibd.meta and ts1.ibd.delta files from the incremental
backup directory.
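A rough shell sketch of the prepare-time cleanup described above; this is illustrative only (not the actual PXB code), and the backup directory is a placeholder:

```
# Illustrative sketch: handle .corrupt markers before the *.ibd scan.
backup_dir=/backups/full

for marker in "$backup_dir"/*/*.corrupt; do
  [ -e "$marker" ] || continue
  base="${marker%.corrupt}"            # e.g. .../test/ts1.ibd.corrupt -> .../test/ts1.ibd
  rm -f "$base"                        # full backup: drop the half-written .ibd copy
  rm -f "$base.meta" "$base.delta"     # incremental directory: drop the matching .meta/.delta instead
done
```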
…_is_index(page_type)

Problem:
--------
Unable to apply a redo log record entry because the page is in the wrong state. It was observed that
the tablespace was created by the incremental backup.

How did this happen?
--------------------

Let's say the tablespace is t1.ibd and is happily present in the full backup.
Before the incremental backup, it gets renamed to t2.ibd.
The incremental backup creates t2.ibd.delta and t2.ibd.meta files in the incremental backup directory.
Later there is a DROP of t2.ibd, so we have a space_id.del file in the incremental backup directory,
and also some redo generated on this table before it is dropped.

During prepare of the incremental backup, when we process a space_id.del file, we check whether the tablespace is found.
Let's say it is 2.del. To process 2.del, we first look up the tablespace with space_id 2.
Since the tablespace name is t1.ibd in the full backup directory, we delete it. Additionally,
we delete the .meta and .delta files, so we try to delete t1.ibd.meta and t1.ibd.delta.
They never existed, so we ignore the errors when deleting them.

But in the incremental backup directory, we still have t2.ibd.delta and t2.ibd.meta. So the incremental backup prepare
creates a tablespace with space_id 2 and applies the delta file changes. This tablespace is wrong
because we are creating a dropped tablespace and we don't have all the changes. The incremental backup
creates this tablespace with 7 all-zero pages. Later, when we apply an MLOG_INSERT to the index page,
we find out the page is NOT in the correct state.

Fix:
----
We have to delete the right incremental files based on space_id. So we build a meta map by scanning
*.meta files, keyed by space_id (found in the meta file).

Later, when we process the space_id.del file, after removing the tablespace with that space_id,
we ask the meta map cache for the .delta and .meta files belonging to the deleted space_id.
By deleting the unnecessary .meta and .delta files, the tablespace is considered dropped by redo
and the corresponding redo entries are not applied.
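A rough sketch of how such a meta map could be built, assuming a text "space_id = N" line inside each .meta file; the directory is a placeholder and this is not the actual PXB implementation:

```
# Illustrative sketch: map space_id -> .meta path by scanning *.meta files.
inc_dir=/backups/inc1   # placeholder incremental backup directory
declare -A meta_map
for meta in "$inc_dir"/*/*.meta; do
  [ -e "$meta" ] || continue
  sid=$(awk -F' = ' '$1 == "space_id" {print $2}' "$meta")
  meta_map[$sid]="$meta"
done
# meta_map[2] now points at t2.ibd.meta; its .delta sibling can be removed
# when 2.del is processed.
```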
… 2 in a file operation

https://perconadev.atlassian.net/browse/PXB-3253

Problem:
--------
Files disappear during backup with --lock-ddl=reduced

Analysis:
---------
PXB opens server files using os_file_create_simple_no_error_handling() via Fil_shard::open_file(),
Fil_shard::get_file_size(), and Datafile::open_read_only(). This API doesn't tolerate file open errors.

This particular bug occurs when the file disappears after get_file_size() in Fil_shard::open_file().
(See the testcase for more details.)

Fix:
----
If lock-ddl is REDUCED and we have not yet entered the copy-under-lock phase,
i.e. is_server_locked() is false, we can tolerate file open errors. So we use
os_file_create() instead of the other variants. Within it, based on the lock-ddl REDUCED mode, we
tolerate file opening errors.
… enabled

Problem:
-------
We cannot allow page tracking with lock-ddl=REDUCED. This is because page tracking gives
us a set of page ids (space_id, page_no). PXB should copy these pages, but while we copy
them, tablespaces can disappear, get renamed, get encrypted, etc.

We will enable it if there is a need or use case for this. For now, we will disable it.

Fix:
----
Disable the combination of --page-tracking and --lock-ddl=REDUCED
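For example, an invocation like the sketch below is now rejected (paths are placeholders; the exact error message is not reproduced here):

```
# Expected to be rejected: page tracking cannot be combined with REDUCED lock mode.
xtrabackup --backup --page-tracking --lock-ddl=REDUCED \
  --target-dir=/backups/full
```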
Problem:
--------
InnoDB assumes directories or files do not disappear. That is true
for the engine because it is in startup mode and no operations are allowed
at this point in time.

Analysis:
---------
With lock-ddl=REDUCED, tables can be dropped concurrently while PXB does the *.ibd scan,
and subdirectories can disappear too.

Fix:
----
Handle missing files/directories in walk_posix(). The scan should continue and skip
these deleted files or directories.
…=REDUCED

Problem:
--------
ddl_tracker_t::backup_file_op assumes the required redo bytes are always present (see the assertion len < 6).
But it may happen that we sometimes receive less redo than that. In such cases, we should return nullptr and let the caller read more redo and retry.

Fix:
----
The fil_tablespace_redo_create()/rename()/delete() variants handle this problem by returning nullptr so that more redo is read and the parse is retried.
Moved the ddl_tracker calls so tracking happens after the validation is done.
… are not thread safe

Problem:
-------
xtrabackup uses multiple threads to scan the *.ibd files. With lock-ddl=REDUCED, we use several STL maps to keep track of missing, dropped, or renamed tables.

Multiple threads are used only when the number of IBD files is more than 8K.

Unsafe calls:
  ddl_tracker->add_missing_table(phy_filename);
  ddl_tracker->add_renamed_table(space_id, path);

These calls from multiple threads operate on std::map/unordered_map and can cause race conditions.

Fix:
----
1. Streamline mutex usage for the entire ddl_tracker class. Currently a mutex is used only for the corrupted STL map.
2. Use space id instead of table id in messages.
3. Rename add_table() since the name is confusing. The actual map members should also be renamed; that will be done later.
…ckups with lock-ddl=REDUCED

Problem:
--------
For backups taken with lock-ddl=REDUCED, prepare failed to complete.

Analysis:
---------
When handling .ren files, the destination file name already exists, and this causes an assertion failure. See the backup log below.

```
102: 2024-02-28T12:15:49.061631-00:00 1 [Note] [MY-011825] [Xtrabackup] DDL tracking : LSN: 73749548 create table ID: 788 Name: test/#sql-1fc79d_13#p#p3.ibd

423: 2024-02-28T12:15:50.121767-00:00 1 [Note] [MY-011825] [Xtrabackup] DDL tracking : LSN: 74312497 rename table ID: 425 From: test/tt_28_p#p#p3.ibd To: test/#sql2-1fc79d-13#p#p3.ibd

870: 2024-02-28T12:15:50.609015-00:00 2 [Note] [MY-011825] [Xtrabackup] Copying ./test/#sql-1fc79d_13#p#p3.ibd to /home/mohit.joshi/dbbackup_28_02_2024/full/test/#sql-1fc79d_13#p#p3.ibd

1337 2024-02-28T12:15:51.183699-00:00 1 [Note] [MY-011825] [Xtrabackup] DDL tracking : LSN: 74967007 rename table ID: 788 From: test/#sql-1fc79d_13#p#p3.ibd To: test/tt_28_p#p#p3.ibd

1491: 2024-02-28T12:15:51.398615-00:00 2 [Note] [MY-011825] [Xtrabackup] Copying ./test/tt_28_p#p#p3.ibd to /home/mohit.joshi/dbbackup_28_02_2024/full/test/tt_28_p#p#p3.ibd

2115:  2024-02-28T12:15:52.209645-00:00 1 [Note] [MY-011825] [Xtrabackup] DDL tracking : LSN: 75267178 delete table ID: 425 Name: test/#sql2-1fc79d-13#p#p3.ibd
```

What's going on here?

Let's say we have partition p3 with space_id 425, and it is being altered. The partitioning algorithm does this:

1. Create a new copy with space_id 788 (#sql1).
2. Rename the existing table 425 to a temporary name (#sql2).
3. We copied the new copy, space_id 788 (#sql1), to the backup.
4. We also copied space_id 425 with the original name (p3).
5. Later we saw a rename file for the copied tablespace 788 (788.ren created with destination name tt_28_p#p#p3.ibd).

The rename file for space_id 425 is skipped because we know that it is dropped, so there is only a .del file. The final state of the backup is:

===

788 in backup with name #sql1
425 in backup with name p3
788.ren file-> 788 From: test/#sql-1fc79d_13#p#p3.ibd To: test/tt_28_p#p#p3.ibd
425.del file

===

Now prepare starts to process the .ren files.
It tries to rename 788 from #sql1 to p3, but p3 already exists.

Fix:
----
We skip the rename and other operations if we know that the tablespace is going to be dropped.
In the above example, we skip the 425.ren file.

So while preparing, we should handle the .del files first; that way the consolidated operations are applied in a consistent order.
Then the .ren files can be processed.
… during incremental backup with lock-ddl=REDUCED

Problem:
--------
A file is deleted between PXB discovering it and opening it, this time at Fil_shard::create_node(),
which insists that the file be found.

Fix:
----
1. Tolerate the file-missing error.
2. Use a different error code to track it in the missing files list.
3. Free the tablespace object on error (otherwise, if the fil_space_t remains in the cache, PXB will try to copy the file).
…lock

Problem:
-------
With lock-ddl=REDUCED and concurrent undo truncations, xtrabackup fails with an
assertion.

Analysis:
---------
After truncation, the tablespace id of an undo tablespace might change, and xtrabackup returns an error instead of crashing. But higher layers of undo discovery do not
tolerate missing files or errors.

Fix:
---
Tolerate file deletions during undo discovery
    1. In the prepare phase, when handling DDL files, some of the data files were not loaded into the cache because of the first-page validation and were therefore left without DDLs applied to them.
       To tolerate this issue we should open and load data files into the cache without validation; to do so we use the fil_tablespace_open_for_recovery() function instead of fil_open_for_xtrabackup().
    2. Remove macro checks for debug_sync_thread() temporarily, for QA testing.
This will be a debug-only option. Fix release build issues.
…delta files for deleted tablespaces in incremental backup directory

Problem:
--------
    1. Take a full backup with --lock-ddl=reduced.

    2. Create table t1(a INT); let's say it has space_id 10.

    3. Start an incremental backup and pause before the backup_start() function (we take the backup lock here).

    4. The incremental backup has copied t1.ibd.meta and t1.ibd.delta by this time.

    5. DROP TABLE t1.

    6. Resume the incremental backup. A 10.del file is created.

    7. Prepare the full backup with --apply-log-only.

    8. Prepare the incremental backup.

    The incremental backup prepare first processes the .del files. Before this, all tablespaces are loaded via the *.ibd scan.

    Since there is no t1.ibd in the backup directory (it is present only as .meta and .delta files in the incremental backup directory), space_id 10 is not in the cache.

    Hence prepare_handle_del_files() will not delete the files related to space_id 10.
    We end up with orphan .ibd or .ibu files. The server ignores orphan .ibd files,
    but if the tablespace is an undo tablespace, orphan .ibu files are not ignored by the server:
    it discovers them via the *.ibu scan. This can lead to assertion failures.
Problem:
-------
1. Take a full backup.
2. Create table t1 before the incremental backup.
3. Take an incremental backup under gdb and pause at backup_start().
4. Now rename t1 to t2.
5. Let it finish.
6. Prepare the full backup.
7. Prepare the incremental backup.

It happens because tables created between the full and incremental backups are copied as *.delta/*.meta files and not as IBD files.
prepare_handle_ren_files() relies on the *.ibd scan, but this cannot work as the *.delta files are not yet loaded into the fil cache.

Fix:
----
Use the meta map generated from the *.meta scan. Use the space_id from space_id.ren to identify the correct .meta and .delta files.
Rename the matched .meta and .delta files to the destination name stored in the .ren file.
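A rough shell sketch of this rename step; it is illustrative only (the real implementation uses the in-memory meta map), and the directory, space_id, and file names are placeholders:

```
# Illustrative sketch: rename the incremental .meta/.delta pair for a renamed tablespace.
inc_dir=/backups/inc1            # placeholder incremental backup directory
ren_file="$inc_dir/10.ren"       # hypothetical: space_id 10 was renamed
dest=$(cat "$ren_file")          # the .ren file contains the desired destination name, e.g. test/t2.ibd

src="$inc_dir/test/t1.ibd"       # hypothetical meta map lookup result for space_id 10

mv "$src.meta"  "$inc_dir/$dest.meta"
mv "$src.delta" "$inc_dir/$dest.delta"
```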
…DUCED

Problem:
--------
1.
Undo tablespaces are not tracked properly. Since undo tablespaces are not opened via
fil_open_for_xtrabackup(), they are not tracked as 'copied'. This leads to wrong
decisions in handle_ddl_operations().

2.
When new undo tablespaces are created,
the server doesn't write an MLOG_FILE_CREATE record, so these are missed by the tracking
system.

Fix:
----
1. Track undo tablespaces that xtrabackup copies without lock (the "before lock" state).
2. After the lock is taken, undo tablespaces are discovered again (the "after lock" state).

With these before and after states, we can now determine which undo files are to be deleted and
which are to be copied. A truncated undo tablespace uses a different tablespace id, so the
old undo file is marked as deleted and the new version of the undo tablespace is copied.

For example, if undo_001.ibu with space_id 10 is truncated, the filename remains the same
but its space_id becomes 11. xtrabackup creates 11.ibu.del for the undo tablespace to be deleted;
then undo_001.ibu.new with space_id 11 is copied under lock.
…ring prepare/or next server startup

Problem:
--------
If there are tables created in the system tablespace and ALTER ADD INDEX/DROP INDEX is executed before the
backup lock is taken, the system tablespace could end up in a corrupted state.

This is because this operation is not redo-logged and we are supposed to re-copy the system tablespace files.
But we don't track the system tablespace, nor reopen and re-copy it. Hence this issue.

Fix:
----
1. Track the system tablespace in the list of tables tracked/backed up.
2. Remove tracking for tables in the system tablespace except for recopy. Other operations can be replayed via the redo log.
3. Reopen the system tablespace files and re-copy them as ibdata1.new / ibdata2.new.
Problem:
-------
During prepare, for backups taken with lock-ddl=ON, we did *.ibd scan before
recovery.

This is allowed only for lock-ddl=REDUCED.

Fix:
----
During prepare, do the *.ibd scan and the processing of .new, .del, .ren, and .corrupt files only if lock-ddl=REDUCED.
Errors were caused by usage of the const iterator cend(). Replaced it with
end().
Aibek Bukabayev and others added 3 commits August 23, 2024 16:39
Made some minor code changes for better readability
Removed the argument `prep_handle_ddls` from all functions that used it; using xtrabackup_prepare instead
it-percona-cla commented Aug 27, 2024

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
3 out of 4 committers have signed the CLA.

✅ altmannmarcelo
✅ aybek
✅ satya-bodapati
❌ Aibek Bukabayev


Aibek Bukabayev seems not to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.

Add error handling for prepare_handle_rename() and prepare_handle_del() operations and some minor code refactoring
Aibek Bukabayev and others added 3 commits August 30, 2024 08:24
1. Move to_string to utils.cc
2. Add checks to ensure the ddl_trackers maps are not updated after
   we reach handle_ddl_operations()
satya-bodapati and others added 12 commits September 10, 2024 11:51
Add #ifdef XTRABACKUP around the code introduced in the innobase codebase.
(Not 100% possible, though, for places that are heavily refactored.)
Setting handle_ddl_ops variable to false initially
It is possible that, for a space_id.ren file whose content is the desired filename,
the destination file name already exists.

If the source and destination of the rename are the same, skip the rename.
Fix keyring test failures by replacing keyring_file with keyring_component
… from 65536

Problem:
--------
A regression introduced by 6c9aa00 caused extra opening of files.

Fix:
----
Remove the extra file open.
With lock-ddl=REDUCED, the following sequence can happen (not possible
with lock-ddl=ON):

    1. First, an IBD with all-zero (invalid) encryption info is found.
    2. So this is added to the invalid encryption ids.
    3. The same IBD is found again (because of concurrent DDL, it is found twice under different names).
    4. This time the IBD has proper encryption info, so a fil_space_t is created.
    5. The encryption info from redo is parsed in fil_tablespace_redo_encryption().
    6. Because the fil_space_t exists, the encryption key is not added to the recovery encryption keys map.
    7. Later, at the end of the backup, we check whether we have found a valid encryption key for the invalid-encryption space_ids.
    8. We haven't (remember, at step 6 we skipped it).
    9. The backup is aborted.

Fix:
----
Store keys in the recv_sys->keys map even if the tablespace already exists.
…es is not same as number of files in datadir

1. Verify ulimit -Sn, ulimit -Hn, and the current --open-files-limit
   parameter, and increase the limit if possible; otherwise throw an error
   early.

2. Despite the limit increase, the backup may still fail because if new
   files appear, we may need more handles than we first
   calculated.
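For illustration, a sketch of the kind of check and override involved; the limit value and paths are placeholders, and the actual verification happens inside xtrabackup:

```
# Inspect the current descriptor limits before the backup.
ulimit -Sn    # soft limit
ulimit -Hn    # hard limit

# Placeholder limit value; an explicit limit can also be passed to xtrabackup.
xtrabackup --backup --lock-ddl=REDUCED \
  --open-files-limit=1000000 \
  --target-dir=/backups/full
```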
…do files

Fixed the path for .del/.ren/.new files.
Removed extra scan of external tablespaces during prepare
storage/innobase/xtrabackup/src/ddl_tracker.cc
db_name = db_name.substr(last_separator_pos + 1);
}

return db_name + '/' + space_name + EXT_NEW;
A reviewer commented:
Use Fil_path::SEPARATOR instead of '/'
