fix issue 149 and 150 #151

Closed · wants to merge 1 commit
5 changes: 3 additions & 2 deletions backup.c
@@ -192,7 +192,7 @@ do_backup_database(parray *backup_list, pgBackupOption bkupopt)
uint32 xlogid, xrecoff;

/* find last completed database backup */
prev_backup = catalog_get_last_full_backup(backup_list);
Comment from @mikecaat (Contributor), May 13, 2021:
For #150, I think it's enough to undo the backup.c, catalog.c, and pg_rman.h parts of this change.

Your patches check whether a full backup exists or not, but that can't solve the meaningless-incremental-backup issue anyway. If my understanding is right, #154 needs to be handled to solve that issue. So I think it's better for your patches to focus on solving the differential-backup problem only; #154 will solve the meaningless-incremental-backup issue.

But adding a test case is necessary to prevent this regression in the future. pg_rman supports full backup and incremental backup, not differential backup, so I think we need to check that. Although there may be a better way, I came up with the following test case.

  1. take a full backup
  2. run pgbench and store enough data; for example, execute "pgbench -i -s 50"
  3. take an incremental backup
  4. take another incremental backup
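As a rough sketch, the four steps above could be scripted as follows. The BACKUP_PATH default, the database setup, and the exact pg_rman invocations are assumptions about the local environment, not part of the patch; `size_mb` is a hypothetical helper for comparing the Size column of `pg_rman show`.

```shell
#!/bin/sh
# Sketch of the manual test case above; paths and options are assumptions.
BACKUP_PATH=${BACKUP_PATH:-/var/lib/pg_rman}

# Helper: strip the trailing "MB" from a Size column value so sizes can be compared.
size_mb() { printf '%s\n' "$1" | sed 's/MB$//'; }

run_scenario() {
    pg_rman backup -b full -B "$BACKUP_PATH"          # 1. take a full backup
    pgbench -i -s 50                                  # 2. store enough data (~750MB)
    pg_rman backup -b incremental -B "$BACKUP_PATH"   # 3. first incremental (expected large)
    pg_rman backup -b incremental -B "$BACKUP_PATH"   # 4. second incremental (expected small)
    pg_rman show -B "$BACKUP_PATH"                    # compare the Size column of 3 and 4
    # Note: in a real run, "pg_rman validate" may be needed between backups so
    # that a backup can serve as the base of the next incremental one.
}
```

If the incremental path works, the Size of backup 4 should be far smaller than that of backup 3.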

If incremental backup works, the third backup is big because it stores the data from step 2, but the fourth backup is small.
In my environment, the results are as follows.

a. differential backup
This is the result when using the "REL_13_STABLE" branch.
Because a differential backup is taken, the third backup is big (798MB). I think the reason the third one (798MB) is smaller than the second one (1435MB) is that the WAL archive data is not stored in the third one.

> pg_rman show 
=====================================================================
 StartTime           EndTime              Mode    Size   TLI  Status
=====================================================================
2021-05-13 09:20:50  2021-05-13 09:20:52  INCR   798MB     1  OK
2021-05-13 09:20:33  2021-05-13 09:20:38  INCR  1435MB     1  OK
2021-05-13 09:19:29  2021-05-13 09:19:31  FULL    49MB     1  OK

b. incremental backup
This is the result when using the "REL_13_STABLE" branch with 447c19a reverted.
Since this is a true incremental backup, the third one is very small (33MB).

> pg_rman show 
=====================================================================
 StartTime           EndTime              Mode    Size   TLI  Status
=====================================================================
2021-05-13 09:27:00  2021-05-13 09:27:03  INCR    33MB     1  OK
2021-05-13 09:26:41  2021-05-13 09:26:45  INCR  1435MB     1  OK
2021-05-13 09:26:04  2021-05-13 09:26:06  FULL    49MB     1  OK

Thoughts?

prev_backup = catalog_get_lastest_backup(backup_list);
if (prev_backup == NULL || prev_backup->tli != current.tli)
{
if (current.full_backup_on_error)
@@ -1091,11 +1091,12 @@ confirm_block_size(const char *name, int blcksz)
else if (strcmp(name, "wal_block_size") == 0)
elog(DEBUG, "wal block size is %d", block_size);

PQclear(res);
//PQclear(res);
Contributor comment:

I think it's OK to remove this line.

if ((endp && *endp) || block_size != blcksz)
ereport(ERROR,
(errcode(ERROR_PG_INCOMPATIBLE),
errmsg("%s(%d) is not compatible(%d expected)", name, block_size, blcksz)));
PQclear(res);
Contributor comment:

I think this patch needs to remove the two blank lines before PQclear(res).

}

/*
45 changes: 45 additions & 0 deletions catalog.c
@@ -285,6 +285,51 @@ catalog_get_last_full_backup(parray *backup_list)
return NULL;
}

/*
* Find the last completed database backup from the backup list.
*/
pgBackup *
catalog_get_last_backup(parray *backup_list)
{
int i;
pgBackup *backup = NULL;

for (i = 0; i < parray_num(backup_list); i++)
{
backup = (pgBackup *) parray_get(backup_list, i);

/* Return the latest valid backup. */
if (backup->status == BACKUP_STATUS_OK)
return backup;
}

return NULL;
}

/*
* Find the latest completed database backup from the backup list.
*/
pgBackup *
catalog_get_lastest_backup(parray *backup_list)
{
pgBackup *last_full_backup = NULL;
pgBackup *last_backup = NULL;

last_full_backup = catalog_get_last_full_backup(backup_list);
last_backup = catalog_get_last_backup(backup_list);

if (last_full_backup != NULL)
{
return last_backup;
}

return NULL;
}
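For reference, the selection rule these two helpers implement together (return the newest OK backup, but only when the catalog contains at least one completed full backup) can be sketched in isolation. This is a minimal standalone sketch: `Backup`, `Mode`, `Status`, and `latest_backup` below are hypothetical stand-ins for pg_rman's `pgBackup`/`parray` types, not its real API.

```c
#include <assert.h>
#include <stddef.h>

typedef enum { MODE_FULL, MODE_INCR } Mode;
typedef enum { STATUS_OK, STATUS_ERROR } Status;

typedef struct { Mode mode; Status status; } Backup;

/* list is ordered newest-first, as pg_rman's catalog list is.
 * Returns the newest OK backup if at least one OK full backup exists,
 * otherwise NULL (the caller then falls back to taking a full backup). */
static const Backup *latest_backup(const Backup *list, size_t n)
{
    const Backup *newest_ok = NULL;
    int have_full = 0;
    size_t i;

    for (i = 0; i < n; i++)
    {
        if (list[i].status != STATUS_OK)
            continue;
        if (newest_ok == NULL)
            newest_ok = &list[i];        /* first OK entry is the newest */
        if (list[i].mode == MODE_FULL)
            have_full = 1;               /* a valid base for incrementals exists */
    }
    return have_full ? newest_ok : NULL;
}
```

With a catalog of {INCR OK, FULL OK} this yields the newer incremental backup as the base of the next backup (the fix for the differential-backup behavior), while a catalog with no full backup yields NULL.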

/*
* Find the last completed archived WAL backup from the backup list.
*/
2 changes: 2 additions & 0 deletions pg_rman.h
@@ -289,6 +289,8 @@ extern parray *catalog_get_backup_list(const pgBackupRange *range);
extern pgBackup *catalog_get_last_full_backup(parray *backup_list);
extern pgBackup *catalog_get_last_arclog_backup(parray *backup_list);
extern pgBackup *catalog_get_last_srvlog_backup(parray *backup_list);
extern pgBackup *catalog_get_last_backup(parray *backup_list);
extern pgBackup *catalog_get_lastest_backup(parray *backup_list);

extern int catalog_lock(void);
extern void catalog_unlock(void);