diff --git a/.gitattributes b/.gitattributes index 39d192a..a295ec3 100644 --- a/.gitattributes +++ b/.gitattributes @@ -2,6 +2,7 @@ * text=auto # Declare files that will always have LF line endings on checkout. +backup.* text eol=lf *.sh text eol=lf *.md text eol=lf *.json text eol=lf diff --git a/README.md b/README.md index 1d851a9..d9332ab 100644 --- a/README.md +++ b/README.md @@ -1,6 +1,6 @@ --- title: Backup Container -description: A simple containerized backup solution for backing up one or more postgres databases to a secondary location.. +description: A simple containerized backup solution for backing up one or more postgres or mongo databases to a secondary location. author: WadeBarnes resourceType: Components personas: @@ -11,15 +11,24 @@ labels: - backup - backups - postgres + - mongo - database --- [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](LICENSE) # Backup Container -[Backup Container](https://github.com/BCDevOps/backup-container) is a simple containerized backup solution for backing up one or more postgres databases to a secondary location. _Code and documentation was originally pulled from the [HETS Project](https://github.com/bcgov/hets)_ +[Backup Container](https://github.com/BCDevOps/backup-container) is a simple containerized backup solution for backing up one or more postgres or mongo databases to a secondary location. _Code and documentation was originally pulled from the [HETS Project](https://github.com/bcgov/hets)_ -## Postgres Backups in OpenShift -This project provides you with a starting point for integrating backups into your OpenShift projects. The scripts and templates provided in the [openshift](./openshift) directory are compatible with the [openshift-developer-tools](https://github.com/BCDevOps/openshift-developer-tools) scripts. They help you create an OpenShift deployment or cronjob called `backup` in your projects that runs backups on a Postgres database(s) within the project environment. 
You only need to integrate the scripts and templates into your project(s), the builds can be done with this repository as the source. +# Backup Container Options +You can run the Backup Container for postgres and mongo databases separately or in a mixed environment. +For a mixed environment: +1) You MUST use the recommended `backup.conf` configuration. +2) Within the `backup.conf`, you MUST specify the `DatabaseType` for each listed database. +3) You will need to create two builds and two deployment configs. One for a postgres backup container and the other for a mongo backup container. +4) Mount the same `backup.conf` file (ConfigMap) to each deployed container. + +## Backups in OpenShift +This project provides you with a starting point for integrating backups into your OpenShift projects. The scripts and templates provided in the [openshift](./openshift) directory are compatible with the [openshift-developer-tools](https://github.com/BCDevOps/openshift-developer-tools) scripts. They help you create an OpenShift deployment or cronjob called `backup` in your projects that runs backups on databases within the project environment. You only need to integrate the scripts and templates into your project(s); the builds can be done with this repository as the source. Following are the instructions for running the backups and a restore. @@ -53,24 +62,14 @@ NFS backed storage is covered by the following backup and retention policies: - 90 days ### Restore/Verification Storage Volume -The default storage class for the restore/verification volume is `gluster-file-db`. The supplied deployment template will auto-provision this volume for you with it is published. Refer to the *Storage Performance* section for performance considerations. +The default storage class for the restore/verification volume is `netapp-file-standard`. The supplied deployment template will auto-provision this volume for you when it is published.
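A shared `backup.conf` for the mixed environment described above might look like the following sketch (the service and database names are hypothetical; the `postgres=`/`mongo=` prefix syntax and cron entries are documented in [config/backup.conf](config/backup.conf)):

```
postgres=patient-db:5432/patient
mongo=wallet-db:27017/wallet

0 1 * * * default ./backup.sh -s
0 4 * * * default ./backup.sh -s -v all
```

Both the postgres and mongo backup containers mount this same file; each acts only on the entries matching its own database type.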
Refer to the *Storage Performance* section for performance considerations. This volume should be large enough to host your largest database. Set the size by updating/overriding the `VERIFICATION_VOLUME_SIZE` value within the template. ### Storage Performance -The performance of `gluster-block` for restore/verification is far superior to that of `gluster-file-db`, however it should only be used in cases where the time it takes to verify a backup begins to encroach on the over-all timing and verification cycle. You want the verification(s) to complete before another backup and verification cycle begins and you want a bit of idle time between the end of one cycle and the beginning of another in case things take a little longer now and again. - -Restore/Verification timing for a 9GB database: -- `gluster-block`: ~15 minutes -- `gluster-file-db`: ~1 hour - -Restore/Verification timing for a 42GB database: -- `gluster-block`: ~1 hour -- `gluster-file-db`: ~3 hours +The performance of `netapp-block-standard` for restore/verification is far superior to that of `netapp-file-standard`; however, it should only be used when the time it takes to verify a backup begins to encroach on the overall backup and verification cycle. You want the verification(s) to complete before the next backup and verification cycle begins, with a bit of idle time between cycles in case things occasionally take longer. -Restore/Verification timing for a 192GB database: -- `gluster-block`: not tested -- `gluster-file-db`: ~14.5 hours +*There are currently no performance stats for the `netapp` storage types.* ## Deployment / Configuration Together, the scripts and templates provided in the [openshift](./openshift) directory will automatically deploy the `backup` app as described below.
The [backup-deploy.overrides.sh](./openshift/backup-deploy.overrides.sh) script generates the deployment configuration necessary for the [backup.conf](config/backup.conf) file to be mounted as a ConfigMap by the `backup` container. @@ -81,19 +80,19 @@ The following environment variables are defaults used by the `backup` app. | Name | Default (if not set) | Purpose | | ---- | ------- | ------- | -| BACKUP_STRATEGY | daily | To control the backup strategy used for backups. This is explained more below. | +| BACKUP_STRATEGY | rolling | To control the backup strategy used for backups. This is explained more below. | | BACKUP_DIR | /backups/ | The directory under which backups will be stored. The deployment configuration mounts the persistent volume claim to this location when first deployed. | -| NUM_BACKUPS | 31 | For backward compatibility this value is used with the daily backup strategy to set the number of backups to retain before pruning. | +| NUM_BACKUPS | 31 | Used for backward compatibility only; this value is used with the daily backup strategy to set the number of backups to retain before pruning. | | DAILY_BACKUPS | 6 | When using the rolling backup strategy this value is used to determine the number of daily (Mon-Sat) backups to retain before pruning. | | WEEKLY_BACKUPS | 4 | When using the rolling backup strategy this value is used to determine the number of weekly (Sun) backups to retain before pruning. | | MONTHLY_BACKUPS | 1 | When using the rolling backup strategy this value is used to determine the number of monthly (last day of the month) backups to retain before pruning. | | BACKUP_PERIOD | 1d | Only used for Legacy Mode. Ignored when running in Cron Mode. The schedule on which to run the backups. The value is used by a sleep command and can be defined in d, h, m, or s. | -| DATABASE_SERVICE_NAME | postgresql | The name of the service/host for the *default* database target.
| +| DATABASE_SERVICE_NAME | postgresql | Used for backward compatibility only. The name of the service/host for the *default* database target. | | DATABASE_USER_KEY_NAME | database-user | The database user key name stored in database deployment resources specified by DATABASE_DEPLOYMENT_NAME. | | DATABASE_PASSWORD_KEY_NAME | database-password | The database password key name stored in database deployment resources specified by DATABASE_DEPLOYMENT_NAME. | -| POSTGRESQL_DATABASE | my_postgres_db | The name of the *default* database target; the name of the database you want to backup. | -| POSTGRESQL_USER | *wired to a secret* | The username for the database(s) hosted by the `postgresql` Postgres server. The deployment configuration makes the assumption you have your database credentials stored in secrets (which you should), and the key for the username is `database-user`. The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. | -| POSTGRESQL_PASSWORD | *wired to a secret* | The password for the database(s) hosted by the `postgresql` Postgres server. The deployment configuration makes the assumption you have your database credentials stored in secrets (which you should), and the key for the username is `database-password`. The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. | +| DATABASE_NAME | my_postgres_db | Used for backward compatibility only. The name of the *default* database target; the name of the database you want to backup. | +| DATABASE_USER | *wired to a secret* | The username for the database(s) hosted by the database server. The deployment configuration makes the assumption you have your database credentials stored in secrets (which you should), and the key for the username is `database-user`. 
The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. | +| DATABASE_PASSWORD | *wired to a secret* | The password for the database(s) hosted by the database server. The deployment configuration makes the assumption you have your database credentials stored in secrets (which you should), and the key for the password is `database-password`. The name of the secret must be provided as the `DATABASE_DEPLOYMENT_NAME` parameter to the deployment configuration template. | | FTP_URL | | The FTP server URL. If not specified, the FTP backup feature is disabled. The default value in the deployment configuration is an empty value - not specified. | | FTP_USER | *wired to a secret* | The username for the FTP server. The deployment configuration creates a secret with the name specified in the FTP_SECRET_KEY parameter (default: `ftp-secret`). The key for the username is `ftp-user` and the value is an empty value by default. | | FTP_PASSWORD | *wired to a secret* | The password for the FTP server. The deployment configuration creates a secret with the name specified in the FTP_SECRET_KEY parameter (default: `ftp-secret`). The key for the password is `ftp-password` and the value is an empty value by default. | @@ -101,9 +100,11 @@ The following environment variables are defaults used by the `backup` app. | ENVIRONMENT_FRIENDLY_NAME | | A friendly (human readable) name of the environment. This variable is used by the webhook integration to identify the environment from which the backup notifications originate. The default value in the deployment configuration is an empty value - not specified. | | ENVIRONMENT_NAME | | A name or ID of the environment. This variable is used by the webhook integration to identify the environment from which the backup notifications originate. The default value in the deployment configuration is an empty value - not specified.
| -Using this default configuration you can easily back up a single postgres database, however you can extend the configuration and use the `backup.conf` file to list a number of databases for backup and even set a cron schedule for the backups. +### backup.conf + +Using this default configuration you can easily back up a single postgres database; however, we recommend you extend the configuration and use the `backup.conf` file to list a number of databases for backup and even set a cron schedule for the backups. -When using the `backup.conf` file the following environment variables are ignored, since you list all of your `host`/`database` pairs in the file; `DATABASE_SERVICE_NAME`, `POSTGRESQL_DATABASE`. To provide the credentials needed for the listed databases you extend the deployment configuration to include `hostname_USER` and `hostname_PASSWORD` credential pairs which are wired to the appropriate secrets (where hostname matches the hostname/servicename, in all caps and underscores, of the database). For example, if you are backing up a database named `wallet-db/my_wallet`, you would have to extend the deployment configuration to include a `WALLET_DB_USER` and `WALLET_DB_PASSWORD` credential pair, wired to the appropriate secrets, to access the database(s) on the `wallet-db` server. You may notice the default configuration is already wired for the host/service name `postgresql`, so you're already covered if all your databases are on a server of that name. +When using the `backup.conf` file the following environment variables are ignored, since you list all of your `host`/`database` pairs in the file: `DATABASE_SERVICE_NAME`, `DATABASE_NAME`. To provide the credentials needed for the listed databases you extend the deployment configuration to include `hostname_USER` and `hostname_PASSWORD` credential pairs which are wired to the appropriate secrets (where hostname matches the hostname/servicename, in all caps and underscores, of the database).
For example, if you are backing up a database named `wallet-db/my_wallet`, you would have to extend the deployment configuration to include a `WALLET_DB_USER` and `WALLET_DB_PASSWORD` credential pair, wired to the appropriate secrets, to access the database(s) on the `wallet-db` server. ### Cron Mode @@ -260,8 +261,35 @@ Sample Error Message: For information on how to set up a webhook in Rocket.Chat refer to [Incoming WebHook Scripting](https://rocket.chat/docs/administrator-guides/integrations/). The **Webhook URL** created during this process is the URL you use for `WEBHOOK_URL` to enable the Webhook integration feature. +## Database Plugin Support + +The backup container uses a plugin architecture to perform the database-specific operations needed to support various database types. + +The plugins are loaded dynamically based on the container type. By default, the `backup.null.plugin` will be loaded when the container type is not recognized. + +To add support for a new database type: +1) Update the `getContainerType` function in [backup.container.utils](./docker/backup.container.utils) to detect the new type of database. +2) Using the existing plugins as reference, implement the database-specific scripts for the new database type. +3) Using the existing docker files as reference, create a new one to build the new container type. +4) Update the build and deployment templates and their documentation as needed. +5) Update the project documentation as needed. +6) Test, test, test. +7) Submit a PR. + +Plugin Examples: +- [backup.postgres.plugin](./docker/backup.postgres.plugin) + - Postgres backup implementation. + +- [backup.mongo.plugin](./docker/backup.mongo.plugin) + - Mongo backup implementation. + +- [backup.null.plugin](./docker/backup.null.plugin) + - Sample/Template backup implementation that simply outputs log messages for the various operations.
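The host-to-credential naming convention described earlier (service name upper-cased, dashes replaced with underscores, suffixed with `_USER`/`_PASSWORD`) can be sketched as follows. This is an illustrative helper, not the container's actual implementation; the function name is hypothetical:

```shell
#!/bin/bash
# Sketch of the hostname -> credential variable naming convention:
# upper-case the host/service name, replace dashes with underscores,
# then append _USER and _PASSWORD.
hostToCredentialNames() {
  local prefix
  prefix=$(echo "${1}" | tr '[:lower:]' '[:upper:]' | tr '-' '_')
  echo "${prefix}_USER ${prefix}_PASSWORD"
}

hostToCredentialNames "wallet-db"    # -> WALLET_DB_USER WALLET_DB_PASSWORD
hostToCredentialNames "postgresql"   # -> POSTGRESQL_USER POSTGRESQL_PASSWORD
```

These are the variable names the deployment configuration wires to your secrets for each host listed in `backup.conf`.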
+ ## Backup +*The following sections describe (some) postgres-specific implementation details; however, the steps are generally the same between database implementations.* + The purpose of the backup app is to do automatic backups. Deploy the Backup app to do daily backups. Viewing the Logs for the Backup App will show a record of backups that have been completed. The Backup app performs the following sequence of operations: @@ -326,6 +354,10 @@ Following are more detailed steps to perform a restore of a backup. Done! +## Tips and Tricks + +Please refer to the [Tips and Tricks](./docs/TipsAndTricks.md) document for solutions to known issues. + ## Getting Help or Reporting an Issue To report bugs/issues/feature requests, please file an [issue](../../issues). diff --git a/config/backup.conf b/config/backup.conf index 2693020..a3b4388 100644 --- a/config/backup.conf +++ b/config/backup.conf @@ -7,10 +7,20 @@ # The entries must be in one of the following forms: # - <Hostname>/<DatabaseName> # - <Hostname>:<Port>/<DatabaseName> +# - <DatabaseType>=<Hostname>/<DatabaseName> +# - <DatabaseType>=<Hostname>:<Port>/<DatabaseName> +# <DatabaseType> can be postgres or mongo +# <DatabaseType> MUST be specified when you are sharing a +# single backup.conf file between postgres and mongo +# backup containers. If you do not specify <DatabaseType> +# the listed databases are assumed to be valid for the +# backup container in which the configuration is mounted.
# # Examples: -# - postgresql/my_database -# - postgresql:5432/my_database +# - postgres=postgresql/my_database +# - postgres=postgresql:5432/my_database +# - mongo=mongodb/my_database +# - mongo=mongodb:27017/my_database # ----------------------------------------------------------- # Cron Scheduling: # ----------------------------------------------------------- @@ -29,10 +39,10 @@ # ----------------------------------------------------------- # Full Example: # ----------------------------------------------------------- -# postgresql:5432/TheOrgBook_Database -# wallet-db:5432/tob_holder -# wallet-db/tob_issuer +# postgres=postgresql:5432/TheOrgBook_Database +# mongo=mender-mongodb:27017/useradm +# postgres=wallet-db/tob_issuer # # 0 1 * * * default ./backup.sh -s # 0 4 * * * default ./backup.sh -s -v all -# ============================================================ +# ============================================================ \ No newline at end of file diff --git a/docker/Dockerfile b/docker/Dockerfile index 8666e1a..4cc88a7 100644 --- a/docker/Dockerfile +++ b/docker/Dockerfile @@ -7,8 +7,9 @@ ENV TZ=PST8PDT # Set the workdir to be root WORKDIR / -# Load the backup script into the container (must be executable). -COPY backup.sh / +# Load the backup scripts into the container (must be executable). 
+COPY backup.* / + COPY webhook-template.json / # ======================================================================================================== @@ -24,6 +25,7 @@ ARG GOCROND_VERSION=0.6.3 ADD https://github.com/$SOURCE_REPO/go-crond/releases/download/$GOCROND_VERSION/go-crond-64-linux /usr/bin/go-crond USER root + RUN chmod ug+x /usr/bin/go-crond # ======================================================================================================== diff --git a/docker/Dockerfile_Mongo b/docker/Dockerfile_Mongo new file mode 100644 index 0000000..4187c17 --- /dev/null +++ b/docker/Dockerfile_Mongo @@ -0,0 +1,42 @@ +# This image provides a mongo installation from which to run backups +FROM registry.access.redhat.com/rhscl/mongodb-36-rhel7 + +# Change timezone to PST for convenience +ENV TZ=PST8PDT + +# Set the workdir to be root +WORKDIR / + +# Load the backup scripts into the container (must be executable). +COPY backup.* / + +COPY webhook-template.json / + +# ======================================================================================================== +# Install go-crond (from https://github.com/BCDevOps/go-crond) +# - Adds some additional logging enhancements on top of the upstream project; +# https://github.com/webdevops/go-crond +# +# CRON Jobs in OpenShift: +# - https://blog.danman.eu/cron-jobs-in-openshift/ +# -------------------------------------------------------------------------------------------------------- +ARG SOURCE_REPO=BCDevOps +ARG GOCROND_VERSION=0.6.3 +ADD https://github.com/$SOURCE_REPO/go-crond/releases/download/$GOCROND_VERSION/go-crond-64-linux /usr/bin/go-crond + +USER root + +RUN chmod ug+x /usr/bin/go-crond +# ======================================================================================================== + +# ======================================================================================================== +# Perform operations that require root privileges here ...
+# -------------------------------------------------------------------------------------------------------- +RUN echo $TZ > /etc/timezone +# ======================================================================================================== + +# Important - Reset to the base image's user account. +USER 26 + +# Set the default CMD. +CMD sh /backup.sh \ No newline at end of file diff --git a/docker/backup.config.utils b/docker/backup.config.utils new file mode 100644 index 0000000..b933846 --- /dev/null +++ b/docker/backup.config.utils @@ -0,0 +1,485 @@ +#!/bin/bash +# ================================================================================================================= +# Configuration Utility Functions: +# ----------------------------------------------------------------------------------------------------------------- +function getDatabaseName(){ + ( + _databaseSpec=${1} + _databaseName=$(echo ${_databaseSpec} | sed -n 's~^.*/\(.*$\)~\1~p') + echo "${_databaseName}" + ) +} + +function getDatabaseType(){ + ( + _databaseSpec=${1} + _databaseType=$(echo ${_databaseSpec} | sed -n 's~^\(.*\)=.*$~\1~p' | tr '[:upper:]' '[:lower:]') + echo "${_databaseType}" + ) +} + +function getPort(){ + ( + local OPTIND + local localhost + unset localhost + while getopts :l FLAG; do + case $FLAG in + l ) localhost=1 ;; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + if [ -z "${localhost}" ]; then + portsed="s~^.*:\([[:digit:]]\+\)/.*$~\1~p" + _port=$(echo ${_databaseSpec} | sed -n "${portsed}") + fi + + echo "${_port}" + ) +} + +function getHostname(){ + ( + local OPTIND + local localhost + unset localhost + while getopts :l FLAG; do + case $FLAG in + l ) localhost=1 ;; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + if [ -z "${localhost}" ]; then + _hostname=$(echo ${_databaseSpec} | sed 's~^.\+[=]~~;s~[:/].*~~') + else + _hostname="127.0.0.1" + fi + + echo "${_hostname}" + ) +} + +function getHostPrefix(){ + ( + _hostname=${1} + 
_hostPrefix=$(echo ${_hostname} | tr '[:lower:]' '[:upper:]' | sed "s~-~_~g") + echo "${_hostPrefix}" + ) +} + +function getHostUserParam(){ + ( + _hostname=${1} + _hostUser=$(getHostPrefix ${_hostname})_USER + echo "${_hostUser}" + ) +} + +function getHostPasswordParam(){ + ( + _hostname=${1} + _hostPassword=$(getHostPrefix ${_hostname})_PASSWORD + echo "${_hostPassword}" + ) +} + +function readConf(){ + ( + local OPTIND + local readCron + local quiet + local all + unset readCron + unset quiet + while getopts cqa FLAG; do + case $FLAG in + c ) readCron=1 ;; + q ) quiet=1 ;; + a ) all=1 ;; + esac + done + shift $((OPTIND-1)) + + # Remove all comments and any blank lines + filters="/^[[:blank:]]*$/d;/^[[:blank:]]*#/d;/#.*/d;" + + if [ -z "${readCron}" ]; then + # Read in the database config ... + # - Remove any lines that do not match the expected database spec format(s) + # - [<DatabaseType>=]<Hostname>/<DatabaseName> + # - [<DatabaseType>=]<Hostname>:<Port>/<DatabaseName> + filters+="/^[a-zA-Z0-9=_/-]*\(:[0-9]*\)\?\/[a-zA-Z0-9_/-]*$/!d;" + if [ -z "${all}" ]; then + # Remove any database configs that are not for the current container type + # Database configs that do not define the database type are assumed to be for the current container type + filters+="/\(^[a-zA-Z0-9_/-]*\(:[0-9]*\)\?\/[a-zA-Z0-9_/-]*$\)\|\(^${CONTAINER_TYPE}=\)/!d;" + fi + else + # Read in the cron config ... + # - Remove any lines that MATCH expected database spec format(s), + # leaving, what should be, cron tabs. + filters+="/^[a-zA-Z0-9=_/-]*\(:[0-9]*\)\?\/[a-zA-Z0-9_/-]*$/d;" + fi + + if [ -f ${BACKUP_CONF} ]; then + if [ -z "${quiet}" ]; then + echo "Reading backup config from ${BACKUP_CONF} ..." >&2 + fi + _value=$(sed "${filters}" ${BACKUP_CONF}) + fi + + if [ -z "${_value}" ] && [ -z "${readCron}" ]; then + # Backward compatibility + if [ -z "${quiet}" ]; then + echo "Reading backup config from environment variables ..."
>&2 + fi + _value="${DATABASE_SERVICE_NAME}${DEFAULT_PORT:+:${DEFAULT_PORT}}${POSTGRESQL_DATABASE:+/${POSTGRESQL_DATABASE}}" + fi + + echo "${_value}" + ) +} + +function getNumBackupsToRetain(){ + ( + _count=0 + _backupType=${1:-$(getBackupType)} + + case "${_backupType}" in + daily) + _count=${DAILY_BACKUPS} + if (( ${_count} <= 0 )) && (( ${WEEKLY_BACKUPS} <= 0 )) && (( ${MONTHLY_BACKUPS} <= 0 )); then + _count=1 + fi + ;; + weekly) + _count=${WEEKLY_BACKUPS} + ;; + monthly) + _count=${MONTHLY_BACKUPS} + ;; + *) + _count=${NUM_BACKUPS} + ;; + esac + + echo "${_count}" + ) +} + +function getUsername(){ + ( + _databaseSpec=${1} + _hostname=$(getHostname ${_databaseSpec}) + _paramName=$(getHostUserParam ${_hostname}) + # Backward compatibility ... + _username="${!_paramName:-${DATABASE_USER}}" + echo ${_username} + ) +} + +function getPassword(){ + ( + _databaseSpec=${1} + _hostname=$(getHostname ${_databaseSpec}) + _paramName=$(getHostPasswordParam ${_hostname}) + # Backward compatibility ... + _password="${!_paramName:-${DATABASE_PASSWORD}}" + echo ${_password} + ) +} + +function isLastDayOfMonth(){ + ( + _date=${1:-$(date)} + _day=$(date -d "${_date}" +%-d) + _month=$(date -d "${_date}" +%-m) + _lastDayOfMonth=$(date -d "${_month}/1 + 1 month - 1 day" "+%-d") + + if (( ${_day} == ${_lastDayOfMonth} )); then + return 0 + else + return 1 + fi + ) +} + +function isLastDayOfWeek(){ + ( + # We're calling Sunday the last day of the week in this case.
+ _date=${1:-$(date)} + _dayOfWeek=$(date -d "${_date}" +%u) + + if (( ${_dayOfWeek} == 7 )); then + return 0 + else + return 1 + fi + ) +} + +function getBackupType(){ + ( + _backupType="" + if rollingStrategy; then + if isLastDayOfMonth && (( "${MONTHLY_BACKUPS}" > 0 )); then + _backupType="monthly" + elif isLastDayOfWeek; then + _backupType="weekly" + else + _backupType="daily" + fi + fi + echo "${_backupType}" + ) +} + +function rollingStrategy(){ + if [[ "${BACKUP_STRATEGY}" == "rolling" ]] && (( "${WEEKLY_BACKUPS}" >= 0 )) && (( "${MONTHLY_BACKUPS}" >= 0 )); then + return 0 + else + return 1 + fi +} + +function dailyStrategy(){ + if [[ "${BACKUP_STRATEGY}" == "daily" ]] || (( "${WEEKLY_BACKUPS}" < 0 )); then + return 0 + else + return 1 + fi +} + +function listSettings(){ + _backupDirectory=${1:-$(createBackupFolder -g)} + _databaseList=${2:-$(readConf -q)} + _yellow='\e[33m' + _nc='\e[0m' # No Color + _notConfigured="${_yellow}not configured${_nc}" + + echo -e \\n"Settings:" + _mode=$(getMode 2>/dev/null) + echo -e "- Run mode: ${_mode}"\\n + + if rollingStrategy; then + echo "- Backup strategy: rolling" + fi + if dailyStrategy; then + echo "- Backup strategy: daily" + fi + if ! rollingStrategy && ! 
dailyStrategy; then + echoYellow "- Backup strategy: Unknown backup strategy; ${BACKUP_STRATEGY}" + _configurationError=1 + fi + backupType=$(getBackupType) + if [ -z "${backupType}" ]; then + echo "- Current backup type: flat daily" + else + echo "- Current backup type: ${backupType}" + fi + echo "- Backups to retain:" + if rollingStrategy; then + echo " - Daily: $(getNumBackupsToRetain daily)" + echo " - Weekly: $(getNumBackupsToRetain weekly)" + echo " - Monthly: $(getNumBackupsToRetain monthly)" + else + echo " - Total: $(getNumBackupsToRetain)" + fi + echo "- Current backup folder: ${_backupDirectory}" + + if [[ "${_mode}" != ${ONCE} ]]; then + if [[ "${_mode}" == ${CRON} ]] || [[ "${_mode}" == ${SCHEDULED} ]]; then + _backupSchedule=$(readConf -cq) + echo "- Time Zone: $(date +"%Z %z")" + fi + _backupSchedule=$(formatList "${_backupSchedule:-${BACKUP_PERIOD}}") + echo -e \\n"- Schedule:" + echo "${_backupSchedule}" + fi + + if [[ "${CONTAINER_TYPE}" == "${UNKNOWN_DB}" ]] && [ -z "${_allowNullPlugin}" ]; then + echoRed "\n- Container Type: ${CONTAINER_TYPE}" + _configurationError=1 + else + echo -e "\n- Container Type: ${CONTAINER_TYPE}" + fi + + _databaseList=$(formatList "${_databaseList}") + echo "- Databases (filtered by container type):" + echo "${_databaseList}" + echo + + if [ -z "${FTP_URL}" ]; then + echo -e "- FTP server: ${_notConfigured}" + else + echo "- FTP server: ${FTP_URL}" + fi + + if [ -z "${WEBHOOK_URL}" ]; then + echo -e "- Webhook Endpoint: ${_notConfigured}" + else + echo "- Webhook Endpoint: ${WEBHOOK_URL}" + fi + + if [ -z "${ENVIRONMENT_FRIENDLY_NAME}" ]; then + echo -e "- Environment Friendly Name: ${_notConfigured}" + else + echo -e "- Environment Friendly Name: ${ENVIRONMENT_FRIENDLY_NAME}" + fi + if [ -z "${ENVIRONMENT_NAME}" ]; then + echo -e "- Environment Name (Id): ${_notConfigured}" + else + echo "- Environment Name (Id): ${ENVIRONMENT_NAME}" + fi + + if [ ! 
-z "${_configurationError}" ]; then + echo + logError "Configuration error! The script will exit." + sleep 5 + exit 1 + fi + echo +} + +function isScheduled(){ + ( + if [ ! -z "${SCHEDULED_RUN}" ]; then + return 0 + else + return 1 + fi + ) +} + +function isScripted(){ + ( + if [ ! -z "${SCHEDULED_RUN}" ]; then + return 0 + else + return 1 + fi + ) +} + +function restoreMode(){ + ( + if [ ! -z "${_restoreDatabase}" ]; then + return 0 + else + return 1 + fi + ) +} + +function verifyMode(){ + ( + if [ ! -z "${_verifyBackup}" ]; then + return 0 + else + return 1 + fi + ) +} + +function pruneMode(){ + ( + if [ ! -z "${RUN_PRUNE}" ]; then + return 0 + else + return 1 + fi + ) +} + +function cronMode(){ + ( + cronTabs=$(readConf -cq) + if isInstalled "go-crond" && [ ! -z "${cronTabs}" ]; then + return 0 + else + return 1 + fi + ) +} + +function runOnce() { + if [ ! -z "${RUN_ONCE}" ]; then + return 0 + else + return 1 + fi +} + +function getMode(){ + ( + unset _mode + + if pruneMode; then + _mode="${PRUNE}" + fi + + if [ -z "${_mode}" ] && restoreMode; then + _mode="${RESTORE}" + fi + + if [ -z "${_mode}" ] && verifyMode; then + # Determine if this is a scheduled verification or a manual one. + if isScheduled; then + if cronMode; then + _mode="${SCHEDULED_VERIFY}" + else + _mode="${ERROR}" + logError "Scheduled mode cannot be used without cron being installed and at least one cron tab being defined in ${BACKUP_CONF}." + fi + else + _mode="${VERIFY}" + fi + fi + + if [ -z "${_mode}" ] && runOnce; then + _mode="${ONCE}" + fi + + if [ -z "${_mode}" ] && isScheduled; then + if cronMode; then + _mode="${SCHEDULED}" + else + _mode="${ERROR}" + logError "Scheduled mode cannot be used without cron being installed and at least one cron tab being defined in ${BACKUP_CONF}." 
+ fi + fi + + if [ -z "${_mode}" ] && cronMode; then + _mode="${CRON}" + fi + + if [ -z "${_mode}" ]; then + _mode="${LEGACY}" + fi + + echo "${_mode}" + ) +} + +function validateOperation(){ + ( + _databaseSpec=${1} + _mode=${2} + _rtnCd=0 + + if [[ "${_mode}" == ${RESTORE} ]] && ! isForContainerType ${_databaseSpec}; then + echoRed "\nYou are attempting to restore database '${_databaseSpec}' from a ${CONTAINER_TYPE} container." + echoRed "Cannot continue with the restore. It must be initiated from the matching container type." + _rtnCd=1 + fi + + return ${_rtnCd} + ) +} +# ====================================================================================== \ No newline at end of file diff --git a/docker/backup.container.utils b/docker/backup.container.utils new file mode 100644 index 0000000..3bb4115 --- /dev/null +++ b/docker/backup.container.utils @@ -0,0 +1,57 @@ +#!/bin/bash +# ================================================================================================================= +# Container Utility Functions: +# ----------------------------------------------------------------------------------------------------------------- +function isPostgres(){ + ( + if isInstalled "psql"; then + return 0 + else + return 1 + fi + ) +} + +function isMongo(){ + ( + if isInstalled "mongo"; then + return 0 + else + return 1 + fi + ) +} + +function getContainerType(){ + ( + local _containerType=${UNKNOWN_DB} + _rtnCd=0 + + if isPostgres; then + _containerType=${POSTGRE_DB} + elif isMongo; then + _containerType=${MONGO_DB} + else + _containerType=${UNKNOWN_DB} + _rtnCd=1 + fi + + echo "${_containerType}" + return ${_rtnCd} + ) +} + +function isForContainerType(){ + ( + _databaseSpec=${1} + _databaseType=$(getDatabaseType ${_databaseSpec}) + + # If the database type has not been defined, assume the database spec is valid for the current database container type.
+ if [ -z "${_databaseType}" ] || [[ "${_databaseType}" == "${CONTAINER_TYPE}" ]]; then + return 0 + else + return 1 + fi + ) +} +# ====================================================================================== \ No newline at end of file diff --git a/docker/backup.file.utils b/docker/backup.file.utils new file mode 100644 index 0000000..79dae39 --- /dev/null +++ b/docker/backup.file.utils @@ -0,0 +1,233 @@ +#!/bin/bash +# ================================================================================================================= +# File Utility Functions +# ----------------------------------------------------------------------------------------------------------------- +function makeDirectory() +{ + ( + # Recursively creates directories with the required permissions. + # ${1} is the directory to be created + # Inspired by https://unix.stackexchange.com/questions/49263/recursive-mkdir + directory="${1}" + test $# -eq 1 || { echo "Function 'makeDirectory' can create only one directory (with its parent directories)."; exit 1; } + test -d "${directory}" && return 0 + test -d "$(dirname "${directory}")" || { makeDirectory "$(dirname "${directory}")" || return 1; } + test -d "${directory}" || { mkdir --mode=g+w "${directory}" || return 1; } + return 0 + ) +} + +function finalizeBackup(){ + ( + _filename=${1} + _inProgressFilename="${_filename}${IN_PROGRESS_BACKUP_FILE_EXTENSION}" + _finalFilename="${_filename}${BACKUP_FILE_EXTENSION}" + + if [ -f ${_inProgressFilename} ]; then + mv "${_inProgressFilename}" "${_finalFilename}" + echo "${_finalFilename}" + fi + ) +} + +function listExistingBackups(){ + ( + local _backupDir=${1:-${ROOT_BACKUP_DIR}} + local database + local databases=$(readConf -q) + local output="\nDatabase,Current Size" + + for database in ${databases}; do + if isForContainerType ${database}; then + output+="\n${database},$(getDbSize "${database}")" + fi + done + + echoMagenta
"\n================================================================================================================================" + echoMagenta "Current Backups:" + echoMagenta "\n$(echo -ne "${output}" | column -t -s ,)" + echoMagenta "\n$(df -h ${_backupDir})" + echoMagenta "--------------------------------------------------------------------------------------------------------------------------------" + du -ah --time ${_backupDir} + echoMagenta "================================================================================================================================\n" + ) +} + +function getDirectoryName(){ + ( + local path=${1} + path="${path%"${path##*[!/]}"}" + local name="${path##*/}" + echo "${name}" + ) +} + +function getBackupTypeFromPath(){ + ( + local path=${1} + path="${path%"${path##*[!/]}"}" + path="$(dirname "${path}")" + local backupType=$(getDirectoryName "${path}") + echo "${backupType}" + ) +} + +function prune(){ + ( + local database + local backupDirs + local backupDir + local backupType + local backupTypes + local pruneBackup + unset backupTypes + unset backupDirs + unset pruneBackup + + local databases=$(readConf -q) + if rollingStrategy; then + backupTypes="daily weekly monthly" + for backupType in ${backupTypes}; do + backupDirs="${backupDirs} $(createBackupFolder -g ${backupType})" + done + else + backupDirs=$(createBackupFolder -g) + fi + + if [ ! -z "${_fromBackup}" ]; then + pruneBackup="$(findBackup "" "${_fromBackup}")" + while [ ! -z "${pruneBackup}" ]; do + echoYellow "\nAbout to delete backup file: ${pruneBackup}" + waitForAnyKey + rm -rfvd "${pruneBackup}" + + # Quietly delete any empty directories that are left behind ... 
+ find ${ROOT_BACKUP_DIR} -type d -empty -delete > /dev/null 2>&1 + pruneBackup="$(findBackup "" "${_fromBackup}")" + done + else + for backupDir in ${backupDirs}; do + for database in ${databases}; do + unset backupType + if rollingStrategy; then + backupType=$(getBackupTypeFromPath "${backupDir}") + fi + pruneBackups "${backupDir}" "${database}" "${backupType}" + done + done + fi + ) +} + +function pruneBackups(){ + ( + _backupDir=${1} + _databaseSpec=${2} + _backupType=${3:-''} + _pruneDir="$(dirname "${_backupDir}")" + _numBackupsToRetain=$(getNumBackupsToRetain "${_backupType}") + _coreFilename=$(generateCoreFilename ${_databaseSpec}) + + if [ -d ${_pruneDir} ]; then + let _index=${_numBackupsToRetain}+1 + _filesToPrune=$(find ${_pruneDir}* -type f -printf '%T@ %p\n' | grep ${_coreFilename} | sort -r | tail -n +${_index} | sed 's~^.* \(.*$\)~\1~') + + if [ ! -z "${_filesToPrune}" ]; then + echoYellow "\nPruning ${_coreFilename} backups from ${_pruneDir} ..." + echo "${_filesToPrune}" | xargs rm -rfvd + + # Quietly delete any empty directories that are left behind ... + find ${ROOT_BACKUP_DIR} -type d -empty -delete > /dev/null 2>&1 + fi + fi + ) +} + +function touchBackupFile() { + ( + # For safety, make absolutely certain the directory and file exist. + # The pruning process removes empty directories, so if there is an error + # during a backup the backup directory could be deleted. + _backupFile=${1} + _backupDir="${_backupFile%/*}" + makeDirectory ${_backupDir} && touch ${_backupFile} + ) +} + +function findBackup(){ + ( + _databaseSpec=${1} + _fileName=${2} + + # If no backup file was specified, find the most recent for the database. + # Otherwise treat the value provided as a filter to find the most recent backup file matching the filter. 
+ if [ -z "${_fileName}" ]; then + _coreFilename=$(generateCoreFilename ${_databaseSpec}) + _fileName=$(find ${ROOT_BACKUP_DIR}* -type f -printf '%T@ %p\n' | grep ${_coreFilename} | sort | tail -n 1 | sed 's~^.* \(.*$\)~\1~') + else + _fileName=$(find ${ROOT_BACKUP_DIR}* -type f -printf '%T@ %p\n' | grep ${_fileName} | sort | tail -n 1 | sed 's~^.* \(.*$\)~\1~') + fi + + echo "${_fileName}" + ) +} + +function createBackupFolder(){ + ( + local OPTIND + local genOnly + unset genOnly + while getopts g FLAG; do + case $FLAG in + g ) genOnly=1 ;; + esac + done + shift $((OPTIND-1)) + + _backupTypeDir="${1:-$(getBackupType)}" + if [ ! -z "${_backupTypeDir}" ]; then + _backupTypeDir=${_backupTypeDir}/ + fi + + _backupDir="${ROOT_BACKUP_DIR}${_backupTypeDir}`date +\%Y-\%m-\%d`/" + + # Don't actually create the folder if we're just generating it for printing the configuration. + if [ -z "${genOnly}" ]; then + echo "Making backup directory ${_backupDir} ..." >&2 + if ! makeDirectory ${_backupDir}; then + logError "Failed to create backup directory ${_backupDir}."
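+ # Together, createBackupFolder and generateFilename (defined nearby) build paths of the form
+ # <root>/<type>/<date>/<hostname>-<database>_<timestamp>. A standalone sketch of the same naming
+ # scheme, with illustrative values in place of the configured settings:

```shell
#!/bin/bash
ROOT_BACKUP_DIR="/backups/"
backupTypeDir="daily/"                      # empty when using the daily strategy
backupDir="${ROOT_BACKUP_DIR}${backupTypeDir}$(date +%Y-%m-%d)/"
coreFilename="postgresql-my_db"             # <hostname>-<database>
filename="${backupDir}${coreFilename}_$(date +%Y-%m-%d_%H-%M-%S)"
echo "${filename}${BACKUP_FILE_EXTENSION:-.sql.gz}"
```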
+ exit 1; + fi; + fi + + echo ${_backupDir} + ) +} + +function generateFilename(){ + ( + _backupDir=${1} + _databaseSpec=${2} + _coreFilename=$(generateCoreFilename ${_databaseSpec}) + _filename="${_backupDir}${_coreFilename}_`date +\%Y-\%m-\%d_%H-%M-%S`" + echo ${_filename} + ) +} + +function generateCoreFilename(){ + ( + _databaseSpec=${1} + _hostname=$(getHostname ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _coreFilename="${_hostname}-${_database}" + echo ${_coreFilename} + ) +} + +function getFileSize(){ + ( + _filename=${1} + echo $(du -h "${_filename}" | awk '{print $1}') + ) +} +# ================================================================================================================= \ No newline at end of file diff --git a/docker/backup.ftp b/docker/backup.ftp new file mode 100644 index 0000000..d0a935c --- /dev/null +++ b/docker/backup.ftp @@ -0,0 +1,23 @@ +#!/bin/bash +# ================================================================================================================= +# FTP Support Functions: +# ----------------------------------------------------------------------------------------------------------------- +function ftpBackup(){ + ( + if [ -z "${FTP_URL}" ] ; then + return 0 + fi + + _filename=${1} + _filenameWithExtension="${_filename}${BACKUP_FILE_EXTENSION}" + echo "Transferring ${_filenameWithExtension} to ${FTP_URL}" + curl --ftp-ssl -T ${_filenameWithExtension} --user ${FTP_USER}:${FTP_PASSWORD} ${FTP_URL} + # Capture curl's exit code; ${?} inside the if/else below would report the status of the test command instead. + _rtnCd=${?} + + if [ ${_rtnCd} -eq 0 ]; then + logInfo "Successfully transferred ${_filenameWithExtension} to the FTP server" + else + logError "Failed to transfer ${_filenameWithExtension} with the exit code ${_rtnCd}" + fi + ) +} +# ================================================================================================================= diff --git a/docker/backup.logging b/docker/backup.logging new file mode 100644 index 0000000..50449f0 --- /dev/null +++ b/docker/backup.logging @@ -0,0 +1,111 @@ +#!/bin/bash
+# ================================================================================================================= +# Logging Functions: +# ----------------------------------------------------------------------------------------------------------------- +function debugMsg (){ + _msg="${@}" + if [ "${BACKUP_LOG_LEVEL}" == "debug" ]; then + echoGreen "$(date) - [DEBUG] - ${@}" >&2 + fi +} + +function echoRed (){ + _msg="${@}" + _red='\e[31m' + _nc='\e[0m' # No Color + echo -e "${_red}${_msg}${_nc}" +} + +function echoYellow (){ + _msg="${@}" + _yellow='\e[33m' + _nc='\e[0m' # No Color + echo -e "${_yellow}${_msg}${_nc}" +} + +function echoBlue (){ + _msg="${@}" + _blue='\e[34m' + _nc='\e[0m' # No Color + echo -e "${_blue}${_msg}${_nc}" +} + +function echoGreen (){ + _msg="${@}" + _green='\e[32m' + _nc='\e[0m' # No Color + echo -e "${_green}${_msg}${_nc}" +} + +function echoMagenta (){ + _msg="${@}" + _magenta='\e[35m' + _nc='\e[0m' # No Color + echo -e "${_magenta}${_msg}${_nc}" +} + +function logInfo(){ + ( + infoMsg="${1}" + echo -e "${infoMsg}" + postMsgToWebhook "${ENVIRONMENT_FRIENDLY_NAME}" \ + "${ENVIRONMENT_NAME}" \ + "INFO" \ + "${infoMsg}" + ) +} + +function logWarn(){ + ( + warnMsg="${1}" + echoYellow "${warnMsg}" + postMsgToWebhook "${ENVIRONMENT_FRIENDLY_NAME}" \ + "${ENVIRONMENT_NAME}" \ + "WARN" \ + "${warnMsg}" + ) +} + +function logError(){ + ( + errorMsg="${1}" + echoRed "[!!ERROR!!] 
+ - ${errorMsg}" >&2 + postMsgToWebhook "${ENVIRONMENT_FRIENDLY_NAME}" \ + "${ENVIRONMENT_NAME}" \ + "ERROR" \ + "${errorMsg}" + ) +} + +function getWebhookPayload(){ + _payload=$(eval "cat <<-EOF
$(<${WEBHOOK_TEMPLATE})
EOF
") + echo "${_payload}" +} + +function formatWebhookMsg(){ + ( + # Escape all double quotes + # Escape all newlines + filters='s~"~\\"~g;:a;N;$!ba;s~\n~\\n~g;' + _value=$(echo "${1}" | sed "${filters}") + echo "${_value}" + ) +} + +function postMsgToWebhook(){ + ( + # Nothing to do if the webhook URL is not set or the template is missing. + if [ -z "${WEBHOOK_URL}" ] || [ ! -f ${WEBHOOK_TEMPLATE} ]; then + return 0 + fi + + projectFriendlyName=${1} + projectName=${2} + statusCode=${3} + message=$(formatWebhookMsg "${4}") + curl -s -X POST -H 'Content-Type: application/json' --data "$(getWebhookPayload)" "${WEBHOOK_URL}" > /dev/null + ) +} +# ================================================================================================================= \ No newline at end of file diff --git a/docker/backup.misc.utils b/docker/backup.misc.utils new file mode 100644 index 0000000..cab2ac3 --- /dev/null +++ b/docker/backup.misc.utils @@ -0,0 +1,30 @@ +#!/bin/bash +# ================================================================================================================= +# General Utility Functions: +# ----------------------------------------------------------------------------------------------------------------- +function waitForAnyKey() { + read -n1 -s -r -p $'\e[33mWould you like to continue?\e[0m Press Ctrl-C to exit, or any other key to continue ...' key + echo -e \\n + + # If we get here the user did NOT press Ctrl-C ... + return 0 +} + +function formatList(){ + ( + filters='s~^~ - ~;' + _value=$(echo "${1}" | sed "${filters}") + echo "${_value}" + ) +} + +function isInstalled(){ + rtnVal=$(type "$1" >/dev/null 2>&1) + rtnCd=$?
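+ # getWebhookPayload expands the webhook template through cat inside eval, so ${...} placeholders
+ # in the template file are substituted when the message is posted. A self-contained sketch of that
+ # pattern, using an inline string in place of $(<${WEBHOOK_TEMPLATE}) and assumed field names:

```shell
#!/bin/bash
template='{"text": "${statusCode}: ${message}"}'   # stands in for the template file contents
statusCode="INFO"
message="backup complete"

# Unquoted heredoc delimiter => variables in the template body are expanded by eval.
payload=$(eval "cat <<-EOF
${template}
EOF
")
echo "${payload}"   # -> {"text": "INFO: backup complete"}
```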
+ if [ ${rtnCd} -ne 0 ]; then + return 1 + else + return 0 + fi +} +# ====================================================================================== \ No newline at end of file diff --git a/docker/backup.mongo.plugin b/docker/backup.mongo.plugin new file mode 100644 index 0000000..0f5583c --- /dev/null +++ b/docker/backup.mongo.plugin @@ -0,0 +1,226 @@ +#!/bin/bash +# ================================================================================================================= +# Mongo Backup and Restore Functions: +# - Dynamically loaded as a plug-in +# ----------------------------------------------------------------------------------------------------------------- +function onBackupDatabase(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _backupFile=${2} + + _hostname=$(getHostname ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${_databaseSpec}) + _portArg=${_port:+"--port=${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + echoGreen "Backing up '${_hostname}${_port:+:${_port}}${_database:+/${_database}}' to '${_backupFile}' ..." + + _authDbArg=${MONGODB_AUTHENTICATION_DATABASE:+"--authenticationDatabase ${MONGODB_AUTHENTICATION_DATABASE}"} + mongodump -h "${_hostname}" -d "${_database}" ${_authDbArg} ${_portArg} -u "${_username}" -p "${_password}" --quiet --gzip --archive=${_backupFile} + return ${?} + ) +} + +function onRestoreDatabase(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _fileName=${2} + _adminPassword=${3} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"--port=${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + echo -e "Restoring '${_fileName}' to '${_hostname}${_port:+:${_port}}${_database:+/${_database}}' ...\n" >&2 + + # ToDo: + # - Add support for restoring to a different database. + # The following implementation only supports restoring to a database of the same name, + # unlike the postgres implementation that allows the database to be restored to a database of a different + # name for testing. + # Ref: https://stackoverflow.com/questions/36321899/mongorestore-to-a-different-database + + _authDbArg=${MONGODB_AUTHENTICATION_DATABASE:+"--authenticationDatabase ${MONGODB_AUTHENTICATION_DATABASE}"} + mongorestore --drop -h ${_hostname} -d "${_database}" ${_authDbArg} ${_portArg} -u "${_username}" -p "${_password}" --gzip --archive=${_fileName} --nsInclude="*" + return ${?} + ) +} + +function onStartServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + # Start a local MongoDb instance + MONGODB_DATABASE=$(getDatabaseName "${_databaseSpec}") \ + MONGODB_USER=$(getUsername "${_databaseSpec}") \ + MONGODB_PASSWORD=$(getPassword "${_databaseSpec}") \ + MONGODB_ADMIN_PASSWORD=$(getPassword "${_databaseSpec}") \ + run-mongod >/dev/null 2>&1 & + ) +} + +function onStopServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _port=$(getPort ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=admin + _password=$(getPassword ${_databaseSpec}) + + _authDbArg=${MONGODB_AUTHENTICATION_DATABASE:+"--authenticationDatabase ${MONGODB_AUTHENTICATION_DATABASE}"} + mongo admin ${_authDbArg} ${_portArg} -u "${_username}" -p "${_password}" --quiet --eval "db.shutdownServer()" + + # Delete the database files and configuration + echo -e "Cleaning up ...\n" >&2 + rm -rf /var/lib/mongodb/data/* + ) +} + +function onPingDbServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + _dbAddressArg=${_hostname}${_port:+:${_port}}${_database:+/${_database}} + _authDbArg=${MONGODB_AUTHENTICATION_DATABASE:+"--authenticationDatabase ${MONGODB_AUTHENTICATION_DATABASE}"} + if mongo ${_dbAddressArg} ${_authDbArg} -u "${_username}" -p "${_password}" --quiet --eval='quit()' >/dev/null 2>&1; then + return 0 + else + return 1 + fi + ) +} + +function onVerifyBackup(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname -l ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort -l ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + _dbAddressArg=${_hostname}${_port:+:${_port}}${_database:+/${_database}} + _authDbArg=${MONGODB_AUTHENTICATION_DATABASE:+"--authenticationDatabase ${MONGODB_AUTHENTICATION_DATABASE}"} + collections=$(mongo ${_dbAddressArg} ${_authDbArg} -u "${_username}" -p "${_password}" --quiet --eval 'var dbs = [];dbs = db.getCollectionNames();for (i in dbs){ print(dbs[i]);}';) + rtnCd=${?} + + # Get the size of the restored database + if (( ${rtnCd} == 0 )); then + size=$(getDbSize -l "${_databaseSpec}") + rtnCd=${?} + fi + + if (( ${rtnCd} == 0 )); then + numResults=$(echo "${collections}"| wc -l) + if [[ ! -z "${collections}" ]] && (( numResults >= 1 )); then + # All good + verificationLog="\nThe restored database contained ${numResults} collections, and is ${size} in size." + else + # Not so good + verificationLog="\nNo collections were found in the restored database ${_database}." + rtnCd="3" + fi + fi + + echo ${verificationLog} + return ${rtnCd} + ) +} + +function onGetDbSize(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ?
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + _dbAddressArg=${_hostname}${_port:+:${_port}}${_database:+/${_database}} + _authDbArg=${MONGODB_AUTHENTICATION_DATABASE:+"--authenticationDatabase ${MONGODB_AUTHENTICATION_DATABASE}"} + size=$(mongo ${_dbAddressArg} ${_authDbArg} -u "${_username}" -p "${_password}" --quiet --eval 'printjson(db.stats().fsTotalSize)') + rtnCd=${?} + + echo ${size} + return ${rtnCd} + ) +} +# ================================================================================================================= \ No newline at end of file diff --git a/docker/backup.null.plugin b/docker/backup.null.plugin new file mode 100644 index 0000000..14ceed0 --- /dev/null +++ b/docker/backup.null.plugin @@ -0,0 +1,195 @@ +#!/bin/bash +# ================================================================================================================= +# Null Backup and Restore Functions: +# - Dynamically loaded as a plug-in +# - Refer to existing plug-ins for implementation examples. +# ----------------------------------------------------------------------------------------------------------------- +function onBackupDatabase(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _backupFile=${2} + + _hostname=$(getHostname ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + echoGreen "Backing up '${_hostname}${_port:+:${_port}}${_database:+/${_database}}' to '${_backupFile}' ..." + + echoRed "[backup.null.plugin] onBackupDatabase - Not Implemented" + # echoGreen "Starting database backup ..." + # Add your database specific backup operation(s) here. + return ${?} + ) +} + +function onRestoreDatabase(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _fileName=${2} + _adminPassword=${3} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + echo -e "Restoring '${_fileName}' to '${_hostname}${_port:+:${_port}}${_database:+/${_database}}' ...\n" >&2 + + echoRed "[backup.null.plugin] onRestoreDatabase - Not Implemented" + # Add your database specific restore operation(s) here. + return ${?} + ) +} + +function onStartServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + echoRed "[backup.null.plugin] onStartServer - Not Implemented" + # Add your NON-BLOCKING database specific startup operation(s) here. + # - Start the database server as a background job. + ) +} + +function onStopServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + echoRed "[backup.null.plugin] onStopServer - Not Implemented" + + # echo "Shutting down..." + # Add your database specific shutdown operation(s) here. + + # Delete the database files and configuration + # echo -e "Cleaning up ...\n" >&2 + # Add your database specific cleanup operation(s) here. + ) +} + +function onPingDbServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + echoRed "[backup.null.plugin] onPingDbServer - Not Implemented" + # Add your database specific ping operation(s) here. + # if ; then + # return 0 + # else + # return 1 + # fi + ) +} + +function onVerifyBackup(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname -l ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort -l ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + echoRed "[backup.null.plugin] onVerifyBackup - Not Implemented" + # Add your database specific verification operation(s) here. + + # echo ${verificationLog} + # return ${rtnCd} + ) +} + +function onGetDbSize(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"--port ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + echoRed "[backup.null.plugin] onGetDbSize - Not Implemented" + # Add your database specific get size operation(s) here. + + # echo ${size} + # return ${rtnCd} + ) +} +# ================================================================================================================= diff --git a/docker/backup.postgres.plugin b/docker/backup.postgres.plugin new file mode 100644 index 0000000..e5248ac --- /dev/null +++ b/docker/backup.postgres.plugin @@ -0,0 +1,247 @@ +#!/bin/bash +# ================================================================================================================= +# Postgres Backup and Restore Functions: +# - Dynamically loaded as a plug-in +# ----------------------------------------------------------------------------------------------------------------- +function onBackupDatabase(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _backupFile=${2} + + _hostname=$(getHostname ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${_databaseSpec}) + _portArg=${_port:+"-p ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + echoGreen "Backing up '${_hostname}${_port:+:${_port}}${_database:+/${_database}}' to '${_backupFile}' ..." 
+ + PGPASSWORD=${_password} pg_dump -Fp -h "${_hostname}" ${_portArg} -U "${_username}" "${_database}" | gzip > ${_backupFile} + return ${PIPESTATUS[0]} + ) +} + +function onRestoreDatabase(){ + ( + local OPTIND + local unset quiet + local unset flags + while getopts :q FLAG; do + case $FLAG in + q ) + quiet=1 + flags+="-${FLAG} " + ;; + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _fileName=${2} + _adminPassword=${3} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"-p ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + echo -e "Restoring '${_fileName}' to '${_hostname}${_port:+:${_port}}${_database:+/${_database}}' ...\n" >&2 + + export PGPASSWORD=${_adminPassword} + _rtnCd=0 + + # Drop + if (( ${_rtnCd} == 0 )); then + psql -h "${_hostname}" ${_portArg} -ac "DROP DATABASE \"${_database}\";" + _rtnCd=${?} + echo + fi + + # Create + if (( ${_rtnCd} == 0 )); then + psql -h "${_hostname}" ${_portArg} -ac "CREATE DATABASE \"${_database}\";" + _rtnCd=${?} + echo + fi + + # Grant User Access + if (( ${_rtnCd} == 0 )); then + psql -h "${_hostname}" ${_portArg} -ac "GRANT ALL ON DATABASE \"${_database}\" TO \"${_username}\";" + _rtnCd=${?} + echo + fi + + # Restore + if (( ${_rtnCd} == 0 )); then + gunzip -c "${_fileName}" | psql -v ON_ERROR_STOP=1 -x -h "${_hostname}" ${_portArg} -d "${_database}" + # Get the status code from psql specifically. ${?} would only provide the status of the last command, psql in this case. + _rtnCd=${PIPESTATUS[1]} + fi + + # List tables + if [ -z "${quiet}" ] && (( ${_rtnCd} == 0 )); then + psql -h "${_hostname}" ${_portArg} -d "${_database}" -c "\d" + _rtnCd=${?} + fi + + return ${_rtnCd} + ) +} + +function onStartServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + # Start a local PostgreSql instance + POSTGRESQL_DATABASE=$(getDatabaseName "${_databaseSpec}") \ + POSTGRESQL_USER=$(getUsername "${_databaseSpec}") \ + POSTGRESQL_PASSWORD=$(getPassword "${_databaseSpec}") \ + run-postgresql >/dev/null 2>&1 & + ) +} + +function onStopServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + # Stop the local PostgreSql instance + pg_ctl stop -D /var/lib/pgsql/data/userdata + + # Delete the database files and configuration + echo -e "Cleaning up ...\n" + rm -rf /var/lib/pgsql/data/userdata + ) +} + +function onPingDbServer(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? ) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"-p ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + if PGPASSWORD=${_password} psql -h ${_hostname} ${_portArg} -U ${_username} -q -d ${_database} -c 'SELECT 1' >/dev/null 2>&1; then + return 0 + else + return 1 + fi + ) +} + +function onVerifyBackup(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname -l ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort -l ${_databaseSpec}) + _portArg=${_port:+"-p ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + debugMsg "backup.postgres.plugin - onVerifyBackup" + tables=$(psql -h "${_hostname}" ${_portArg} -d "${_database}" -t -c "SELECT table_name FROM information_schema.tables WHERE table_schema='${TABLE_SCHEMA}' AND table_type='BASE TABLE';") + rtnCd=${?} + + # Get the size of the restored database + if (( ${rtnCd} == 0 )); then + size=$(getDbSize -l "${_databaseSpec}") + rtnCd=${?} + fi + + if (( ${rtnCd} == 0 )); then + numResults=$(echo "${tables}"| wc -l) + if [[ ! -z "${tables}" ]] && (( numResults >= 1 )); then + # All good + verificationLog="\nThe restored database contained ${numResults} tables, and is ${size} in size." + else + # Not so good + verificationLog="\nNo tables were found in the restored database." + rtnCd="3" + fi + fi + + echo ${verificationLog} + return ${rtnCd} + ) +} + +function onGetDbSize(){ + ( + local OPTIND + local unset flags + while getopts : FLAG; do + case $FLAG in + ? 
) flags+="-${OPTARG} ";; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + + _hostname=$(getHostname ${flags} ${_databaseSpec}) + _database=$(getDatabaseName ${_databaseSpec}) + _port=$(getPort ${flags} ${_databaseSpec}) + _portArg=${_port:+"-p ${_port}"} + _username=$(getUsername ${_databaseSpec}) + _password=$(getPassword ${_databaseSpec}) + + size=$(PGPASSWORD=${_password} psql -h "${_hostname}" ${_portArg} -U "${_username}" -d "${_database}" -t -c "SELECT pg_size_pretty(pg_database_size(current_database())) as size;") + rtnCd=${?} + + echo ${size} + return ${rtnCd} + ) +} +# ================================================================================================================= diff --git a/docker/backup.server.utils b/docker/backup.server.utils new file mode 100644 index 0000000..9e938a1 --- /dev/null +++ b/docker/backup.server.utils @@ -0,0 +1,39 @@ +#!/bin/bash +# ================================================================================================================= +# Backup Server Utility Functions: +# ----------------------------------------------------------------------------------------------------------------- +function startCron(){ + logInfo "Starting backup server in cron mode ..." + listSettings + echoBlue "Starting go-crond as a background task ...\n" + CRON_CMD="go-crond -v --default-user=${UID} --allow-unprivileged ${BACKUP_CONF}" + exec ${CRON_CMD} & + wait +} + +function startLegacy(){ + ( + while true; do + runBackups + + echoYellow "Sleeping for ${BACKUP_PERIOD} ...\n" + sleep ${BACKUP_PERIOD} + done + ) +} + +function shutDown(){ + jobIds=$(jobs | awk -F '[][]' '{print $2}' ) + for jobId in ${jobIds} ; do + echo "Shutting down background job '${jobId}' ..." + kill %${jobId} + done + + if [ ! -z "${jobIds}" ]; then + echo "Waiting for any background jobs to complete ..." 
+ fi + wait + + exit 0 +} +# ====================================================================================== \ No newline at end of file diff --git a/docker/backup.settings b/docker/backup.settings new file mode 100644 index 0000000..7de738c --- /dev/null +++ b/docker/backup.settings @@ -0,0 +1,55 @@ +#!/bin/bash +# ====================================================================================== +# Default Settings +# -------------------------------------------------------------------------------------- +export BACKUP_FILE_EXTENSION=".sql.gz" +export IN_PROGRESS_BACKUP_FILE_EXTENSION=".sql.gz.in_progress" +export DEFAULT_PORT=${POSTGRESQL_PORT_NUM:-5432} +export DATABASE_SERVICE_NAME=${DATABASE_SERVICE_NAME:-postgresql} +export POSTGRESQL_DATABASE=${POSTGRESQL_DATABASE:-my_postgres_db} +export TABLE_SCHEMA=${TABLE_SCHEMA:-public} + +# Supports: +# - daily +# - rolling +export BACKUP_STRATEGY=$(echo "${BACKUP_STRATEGY:-rolling}" | tr '[:upper:]' '[:lower:]') +export BACKUP_PERIOD=${BACKUP_PERIOD:-1d} +export ROOT_BACKUP_DIR=${ROOT_BACKUP_DIR:-${BACKUP_DIR:-/backups/}} +export BACKUP_CONF=${BACKUP_CONF:-backup.conf} + +# Used to prune the total number of backups when using the daily backup strategy. +# Default provides for one full month of backups +export NUM_BACKUPS=${NUM_BACKUPS:-31} + +# Used to prune the total number of backups when using the rolling backup strategy.
+# Defaults provide for: +# - A week's worth of daily backups +# - A month's worth of weekly backups +# - The previous month's backup +export DAILY_BACKUPS=${DAILY_BACKUPS:-6} +export WEEKLY_BACKUPS=${WEEKLY_BACKUPS:-4} +export MONTHLY_BACKUPS=${MONTHLY_BACKUPS:-1} + +# Webhook defaults +WEBHOOK_TEMPLATE=${WEBHOOK_TEMPLATE:-webhook-template.json} + +# Modes: +export ONCE="once" +export SCHEDULED="scheduled" +export RESTORE="restore" +export VERIFY="verify" +export CRON="cron" +export LEGACY="legacy" +export ERROR="error" +export SCHEDULED_VERIFY="scheduled-verify" +export PRUNE="prune" + +# Supported Database Containers +export UNKNOWN_DB="null" +export MONGO_DB="mongo" +export POSTGRE_DB="postgres" +export CONTAINER_TYPE="$(getContainerType)" + +# Other: +export DATABASE_SERVER_TIMEOUT=${DATABASE_SERVER_TIMEOUT:-120} +# ====================================================================================== \ No newline at end of file diff --git a/docker/backup.sh b/docker/backup.sh index 78fff25..f1e17d8 100755 --- a/docker/backup.sh +++ b/docker/backup.sh @@ -1,1366 +1,43 @@ - #!/bin/bash - -# ================================================================================================================= -# Usage: -# ----------------------------------------------------------------------------------------------------------------- -function usage () { - cat <<-EOF - - Automated backup script for Postgresql databases. - - There are two modes of scheduling backups: - - Cron Mode: - - Allows one or more schedules to be defined as cron tabs in ${BACKUP_CONF}. - - If cron (go-crond) is installed (which is handled by the Docker file) and at least one cron tab is defined, the script will startup in Cron Mode, - otherwise it will default to Legacy Mode. - - Refer to ${BACKUP_CONF} for additional details and examples of using cron scheduling. 
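As noted in the Cron Mode description above, one or more cron tabs can be defined in ${BACKUP_CONF} alongside the database specs. For illustration only (the hostnames, databases, ports, and schedules below are made up, and the type-prefixed spec syntax for mixed postgres/mongo environments is an assumption; confirm against the sample `backup.conf` shipped with the repository), a config might look like:

```
# Database specs; one per line.
postgres=postgresql:5432/my_postgres_db
mongo=mongodb:27017/my_mongo_db

# Cron schedules (go-crond tab format): run backups nightly at 1am,
# and verify the most recent backups at 4am.
0 1 * * * default ./backup.sh -s
0 4 * * * default ./backup.sh -s -v all
```

Mounting one such ConfigMap into both the postgres and mongo backup containers, as described earlier, lets each container pick out the specs it is responsible for.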
- - - Legacy Mode: - - Uses a simple sleep command to set the schedule based on the setting of BACKUP_PERIOD; defaults to ${BACKUP_PERIOD} - - Refer to the project documentation for additional details on how to use this script. - - https://github.com/BCDevOps/backup-container - - Usage: - $0 [options] - - Standard Options: - ================= - -h prints this usage documentation. - - -1 run once. - Performs a single set of backups and exits. - - -s run in scheduled/silent (no questions asked) mode. - A flag to be used by cron scheduled backups to indicate they are being run on a schedule. - Requires cron (go-crond) to be installed and at least one cron tab to be defined in ${BACKUP_CONF} - Refer to ${BACKUP_CONF} for additional details and examples of using cron scheduling. - - -l lists existing backups. - Great for listing the available backups for a restore. - - -c lists the current configuration settings and exits. - Great for confirming the current settings, and listing the databases included in the backup schedule. - - -p prune backups - Used to manually prune backups. - This can be used with the '-f' option, see below, to prune specific backups or sets of backups. - Use caution when using the '-f' option. - - Verify Options: - ================ - The verify process performs the following basic operations: - - Start a local database server instance. - - Restore the selected backup locally, watching for errors. - - Run a table query on the restored database as a simple test to ensure tables were restored - and queries against the database succeed without error. - - Stop the local database server instance. - - Delete the local database and configuration. - - -v <DatabaseSpec/>; in the form <Hostname/>/<DatabaseName/>, or <Hostname/>:<Port/>/<DatabaseName/> - Triggers verify mode and starts verify mode on the specified database. - - Example: - $0 -v postgresql:5432/TheOrgBook_Database - - Would start the verification process on the database using the most recent backup for the database.
- - $0 -v all - - Verify the most recent backup of all databases. - - -f <BackupFileFilter/>; an OPTIONAL filter to use to find/identify the backup file to restore. - Refer to the same option under 'Restore Options' for details. - - Restore Options: - ================ - The restore process performs the following basic operations: - - Drop and recreate the selected database. - - Grant the database user access to the recreated database - - Restore the database from the selected backup file - - Have the 'Admin' (postgres) password handy, the script will ask you for it during the restore. - - When in restore mode, the script will list the settings it will use and wait for your confirmation to continue. - This provides you with an opportunity to ensure you have selected the correct database and backup file - for the job. - - Restore mode will allow you to restore a database to a different location (host, and/or database name) provided - it can contact the host and you can provide the appropriate credentials. If you choose to do this, you will need - to provide a file filter using the '-f' option, since the script will likely not be able to determine which backup - file you would want to use. This functionality provides a convenient way to test your backups or migrate your - database/data without affecting the original database. - - -r <DatabaseSpec/>; in the form <Hostname/>/<DatabaseName/>, or <Hostname/>:<Port/>/<DatabaseName/> - Triggers restore mode and starts restore mode on the specified database. - - Example: - $0 -r postgresql:5432/TheOrgBook_Database - - Would start the restore process on the database using the most recent backup for the database. - - -f <BackupFileFilter/>; an OPTIONAL filter to use to find/identify the backup file to restore. - This can be a full or partial file specification. When only part of a filename is specified the restore process - attempts to find the most recent backup matching the filter. - If not specified, the restore process attempts to locate the most recent backup file for the specified database.
- - Examples: - $0 -r wallet-db/test_db -f wallet-db-tob_holder - - Would try to find the latest backup matching on the partial file name provided. - - $0 -r wallet-db/test_db -f /backups/daily/2018-11-07/wallet-db-tob_holder_2018-11-07_23-59-35.sql.gz - - Would use the specific backup file. - - $0 -r wallet-db/test_db -f wallet-db-tob_holder_2018-11-07_23-59-35.sql.gz - - Would use the specific backup file regardless of its location in the root backup folder. - - -s OPTIONAL flag. Use with caution. Could cause unintentional data loss. - Run the restore in scripted/scheduled mode. In this mode the restore will not ask you to confirm the settings, - nor will ask you for the 'Admin' password. It will simply attempt to restore a database from a backup. - It's up to you to ensure it's targeting the correct database and using the correct backup file. - - -a <AdminPassword/>; an OPTIONAL flag used to specify the 'Admin' password. - Use with the '-s' flag to specify the 'Admin' password. Under normal usage conditions it's better to supply the - password when prompted so it is not visible on the console.
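The database spec arguments in the examples above (e.g. `$0 -r wallet-db/test_db`, `$0 -v postgresql:5432/TheOrgBook_Database`) take the form hostname, optional port, and database name. A standalone sketch of that parsing, using shell parameter expansion rather than the script's sed expressions (the helper names here are illustrative, not the script's own):

```shell
#!/bin/bash
# Parse a "hostname[:port]/database" spec, e.g. "postgresql:5432/TheOrgBook_Database".
# Illustrative stand-ins for the script's getHostname/getPort/getDatabaseName helpers.
DEFAULT_PORT="${DEFAULT_PORT:-5432}"

parseDatabase(){ echo "${1##*/}"; }                  # everything after the last '/'
parseHostname(){ hp="${1%/*}"; echo "${hp%%:*}"; }   # strip '/db', then any ':port'

parsePort(){
  hp="${1%/*}"
  case "${hp}" in
    *:*) echo "${hp#*:}" ;;          # explicit port in the spec
    *)   echo "${DEFAULT_PORT}" ;;   # fall back to the default port
  esac
}

spec="postgresql:5432/TheOrgBook_Database"
echo "$(parseHostname "${spec}") $(parsePort "${spec}") $(parseDatabase "${spec}")"
# -> postgresql 5432 TheOrgBook_Database
```

A spec without an explicit port, such as `wallet-db/test_db`, falls back to DEFAULT_PORT, mirroring the behaviour of the script's getPort.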
- -EOF -exit 1 -} -# ================================================================================================================= - -# ================================================================================================================= -# Funtions: -# ----------------------------------------------------------------------------------------------------------------- -function echoRed (){ - _msg=${1} - _red='\e[31m' - _nc='\e[0m' # No Color - echo -e "${_red}${_msg}${_nc}" -} - -function echoYellow (){ - _msg=${1} - _yellow='\e[33m' - _nc='\e[0m' # No Color - echo -e "${_yellow}${_msg}${_nc}" -} - -function echoBlue (){ - _msg=${1} - _blue='\e[34m' - _nc='\e[0m' # No Color - echo -e "${_blue}${_msg}${_nc}" -} - -function echoGreen (){ - _msg=${1} - _green='\e[32m' - _nc='\e[0m' # No Color - echo -e "${_green}${_msg}${_nc}" -} - -function echoMagenta (){ - _msg=${1} - _magenta='\e[35m' - _nc='\e[0m' # No Color - echo -e "${_magenta}${_msg}${_nc}" -} - -function logInfo(){ - ( - infoMsg="${1}" - echo -e "${infoMsg}" - postMsgToWebhook "${ENVIRONMENT_FRIENDLY_NAME}" \ - "${ENVIRONMENT_NAME}" \ - "INFO" \ - "${infoMsg}" - ) -} - -function logWarn(){ - ( - warnMsg="${1}" - echoYellow "${warnMsg}" - postMsgToWebhook "${ENVIRONMENT_FRIENDLY_NAME}" \ - "${ENVIRONMENT_NAME}" \ - "WARN" \ - "${warnMsg}" - ) -} - -function logError(){ - ( - errorMsg="${1}" - echoRed "[!!ERROR!!] 
- ${errorMsg}" >&2 - postMsgToWebhook "${ENVIRONMENT_FRIENDLY_NAME}" \ - "${ENVIRONMENT_NAME}" \ - "ERROR" \ - "${errorMsg}" - ) -} - -function getWebhookPayload(){ - _payload=$(eval "cat <<-EOF -$(<${WEBHOOK_TEMPLATE}) -EOF -") - echo "${_payload}" -} - -function formatWebhookMsg(){ - ( - # Escape all double quotes - # Escape all newlines - filters='s~"~\\"~g;:a;N;$!ba;s~\n~\\n~g;' - _value=$(echo "${1}" | sed "${filters}") - echo "${_value}" - ) -} - -function postMsgToWebhook(){ - ( - if [ -z "${WEBHOOK_URL}" ] && [ -f ${WEBHOOK_TEMPLATE} ]; then - return 0 - fi - - projectFriendlyName=${1} - projectName=${2} - statusCode=${3} - message=$(formatWebhookMsg "${4}") - curl -s -X POST -H 'Content-Type: application/json' --data "$(getWebhookPayload)" "${WEBHOOK_URL}" > /dev/null - ) -} - -function waitForAnyKey() { - read -n1 -s -r -p $'\e[33mWould you like to continue?\e[0m Press Ctrl-C to exit, or any other key to continue ...' key - echo -e \\n - - # If we get here the user did NOT press Ctrl-C ... - return 0 -} - -function runOnce() { - if [ ! 
-z "${RUN_ONCE}" ]; then - return 0 - else - return 1 - fi -} - -function getDatabaseName(){ - ( - _databaseSpec=${1} - _databaseName=$(echo ${_databaseSpec} | sed 's~^.*/\(.*$\)~\1~') - echo "${_databaseName}" - ) -} - -function getPort(){ - ( - _databaseSpec=${1} - _port=$(echo ${_databaseSpec} | sed "s~\(^.*:\)\(.*\)/\(.*$\)~\2~;s~${_databaseSpec}~~g;") - if [ -z ${_port} ]; then - _port=${DEFAULT_PORT} - fi - echo "${_port}" - ) -} - -function getHostname(){ - ( - _databaseSpec=${1} - _hostname=$(echo ${_databaseSpec} | sed 's~\(^.*\)/.*$~\1~;s~\(^.*\):.*$~\1~;') - echo "${_hostname}" - ) -} - -function getHostPrefix(){ - ( - _hostname=${1} - _hostPrefix=$(echo ${_hostname} | tr '[:lower:]' '[:upper:]' | sed "s~-~_~g") - echo "${_hostPrefix}" - ) -} - -function getHostUserParam(){ - ( - _hostname=${1} - _hostUser=$(getHostPrefix ${_hostname})_USER - echo "${_hostUser}" - ) -} - -function getHostPasswordParam(){ - ( - _hostname=${1} - _hostPassword=$(getHostPrefix ${_hostname})_PASSWORD - echo "${_hostPassword}" - ) -} - -function readConf(){ - ( - local OPTIND - local readCron - local quiet - unset readCron - unset quiet - while getopts cq FLAG; do - case $FLAG in - c ) readCron=1 ;; - q ) quiet=1 ;; - esac - done - shift $((OPTIND-1)) - - # Remove all comments and any blank lines - filters="/^[[:blank:]]*$/d;/^[[:blank:]]*#/d;/#.*/d;" - - if [ -z "${readCron}" ]; then - # Read in the database config ... - # - Remove any lines that do not match the expected database spec format(s) - # - <Hostname/>/<DatabaseName/> - # - <Hostname/>:<Port/>/<DatabaseName/> - filters="${filters}/^[a-zA-Z0-9_/-]*\(:[0-9]*\)\?\/[a-zA-Z0-9_/-]*$/!d;" - else - # Read in the cron config ... - # - Remove any lines that MATCH expected database spec format(s), - # leaving, what should be, cron tabs. - filters="${filters}/^[a-zA-Z0-9_/-]*\(:[0-9]*\)\?\/[a-zA-Z0-9_/-]*$/d;" - fi - - if [ -f ${BACKUP_CONF} ]; then - if [ -z "${quiet}" ]; then - echo "Reading backup config from ${BACKUP_CONF} ..."
>&2 - fi - _value=$(sed "${filters}" ${BACKUP_CONF}) - fi - - if [ -z "${_value}" ] && [ -z "${readCron}" ]; then - # Backward compatibility - if [ -z "${quiet}" ]; then - echo "Reading backup config from environment variables ..." >&2 - fi - _value="${DATABASE_SERVICE_NAME}:${DEFAULT_PORT}/${POSTGRESQL_DATABASE}" - fi - - echo "${_value}" - ) -} - -function makeDirectory() -{ - ( - # Creates directories with permissions reclusively. - # ${1} is the directory to be created - # Inspired by https://unix.stackexchange.com/questions/49263/recursive-mkdir - directory="${1}" - test $# -eq 1 || { echo "Function 'makeDirectory' can create only one directory (with it's parent directories)."; exit 1; } - test -d "${directory}" && return 0 - test -d "$(dirname "${directory}")" || { makeDirectory "$(dirname "${directory}")" || return 1; } - test -d "${directory}" || { mkdir --mode=g+w "${directory}" || return 1; } - return 0 - ) -} - -function finalizeBackup(){ - ( - _filename=${1} - _inProgressFilename="${_filename}${IN_PROGRESS_BACKUP_FILE_EXTENSION}" - _finalFilename="${_filename}${BACKUP_FILE_EXTENSION}" - - if [ -f ${_inProgressFilename} ]; then - mv "${_inProgressFilename}" "${_finalFilename}" - echo "${_finalFilename}" - fi - ) -} - -function ftpBackup(){ - ( - if [ -z "${FTP_URL}" ] ; then - return 0 - fi - - _filename=${1} - _filenameWithExtension="${_filename}${BACKUP_FILE_EXTENSION}" - echo "Transferring ${_filenameWithExtension} to ${FTP_URL}" - curl --ftp-ssl -T ${_filenameWithExtension} --user ${FTP_USER}:${FTP_PASSWORD} ${FTP_URL} - - if [ ${?} -eq 0 ]; then - logInfo "Successfully transferred ${_filenameWithExtension} to the FTP server" - else - logError "Failed to transfer ${_filenameWithExtension} with the exit code ${?}" - fi - ) -} - -function listExistingBackups(){ - ( - local _backupDir=${1:-${ROOT_BACKUP_DIR}} - local database - - local databases=$(readConf -q) - local output="\nDatabase,Current Size" - for database in ${databases}; do - 
output="${output}\n${database},$(getDbSize "${database}")" - done - - echoMagenta "\n================================================================================================================================" - echoMagenta "Current Backups:" - echoMagenta "\n$(echo -ne "${output}" | column -t -s ,)" - echoMagenta "\n$(df -h ${_backupDir})" - echoMagenta "--------------------------------------------------------------------------------------------------------------------------------" - du -ah --time ${_backupDir} - echoMagenta "================================================================================================================================\n" - ) -} - -function getNumBackupsToRetain(){ - ( - _count=0 - _backupType=${1:-$(getBackupType)} - - case "${_backupType}" in - daily) - _count=${DAILY_BACKUPS} - if (( ${_count} <= 0 )) && (( ${WEEKLY_BACKUPS} <= 0 )) && (( ${MONTHLY_BACKUPS} <= 0 )); then - _count=1 - fi - ;; - weekly) - _count=${WEEKLY_BACKUPS} - ;; - monthly) - _count=${MONTHLY_BACKUPS} - ;; - *) - _count=${NUM_BACKUPS} - ;; - esac - - echo "${_count}" - ) -} - -getDirectoryName(){ - ( - local path=${1} - path="${path%"${path##*[!/]}"}" - local name="${path##*/}" - echo "${name}" - ) -} - -getBackupTypeFromPath(){ - ( - local path=${1} - path="${path%"${path##*[!/]}"}" - path="$(dirname "${path}")" - local backupType=$(getDirectoryName "${path}") - echo "${backupType}" - ) -} - -function prune(){ - ( - local database - local backupDirs - local backupDir - local backupType - local backupTypes - local pruneBackup - unset backupTypes - unset backupDirs - unset pruneBackup - - local databases=$(readConf -q) - if rollingStrategy; then - backupTypes="daily weekly monthly" - for backupType in ${backupTypes}; do - backupDirs="${backupDirs} $(createBackupFolder -g ${backupType})" - done - else - backupDirs=$(createBackupFolder -g) - fi - - if [ ! -z "${_fromBackup}" ]; then - pruneBackup="$(findBackup "" "${_fromBackup}")" - while [ ! 
-z "${pruneBackup}" ]; do - echoYellow "\nAbout to delete backup file: ${pruneBackup}" - waitForAnyKey - rm -rfvd "${pruneBackup}" - - # Quietly delete any empty directories that are left behind ... - find ${ROOT_BACKUP_DIR} -type d -empty -delete > /dev/null 2>&1 - pruneBackup="$(findBackup "" "${_fromBackup}")" - done - else - for backupDir in ${backupDirs}; do - for database in ${databases}; do - unset backupType - if rollingStrategy; then - backupType=$(getBackupTypeFromPath "${backupDir}") - fi - pruneBackups "${backupDir}" "${database}" "${backupType}" - done - done - fi - ) -} - -function pruneBackups(){ - ( - _backupDir=${1} - _databaseSpec=${2} - _backupType=${3:-''} - _pruneDir="$(dirname "${_backupDir}")" - _numBackupsToRetain=$(getNumBackupsToRetain "${_backupType}") - _coreFilename=$(generateCoreFilename ${_databaseSpec}) - - if [ -d ${_pruneDir} ]; then - let _index=${_numBackupsToRetain}+1 - _filesToPrune=$(find ${_pruneDir}* -type f -printf '%T@ %p\n' | grep ${_coreFilename} | sort -r | tail -n +${_index} | sed 's~^.* \(.*$\)~\1~') - - if [ ! -z "${_filesToPrune}" ]; then - echoYellow "\nPruning ${_coreFilename} backups from ${_pruneDir} ..." - echo "${_filesToPrune}" | xargs rm -rfvd - - # Quietly delete any empty directories that are left behind ... - find ${ROOT_BACKUP_DIR} -type d -empty -delete > /dev/null 2>&1 - fi - fi - ) -} - -function getUsername(){ - ( - _databaseSpec=${1} - _hostname=$(getHostname ${_databaseSpec}) - _paramName=$(getHostUserParam ${_hostname}) - # Backward compatibility ... - _username="${!_paramName:-${POSTGRESQL_USER}}" - echo ${_username} - ) -} - -function getPassword(){ - ( - _databaseSpec=${1} - _hostname=$(getHostname ${_databaseSpec}) - _paramName=$(getHostPasswordParam ${_hostname}) - # Backward compatibility ... 
- _password="${!_paramName:-${POSTGRESQL_PASSWORD}}" - echo ${_password} - ) -} - -function backupDatabase(){ - ( - _databaseSpec=${1} - _fileName=${2} - - _hostname=$(getHostname ${_databaseSpec}) - _port=$(getPort ${_databaseSpec}) - _database=$(getDatabaseName ${_databaseSpec}) - _username=$(getUsername ${_databaseSpec}) - _password=$(getPassword ${_databaseSpec}) - _backupFile="${_fileName}${IN_PROGRESS_BACKUP_FILE_EXTENSION}" - - echoGreen "\nBacking up ${_databaseSpec} ..." - - touchBackupFile "${_backupFile}" - PGPASSWORD=${_password} pg_dump -Fp -h "${_hostname}" -p "${_port}" -U "${_username}" "${_database}" | gzip > ${_backupFile} - # Get the status code from pg_dump. ${?} would provide the status of the last command, gzip in this case. - _rtnCd=${PIPESTATUS[0]} - - if (( ${_rtnCd} != 0 )); then - rm -rfvd ${_backupFile} - fi - return ${_rtnCd} - ) -} - -function touchBackupFile() { - ( - # For safety, make absolutely certain the directory and file exist. - # The pruning process removes empty directories, so if there is an error - # during a backup the backup directory could be deleted. - _backupFile=${1} - _backupDir="${_backupFile%/*}" - makeDirectory ${_backupDir} && touch ${_backupFile} - ) -} - -function findBackup(){ - ( - _databaseSpec=${1} - _fileName=${2} - - # If no backup file was specified, find the most recent for the database. - # Otherwise treat the value provided as a filter to find the most recent backup file matching the filter. 
- if [ -z "${_fileName}" ]; then - _coreFilename=$(generateCoreFilename ${_databaseSpec}) - _fileName=$(find ${ROOT_BACKUP_DIR}* -type f -printf '%T@ %p\n' | grep ${_coreFilename} | sort | tail -n 1 | sed 's~^.* \(.*$\)~\1~') - else - _fileName=$(find ${ROOT_BACKUP_DIR}* -type f -printf '%T@ %p\n' | grep ${_fileName} | sort | tail -n 1 | sed 's~^.* \(.*$\)~\1~') - fi - - echo "${_fileName}" - ) -} - -function restoreDatabase(){ - ( - local OPTIND - local quiet - local localhost - unset quiet - unset localhost - while getopts ql FLAG; do - case $FLAG in - q ) quiet=1 ;; - l ) localhost=1 ;; - esac - done - shift $((OPTIND-1)) - - _databaseSpec=${1} - _fileName=${2} - _fileName=$(findBackup "${_databaseSpec}" "${_fileName}") - - if [ -z "${quiet}" ]; then - echoBlue "\nRestoring database ..." - echo -e "\nSettings:" - echo "- Database: ${_databaseSpec}" - - if [ ! -z "${_fileName}" ]; then - echo -e "- Backup file: ${_fileName}\n" - else - echoRed "- Backup file: No backup file found or specified. Cannot continue with the restore.\n" - exit 0 - fi - waitForAnyKey - fi - - _database=$(getDatabaseName ${_databaseSpec}) - _username=$(getUsername ${_databaseSpec}) - _password=$(getPassword ${_databaseSpec}) - if [ -z "${localhost}" ]; then - _hostname=$(getHostname ${_databaseSpec}) - _port=$(getPort ${_databaseSpec}) - else - _hostname="127.0.0.1" - _port="${DEFAULT_PORT}" - fi - - echo "Restoring to ${_hostname}:${_port} ..." - - if [ -z "${quiet}" ] && [ -z "${_adminPassword}" ]; then - # Ask for the Admin Password for the database, if it has not already been provided. 
- _msg="Admin password (${_databaseSpec}):" - _yellow='\033[1;33m' - _nc='\033[0m' # No Color - _message=$(echo -e "${_yellow}${_msg}${_nc}") - read -r -s -p $"${_message}" _adminPassword - echo -e "\n" - fi - - export PGPASSWORD=${_adminPassword} - local startTime=${SECONDS} - - # Drop - psql -h "${_hostname}" -p "${_port}" -ac "DROP DATABASE \"${_database}\";" - _rtnCd=${?} - echo - - # Create - if (( ${_rtnCd} == 0 )); then - psql -h "${_hostname}" -p "${_port}" -ac "CREATE DATABASE \"${_database}\";" - _rtnCd=${?} - echo - fi - - # Grant User Access - if (( ${_rtnCd} == 0 )); then - psql -h "${_hostname}" -p "${_port}" -ac "GRANT ALL ON DATABASE \"${_database}\" TO \"${_username}\";" - _rtnCd=${?} - echo - fi - - # Restore - if (( ${_rtnCd} == 0 )); then - echo "Restoring from backup ..." - gunzip -c "${_fileName}" | psql -v ON_ERROR_STOP=1 -x -h "${_hostname}" -p "${_port}" -d "${_database}" - # Get the status code from psql specifically. ${?} would only provide the status of the last command, psql in this case. - _rtnCd=${PIPESTATUS[1]} - fi - - local duration=$(($SECONDS - $startTime)) - echo -e "Restore complete - Elapsed time: $(($duration/3600))h:$(($duration%3600/60))m:$(($duration%60))s"\\n - - # List tables - if [ -z "${quiet}" ] && (( ${_rtnCd} == 0 )); then - psql -h "${_hostname}" -p "${_port}" -d "${_database}" -c "\d" - _rtnCd=${?} - fi - - return ${_rtnCd} - ) -} - -function isLastDayOfMonth(){ - ( - _date=${1:-$(date)} - _day=$(date -d "${_date}" +%-d) - _month=$(date -d "${_date}" +%-m) - _lastDayOfMonth=$(date -d "${_month}/1 + 1 month - 1 day" "+%-d") - - if (( ${_day} == ${_lastDayOfMonth} )); then - return 0 - else - return 1 - fi - ) -} - -function isLastDayOfWeek(){ - ( - # We're calling Sunday the last dayt of the week in this case. 
- _date=${1:-$(date)} - _dayOfWeek=$(date -d "${_date}" +%u) - - if (( ${_dayOfWeek} == 7 )); then - return 0 - else - return 1 - fi - ) -} - -function getBackupType(){ - ( - _backupType="" - if rollingStrategy; then - if isLastDayOfMonth && (( "${MONTHLY_BACKUPS}" > 0 )); then - _backupType="monthly" - elif isLastDayOfWeek; then - _backupType="weekly" - else - _backupType="daily" - fi - fi - echo "${_backupType}" - ) -} - -function createBackupFolder(){ - ( - local OPTIND - local genOnly - unset genOnly - while getopts g FLAG; do - case $FLAG in - g ) genOnly=1 ;; - esac - done - shift $((OPTIND-1)) - - _backupTypeDir="${1:-$(getBackupType)}" - if [ ! -z "${_backupTypeDir}" ]; then - _backupTypeDir=${_backupTypeDir}/ - fi - - _backupDir="${ROOT_BACKUP_DIR}${_backupTypeDir}`date +\%Y-\%m-\%d`/" - - # Don't actually create the folder if we're just generating it for printing the configuation. - if [ -z "${genOnly}" ]; then - echo "Making backup directory ${_backupDir} ..." >&2 - if ! makeDirectory ${_backupDir}; then - logError "Failed to create backup directory ${_backupDir}." 
- exit 1; - fi; - fi - - echo ${_backupDir} - ) -} - -function generateFilename(){ - ( - _backupDir=${1} - _databaseSpec=${2} - _coreFilename=$(generateCoreFilename ${_databaseSpec}) - _filename="${_backupDir}${_coreFilename}_`date +\%Y-\%m-\%d_%H-%M-%S`" - echo ${_filename} - ) -} - -function generateCoreFilename(){ - ( - _databaseSpec=${1} - _hostname=$(getHostname ${_databaseSpec}) - _database=$(getDatabaseName ${_databaseSpec}) - _coreFilename="${_hostname}-${_database}" - echo ${_coreFilename} - ) -} - -function rollingStrategy(){ - if [[ "${BACKUP_STRATEGY}" == "rolling" ]] && (( "${WEEKLY_BACKUPS}" >= 0 )) && (( "${MONTHLY_BACKUPS}" >= 0 )); then - return 0 - else - return 1 - fi -} - -function dailyStrategy(){ - if [[ "${BACKUP_STRATEGY}" == "daily" ]] || (( "${WEEKLY_BACKUPS}" < 0 )); then - return 0 - else - return 1 - fi -} - -function formatList(){ - ( - filters='s~^~ - ~;' - _value=$(echo "${1}" | sed "${filters}") - echo "${_value}" - ) -} - -function listSettings(){ - _backupDirectory=${1:-$(createBackupFolder -g)} - _databaseList=${2:-$(readConf -q)} - _yellow='\e[33m' - _nc='\e[0m' # No Color - _notConfigured="${_yellow}not configured${_nc}" - - echo -e \\n"Settings:" - _mode=$(getMode 2>/dev/null) - echo "- Run mode: ${_mode}" - if rollingStrategy; then - echo "- Backup strategy: rolling" - fi - if dailyStrategy; then - echo "- Backup strategy: daily" - fi - if ! rollingStrategy && ! 
dailyStrategy; then - echoYellow "- Backup strategy: Unknown backup strategy; ${BACKUP_STRATEGY}" - _configurationError=1 - fi - backupType=$(getBackupType) - if [ -z "${backupType}" ]; then - echo "- Current backup type: flat daily" - else - echo "- Current backup type: ${backupType}" - fi - echo "- Backups to retain:" - if rollingStrategy; then - echo " - Daily: $(getNumBackupsToRetain daily)" - echo " - Weekly: $(getNumBackupsToRetain weekly)" - echo " - Monthly: $(getNumBackupsToRetain monthly)" - else - echo " - Total: $(getNumBackupsToRetain)" - fi - echo "- Backup folder: ${_backupDirectory}" - if [[ "${_mode}" != ${ONCE} ]]; then - if [[ "${_mode}" == ${CRON} ]] || [[ "${_mode}" == ${SCHEDULED} ]]; then - _backupSchedule=$(readConf -cq) - echo "- Time Zone: $(date +"%Z %z")" - fi - _backupSchedule=$(formatList "${_backupSchedule:-${BACKUP_PERIOD}}") - echo "- Schedule:" - echo "${_backupSchedule}" - fi - _databaseList=$(formatList "${_databaseList}") - echo "- Databases:" - echo "${_databaseList}" - echo - if [ -z "${FTP_URL}" ]; then - echo -e "- FTP server: ${_notConfigured}" - else - echo "- FTP server: ${FTP_URL}" - fi - if [ -z "${WEBHOOK_URL}" ]; then - echo -e "- Webhook Endpoint: ${_notConfigured}" - else - echo "- Webhook Endpoint: ${WEBHOOK_URL}" - fi - if [ -z "${ENVIRONMENT_FRIENDLY_NAME}" ]; then - echo -e "- Environment Friendly Name: ${_notConfigured}" - else - echo -e "- Environment Friendly Name: ${ENVIRONMENT_FRIENDLY_NAME}" - fi - if [ -z "${ENVIRONMENT_NAME}" ]; then - echo -e "- Environment Name (Id): ${_notConfigured}" - else - echo "- Environment Name (Id): ${ENVIRONMENT_NAME}" - fi - - if [ ! -z "${_configurationError}" ]; then - logError "\nConfiguration error! The script will exit." - sleep 5 - exit 1 - fi - echo -} - -function isInstalled(){ - rtnVal=$(type "$1" >/dev/null 2>&1) - rtnCd=$? 
- if [ ${rtnCd} -ne 0 ]; then - return 1 - else - return 0 - fi -} - -function cronMode(){ - ( - cronTabs=$(readConf -cq) - if isInstalled "go-crond" && [ ! -z "${cronTabs}" ]; then - return 0 - else - return 1 - fi - ) -} - -function isScheduled(){ - ( - if [ ! -z "${SCHEDULED_RUN}" ]; then - return 0 - else - return 1 - fi - ) -} - -function isScripted(){ - ( - if [ ! -z "${SCHEDULED_RUN}" ]; then - return 0 - else - return 1 - fi - ) -} - -function restoreMode(){ - ( - if [ ! -z "${_restoreDatabase}" ]; then - return 0 - else - return 1 - fi - ) -} - -function verifyMode(){ - ( - if [ ! -z "${_verifyBackup}" ]; then - return 0 - else - return 1 - fi - ) -} - -function pruneMode(){ - ( - if [ ! -z "${RUN_PRUNE}" ]; then - return 0 - else - return 1 - fi - ) -} - -function getMode(){ - ( - unset _mode - - if pruneMode; then - _mode="${PRUNE}" - fi - - if [ -z "${_mode}" ] && restoreMode; then - _mode="${RESTORE}" - fi - - if [ -z "${_mode}" ] && verifyMode; then - # Determine if this is a scheduled verification or a manual one. - if isScheduled; then - if cronMode; then - _mode="${SCHEDULED_VERIFY}" - else - _mode="${ERROR}" - logError "Scheduled mode cannot be used without cron being installed and at least one cron tab being defined in ${BACKUP_CONF}." - fi - else - _mode="${VERIFY}" - fi - fi - - if [ -z "${_mode}" ] && runOnce; then - _mode="${ONCE}" - fi - - if [ -z "${_mode}" ] && isScheduled; then - if cronMode; then - _mode="${SCHEDULED}" - else - _mode="${ERROR}" - logError "Scheduled mode cannot be used without cron being installed and at least one cron tab being defined in ${BACKUP_CONF}." - fi - fi - - if [ -z "${_mode}" ] && cronMode; then - _mode="${CRON}" - fi - - if [ -z "${_mode}" ]; then - _mode="${LEGACY}" - fi - - echo "${_mode}" - ) -} - -function runBackups(){ - ( - echoBlue "\nStarting backup process ..." 
- databases=$(readConf) - backupDir=$(createBackupFolder) - listSettings "${backupDir}" "${databases}" - - for database in ${databases}; do - - local startTime=${SECONDS} - filename=$(generateFilename "${backupDir}" "${database}") - backupDatabase "${database}" "${filename}" - rtnCd=${?} - local duration=$(($SECONDS - $startTime)) - local elapsedTime="\n\nElapsed time: $(($duration/3600))h:$(($duration%3600/60))m:$(($duration%60))s - Status Code: ${rtnCd}" - - if (( ${rtnCd} == 0 )); then - backupPath=$(finalizeBackup "${filename}") - dbSize=$(getDbSize "${database}") - backupSize=$(getFileSize "${backupPath}") - logInfo "Successfully backed up ${database}.\nBackup written to ${backupPath}.\nDatabase Size: ${dbSize}\nBackup Size: ${backupSize}${elapsedTime}" - ftpBackup "${filename}" - pruneBackups "${backupDir}" "${database}" - else - logError "Failed to backup ${database}.${elapsedTime}" - fi - done - - listExistingBackups ${ROOT_BACKUP_DIR} - ) -} - -function startCron(){ - logInfo "Starting backup server in cron mode ..." - listSettings - echoBlue "Starting go-crond as a background task ...\n" - CRON_CMD="go-crond -v --default-user=${UID} --allow-unprivileged ${BACKUP_CONF}" - exec ${CRON_CMD} & - wait -} - -function startLegacy(){ - ( - while true; do - runBackups - - echoYellow "Sleeping for ${BACKUP_PERIOD} ...\n" - sleep ${BACKUP_PERIOD} - done - ) -} - -function startServer(){ - ( - _databaseSpec=${1} - - # Start a local PostgreSql instance - POSTGRESQL_DATABASE=$(getDatabaseName "${_databaseSpec}") \ - POSTGRESQL_USER=$(getUsername "${_databaseSpec}") \ - POSTGRESQL_PASSWORD=$(getPassword "${_databaseSpec}") \ - run-postgresql >/dev/null 2>&1 & - - # Wait for server to start ... - local startTime=${SECONDS} - rtnCd=0 - printf "waiting for server to start" - while ! pingDbServer ${_databaseSpec}; do - printf "."
- local duration=$(($SECONDS - $startTime)) - if (( ${duration} >= ${DATABASE_SERVER_TIMEOUT} )); then - echoRed "\nThe server failed to start within ${duration} seconds.\n" - rtnCd=1 - break - fi - sleep 1 - done - - return ${rtnCd} - ) -} - -function stopServer(){ - ( - # Stop the local PostgreSql instance - pg_ctl stop -D /var/lib/pgsql/data/userdata - - # Delete the database files and configuration - echo -e "Cleaning up ...\n" - rm -rf /var/lib/pgsql/data/userdata - ) -} - -function pingDbServer(){ - ( - _databaseSpec=${1} - _database=$(getDatabaseName "${_databaseSpec}") - _user=$(getUsername "${_databaseSpec}") - if psql -h 127.0.0.1 -U ${_user} -q -d ${_database} -c 'SELECT 1' >/dev/null 2>&1; then - return 0 - else - return 1 - fi - ) -} - -function verifyBackups(){ - ( - local OPTIND - local flags - unset flags - while getopts q FLAG; do - case $FLAG in - * ) flags+="-${FLAG} " ;; - esac - done - shift $((OPTIND-1)) - - _databaseSpec=${1} - _fileName=${2} - if [[ "${_databaseSpec}" == "all" ]]; then - databases=$(readConf -q) - else - databases=${_databaseSpec} - fi - - for database in ${databases}; do - verifyBackup ${flags} "${database}" "${_fileName}" - done - ) -} - -function verifyBackup(){ - ( - local OPTIND - local quiet - unset quiet - while getopts q FLAG; do - case $FLAG in - q ) quiet=1 ;; - esac - done - shift $((OPTIND-1)) - - _databaseSpec=${1} - _fileName=${2} - _fileName=$(findBackup "${_databaseSpec}" "${_fileName}") - - echoBlue "\nVerifying backup ..." - echo -e "\nSettings:" - echo "- Database: ${_databaseSpec}" - - if [ ! -z "${_fileName}" ]; then - echo -e "- Backup file: ${_fileName}\n" - else - echoRed "- Backup file: No backup file found or specified. 
Cannot continue with the backup verification.\n" - exit 0 - fi - - if [ -z "${quiet}" ]; then - waitForAnyKey - fi - - local startTime=${SECONDS} - startServer "${_databaseSpec}" - rtnCd=${?} - - # Restore the database - if (( ${rtnCd} == 0 )); then - echo - echo "Restoring from backup ..." - if [ -z "${quiet}" ]; then - restoreDatabase -ql "${_databaseSpec}" "${_fileName}" - rtnCd=${?} - else - # Filter out stdout, keep stderr - restoreLog=$(restoreDatabase -ql "${_databaseSpec}" "${_fileName}" 2>&1 >/dev/null) - rtnCd=${?} - - if [ ! -z "${restoreLog}" ]; then - restoreLog="\n\nThe following issues were encountered during backup verification;\n${restoreLog}" - fi - fi - fi - - # Ensure there are tables in the databse and general queries work - if (( ${rtnCd} == 0 )); then - _hostname="127.0.0.1" - _port="${DEFAULT_PORT}" - _database=$(getDatabaseName ${_databaseSpec}) - tables=$(psql -h "${_hostname}" -p "${_port}" -d "${_database}" -t -c "SELECT table_name FROM information_schema.tables WHERE table_schema='${TABLE_SCHEMA}' AND table_type='BASE TABLE';") - rtnCd=${?} - fi - - # Get the size of the restored database - if (( ${rtnCd} == 0 )); then - size=$(getDbSize -l "${_databaseSpec}") - rtnCd=${?} - fi - - if (( ${rtnCd} == 0 )); then - numResults=$(echo "${tables}"| wc -l) - if [[ ! -z "${tables}" ]] && (( numResults >= 1 )); then - # All good - verificationLog="\nThe restored database contained ${numResults} tables, and is ${size} in size." - else - # Not so good - verificationLog="\nNo tables were found in the restored database." 
- rtnCd="3" - fi - fi - - stopServer - local duration=$(($SECONDS - $startTime)) - local elapsedTime="\n\nElapsed time: $(($duration/3600))h:$(($duration%3600/60))m:$(($duration%60))s - Status Code: ${rtnCd}" - - if (( ${rtnCd} == 0 )); then - logInfo "Successfully verified backup; ${_fileName}${verificationLog}${restoreLog}${elapsedTime}" - else - logError "Backup verification failed; ${_fileName}${verificationLog}${restoreLog}${elapsedTime}" - fi - - return ${rtnCd} - ) -} - -function getFileSize(){ - ( - _filename=${1} - echo $(du -h "${_filename}" | awk '{print $1}') - ) -} - -function getDbSize(){ - ( - local OPTIND - local localhost - unset localhost - while getopts l FLAG; do - case $FLAG in - l ) localhost=1 ;; - esac - done - shift $((OPTIND-1)) - - _databaseSpec=${1} - _database=$(getDatabaseName ${_databaseSpec}) - _username=$(getUsername ${_databaseSpec}) - _password=$(getPassword ${_databaseSpec}) - if [ -z "${localhost}" ]; then - _hostname=$(getHostname ${_databaseSpec}) - _port=$(getPort ${_databaseSpec}) - else - _hostname="127.0.0.1" - _port="${DEFAULT_PORT}" - fi - - if isInstalled "psql"; then - size=$(PGPASSWORD=${_password} psql -h "${_hostname}" -p "${_port}" -U "${_username}" -d "${_database}" -t -c "SELECT pg_size_pretty(pg_database_size(current_database())) as size;") - rtnCd=${?} - else - size="not found" - rtnCd=1 - fi - echo "${size}" - return ${rtnCd} - ) -} - -function shutDown() { - for jobId in $(jobs | awk -F '[][]' '{print $2}' ) ; do - echo "Shutting down background job '${jobId}' ..." - kill %${jobId} - done - - echo "Waiting for any background jobs to complete ..." 
- wait -} -# ====================================================================================== +#!/bin/bash # ====================================================================================== -# Set Defaults +# Imports # -------------------------------------------------------------------------------------- -export BACKUP_FILE_EXTENSION=".sql.gz" -export IN_PROGRESS_BACKUP_FILE_EXTENSION=".sql.gz.in_progress" -export DEFAULT_PORT=${POSTGRESQL_PORT_NUM:-5432} -export DATABASE_SERVICE_NAME=${DATABASE_SERVICE_NAME:-postgresql} -export POSTGRESQL_DATABASE=${POSTGRESQL_DATABASE:-my_postgres_db} -export TABLE_SCHEMA=${TABLE_SCHEMA:-public} - -# Supports: -# - daily -# - rolling -export BACKUP_STRATEGY=$(echo "${BACKUP_STRATEGY:-daily}" | tr '[:upper:]' '[:lower:]') -export BACKUP_PERIOD=${BACKUP_PERIOD:-1d} -export ROOT_BACKUP_DIR=${ROOT_BACKUP_DIR:-${BACKUP_DIR:-/backups/}} -export BACKUP_CONF=${BACKUP_CONF:-backup.conf} - -# Used to prune the total number of backup when using the daily backup strategy. -# Default provides for one full month of backups -export NUM_BACKUPS=${NUM_BACKUPS:-31} - -# Used to prune the total number of backup when using the rolling backup strategy. -# Defaults provide for: -# - A week's worth of daily backups -# - A month's worth of weekly backups -# - The previous month's backup -export DAILY_BACKUPS=${DAILY_BACKUPS:-6} -export WEEKLY_BACKUPS=${WEEKLY_BACKUPS:-4} -export MONTHLY_BACKUPS=${MONTHLY_BACKUPS:-1} - -# Webhook defaults -WEBHOOK_TEMPLATE=${WEBHOOK_TEMPLATE:-webhook-template.json} - -# Modes: -export ONCE="once" -export SCHEDULED="scheduled" -export RESTORE="restore" -export VERIFY="verify" -export CRON="cron" -export LEGACY="legacy" -export ERROR="error" -export SCHEDULED_VERIFY="scheduled-verify" -export PRUNE="prune" - -# Other: -export DATABASE_SERVER_TIMEOUT=${DATABASE_SERVER_TIMEOUT:-30} +. ./backup.usage # Usage information +. ./backup.logging # Logging functions +. ./backup.config.utils # Configuration functions +. 
./backup.container.utils # Container Utility Functions +. ./backup.ftp # FTP Support functions +. ./backup.misc.utils # General Utility Functions +. ./backup.file.utils # File Utility Functions +. ./backup.utils # Primary Database Backup and Restore Functions +. ./backup.server.utils # Backup Server Utility Functions +. ./backup.settings # Default Settings # ====================================================================================== -# ================================================================================================================= +# ====================================================================================== # Initialization: -# ----------------------------------------------------------------------------------------------------------------- -trap shutDown EXIT INT TERM - -while getopts clr:v:f:1spha: FLAG; do +# -------------------------------------------------------------------------------------- +trap shutDown EXIT TERM + +# Load database plug-in based on the container type ... +. ./backup.${CONTAINER_TYPE}.plugin > /dev/null 2>&1 +if [[ ${?} != 0 ]]; then + echoRed "backup.${CONTAINER_TYPE}.plugin not found." + + # Default to null plugin. + export CONTAINER_TYPE=${UNKNOWN_DB} + . ./backup.${CONTAINER_TYPE}.plugin > /dev/null 2>&1 +fi + +while getopts nclr:v:f:1spha: FLAG; do case $FLAG in + n) + # Allow null database plugin ... + # Without this flag loading the null plugin is considered a configuration error. + # The null plugin can be used for testing. + export _allowNullPlugin=1 + ;; c) echoBlue "\nListing configuration settings ..." 
listSettings @@ -1404,11 +81,11 @@ while getopts clr:v:f:1spha: FLAG; do esac done shift $((OPTIND-1)) -# ================================================================================================================= +# ====================================================================================== -# ================================================================================================================= +# ====================================================================================== # Main Script -# ----------------------------------------------------------------------------------------------------------------- +# -------------------------------------------------------------------------------------- case $(getMode) in ${ONCE}) runBackups @@ -1426,7 +103,9 @@ case $(getMode) in restoreFlags="-q" fi - restoreDatabase ${restoreFlags} "${_restoreDatabase}" "${_fromBackup}" + if validateOperation "${_restoreDatabase}" "${RESTORE}"; then + restoreDatabase ${restoreFlags} "${_restoreDatabase}" "${_fromBackup}" + fi ;; ${VERIFY}) @@ -1450,7 +129,7 @@ case $(getMode) in ;; ${ERROR}) - echoYellow "A configuration error has occurred, review the details above." + echoRed "A configuration error has occurred, review the details above." 
 usage
 ;;
 *)
@@ -1458,4 +137,4 @@ case $(getMode) in
 usage
 ;;
 esac
-# =================================================================================================================
+# ======================================================================================
\ No newline at end of file
diff --git a/docker/backup.usage b/docker/backup.usage
new file mode 100644
index 0000000..32238fd
--- /dev/null
+++ b/docker/backup.usage
@@ -0,0 +1,133 @@
+#!/bin/bash
+# =================================================================================================================
+# Usage:
+# -----------------------------------------------------------------------------------------------------------------
+function usage () {
+  cat <<-EOF
+
+  Automated backup script for PostgreSQL and MongoDB databases.
+
+  There are two modes of scheduling backups:
+    - Cron Mode:
+      - Allows one or more schedules to be defined as cron tabs in ${BACKUP_CONF}.
+      - If cron (go-crond) is installed (which is handled by the Docker file) and at least one cron tab is defined, the script will start up in Cron Mode,
+        otherwise it will default to Legacy Mode.
+      - Refer to ${BACKUP_CONF} for additional details and examples of using cron scheduling.
+
+    - Legacy Mode:
+      - Uses a simple sleep command to set the schedule based on the setting of BACKUP_PERIOD; defaults to ${BACKUP_PERIOD}.
+
+  Refer to the project documentation for additional details on how to use this script.
+    - https://github.com/BCDevOps/backup-container
+
+  Usage:
+    $0 [options]
+
+  Standard Options:
+  =================
+    -h prints this usage documentation.
+
+    -1 run once.
+       Performs a single set of backups and exits.
+
+    -s run in scheduled/silent (no questions asked) mode.
+       A flag to be used by cron scheduled backups to indicate they are being run on a schedule.
+       Requires cron (go-crond) to be installed and at least one cron tab to be defined in ${BACKUP_CONF}.
+       Refer to ${BACKUP_CONF} for additional details and examples of using cron scheduling.
+
+    -l lists existing backups.
+       Great for listing the available backups for a restore.
+
+    -c lists the current configuration settings and exits.
+       Great for confirming the current settings, and listing the databases included in the backup schedule.
+
+    -p prune backups
+       Used to manually prune backups.
+       This can be used with the '-f' option, see below, to prune specific backups or sets of backups.
+       Use caution when using the '-f' option.
+
+  Verify Options:
+  ================
+    The verify process performs the following basic operations:
+      - Start a local database server instance.
+      - Restore the selected backup locally, watching for errors.
+      - Run a table query on the restored database as a simple test to ensure tables were restored
+        and queries against the database succeed without error.
+      - Stop the local database server instance.
+      - Delete the local database and configuration.
+
+    -v <DatabaseSpec/all>; in the form [<DatabaseType>=]<Hostname>/<DatabaseName>, or [<DatabaseType>=]<Hostname>:<Port>/<DatabaseName>
+       where <DatabaseType> defaults to the container's database type if omitted;
+       <DatabaseType> must be one of "postgres" or "mongo";
+       <DatabaseType> must be specified in a mixed database container project.
+
+       Triggers verify mode and starts the verification process on the specified database(s).
+
+       Example:
+         $0 -v postgresql=postgresql:5432/TheOrgBook_Database
+           - Would start the verification process on the database using the most recent backup for the database.
+
+         $0 -v all
+           - Verify the most recent backup of all databases.
+
+    -f <BackupFileFilter>; an OPTIONAL filter to use to find/identify the backup file to restore.
+       Refer to the same option under 'Restore Options' for details.
+
+  Restore Options:
+  ================
+    The restore process performs the following basic operations:
+      - Drop and recreate the selected database.
+      - Grant the database user access to the recreated database.
+      - Restore the database from the selected backup file.
+
+    Have the 'Admin' (postgres or mongo) password handy; the script will ask you for it during the restore.
+
+    When in restore mode, the script will list the settings it will use and wait for your confirmation to continue.
+    This provides you with an opportunity to ensure you have selected the correct database and backup file
+    for the job.
+
+    Restore mode will allow you to restore a database to a different location (host, and/or database name) provided
+    it can contact the host and you can provide the appropriate credentials. If you choose to do this, you will need
+    to provide a file filter using the '-f' option, since the script will likely not be able to determine which backup
+    file you would want to use. This functionality provides a convenient way to test your backups or migrate your
+    database/data without affecting the original database.
+
+    -r <DatabaseSpec>; in the form [<DatabaseType>=]<Hostname>/<DatabaseName>, or [<DatabaseType>=]<Hostname>:<Port>/<DatabaseName>
+       where <DatabaseType> defaults to the container's database type if omitted;
+       <DatabaseType> must be one of "postgres" or "mongo";
+       <DatabaseType> must be specified in a mixed database container project.
+
+       Triggers restore mode and starts the restore process on the specified database.
+
+       Example:
+         $0 -r postgresql:5432/TheOrgBook_Database/postgres
+           - Would start the restore process on the database using the most recent backup for the database.
+
+    -f <BackupFileFilter>; an OPTIONAL filter to use to find/identify the backup file to restore.
+       This can be a full or partial file specification. When only part of a filename is specified the restore process
+       attempts to find the most recent backup matching the filter.
+       If not specified, the restore process attempts to locate the most recent backup file for the specified database.
+
+       Examples:
+         $0 -r postgresql=wallet-db/test_db/postgres -f wallet-db-tob_holder
+           - Would try to find the latest backup matching on the partial file name provided.
+
+         $0 -r wallet-db/test_db/postgres -f /backups/daily/2018-11-07/wallet-db-tob_holder_2018-11-07_23-59-35.sql.gz
+           - Would use the specific backup file.
+
+         $0 -r wallet-db/test_db/postgres -f wallet-db-tob_holder_2018-11-07_23-59-35.sql.gz
+           - Would use the specific backup file regardless of its location in the root backup folder.
+
+    -s OPTIONAL flag. Use with caution. Could cause unintentional data loss.
+       Run the restore in scripted/scheduled mode. In this mode the restore will not ask you to confirm the settings,
+       nor will it ask you for the 'Admin' password. It will simply attempt to restore a database from a backup.
+       It's up to you to ensure it's targeting the correct database and using the correct backup file.
+
+    -a <AdminPassword>; an OPTIONAL flag used to specify the 'Admin' password.
+       Use with the '-s' flag to specify the 'Admin' password. Under normal usage conditions it's better to supply the
+       password when prompted so it is not visible on the console.
+
+EOF
+exit 1
+}
+# =================================================================================================================
\ No newline at end of file
diff --git a/docker/backup.utils b/docker/backup.utils
new file mode 100644
index 0000000..ed54af7
--- /dev/null
+++ b/docker/backup.utils
@@ -0,0 +1,268 @@
+#!/bin/bash
+# =================================================================================================================
+# Primary Database Backup and Restore Functions:
+# -----------------------------------------------------------------------------------------------------------------
+function backupDatabase(){
+  (
+    _databaseSpec=${1}
+    _fileName=${2}
+
+    _backupFile="${_fileName}${IN_PROGRESS_BACKUP_FILE_EXTENSION}"
+
+    touchBackupFile "${_backupFile}"
+    onBackupDatabase "${_databaseSpec}" "${_backupFile}"
+    _rtnCd=${?}
+
+    if (( ${_rtnCd} != 0 )); then
+      rm -rfvd ${_backupFile}
+    fi
+
+    return ${_rtnCd}
+  )
+}
+
+function restoreDatabase(){
+  (
+    local OPTIND
+    local quiet
+    local localhost
+    unset quiet
+    unset localhost
+    unset flags
+    while getopts ql FLAG; do
+      case $FLAG in
+        q )
+          quiet=1
+          flags+="-${FLAG} "
+          ;;
+        * ) flags+="-${FLAG} ";;
+      esac
+    done
+    shift $((OPTIND-1))
+
+    _databaseSpec=${1}
+    _fileName=${2}
+    _fileName=$(findBackup "${_databaseSpec}" "${_fileName}")
+
+    if [ -z "${quiet}" ]; then
+      echoBlue "\nRestoring database ..."
+      echo -e "\nSettings:"
+      echo "- Database: ${_databaseSpec}"
+
+      if [ ! -z "${_fileName}" ]; then
+        echo -e "- Backup file: ${_fileName}\n"
+      else
+        echoRed "- Backup file: No backup file found or specified. Cannot continue with the restore.\n"
+        exit 1
+      fi
+      waitForAnyKey
+    fi
+
+    if [ -z "${quiet}" ] && [ -z "${_adminPassword}" ]; then
+      # Ask for the Admin Password for the database, if it has not already been provided.
+      _msg="Admin password (${_databaseSpec}):"
+      _yellow='\033[1;33m'
+      _nc='\033[0m' # No Color
+      _message=$(echo -e "${_yellow}${_msg}${_nc}")
+      read -r -s -p $"${_message}" _adminPassword
+      echo -e "\n"
+    fi
+
+    local startTime=${SECONDS}
+    onRestoreDatabase ${flags} "${_databaseSpec}" "${_fileName}" "${_adminPassword}"
+    _rtnCd=${?}
+
+    local duration=$(($SECONDS - $startTime))
+    if (( ${_rtnCd} == 0 )); then
+      echoGreen "\nRestore complete - Elapsed time: $(($duration/3600))h:$(($duration%3600/60))m:$(($duration%60))s\n"
+    else
+      echoRed "\nRestore failed.\n" >&2
+    fi
+
+    return ${_rtnCd}
+  )
+}
+
+function runBackups(){
+  (
+    echoBlue "\nStarting backup process ..."
+    databases=$(readConf)
+    backupDir=$(createBackupFolder)
+    listSettings "${backupDir}" "${databases}"
+
+    for database in ${databases}; do
+      if isForContainerType ${database}; then
+        local startTime=${SECONDS}
+        filename=$(generateFilename "${backupDir}" "${database}")
+        backupDatabase "${database}" "${filename}"
+        rtnCd=${?}
+        local duration=$(($SECONDS - $startTime))
+        local elapsedTime="\n\nElapsed time: $(($duration/3600))h:$(($duration%3600/60))m:$(($duration%60))s - Status Code: ${rtnCd}"
+
+        if (( ${rtnCd} == 0 )); then
+          backupPath=$(finalizeBackup "${filename}")
+          dbSize=$(getDbSize "${database}")
+          backupSize=$(getFileSize "${backupPath}")
+          logInfo "Successfully backed up ${database}.\nBackup written to ${backupPath}.\nDatabase Size: ${dbSize}\nBackup Size: ${backupSize}${elapsedTime}"
+          ftpBackup "${filename}"
+          pruneBackups "${backupDir}" "${database}"
+        else
+          logError "Failed to backup ${database}.${elapsedTime}"
+        fi
+      fi
+    done
+
+    listExistingBackups ${ROOT_BACKUP_DIR}
+  )
+}
+
+function startServer(){
+  (
+    # Start a local server instance ...
+    onStartServer ${@}
+
+    # Wait for server to start ...
+    local startTime=${SECONDS}
+    rtnCd=0
+    printf "waiting for server to start"
+    while ! pingDbServer ${@}; do
+      printf "."
+ local duration=$(($SECONDS - $startTime)) + if (( ${duration} >= ${DATABASE_SERVER_TIMEOUT} )); then + echoRed "\nThe server failed to start within ${duration} seconds.\n" + rtnCd=1 + break + fi + sleep 1 + done + echo + return ${rtnCd} + ) +} + +function stopServer(){ + ( + onStopServer ${@} + ) +} + +function pingDbServer(){ + ( + onPingDbServer ${@} + return ${?} + ) +} + +function verifyBackups(){ + ( + local OPTIND + local flags + unset flags + while getopts q FLAG; do + case $FLAG in + * ) flags+="-${FLAG} " ;; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _fileName=${2} + if [[ "${_databaseSpec}" == "all" ]]; then + databases=$(readConf -q) + else + databases=${_databaseSpec} + fi + + for database in ${databases}; do + if isForContainerType ${database}; then + verifyBackup ${flags} "${database}" "${_fileName}" + fi + done + ) +} + +function verifyBackup(){ + ( + local OPTIND + local quiet + unset quiet + while getopts q FLAG; do + case $FLAG in + q ) quiet=1 ;; + esac + done + shift $((OPTIND-1)) + + _databaseSpec=${1} + _fileName=${2} + _fileName=$(findBackup "${_databaseSpec}" "${_fileName}") + + echoBlue "\nVerifying backup ..." + echo -e "\nSettings:" + echo "- Database: ${_databaseSpec}" + + if [ ! -z "${_fileName}" ]; then + echo -e "- Backup file: ${_fileName}\n" + else + echoRed "- Backup file: No backup file found or specified. Cannot continue with the backup verification.\n" + exit 0 + fi + + if [ -z "${quiet}" ]; then + waitForAnyKey + fi + + local startTime=${SECONDS} + startServer -l "${_databaseSpec}" + rtnCd=${?} + + # Restore the database + if (( ${rtnCd} == 0 )); then + if [ -z "${quiet}" ]; then + restoreDatabase -ql "${_databaseSpec}" "${_fileName}" + rtnCd=${?} + else + # Filter out stdout, keep stderr + echo "Restoring from backup ..." + restoreLog=$(restoreDatabase -ql "${_databaseSpec}" "${_fileName}" 2>&1 >/dev/null) + rtnCd=${?} + + if [ ! 
-z "${restoreLog}" ] && (( ${rtnCd} == 0 )); then
+          echo ${restoreLog}
+          unset restoreLog
+        elif [ ! -z "${restoreLog}" ] && (( ${rtnCd} != 0 )); then
+          restoreLog="\n\nThe following issues were encountered during backup verification;\n${restoreLog}"
+        fi
+      fi
+    fi
+
+    # Ensure there are tables in the database and general queries work
+    if (( ${rtnCd} == 0 )); then
+      verificationLog=$(onVerifyBackup "${_databaseSpec}")
+      rtnCd=${?}
+    fi
+
+    # Stop the database server
+    stopServer "${_databaseSpec}"
+    local duration=$(($SECONDS - $startTime))
+    local elapsedTime="\n\nElapsed time: $(($duration/3600))h:$(($duration%3600/60))m:$(($duration%60))s - Status Code: ${rtnCd}"
+
+    if (( ${rtnCd} == 0 )); then
+      logInfo "Successfully verified backup: ${_fileName}${verificationLog}${restoreLog}${elapsedTime}"
+    else
+      logError "Backup verification failed: ${_fileName}${verificationLog}${restoreLog}${elapsedTime}"
+    fi
+    return ${rtnCd}
+  )
+}
+
+function getDbSize(){
+  (
+    size=$(onGetDbSize ${@})
+    rtnCd=${?}
+
+    echo ${size}
+    return ${rtnCd}
+  )
+}
+# =================================================================================================================
diff --git a/docs/ExampleLog.md b/docs/ExampleLog.md
index e42df1e..70676d0 100644
--- a/docs/ExampleLog.md
+++ b/docs/ExampleLog.md
@@ -3,48 +3,60 @@
 ```
 Starting backup process ...
 Reading backup config from backup.conf ...
-Making backup directory /backups/daily/2018-10-04/ ...
+Making backup directory /backups/daily/2020-02-28/ ...
 Settings:
+- Run mode: scheduled
+
 - Backup strategy: rolling
-- Backup type: daily
-- Number of each backup to retain: 6
-- Backup folder: /backups/daily/2018-10-04/
-- Databases:
-  - wallet-db:5432/tob_verifier
-  - postgresql:5432/TheOrgBook_Database
-  - wallet-db:5432/tob_holder
-
-Backing up wallet-db:5432/tob_verifier ...
-Elapsed time: 0h:0m:1s
-Backup written to /backups/daily/2018-10-04/wallet-db-tob_verifier_2018-10-04_22-49-39.sql.gz ...
- -Backing up postgresql:5432/TheOrgBook_Database ... -Elapsed time: 0h:2m:48s -Backup written to /backups/daily/2018-10-04/postgresql-TheOrgBook_Database_2018-10-04_22-49-41.sql.gz ... - -Backing up wallet-db:5432/tob_holder ... -Elapsed time: 0h:24m:34s -Backup written to /backups/daily/2018-10-04/wallet-db-tob_holder_2018-10-04_22-52-29.sql.gz ... +- Current backup type: daily +- Backups to retain: + - Daily: 6 + - Weekly: 4 + - Monthly: 1 +- Current backup folder: /backups/daily/2020-02-28/ +- Time Zone: PST -0800 + +- Schedule: + - 0 1 * * * default ./backup.sh -s + - 0 4 * * * default ./backup.sh -s -v all + +- Container Type: mongo +- Databases (filtered by container type): + - mongo=identity-kit-db-bc/identity_kit_db + +- FTP server: not configured +- Webhook Endpoint: https://chat.pathfinder.gov.bc.ca/hooks/*** +- Environment Friendly Name: Verifiable Organizations Network (mongo-test) +- Environment Name (Id): devex-von-test + +Backing up 'identity-kit-db-bc/identity_kit_db' to '/backups/daily/2020-02-28/identity-kit-db-bc-identity_kit_db_2020-02-28_08-07-10.sql.gz.in_progress' ... +Successfully backed up mongo=identity-kit-db-bc/identity_kit_db. +Backup written to /backups/daily/2020-02-28/identity-kit-db-bc-identity_kit_db_2020-02-28_08-07-10.sql.gz. 
+Database Size: 1073741824 +Backup Size: 4.0K + +Elapsed time: 0h:0m:0s - Status Code: 0 ================================================================================================================================ Current Backups: + +Database Current Size +mongo=identity-kit-db-bc/identity_kit_db 1073741824 + +Filesystem Size Used Avail Use% Mounted on +192.168.111.90:/trident_qtree_pool_file_standard_WKDMGDWTSQ/file_standard_devex_von_test_backup_mongo_54218 1.0G 0 1.0G 0% /backups -------------------------------------------------------------------------------------------------------------------------------- -4.0K 2018-10-04 17:10 /backups/.trashcan/internal_op -8.0K 2018-10-04 17:10 /backups/.trashcan -3.5K 2018-10-04 17:17 /backups/daily/2018-10-04/wallet-db-tob_verifier_2018-10-04_17-17-02.sql.gz -687M 2018-10-04 17:20 /backups/daily/2018-10-04/postgresql-TheOrgBook_Database_2018-10-04_17-17-03.sql.gz -9.1G 2018-10-04 17:44 /backups/daily/2018-10-04/wallet-db-tob_holder_2018-10-04_17-20-06.sql.gz -3.5K 2018-10-04 17:48 /backups/daily/2018-10-04/wallet-db-tob_verifier_2018-10-04_17-48-42.sql.gz -687M 2018-10-04 17:51 /backups/daily/2018-10-04/postgresql-TheOrgBook_Database_2018-10-04_17-48-44.sql.gz -9.1G 2018-10-04 18:16 /backups/daily/2018-10-04/wallet-db-tob_holder_2018-10-04_17-51-36.sql.gz -3.5K 2018-10-04 22:49 /backups/daily/2018-10-04/wallet-db-tob_verifier_2018-10-04_22-49-39.sql.gz -687M 2018-10-04 22:52 /backups/daily/2018-10-04/postgresql-TheOrgBook_Database_2018-10-04_22-49-41.sql.gz -9.1G 2018-10-04 23:17 /backups/daily/2018-10-04/wallet-db-tob_holder_2018-10-04_22-52-29.sql.gz -30G 2018-10-04 23:17 /backups/daily/2018-10-04 -30G 2018-10-04 23:17 /backups/daily -30G 2018-10-04 23:17 /backups/ +4.0K 2020-02-27 13:26 /backups/daily/2020-02-27/identity-kit-db-bc-identity_kit_db_2020-02-27_13-26-21.sql.gz +4.0K 2020-02-27 13:27 /backups/daily/2020-02-27/identity-kit-db-bc-identity_kit_db_2020-02-27_13-27-10.sql.gz +12K 2020-02-27 13:27 
/backups/daily/2020-02-27
+4.0K 2020-02-28 06:44 /backups/daily/2020-02-28/identity-kit-db-bc-identity_kit_db_2020-02-28_06-44-19.sql.gz
+4.0K 2020-02-28 07:12 /backups/daily/2020-02-28/identity-kit-db-bc-identity_kit_db_2020-02-28_07-12-29.sql.gz
+4.0K 2020-02-28 08:07 /backups/daily/2020-02-28/identity-kit-db-bc-identity_kit_db_2020-02-28_08-07-10.sql.gz
+16K 2020-02-28 08:07 /backups/daily/2020-02-28
+32K 2020-02-28 08:07 /backups/daily
+36K 2020-02-28 08:07 /backups/
 ================================================================================================================================
-Sleeping for 1d ...
+Scheduled backup run complete.
 ```
\ No newline at end of file
diff --git a/docs/SampleRocketChatMessage.png b/docs/SampleRocketChatMessage.png
index 823559f..8045488 100644
Binary files a/docs/SampleRocketChatMessage.png and b/docs/SampleRocketChatMessage.png differ
diff --git a/docs/TipsAndTricks.md b/docs/TipsAndTricks.md
new file mode 100644
index 0000000..7b5ed99
--- /dev/null
+++ b/docs/TipsAndTricks.md
@@ -0,0 +1,75 @@
+# Tips and Tricks
+
+## Verify Fails with - `error connecting to db server` or similar message
+
+### Issue
+
+The postgres and mongo containers used for the backup container have the following (simplified) startup sequence for the database server:
+- Start the server to perform initial server and database configuration.
+- Shut down the server.
+- Start the server with the created configuration.
+
+If memory and CPU requests and limits have been set for the container, it is possible for this sequence to be slowed down enough that the `pingDbServer` operation will return success during the initial startup and configuration, and the subsequent `restoreDatabase` operation will run while the database server is not running (before it's started the second time).
+
+### Example Logs
+
+For a Mongo backup-container the error looks like this:
+```
+sh-4.2$ ./backup.sh -s -v all
+
+Verifying backup ...
+ +Settings: +- Database: mongo=identity-kit-db-bc/identity_kit_db +- Backup file: /backups/daily/2020-03-06/identity-kit-db-bc-identity_kit_db_2020-03-06_01-00-00.sql.gz + +waiting for server to start.... +Restoring from backup ... +2020-03-06T07:28:31.299-0800 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, in(checking socket for error after poll), reason: Connection refused +2020-03-06T07:28:31.299-0800 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed : +connect@src/mongo/shell/mongo.js:251:13 +@(connect):1:21 +exception: connect failed +Cleaning up ... + +rm: cannot remove '/var/lib/mongodb/data/journal': Directory not empty +[!!ERROR!!] - Backup verification failed: /backups/daily/2020-03-06/identity-kit-db-bc-identity_kit_db_2020-03-06_01-00-00.sql.gz + +The following issues were encountered during backup verification; +Restoring '/backups/daily/2020-03-06/identity-kit-db-bc-identity_kit_db_2020-03-06_01-00-00.sql.gz' to '127.0.0.1/identity_kit_db' ... + +2020-03-06T07:28:30.785-0800 Failed: error connecting to db server: no reachable servers + +Restore failed. + +Elapsed time: 0h:0m:16s - Status Code: 1 +``` + + +### Solution + +Configure the `backup-container` to use best effort resource allocation. **This IS the default for the supplied deployment configuration template**; [backup-deploy.json](../openshift/templates/backup/backup-deploy.json) + +Best effort resource allocation can only be set using a template or by directly editing the DC's yaml file. + +The resources section in the containers template in the resulting DC looks like this: +``` +apiVersion: apps.openshift.io/v1 +kind: DeploymentConfig +... +spec: + ... + template: + ... + spec: + containers: + ... + resources: + limits: + cpu: '0' + memory: '0' + requests: + cpu: '0' + memory: '0' +... 
+``` \ No newline at end of file diff --git a/openshift/backup-deploy.overrides.sh b/openshift/backup-deploy.overrides.sh index e024cbd..f832f3b 100644 --- a/openshift/backup-deploy.overrides.sh +++ b/openshift/backup-deploy.overrides.sh @@ -1,94 +1,40 @@ +_includeFile=$(type -p overrides.inc) +if [ ! -z ${_includeFile} ]; then + . ${_includeFile} +else + _red='\033[0;31m'; _yellow='\033[1;33m'; _nc='\033[0m'; echo -e \\n"${_red}overrides.inc could not be found on the path.${_nc}\n${_yellow}Please ensure the openshift-developer-tools are installed on and registered on your path.${_nc}\n${_yellow}https://github.com/BCDevOps/openshift-developer-tools${_nc}"; exit 1; +fi + # ======================================================================== # Special Deployment Parameters needed for the backup instance. # ------------------------------------------------------------------------ # The generated config map is used to update the Backup configuration. # ======================================================================== - -CONFIG_MAP_NAME=backup-conf +CONFIG_MAP_NAME=${CONFIG_MAP_NAME:-backup-conf} SOURCE_FILE=../config/backup.conf -OUTPUT_FORMAT=json -OUTPUT_FILE=backup-conf-configmap_DeploymentConfig.json - -generateConfigMap() { - _config_map_name=${1} - _source_file=${2} - _output_format=${3} - _output_file=${4} - if [ -z "${_config_map_name}" ] || [ -z "${_source_file}" ] || [ -z "${_output_format}" ] || [ -z "${_output_file}" ]; then - echo -e \\n"generateConfigMap; Missing parameter!"\\n - exit 1 - fi - - oc create configmap ${_config_map_name} --from-file ${_source_file} --dry-run -o ${_output_format} > ${_output_file} -} - -printStatusMsg(){ - ( - _msg=${1} - _yellow='\033[1;33m' - _nc='\033[0m' # No Color - printf "\n${_yellow}${_msg}\n${_nc}" >&2 - ) -} - -readParameter(){ - ( - _msg=${1} - _paramName=${2} - _defaultValue=${3} - _encode=${4} - - _yellow='\033[1;33m' - _nc='\033[0m' # No Color - _message=$(echo -e "\n${_yellow}${_msg}\n${_nc}") - read 
-r -p $"${_message}" ${_paramName}
-
-    writeParameter "${_paramName}" "${_defaultValue}" "${_encode}"
-  )
-}
-
-writeParameter(){
-  (
-    _paramName=${1}
-    _defaultValue=${2}
-    _encode=${3}
-
-    if [ -z "${_encode}" ]; then
-      echo "${_paramName}=${!_paramName:-${_defaultValue}}" >> ${_overrideParamFile}
-    else
-      # The key/value pair must be contained on a single line
-      _encodedValue=$(echo -n "${!_paramName:-${_defaultValue}}"|base64 -w 0)
-      echo "${_paramName}=${_encodedValue}" >> ${_overrideParamFile}
-    fi
-  )
-}
-
-initialize(){
-  # Define the name of the override param file.
-  _scriptName=$(basename ${0%.*})
-  export _overrideParamFile=${_scriptName}.param
-
-  printStatusMsg "Initializing ${_scriptName} ..."
-
-  # Remove any previous version of the file ...
-  if [ -f ${_overrideParamFile} ]; then
-    printStatusMsg "Removing previous copy of ${_overrideParamFile} ..."
-    rm -f ${_overrideParamFile}
-  fi
-}
-
-initialize
+OUTPUT_FORMAT=json
+OUTPUT_FILE=${CONFIG_MAP_NAME}-configmap_DeploymentConfig.json
+printStatusMsg "Generating ConfigMap; ${CONFIG_MAP_NAME} ..."
 generateConfigMap "${CONFIG_MAP_NAME}" "${SOURCE_FILE}" "${OUTPUT_FORMAT}" "${OUTPUT_FILE}"
 
-# Get the FTP URL and credentials
-readParameter "FTP_URL - Please provide the FTP server URL. If left blank, the FTP backup feature will be disabled:" FTP_URL ""
-readParameter "FTP_USER - Please provide the FTP user name:" FTP_USER ""
-readParameter "FTP_PASSWORD - Please provide the FTP password:" FTP_PASSWORD ""
-# Get the webhook URL
-readParameter "WEBHOOK_URL - Please provide the webhook endpoint URL. If left blank, the webhook integration feature will be disabled:" WEBHOOK_URL ""
+if createOperation; then
+  # Get the FTP URL and credentials
+  readParameter "FTP_URL - Please provide the FTP server URL. If left blank, the FTP backup feature will be disabled:" FTP_URL ""
+  readParameter "FTP_USER - Please provide the FTP user name:" FTP_USER ""
+  readParameter "FTP_PASSWORD - Please provide the FTP password:" FTP_PASSWORD ""
+
+  # Get the webhook URL
+  readParameter "WEBHOOK_URL - Please provide the webhook endpoint URL. If left blank, the webhook integration feature will be disabled:" WEBHOOK_URL ""
+else
+  printStatusMsg "Update operation detected ...\nSkipping the prompts for the FTP_URL, FTP_USER, FTP_PASSWORD, and WEBHOOK_URL secrets ...\n"
+  writeParameter "FTP_URL" "prompt_skipped"
+  writeParameter "FTP_USER" "prompt_skipped"
+  writeParameter "FTP_PASSWORD" "prompt_skipped"
+  writeParameter "WEBHOOK_URL" "prompt_skipped"
+fi
 
 SPECIALDEPLOYPARMS="--param-file=${_overrideParamFile}"
 echo ${SPECIALDEPLOYPARMS}
diff --git a/openshift/templates/backup/backup-build.json b/openshift/templates/backup/backup-build.json
index cb1843e..89f4b05 100644
--- a/openshift/templates/backup/backup-build.json
+++ b/openshift/templates/backup/backup-build.json
@@ -43,10 +43,6 @@
       "strategy": {
         "type": "Docker",
         "dockerStrategy": {
-          "from": {
-            "kind": "${SOURCE_IMAGE_KIND}",
-            "name": "${SOURCE_IMAGE_NAME}:${SOURCE_IMAGE_TAG}"
-          },
           "dockerfilePath": "${DOCKER_FILE_PATH}"
         }
       },
@@ -63,9 +59,9 @@
     {
       "name": "NAME",
       "displayName": "Name",
-      "description": "The name assigned to all of the resources defined in this template.",
+      "description": "The name assigned to all of the resources. Use 'backup-postgres' for Postgres builds or 'backup-mongo' for MongoDB builds.",
       "required": true,
-      "value": "backup"
+      "value": "backup-postgres"
     },
     {
       "name": "GIT_REPO_URL",
@@ -88,31 +84,10 @@
       "required": false,
       "value": "/docker"
     },
-    {
-      "name": "SOURCE_IMAGE_KIND",
-      "displayName": "Source Image Kind",
-      "description": "The 'kind' (type) of the source image; typically ImageStreamTag, or DockerImage.",
-      "required": true,
-      "value": "DockerImage"
-    },
-    {
-      "name": "SOURCE_IMAGE_NAME",
-      "displayName": "Source Image Name",
-      "description": "The name of the source image.",
-      "required": true,
-      "value": "registry.access.redhat.com/rhscl/postgresql-10-rhel7"
-    },
-    {
-      "name": "SOURCE_IMAGE_TAG",
-      "displayName": "Source Image Tag",
-      "description": "The tag of the source image.",
-      "required": true,
-      "value": "latest"
-    },
     {
       "name": "DOCKER_FILE_PATH",
-      "displayName": "Docker File Path",
-      "description": "The path to the docker file defining the build.",
+      "displayName": "Docker File",
+      "description": "The path and file of the docker file defining the build. Choose either 'Dockerfile' for Postgres builds or 'Dockerfile_Mongo' for MongoDB builds.",
       "required": false,
       "value": "Dockerfile"
     },
@@ -124,4 +99,4 @@
       "value": "latest"
     }
   ]
-}
\ No newline at end of file
+}
diff --git a/openshift/templates/backup/backup-cronjob.yaml b/openshift/templates/backup/backup-cronjob.yaml
index ec80cf6..40b6cee 100644
--- a/openshift/templates/backup/backup-cronjob.yaml
+++ b/openshift/templates/backup/backup-cronjob.yaml
@@ -10,7 +10,7 @@ parameters:
   - name: "JOB_NAME"
     displayName: "Job Name"
     description: "Name of the Scheduled Job to Create."
-    value: "backup"
+    value: "backup-postgres"
     required: true
   - name: "JOB_PERSISTENT_STORAGE_NAME"
     displayName: "Backup Persistent Storage Name"
@@ -27,7 +27,7 @@ parameters:
     displayName: "Source Image Name"
     description: "The name of the image to use for this resource."
    required: true
-    value: "backup"
+    value: "backup-postgres"
   - name: "IMAGE_NAMESPACE"
     displayName: "Image Namespace"
     description: "The namespace of the OpenShift project containing the imagestream for the application."
diff --git a/openshift/templates/backup/backup-deploy.json b/openshift/templates/backup/backup-deploy.json
index fc4b76f..3cbe6cd 100644
--- a/openshift/templates/backup/backup-deploy.json
+++ b/openshift/templates/backup/backup-deploy.json
@@ -17,9 +17,7 @@
       },
       "spec": {
         "storageClassName": "${BACKUP_VOLUME_CLASS}",
-        "accessModes": [
-          "ReadWriteOnce"
-        ],
+        "accessModes": ["ReadWriteOnce"],
         "resources": {
           "requests": {
             "storage": "${BACKUP_VOLUME_SIZE}"
@@ -39,9 +37,7 @@
       },
       "spec": {
         "storageClassName": "${VERIFICATION_VOLUME_CLASS}",
-        "accessModes": [
-          "ReadWriteOnce"
-        ],
+        "accessModes": ["ReadWriteOnce"],
         "resources": {
           "requests": {
             "storage": "${VERIFICATION_VOLUME_SIZE}"
@@ -50,13 +46,25 @@
         }
       }
     },
     {
+      "kind": "Secret",
       "apiVersion": "v1",
+      "metadata": {
+        "name": "${NAME}"
+      },
+      "type": "Opaque",
+      "stringData": {
+        "webhook-url": "${WEBHOOK_URL}"
+      }
+    },
+    {
       "kind": "Secret",
+      "apiVersion": "v1",
       "metadata": {
         "name": "${FTP_SECRET_KEY}"
       },
       "type": "Opaque",
       "stringData": {
+        "ftp-url": "${FTP_URL}",
         "ftp-user": "${FTP_USER}",
         "ftp-password": "${FTP_PASSWORD}"
       }
@@ -85,9 +93,7 @@
           "type": "ImageChange",
           "imageChangeParams": {
             "automatic": true,
-            "containerNames": [
-              "${NAME}"
-            ],
+            "containerNames": ["${NAME}"],
             "from": {
               "kind": "ImageStreamTag",
               "namespace": "${IMAGE_NAMESPACE}",
@@ -173,15 +179,19 @@
                     "value": "${DATABASE_SERVICE_NAME}"
                   },
                   {
-                    "name": "POSTGRESQL_DATABASE",
+                    "name": "DATABASE_NAME",
                     "value": "${DATABASE_NAME}"
                   },
+                  {
+                    "name": "MONGODB_AUTHENTICATION_DATABASE",
+                    "value": "${MONGODB_AUTHENTICATION_DATABASE}"
+                  },
                   {
                     "name": "TABLE_SCHEMA",
                     "value": "${TABLE_SCHEMA}"
                   },
                   {
-                    "name": "POSTGRESQL_USER",
+                    "name": "DATABASE_USER",
                     "valueFrom": {
                       "secretKeyRef": {
                         "name": "${DATABASE_DEPLOYMENT_NAME}",
@@ -190,7 +200,7 @@
                     }
                   },
                   {
-                    "name": "POSTGRESQL_PASSWORD",
+                    "name": "DATABASE_PASSWORD",
                     "valueFrom": {
                       "secretKeyRef": {
                         "name": "${DATABASE_DEPLOYMENT_NAME}",
@@ -200,7 +210,12 @@
                   },
                   {
                     "name": "FTP_URL",
-                    "value": "${FTP_URL}"
+                    "valueFrom": {
+                      "secretKeyRef": {
+                        "name": "${FTP_SECRET_KEY}",
+                        "key": "ftp-url"
+                      }
+                    }
                   },
                   {
                     "name": "FTP_USER",
@@ -222,7 +237,12 @@
                   },
                   {
                     "name": "WEBHOOK_URL",
-                    "value": "${WEBHOOK_URL}"
+                    "valueFrom": {
+                      "secretKeyRef": {
+                        "name": "${NAME}",
+                        "key": "webhook-url"
+                      }
+                    }
                   },
                   {
                     "name": "ENVIRONMENT_FRIENDLY_NAME",
@@ -269,16 +289,16 @@
     {
       "name": "NAME",
       "displayName": "Name",
-      "description": "The name assigned to all of the resources defined in this template.",
+      "description": "The name assigned to all of the resources. Use 'backup-postgres' for Postgres deployments or 'backup-mongo' for MongoDB deployments.",
       "required": true,
-      "value": "backup"
+      "value": "backup-postgres"
     },
     {
       "name": "SOURCE_IMAGE_NAME",
       "displayName": "Source Image Name",
-      "description": "The name of the image to use for this resource.",
+      "description": "The name of the image to use for this resource. Use 'backup-postgres' for Postgres deployments or 'backup-mongo' for MongoDB deployments.",
       "required": true,
-      "value": "backup"
+      "value": "backup-postgres"
     },
     {
       "name": "IMAGE_NAMESPACE",
@@ -297,16 +317,23 @@
     {
       "name": "DATABASE_SERVICE_NAME",
       "displayName": "Database Service Name",
-      "description": "The name of the database service.",
-      "required": true,
-      "value": "postgresql"
+      "description": "Used for backward compatibility only. Not needed when using the recommended 'backup.conf' configuration. The name of the database service.",
+      "required": false,
+      "value": ""
     },
     {
       "name": "DATABASE_NAME",
       "displayName": "Database Name",
-      "description": "The name of the database.",
-      "required": true,
-      "value": "MyDatabase"
+      "description": "Used for backward compatibility only. Not needed when using the recommended 'backup.conf' configuration. The name of the database.",
+      "required": false,
+      "value": ""
+    },
+    {
+      "name": "MONGODB_AUTHENTICATION_DATABASE",
+      "displayName": "MongoDB Authentication Database",
+      "description": "This is only required if you are backing up a mongo database with a separate authentication database.",
+      "required": false,
+      "value": ""
     },
     {
       "name": "DATABASE_DEPLOYMENT_NAME",
@@ -332,7 +359,7 @@
     {
       "name": "TABLE_SCHEMA",
       "displayName": "Table Schema",
-      "description": "The table schema for your database.",
+      "description": "The table schema for your database. Used for Postgres backups.",
       "required": true,
       "value": "public"
     },
@@ -347,7 +374,7 @@
       "name": "FTP_SECRET_KEY",
       "displayName": "FTP Secret Key",
       "description": "The FTP secret key is used to wire up the credentials associated to the FTP.",
-      "required": true,
+      "required": false,
       "value": "ftp-secret"
     },
     {
@@ -402,7 +429,7 @@
     {
       "name": "NUM_BACKUPS",
       "displayName": "The number of backup files to be retained",
-      "description": "The number of backup files to be retained. Used for the `daily` backup strategy. Ignored when using the `rolling` backup strategy.",
+      "description": "Used for backward compatibility only. Ignored when using the recommended `rolling` backup strategy. The number of backup files to be retained. Used for the `daily` backup strategy.",
       "required": false,
       "value": ""
     },
@@ -430,9 +457,9 @@
     {
       "name": "BACKUP_PERIOD",
       "displayName": "Period (d,m,s) between backups in a format used by the sleep command",
-      "description": "Period (d,m,s) between backups in a format used by the sleep command",
-      "required": true,
-      "value": "1d"
+      "description": "Used for backward compatibility only. Ignored when using the recommended `backup.conf` and cron backup strategy. Period (d,m,s) between backups in a format used by the sleep command",
+      "required": false,
+      "value": ""
     },
     {
       "name": "CONFIG_FILE_NAME",
@@ -472,14 +499,14 @@
     {
       "name": "BACKUP_VOLUME_CLASS",
       "displayName": "Backup Volume Class",
-      "description": "The class of the persistent volume used to store the backups; gluster-file, gluster-block, gluster-file-db, nfs-backup. Please note, nfs-backup storage is the recommended storage type for backups. It MUST be provisioned manually through the OCP catalog via the 'BC Gov NFS Storage' template. nfs-backup storage CANNOT be automatically provisioned by this template.",
+      "description": "The class of the persistent volume used to store the backups; netapp-block-standard, netapp-file-standard, nfs-backup. Please note, nfs-backup storage is the recommended storage type for backups. It MUST be provisioned manually through the OCP catalog via the 'BC Gov NFS Storage' template. nfs-backup storage CANNOT be automatically provisioned by this template.",
       "required": true,
       "value": "nfs-backup"
     },
     {
       "name": "VERIFICATION_VOLUME_NAME",
       "displayName": "Verification Volume Name",
-      "description": "The name for the verification volume, used for restoring and verifying backups. When using the recommend nfs-backup storage class for backups, this volume MUST be either gluster-file-db or gluster-block storage; gluster-block is recommended (it has far better performance).",
+      "description": "The name for the verification volume, used for restoring and verifying backups. When using the recommended nfs-backup storage class for backups, this volume MUST be either netapp-file-standard or netapp-block-standard storage; netapp-block-standard is recommended (it has far better performance).",
       "required": false,
       "value": "backup-verification"
     },
@@ -493,14 +520,14 @@
     {
       "name": "VERIFICATION_VOLUME_CLASS",
       "displayName": "Backup Volume Class",
-      "description": "The class of the persistent volume used for restoring and verifying backups; should be one of gluster-block or gluster-file-db. gluster-block performs better.",
+      "description": "The class of the persistent volume used for restoring and verifying backups; should be one of netapp-block-standard or netapp-file-standard. netapp-block-standard performs better.",
       "required": true,
-      "value": "gluster-file-db"
+      "value": "netapp-file-standard"
     },
     {
       "name": "VERIFICATION_VOLUME_MOUNT_PATH",
       "displayName": "Verification Volume Mount Path",
-      "description": "The path on which to mount the verification volume. This is used by the database server to contain the database configuration and data files.",
+      "description": "The path on which to mount the verification volume. This is used by the database server to contain the database configuration and data files. For Mongo, please use /var/lib/mongodb/data",
       "required": true,
       "value": "/var/lib/pgsql/data"
     },
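For reference, the mixed-environment `backup.conf` that this change set is built around might look like the following sketch. The service names, ports, database names, and cron schedules here are illustrative assumptions, not values taken from this change; the point is that each database entry is prefixed with its `DatabaseType`, as the mixed-environment instructions require:

```
# Databases to back up, each prefixed with its DatabaseType.
# The postgres backup container acts on the postgres= entries,
# and the mongo backup container acts on the mongo= entries.
# (Service names, ports, and database names below are illustrative.)
postgres=my-postgres-service:5432/my_database
mongo=my-mongo-service:27017/my_database

# Optional cron entries, used with the cron backup strategy.
0 1 * * * default ./backup.sh -s
0 4 * * * default ./backup.sh -s -v all
```

Because each container only acts on entries of its own database type, this single file can be mounted (as a ConfigMap) into both the `backup-postgres` and `backup-mongo` deployments, matching step 4 of the mixed-environment instructions.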