backup-flow
is a Bash script that uses Restic under the hood to back up files to a repository (Restic's term for a backup destination). It can also use Rclone to copy files from remote storage to the local system. Systemd timers are used to schedule periodic backups.
Current version: v0.1.1
Please read the Disclaimer section.
- THIS SCRIPT IS STILL WORK IN PROGRESS AND CONTAINS KNOWN OR UNKNOWN ERRORS
- USE THIS SCRIPT AT YOUR OWN RISK
- ALWAYS CHECK YOUR BACKUPS AFTER RUNNING THIS SCRIPT
- Back up selected folders to local storage.
- Back up selected MySQL / PostgreSQL databases to local storage.
- Copy files from remote storage (using Rclone) to the local storage.
- Upload folders from local storage to the Restic repository.
- Includes a Systemd service unit and timer.
- Clean up old files from the local storage.
Before using the backup-flow script, please ensure that you have:
- Knowledge of how to configure PostgreSQL or MySQL/MariaDB.
- Installed the `mysqldump` or `pg_dump` utilities.
- Installed and configured Rclone.
- Installed and configured Restic.
Follow these steps for installation.
TBD
Open the script and change the following options:

- `BACKUP_STORAGE_PATH` — local directory path where all copied files are stored
- `BACKUP_DIR_PATH` — local directory path
- `BACKUP_FILES` — array of files to copy to `BACKUP_STORAGE_PATH`
- `BACKUP_DIRS` — same as above, but for directories
- `BACKUP_DATABASE_TYPE` — `mysql` or `postgresql`
- `RCLONE_REMOTE_BACKUP` — enable if files from remote storage managed by Rclone should be copied to `BACKUP_STORAGE_PATH`
- `RCLONE_STORAGE_NAME` — Rclone remote storage name (check `~/.config/rclone/rclone.conf` or run `rclone config`)
- `RCLONE_REMOTE_PATH` — Rclone remote path
- `BACKUP_RESTIC` — enable if you want to use Restic
- `RESTIC_BACKUP_TAGS` — change from the default value

Additionally, please review the script for other parameters; the options are documented with comments.
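As a rough sketch, the edited section of the script might look like the following. All paths and values here are illustrative examples, not the script's defaults:

```shell
# Illustrative values only -- adjust every path and value for your environment.
BACKUP_STORAGE_PATH="/var/backups/backup-flow"    # where all copied files are collected
BACKUP_DIR_PATH="/var/backups/backup-flow/dirs"   # local directory path
BACKUP_FILES=("/etc/fstab" "/etc/hosts")          # individual files to copy
BACKUP_DIRS=("/etc/nginx" "/var/www")             # directories to copy
BACKUP_DATABASE_TYPE="postgresql"                 # mysql or postgresql
RCLONE_REMOTE_BACKUP=true                         # copy Rclone-managed remote files locally
RCLONE_STORAGE_NAME="bs-s3"                       # name from rclone.conf
RCLONE_REMOTE_PATH="bucket-name/uploads"          # path inside the remote
BACKUP_RESTIC=true                                # upload to the Restic repository
RESTIC_BACKUP_TAGS="backup-flow"                  # change from the default value
```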
Install and configure Rclone remote storage. Here is an example configuration for Minio:

```ini
[bs-s3]
type = s3
provider = Minio
access_key_id = access_key
secret_access_key = secret_key
endpoint = https://s3.domain.com
no_check_bucket = true
```

Check if it works with the command `rclone ls bs-s3:/bucket-name/uploads`:

```
6467 untitled-design.jpg
6467 scaled-1680-/untitled-design.jpg
4203 thumbs-150-150/untitled-design.jpg
```
To back up a MySQL / MariaDB database, put your credentials into `~/.my.cnf`. Here is an example:

```ini
[client]
#socket=
user=root
password="password"
```

To create this file with restrictive permissions, use the following command:

```shell
(umask 0077 && vim ~/.my.cnf)
```

Check if it works with the command `mysql -e "select 1"`:

```
+---+
| 1 |
+---+
| 1 |
+---+
```
To back up a PostgreSQL server, put your credentials into `~/.pgpass` (or specify the location using the `PGPASSFILE` environment variable). Here is an example:

```
# File format
# hostname:port:database:username:password
*:*:database_name:db_user:db_password
```

To create this file with restrictive permissions, use the following command:

```shell
(umask 0077 && vim ~/.pgpass)
```

Check if it works with the command `psql -h db_hostname -d db_name -U db_user -c "select 1;"`:

```
 ?column?
----------
        1
(1 row)
```

Alternatively (without a `.pgpass` file), you can pass the password as an environment variable: `PGPASSWORD=db_password psql -h db_hostname -d db_name -U db_user -c "select 1;"`
If you do not plan to use multiple Restic repositories, you can create a file with Restic configuration options. This allows you to run commands without passing common options on the CLI, such as `restic --repo path_to_repo snapshots` or `restic --repo path_to_repo prune`. Here is an example of `/etc/restic/environment`:

```shell
RESTIC_REPOSITORY="s3:https://minio.domain.com/backups-bucket"
AWS_ACCESS_KEY_ID="access_key"
AWS_SECRET_ACCESS_KEY="secret_key"
RESTIC_PASSWORD="restic_password"
```
Next, configure Bash (or your shell) to export these variables. In my case, I added the following to `~/.bashrc`:

```shell
# Restic configuration
if [ -f /etc/restic/environment ]; then
    set -o allexport
    . /etc/restic/environment
    set +o allexport
fi
```
Create the Restic repository with `restic init`.

Get stats with `restic stats`. Here is an example output:

```
repository 4021ce42 opened (version 2, compression level auto)
scanning...
Stats in restore-size mode:
Snapshots processed: 2
Total File Count: 1510
Total Size: 32.015 MiB
```
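The script's cleanup step covers local files; if you also want to trim old snapshots in the Restic repository itself, Restic's own retention commands can do that. A hedged example (the retention values below are illustrative, not a recommendation):

```shell
# Keep the last 7 daily and 4 weekly snapshots, delete the rest,
# and reclaim space in the repository. Values are illustrative.
restic forget --keep-daily 7 --keep-weekly 4 --prune

# Verify repository integrity afterwards.
restic check
```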
Download the service and timer files. Run `systemctl daemon-reload`, then enable and start the timer:

```shell
systemctl enable backup-flow.timer && systemctl start backup-flow.timer
```
Refer to the examples directory.
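If you prefer to write the units by hand rather than downloading them, a minimal sketch could look like this. The script path and schedule are assumptions; adapt them to your installation:

```ini
# /etc/systemd/system/backup-flow.service
[Unit]
Description=backup-flow backup job

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-flow.sh

# /etc/systemd/system/backup-flow.timer
[Unit]
Description=Run backup-flow daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```

`Persistent=true` makes systemd run a missed job at the next boot if the machine was off when the timer elapsed.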
If you have a database server running in Docker and don't want to install `pg_dump` or `mysqldump` locally, you can dump the database using a container. Here is an example to back up a PostgreSQL database:

```shell
docker run --rm --env PGUSER=db_user --env PGPASSWORD=db_password postgres:15 pg_dump -h db_hostname -d db_name -b -w --clean --if-exists > database_dump.sql
```

If you have a `.pgpass` file, mount it as a Docker volume:

```shell
docker run --rm --env PGUSER=db_user --volume "/root/.pgpass:/root/.pgpass" postgres:15 pg_dump -h db_hostname -d db_name -b -w --clean --if-exists > database_dump.sql
```
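A MySQL/MariaDB equivalent can work the same way; mounting `~/.my.cnf` into the container plays the role `.pgpass` plays above. The image tag and hostname here are assumptions:

```shell
# mysqldump inside the container reads credentials from the mounted /root/.my.cnf.
docker run --rm --volume "/root/.my.cnf:/root/.my.cnf" mysql:8 \
  mysqldump -h db_hostname --single-transaction db_name > database_dump.sql
```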
There are no special requirements on usage and distribution.