Hetzner - DokuWiki




This solution is probably not the best one, so please feel free to contribute improvements :)
I hereby disclaim any and all responsibility/liability for any damages incurred! The best approach is to understand the steps described here. This guide should be largely independent of the distribution used. I run Debian 3.1, but it should be possible to adapt the instructions to other distributions.


Install backup2l

Included as a package with most distributions, otherwise obtainable from http://backup2l.sourceforge.net

On Debian it can be installed with

$> apt-get install backup2l

Read Documentation

Don't worry, the documentation is not very long, but it is very important in order to understand how backup2l works and to be able to perform a restore later. We open it with the man command.

$> man backup2l

Configure backup2l

Now we want to apply the knowledge we have gained in the second step and configure backup2l. So we open /etc/backup2l.conf, the configuration file for backup2l, for example with pico:

$> pico /etc/backup2l.conf

This is where Step 2 pays off; without it the parameters in the configuration file will be hard to understand. All parameters are well explained in the configuration file itself, and anyone working as root should know how to use an editor. Nevertheless, here are some of the parameters and why I have chosen them this way:

SRCLIST=(/etc /root /home /var/backup.d/preliminary /var/www/domains)

I back up the configuration in /etc and the user files in /home and /root. Furthermore, I back up /var/backup.d/preliminary. In this directory I place the "hot copies" of my MySQL and PostgreSQL databases as well as my SVN repositories. Further below we shall see how to set these up.

SKIPCOND=(-path "/var/www/domains/*/logs/*")

As space on the backup target is mostly limited, you should exclude larger directories and files that do not really need to be backed up. In my case these are the Apache log files.


I have backup2l create its backups in this directory (the BACKUP_DIR parameter; the directory /var/backup.d/final is created further below). The files here are then transferred to the backup server later. If need be, the files can also be encrypted here first.


These settings work well for me: this way I always have one full backup and up to nine incrementals. If a new full backup is to be created each week (with the daily cron job set up below), MAX_PER_LEVEL should be set to 6. For a restore, the full backup and the subsequent incrementals are always needed.
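Expressed as parameters in /etc/backup2l.conf, this scheme might look as follows. The values are inferred from the description above, so check them against your own configuration file:

```shell
# Retention scheme inferred from the text (assumed values):
# one level of incrementals on top of each full backup,
# up to nine incrementals before a new full backup is started,
# and one full backup kept at a time.
MAX_LEVEL=1
MAX_PER_LEVEL=9
MAX_FULL=1
```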

PRE_BACKUP ()
{
  echo "start pre backup scripts"

  cd /root/backup

  sh hotcopy-mysql.sh
  sh hotcopy-svn.sh
  sh hotcopy-cyrus.sh
  sh hotcopy-postfix.sh
  sh hotcopy-postgresql.sh

  sh dump-dpkg-selections.sh

  chmod -R u=rw,go-rwx /var/backup.d/preliminary/*

  echo "pre backup scripts completed"
}

POST_BACKUP ()
{
  echo "Executing post backup actions."

  cd /root/backup
  chown -R root:backup /var/backup.d/final
  chmod -R u=rw,g=r /var/backup.d/final/*

  echo "The backup has been completed."
  echo "----------------------------------------------"

  sh sendemail.sh
}

These commands are executed before and after each backup run. We shall take a closer look at them below.
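The dump-dpkg-selections.sh script called above is not shown in this article. A minimal sketch could look like this; the target directory is an assumption following the /var/backup.d/preliminary pattern of the other scripts:

```shell
#!/bin/sh
# Sketch of dump-dpkg-selections.sh: save the Debian package selections
# so the same package set can be re-installed after a restore with
# "dpkg --set-selections < selections" and "apt-get dselect-upgrade".
dump_selections() {
    mkdir -p "$1"
    dpkg --get-selections > "$1/selections"
}

# In the real script the target would be:
# dump_selections /var/backup.d/preliminary/dpkg
```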

Last but not least, in this step we set up the /var/backup.d/final directory, where our backups are to be made.
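Creating the two directories used in this guide can be sketched as follows; the helper function is only for illustration:

```shell
#!/bin/sh
# Create the staging area for the hot copies and the final directory
# that backup2l writes its archives to.
make_backup_dirs() {
    mkdir -p "$1/preliminary" "$1/final"
    # backups contain sensitive data, so keep the directories private
    chmod 700 "$1/preliminary" "$1/final"
}

# as used in this guide:
# make_backup_dirs /var/backup.d
```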

Making Hot Copies

Some files cannot simply be backed up by copying, as they are constantly being accessed. These files include subversion repositories, MySQL and PostgreSQL databases, Cyrus IMAP directories and the Postfix mail spool in /var/spool/. These files need to be dealt with separately.

We either use the tools supplied to create so-called hot copies - backups of data while the programs are running - or use a few tricks. As the commands required here are not that simple and we do not wish to overload our configuration file with them, we transfer the individual steps to other shell scripts and save them in /root/backup.

Let's take a brief look at these scripts.

MySQL Hot Copies

Here we simply use the mysqldump program to perform data backups of our MySQL databases.


#!/bin/sh
# This script creates a hot copy of the mysql data files.
echo "creating mysql dump"

echo "   removing old dumps and creating directory"
mkdir -p /var/backup.d/preliminary/mysql
rm -f /var/backup.d/preliminary/mysql/all.dump

echo "   executing mysqldump"
mysqldump -A --add-locks -u root --password=miro4711 > /var/backup.d/preliminary/mysql/all.dump

echo "mysql dump created"

PostgreSQL Hot Copies

Here too we use a tool supplied with the database server: pg_dumpall. We execute pg_dumpall as the postgres user, as in a standard installation this user has full access to all databases and root does not.


#!/bin/sh
# This script creates a hot copy of all PostgreSQL databases.
# The target paths follow the pattern of the other hot-copy scripts.
TARGET_DIR=/var/backup.d/preliminary/postgresql
TARGET_FILE=all.dump
COMMAND="pg_dumpall --clean --column-inserts"

echo "creating PostgreSQL backup"

if [ ! -d $TARGET_DIR ]; then
  echo "  create $TARGET_DIR"
  mkdir -p $TARGET_DIR
fi

echo "  executing pg_dumpall"
sudo -u postgres $COMMAND > ${TARGET_DIR}/${TARGET_FILE}

Subversion Hot Copies

Subversion too provides a program for creating backups. We call up svnadmin with the command hotcopy.


#!/bin/bash
# This script creates a hot copy of all subversion repositories in
# /var/svn. (pushd/popd are bash builtins, hence the bash shebang.)

echo "creating Subversion repository hotcopies"

echo "   removing old subversion hotcopies"
rm -rf /var/backup.d/preliminary/svn
mkdir -p /var/backup.d/preliminary/svn

pushd /var/svn > /dev/null

for repository in `ls`; do
  if [ -d $repository ]; then
    echo "   creating hotcopy of ${repository}"
    svnadmin hotcopy $repository "/var/backup.d/preliminary/svn/${repository}"
  fi
done

popd > /dev/null

Cyrus Backup

As far as I am aware, Cyrus does not provide a hot copy tool. The Cyrus developers recommend performing backups at the file system level, for example with LVM or rsync. We take the second option here. First we use rsync to copy the Cyrus data while the server is running. Then we stop the server and run rsync again; rsync only transfers the changes made in the meantime. Then we start the server again. This keeps the time in which the server is unavailable to a minimum.


#!/bin/sh
# This script creates a hot copy of the cyrus data files.

# We use a trick from the cyrus wiki. First, we use rsync to copy the spool
# and the cyrus dbs to the preliminary backup directory. Then, we shut down
# cyrus, rsync again and start cyrus again. This way, we reduce cyrus' downtime
# to a minimum.

echo "creating Cyrus backup"

echo "   creating directories"
rm -rf /var/backup.d/preliminary/cyrus
rm -rf /var/backup.d/preliminary/sieve
mkdir -p /var/backup.d/preliminary/cyrus/lib
mkdir -p /var/backup.d/preliminary/cyrus/spool
mkdir -p /var/backup.d/preliminary/sieve/spool

echo "   first rsync pass"

rsync -r /var/lib/cyrus /var/backup.d/preliminary/cyrus/lib
rsync -r /var/spool/cyrus /var/backup.d/preliminary/cyrus/spool
rsync -r /var/spool/sieve /var/backup.d/preliminary/sieve/spool

echo "   halting cyrus"
/etc/init.d/cyrus21 stop

echo "   second rsync pass"
rsync -r /var/lib/cyrus /var/backup.d/preliminary/cyrus/lib
rsync -r /var/spool/cyrus /var/backup.d/preliminary/cyrus/spool
rsync -r /var/spool/sieve /var/backup.d/preliminary/sieve/spool

echo "   starting cyrus again"
/etc/init.d/cyrus21 start

Postfix Backup

Here we use the same trick as for the Cyrus files.


#!/bin/sh
# We use the same trick as with cyrus here: rsync, shutdown, rsync again and
# hopefully our backup is clean then.

echo "creating Postfix backup"

echo "   creating backup directories"
rm -rf /var/backup.d/preliminary/postfix
mkdir -p /var/backup.d/preliminary/postfix

echo "   first rsync pass"
rsync -r /var/spool/postfix /var/backup.d/preliminary/postfix

echo "   stop postfix"
postfix stop

echo "   second rsync pass"
rsync -r /var/spool/postfix /var/backup.d/preliminary/postfix

echo "   start postfix again"
postfix start

Setting Up Cronjob

We now set up a daily cron job. For this we create the file /etc/cron.daily/zz-backup2l with the following content (this file may already have been created when backup2l was installed).


#!/bin/sh
# The following command invokes 'backup2l' with the default configuration
# file (/etc/backup2l.conf).
# (Re)move it or this entire script if you do not want automatic backups.
# Redirect its output if you do not want automatic e-mails after each backup.

! which backup2l > /dev/null || nice -n 19 backup2l -b

File Transfer

There are generally two ways of getting the files onto the backup server (if possible, one with RAID, such as one of the Hetzner backup servers). The first and most obvious option is to copy from the server on which the backup was made to the target (backup) server: Push Data. This can be done using scp, for example. The second option is to fetch the data from the backup server, again with scp: Pull Data. In my opinion the latter is preferable, as an attacker who compromises the server cannot simply reach the backups as well.

For both options, I recommend using scp, as the backup server probably has an SSH login anyway and performance should not be a factor with one-off copying at night. Moreover, the transmission is encrypted. Alternatively an FTP client (such as ncftp) or rsync can be used.

The files can optionally also be encrypted with gpg and transmitted only in encrypted form. The data on the backup server is then protected from unauthorized parties. The key for decrypting should not be kept on a server but stored locally on a removable storage medium such as a USB stick.
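Such an encryption step could be sketched as follows. Symmetric encryption with a passphrase file is just one possibility, the paths are assumptions based on this guide, and the --pinentry-mode option requires a modern GnuPG 2.x:

```shell
#!/bin/sh
# Encrypt all backup archives in a directory with gpg before transfer.
# The passphrase file and directory layout are assumptions; backup2l's
# archives are named all.*.tar.gz by default.
encrypt_backups() {
    dir="$1"; passfile="$2"
    for f in "$dir"/*.tar.gz; do
        [ -f "$f" ] || continue
        gpg --batch --yes --pinentry-mode loopback \
            --passphrase-file "$passfile" \
            --symmetric --output "$f.gpg" "$f"
    done
}

# e.g. from POST_BACKUP():
# encrypt_backups /var/backup.d/final /root/.backup-passphrase
```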

Push Data

We outline the approach briefly here: create an SSH key pair on the server and append the public key to ~/.ssh/authorized_keys on the backup server. The private key has to stay on the server, otherwise the nightly scp cannot log in without a password:

$> ssh-keygen -t rsa
# Now append ~/.ssh/id_rsa.pub on the backup server
# to ~/.ssh/authorized_keys

Then we create a small backup script in /root/backup and call it from POST_BACKUP() (see above). The following should work quite well:
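The script itself does not survive in this article; a minimal sketch might look like this, where the host name, user and remote path are placeholders to be replaced with your own backup account's data:

```shell
#!/bin/sh
# Sketch of a push script for /root/backup. The backup server address
# passed to push_backup is a placeholder, not a real account.
push_backup() {
    # copy the complete archive directory to the backup server
    scp -r /var/backup.d/final "$1:backup/"
}

# in the real script, e.g.:
# push_backup user@backupserver.example.com
```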




Alternatives to Backup2l

Alternatively, rdiff-backup can also be recommended.

It can do almost everything backup2l can, but uses an improved rsync algorithm and offers more flexible lifecycle management than backup2l.

More information can be found here:

© 2019. Hetzner Online GmbH. All rights reserved.