Postgres backup

Backing up postgres consists of two main pieces: backing up the WALs (write-ahead logs) and taking regular base backups.

The WALs are copied from the postgres server to the backup hosts over ssh using the pg-backup-file script. Base backups are pulled by the backup hosts using postgres-make-base-backups. The former requires that the postgres servers be able to ssh to the backup servers; the latter requires that the postgres server listen on the network, have ssl set up correctly, allow access through the firewall, and have a postgres user with replication privileges that is allowed to connect in pg_hba.
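The overall shape of such a WAL-shipping archive_command wrapper can be sketched as follows. This is an illustrative sketch only, not the actual pg-backup-file: the function name and the plain cp standing in for the ssh transfer are assumptions.

```shell
# Sketch of an archive_command-style shipper. The real pg-backup-file
# ships over ssh to the backup hosts; here a cp into $DEST stands in.
archive_wal() {
    cluster=$1   # cluster label, e.g. "main"
    kind=$2      # "WAL" (or "BASE" for base-backup related files)
    walpath=$3   # %p as passed by postgres: path of the segment
    target="$DEST/$cluster.$kind.$(basename "$walpath")"
    # Never overwrite an already-archived segment, and return non-zero
    # on trouble so postgres keeps the segment and retries later.
    [ -e "$target" ] && return 1
    cp "$walpath" "$target"
}

# Demo with a throwaway directory standing in for the backup host:
DEST=$(mktemp -d)
wal=$(mktemp)
archive_wal main WAL "$wal" && echo shipped
archive_wal main WAL "$wal" || echo "refused to overwrite"
```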

server config

(2018-02 - This documentation has not really been tested since it was written – weasel)

There are two possible ways to configure the server.

Single cluster on a host

If there is only one cluster on the host, we can use puppet's postgresql::server to configure the cluster and any databases on it; see modules/salsa/manifests/database.pp for an example. In particular, you want to set archive_command and the ssl options in postgresql.conf, as well as set listen_addresses correctly.
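A hedged sketch of what the puppet side can look like (parameter values here are placeholders, not our actual configuration; modules/salsa/manifests/database.pp is the authoritative example):

```puppet
# Illustrative only -- values are placeholders.
class { 'postgresql::server':
  listen_addresses => '*',
}

# archive_command (and similarly the ssl options) go into the
# cluster's configuration:
postgresql::server::config_entry { 'archive_command':
  value => '/usr/local/bin/pg-backup-file main WAL %p',
}
```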

Add a postgres::backup_cluster stanza to get it backed up.

Multiple clusters/compatibility mode

Since we often have more than one cluster on a host, we cannot use the puppet postgresql::server class for most things. In that case, each cluster's postgresql.conf needs at least:

  track_counts = yes
  archive_mode = on
  wal_level = archive
  max_wal_senders = 3
  archive_timeout = 1h
  archive_command = '/usr/local/bin/pg-backup-file CLUSTERNAME WAL %p'
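The remaining requirements from the overview (a replication user and a pg_hba entry letting it in over ssl) then have to be set up by hand. A hedged sketch, with role name, password, and network as placeholders:

```
-- as the postgres superuser, create a role with replication privileges:
CREATE ROLE backup_replica WITH REPLICATION LOGIN PASSWORD 'changeme';

# pg_hba.conf: let that role connect for replication, ssl only:
hostssl replication backup_replica 203.0.113.0/24 md5
```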
On the puppet side, the node's hiera file (under data/nodes/ in dsa-puppet) gets the relevant roles and backup configuration:

  [git|master] weasel@orinoco:~/projects/debian/d-a/dsa-puppet$ cat data/nodes/
    - roles::snapshot_db
    - roles::postgresql::server

  postgres::backup_server::register_backup_clienthost::allow_read_hosts: ['sallinen']

base backup config

Nagios warnings


(2018-02) Our nagios check warns us when a backup server has not successfully fetched a base backup recently. Often the cause is that either the postgres server or the backup host was down at the time of the weekly cronjob.

To re-run a base backup for a specific cluster, log into the backup server(s) (storace and/or backuphost), cat /etc/dsa/postgresql-backup/base-backup-clusters to see the port for the cluster, and run

  sudo -u debbackup /usr/local/bin/postgres-make-base-backups <host>:<port>

It is probably best to do that in a screen session, as it might take a while.



To see which base backups nagios considers missing, run the check on the backup host:

  sudo -u debbackup /usr/lib/nagios/plugins/dsa-check-backuppg | grep BASE
  [fasolo, dak] MISSING-BASE:

This means that we started doing a base backup (as witnessed by a .backup file next to a WAL), but for some reason we don't have the corresponding base file.

root@backuphost:/srv/backups/pg/fasolo# ls -l *backup*
-rw------- 1 debbackup debbackup 9201093916 Jan 14 06:18
-rw------- 1 debbackup debbackup 9227651542 Jan 21 06:25
-rw------- 1 debbackup debbackup 9266306750 Jan 28 07:59
-rw------- 1 debbackup debbackup 9312602089 Feb  5 11:00
-rw------- 1 debbackup debbackup 9346830509 Feb 12 10:25
-rw------- 1 debbackup debbackup        353 Jan 14 06:18 dak.WAL.0000000100000033000000A6.00000028.backup
-rw------- 1 debbackup debbackup        350 Jan 20 11:20 dak.WAL.00000001000000350000008C.00000028.backup
-rw------- 1 debbackup debbackup        353 Jan 21 06:25 dak.WAL.000000010000003600000068.00000028.backup
-rw------- 1 debbackup debbackup        353 Jan 28 07:59 dak.WAL.0000000100000038000000E3.00000028.backup
-rw------- 1 debbackup debbackup        353 Feb  5 11:00 dak.WAL.000000010000003B00000090.00000028.backup
-rw------- 1 debbackup debbackup        350 Feb  5 15:49 dak.WAL.000000010000003B0000009B.00000108.backup
-rw------- 1 debbackup debbackup        353 Feb 11 10:09 dak.WAL.000000010000003D000000AC.00000028.backup
-rw------- 1 debbackup debbackup        353 Feb 12 10:25 dak.WAL.000000010000003E00000027.00000178.backup

.backup files are created on the postgres server and shipped to the backup hosts whenever a base backup is initiated. We do some labelling, so we know which backup host the corresponding tarball should end up on.


root@backuphost:/srv/backups/pg/fasolo# cat dak.WAL.000000010000003B00000090.00000028.backup
START WAL LOCATION: 3B/90000028 (file 000000010000003B00000090)
STOP WAL LOCATION: 3B/97CF2138 (file 000000010000003B00000097)
START TIME: 2018-02-05 10:25:28 UTC
STOP TIME: 2018-02-05 10:59:50 UTC

To fix this, verify that we have a later base tarball (or that we are fine for some other reason), then remove the corresponding .backup file from the backup host. In the case above, we would remove dak.WAL.000000010000003D000000AC.00000028.backup.
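That verification can be sketched as a small helper. This is a hypothetical convenience, not an existing script: it compares file mtimes rather than parsing the tarball naming scheme, which we do not rely on here.

```shell
# Succeed if some non-.backup file in the marker's directory is newer
# than the marker itself -- i.e. a later base tarball likely exists.
has_newer_base() {
    marker=$1
    dir=$(dirname "$marker")
    [ -n "$(find "$dir" -maxdepth 1 -type f ! -name '*.backup' -newer "$marker")" ]
}

# Demo against a throwaway directory shaped like /srv/backups/pg/<host>:
d=$(mktemp -d)
touch "$d/dak.WAL.0000000100000001.backup"
sleep 1
touch "$d/dak.BASE.tar"   # stands in for a later base tarball
has_newer_base "$d/dak.WAL.0000000100000001.backup" && echo "ok to remove marker"
```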



Another warning you may see:

  [bmdb1, main] WAL-MISSING-AFTER: bmdb1/main.WAL.0000000100001340000000DB

If it's just one WAL file missing, it can be recovered from the other backup host. If more logs are missing, check the server's logs for archive errors.