Friday, 19 July 2024

PostgreSQL Barman Backup

Setting Up and Configuring pgBarman

As discussed on the Postgres Backup for Peeps blog, here are the basic steps to install and configure Barman for backing up a production PostgreSQL server:

S.No   Hostname   IP                Role
1      Node1      192.168.113.155   Barman Server (Backup Server)
2      Node2      192.168.113.156   PostgreSQL Database Server

Steps to be performed on Node1 (Barman Server):

Install Barman: Use the OS command yum install barman to install the Barman package on the Backup Server.

 [root@localhost ~]# yum install barman

Verify the Barman package installation: installing the barman package also creates a barman OS account on the server.
Use the command barman --version to verify the installed Barman version on the Backup Server.

[root@localhost ~]# barman --version

Set a Password for the OS Account barman: Use the OS command passwd barman to set the password for the barman OS account on the Backup Server.

[root@localhost tmp]# passwd barman

Changing password for user barman.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.

Configure Passwordless Authentication on the Barman Server: Use the OS command ssh-keygen to generate the public and private key pair. Once the keys are generated, use the OS command ssh-copy-id to transfer the public key to the PostgreSQL Database Server.

[root@localhost ~]# su - barman
-bash-4.2$ ssh-keygen -t rsa
-bash-4.2$ ls -ltr .ssh
-bash-4.2$ ssh-copy-id -i ~/.ssh/id_rsa.pub postgres@node2
-bash-4.2$ ssh 'postgres@node2'
-bash-4.2$ exit

Configure the Global Configuration File on the Barman Server: The main configuration file (/etc/barman.conf by default) contains general options such as the main directory, system user, log file, and so on. Edit /etc/barman.conf, set the parameters below, and save the file.

[root@localhost ~]# cp /etc/barman.conf /etc/barman_bkp.conf
[root@localhost ~]# vi /etc/barman.conf
barman_user = barman
configuration_files_directory = /etc/barman.d
barman_home = /var/lib/barman
log_file = /var/log/barman/barman.log
log_level = INFO
compression = gzip
; make sure the gzip utility is installed on both servers
basebackup_retry_sleep = 30
basebackup_retry_times = 3
minimum_redundancy = 3
retention_policy = RECOVERY WINDOW OF 4 WEEKS
path_prefix = /usr/pgsql-15/bin/

Create the per-server configuration file in the /etc/barman.d directory as shown below:

[root@localhost ~]# cd /etc/barman.d
[root@localhost barman.d]# cp streaming-server.conf-template streaming-pg.conf
[root@localhost barman.d]# vi streaming-pg.conf
[streaming-pg]
description =  "Example of PostgreSQL Database (Streaming-Only)"
conninfo = host=192.168.113.156 user=barman dbname=postgres password=postgres
streaming_conninfo = host=192.168.113.156 user=streaming_barman password=postgres
backup_method = postgres
streaming_archiver = on
slot_name = barman
backup_directory = /var/lib/barman/backups
;backup_options = concurrent_backup
retention_policy = RECOVERY WINDOW OF 7 DAYS
wal_retention_policy = main
retention_policy_mode = auto
minimum_redundancy = 2

Steps to be followed on Node2 (PostgreSQL DB Server):

Configure Passwordless Authentication on the PostgreSQL Database Server: Use the OS command ssh-keygen to generate the public and private key pair. Once the keys are generated, use the OS command ssh-copy-id to transfer the public key to the Barman Server.

[root@localhost ~]# su - postgres
-bash-4.2$ ssh-keygen -t rsa
-bash-4.2$ ls -ltr .ssh
-bash-4.2$ ssh-copy-id -i ~/.ssh/id_rsa.pub barman@node1
-bash-4.2$ ssh 'barman@node1'
-bash-4.2$ exit

Create the two PostgreSQL users (database roles) that the streaming-pg configuration connects as. When prompted, set the passwords configured in conninfo and streaming_conninfo (postgres in this example):

  1. barman (used by conninfo; create it as a superuser, or grant the specific privileges described in the Barman documentation)

[root@localhost ~]# su - postgres
[postgres@localhost ~]$ createuser --superuser --pwprompt barman

  2. streaming_barman (used by streaming_conninfo; requires the REPLICATION attribute)

[postgres@localhost ~]$ createuser --replication --pwprompt streaming_barman

Set parameters in the postgresql.conf file:

[root@localhost ~]# su - postgres
[postgres@localhost ~]$ cd /var/lib/pgsql/15/data/
[postgres@localhost data]$ vi postgresql.conf
listen_addresses = '*'
wal_level=replica
max_wal_senders=10
max_replication_slots=10
archive_mode = on  
archive_command = 'cp %p /var/lib/pgsql/15/archivedir/%f'
hot_standby = on
synchronous_standby_names = 'barman_receive_wal'
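The archive_command above copies WAL segments into /var/lib/pgsql/15/archivedir, so that directory must exist and be writable by the postgres user before archiving starts; a minimal sketch, assuming the default pgsql home layout:

[postgres@localhost ~]$ mkdir -p /var/lib/pgsql/15/archivedir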

Set parameters in the pg_hba.conf file:

[postgres@localhost ~]$ cd /var/lib/pgsql/15/data/
[postgres@localhost data]$ vi pg_hba.conf
#ip4
host all all 0.0.0.0/0 md5
#replication privilege
host  replication  streaming_barman  192.168.113.155/32  md5

[root@localhost ~]# systemctl restart postgresql-15.service
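Before moving on, it is worth confirming from the Barman server (Node1) that both database roles can actually connect; a quick check, assuming the PostgreSQL client binaries referenced by path_prefix are installed there:

[root@localhost ~]# su - barman
-bash-4.2$ psql -h 192.168.113.156 -U barman -d postgres -c 'SELECT version();'
-bash-4.2$ psql -h 192.168.113.156 -U streaming_barman -c 'IDENTIFY_SYSTEM' replication=1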

Perform backups using Barman:

Verify the server setup on the Barman Server: Use the command barman list-server to list the servers configured in Barman, barman show-server streaming-pg to display the server's configuration settings, and barman check streaming-pg to validate the setup.
[root@localhost ~]# su - barman
-bash-4.2$ barman list-server
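Because the streaming-pg configuration sets slot_name = barman and streaming_archiver = on, the replication slot has to be created and the WAL receiver started before barman check can pass; a sketch of the usual preparatory commands (barman cron also starts the receive-wal process in the background):

-bash-4.2$ barman receive-wal --create-slot streaming-pg
-bash-4.2$ barman cron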

[barman@localhost ~]$ barman switch-xlog --force --archive streaming-pg

[barman@localhost ~]$ barman check streaming-pg

[barman@localhost ~]$ barman backup streaming-pg --wait

[barman@localhost ~]$ barman list-backups streaming-pg
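To inspect the details of the backup just taken (begin/end WAL, size, timeline), barman show-backup can be used; latest is a shortcut accepted in place of the backup ID:

[barman@localhost ~]$ barman show-backup streaming-pg latest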

Point-in-time recovery (PITR)

Let us now perform a recovery up to the restore point we have created.
**Note:** we need to have at least one backup completed before the creation of the restore point.
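For reference, the restore point referenced below would have been created on pgprimary, after the backup, with something like the following (run as a superuser; the name restore_point must match the --target-name used later):

postgres=# SELECT pg_create_restore_point('restore_point');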

On the backup server as the barman user:

1. Check that at least one backup exists for the pgprimary server

barman@backup:~$ barman list-backup pgprimary
pgprimary 20230329T100114 - Wed Mar 29 10:01:14 2023 - Size: 31.5 MiB - WAL Size: 30.2 KiB

2. Perform the recovery

barman@backup:~$ barman recover \
                  --remote-ssh-command="ssh postgres@pgrecovery" \
                  pgprimary \
                  20230329T100114 \
                  /opt/postgres/data \
                  --target-name restore_point \
                  --target-action pause
  Starting remote restore for server pgprimary using backup 20230329T100114
  Destination directory: /opt/postgres/data
  Doing PITR. Recovery target name: 'restore_point'
  Copying the base backup.
  Generating recovery.conf
  
  Recovery completed (start time: Wed Mar 29 12:01:00 2023, elapsed time: 3 seconds)
  
  Your PostgreSQL server has been successfully prepared for recovery!

Analyzing the above command:

  • --remote-ssh-command specifies the command through which Barman connects to pgrecovery
  • pgprimary specifies the name of the server we want to recover
  • 20230329T100114 specifies the id of the backup we want to recover
  • /opt/postgres/data specifies the destination directory on pgrecovery
  • --target-name specifies the restore point previously created on pgprimary
  • --target-action option, accepting the following values:
    • shutdown: once the recovery target is reached, PostgreSQL is shut down
    • pause: once the recovery target is reached, PostgreSQL is started in a pause state, allowing users to inspect the instance
    • promote: once the recovery target is reached, PostgreSQL will exit recovery and is promoted as a master

See the PostgreSQL documentation for further details on target action.

**NOTE**: the recovery command used in this example will copy all WAL files needed to reach the target. In some cases, when the amount of WAL files is considerably large, it can be better to use the option `--get-wal`, as sketched below.
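A sketch of the same recovery invoked with that option; with `--get-wal`, WAL files are not copied up front but fetched from the Barman server at replay time, which assumes the recovery host can reach the Barman server (for example via the barman-wal-restore script from the barman-cli package):

barman@backup:~$ barman recover \
                  --remote-ssh-command="ssh postgres@pgrecovery" \
                  --get-wal \
                  pgprimary \
                  20230329T100114 \
                  /opt/postgres/data \
                  --target-name restore_point \
                  --target-action pause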

On pgrecovery you can now check that only the `test` table exists:

1. Start the PostgreSQL service (for example as sketched below): PostgreSQL will replay the WAL files until a consistent point is reached. Watch the PostgreSQL logs and wait for that to happen before continuing with the next step.
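A minimal sketch, assuming no systemd unit manages the custom data directory and that the PostgreSQL 15 binaries live under /usr/pgsql-15/bin on pgrecovery (both are assumptions; adjust to your installation):

postgres@pgrecovery:~$ /usr/pgsql-15/bin/pg_ctl -D /opt/postgres/data start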

2. Connect to the test database and check the existing tables:

test=# \d
                 List of relations
   Schema |        Name        | Type  |  Owner
  --------+--------------------+-------+----------
   public | test               | table | postgres
  (1 row)

We can see that we have restored the instance to the exact point at which we had created the restore point on pgprimary.

3. We have performed the recovery process using --target-action pause, so PostgreSQL is now in the paused state.
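While in this state, the pause can be confirmed from the same session; pg_is_wal_replay_paused() returns true until replay is resumed:

test=# SELECT pg_is_wal_replay_paused();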

To terminate the recovery and promote the server to master, execute:

test=# SELECT pg_wal_replay_resume();
