Commit 418927d1 authored by Marco Nenciarini's avatar Marco Nenciarini

Update upstream source from tag 'upstream/2.6'

Update to upstream version '2.6'
with Debian dir 066579998648e23b67faac9cee1cfcc88694dec5
parents fad7eb35 a0d49ebd
......@@ -2,19 +2,21 @@ Barman Core Team (in alphabetical order):
* Gabriele Bartolini <gabriele.bartolini@2ndquadrant.it> (architect)
* Jonathan Battiato <jonathan.battiato@2ndquadrant.it> (QA/testing)
* Anna Bellandi <anna.bellandi@2ndquadrant.com> (QA/testing)
* Giulio Calacoci <giulio.calacoci@2ndquadrant.it> (developer)
* Francesco Canovai <francesco.canovai@2ndquadrant.it> (QA/testing)
* Leonardo Cecchi <leonardo.cecchi@2ndquadrant.it> (developer)
* Gianni Ciolli <gianni.ciolli@2ndquadrant.it> (QA/testing)
* Britt Cole <britt.cole@2ndquadrant.com> (documentation)
* Niccolò Fei <niccolo.fei@2ndquadrant.com> (QA/testing)
* Marco Nenciarini <marco.nenciarini@2ndquadrant.it> (project leader)
* Rubens Souza <rubens.souza@2ndquadrant.it> (QA/testing)
Past contributors:
* Carlo Ascani
* Stefano Bianucci
* Giuseppe Broccolo
* Carlo Ascani (developer)
* Stefano Bianucci (developer)
* Giuseppe Broccolo (developer)
* Britt Cole (documentation reviewer)
Many thanks go to our sponsors (in alphabetical order):
......
2019-01-31 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Update the ChangeLog file
Prepare release 2.6
2019-01-30 Giulio Calacoci <giulio.calacoci@2ndquadrant.it>
Fix flake8 errors
2019-01-29 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Improved documentation on geo-redundancy
Also contains minor fixes and typo corrections
2019-01-10 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Add colored output for check and error/warning messages
Use ANSI escape codes for color selection
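The colour selection mentioned above boils down to wrapping messages in ANSI SGR escape sequences. A minimal sketch of the technique (the helper names below are illustrative, not Barman's actual output API):

```python
# ANSI SGR escape sequences for terminal colours (illustrative helper,
# showing the technique; not Barman's actual output module)
RED = '\033[31m'
GREEN = '\033[32m'
YELLOW = '\033[33m'
RESET = '\033[0m'

def colorize(message, color):
    """Wrap a message in an ANSI colour code, resetting afterwards."""
    return '%s%s%s' % (color, message, RESET)

# A check result could then be rendered as colorize('OK', GREEN)
# for successes and colorize('FAILED', RED) for errors/warnings.
```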
2018-11-15 Giulio Calacoci <giulio.calacoci@2ndquadrant.it>
Geographic redundancy implementation
Introduce three new commands (`sync-info`, `sync-backup` and `sync-wals`)
and one global/server configuration option called `primary_ssh_command`.
When the latter is specified globally, the whole Barman instance is
an asynchronous copy of the Barman server reached via SSH through
`primary_ssh_command`.
If specified on a single server definition, that server in Barman is
an asynchronous copy of the same server defined on another Barman
installation.
Geo-redundancy is asynchronous and based on SSH connections between
Barman servers. Cascading backup is supported.
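As a concrete illustration, a passive server definition could look roughly like this (hostnames and the server name are hypothetical; the `passive-server.conf-template` shipped with this release is the authoritative reference):

```ini
; /etc/barman.d/pg-main.conf on the passive Barman node (illustrative values)
[pg-main]
description = "Passive copy of pg-main, synced from the primary Barman server"
; Reach the primary Barman installation via SSH;
; setting this makes the server a passive node
primary_ssh_command = ssh barman@primary-barman.example.com
backup_method = rsync
```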
2019-01-04 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Implement put-wal command
2019-01-08 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Add utils.fsync_file() method
2019-01-10 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Clarify output of list-backup on 'WAITING_FOR_WALS' backups
2018-12-24 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Do not treat lock file busy as an error when validating a backup
This patch prevents the `barman backup` command from terminating
with a failure because of a race condition with the cron starting a
backup validation.
2018-12-06 Abhijit Menon-Sen <ams@2ndQuadrant.com>
Fix typo: consistencty → consistency
2018-11-23 Martín Marqués <martin.marques@2ndquadrant.com>
Fix typo in NEWS file referring to xlogdb
2018-11-24 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Fix current_action in concurrent stop backup errors
2018-11-21 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Documentation: exclude PGDG repository from Barman RPM management
Document how to exclude any Barman related software from getting
updated via PGDG RPM repositories.
2018-11-13 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Fix WAL compression detection algorithm
This patch fixes a misdetection of the compression status of WAL files
in the archive when the compression method changes.
2018-11-09 Giulio Calacoci <giulio.calacoci@2ndquadrant.it>
Set version to 2.6a1
2018-11-08 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Honour archiver locking in `wait_for_wal` method
Fixes a race between the archiver run by cron and the archiver
run by the `wait_for_wal` method.
Now only one archiver can run at a time.
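The mutual exclusion described in this entry can be sketched with an advisory lock file (a simplified stand-in for Barman's lockfile module, whose real implementation differs):

```python
import fcntl
import os

class ArchiverLock:
    """Minimal advisory lock sketch: only one holder at a time per
    lock file (illustrative; not Barman's actual lockfile class)."""

    def __init__(self, path):
        self.path = path
        self.fd = None

    def acquire(self):
        """Try to take the lock without blocking; return True on success."""
        self.fd = os.open(self.path, os.O_CREAT | os.O_RDWR)
        try:
            fcntl.flock(self.fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return True
        except OSError:
            # Another archiver already holds the lock
            os.close(self.fd)
            self.fd = None
            return False

    def release(self):
        if self.fd is not None:
            fcntl.flock(self.fd, fcntl.LOCK_UN)
            os.close(self.fd)
            self.fd = None
```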
2018-11-05 Leonardo Cecchi <leonardo.cecchi@2ndquadrant.com>
Make code compliant with the newer Flake8
Fix switch-wal on standby with WAL dir empty
This patch fixes an error that occurred when using the `switch-wal`
command on a standby server whose WAL directory is empty.
Closes: #5993
2018-10-22 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Update the ChangeLog file
......
Barman News - History of user-visible changes
Copyright (C) 2011-2018 2ndQuadrant Limited
Version 2.6 - 4 Feb 2019
- Add support for Geographical redundancy, introducing 3 new commands:
sync-info, sync-backup and sync-wals. Geo-redundancy allows a Barman
server to use another Barman server as data source instead of a
PostgreSQL server.
- Add put-wal command that allows Barman to safely receive WAL files
via PostgreSQL's archive_command using the barman-wal-archive script
included in barman-cli
- Add ANSI colour support to check command
- Minor fixes:
- Fix switch-wal on standby with an empty WAL directory
- Honour archiver locking in wait_for_wal method
- Fix WAL compression detection algorithm
- Fix current_action in concurrent stop backup errors
- Do not treat lock file busy as an error when validating a backup
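The put-wal flow can be wired up from the PostgreSQL side roughly as follows (hostname and server name are hypothetical; `barman-wal-archive` ships with the separate barman-cli package):

```ini
# postgresql.conf on the database server (illustrative values)
archive_mode = on
archive_command = 'barman-wal-archive backup.example.com pg-main %p'
```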
Version 2.5 - 23 Oct 2018
- Add support for PostgreSQL 11
......@@ -419,7 +440,7 @@ Version 1.4.1 - 05 May 2015
command (Closes: #63)
* Fix computation of WAL production ratio as reported in the
show-backup command
* Improved management of xlogb file, which is now correctly fsynced
* Improved management of xlogdb file, which is now correctly fsynced
when updated. Also, the rebuild-xlogdb command now operates on a
temporary new file, which overwrites the main one when finished.
* Add unit tests for dateutil module compatibility
......
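The temporary-file approach described for rebuild-xlogdb above is the classic write-then-rename pattern; a generic sketch under that assumption (not Barman's actual code):

```python
import os
import tempfile

def rewrite_atomically(path, lines):
    """Rewrite a text file safely: write a temporary file in the same
    directory, fsync it, then atomically replace the original, so a
    crash mid-write never leaves a truncated file behind."""
    dir_name = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dir_name)
    try:
        with os.fdopen(fd, 'w') as tmp:
            tmp.writelines(line + '\n' for line in lines)
            tmp.flush()
            os.fsync(tmp.fileno())
        os.replace(tmp_path, path)  # atomic within one filesystem
    except Exception:
        os.unlink(tmp_path)
        raise
```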
Metadata-Version: 1.1
Name: barman
Version: 2.5
Version: 2.6
Summary: Backup and Recovery Manager for PostgreSQL
Home-page: http://www.pgbarman.org/
Author: 2ndQuadrant Limited
......@@ -29,3 +29,4 @@ Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
......@@ -44,7 +44,7 @@ Web resources
Licence
-------
Copyright (C) 2011-2018 2ndQuadrant Limited
Copyright (C) 2011-2019 2ndQuadrant Limited
Barman is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
......
Metadata-Version: 1.1
Name: barman
Version: 2.5
Version: 2.6
Summary: Backup and Recovery Manager for PostgreSQL
Home-page: http://www.pgbarman.org/
Author: 2ndQuadrant Limited
......@@ -29,3 +29,4 @@ Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
......@@ -60,6 +60,7 @@ doc/barman.1.d/50-get-wal.md
doc/barman.1.d/50-list-backup.md
doc/barman.1.d/50-list-files.md
doc/barman.1.d/50-list-server.md
doc/barman.1.d/50-put-wal.md
doc/barman.1.d/50-rebuild-xlogdb.md
doc/barman.1.d/50-receive-wal.md
doc/barman.1.d/50-recover.md
......@@ -69,6 +70,9 @@ doc/barman.1.d/50-show-server.md
doc/barman.1.d/50-status.md
doc/barman.1.d/50-switch-wal.md
doc/barman.1.d/50-switch-xlog.md
doc/barman.1.d/50-sync-backup.md
doc/barman.1.d/50-sync-info.md
doc/barman.1.d/50-sync-wals.md
doc/barman.1.d/70-backup-id-shortcuts.md
doc/barman.1.d/75-exit-status.md
doc/barman.1.d/80-see-also.md
......@@ -132,6 +136,7 @@ doc/barman.5.d/50-pre_recovery_retry_script.md
doc/barman.5.d/50-pre_recovery_script.md
doc/barman.5.d/50-pre_wal_delete_retry_script.md
doc/barman.5.d/50-pre_wal_delete_script.md
doc/barman.5.d/50-primary_ssh_command.md
doc/barman.5.d/50-recovery_options.md
doc/barman.5.d/50-retention_policy.md
doc/barman.5.d/50-retention_policy_mode.md
......@@ -153,8 +158,10 @@ doc/barman.5.d/80-see-also.md
doc/barman.5.d/90-authors.md
doc/barman.5.d/95-resources.md
doc/barman.5.d/99-copying.md
doc/barman.d/passive-server.conf-template
doc/barman.d/ssh-server.conf-template
doc/barman.d/streaming-server.conf-template
doc/images/barman-architecture-georedundancy.png
doc/images/barman-architecture-scenario1.png
doc/images/barman-architecture-scenario1b.png
doc/images/barman-architecture-scenario2.png
......
......@@ -30,7 +30,9 @@ import dateutil.parser
import dateutil.tz
from barman import output, xlog
from barman.backup_executor import PostgresBackupExecutor, RsyncBackupExecutor
from barman.backup_executor import (PassiveBackupExecutor,
PostgresBackupExecutor,
RsyncBackupExecutor)
from barman.compression import CompressionManager
from barman.config import BackupOptions
from barman.exceptions import (AbortedRetryHookScript,
......@@ -38,9 +40,11 @@ from barman.exceptions import (AbortedRetryHookScript,
UnknownBackupIdException)
from barman.hooks import HookScriptRunner, RetryHookScriptRunner
from barman.infofile import BackupInfo, WalFileInfo
from barman.lockfile import ServerBackupSyncLock
from barman.recovery_executor import RecoveryExecutor
from barman.remote_status import RemoteStatusMixin
from barman.utils import fsync_dir, human_readable_timedelta, pretty_size
from barman.utils import (fsync_dir, fsync_file, human_readable_timedelta,
pretty_size)
_logger = logging.getLogger(__name__)
......@@ -61,7 +65,9 @@ class BackupManager(RemoteStatusMixin):
self.compression_manager = CompressionManager(self.config, server.path)
self.executor = None
try:
if self.config.backup_method == "postgres":
if server.passive_node:
self.executor = PassiveBackupExecutor(self)
elif self.config.backup_method == "postgres":
self.executor = PostgresBackupExecutor(self)
else:
self.executor = RsyncBackupExecutor(self)
......@@ -169,8 +175,8 @@ class BackupManager(RemoteStatusMixin):
if not isinstance(status_filter, tuple):
status_filter = tuple(status_filter)
backup = BackupInfo(self.server, backup_id=backup_id)
available_backups = self.get_available_backups(status_filter +
(backup.status,))
available_backups = self.get_available_backups(
status_filter + (backup.status,))
ids = sorted(available_backups.keys())
try:
current = ids.index(backup_id)
......@@ -194,8 +200,8 @@ class BackupManager(RemoteStatusMixin):
if not isinstance(status_filter, tuple):
status_filter = tuple(status_filter)
backup = BackupInfo(self.server, backup_id=backup_id)
available_backups = self.get_available_backups(status_filter +
(backup.status,))
available_backups = self.get_available_backups(
status_filter + (backup.status,))
ids = sorted(available_backups.keys())
try:
current = ids.index(backup_id)
......@@ -339,6 +345,15 @@ class BackupManager(RemoteStatusMixin):
human_readable_timedelta(
delete_end_time - delete_start_time))
# Remove the sync lockfile if exists
sync_lock = ServerBackupSyncLock(self.config.barman_lock_directory,
self.config.name, backup.backup_id)
if os.path.exists(sync_lock.filename):
_logger.debug("Deleting backup sync lockfile: %s" %
sync_lock.filename)
os.unlink(sync_lock.filename)
# Run the post_delete_retry_script if present.
try:
retry_script = RetryHookScriptRunner(
......@@ -426,11 +441,13 @@ class BackupManager(RemoteStatusMixin):
backup_info.end_xlog,
backup_info.end_wal,
backup_info.end_offset)
output.info("Backup completed (start time: %s, elapsed time: %s)",
self.executor.copy_start_time,
human_readable_timedelta(
self.executor.copy_end_time -
self.executor.copy_start_time))
executor = self.executor
output.info(
"Backup completed (start time: %s, elapsed time: %s)",
self.executor.copy_start_time,
human_readable_timedelta(
executor.copy_end_time - executor.copy_start_time))
# Create a restore point after a backup
target_name = 'barman_%s' % backup_info.backup_id
self.server.postgres.create_restore_point(target_name)
......@@ -553,8 +570,9 @@ class BackupManager(RemoteStatusMixin):
"""
Retention policy management
"""
if (self.server.enforce_retention_policies and
self.config.retention_policy_mode == 'auto'):
enforce_retention_policies = self.server.enforce_retention_policies
retention_policy_mode = self.config.retention_policy_mode
if (enforce_retention_policies and retention_policy_mode == 'auto'):
available_backups = self.get_available_backups(
BackupInfo.STATUS_ALL)
retention_status = self.config.retention_policy.report()
......@@ -664,7 +682,7 @@ class BackupManager(RemoteStatusMixin):
else:
status = True
try:
self.compression_manager.get_compressor()
self.compression_manager.get_default_compressor()
except CompressionIncompatibility as field:
check_strategy.result(self.config.name,
'%s setting' % field, False)
......@@ -761,7 +779,7 @@ class BackupManager(RemoteStatusMixin):
output.info("Rebuilding xlogdb for server %s", self.config.name)
root = self.config.wals_directory
default_compression = self.config.compression
comp_manager = self.compression_manager
wal_count = label_count = history_count = 0
# lock the xlogdb as we are about replacing it completely
with self.server.xlogdb('w') as fxlogdb:
......@@ -799,17 +817,15 @@ class BackupManager(RemoteStatusMixin):
'rebuilding the wal database: %s',
fullname)
continue
wal_info = WalFileInfo.from_file(
fullname,
default_compression=default_compression)
wal_info = comp_manager.get_wal_file_info(
fullname)
fxlogdb_new.write(wal_info.to_xlogdb_line())
else:
# only history files are here
if xlog.is_history_file(fullname):
history_count += 1
wal_info = WalFileInfo.from_file(
fullname,
default_compression=default_compression)
wal_info = comp_manager.get_wal_file_info(
fullname)
fxlogdb_new.write(wal_info.to_xlogdb_line())
else:
_logger.warning(
......@@ -834,10 +850,11 @@ class BackupManager(RemoteStatusMixin):
from os.path import isdir, join
root = self.config.wals_directory
comp_manager = self.compression_manager
# If the WAL archive directory doesn't exist the archive is empty
if not isdir(root):
return None
return dict()
# Traverse all the directory in the archive in reverse order,
# returning the first WAL file found
......@@ -864,11 +881,12 @@ class BackupManager(RemoteStatusMixin):
fullname = join(hash_dir, wal_name)
# Return the first file that has the correct name
if not isdir(fullname) and xlog.is_wal_file(fullname):
timelines[timeline] = WalFileInfo.from_file(fullname)
timelines[timeline] = comp_manager.get_wal_file_info(
fullname)
break
# Return the timeline map or None if it is empty
return timelines or None
# Return the timeline map
return timelines
def remove_wal_before_backup(self, backup_info, timelines_to_protect=None):
"""
......@@ -981,14 +999,11 @@ class BackupManager(RemoteStatusMixin):
# execute fsync() on all the contained files
for filename in file_names:
file_path = os.path.join(dir_path, filename)
file_fd = os.open(file_path, os.O_RDONLY)
file_stat = os.fstat(file_fd)
file_stat = fsync_file(file_path)
backup_size += file_stat.st_size
# Excludes hard links from real backup size
if file_stat.st_nlink == 1:
deduplicated_size += file_stat.st_size
os.fsync(file_fd)
os.close(file_fd)
# Save size into BackupInfo object
backup_info.set_attribute('size', backup_size)
backup_info.set_attribute('deduplicated_size', deduplicated_size)
......
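The `utils.fsync_file()` helper used in the hunk above can be sketched as follows, inferred from how the call site consumes its return value (it must replace the previous open/fstat/fsync/close sequence and return the stat result):

```python
import os

def fsync_file(file_path):
    """Flush a file's content to disk and return its stat information,
    so the caller can reuse st_size / st_nlink without a second stat.
    (A plausible sketch of barman.utils.fsync_file, not the verbatim code.)"""
    file_fd = os.open(file_path, os.O_RDONLY)
    try:
        file_stat = os.fstat(file_fd)
        os.fsync(file_fd)
    finally:
        os.close(file_fd)
    return file_stat
```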
......@@ -661,10 +661,12 @@ class SshBackupExecutor(with_metaclass(ABCMeta, BackupExecutor)):
'for server %s' % backup_manager.config.name)
# Apply the default backup strategy
if (BackupOptions.CONCURRENT_BACKUP not in
self.config.backup_options and
BackupOptions.EXCLUSIVE_BACKUP not in
self.config.backup_options):
backup_options = self.config.backup_options
concurrent_backup = (
BackupOptions.CONCURRENT_BACKUP in backup_options)
exclusive_backup = (
BackupOptions.EXCLUSIVE_BACKUP in backup_options)
if not concurrent_backup and not exclusive_backup:
self.config.backup_options.add(BackupOptions.EXCLUSIVE_BACKUP)
output.debug("The default backup strategy for "
"any ssh based backup_method is: "
......@@ -778,7 +780,7 @@ class SshBackupExecutor(with_metaclass(ABCMeta, BackupExecutor)):
path=self.server.path)
minimal_ssh_output = ''.join(cmd.get_last_output())
except FsOperationFailed as e:
hint = str(e).strip()
hint = str(e).strip()
# Output the result
check_strategy.result(self.config.name, cmd is not None, hint=hint)
......@@ -793,9 +795,9 @@ class SshBackupExecutor(with_metaclass(ABCMeta, BackupExecutor)):
"the remote command output")
# If SSH works but PostgreSQL is not responding
if (cmd is not None and
self.server.get_remote_status().get('server_txt_version')
is None):
server_txt_version = self.server.get_remote_status().get(
'server_txt_version')
if cmd is not None and server_txt_version is None:
# Check for 'backup_label' presence
last_backup = self.server.get_backup(
self.server.get_last_backup_id(BackupInfo.STATUS_NOT_EMPTY)
......@@ -883,6 +885,95 @@ class SshBackupExecutor(with_metaclass(ABCMeta, BackupExecutor)):
output.info(message)
class PassiveBackupExecutor(BackupExecutor):
"""
Dummy backup executors for Passive servers.
Raises a SshCommandException if 'primary_ssh_command' is not set.
"""
def __init__(self, backup_manager):
"""
Constructor of Dummy backup executors for Passive servers.
:param barman.backup.BackupManager backup_manager: the BackupManager
assigned to the executor
"""
super(PassiveBackupExecutor, self).__init__(backup_manager)
# Retrieve the ssh command and the options necessary for the
# remote ssh access.
self.ssh_command, self.ssh_options = _parse_ssh_command(
backup_manager.config.primary_ssh_command)
# Requires ssh_command to be set
if not self.ssh_command:
raise SshCommandException(
'Invalid primary_ssh_command in barman configuration '
'for server %s' % backup_manager.config.name)
def backup(self, backup_info):
"""
This method should never be called, because this is a passive server
:param barman.infofile.BackupInfo backup_info: backup information
"""
# The 'backup' command is not available on a passive node.
# If we get here, there is a programming error
assert False
def check(self, check_strategy):
"""
Perform additional checks for PassiveBackupExecutor, including
Ssh connection to the primary (executing a 'true' command on the
remote server).
:param CheckStrategy check_strategy: the strategy for the management
of the results of the various checks
"""
check_strategy.init_check('ssh')
hint = 'Barman primary node'
cmd = None
minimal_ssh_output = None
try:
cmd = UnixRemoteCommand(self.ssh_command,
self.ssh_options,
path=self.server.path)
minimal_ssh_output = ''.join(cmd.get_last_output())
except FsOperationFailed as e:
hint = str(e).strip()
# Output the result
check_strategy.result(self.config.name, cmd is not None, hint=hint)
# Check if the communication channel is "clean"
if minimal_ssh_output:
check_strategy.init_check('ssh output clean')
check_strategy.result(
self.config.name,
False,
hint="the configured ssh_command must not add anything to "
"the remote command output")
def status(self):
"""
Set additional status info for PassiveBackupExecutor.
"""
# On passive nodes show the primary_ssh_command
output.result('status', self.config.name,
"primary_ssh_command",
"SSH command to primary server",
self.config.primary_ssh_command)
@property
def mode(self):
"""
Property that defines the mode used for the backup.
:return str: a string describing the mode used for the backup
"""
return 'passive'
class RsyncBackupExecutor(SshBackupExecutor):
"""
Concrete class for backup via Rsync+Ssh.
......@@ -1135,10 +1226,10 @@ class BackupStrategy(with_metaclass(ABCMeta, object)):
"""
#: Regex for START WAL LOCATION info
START_TIME_RE = re.compile('^START TIME: (.*)', re.MULTILINE)
START_TIME_RE = re.compile(r'^START TIME: (.*)', re.MULTILINE)
#: Regex for START TIME info
WAL_RE = re.compile('^START WAL LOCATION: (.*) \(file (.*)\)',
WAL_RE = re.compile(r'^START WAL LOCATION: (.*) \(file (.*)\)',
re.MULTILINE)
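The raw-string prefix added above silences the invalid-escape warning without changing the pattern's behaviour; for illustration, here is the same regex parsing a sample backup_label line (the label content is made up):

```python
import re

WAL_RE = re.compile(r'^START WAL LOCATION: (.*) \(file (.*)\)',
                    re.MULTILINE)

# Hypothetical backup_label content for demonstration purposes
label = 'START WAL LOCATION: 0/2000028 (file 000000010000000000000002)'
match = WAL_RE.search(label)
# group(1) is the start LSN, group(2) the corresponding WAL file name
```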
def __init__(self, executor, mode=None):
......@@ -1251,8 +1342,9 @@ class BackupStrategy(with_metaclass(ABCMeta, object)):
backup_info.xlog_segment_size))
# If file_name and file_offset are available, use them
if (start_info.get('file_name') is not None and
start_info.get('file_offset') is not None):
file_name = start_info.get('file_name')
file_offset = start_info.get('file_offset')
if (file_name is not None and file_offset is not None):
backup_info.set_attribute('begin_wal',
start_info['file_name'])
backup_info.set_attribute('begin_offset',
......@@ -1276,8 +1368,9 @@ class BackupStrategy(with_metaclass(ABCMeta, object)):
# If file_name or file_offset are missing build them using the stop
# location and the timeline.
if (stop_info.get('file_name') is None or
stop_info.get('file_offset') is None):
file_name = stop_info.get('file_name')
file_offset = stop_info.get('file_offset')
if file_name is None or file_offset is None:
# Take a copy of stop_info because we are going to update it
stop_info = stop_info.copy()
# Get the timeline from the stop_info if available, otherwise
......@@ -1559,12 +1652,15 @@ class ConcurrentBackupStrategy(BackupStrategy):
:param barman.infofile.BackupInfo backup_info: backup information
"""
pg_version = self.executor.server.postgres.server_version
self.current_action = "issuing stop backup command"
if pg_version >= 90600:
# On 9.6+ execute native concurrent stop backup
self.current_action += " (native concurrent)"
_logger.debug("Stop of native concurrent backup")
self._concurrent_stop_backup(backup_info)
else:
# On older Postgres use pgespresso
self.current_action += " (pgespresso)"
_logger.debug("Stop of concurrent backup with pgespresso")
self._pgespresso_stop_backup(backup_info)
......
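The `90600` threshold in the hunk above follows PostgreSQL's numeric `server_version` encoding; a small sketch of the decision (the helper name is illustrative):

```python
def uses_native_concurrent_backup(server_version):
    """PostgreSQL encodes versions numerically: 9.6.0 -> 90600,
    10.1 -> 100001, 11.2 -> 110002. Native concurrent backup is
    available from 9.6 onwards; older versions fall back to the
    pgespresso extension, as in the diff above."""
    return server_version >= 90600

# uses_native_concurrent_backup(90432)  -> False (9.4.32, pgespresso path)
# uses_native_concurrent_backup(110002) -> True  (11.2, native path)
```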
......@@ -19,6 +19,7 @@
This module implements the interface with the command line and the logger.
"""
import json
import logging
import os
import sys
......@@ -32,10 +33,11 @@ import barman.config
import barman.diagnose
from barman import output
from barman.config import RecoveryOptions
from barman.exceptions import BadXlogSegmentName, RecoveryException
from barman.exceptions import BadXlogSegmentName, RecoveryException, SyncError
from barman.infofile import BackupInfo
from barman.server import Server
from barman.utils import configure_logging, drop_privileges, parse_log_level
from barman.utils import (BarmanEncoder, configure_logging, drop_privileges,
parse_log_level)
_logger = logging.getLogger(__name__)
......@@ -118,6 +120,9 @@ def list_server(minimal=False):
# If server has configuration errors
elif server.config.disabled:
description += " (WARNING: disabled)"
# If server is a passive node
if server.passive_node:
description += ' (Passive)'
output.result('list_server', name, description)
output.close_and_exit()
......@@ -209,7 +214,7 @@ def backup(args):
"""
Perform a full backup for the given server (supports 'all')
"""
servers = get_server_list(args, skip_inactive=True)
servers = get_server_list(args, skip_inactive=True, skip_passive=True)
for name in sorted(servers):
server = servers[name]
......@@ -648,6 +653,84 @@ def diagnose():
output.close_and_exit()
@named('sync-info')
@arg('--primary', help='execute the sync-info on the primary node (if set)',
action='store_true', default=SUPPRESS)
@arg("server_name",
completer=server_completer,
help='specifies the server name for the command')
@arg("last_wal",
help='specifies the name of the latest WAL read',
nargs='?')
@arg("last_position",
nargs='?',
type=check_positive,
help='the last position read from xlog database (in bytes)')
@expects_obj
def sync_info(args):
"""
Output the internal synchronisation status.
Used to sync_backup with a passive node
"""
server = get_server(args)
try:
# if called with --primary option
if getattr(args, 'primary', False):
primary_info = server.primary_node_info(args.last_wal,
args.last_position)
output.info(json.dumps(primary_info, sys.stdout,
cls=BarmanEncoder, indent=4),
log=False)
else:
server.sync_status(args.last_wal, args.last_position)
except SyncError as e:
# Catch SyncError exceptions and output only the error message,
# preventing from logging the stack trace
output.error(e)
output.close_and_exit()
@named('sync-backup')
@arg("server_name",
completer=server_completer,
help='specifies the server name for the command')
@arg("backup_id",
help='specifies the backup ID to be copied on the passive node')
@expects_obj
def sync_backup(args):
"""
Command that synchronises a backup from a master to a passive node
"""
server = get_server(args)
try:
server.sync_backup(args.backup_id)
except SyncError as e:
# Catch SyncError exceptions and output only the error message,
# preventing from logging the stack trace
output.error(e)
output.close_and_exit()
@named('sync-wals')
@arg("server_name",
completer=server_completer,
help='specifies the server name for the command')
@expects_obj
def sync_wals(args):
"""
Command that synchronises WAL files from a master to a passive node
"""
server = get_server(args)
try:
server.sync_wals()
except SyncError as e:
# Catch SyncError exceptions and output only the error message,
# preventing from logging the stack trace
output.error(e)
output.close_and_exit()
@named('show-backup')
@arg('server_name',
completer=server_completer,
......@@ -775,6 +858,28 @@ def get_wal(args):
output.close_and_exit()
@named('put-wal')
@arg('server_name',
completer=server_completer,
help='specifies the server name for the command')
@expects_obj
def put_wal(args):