Commit 7486ce68 authored by Marco Nenciarini

New upstream version 2.9

parent 11806dbc
2019-07-26 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Update the ChangeLog file
Version set to 2.9
2019-07-25 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Add release notes for version 2.9
Manage `synchronous_standby_names = '*'` special entry
Fixes GH-231
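The fix treats the special `'*'` entry in `synchronous_standby_names` as matching any standby name, including Barman's streaming archiver. A minimal sketch of that check (the function name is illustrative, not Barman's actual API):

```python
def is_synchronous(syncnames, archiver_name):
    """Return True if the streaming archiver is a synchronous standby.

    syncnames: names parsed from synchronous_standby_names; the special
    entry '*' matches any standby name, including the archiver's.
    """
    return bool(syncnames) and (
        '*' in syncnames or archiver_name in syncnames)

# is_synchronous(['*'], 'barman_receive_wal')      -> True
# is_synchronous(['node_a'], 'barman_receive_wal') -> False
```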
2019-07-23 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Clarify documentation about WAL archiving from standby
Fixes GH-232
2019-07-24 Leonardo Cecchi <leonardo.cecchi@2ndquadrant.it>
Fix exception in check-backup for in progress backups
Backups that are in progress do not have `end_wal` set, and that was
causing check_backup to fail.
Fixes GH-224
2019-07-11 Marcin Hlybin <marcin.hlybin@gmail.com>
JSON Output Writer (experimental)
Add -f/--format option with 'json' value. Default is 'console'.
Currently declared experimental. Actual structure of the output
will be defined and documented in a future release based on
feedback and production experience.
Fixes GH-234
2019-07-22 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Truncate the .partial WAL file when size is wrong
During the `receive-wal` startup, if we detect that the partial file size
is wrong, we truncate it and then start `pg_receivewal`, making Barman
more resilient.
This is consistent with the behaviour of `pg_receivewal`, which allocates
the exact size needed for a WAL segment before starting to write it.
This also enables `receive-wal` to resume automatically after the backup
server runs out of disk space.
2019-07-18 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Add `--target-lsn` option to `recover` command
Support `recovery_target_lsn` option in recovery, allowing users
to specify a Log Sequence Number (LSN) as recovery target.
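An LSN is written as two hexadecimal numbers separated by a slash (e.g. `16/B374D848`): the high and low 32 bits of a 64-bit WAL position. A hedged sketch of how such a target value decodes (helper name is illustrative, not part of Barman):

```python
import re

def parse_lsn(lsn):
    """Decode a PostgreSQL LSN string like '16/B374D848' into an integer."""
    m = re.match(r'^([0-9A-Fa-f]+)/([0-9A-Fa-f]+)$', lsn)
    if m is None:
        raise ValueError('invalid LSN: %r' % (lsn,))
    # high 32 bits / low 32 bits of the 64-bit WAL position
    return (int(m.group(1), 16) << 32) + int(m.group(2), 16)
```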
Extract 'checkpoint_timeout'
2019-03-29 Timothy Alexander <dragonfyre13@gmail.com>
Fix execution environment of recovery retry scripts
This patch fixes the execution environment of the recovery retry
scripts: previously the environment applied was the one for the
"recovery script" instead of the one for the "recovery retry script".
Pull request GH-203
2019-04-13 Jeff Janes <jeff.janes@gmail.com>
Fix "Backup completed" elapsed time
The message issued when a backup completes does not include all of the
elapsed time, but rather only a non-intuitive subset of it.
Notably it does not include the time taken to fsync all of the files,
which can be substantial. Make the message include all time up to when
the message is issued.
Pull request GH-216
2019-07-11 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Support for PostgreSQL 12
PostgreSQL 12 removes support for `recovery.conf` in favour of
GUC options, and introduces two signal files: `recovery.signal`
for recovery and `standby.signal` for standby mode.
Barman detects PostgreSQL 12 and transparently supports it both
during backup (by excluding some files) and, most importantly,
during recovery, by rewriting configuration options in the
`postgresql.auto.conf` file.
Support for versions prior to 12 has not been changed.
2019-07-12 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Rename barman_xlog directory to barman_wal
2019-07-15 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Preserve PostgreSQL connection status in aliveness check
In some cases, PostgreSQL was terminating the connection for the
start backup with the following error message:
"FATAL: terminating connection due to idle-in-transaction timeout".
This was caused by the "SELECT 1" query that checks that a connection
is alive.
Fixes GH-149
2019-06-04 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Skip WAL archive failure if we have a terminated backup
In low workload situations with concurrent backup, at the end of the
backup the WAL archive might be temporarily empty and therefore throw
a failure with `check` or commands that rely on it such as `backup`.
Ignore the error if the last backup is `WAITING_FOR_WALS` as WAL
archiving must have been verified before running that backup.
2019-05-20 Drazen Kacar <drazen.kacar@oradian.com>
barman-wal-restore: Add --spool-dir option
Allow the user to change the spool directory location from the
default, avoiding conflicts in case of multiple PostgreSQL instances
on the same server.
Closes: GH-225, 2ndquadrant-it/barman-cli#2
2019-05-23 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Add '--bwlimit' option to backup and recover commands
2019-05-16 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
replication-status doesn't show streamers without a slot
Fixes GH-222
2019-05-23 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Use latest PostgreSQL release in sample configurations
2019-05-17 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Clarify some options in passive node configuration example
2019-05-16 Gabriele Bartolini <gabriele.bartolini@2ndQuadrant.it>
Version set to 2.9a1
2019-05-14 Marco Nenciarini <marco.nenciarini@2ndquadrant.it>
Update the ChangeLog file
......
Barman News - History of user-visible changes
Copyright (C) 2011-2019 2ndQuadrant Limited
Version 2.9 - 1 Aug 2019
- Transparently support PostgreSQL 12, by supporting the new way of
managing recovery and standby settings through GUC options and
signal files (recovery.signal and standby.signal)
- Add --bwlimit command line option to set bandwidth limitation for
backup and recover commands
- Ignore WAL archive failure for check command in case the latest
backup is WAITING_FOR_WALS
- Add --target-lsn option to set recovery target Log Sequence Number
for recover command with PostgreSQL 10 or higher
- Add --spool-dir option to barman-wal-restore so that users can
change the spool directory location from the default, avoiding
conflicts in case of multiple PostgreSQL instances on the same
server (thanks to Drazen Kacar).
- Rename barman_xlog directory to barman_wal
- JSON output writer to export command output as JSON objects and
facilitate integration with external tools and systems (thanks to
Marcin Onufry Hlybin). Experimental in this release.
Bug fixes:
- replication-status doesn’t show streamers with no slot (GH-222)
- When checking that a connection is alive (“SELECT 1” query),
preserve the status of the PostgreSQL connection (GH-149). This
fixes those cases of connections that were terminated due to
idle-in-transaction timeout, causing concurrent backups to fail.
Version 2.8 - 17 May 2019
- Add support for reuse_backup in geo-redundancy for incremental
......@@ -9,7 +44,7 @@ Version 2.8 - 17 May 2019
- Improve performance of rsync based copy by using strptime instead of
the more generic dateutil.parser (#210)
- Add ‘test’ option to barman-wal-archive and barman-wal-restore to
- Add ‘--test’ option to barman-wal-archive and barman-wal-restore to
verify the connection with the Barman server
- Complain if backup_options is not explicitly set, as the future
......
Metadata-Version: 1.1
Name: barman
Version: 2.8
Version: 2.9
Summary: Backup and Recovery Manager for PostgreSQL
Home-page: http://www.pgbarman.org/
Author: 2ndQuadrant Limited
......
Metadata-Version: 1.1
Name: barman
Version: 2.8
Version: 2.9
Summary: Backup and Recovery Manager for PostgreSQL
Home-page: http://www.pgbarman.org/
Author: 2ndQuadrant Limited
......
......@@ -447,7 +447,7 @@ class BackupManager(RemoteStatusMixin):
"Backup completed (start time: %s, elapsed time: %s)",
self.executor.copy_start_time,
human_readable_timedelta(
executor.copy_end_time - executor.copy_start_time))
datetime.datetime.now() - executor.copy_start_time))
# Create a restore point after a backup
target_name = 'barman_%s' % backup_info.backup_id
self.server.postgres.create_restore_point(target_name)
......@@ -494,6 +494,7 @@ class BackupManager(RemoteStatusMixin):
:kwparam str|None target_tli: the target timeline
:kwparam str|None target_time: the target time
:kwparam str|None target_xid: the target xid
:kwparam str|None target_lsn: the target LSN
:kwparam str|None target_name: the target name created previously with
pg_create_restore_point() function call
:kwparam bool|None target_immediate: end recovery as soon as
......@@ -511,15 +512,23 @@ class BackupManager(RemoteStatusMixin):
# Run the pre_recovery_script if present.
script = HookScriptRunner(self, 'recovery_script', 'pre')
script.env_from_recover(backup_info, dest, tablespaces, remote_command,
**kwargs)
script.env_from_recover(
backup_info,
dest,
tablespaces,
remote_command,
**kwargs)
script.run()
# Run the pre_recovery_retry_script if present.
retry_script = RetryHookScriptRunner(
self, 'recovery_retry_script', 'pre')
script.env_from_recover(backup_info, dest, tablespaces, remote_command,
**kwargs)
retry_script.env_from_recover(
backup_info,
dest,
tablespaces,
remote_command,
**kwargs)
retry_script.run()
# Execute the recovery.
......@@ -535,8 +544,12 @@ class BackupManager(RemoteStatusMixin):
try:
retry_script = RetryHookScriptRunner(
self, 'recovery_retry_script', 'post')
script.env_from_recover(
backup_info, dest, tablespaces, remote_command, **kwargs)
retry_script.env_from_recover(
backup_info,
dest,
tablespaces,
remote_command,
**kwargs)
retry_script.run()
except AbortedRetryHookScript as e:
# Ignore the ABORT_STOP as it is a post-hook operation
......@@ -1044,6 +1057,11 @@ class BackupManager(RemoteStatusMixin):
end_wal = backup_info.end_wal
timeline = begin_wal[:8]
# Case 0: there is nothing to check for this backup, as it is
# currently in progress
if not end_wal:
return
# Case 1: Barman still doesn't know about the timeline the backup
# started with. We still haven't archived any WAL corresponding
# to the backup, so we can't proceed with checking the existence
......
......@@ -1004,9 +1004,12 @@ class RsyncBackupExecutor(SshBackupExecutor):
# Files: see excludeFiles const in PostgreSQL source
'pgsql_tmp*',
'postgresql.auto.conf.tmp',
'current_logfiles.tmp',
'pg_internal.init',
'postmaster.pid',
'postmaster.opts',
'recovery.conf',
'standby.signal',
# Directories: see excludeDirContents const in PostgreSQL source
'pg_dynshmem/*',
......
......@@ -218,6 +218,13 @@ def backup_completer(prefix, parsed_args, **kwargs):
@arg('--jobs', '-j',
help='Run the copy in parallel using NJOBS processes.',
type=check_positive, metavar='NJOBS')
@arg('--bwlimit',
help="maximum transfer rate in kilobytes per second. "
"A value of 0 means no limit. Overrides 'bandwidth_limit' "
"configuration option.",
metavar='KBPS',
type=check_non_negative,
default=SUPPRESS)
@expects_obj
def backup(args):
"""
......@@ -241,6 +248,8 @@ def backup(args):
server.config.immediate_checkpoint = args.immediate_checkpoint
if args.jobs is not None:
server.config.parallel_jobs = args.jobs
if hasattr(args, 'bwlimit'):
server.config.bandwidth_limit = args.bwlimit
with closing(server):
server.backup()
output.close_and_exit()
......@@ -354,6 +363,7 @@ def rebuild_xlogdb(args):
help='target time. You can use any valid unambiguous representation. '
'e.g: "YYYY-MM-DD HH:MM:SS.mmm"')
@arg('--target-xid', help='target transaction ID')
@arg('--target-lsn', help='target LSN (Log Sequence Number)')
@arg('--target-name',
help='target name created previously with '
'pg_create_restore_point() function call')
......@@ -362,7 +372,7 @@ def rebuild_xlogdb(args):
action='store_true',
default=False)
@arg('--exclusive',
help='set target xid to be non inclusive', action="store_true")
help='set target to be non inclusive', action="store_true")
@arg('--tablespace',
help='tablespace relocation rule',
metavar='NAME:LOCATION', action='append')
......@@ -378,6 +388,13 @@ def rebuild_xlogdb(args):
help='specifies the backup ID to recover')
@arg('destination_directory',
help='the directory where the new server is created')
@arg('--bwlimit',
help="maximum transfer rate in kilobytes per second. "
"A value of 0 means no limit. Overrides 'bandwidth_limit' "
"configuration option.",
metavar='KBPS',
type=check_non_negative,
default=SUPPRESS)
@arg('--retry-times',
help='Number of retries after an error if base backup copy fails.',
type=check_non_negative)
......@@ -427,7 +444,7 @@ def rebuild_xlogdb(args):
@expects_obj
def recover(args):
"""
Recover a server at a given time or xid
Recover a server at a given time, name, LSN or xid
"""
server = get_server(args)
......@@ -485,10 +502,12 @@ def recover(args):
server.config.recovery_options.remove(RecoveryOptions.GET_WAL)
if args.jobs is not None:
server.config.parallel_jobs = args.jobs
if hasattr(args, 'bwlimit'):
server.config.bandwidth_limit = args.bwlimit
# PostgreSQL supports multiple parameters to specify when the recovery
# process will end, and in that case the last entry in recovery.conf
# will be used. See [1]
# process will end, and in that case the last entry in recovery
# configuration files will be used. See [1]
#
# Since the meaning of the target options is not dependent on the order
# of parameters, we decided to make the target options mutually exclusive.
......@@ -497,7 +516,7 @@ def recover(args):
# recovery-target-settings.html
target_options = ['target_tli', 'target_time', 'target_xid',
'target_name', 'target_immediate']
'target_lsn', 'target_name', 'target_immediate']
specified_target_options = len(
[option for option in target_options if getattr(args, option)])
if specified_target_options > 1:
......@@ -523,6 +542,7 @@ def recover(args):
target_tli=args.target_tli,
target_time=args.target_time,
target_xid=args.target_xid,
target_lsn=args.target_lsn,
target_name=args.target_name,
target_immediate=args.target_immediate,
exclusive=args.exclusive,
......
# walrestore - Remote Barman WAL restore command for PostgreSQL
#
# This script remotely fetches WAL files from Barman via SSH, on demand.
# It is intended to be used as restore_command in recovery.conf files
# It is intended to be used in restore_command in recovery configuration files
# of PostgreSQL standby servers. Supports parallel fetching and
# protects against SSH failures.
#
......@@ -38,8 +38,7 @@ except ImportError:
raise SystemExit("Missing required python module: argparse")
DEFAULT_USER = 'barman'
# TODO: make this generic
SPOOL_DIR = '/var/tmp/walrestore'
DEFAULT_SPOOL_DIR = '/var/tmp/walrestore'
# The string_types list is used to identify strings
# in a consistent way between python 2 and 3
......@@ -120,7 +119,7 @@ def spawn_additional_process(config, additional_files):
"""
processes = []
for wal_name in additional_files:
spool_file_name = os.path.join(SPOOL_DIR, wal_name)
spool_file_name = os.path.join(config.spool_dir, wal_name)
try:
# Spawn a process and write the output in the spool dir
process = RemoteGetWal(config, wal_name, spool_file_name)
......@@ -149,10 +148,11 @@ def peek_additional_files(config):
# Make sure the SPOOL_DIR exists
try:
if not os.path.exists(SPOOL_DIR):
os.mkdir(SPOOL_DIR)
if not os.path.exists(config.spool_dir):
os.mkdir(config.spool_dir)
except EnvironmentError as e:
exit_with_error("Cannot create '%s' directory: %s" % (SPOOL_DIR, e))
exit_with_error("Cannot create '%s' directory: %s" %
(config.spool_dir, e))
# Retrieve the list of files from remote
additional_files = execute_peek(config)
......@@ -232,7 +232,7 @@ def try_deliver_from_spool(config, dest_file):
:param argparse.Namespace config: the configuration from command line
:param dest_file: The destination file object
"""
spool_file = os.path.join(SPOOL_DIR, config.wal_name)
spool_file = os.path.join(config.spool_dir, config.wal_name)
# if the file is not present, give up
if not os.path.exists(spool_file):
......@@ -317,6 +317,12 @@ def parse_arguments(args=None):
"in parallel. "
"Defaults to 0 (disabled).",
)
parser.add_argument(
"--spool-dir", default=DEFAULT_SPOOL_DIR,
metavar="SPOOL_DIR",
help="Specifies spool directory for WAL files. Defaults to "
"'{0}'.".format(DEFAULT_SPOOL_DIR)
)
parser.add_argument(
'-z', '--gzip',
help='Transfer the WAL files compressed with gzip',
......
......@@ -26,7 +26,7 @@ from abc import ABCMeta
import psycopg2
from psycopg2.errorcodes import (DUPLICATE_OBJECT, OBJECT_IN_USE,
UNDEFINED_OBJECT)
from psycopg2.extensions import STATUS_IN_TRANSACTION
from psycopg2.extensions import STATUS_IN_TRANSACTION, STATUS_READY
from psycopg2.extras import DictCursor, NamedTupleCursor
from barman.exceptions import (ConninfoException, PostgresAppNameError,
......@@ -172,7 +172,9 @@ class PostgreSQL(with_metaclass(ABCMeta, RemoteStatusMixin)):
# Check if the connection works by running 'SELECT 1'
cursor = None
initial_status = None
try:
initial_status = self._conn.status
cursor = self._conn.cursor()
cursor.execute(self.CHECK_QUERY)
except psycopg2.DatabaseError:
......@@ -185,6 +187,10 @@ class PostgreSQL(with_metaclass(ABCMeta, RemoteStatusMixin)):
return False
finally:
if cursor:
# Rollback if initial status was IDLE because the CHECK QUERY
# has started a new transaction.
if initial_status == STATUS_READY:
self._conn.rollback()
cursor.close()
return True
......@@ -659,6 +665,30 @@ class PostgreSQLConnection(PostgreSQL):
force_str(e).strip())
return None
@property
def checkpoint_timeout(self):
"""
Retrieve the checkpoint_timeout setting in PostgreSQL
:return: The checkpoint timeout (in seconds)
"""
try:
cur = self._cursor(cursor_factory=DictCursor)
# We can't use the `get_setting` method here, because it
# uses `SHOW`, returning a human-readable value such as "5min",
# while we prefer a raw value such as 300.
cur.execute("SELECT setting "
"FROM pg_settings "
"WHERE name='checkpoint_timeout'")
result = cur.fetchone()
checkpoint_timeout = int(result[0])
return checkpoint_timeout
except ValueError as e:
_logger.error("Error retrieving checkpoint_timeout: %s",
force_str(e).strip())
return None
def get_archiver_stats(self):
"""
This method gathers statistics from pg_stat_archiver.
......@@ -761,6 +791,8 @@ class PostgreSQLConnection(PostgreSQL):
result['current_xlog'] = self.current_xlog_file_name
result['current_size'] = self.current_size
result['archive_timeout'] = self.archive_timeout
result['checkpoint_timeout'] = self.checkpoint_timeout
result['xlog_segment_size'] = self.xlog_segment_size
result.update(self.get_configuration_files())
......@@ -1253,7 +1285,8 @@ class PostgreSQLConnection(PostgreSQL):
# Look for replication slot name
from_repslot = "LEFT JOIN pg_replication_slots rs " \
"ON (r.pid = rs.active_pid) "
where_clauses += ["rs.slot_type = 'physical'"]
where_clauses += ["(rs.slot_type IS NULL OR "
"rs.slot_type = 'physical')"]
elif self.server_version >= 90500:
# PostgreSQL 9.5/9.6
what = "pid, " \
......@@ -1276,7 +1309,8 @@ class PostgreSQLConnection(PostgreSQL):
# Look for replication slot name
from_repslot = "LEFT JOIN pg_replication_slots rs " \
"ON (r.pid = rs.active_pid) "
where_clauses += ["rs.slot_type = 'physical'"]
where_clauses += ["(rs.slot_type IS NULL OR "
"rs.slot_type = 'physical')"]
elif self.server_version >= 90400:
# PostgreSQL 9.4
what = "pid, " \
......
......@@ -504,9 +504,14 @@ class Server(RemoteStatusMixin):
# NOTE: This check needs to be only visible if it fails
if xlogdb_empty:
check_strategy.result(
self.config.name, False,
hint='please make sure WAL shipping is setup')
# Skip the error if we have a terminated backup
# with status WAITING_FOR_WALS.
# TODO: Improve this check
backup_id = self.get_last_backup_id([BackupInfo.WAITING_FOR_WALS])
if not backup_id:
check_strategy.result(
self.config.name, False,
hint='please make sure WAL shipping is setup')
# Check the number of wals in the incoming directory
self._check_wal_queue(check_strategy,
......@@ -1348,6 +1353,7 @@ class Server(RemoteStatusMixin):
:kwparam str|None target_tli: the target timeline
:kwparam str|None target_time: the target time
:kwparam str|None target_xid: the target xid
:kwparam str|None target_lsn: the target LSN
:kwparam str|None target_name: the target name created previously with
pg_create_restore_point() function call
:kwparam bool|None target_immediate: end recovery as soon as
......@@ -2490,8 +2496,7 @@ class Server(RemoteStatusMixin):
sync_status['wals'] = wals
sync_status['version'] = barman.__version__
sync_status['config'] = self.config
output.info(json.dumps(sync_status, cls=BarmanEncoder, indent=4),
log=False)
json.dump(sync_status, sys.stdout, cls=BarmanEncoder, indent=4)
def sync_cron(self):
"""
......
......@@ -19,4 +19,4 @@
This module contains the current Barman version.
'''
__version__ = '2.8'
__version__ = '2.9'
......@@ -35,6 +35,7 @@ from barman.hooks import HookScriptRunner, RetryHookScriptRunner
from barman.infofile import WalFileInfo
from barman.remote_status import RemoteStatusMixin
from barman.utils import fsync_dir, fsync_file, mkpath, with_metaclass
from barman.xlog import is_partial_file
_logger = logging.getLogger(__name__)
......@@ -712,6 +713,7 @@ class StreamingWalArchiver(WalArchiver):
'PostgreSQL server version')
# Execute sanity check on replication slot usage
postgres_status = self.server.postgres.get_remote_status()
if self.config.slot_name:
# Check if slots are supported
if not remote_status['pg_receivexlog_supports_slots']:
......@@ -720,7 +722,6 @@ class StreamingWalArchiver(WalArchiver):
'(9.4 or higher is required)' %
self.server.streaming.server_txt_version)
# Check if the required slot exists
postgres_status = self.server.postgres.get_remote_status()
if postgres_status['replication_slot'] is None:
raise ArchiverFailure(
"replication slot '%s' doesn't exist. "
......@@ -733,6 +734,10 @@ class StreamingWalArchiver(WalArchiver):
"replication slot '%s' is already in use" %
(self.config.slot_name,))
# Check the size of the .partial WAL file and truncate it if needed
self._truncate_partial_file_if_needed(
postgres_status['xlog_segment_size'])
# Make sure we are not wasting precious PostgreSQL resources
self.server.close()
......@@ -784,6 +789,40 @@ class StreamingWalArchiver(WalArchiver):
output.info("Removing status file %s" % partial)
os.unlink(partial)
def _truncate_partial_file_if_needed(self, xlog_segment_size):
"""
Truncate the .partial WAL file if its size is neither 0 nor xlog_segment_size
:param int xlog_segment_size:
"""
# Retrieve the partial list (only one is expected)
partial_files = glob(os.path.join(
self.config.streaming_wals_directory, '*.partial'))
# Take the last partial file, ignoring wrongly formatted file names
last_partial = None
for partial in partial_files:
if not is_partial_file(partial):
continue
if not last_partial or partial > last_partial:
last_partial = partial
# Skip further work if there is no good partial file
if not last_partial:
return
# If size is either 0 or wal_segment_size everything is fine...
partial_size = os.path.getsize(last_partial)
if partial_size == 0 or partial_size == xlog_segment_size:
return
# otherwise truncate the file to be empty. This is safe because
# pg_receivewal pads the file to the full size before start writing.
output.info("Truncating partial file %s that has wrong size %s "
"while %s was expected." %
(last_partial, partial_size, xlog_segment_size))
open(last_partial, 'wb').close()
def get_next_batch(self):
"""
Returns the next batch of WAL files that have been archived via
......@@ -918,7 +957,8 @@ class StreamingWalArchiver(WalArchiver):
# if `synchronous_standby_names` is configured and contains
# the value of `streaming_archiver_name`
streaming_archiver_name = self.config.streaming_archiver_name
synchronous = (syncnames and streaming_archiver_name in syncnames)
synchronous = (syncnames and (
'*' in syncnames or streaming_archiver_name in syncnames))
_logger.debug('Synchronous WAL streaming for %s: %s',
streaming_archiver_name,
synchronous)
......
.\" Automatically generated by Pandoc 2.7.2
.\" Automatically generated by Pandoc 2.7.3
.\"
.TH "BARMAN-WAL-ARCHIVE" "1" "May 17, 2019" "Barman User manuals" "Version 2.8"
.TH "BARMAN-WAL-ARCHIVE" "1" "August 1, 2019" "Barman User manuals" "Version 2.9"
.hy
.SH NAME
.PP
......
% BARMAN-WAL-ARCHIVE(1) Barman User manuals | Version 2.8
% BARMAN-WAL-ARCHIVE(1) Barman User manuals | Version 2.9
% 2ndQuadrant <http://www.2ndQuadrant.com>
% May 17, 2019
% August 1, 2019
# NAME
......
.\" Automatically generated by Pandoc 2.7.2
.\" Automatically generated by Pandoc 2.7.3
.\"
.TH "BARMAN-WAL-RESTORE" "1" "May 17, 2019" "Barman User manuals" "Version 2.8"
.TH "BARMAN-WAL-RESTORE" "1" "August 1, 2019" "Barman User manuals" "Version 2.9"
.hy
.SH NAME
.PP
......
% BARMAN-WAL-RESTORE(1) Barman User manuals | Version 2.8
% BARMAN-WAL-RESTORE(1) Barman User manuals | Version 2.9
% 2ndQuadrant <http://www.2ndQuadrant.com>
% May 17, 2019
% August 1, 2019
# NAME
......
.\" Automatically generated by Pandoc 2.7.2
.\" Automatically generated by Pandoc 2.7.3
.\"
.TH "BARMAN" "1" "May 17, 2019" "Barman User manuals" "Version 2.8"
.TH "BARMAN" "1" "August 1, 2019" "Barman User manuals" "Version 2.9"
.hy
.SH NAME
.PP
......@@ -16,21 +16,27 @@ Barman can perform remote backups of multiple servers in business
critical environments and helps DBAs during the recovery phase.
.SH OPTIONS
.TP
.B -v, --version
Show program version number and exit.
.TP
.B -q, --quiet
Do not output anything.
Useful for cron scripts.
.TP
.B -h, --help
Show a help message and exit.
.TP
.B -v, --version
Show program version number and exit.
.TP
.B -c \f[I]CONFIG\f[R], --config \f[I]CONFIG\f[R]
Use the specified configuration file.
.TP
.B --color \f[I]{never,always,auto}\f[R], --colour \f[I]{never,always,auto}\f[R]
Whether to use colors in the output (default: \f[I]auto\f[R])
.TP
.B -q, --quiet
Do not output anything.
Useful for cron scripts.
.TP
.B -d, --debug
debug output (default: False)
.TP
.B -f {json,console}, --format {json,console}
output format (default: \[aq]console\[aq])
.SH COMMANDS
.PP
Important: every command has a help option
......@@ -94,6 +100,12 @@ present in the configuration file.
Number of parallel workers to copy files during backup.
Overrides value of the parameter \f[C]parallel_jobs\f[R], if present in
the configuration file.
.TP
.B --bwlimit KBPS
maximum transfer rate in kilobytes per second.