Faustin Lammler / mariadb-10.1 / Commits

Commit e07d9be7, authored Feb 06, 2019 by Otto Kekäläinen
parent cdc3e0d9

    New upstream version 10.1.38

Changes: 415
BUILD/check-cpu

@@ -40,6 +40,12 @@ check_compiler_cpu_flags () {
     cc_major=$1
     cc_minor=$2
     cc_patch=$3
+    if test -z "$cc_minor"; then cc_minor="0"; fi
+    if test -z "$cc_patch"; then cc_minor="0"; fi
     cc_comp=`expr $cc_major '*' 100 '+' $cc_minor`
   fi
CMakeLists.txt

@@ -241,8 +241,14 @@ ENDIF()
 MY_CHECK_AND_SET_COMPILER_FLAG(-ggdb3 DEBUG)

-OPTION(ENABLED_LOCAL_INFILE "If we should should enable LOAD DATA LOCAL by default" ${IF_WIN})
+SET(ENABLED_LOCAL_INFILE "AUTO" CACHE STRING "If we should should enable LOAD DATA LOCAL by default (OFF/ON/AUTO)")
+IF(ENABLED_LOCAL_INFILE MATCHES "^(0|FALSE)$")
+  SET(ENABLED_LOCAL_INFILE OFF)
+ELSEIF(ENABLED_LOCAL_INFILE MATCHES "^(1|TRUE)$")
+  SET(ENABLED_LOCAL_INFILE ON)
+ELSEIF(NOT ENABLED_LOCAL_INFILE MATCHES "^(ON|OFF|AUTO)$")
+  MESSAGE(FATAL_ERROR "ENABLED_LOCAL_INFILE must be one of OFF, ON, AUTO")
+ENDIF()

 OPTION(WITH_FAST_MUTEXES "Compile with fast mutexes" OFF)
 MARK_AS_ADVANCED(WITH_FAST_MUTEXES)
Docs/INFO_SRC

-commit: 8d834cd0f370b306f63c2364552d187fc388e59e
-date: 2018-10-31 23:48:29 +0200
-build-date: 2018-10-31 21:54:07 +0000
-short: 8d834cd
+commit: 4c490d6df63695dc97b2c808e59954e6877d3a51
+date: 2019-02-04 18:55:35 +0200
+build-date: 2019-02-04 17:02:14 +0000
+short: 4c490d6
 branch: HEAD

-MariaDB source 10.1.37
+MariaDB source 10.1.38
Docs/README-wsrep

@@ -60,7 +60,7 @@ CONTENTS:
 Wsrep API developed by Codership Oy is a modern generic (database-agnostic)
 replication API for transactional databases with a goal to make database
 replication/logging subsystem completely modular and pluggable. It is developed
-with flexibility and completeness in mind to satisfy broad range of modern
+with flexibility and completeness in mind to satisfy a broad range of modern
 replication scenarios. It is equally suitable for synchronous and asynchronous,
 master-slave and multi-master replication.

@@ -87,7 +87,7 @@ Upgrade from mysql-server-5.0 to mysql-wsrep is not supported yet, please
 upgrade to mysql-server-5.1 first.

 If you're installing over an existing mysql installation, mysql-server-wsrep
-will conflict with mysql-server-5.1 package, so remove it first:
+will conflict with the mysql-server-5.1 package, so remove it first:

 $ sudo apt-get remove mysql-server-5.1 mysql-server-core-5.1

@@ -105,7 +105,7 @@ For example, installation of required packages on Debian Lenny:
 $ sudo apt-get install psmisc
 $ sudo apt-get -t lenny-backports install mysql-client-5.1

-Now you should be able to install mysql-wsrep package:
+Now you should be able to install the mysql-wsrep package:

 $ sudo dpkg -i <mysql-server-wsrep DEB>

@@ -150,7 +150,7 @@ and can be ignored unless specific functionality is needed.
 3. FIRST TIME SETUP

 Unless you're upgrading an already installed mysql-wsrep package, you will need
-to set up a few things to prepare server for operation.
+to set up a few things to prepare the server for operation.

 3.1 CONFIGURATION FILES

@@ -162,7 +162,7 @@ to set up a few things to prepare server for operation.
 * Make sure system-wide my.cnf contains "!includedir /etc/mysql/conf.d/" line.

 * Edit /etc/mysql/conf.d/wsrep.cnf and set wsrep_provider option by specifying
-  a path to provider library. If you don't have a provider, leave it as it is.
+  a path to the provider library. If you don't have a provider, leave it as it is.

 * When a new node joins the cluster it'll have to receive a state snapshot from
   one of the peers. This requires a privileged MySQL account with access from

@@ -267,7 +267,7 @@ innodb_autoinc_lock_mode=2
        This is a required parameter. Without it INSERTs into tables with
        AUTO_INCREMENT column may fail.
        autoinc lock modes 0 and 1 can cause unresolved deadlock, and make
-       system unresponsive.
+       the system unresponsive.

 innodb_locks_unsafe_for_binlog=1
        This option is required for parallel applying.

@@ -299,14 +299,14 @@ wsrep_node_address=
        results (multiple network interfaces, NAT, etc.)
        If not explicitly overridden by wsrep_sst_receive_address, the <address> part
        will be used to listen for SST (see below). And the whole <address>[:port]
-       will be passed to wsrep provider to be used as a base address in its
+       will be passed to the wsrep provider to be used as a base address in its
        communications.

 wsrep_node_name=
        Human readable node name (for easier log reading only). Defaults to hostname.

 wsrep_slave_threads=1
-       Number of threads dedicated to processing of writesets from other nodes.
+       The number of threads dedicated to the processing of writesets from other nodes.
        For best performance should be few per CPU core.

 wsrep_dbug_option

@@ -326,7 +326,7 @@ wsrep_convert_LOCK_to_trx=0
 wsrep_retry_autocommit=1
        Retry autocommit queries and single statement transactions should they fail
        certification test. This is analogous to rescheduling an autocommit query
-       should it go into deadlock with other transactions in the database lock
+       should it go into a deadlock with other transactions in the database lock
        manager.

 wsrep_auto_increment_control=1

@@ -357,7 +357,7 @@ wsrep_OSU_method=TOI
        is not replicating and may be unable to process replication events (due to
        table lock). Once DDL operation is complete, the node will catch up and sync
        with the cluster to become fully operational again. The DDL statement or
-       its effects are not replicated, so it is user's responsibility to manually
+       its effects are not replicated, so it is the user's responsibility to manually
        perform this operation on each of the nodes.

 wsrep_forced_binlog_format=none

@@ -366,7 +366,7 @@ wsrep_forced_binlog_format=none
        format, regardless of what the client session has specified in binlog_format.
        Valid choices for wsrep_forced_binlog_format are: ROW, STATEMENT, MIXED and
        special value NONE, meaning that there is no forced binlog format in effect.
-       This variable was intruduced to support STATEMENT format replication during
+       This variable was introduced to support STATEMENT format replication during
        rolling schema upgrade processing. However, in most cases ROW replication
        is valid for asymmetrict schema replication.

@@ -412,8 +412,8 @@ wsrep_sst_auth=
 wsrep_sst_donor=
        A name of the node which should serve as state snapshot donor. This allows
-       to control which node will serve state snapshot request. By default the
-       most suitable node is chosen by wsrep provider. This is the same as given in
+       controlling which node will serve the state snapshot request. By default the
+       most suitable node is chosen by the wsrep provider. This is the same as given in
        wsrep_node_name.

@@ -423,7 +423,7 @@ wsrep_sst_donor=
 for the database. They change the database structure and are non-
 transactional.

-Release 22.3 brings a new method for performing schema upgrades. User can
+Release 22.3 brings a new method for performing schema upgrades. A user can
 now choose whether to use the traditional total order isolation or new
 rolling schema upgrade method. The OSU method choice is done by global
 parameter: 'wsrep_OSU_method'.

@@ -439,7 +439,7 @@ wsrep_sst_donor=
 6.2 Rolling Schema Upgrade (RSU)

-Rolling schema upgrade is new DDL processing method, where DDL will be
+Rolling schema upgrade is a new DDL processing method, where DDL will be
 processed locally for the node. The node is disconnected of the replication
 for the duration of the DDL processing, so that there is only DDL statement
 processing in the node and it does not block the rest of the cluster. When

@@ -468,7 +468,7 @@ wsrep_sst_donor=
 * LOCK/UNLOCK TABLES cannot be supported in multi-master setups.
 * lock functions (GET_LOCK(), RELEASE_LOCK()... )

-4) Query log cannot be directed to table. If you enable query logging,
+4) Query log cannot be directed to a table. If you enable query logging,
 you must forward the log to a file:
 log_output = FILE
 Use general_log and general_log_file to choose query logging and the

@@ -480,7 +480,7 @@ wsrep_sst_donor=
 6) Due to cluster level optimistic concurrency control, transaction issuing
 COMMIT may still be aborted at that stage. There can be two transactions
 writing to same rows and committing in separate cluster nodes, and only one
-of the them can successfully commit. The failing one will be aborted.
+of them can successfully commit. The failing one will be aborted.
 For cluster level aborts, MySQL/galera cluster gives back deadlock error
 code (Error: 1213 SQLSTATE: 40001 (ER_LOCK_DEADLOCK)).
VERSION

 MYSQL_VERSION_MAJOR=10
 MYSQL_VERSION_MINOR=1
-MYSQL_VERSION_PATCH=37
+MYSQL_VERSION_PATCH=38
client/CMakeLists.txt

@@ -41,7 +41,7 @@ ENDIF(UNIX)
 MYSQL_ADD_EXECUTABLE(mysqltest mysqltest.cc COMPONENT Test)
 SET_SOURCE_FILES_PROPERTIES(mysqltest.cc PROPERTIES COMPILE_FLAGS "-DTHREADS")
-TARGET_LINK_LIBRARIES(mysqltest mysqlclient pcre pcreposix)
+TARGET_LINK_LIBRARIES(mysqltest mysqlclient pcreposix pcre)
 SET_TARGET_PROPERTIES(mysqltest PROPERTIES ENABLE_EXPORTS TRUE)
client/mysqlbinlog.cc

@@ -72,6 +72,7 @@ ulong mysqld_net_retry_count = 10L;
 ulong open_files_limit;
 ulong opt_binlog_rows_event_max_size;
 ulonglong test_flags = 0;
+ulong opt_binlog_rows_event_max_encoded_size= MAX_MAX_ALLOWED_PACKET;
 static uint opt_protocol= 0;
 static FILE *result_file;
 static char *result_file_name= 0;

@@ -813,7 +814,12 @@ write_event_header_and_base64(Log_event *ev, FILE *result_file,
   /* Write header and base64 output to cache */
   ev->print_header(head, print_event_info, FALSE);
-  ev->print_base64(body, print_event_info, FALSE);
+  DBUG_ASSERT(print_event_info->base64_output_mode == BASE64_OUTPUT_ALWAYS);
+  ev->print_base64(body, print_event_info,
+                   print_event_info->base64_output_mode !=
+                   BASE64_OUTPUT_DECODE_ROWS);

   /* Read data from cache and write to result file */
   if (copy_event_cache_to_file_and_reinit(head, result_file) ||

@@ -852,7 +858,9 @@ static bool print_base64(PRINT_EVENT_INFO *print_event_info, Log_event *ev)
     return 1;
   }
   ev->print(result_file, print_event_info);
-  return print_event_info->head_cache.error == -1;
+  return print_event_info->head_cache.error == -1 ||
+         print_event_info->body_cache.error == -1;
 }

@@ -1472,6 +1480,15 @@ that may lead to an endless loop.",
   "This value must be a multiple of 256.",
   &opt_binlog_rows_event_max_size, &opt_binlog_rows_event_max_size, 0,
   GET_ULONG, REQUIRED_ARG, UINT_MAX, 256, ULONG_MAX, 0, 256, 0},
+#ifndef DBUG_OFF
+  {"debug-binlog-row-event-max-encoded-size", 0,
+   "The maximum size of base64-encoded rows-event in one BINLOG pseudo-query "
+   "instance. When the computed actual size exceeds the limit "
+   "the BINLOG's argument string is fragmented in two.",
+   &opt_binlog_rows_event_max_encoded_size,
+   &opt_binlog_rows_event_max_encoded_size, 0,
+   GET_ULONG, REQUIRED_ARG, UINT_MAX/4, 256, ULONG_MAX, 0, 256, 0},
+#endif
  {"verify-binlog-checksum", 'c', "Verify checksum binlog events.",
   (uchar**) &opt_verify_binlog_checksum,
   (uchar**) &opt_verify_binlog_checksum,
   0, GET_BOOL, NO_ARG, 0, 0, 0, 0, 0, 0},
client/mysqltest.cc

@@ -20,7 +20,7 @@
   Tool used for executing a .test file

   See the "MySQL Test framework manual" for more information
-  http://dev.mysql.com/doc/mysqltest/en/index.html
+  https://mariadb.com/kb/en/library/mysqltest/

   Please keep the test framework tools identical in all versions!

@@ -6075,7 +6075,6 @@ void do_connect(struct st_command *command)
 #endif
   if (opt_compress || con_compress)
     mysql_options(con_slot->mysql, MYSQL_OPT_COMPRESS, NullS);
-  mysql_options(con_slot->mysql, MYSQL_OPT_LOCAL_INFILE, 0);
   mysql_options(con_slot->mysql, MYSQL_SET_CHARSET_NAME,
                 charset_info->csname);
   if (opt_charsets_dir)

@@ -6175,6 +6174,11 @@ void do_connect(struct st_command *command)
     if (con_slot == next_con)
       next_con++; /* if we used the next_con slot, advance the pointer */
   }
+  else // Failed to connect. Free the memory.
+  {
+    mysql_close(con_slot->mysql);
+    con_slot->mysql= NULL;
+  }

   dynstr_free(&ds_connection_name);
   dynstr_free(&ds_host);

@@ -6547,8 +6551,6 @@ static inline bool is_escape_char(char c, char in_string)
   SYNOPSIS
   read_line
-  buf     buffer for the read line
-  size    size of the buffer i.e max size to read

   DESCRIPTION
   This function actually reads several lines and adds them to the

@@ -6566,10 +6568,15 @@ static inline bool is_escape_char(char c, char in_string)
 */

-int read_line(char *buf, int size)
+static char *read_command_buf= NULL;
+static size_t read_command_buflen= 0;
+static const size_t max_multibyte_length= 6;
+
+int read_line()
 {
   char c, last_quote=0, last_char= 0;
-  char *p= buf, *buf_end= buf + size - 1;
+  char *p= read_command_buf;
+  char *buf_end= read_command_buf + read_command_buflen - max_multibyte_length;
   int skip_char= 0;
   my_bool have_slash= FALSE;

@@ -6577,10 +6584,21 @@ int read_line(char *buf, int size)
             R_COMMENT, R_LINE_START} state= R_LINE_START;
   DBUG_ENTER("read_line");

   *p= 0;
   start_lineno= cur_file->lineno;
   DBUG_PRINT("info", ("Starting to read at lineno: %d", start_lineno));
-  for (; p < buf_end ;)
+  while (1)
   {
+    if (p >= buf_end)
+    {
+      my_ptrdiff_t off= p - read_command_buf;
+      read_command_buf= (char*)my_realloc(read_command_buf,
+                                          read_command_buflen*2, MYF(MY_FAE));
+      p= read_command_buf + off;
+      read_command_buflen*= 2;
+      buf_end= read_command_buf + read_command_buflen - max_multibyte_length;
+    }
     skip_char= 0;
     c= my_getc(cur_file->file);
     if (feof(cur_file->file))

@@ -6616,7 +6634,7 @@ int read_line(char *buf, int size)
       cur_file->lineno++;

       /* Convert cr/lf to lf */
-      if (p != buf && *(p-1) == '\r')
+      if (p != read_command_buf && *(p-1) == '\r')
         p--;
     }

@@ -6631,9 +6649,9 @@ int read_line(char *buf, int size)
     }
     else if ((c == '{' &&
               (!my_strnncoll_simple(charset_info, (const uchar*) "while", 5,
-                                    (uchar*) buf, MY_MIN(5, p - buf), 0) ||
+                                    (uchar*) read_command_buf, MY_MIN(5, p - read_command_buf), 0) ||
               !my_strnncoll_simple(charset_info, (const uchar*) "if", 2,
-                                   (uchar*) buf, MY_MIN(2, p - buf), 0))))
+                                   (uchar*) read_command_buf, MY_MIN(2, p - read_command_buf), 0))))
     {
       /* Only if and while commands can be terminated by { */
       *p++= c;

@@ -6767,8 +6785,6 @@ int read_line(char *buf, int size)
       *p++= c;
     }
   }
-  die("The input buffer is too small for this query.x\n" \
-      "check your query or increase MAX_QUERY and recompile");
   DBUG_RETURN(0);
 }

@@ -6913,12 +6929,8 @@ bool is_delimiter(const char* p)
   terminated by new line '\n' regardless how many "delimiter" it contain.
 */

-#define MAX_QUERY (256*1024*2) /* 256K -- a test in sp-big is >128K */
-static char read_command_buf[MAX_QUERY];
-
 int read_command(struct st_command** command_ptr)
 {
-  char *p= read_command_buf;
   struct st_command* command;
   DBUG_ENTER("read_command");

@@ -6934,8 +6946,7 @@ int read_command(struct st_command** command_ptr)
     die("Out of memory");
   command->type= Q_UNKNOWN;

   read_command_buf[0]= 0;
-  if (read_line(read_command_buf, sizeof(read_command_buf)))
+  if (read_line())
   {
     check_eol_junk(read_command_buf);
     DBUG_RETURN(1);

@@ -6944,6 +6955,7 @@ int read_command(struct st_command** command_ptr)
   if (opt_result_format_version == 1)
     convert_to_format_v1(read_command_buf);

+  char *p= read_command_buf;
   DBUG_PRINT("info", ("query: '%s'", read_command_buf));
   if (*p == '#')
   {

@@ -9095,6 +9107,8 @@ int main(int argc, char **argv)
   init_win_path_patterns();
 #endif

+  read_command_buf= (char*)my_malloc(read_command_buflen= 65536, MYF(MY_FAE));
+
   init_dynamic_string(&ds_res, "", 2048, 2048);
   init_alloc_root(&require_file_root, 1024, 1024, MYF(0));

@@ -9165,7 +9179,6 @@ int main(int argc, char **argv)
                 (void *) &opt_connect_timeout);
   if (opt_compress)
     mysql_options(con->mysql,MYSQL_OPT_COMPRESS,NullS);
-  mysql_options(con->mysql, MYSQL_OPT_LOCAL_INFILE, 0);
   mysql_options(con->mysql, MYSQL_SET_CHARSET_NAME,
                 charset_info->csname);
   if (opt_charsets_dir)
cmake/build_configurations/mysql_release.cmake

@@ -83,7 +83,6 @@ IF(FEATURE_SET)
   ENDIF()
 ENDIF()
-OPTION(ENABLED_LOCAL_INFILE "" ON)
 SET(WITH_INNODB_SNAPPY OFF CACHE STRING "")
 IF(WIN32)
   SET(WITH_LIBARCHIVE STATIC CACHE STRING "")
cmake/ssl.cmake

@@ -177,12 +177,20 @@ MACRO (MYSQL_CHECK_SSL)
     ENDIF()
     INCLUDE(CheckSymbolExists)
+    INCLUDE(CheckCSourceCompiles)
     SET(CMAKE_REQUIRED_INCLUDES ${OPENSSL_INCLUDE_DIR})
     CHECK_SYMBOL_EXISTS(SHA512_DIGEST_LENGTH "openssl/sha.h"
                         HAVE_SHA512_DIGEST_LENGTH)
+    CHECK_C_SOURCE_COMPILES("
+    #include <openssl/dh.h>
+    int main()
+    {
+      DH dh;
+      return sizeof(dh.version);
+    }" OLD_OPENSSL_API)
     SET(CMAKE_REQUIRED_INCLUDES)
     IF(OPENSSL_INCLUDE_DIR AND OPENSSL_LIBRARIES AND
-       OPENSSL_MAJOR_VERSION STRLESS "101" AND
+       OLD_OPENSSL_API AND
        CRYPTO_LIBRARY AND HAVE_SHA512_DIGEST_LENGTH)
      SET(SSL_SOURCES "")
cmake/wsrep.cmake

@@ -26,7 +26,7 @@ ENDIF()
 OPTION(WITH_WSREP
   "WSREP replication API (to use, e.g. Galera Replication library)"
   ${with_wsrep_default})

 # Set the patch version
-SET(WSREP_PATCH_VERSION "23")
+SET(WSREP_PATCH_VERSION "24")

 # Obtain wsrep API version
 FILE(STRINGS "${MySQL_SOURCE_DIR}/wsrep/wsrep_api.h" WSREP_API_VERSION
cmake/zlib.cmake

@@ -34,11 +34,6 @@ ENDMACRO()
 MACRO(MYSQL_CHECK_ZLIB_WITH_COMPRESS)
-  # For NDBCLUSTER: Use bundled zlib by default
-  IF(NOT WITH_ZLIB)
-    SET(WITH_ZLIB "bundled" CACHE STRING
-        "By default use bundled zlib on this platform")
-  ENDIF()
   IF(WITH_ZLIB STREQUAL "bundled")
     MYSQL_USE_BUNDLED_ZLIB()
   ELSE()
config.h.cmake

@@ -544,7 +544,11 @@
 /* MySQL features */
-#cmakedefine ENABLED_LOCAL_INFILE 1
+#define LOCAL_INFILE_MODE_OFF 0
+#define LOCAL_INFILE_MODE_ON 1
+#define LOCAL_INFILE_MODE_AUTO 2
+#define ENABLED_LOCAL_INFILE LOCAL_INFILE_MODE_@ENABLED_LOCAL_INFILE@
 #cmakedefine ENABLED_PROFILING 1
 #cmakedefine EXTRA_DEBUG 1
 #cmakedefine CYBOZU 1
extra/innochecksum.cc

@@ -522,7 +522,16 @@ is_page_corrupted(
        normal method. */

        if (is_encrypted && key_version != 0) {
                is_corrupted = !fil_space_verify_crypt_checksum(buf,
-                       page_size.is_compressed() ? page_size.physical() : 0,
-                       NULL, cur_page_num);
+                       page_size.is_compressed() ? page_size.physical() : 0);
+
+               if (is_corrupted && log_file) {
+                       fprintf(log_file,
+                               "Page " ULINTPF ":%llu may be corrupted;"
+                               " key_version=" ULINTPF "\n",
+                               space_id, cur_page_num,
+                               mach_read_from_4(
+                                       FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION
+                                       + buf));
+               }
        } else {
                is_corrupted = true;
        }
extra/mariabackup/fil_cur.cc

@@ -30,6 +30,8 @@ Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA
 #include <trx0sys.h>

 #include "fil_cur.h"
+#include "fil0crypt.h"
+#include "fil0pagecompress.h"
 #include "common.h"
 #include "read_filt.h"
 #include "xtrabackup.h"
@@ -219,7 +221,7 @@ xb_fil_cur_open(
        posix_fadvise(cursor->file, 0, 0, POSIX_FADV_SEQUENTIAL);

        /* Determine the page size */
-       zip_size = xb_get_zip_size(cursor->file);
+       zip_size = xb_get_zip_size(node);
        if (zip_size == ULINT_UNDEFINED) {
                xb_fil_cur_close(cursor);
                return(XB_FIL_CUR_SKIP);
@@ -263,6 +265,100 @@ xb_fil_cur_open(
        return(XB_FIL_CUR_SUCCESS);
 }

+static bool
+page_is_corrupted(const byte *page, ulint page_no,
+                  const xb_fil_cur_t *cursor, const fil_space_t *space)
+{
+       byte    tmp_frame[UNIV_PAGE_SIZE_MAX];
+       byte    tmp_page[UNIV_PAGE_SIZE_MAX];
+       ulint   page_type = mach_read_from_2(page + FIL_PAGE_TYPE);
+
+       /* We ignore the doublewrite buffer pages. */
+       if (cursor->space_id == TRX_SYS_SPACE
+           && page_no >= FSP_EXTENT_SIZE
+           && page_no < FSP_EXTENT_SIZE * 3) {
+               return false;
+       }
+
+       /* Validate page number. */
+       if (mach_read_from_4(page + FIL_PAGE_OFFSET) != page_no
+           && space->id != TRX_SYS_SPACE) {
+               /* On pages that are not all zero, the page number
+               must match. There may be a mismatch on tablespace ID,
+               because files may be renamed during backup. We disable
+               the page number check on the system tablespace, because
+               it may consist of multiple files, and here we count the
+               pages from the start of each file.
+               The first 38 and last 8 bytes are never encrypted. */
+               const ulint* p = reinterpret_cast<const ulint*>(page);
+               const ulint* const end = reinterpret_cast<const ulint*>(
+                       page + cursor->page_size);
+               do {
+                       if (*p++) {
+                               return true;
+                       }
+               } while (p != end);
+
+               /* Whole zero page is valid. */
+               return false;
+       }
+
+       /* Validate encrypted pages. The first page is never encrypted.
+       In the system tablespace, the first page would be written with
+       FIL_PAGE_FILE_FLUSH_LSN at shutdown, and if the LSN exceeds
+       4,294,967,295, the mach_read_from_4() below would wrongly
+       interpret the page as encrypted. We prevent that by checking
+       page_no first. */
+       if (page_no
+           && mach_read_from_4(page + FIL_PAGE_FILE_FLUSH_LSN_OR_KEY_VERSION)
+           && (opt_backup_encrypted
+               || (space->crypt_data
+                   && space->crypt_data->type != CRYPT_SCHEME_UNENCRYPTED))) {
+
+               if (!fil_space_verify_crypt_checksum(page, cursor->zip_size))
+                       return true;
+
+               /* Compressed encrypted pages need to be unencrypted
+               and uncompressed for verification. */
+               if (page_type != FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED
+                   && !opt_extended_validation)
+                       return false;
+
+               memcpy(tmp_page, page, cursor->page_size);
+
+               bool decrypted = false;
+               if (!space->crypt_data
+                   || space->crypt_data->type == CRYPT_SCHEME_UNENCRYPTED
+                   || !fil_space_decrypt(space, tmp_frame, tmp_page,
+                                         &decrypted)) {
+                       return true;
+               }
+
+               if (page_type != FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED) {
+                       return buf_page_is_corrupted(true, tmp_page,
+                                                    cursor->zip_size, space);
+               }
+       }
+
+       if (page_type == FIL_PAGE_PAGE_COMPRESSED) {
+               memcpy(tmp_page, page, cursor->page_size);
+       }
+
+       if (page_type == FIL_PAGE_PAGE_COMPRESSED
+           || page_type == FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED) {
+               ulint decomp = fil_page_decompress(tmp_frame, tmp_page);
+               page_type = mach_read_from_2(tmp_page + FIL_PAGE_TYPE);
+
+               return (!decomp
+                       || (decomp != srv_page_size && cursor->zip_size)
+                       || page_type == FIL_PAGE_PAGE_COMPRESSED
+                       || page_type == FIL_PAGE_PAGE_COMPRESSED_ENCRYPTED
+                       || buf_page_is_corrupted(true, tmp_page,
+                                                cursor->zip_size, space));
+       }
+
+       return buf_page_is_corrupted(true, page, cursor->zip_size, space);
+}
+
 /************************************************************************
 Reads and verifies the next block of pages from the source
 file. Positions the cursor after the last read non-corrupted page.
@@ -336,55 +432,41 @@ xb_fil_cur_read(
                return(XB_FIL_CUR_ERROR);
        }

-       fil_system_enter();
-       fil_space_t* space = fil_space_get_by_id(cursor->space_id);
-       fil_system_exit();
+       fil_space_t* space = fil_space_acquire_for_io(cursor->space_id);

        /* check pages for corruption and re-read if necessary. i.e. in case of
        partially written pages */
        for (page = cursor->buf, i = 0; i < npages;
             page += cursor->page_size, i++) {
-               ib_int64_t page_no = cursor->buf_page_no + i;
-               bool checksum_ok = fil_space_verify_crypt_checksum(
-                       page, cursor->zip_size, space, (ulint)page_no);
+               ulint page_no = cursor->buf_page_no + i;

-               if (!checksum_ok
-                   && buf_page_is_corrupted(true, page, cursor->zip_size,
-                                            space)) {
-                       if (cursor->is_system
-                           && page_no >= (ib_int64_t) FSP_EXTENT_SIZE
-                           && page_no < (ib_int64_t) FSP_EXTENT_SIZE * 3) {
-                               /* skip doublewrite buffer pages */
-                               xb_a(cursor->page_size == UNIV_PAGE_SIZE);
-                               msg("[%02u] mariabackup: "
-                                   "Page " UINT64PF " is a doublewrite buffer page, "
-                                   "skipping.\n", cursor->thread_n, page_no);
-                       } else {
-                               retry_count--;
-                               if (retry_count == 0) {
-                                       msg("[%02u] mariabackup: "
-                                           "Error: failed to read page after "
-                                           "10 retries. File %s seems to be "
-                                           "corrupted.\n", cursor->thread_n,
-                                           cursor->abs_path);
-                                       ret = XB_FIL_CUR_ERROR;
-                                       break;
-                               }
-                               msg("[%02u] mariabackup: "
-                                   "Database page corruption detected at page "
-                                   UINT64PF ", retrying...\n",
-                                   cursor->thread_n, page_no);
-                               os_thread_sleep(100000);
-                               goto read_retry;
-                       }
+               if (page_is_corrupted(page, page_no, cursor, space)) {
+                       retry_count--;
+                       if (retry_count == 0) {
+                               msg("[%02u] mariabackup: "
+                                   "Error: failed to read page after "
+                                   "10 retries. File %s seems to be "
+                                   "corrupted.\n", cursor->thread_n,
+                                   cursor->abs_path);
+                               ret = XB_FIL_CUR_ERROR;
+                               buf_page_print(page, cursor->page_size);
+                               break;
+                       }
+                       msg("[%02u] mariabackup: "
+                           "Database page corruption detected at page "
+                           ULINTPF ", retrying...\n",
+                           cursor->thread_n, page_no);
+                       os_thread_sleep(100000);
+                       goto read_retry;
                }
                cursor->buf_read += cursor->page_size;
                cursor->buf_npages++;
        }

        posix_fadvise(cursor->file, offset, to_read, POSIX_FADV_DONTNEED);
+       fil_space_release_for_io(space);

        return(ret);
 }
extra/mariabackup/xtrabackup.cc

@@ -206,6 +206,8 @@ char* log_ignored_opt = NULL;
 extern my_bool opt_use_ssl;
 my_bool opt_ssl_verify_server_cert;
+my_bool opt_extended_validation;
+my_bool opt_backup_encrypted;

 /* === metadata of backup === */
 #define XTRABACKUP_METADATA_FILENAME "xtrabackup_checkpoints"
@@ -248,7 +250,6 @@ static ulong innobase_log_block_size = 512;
 char* innobase_doublewrite_file = NULL;
 char* innobase_buffer_pool_filename = NULL;

-longlong innobase_buffer_pool_size = 8*1024*1024L;
 longlong innobase_log_file_size = 48*1024*1024L;

 /* The default values for the following char* start-up parameters
@@ -510,6 +511,8 @@ enum options_xtrabackup
   OPT_XTRA_DATABASES_FILE,
   OPT_XTRA_CREATE_IB_LOGFILE,
   OPT_XTRA_PARALLEL,
+  OPT_XTRA_EXTENDED_VALIDATION,
+  OPT_XTRA_BACKUP_ENCRYPTED,
   OPT_XTRA_STREAM,
   OPT_XTRA_COMPRESS,
   OPT_XTRA_COMPRESS_THREADS,
@@ -976,6 +979,22 @@ struct my_option xb_server_options[] =
   (G_PTR*) &xtrabackup_parallel, (G_PTR*) &xtrabackup_parallel, 0, GET_INT,
   REQUIRED_ARG, 1, 1, INT_MAX, 0, 0, 0},
+  {"extended_validation", OPT_XTRA_EXTENDED_VALIDATION,
+   "Enable extended validation for Innodb data pages during backup phase. "
+   "Will slow down backup considerably, in case encryption is used. "
+   "May fail if tables are created during the backup.",
+   (G_PTR*) &opt_extended_validation, (G_PTR*) &opt_extended_validation,
+   0, GET_BOOL, NO_ARG, FALSE, 0, 0, 0, 0, 0},
+  {"backup_encrypted", OPT_XTRA_BACKUP_ENCRYPTED,
+   "In --backup, assume that nonzero key_version implies that the page"
+   " is encrypted. Use --backup --skip-backup-encrypted to allow"
+   " copying unencrypted that were originally created before MySQL 5.1.48.",
+   (G_PTR*) &opt_backup_encrypted, (G_PTR*) &opt_backup_encrypted,
+   0, GET_BOOL, NO_ARG, TRUE, 0, 0, 0, 0, 0},
  {"log", OPT_LOG, "Ignored option for MySQL option compatibility",
   (G_PTR*) &log_ignored_opt, (G_PTR*) &log_ignored_opt, 0,
   GET_STR, OPT_ARG, 0, 0, 0, 0, 0, 0},
@@ -1003,11 +1022,6 @@ struct my_option xb_server_options[] =
   (G_PTR*) &srv_auto_extend_increment, (G_PTR*) &srv_auto_extend_increment,
   0, GET_ULONG, REQUIRED_ARG, 8L, 1L, 1000L, 0, 1L, 0},
-  {"innodb_buffer_pool_size", OPT_INNODB_BUFFER_POOL_SIZE,
-   "The size of the memory buffer InnoDB uses to cache data and indexes of its tables.",
-   (G_PTR*) &innobase_buffer_pool_size, (G_PTR*) &innobase_buffer_pool_size,
-   0, GET_LL, REQUIRED_ARG, 8*1024*1024L, 1024*1024L, LONGLONG_MAX, 0,
-   1024*1024L, 0},
  {"innodb_checksums", OPT_INNODB_CHECKSUMS, "Enable InnoDB checksums validation (enabled by default). \
 Disable with --skip-innodb-checksums.", (G_PTR*) &innobase_use_checksums,
  (G_PTR*) &innobase_use_checksums, 0, GET_BOOL, NO_ARG, 1, 0, 0, 0, 0, 0},
@@ -1214,11 +1228,23 @@ debug_sync_point(const char *name)
 #endif
 }

-static const char *xb_client_default_groups[]=
-       {"xtrabackup", "mariabackup", "client", 0, 0, 0};
+static const char *xb_client_default_groups[]= {
+       "xtrabackup", "mariabackup", "client", "client-server", "client-mariadb",
+       0, 0, 0
+};

-static const char *xb_server_default_groups[]=
-       {"xtrabackup", "mariabackup", "mysqld", 0, 0, 0};
+static const char *xb_server_default_groups[]= {
+       "xtrabackup", "mariabackup", "mysqld", "server", MYSQL_BASE_VERSION,
+       "mariadb", MARIADB_BASE_VERSION, "client-server",
+#ifdef WITH_WSREP
+       "galera",
+#endif
+       0, 0, 0
+};

 static void print_version(void)
 {
@@ -1514,13 +1540,6 @@ innodb_init_param(void)
                    " on 32-bit systems\n");
        }

-       if (innobase_buffer_pool_size > UINT_MAX32) {
-               msg("mariabackup: innobase_buffer_pool_size can't be "
-                   "over 4GB on 32-bit systems\n");
-               goto error;
-       }
-
        if (innobase_log_file_size > UINT_MAX32) {
                msg("mariabackup: innobase_log_file_size can't be "
                    "over 4GB on 32-bit systemsi\n");
@@ -1645,8 +1664,6 @@ innodb_init_param(void)
        /* We set srv_pool_size here in units of 1 kB. InnoDB internally
        changes the value so that it becomes the number of database pages. */

-       //srv_buf_pool_size = (ulint) innobase_buffer_pool_size;
        srv_buf_pool_size = (ulint) xtrabackup_use_memory;
        srv_mem_pool_size = (ulint) innobase_additional_mem_pool_size;
@@ -2300,7 +2317,7 @@ check_if_skip_table(
 Reads the space flags from a given data file and returns the compressed
 page size, or 0 if the space is not compressed. */
 ulint
-xb_get_zip_size(pfs_os_file_t file)
+xb_get_zip_size(fil_node_t *file)
 {
        byte    *buf;
        byte    *page;
@@ -2311,7 +2328,7 @@ xb_get_zip_size(pfs_os_file_t file)
        buf = static_cast<byte*>(ut_malloc(2 * UNIV_PAGE_SIZE));
        page = static_cast<byte*>(ut_align(buf, UNIV_PAGE_SIZE));

-       success = os_file_read(file, page, 0, UNIV_PAGE_SIZE);
+       success = os_file_read(file->handle, page, 0, UNIV_PAGE_SIZE);
        if (!success) {
                goto end;
        }
@@ -2319,6 +2336,17 @@ xb_get_zip_size(pfs_os_file_t file)
        space = mach_read_from_4(page + FIL_PAGE_ARCH_LOG_NO_OR_SPACE_ID);
        zip_size = (space == 0) ? 0 : dict_tf_get_zip_size(
                fsp_header_get_flags(page));
+
+       if (!file->space->crypt_data) {
+               fil_system_enter();
+               if (!file->space->crypt_data) {
+                       file->space->crypt_data = fil_space_read_crypt_data(
+                               space, page,
+                               fsp_header_get_crypt_offset(zip_size));
+               }
+               fil_system_exit();
+       }
+
 end:
        ut_free(buf);
@@ -5235,6 +5263,7 @@ xb_process_datadir(
                             path, NULL, fileinfo.name, data)) {
+                               os_file_closedir(dbdir);
                                return(FALSE);
                        }
                }
@@ -5295,6 +5324,7 @@ xb_process_datadir(
                             dbinfo.name, fileinfo.name, data)) {
+                               os_file_closedir(dbdir);
                                return(FALSE);
                        }
                }
@@ -6823,3 +6853,12 @@ int main(int argc, char **argv)
        exit(EXIT_SUCCESS);
 }
+
+#if defined (__SANITIZE_ADDRESS__) && defined (__linux__)
+/* Avoid LeakSanitizer's false positives. */
+const char* __asan_default_options()
+{
+       return "detect_leaks=0";
+}
+#endif
extra/mariabackup/xtrabackup.h

@@ -128,6 +128,8 @@ extern my_bool opt_noversioncheck;
 extern my_bool opt_no_backup_locks;
 extern my_bool opt_decompress;
 extern my_bool opt_remove_original;
+extern my_bool opt_extended_validation;
+extern my_bool opt_backup_encrypted;

 extern char *opt_incremental_history_name;
 extern char *opt_incremental_history_uuid;

@@ -184,7 +186,7 @@ void xb_data_files_close(void);
 /***********************************************************************
 Reads the space flags from a given data file and returns the compressed
 page size, or 0 if the space is not compressed. */
-ulint xb_get_zip_size(pfs_os_file_t file);
+ulint xb_get_zip_size(fil_node_t *file);

 /************************************************************************
 Checks if a table specified as a name in the form "database/name" (InnoDB 5.6)
include/my_global.h

@@ -1082,7 +1082,7 @@ typedef ulong myf; /* Type of MyFlags in my_funcs */
 static inline char *dlerror(void)
 {
   static char win_errormsg[2048];
-  FormatMessage(FORMAT_MESSAGE_FROM_SYSTEM, 0, GetLastError(), 0, win_errormsg, 2048, NULL);
+  FormatMessageA(FORMAT_MESSAGE_FROM_SYSTEM, 0, GetLastError(), 0, win_errormsg, 2048, NULL);
   return win_errormsg;
 }
include/my_sys.h

@@ -602,7 +602,9 @@ static inline size_t my_b_bytes_in_cache(const IO_CACHE *info)
   return *info->current_end - *info->current_pos;
 }

-int my_b_copy_to_file(IO_CACHE *cache, FILE *file);
+int my_b_copy_to_file(IO_CACHE *cache, FILE *file, size_t count);
+int my_b_copy_all_to_file(IO_CACHE *cache, FILE *file);
+
 my_off_t my_b_append_tell(IO_CACHE *info);
 my_off_t my_b_safe_tell(IO_CACHE *info); /* picks the correct tell() */

 int my_b_pread(IO_CACHE *info, uchar *Buffer, size_t Count, my_off_t pos);
include/my_valgrind.h

@@ -42,7 +42,7 @@
 # define MEM_CHECK_ADDRESSABLE(a,len) ((void) 0)
 # define MEM_CHECK_DEFINED(a,len) ((void) 0)
 #else
-# define MEM_UNDEFINED(a,len) ((void) 0)
+# define MEM_UNDEFINED(a,len) ((void) (a), (void) (len))
 # define MEM_NOACCESS(a,len) ((void) 0)
 # define MEM_CHECK_ADDRESSABLE(a,len) ((void) 0)
 # define MEM_CHECK_DEFINED(a,len) ((void) 0)

@@ -51,7 +51,7 @@
 #ifndef DBUG_OFF
 #define TRASH_FILL(A,B,C) do { const size_t trash_tmp= (B); MEM_UNDEFINED(A, trash_tmp); memset(A, C, trash_tmp); } while (0)
 #else
-#define TRASH_FILL(A,B,C) do { const size_t trash_tmp __attribute__((unused))= (B); MEM_UNDEFINED(A,trash_tmp); } while (0)
+#define TRASH_FILL(A,B,C) do { MEM_UNDEFINED((A), (B)); } while (0)
 #endif
 #define TRASH_ALLOC(A,B) do { TRASH_FILL(A,B,0xA5); MEM_UNDEFINED(A,B); } while(0)
 #define TRASH_FREE(A,B) do { TRASH_FILL(A,B,0x8F); MEM_NOACCESS(A,B); } while(0)