Node fails to join the cluster when using SST method xtrabackup (Galera)

Tags: load-balancing, mariadb, galera

It looks like the node is joining the cluster and then failing... I have tried both rsync and xtrabackup, but it fails during the state transfer. It feels like I'm missing something really simple that I just can't put my finger on. Any help would be greatly appreciated.

More information about the nodes

Master - 10.XXX.XXX.161
Node 1 - 10.XXX.XXX.160

Installed packages: MariaDB-compat, MariaDB-common, MariaDB-devel, MariaDB-shared, MariaDB-client, MariaDB-test, MariaDB-Galera-server (v5.5.29-1), galera (v23.2.4-1.rhel6), percona-xtrabackup (v2.1.6-702.rhel6)

Node 1 configuration

[mysqld]
wsrep_cluster_address = gcomm://10.XXX.XXX.161
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_provider_options = gcache.size=4G; gcache.page_size=1G
wsrep_cluster_name = galera_cluster
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
wsrep_sst_method = xtrabackup
wsrep_sst_auth = root:rootpassword
wsrep_node_name=1

Master configuration

[mysqld]
wsrep_cluster_address = gcomm://
wsrep_provider = /usr/lib64/galera/libgalera_smm.so
wsrep_provider_options = gcache.size=4G; gcache.page_size=1G
wsrep_cluster_name = galera_cluster
default_storage_engine = InnoDB
innodb_autoinc_lock_mode = 2
innodb_locks_unsafe_for_binlog = 1
wsrep_sst_method = rsync
wsrep_slave_threads = 4
wsrep_sst_auth = root:rootpassword
wsrep_node_name = 2

Node 1 log file

131203 16:09:03 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
131203 16:09:03 mysqld_safe WSREP: Running position recovery with --log_error=/tmp/tmp.f2EedjRjbQ
131203 16:09:08 mysqld_safe WSREP: Recovered position 359350ee-5c63-11e3-0800-6673d15135cd:2188
131203 16:09:08 [Note] WSREP: wsrep_start_position var submitted: '359350ee-5c63-11e3-0800-6673d15135cd:2188'
131203 16:09:08 [Note] WSREP: Read nil XID from storage engines, skipping position init
131203 16:09:08 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
131203 16:09:08 [Note] WSREP: wsrep_load(): Galera 23.2.4(r147) by Codership Oy <info@codership.com> loaded succesfully.
131203 16:09:08 [Note] WSREP: Found saved state: 00000000-0000-0000-0000-000000000000:-1
131203 16:09:08 [Note] WSREP: Reusing existing '/var/lib/mysql//galera.cache'.
131203 16:09:08 [Note] WSREP: Passing config to GCS: base_host = 10.XXX.XXX.160; base_port = 4567; cert.log_conflicts = no; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 1G; gcache.size = 4G; gcs.fc_debug = 0; gcs.fc_factor = 1; gcs.fc_limit = 16; gcs.fc_master_slave = NO; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = NO; replicator.causal_read_timeout = PT30S; replicator.commit_order = 3
131203 16:09:08 [Note] WSREP: Assign initial position for certification: -1, protocol version: -1
131203 16:09:08 [Note] WSREP: wsrep_sst_grab()
131203 16:09:08 [Note] WSREP: Start replication
131203 16:09:08 [Note] WSREP: Setting initial position to 00000000-0000-0000-0000-000000000000:-1
131203 16:09:08 [Note] WSREP: protonet asio version 0
131203 16:09:08 [Note] WSREP: backend: asio
131203 16:09:08 [Note] WSREP: GMCast version 0
131203 16:09:08 [Note] WSREP: (8814b4ba-5c67-11e3-0800-91035d554a96, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
131203 16:09:08 [Note] WSREP: (8814b4ba-5c67-11e3-0800-91035d554a96, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
131203 16:09:08 [Note] WSREP: EVS version 0
131203 16:09:08 [Note] WSREP: PC version 0
131203 16:09:08 [Note] WSREP: gcomm: connecting to group 'galera_cluster', peer '10.XXX.XXX.161:'
131203 16:09:09 [Note] WSREP: declaring 7a9a87e8-5c67-11e3-0800-8cb6cba8f65a stable
131203 16:09:09 [Note] WSREP: Node 7a9a87e8-5c67-11e3-0800-8cb6cba8f65a state prim
131203 16:09:09 [Note] WSREP: view(view_id(PRIM,7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,2) memb {
     7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,
     8814b4ba-5c67-11e3-0800-91035d554a96,
} joined {
} left {
} partitioned {
})
131203 16:09:09 [Note] WSREP: gcomm: connected
131203 16:09:09 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
131203 16:09:09 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
131203 16:09:09 [Note] WSREP: Opened channel 'galera_cluster'
131203 16:09:09 [Note] WSREP: New COMPONENT: primary = yes, bootstrap = no, my_idx = 1, memb_num = 2
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: Waiting for state UUID.
131203 16:09:09 [Note] WSREP: Waiting for SST to complete.
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: sent state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: got state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515 from 0 (2)
131203 16:09:09 [Note] WSREP: STATE EXCHANGE: got state msg: 8861cdd5-5c67-11e3-0800-cc70fcc5f515 from 1 (1)
131203 16:09:09 [Note] WSREP: Quorum results:
     version    = 2,
     component  = PRIMARY,
     conf_id    = 1,
     members    = 1/2 (joined/total),
     act_id     = 2521,
     last_appl. = -1,
     protocols  = 0/4/2 (gcs/repl/appl),
     group UUID = 359350ee-5c63-11e3-0800-6673d15135cd
131203 16:09:09 [Note] WSREP: Flow-control interval: [23, 23]
131203 16:09:09 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 2521)
131203 16:09:09 [Note] WSREP: State transfer required:
     Group state: 359350ee-5c63-11e3-0800-6673d15135cd:2521
     Local state: 00000000-0000-0000-0000-000000000000:-1
131203 16:09:09 [Note] WSREP: New cluster view: global state: 359350ee-5c63-11e3-0800-6673d15135cd:2521, view# 2: Primary, number of nodes: 2, my index: 1, protocol version 2
131203 16:09:09 [Warning] WSREP: Gap in state sequence. Need state transfer.
131203 16:09:11 [Note] WSREP: Running: 'wsrep_sst_xtrabackup --role 'joiner' --address '10.XXX.XXX.160' --auth 'root:rootpassword' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '13175''
131203 16:09:11 [Note] WSREP: Prepared SST request: xtrabackup|10.162.143.160:4444/xtrabackup_sst
131203 16:09:11 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
131203 16:09:11 [Note] WSREP: Assign initial position for certification: 2521, protocol version: 2
131203 16:09:11 [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (359350ee-5c63-11e3-0800-6673d15135cd): 1 (Operation not permitted)
      at galera/src/replicator_str.cpp:prepare_for_IST():442. IST will be unavailable.
131203 16:09:11 [Note] WSREP: Node 1 (1) requested state transfer from '*any*'. Selected 0 (2)(SYNCED) as donor.
131203 16:09:11 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 2525)
131203 16:09:11 [Note] WSREP: Requesting state transfer: success, donor: 0
tar: dbexport/db.opt: Cannot open: Permission denied
tar: Exiting with failure status due to previous errors
131203 16:10:22 [Note] WSREP: 0 (2): State transfer to 1 (1) complete.
131203 16:10:22 [Note] WSREP: Member 0 (2) synced with group.
WSREP_SST: [ERROR] Error while getting st data from donor node:  0, 2 (20131203 16:10:22.379)
131203 16:10:22 [ERROR] WSREP: Process completed with error: wsrep_sst_xtrabackup --role 'joiner' --address '10.XXX.XXX.160' --auth 'root:rootpassword' --datadir '/var/lib/mysql/' --defaults-file '/etc/my.cnf' --parent '13175': 32 (Broken pipe)
131203 16:10:22 [ERROR] WSREP: Failed to read uuid:seqno from joiner script.
131203 16:10:22 [ERROR] WSREP: SST failed: 32 (Broken pipe)
131203 16:10:22 [ERROR] Aborting
131203 16:10:24 [Note] WSREP: Closing send monitor...
131203 16:10:24 [Note] WSREP: Closed send monitor.
131203 16:10:24 [Note] WSREP: gcomm: terminating thread
131203 16:10:24 [Note] WSREP: gcomm: joining thread
131203 16:10:24 [Note] WSREP: gcomm: closing backend
131203 16:10:25 [Note] WSREP: view(view_id(NON_PRIM,7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,2) memb {
     8814b4ba-5c67-11e3-0800-91035d554a96,
} joined {
} left {
} partitioned {
     7a9a87e8-5c67-11e3-0800-8cb6cba8f65a,
})
131203 16:10:25 [Note] WSREP: view((empty))
131203 16:10:25 [Note] WSREP: gcomm: closed
131203 16:10:25 [Note] WSREP: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 1
131203 16:10:25 [Note] WSREP: Flow-control interval: [16, 16]
131203 16:10:25 [Note] WSREP: Received NON-PRIMARY.
131203 16:10:25 [Note] WSREP: Shifting JOINER -> OPEN (TO: 2607)
131203 16:10:25 [Note] WSREP: Received self-leave message.
131203 16:10:25 [Note] WSREP: Flow-control interval: [0, 0]
131203 16:10:25 [Note] WSREP: Received SELF-LEAVE. Closing connection.
131203 16:10:25 [Note] WSREP: Shifting OPEN -> CLOSED (TO: 2607)
131203 16:10:25 [Note] WSREP: RECV thread exiting 0: Success
131203 16:10:25 [Note] WSREP: recv_thread() joined.
131203 16:10:25 [Note] WSREP: Closing slave action queue.
131203 16:10:25 [Note] WSREP: Service disconnected.
131203 16:10:25 [Note] WSREP: rollbacker thread exiting
131203 16:10:26 [Note] WSREP: Some threads may fail to exit.
131203 16:10:26 [Note] /usr/sbin/mysqld: Shutdown complete
Error in my_thread_global_end(): 2 threads didn't exit
131203 16:10:31 mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended

The problem is that there was a database backup directory (dbexport) inside MariaDB's data directory (probably /var/lib/mysql/). When an SST is performed, the provider scans the data directory for files to send. It saw that directory and assumed it was a database, because directories in the data directory are supposed to be databases. Removing the backup directory fixed the issue. As a best practice, don't change anything under /var/lib/; programs usually keep their data files there, and messing with them can cause problems like this one.
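For example, a minimal cleanup sketch for the joining node, assuming the paths from the question, the standard MariaDB init script name, and an arbitrary destination for the backups:

service mysql stop
# Move the backup directory out of the datadir; only database directories and
# MariaDB's own files should live under /var/lib/mysql.
mv /var/lib/mysql/dbexport /root/dbexport
service mysql start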

With the main problem solved, a new message was noticed in the log:

[Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (359350ee-5c63-11e3-0800-6673d15135cd): 1 (Operation not permitted) at galera/src/replicator_str.cpp:prepare_for_IST():442. IST will be unavailable.
This message is normal. When a node joins a Galera cluster, it will try to perform an IST (incremental state transfer) rather than a full SST (state snapshot transfer). If the node was previously part of the cluster, and the gap between the state it left at and the cluster's current state is small enough, IST is available and only the difference between the node's state and the cluster's state is transferred. This is much faster than transferring all of the data. If the node was part of the cluster before but left long ago, it will need a full SST. In this case the joining node's state UUID is 00000000-0000-0000-0000-000000000000, which essentially means it is brand new to the cluster. I run a MariaDB/Galera cluster and this message annoys me every time IST is unavailable; it would be nice if it were reworded and not logged as a warning. I don't know why "Operation not permitted" shows up there, but it is nothing to worry about.
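A quick way to check what local state a node will present when joining is to look at grastate.dat in its data directory (a standard Galera file; the path is the datadir from the question):

cat /var/lib/mysql/grastate.dat
# A node with no usable local state reports the all-zero UUID and seqno -1,
# which matches the "Local state" line in the log above:
#   uuid:    00000000-0000-0000-0000-000000000000
#   seqno:   -1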


Also, it is recommended to run an odd number of nodes to prevent split-brain situations. If possible, you should add another MariaDB server to the cluster or, if you can't, run garbd, which acts as a member of the cluster without being a database server. It lets you have an odd number of nodes without needing another database server.
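A minimal garbd invocation might look like the sketch below, reusing the cluster name and (masked) addresses from the question; --group, --address and --daemon are garbd's standard options:

garbd --group galera_cluster \
      --address "gcomm://10.XXX.XXX.161,10.XXX.XXX.160" \
      --daemon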

In my case, swapping the order of the primary and secondary addresses in the cluster address fixed the problem. Before, on db1:

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="name"
wsrep_cluster_address="gcomm://37.x.x.104,37.x.x.117"
wsrep_sst_method=rsync
wsrep_node_address="37.x.x.104"
wsrep_node_name="db1"

pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
On db2:

[mysqld]
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
query_cache_size=0
query_cache_type=0
bind-address=0.0.0.0

# Galera Provider Configuration
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_name="name"
wsrep_cluster_address="gcomm://37.x.x.104,37.x.x.117"
wsrep_sst_method=rsync
wsrep_node_address="37.x.x.104"
wsrep_node_name="db1"

pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M
I changed

wsrep_cluster_address="gcomm://37.x.x.104,37.x.x.117"

to

wsrep_cluster_address="gcomm://37.x.x.117,37.x.x.104"

and wsrep_node_address from "37.x.x.104" to "37.x.x.117", and the cluster came up.
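Put differently, a sketch of the relevant db2 lines after that change (values taken from this answer; the rest of the posted config left as-is):

wsrep_cluster_address="gcomm://37.x.x.117,37.x.x.104"
wsrep_node_address="37.x.x.117"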

Comments

The problem is the "tar: dbexport/db.opt: Cannot open: Permission denied" line. Is dbexport a database on your server? What are the permissions on the dbexport directory? If it exists on node1 but not on the master, try emptying /var/lib/mysql before attempting to join the cluster.

That folder contains my daily backups... it actually exists on both of my nodes... and I can't find any file named db.opt, so I don't quite understand why Galera is trying to read a file that doesn't exist, or why it is looking in that directory at all.

Try removing the directory and see if that fixes it. I can't find the link, but I'm fairly sure I've seen someone with a similar problem caused by non-MySQL files in the server's data directory. I think the server sees the directory, assumes it represents a database, and looks internally for the files it expects to find in a database directory.

I removed the directory from the joining node and it worked... but it gave me this unsettling warning: [Warning] WSREP: Failed to prepare for incremental state transfer: Local state UUID (00000000-0000-0000-0000-000000000000) does not match group state UUID (359350ee-5c63-11e3-0800-6673d15135cd): 1 (Operation not permitted) at galera/src/replicator_str.cpp:prepare_for_IST():442. IST will be unavailable.

In my case, I had specified a wsrep_sst_donor node that was not yet part of the cluster. Once I left it blank, the SST kicked off.

Thanks for the great explanation... as soon as I have enough rep, I'll give you a +1.
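For reference, the donor selection mentioned in that comment is controlled by wsrep_sst_donor. A sketch of the two options, using the master's wsrep_node_name from the question:

[mysqld]
# Either omit wsrep_sst_donor entirely so any SYNCED node can be chosen as the
# SST donor, or name a node that is already part of the cluster:
wsrep_sst_donor = 2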
wsrep_cluster_address="gcomm://37.x.x.104,37.x.x.117"
wsrep_cluster_address="gcomm://37.x.x.117,37.x.x.104"
wsrep_node_address="37.x.x.**104**" to **117**