《OpenStack高可用集群(下册):部署与运维》
11.4.3 MariaDB Relational Database High-Availability Deployment
The database is one of the most critical components of an OpenStack high-availability cluster: in an OpenStack cloud, the relational database MySQL/MariaDB stores the objects users create and their state information. To date, MariaDB is the relational database most widely used by the OpenStack community; for more detail on how MariaDB works and how to configure it, see the discussion of clustered database systems in Chapter 7. This section focuses on deploying MariaDB for high availability in a Pacemaker cluster. The most widely adopted and mature HA scheme for MariaDB is MariaDB Galera Cluster, so this section shows how to configure a MariaDB Galera cluster inside Pacemaker to make the OpenStack database service highly available. The deployment can follow the steps below.
Install the MariaDB Galera cluster packages:
yum install -y mariadb-galera-server xinetd rsync
The mariadb-galera-server package ships with the Galera health-check command /usr/bin/clustercheck. To let HAProxy run health checks against Galera, first disable the HAProxy resource in the Pacemaker cluster, then configure MariaDB to allow the check:
pcs resource disable lb-haproxy
cat > /etc/sysconfig/clustercheck << EOF
MYSQL_USERNAME="clustercheck"
MYSQL_PASSWORD="$password"
MYSQL_HOST="localhost"
MYSQL_PORT="3306"
EOF
systemctl start mysqld.service
# For clustercheck to work, a clustercheck user must exist in the database
mysql -e "CREATE USER 'clustercheck'@'localhost' IDENTIFIED BY '${password}';"
systemctl stop mysqld.service
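The decision logic inside clustercheck can be sketched as a small shell function. This is an illustrative simplification, not the shipped script: here the wsrep_local_state value is passed as an argument, whereas the real script queries it with the MYSQL_* credentials configured above and also honors options such as a donor-allowed mode.

```shell
#!/bin/sh
# Simplified sketch of /usr/bin/clustercheck: map the node's
# wsrep_local_state to an HTTP response that HAProxy can evaluate.
# Real script: state=$(mysql -N -e "SHOW STATUS LIKE 'wsrep_local_state';")
galera_http_status() {
    state="$1"
    if [ "$state" = "4" ]; then   # 4 = Synced
        printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nGalera cluster node is synced.\n'
    else
        printf 'HTTP/1.1 503 Service Unavailable\r\nContent-Type: text/plain\r\n\r\nGalera cluster node is not synced.\n'
    fi
}

galera_http_status 4
```

Any state other than 4 (Synced) yields a 503, which HAProxy treats as a failed check and takes the node out of rotation.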
Before creating the MariaDB Galera cluster, configure the cluster file /etc/my.cnf.d/galera.cnf on every Galera node. This file sets parameters such as the Galera cluster name and the default storage engine; a reference configuration follows:
cat > /etc/my.cnf.d/galera.cnf << EOF
[mysqld]
skip-name-resolve=1
binlog_format=ROW
default-storage-engine=innodb
innodb_autoinc_lock_mode=2
innodb_locks_unsafe_for_binlog=1
query_cache_size=0
query_cache_type=0
bind_address=$bind_ip
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_name="galera_cluster"
wsrep_slave_threads=1
wsrep_certify_nonPK=1
wsrep_max_ws_rows=131072
wsrep_max_ws_size=1073741824
wsrep_debug=0
wsrep_convert_LOCK_to_trx=0
wsrep_retry_autocommit=1
wsrep_auto_increment_control=1
wsrep_drupal_282555_workaround=0
wsrep_causal_reads=0
wsrep_notify_cmd=
wsrep_sst_method=rsync
EOF
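Since bind_address differs per node, the file has to be rendered on each node with that node's own IP. A minimal sketch of rendering the per-node fragment (node names and IPs are examples taken from the HAProxy section, and output goes to a local directory here rather than /etc/my.cnf.d):

```shell
#!/bin/sh
# Illustrative: render a per-node galera.cnf fragment with each node's
# own bind address. Only the node-specific keys are shown; the shared
# wsrep settings from the reference configuration would follow them.
outdir=./galera-out
mkdir -p "$outdir"

for pair in controller1-vm:192.168.142.110 \
            controller2-vm:192.168.142.111 \
            controller3-vm:192.168.142.112; do
    node=${pair%%:*}   # text before the colon
    ip=${pair##*:}     # text after the colon
    cat > "$outdir/galera-$node.cnf" <<EOF
[mysqld]
bind_address=$ip
wsrep_cluster_name="galera_cluster"
wsrep_sst_method=rsync
EOF
done
```

In practice the rendered file would be copied to /etc/my.cnf.d/galera.cnf on the matching node.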
In the HAProxy configuration, the httpchk option enables periodic health checks of the database service. The database section of the load balancer's configuration file /etc/haproxy/haproxy.cfg looks like this:
......
backend db-vms-galera
option httpchk
option tcpka
stick-table type ip size 1000
stick on dst
timeout server 90m
server controller1-vm 192.168.142.110:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server controller2-vm 192.168.142.111:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
server controller3-vm 192.168.142.112:3306 check inter 1s port 9200 backup on-marked-down shutdown-sessions
......
To expose the database health check over HTTP, configure xinetd. The configuration file can be created under the /etc/xinetd.d directory; note that the port number must match the one specified in haproxy.cfg. A reference configuration:
cat > /etc/xinetd.d/galera-monitor << EOF
service galera-monitor
{
port = 9200
disable = no
socket_type = stream
protocol = tcp
wait = no
user = root
group = root
groups = yes
server = /usr/bin/clustercheck
type = UNLISTED
per_source = UNLIMITED
log_on_success =
log_on_failure = HOST
flags = REUSE
}
EOF
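The port agreement between the two files can be checked mechanically. A sketch using hard-coded sample fragments of the two configurations (illustrative only; against the live files you would grep /etc/haproxy/haproxy.cfg and /etc/xinetd.d/galera-monitor instead):

```shell
#!/bin/sh
# Illustrative consistency check: the xinetd service port must equal
# the health-check port HAProxy probes. Sample fragments, not live files.
haproxy_frag='server controller1-vm 192.168.142.110:3306 check inter 1s port 9200'
xinetd_frag='port = 9200'

hap_port=$(echo "$haproxy_frag" | sed 's/.*port \([0-9]*\).*/\1/')
xin_port=$(echo "$xinetd_frag" | sed 's/.*= *//')

[ "$hap_port" = "$xin_port" ] && echo "ports match: $hap_port"
```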
Start the xinetd service and enable it at boot:
systemctl enable xinetd.service
systemctl start xinetd.service
Now the Galera cluster resource can be created in the Pacemaker cluster. Galera runs in Pacemaker as a multi-state resource; the creation commands are as follows:
node_list="controller1-vm,controller2-vm,controller3-vm"
pcs resource create galera galera enable_creation=true \
    wsrep_cluster_address="gcomm://${node_list}" \
    additional_parameters='--open-files-limit=16384' \
    meta master-max=3 ordered=true \
    op promote timeout=300s on-fail=block --master
After the Galera resource is created, re-enable the HAProxy resource in Pacemaker and set an ordering constraint between Galera and HAProxy. In an OpenStack HA cluster, every virtual IP that services listen on is forwarded by HAProxy to the backend physical servers, so HAProxy must be up before any backend service starts. The ordering constraint for the Galera resource is:
pcs resource enable lb-haproxy
pcs constraint order start lb-haproxy-clone then start galera-master
Once the Galera resource is running, use the clustercheck command to verify that the cluster is stable and that the current node has synced into the Galera cluster:
[root@controller2-vm ~]# clustercheck
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 32
Galera cluster node is synced.
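HAProxy's httpchk probe on port 9200 interprets this response roughly as follows (a simplified sketch of the rule, not HAProxy's actual parser): a 2xx or 3xx status line marks the backend up, anything else marks it down.

```shell
#!/bin/sh
# Illustrative: how "option httpchk" classifies the clustercheck
# response. HAProxy's real check parses the full HTTP response;
# only the status-code rule is modeled here.
backend_state() {
    status_line="$1"
    code=$(echo "$status_line" | awk '{print $2}')
    case "$code" in
        2*|3*) echo UP ;;
        *)     echo DOWN ;;
    esac
}

backend_state 'HTTP/1.1 200 OK'                   # UP
backend_state 'HTTP/1.1 503 Service Unavailable'  # DOWN
```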
Pacemaker's status commands also show the current running state of the cluster resources; the output of pcs resource should look like this:
[root@controller2-vm ~]# pcs resource
Clone Set: lb-haproxy-clone [lb-haproxy]
Started: [ controller1-vm controller2-vm controller3-vm ]
vip-db (ocf::heartbeat:IPaddr2): Started controller1-vm
vip-rabbitmq (ocf::heartbeat:IPaddr2): Started controller2-vm
vip-keystone (ocf::heartbeat:IPaddr2): Started controller3-vm
vip-glance (ocf::heartbeat:IPaddr2): Started controller1-vm
vip-cinder (ocf::heartbeat:IPaddr2): Started controller2-vm
vip-swift (ocf::heartbeat:IPaddr2): Started controller3-vm
vip-neutron (ocf::heartbeat:IPaddr2): Started controller1-vm
vip-nova (ocf::heartbeat:IPaddr2): Started controller2-vm
vip-horizon (ocf::heartbeat:IPaddr2): Started controller3-vm
vip-heat (ocf::heartbeat:IPaddr2): Started controller1-vm
vip-ceilometer (ocf::heartbeat:IPaddr2): Started controller2-vm
vip-qpid (ocf::heartbeat:IPaddr2): Started controller3-vm
Master/Slave Set: galera-master [galera]
Masters: [ controller1-vm controller2-vm controller3-vm ]
Pacemaker shows the Galera cluster in Active/Active multi-master mode, but clients reach Galera through HAProxy, and the backend server configuration in HAProxy restricts client access to a single active node at a time rather than reading and writing all nodes simultaneously. To further confirm that the Galera cluster is ready, log in to the database and inspect the wsrep status variables:
[root@controller1-vm log]# mysql -u root -p
MariaDB [(none)]> show status like 'wsrep%';
+-----------------------------+--------------------------------------+
| Variable_name |Value |
+-----------------------------+--------------------------------------+
| wsrep_local_state_uuid |c47d5ea9-b668-11e6-b3eb-ebe89b9658d2 |
| wsrep_protocol_version |5 |
| wsrep_last_committed |14 |
| wsrep_replicated |0 |
| wsrep_replicated_bytes |0 |
| wsrep_repl_keys |0 |
| wsrep_repl_keys_bytes |0 |
| wsrep_repl_data_bytes |0 |
| wsrep_repl_other_bytes |0 |
| wsrep_received |2 |
| wsrep_received_bytes |315 |
| wsrep_local_commits |0 |
| wsrep_local_cert_failures |0 |
| wsrep_local_replays |0 |
| wsrep_local_send_queue |0 |
| wsrep_local_send_queue_avg |0.500000 |
| wsrep_local_recv_queue |0 |
| wsrep_local_recv_queue_avg |0.000000 |
| wsrep_local_cached_downto |18446744073709551615 |
| wsrep_flow_control_paused_ns |0 |
| wsrep_flow_control_paused |0.000000 |
| wsrep_flow_control_sent |0 |
| wsrep_flow_control_recv |0 |
| wsrep_cert_deps_distance |0.000000 |
| wsrep_apply_oooe |0.000000 |
| wsrep_apply_oool |0.000000 |
| wsrep_apply_window |0.000000 |
| wsrep_commit_oooe |0.000000 |
| wsrep_commit_oool |0.000000 |
| wsrep_commit_window |0.000000 |
| wsrep_local_state |4 |
| wsrep_local_state_comment |Synced |
| wsrep_cert_index_size |0 |
| wsrep_causal_reads |0 |
| wsrep_cert_interval |0.000000 |
| wsrep_incoming_addresses |*.110:3306,*.111:3306,*.112:3306 |
| wsrep_cluster_conf_id |2 |
| wsrep_cluster_size |3 |
| wsrep_cluster_state_uuid |c47d5ea9-b668-11e6-b3eb-ebe89b9658d2 |
| wsrep_cluster_status |Primary |
| wsrep_connected |ON |
| wsrep_local_bf_aborts |0 |
| wsrep_local_index |1 |
| wsrep_provider_name |Galera |
| wsrep_provider_vendor |Codership Oy <info@codership.com> |
| wsrep_provider_version |3.5(rXXXX) |
| wsrep_ready |ON |
| wsrep_thread_count |2 |
+-----------------------------+--------------------------------------+
48 rows in set (0.00 sec)
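The fields worth checking in the output above can be tested programmatically. A minimal sketch with the sample values hard-coded (on a live node they would come from `mysql -N -e "SHOW STATUS LIKE 'wsrep_%';"`):

```shell
#!/bin/sh
# Illustrative readiness check: a node is usable when it is Synced,
# belongs to the Primary component, and reports wsrep_ready ON.
# Sample values below mirror the status output shown in the text.
wsrep_status="wsrep_local_state_comment Synced
wsrep_cluster_status Primary
wsrep_ready ON"

node_is_usable() {
    echo "$wsrep_status" | grep -q '^wsrep_local_state_comment Synced$' &&
    echo "$wsrep_status" | grep -q '^wsrep_cluster_status Primary$' &&
    echo "$wsrep_status" | grep -q '^wsrep_ready ON$'
}

node_is_usable && echo "node is usable"
```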
The wsrep variables show that wsrep_ready is ON, meaning the Galera cluster is ready and the database can now serve reads and writes. The following snippet creates the databases for the OpenStack service components, including those for Keystone, Glance, Cinder, Neutron, Nova, and Heat:
......
galera_script=galera.setup
echo "" > $galera_script
echo "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'root' WITH GRANT OPTION;" >> $galera_script
for db in keystone glance cinder neutron nova heat; do
cat<<EOF >> $galera_script
CREATE DATABASE ${db};
GRANT ALL ON ${db}.* TO '${db}'@'%' IDENTIFIED BY '${db}';
EOF
done
echo "FLUSH PRIVILEGES;" >> $galera_script
mysql mysql < $galera_script
mysqladmin flush-hosts
......
The database-creation statements are replicated to every Galera node, so once the script has run, the new databases are visible from any node:
[root@controller2-vm ~]# mysql -uroot -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| cinder |
| glance |
| heat |
| keystone |
| mysql |
| neutron |
| nova |
| performance_schema |
+--------------------+
[root@controller3-vm ~]# mysql -uroot -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| cinder |
| glance |
| heat |
| keystone |
| mysql |
| neutron |
| nova |
| performance_schema |
+--------------------+
[root@controller1-vm ~]# mysql -uroot -proot -e "show databases;"
+--------------------+
| Database |
+--------------------+
| information_schema |
| cinder |
| glance |
| heat |
| keystone |
| mysql |
| neutron |
| nova |
| performance_schema |
+--------------------+
As shown, although the database commands were executed on only one Galera node, the results are visible on every node. The high-availability deployment of the MariaDB relational database is now complete, and the database can be accessed for reads and writes with high availability. Subsequent sections cover the HA deployment of the other core OpenStack services. The source code for this section's MariaDB HA deployment is available in the author's open-source project on GitHub (https://github.com/ynwssjx/Openstack-HA-Deployment); the corresponding deployment script is 3_create_mariadb_galera_on_pacemaker.sh.