MySQL 8 High-Availability Cluster (MIC, MySQL InnoDB Cluster)
I. Environment
Name | Version/Tag | Notes |
---|---|---|
mysql | mysql:8.0.19 | deployed with docker-compose on hosts 222, 221, 220 |
mysql-shell | coldenol/mysql-shell:8.0.30-debian | deployed with docker-compose; used to manage the cluster |
mysql-router | 8.0.31 | deployed with docker; routing service for the cluster |
docker | 23.0.1 | container runtime |
docker-compose | 1.25.4 | container deployment |
II. Deployment Architecture
III. Deployment Process
1. Disable swap, SELinux, and the firewall; install Docker (omitted here)
2. Install docker-compose
curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.4/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
docker-compose --version
3. Create a dedicated Docker network for the cluster
# Initialize a swarm on host 222
docker swarm init --advertise-addr=10.1.130.222
# Join hosts 221 and 220 to the swarm
docker swarm join --token SWMTKN-1-21ve6sxpzc8vtulhepraa605tw909rky5n6y9b85ezakefefl4-8xrg6v0b4ibqhqnr2s4529mto 10.1.130.222:2377
# Create the attachable overlay network mysql8-cluster on host 222
docker network create -d overlay --attachable mysql8-cluster
4. Create MySQL with docker-compose
4.1 Write the config file my.cnf; the three nodes' files differ only in server_id
[mysqld]
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
datadir = /var/lib/mysql
secure-file-priv= NULL
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
default_authentication_plugin = mysql_native_password
sql_mode=""
log_error = /var/log/mysql/mysql-error.log # error log path
slow_query_log = 1
long_query_time = 1 # queries taking longer than 1 second are logged as slow
slow_query_log_file = /var/log/mysql/mysql-slow.log
max_connections=1000
binlog_checksum=NONE
# Custom config should go here
# Unique identifier per node; the only line that differs between the three cnf files; must be numeric
server_id = 1
# Enable GTIDs, required for Group Replication data synchronization
gtid_mode = ON
enforce_gtid_consistency = ON
# Use single-primary (one writer, multiple readers) mode
loose-group_replication_single_primary_mode = ON
loose-binlog_transaction_dependency_tracking = WRITESET
# Group name; must be a valid UUID, but any made-up one works
loose-group_replication_group_name = "a53e2dec-a5fe-11ed-99b8-080027c5c3a3"
!includedir /etc/mysql/conf.d/
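The group name only has to be a syntactically valid UUID; it does not need to come from any particular source. A small sketch for generating a candidate and checking its 8-4-4-4-12 hex shape before pasting it into my.cnf (assumes `uuidgen` or a Linux kernel; the regex checks only the textual form):

```shell
# Generate a candidate group name; fall back to the kernel's UUID source on Linux.
UUID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)

# Validate the 8-4-4-4-12 hex shape before using it as group_replication_group_name.
if echo "$UUID" | grep -Eiq '^[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$'; then
  echo "valid group name: $UUID"
fi
```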
4.2 Create the docker-compose file mysql8-docker-compose.yml; the nodes differ only in hostname and the service name
version: '2'
services:
  mysql.clus.n1:            # n1, n2, n3 on the respective hosts
    ports:
      - 3306:3306
    hostname: mc1           # mc1, mc2, mc3 on the respective hosts
    networks:
      - mysql8-cluster
    environment:
      MYSQL_ROOT_PASSWORD: "xxxxxxx"
      TZ: Asia/Shanghai
    image: "docker.io/mysql:8.0.19"
    restart: unless-stopped
    volumes:
      - "/data/mysqltest/data:/var/lib/mysql"
      - "/data/mysqltest/conf/my.cnf:/etc/mysql/my.cnf"
      - "/data/mysqltest/log:/var/log/mysql"
      - "/etc/localtime:/etc/localtime:ro"
networks:
  mysql8-cluster:
    external: true
4.3 Create the MySQL containers
Start:
docker-compose -f mysql8-docker-compose.yml up -d
Stop:
docker-compose -f mysql8-docker-compose.yml down -v
After starting, `docker ps` shows the container running, and Navicat can connect to it.
5. Build the MySQL cluster with mysql-shell
5.1 Run mysql-shell. It needs no installation; just run it on demand with:
docker run -it --network mysql8-cluster --rm coldenol/mysql-shell:8.0.30-debian mysqlsh
5.2 Cluster operations
In mysqlsh, the `dba` object provided by MySQL AdminAPI exposes the cluster-management functions: checking instances, creating, configuring, dissolving, and rebooting clusters.
First connect to any instance, for example:
\c root@mc1:3306
(you will be prompted for the password)
Then check that each node's configuration meets the cluster's basic requirements (any missing my.cnf settings are reported):
# Check each node (the hostnames from the docker-compose files)
dba.checkInstanceConfiguration('root@mc1:3306');
dba.checkInstanceConfiguration('root@mc2:3306');
dba.checkInstanceConfiguration('root@mc3:3306');
# Create the cluster clus-1
dba.createCluster('clus-1');
# Get the cluster object
dba.getCluster();
# Show the cluster status
dba.getCluster().status();
# Add the remaining MySQL instances
dba.getCluster().addInstance('root@mc2:3306');
dba.getCluster().addInstance('root@mc3:3306');
# Check the status again
dba.getCluster().status();
Both instances were added successfully; dba.getCluster().status() shows the cluster state:
MySQL mc1:3306 ssl JS > dba.getCluster().status()
{
    "clusterName": "clus-1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mc1:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "mc1:3306": {
                "address": "mc1:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "mc2:3306": {
                "address": "mc2:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "mc3:3306": {
                "address": "mc3:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mc1:3306"
}
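The "can tolerate up to ONE failure" line follows from Group Replication's majority-quorum rule: a group of N members stays available while a majority (N/2 + 1, integer division) remains online. A quick sanity check of the arithmetic:

```shell
# Majority quorum: a group of N members tolerates N - (N/2 + 1) failures.
# For the 3-node cluster above that is exactly 1.
for N in 3 5 7; do
  MAJORITY=$(( N / 2 + 1 ))
  TOLERATED=$(( N - MAJORITY ))
  echo "members=$N majority=$MAJORITY tolerated_failures=$TOLERATED"
done
# prints tolerated_failures of 1, 2, 3 for groups of 3, 5, 7
```

This is also why even-sized groups buy nothing: 4 members tolerate the same single failure as 3.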
With that, the single-primary cluster is complete.
5.3 Group Replication data-sync test
# Create a database, table, and rows on the primary
\c root@mc1:3306
create database clusterdb;
create table clusterdb.emp(id bigint not null AUTO_INCREMENT, usercode VARCHAR(32) DEFAULT NULL, createtime datetime default now(), PRIMARY KEY (id));
insert into clusterdb.emp(usercode) values ('Sol'),('wang');
# Check that a secondary has received the data
\c root@mc2:3306
select * from clusterdb.emp;
Result: the data replicated from primary to secondary.
5.4 Failover test
1. Simulate a primary outage: docker stop mysql.clus.n1
2. Check which node now holds the PRIMARY role.
# Connect to a surviving cluster node
\c root@mc2:3306
# Check the cluster status
dba.getCluster().status();
Result: mc2:3306 has taken over as PRIMARY; failover is complete.
MySQL mc2:3306 ssl JS > dba.getCluster().status()
{
    "clusterName": "clus-1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "mc2:3306",
        "ssl": "REQUIRED",
        "status": "OK_NO_TOLERANCE_PARTIAL",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.",
        "topology": {
            "mc1:3306": {
                "address": "mc1:3306",
                "memberRole": "SECONDARY",
                "mode": "n/a",
                "readReplicas": {},
                "role": "HA",
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'mc1:3306': Unknown MySQL server host 'mc1' (-2)",
                "status": "(MISSING)"
            },
            "mc2:3306": {
                "address": "mc2:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            },
            "mc3:3306": {
                "address": "mc3:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.19"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "mc2:3306"
}
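When scripting checks like this, it helps to pull the current primary out of the status() output instead of reading it by eye. A sketch using only sed on a saved snippet of the JSON (jq would be cleaner if your shell image has it; the snippet below is an abbreviated example, not live output):

```shell
# Extract the "primary" field from a saved status() JSON snippet.
STATUS='{"clusterName": "clus-1", "defaultReplicaSet": {"name": "default", "primary": "mc2:3306"}}'
PRIMARY=$(printf '%s' "$STATUS" | sed -n 's/.*"primary": "\([^"]*\)".*/\1/p')
echo "current primary: $PRIMARY"   # current primary: mc2:3306
```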
Afterwards, remember to bring mc1 back up (docker-compose -f mysql8-docker-compose.yml up -d on its host)!
6. Create a mysql-router container and add it to the cluster
6.1 Run an oraclelinux image and enter the container:
# Start a new container from the oraclelinux image
docker run -dit --restart unless-stopped -e TZ=Asia/Shanghai \
  --name mysql.clus.mr --hostname mr \
  --network mysql8-cluster \
  -p 6446:6446 -p 6447:6447 \
  oraclelinux:9-slim
# Enter the container
docker exec -it mysql.clus.mr bash
6.2 Install mysql-router inside the container
# Use the official MySQL Yum repository
curl -#O https://repo.mysql.com/mysql80-community-release-el9-1.noarch.rpm
rpm -ivh mysql80-community-release-el9-1.noarch.rpm
# Refresh the repo metadata so later installs go faster (this takes a while)
microdnf clean all && microdnf makecache
# Search for packages whose names start with mysql-router
microdnf repoquery mysql-router*
# Install a specific version of the package found; format: {name}-{version}
microdnf install mysql-router-community-8.0.31 -y
# Verify the install
mysqlrouter -V
6.3 Start mysql-router inside the container
# The command takes many options; here the instance is bootstrapped as follows
# URI of the cluster primary, in the format {sql-user}:{pass}@{host}:{port}
mysqlrouter --bootstrap root:<password>@mc1:3306 --user=root --force
# The generated config file can be inspected or edited
cat /etc/mysqlrouter/mysqlrouter.conf
# Start the instance in the background
mysqlrouter --user=root &
Once it is running, check the registered routers from the cluster: dba.getCluster().listRouters()
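The --bootstrap argument follows the {sql-user}:{pass}@{host}:{port} URI shape noted above. If you script the bootstrap, POSIX parameter expansion is enough to assemble and pick apart such a URI (the variable names and values below are illustrative, not from the deployment):

```shell
# Assemble a bootstrap URI from its parts (illustrative values).
DB_USER=root DB_PASS='secret' DB_HOST=mc1 DB_PORT=3306
URI="${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}"

# Pick it apart again with parameter expansion.
HOSTPORT=${URI#*@}        # strip through the '@' -> mc1:3306
HOST=${HOSTPORT%:*}       # strip the port      -> mc1
PORT=${HOSTPORT##*:}      # strip the host      -> 3306
echo "$URI -> host=$HOST port=$PORT"
```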
6.4 Cluster routing test
Use Navicat to connect to mysqlrouter's port 6446 (read/write) and port 6447 (read-only),
then run select @@hostname, @@port; to verify which host each connection reached.
Also try creating a database through both the read/write and the read-only port.
Result: port 6446 connects to the primary and accepts writes; port 6447 connects to a read-only instance, which serves reads but rejects writes. Test passed!
IV. Errors and Fixes
1. Creating the cluster fails, or adding a new instance fails
ERROR: Unable to start Group Replication for instance 'mc1:3306'. Please check the MySQL server error log for more information.
Dba.createCluster: Group Replication failed to start: MySQL Error 3096 (HY000): mc1:3306: The START GROUP_REPLICATION command failed as there was an error when initializing the group communication layer. (RuntimeError)
Fix: check that the mysql-shell container and the MySQL containers are on the same cluster network; that the cluster network's subnet does not clash with any other subnet; and, if you mount a hosts file into the containers, that its entries for mc1/mc2/mc3 do not conflict with the cluster network's own name resolution; remove them if they do.
2. The official mysql-router image from Docker Hub was not used because it kept failing with Error: Configuration error: Invalid section key 'clus-1'. and no fix was found.
The command that was tried:
docker run -tid -e MYSQL_HOST=mc1 -e MYSQL_PORT=3306 -e MYSQL_USER=root -e MYSQL_PASSWORD=<password> -e MYSQL_INNODB_CLUSTER_MEMBERS=3 --network mysql8-cluster mysql/mysql-router:8.0.19
Workaround: install the router inside a container by hand, as described in section 6.
Reference: https://www.cnblogs.com/Sol-wang/p/17122439.html