Setting up MySQL 8.0.20 Group Replication in Docker

I have recently been studying the new features in MySQL 8.0, including MGR (MySQL Group Replication), and wanted an environment to test them.
I first tried to build it on the Alibaba Cloud server I had bought, but one machine turned out not to be enough: the setup needs at least three nodes, so I gave up on that approach.
A friend then happened to introduce me to Docker, which I used instead to build a local MySQL 8.0 MGR environment.

Preparing the Docker image

Run a CentOS container named test:
➜ ~ docker run -d -it --name test centos
Enter the container:
➜ ~ docker exec -it test bash
Install MySQL inside the container:

[root@c610b533f3dc /]# yum install yum-utils wget -y
[root@c610b533f3dc /]# wget https://dev.mysql.com/get/mysql80-community-release-el7-1.noarch.rpm
[root@c610b533f3dc /]# rpm -ivh mysql80-community-release-el7-1.noarch.rpm
[root@c610b533f3dc /]# yum install -y mysql-server

After the installation completes, exit the container and commit it as an image:

[root@c610b533f3dc /]# exit
➜ ~ docker commit <container ID> mysql8.0.20

Start the MySQL containers

docker run -d -it --privileged --name=mysql-mgr-node1 mysql8.0.20 /usr/sbin/init
docker run -d -it --privileged --name=mysql-mgr-node2 mysql8.0.20 /usr/sbin/init
docker run -d -it --privileged --name=mysql-mgr-node3 mysql8.0.20 /usr/sbin/init

Configuring the MGR nodes

Edit the /etc/hosts file

Log in to each container, check its hosts file, and add entries for all three nodes. (Each container's IP on the default bridge network can be found with docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container name>.)

➜  ~ docker exec -it mysql-mgr-node1 bash
[root@4af6073b5ee9 /]# cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.3 4af6073b5ee9
172.17.0.4 762f16c6d69f
172.17.0.5 927681f0afc0

Edit the /etc/my.cnf file

[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

server_id = 1
gtid_mode = ON
enforce_gtid_consistency = ON
binlog_checksum = NONE

transaction_write_set_extraction = XXHASH64
loose-group_replication_group_name = 'ce9be252-2b71-11e6-b8f4-00212844f856'
loose-group_replication_start_on_boot = off
loose-group_replication_local_address = "172.17.0.3:33061"
loose-group_replication_group_seeds = "172.17.0.3:33061,172.17.0.4:33061,172.17.0.5:33061"
loose-group_replication_bootstrap_group = off

On the other two containers, only server_id and loose-group_replication_local_address need different values.
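For example, on mysql-mgr-node2 (172.17.0.4 in this setup) those two lines would become:

```ini
server_id = 2
loose-group_replication_local_address = "172.17.0.4:33061"
```

and server_id = 3 with 172.17.0.5 on mysql-mgr-node3, matching the /etc/hosts entries above.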

Restart MySQL

[root@4af6073b5ee9 /]# systemctl restart mysqld

Change the MySQL password

Initialization writes a temporary root password to /var/log/mysqld.log inside the container.

[root@4af6073b5ee9 /]# cat /var/log/mysqld.log
2020-06-12T10:15:04.837913Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: gYEdgjDqe0,9

Log in to MySQL:
[root@4af6073b5ee9 /]# mysql -u root -p"gYEdgjDqe0,9"

Change the password:
mysql> alter user 'root'@'localhost' identified by 'xxxxx';
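If the new password is rejected with a policy error, note that the RPM packages typically enable the validate_password component by default; the policy can be relaxed first (an optional step, not part of the original walkthrough):

```sql
SET GLOBAL validate_password.policy = LOW;
SET GLOBAL validate_password.length = 6;
```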

Initialize the group replication plugin and replication user

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER repl@'%' IDENTIFIED BY 'xxxx';
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%';
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;
mysql> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='xxxx' FOR CHANNEL 'group_replication_recovery';
The steps above must be executed on all containers.
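Before moving on, it is worth confirming on each node that the plugin actually loaded; one way to check (a sanity check added here, not from the original notes):

```sql
SELECT PLUGIN_NAME, PLUGIN_STATUS
FROM information_schema.PLUGINS
WHERE PLUGIN_NAME = 'group_replication';
```

PLUGIN_STATUS should read ACTIVE.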

Configure the MGR primary node

Run the following on the primary node (mysql-mgr-node1):

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

# Check group membership
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 95744fa5-ac95-11ea-a21d-0242ac110003 | 4af6073b5ee9 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+

Join the other nodes to the group

mysql> START GROUP_REPLICATION;

The replication group status can be checked from any node:

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 95744fa5-ac95-11ea-a21d-0242ac110003 | 4af6073b5ee9 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | 9bda89f2-ac95-11ea-9c6a-0242ac110004 | 762f16c6d69f | 3306 | ONLINE | SECONDARY | 8.0.20 |
| group_replication_applier | 9f569a5e-ac95-11ea-9ffc-0242ac110005 | 927681f0afc0 | 3306 | ONLINE | SECONDARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+

That completes the MySQL MGR setup under Docker.

Setting up MGR multi-primary mode (extension)

Stop group replication on all MGR nodes and set the multi-primary parameters:

mysql> stop group_replication;
mysql> set global group_replication_single_primary_mode=OFF;
mysql> set global group_replication_enforce_update_everywhere_checks=ON;
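To confirm the switch took effect on a node before bootstrapping, the two variables can be checked (a quick verification, not in the original steps):

```sql
SHOW VARIABLES WHERE Variable_name IN
  ('group_replication_single_primary_mode',
   'group_replication_enforce_update_everywhere_checks');
```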

Pick one node to bootstrap the group and run:

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
mysql> START GROUP_REPLICATION;
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

Join the other nodes to the group
mysql> START GROUP_REPLICATION;

Check the MGR group status

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 95744fa5-ac95-11ea-a21d-0242ac110003 | 4af6073b5ee9 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | 9bda89f2-ac95-11ea-9c6a-0242ac110004 | 762f16c6d69f | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | 9f569a5e-ac95-11ea-9ffc-0242ac110005 | 927681f0afc0 | 3306 | ONLINE | PRIMARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+

Problems encountered

[ERROR] Slave I/O for channel 'group_replication_recovery': Got fatal error 1236 from master when
reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has
purged binary logs containing GTIDs that the slave requires.', Error_code: 1236

This happens because both the primary and the joining instance have generated their own GTIDs, which conflict during recovery.
Solution:

# On the primary, run show master status\G to get the executed GTID set
mysql> show master status\G
*************************** 1. row ***************************
File: binlog.000002
Position: 1365
Binlog_Do_DB:
Binlog_Ignore_DB:
Executed_Gtid_Set: 95744fa5-ac95-11ea-a21d-0242ac110003:1-2,
ce9be252-2b71-11e6-b8f4-00212844f856:1-2

# On the replica, reset the GTID state
mysql> STOP GROUP_REPLICATION;
mysql> reset master;

# Use the Executed_Gtid_Set value obtained from show master status on the primary.
mysql> set global gtid_purged = '95744fa5-ac95-11ea-a21d-0242ac110003:1-2,ce9be252-2b71-11e6-b8f4-00212844f856:1-2';

mysql> START GROUP_REPLICATION;

When checking the status, the secondary nodes stay stuck in RECOVERING

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 95744fa5-ac95-11ea-a21d-0242ac110003 | 4af6073b5ee9 | 3306 | ONLINE | PRIMARY | 8.0.20 |
| group_replication_applier | 9bda89f2-ac95-11ea-9c6a-0242ac110004 | 762f16c6d69f | 3306 | RECOVERING | SECONDARY | 8.0.20 |
| group_replication_applier | 9f569a5e-ac95-11ea-9ffc-0242ac110005 | 927681f0afc0 | 3306 | RECOVERING | SECONDARY | 8.0.20 |
+---------------------------+--------------------------------------+--------------+-------------+--------------+-------------+----------------+

This happens when the recovering member cannot establish a recovery connection to a donor.
1. First check /etc/hosts and make sure entries for all containers have been added.
2. If the hosts entries are correct, the recovery channel may be failing the RSA public-key exchange required by caching_sha2_password; allow it to request the donor's public key and restart group replication:

mysql> stop group_replication;
mysql> set global group_replication_recovery_get_public_key=ON;
mysql> start group_replication;
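An alternative (not in the original notes) is to avoid the key exchange entirely by creating the replication user with the mysql_native_password plugin in the first place:

```sql
-- Run with SQL_LOG_BIN=0 on each node, as with the original CREATE USER
CREATE USER repl@'%' IDENTIFIED WITH mysql_native_password BY 'xxxx';
GRANT REPLICATION SLAVE ON *.* TO repl@'%';
```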