Web Cluster Architecture in Practice

Deploying a Distributed, Clustered Shopping-Mall Application System

I. Introduction

1. What is a cluster application system?

A cluster application system is a system made up of multiple independent computers connected over a network that work together to complete a given task. The main goals of a cluster are to improve the system's performance, reliability, and scalability.

By working together, a cluster can deliver high concurrency, stability, high availability, timeliness, and robustness. Concretely, when several nodes handle the same workload in parallel, overall performance improves significantly; when one node fails, the other nodes take over its work, keeping the system stable and available. In addition, techniques such as load balancing spread resources sensibly, further improving overall performance and robustness.

Cluster application systems are widely used in many fields, including Internet services, database management, scientific research, and enterprise applications.

2. How the architecture evolved

From a monolith to a distributed architecture

At the beginning we deployed gpmall on a single node as a monolith. In a monolithic architecture, all functional modules (user management, order processing, product display, payment, and so on) are packaged into a single application. This suits the early phase of a project: deployment is simple and it works well for small systems.

However, as the business grows, the monolith gradually exposes several problems:

  1. Poor maintainability: as the business expands the code base grows, the single application becomes complex, and maintenance gets harder
  2. Limited scalability: a single machine has a hard performance ceiling, and whenever any part needs to be extended or changed the whole application must be redeployed
  3. Fragility: everything fails together; one broken service can take the whole system down and make every service unavailable

To solve these problems, we move to a distributed architecture.

In the distributed architecture, the system includes the following components:

  • Nginx: distributes traffic and load-balances the services (the nginx server)
  • Database and cache services: a master/slave database with read/write splitting, plus a Redis cache, for high data availability and performance (the mycat, db1, db2, and redis servers)
  • Message queue: Kafka provides asynchronous communication between services and reduces coupling between them (the zookeeper1, zookeeper2, zookeeper3 servers)
  • Backend services: two servers, jar1 and jar2, load-balance the backend application

(Mind map)

II. Deployment

IP               Hostname    Role
192.168.104.130  mycat       Mycat middleware service node
192.168.104.131  db1         MariaDB database cluster master node
192.168.104.132  db2         MariaDB database cluster slave node
192.168.104.133  zookeeper1  ZooKeeper/Kafka cluster node
192.168.104.134  zookeeper2  ZooKeeper/Kafka cluster node
192.168.104.135  zookeeper3  ZooKeeper/Kafka cluster node
192.168.104.136  redis       Redis cache service node
192.168.104.137  jar1        Tomcat/jar backend node 1
192.168.104.138  jar2        Tomcat/jar backend node 2
192.168.104.139  nginx       Nginx server

1. Basic configuration

1.1 Set the hostnames
$ sudo hostnamectl set-hostname mycat
$ bash
$ sudo hostnamectl
Static hostname: mycat
Icon name: computer-vm
Chassis: vm
Machine ID: 754f8a1ad2654504b10cacfb2e9d5eb0
Boot ID: 75789f685c6247ee8418c60b451585af
Virtualization: kvm
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1160.45.1.el7.x86_64
Architecture: x86-64

Do the same on db1, db2, zookeeper1, zookeeper2, zookeeper3, redis, jar1, and jar2.
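
If password-free SSH from one workstation to all nodes is already set up, a small loop can apply the hostnames in one go. This is only a sketch: the hostname-to-IP mapping below follows the table above, and root SSH access to every node is an assumption about your environment.

# Hypothetical helper: set every node's hostname over SSH
$ for pair in mycat:130 db1:131 db2:132 zookeeper1:133 zookeeper2:134 zookeeper3:135 redis:136 jar1:137 jar2:138 nginx:139; do
      name=${pair%%:*}; ip=192.168.104.${pair##*:}
      ssh root@$ip "hostnamectl set-hostname $name"
  done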

1.2 Edit the hosts file

Add the following entries on all 10 machines.

$ sudo vi /etc/hosts

# Add the following entries
192.168.104.130 mysql.mall mycat
192.168.104.131 db1
192.168.104.132 db2
192.168.104.133 zk1.mall
192.168.104.134 zk2.mall
192.168.104.135 zk3.mall
192.168.104.133 kafka1.mall
192.168.104.134 kafka2.mall
192.168.104.135 kafka3.mall
192.168.104.136 redis.mall
192.168.104.137 jar1
192.168.104.138 jar2
192.168.104.139 nginx

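Instead of editing the file ten times, the finished /etc/hosts can be pushed to every node with scp; as with the earlier loop, root SSH access to each IP is an assumption.

# Hypothetical helper: distribute the finished hosts file to all nodes
$ for ip in 130 131 132 133 134 135 136 137 138 139; do
      scp /etc/hosts root@192.168.104.$ip:/etc/hosts
  done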

1.3 Configure the yum repositories

Upload the provided gpmall-repo files to /root on all 10 virtual machines, and set up the Aliyun yum repository plus a local yum repository.

# First, on every node, move everything under /etc/yum.repos.d/ to /media/
$ sudo mv /etc/yum.repos.d/* /media/

$ sudo vi /etc/yum.repos.d/CentOS7-Aliyun.repo
# Add the following
[base]
name=CentOS-$releasever - Base - mirrors.aliyun.com
baseurl=http://mirrors.aliyun.com/centos/$releasever/os/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

[updates]
name=CentOS-$releasever - Updates - mirrors.aliyun.com
baseurl=http://mirrors.aliyun.com/centos/$releasever/updates/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

[extras]
name=CentOS-$releasever - Extras - mirrors.aliyun.com
baseurl=http://mirrors.aliyun.com/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=http://mirrors.aliyun.com/centos/RPM-GPG-KEY-CentOS-7

# Then install EPEL
$ sudo yum install -y epel-release

# Install wget and unzip
$ sudo yum install -y wget unzip

# Download gpmall-repo with wget, then unzip it
$ sudo wget https://moka.anitsuri.top/images/gpmall_plural/gpmall-repo.zip
$ sudo unzip gpmall-repo.zip

# Create the local yum repository
$ sudo vi /etc/yum.repos.d/local.repo
# Add the following
[mariadb]
name=mariadb
baseurl=file:///root/gpmall-repo
gpgcheck=0
enabled=1

$ yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* epel: d2lzkl7pfhq30w.cloudfront.net
repo id repo name status
base/7/x86_64 CentOS-7 - Base - mirrors.aliyun.com 10,072
epel/x86_64 Extra Packages for Enterprise Linux 7 - x86_64 13,791
extras/7/x86_64 CentOS-7 - Extras - mirrors.aliyun.com 526
mariadb mariadb 165
updates/7/x86_64 CentOS-7 - Updates - mirrors.aliyun.com 6,173
repolist: 30,727
1.4 Install the JDK

Install JDK 1.8 on the mycat, zookeeper1, zookeeper2, zookeeper3, jar1, and jar2 servers.

$ sudo yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
$ java -version
openjdk version "1.8.0_412"
OpenJDK Runtime Environment (build 1.8.0_412-b08)
OpenJDK 64-Bit Server VM (build 25.412-b08, mixed mode)


1.5 Install the database, ZooKeeper, and other services

On the mycat server, install the Mycat service from the provided Mycat package.

# Download mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz
$ sudo wget https://moka.anitsuri.top/images/mycat_db/mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz

# Extract it to /usr/local and open up its permissions
$ sudo tar -zxf mycat-server-1.6-RELEASE-20161028204710-linux.tar.gz -C /usr/local/
$ sudo chmod -R 777 /usr/local/mycat/

# Add MYCAT_HOME to the system profile, then reload the profile
$ echo "export MYCAT_HOME=/usr/local/mycat/" >> /etc/profile
$ source /etc/profile


Install the MariaDB service on the db1 and db2 servers.

# Install MariaDB
$ sudo yum install -y mariadb mariadb-server

# Start MariaDB and enable it at boot
$ sudo systemctl start mariadb
$ sudo systemctl enable mariadb

On the zookeeper1, zookeeper2, and zookeeper3 servers, install the ZooKeeper and Kafka services from the provided ZooKeeper and Kafka packages.

# Download and extract ZooKeeper
$ sudo wget https://moka.anitsuri.top/images/gpmall/zookeeper-3.4.14.tar.gz
$ tar -zxf zookeeper-3.4.14.tar.gz

# Download and extract Kafka
$ sudo wget https://moka.anitsuri.top/images/gpmall/kafka_2.11-1.1.1.tgz
$ sudo tar -zxf kafka_2.11-1.1.1.tgz

Install the Redis service on the redis server.

$ sudo yum install -y redis

Install the Nginx service on the nginx server.

$ sudo yum install -y nginx

2. Deploy the services

2.1 Deploy the MariaDB service
2.1.1 Initialize MariaDB

On the db1 and db2 virtual machines, initialize the MariaDB database and set the password of the database root user to 123456.

# Run the hardening script (it can set the root password, remove anonymous users, disallow remote root login, remove the test database, and more)
$ mysql_secure_installation

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
SERVERS IN PRODUCTION USE! PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user. If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

# Enter the current password of the root user here
# If MariaDB was just installed and no root password has been set yet, the password is blank, so just press Enter
Enter current password for root (enter for none): # just press Enter
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

# Asks whether you want to set a root password
# Answer Y to set it
Set root password? [Y/n] y
New password: # enter the password 123456
Re-enter new password: # type 123456 again to confirm
Password updated successfully!
Reloading privilege tables..
... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them. This is intended only for testing, and to make the installation
go a bit smoother. You should remove them before moving into a
production environment.

# Whether to remove the anonymous users
# Answer Y to remove them
Remove anonymous users? [Y/n] y
... Success!

Normally, root should only be allowed to connect from 'localhost'. This
ensures that someone cannot guess at the root password from the network.

# Whether to disallow remote root login
# Answer n here and keep remote root login, because Mycat will later connect to the database as root from another host
Disallow root login remotely? [Y/n] n
... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access. This is also intended only for testing, and should be removed
before moving into a production environment.

# Whether to remove the test database and access to it
# Answer Y to remove it
Remove test database and access to it? [Y/n] y
- Dropping test database...
... Success!
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

# Whether to reload the privilege tables so all changes take effect immediately
# Answer Y to reload
Reload privilege tables now? [Y/n] y
... Success!

Cleaning up...

All done! If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
2.1.2 Configure the database cluster master node

Edit the database configuration file /etc/my.cnf on the db1 server.

$ sudo vi /etc/my.cnf
# Add the following
[mysqld]
log_bin = moka
binlog_ignore_db = mysql
server_id = 131

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
log-error=/var/log/mariadb/mariadb.log

[mysqld_safe]
pid-file=/var/run/mariadb/mariadb.pid

# Then create the log directory, give it permissions, and restart MariaDB
$ sudo mkdir /var/log/mariadb/
$ sudo chmod 777 /var/log/mariadb/
$ sudo systemctl restart mariadb


  • Configuration notes:
    • log_bin = moka: enables the binary log and sets its file-name prefix to moka
      • Purpose: the binary log records every operation that modifies data and is used for master/slave replication and recovery; with it enabled, the slave can read and replay these operations to stay in sync
    • binlog_ignore_db = mysql: excludes the mysql database from binary logging
      • Purpose: ensures that operations on the mysql system database are not written to the binary log
    • server_id = 131: sets a unique identifier for this database server
      • Purpose: in a replication setup every server needs a unique server_id, which tells master and slaves apart and prevents replication loops; using the host part of the server's IP address is a common convention
    • datadir=/var/lib/mysql: where the database stores its data files
    • socket=/var/lib/mysql/mysql.sock: path of the MySQL Unix socket file
      • Purpose: local clients connect to the database through this socket
    • symbolic-links=0: disables symbolic links
      • Purpose: recommended for security, to avoid risks such as directory traversal attacks
    • log-error=/var/log/mariadb/mariadb.log: where the error log is written
    • [mysqld_safe] pid-file=/var/run/mariadb/mariadb.pid: the PID file used by the mysqld_safe process
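
The steps above only show db1's my.cnf. For comparison, here is a minimal sketch of what the slave db2's /etc/my.cnf could look like; the assumption is that db2 keeps the stock paths and only needs its own unique server_id (132 here, following the host-number convention), since a pure slave does not strictly need log_bin.

$ sudo vi /etc/my.cnf
# Minimal slave-side sketch (assumption, not shown in the original steps)
[mysqld]
server_id = 132

datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mariadb/mariadb.log

[mysqld_safe]
pid-file=/var/run/mariadb/mariadb.pid

$ sudo mkdir /var/log/mariadb/
$ sudo chmod 777 /var/log/mariadb/
$ sudo systemctl restart mariadb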
2.1.3 Grant access on the master node

On the master node db1, log in to MariaDB with the mysql command.

$ mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 9
Server version: 10.3.18-MariaDB-log MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

# Allow root to log in to the database from any client host
MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO root@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)

# On the master db1, create a user named user for the slave db2 to connect with, and grant it the privilege to replicate from the master
MariaDB [(none)]> GRANT REPLICATION SLAVE ON *.* TO 'user'@'db2' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)

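Optionally, while still logged in on db1, you can confirm that binary logging is active before configuring the slave; the file name and position shown below are only illustrative and will differ on your system.

# Check the master's current binlog file and position (illustrative output)
MariaDB [(none)]> SHOW MASTER STATUS;
+-------------+----------+--------------+------------------+
| File        | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------+----------+--------------+------------------+
| moka.000001 |      696 |              | mysql            |
+-------------+----------+--------------+------------------+
1 row in set (0.000 sec)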

2.1.4 Configure the slave db2 to replicate from the master db1

On db2, log in to MariaDB with the mysql command and configure the slave's connection to the master.

  • master_host is the master's hostname db1, and master_user is the user user created in the previous step
$ mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 15
Server version: 10.3.18-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
MariaDB [(none)]> CHANGE master TO master_host='db1',master_user='user',master_password='123456';
Query OK, 0 rows affected (0.017 sec)

# Start the slave threads
MariaDB [(none)]> START SLAVE;
Query OK, 0 rows affected (0.004 sec)

# Check the replication status
MariaDB [(none)]> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: db1
Master_User: user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: moka.000001
Read_Master_Log_Pos: 696
Relay_Log_File: db2-relay-bin.000002
Relay_Log_Pos: 990
Relay_Master_Log_File: moka.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 696
Relay_Log_Space: 1297
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 131
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
Replicate_Do_Domain_Ids:
Replicate_Ignore_Domain_Ids:
Parallel_Mode: conservative
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
Slave_DDL_Groups: 2
Slave_Non_Transactional_Groups: 0
Slave_Transactional_Groups: 0
1 row in set (0.001 sec)


2.1.5 Verify master/slave replication

First, on db1, create a database named test, create a table company in it, and insert a row of data.

MariaDB [(none)]> CREATE DATABASE test;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> USE test;
Database changed
MariaDB [test]> CREATE TABLE company(id int not null primary key,name varchar(50),addr varchar(255));
Query OK, 0 rows affected (0.014 sec)

MariaDB [test]> INSERT INTO company VALUES(1,"facebook","usa");
Query OK, 1 row affected (0.003 sec)

MariaDB [test]> SELECT * FROM company;
+----+----------+------+
| id | name | addr |
+----+----------+------+
| 1 | facebook | usa |
+----+----------+------+
1 row in set (0.001 sec)


The slave db2 will now have replicated the test database created on the master, so the test database and the company table can be queried on the slave.
If the data can be queried there, master/slave replication in the database cluster is working correctly.

MariaDB [(none)]> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| test |
+--------------------+
4 rows in set (0.001 sec)

MariaDB [(none)]> SELECT * FROM test.company;
+----+----------+------+
| id | name | addr |
+----+----------+------+
| 1 | facebook | usa |
+----+----------+------+
1 row in set (0.001 sec)


2.1.6 Create the gpmall database

Upload the provided gpmall.sql database file to /root on the db1 node.

$ mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 11
Server version: 10.3.18-MariaDB-log MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [gpmall]>
# Create the gpmall database
MariaDB [(none)]> CREATE DATABASE gpmall;
Query OK, 1 row affected (0.002 sec)

# Switch to gpmall and import /root/gpmall.sql
MariaDB [(none)]> USE gpmall;
Database changed
MariaDB [gpmall]> source /root/gpmall.sql

# Then exit the database
MariaDB [gpmall]> Ctrl-C -- exit!
Aborted

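Because gpmall was created and imported on the master, it should replicate to db2 shortly afterwards. A quick, optional check on the slave (the exact table count depends on the gpmall.sql that was imported):

# On db2: confirm the gpmall database arrived and count its tables
$ mysql -uroot -p123456 -e "SHOW DATABASES LIKE 'gpmall'; SELECT COUNT(*) AS gpmall_tables FROM information_schema.tables WHERE table_schema='gpmall';"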

2.2 Deploy the Mycat service
2.2.1 Edit the Mycat configuration file

The schema.xml file that configures Mycat's read/write splitting is in the /usr/local/mycat/conf/ directory.

$ sudo vi /usr/local/mycat/conf/schema.xml

# Change it to the following
<?xml version="1.0"?>
<!DOCTYPE mycat:schema SYSTEM "schema.dtd">
<mycat:schema xmlns:mycat="http://io.mycat/">
    <!-- Define the logical schema gpmall and bind it to data node dn1 -->
    <schema name="gpmall" checkSQLschema="false" sqlMaxLimit="100" dataNode="dn1"></schema>

    <!-- Define data node dn1, backed by data host localhost1 -->
    <dataNode name="dn1" dataHost="localhost1" database="gpmall"></dataNode>

    <!-- Define data host localhost1, which points at the physical database servers -->
    <dataHost name="localhost1" maxCon="1000" minCon="10" balance="3"
              dbType="mysql" dbDriver="native" writeType="0" switchType="1" slaveThreshold="100">
        <heartbeat>select user()</heartbeat>

        <!-- Master node db1: IP address and MySQL credentials -->
        <writeHost host="hostM1" url="192.168.104.131:3306" user="root" password="123456">
            <!-- Slave node db2: IP address and MySQL credentials -->
            <readHost host="hostS1" url="192.168.104.132:3306" user="root" password="123456"></readHost>
        </writeHost>
    </dataHost>
</mycat:schema>
  • Notes:
    • sqlMaxLimit: the default maximum number of rows a query returns
    • database: the name of the real (physical) database
    • balance="0": read/write splitting is disabled; all reads go to the currently available writeHost
    • balance="1": all readHosts and stand-by writeHosts take part in load balancing of SELECT statements
    • balance="2": all reads are distributed randomly across writeHost and readHost
    • balance="3": all reads are sent randomly to the readHosts attached to the writeHost, and the writeHost carries no read load (note: this mode exists only in Mycat 1.4 and later, not in 1.3 and earlier)
    • writeType="0": all writes go to the first configured writeHost; if it fails, Mycat switches to the next surviving writeHost, and after a restart the switched-to host remains in effect; the switch is recorded in the dnindex.properties file

Then fix the ownership of schema.xml:

$ sudo chown root.root /usr/local/mycat/conf/schema.xml
2.2.2 Configure the Mycat access user

Edit the server.xml file in the /usr/local/mycat/conf directory: change the root user's password and schema, setting the password to 123456 and the Mycat logical schema it may access to gpmall.

$ sudo vi /usr/local/mycat/conf/server.xml

# Near the end of the file, change this section to
<user name="root">
<property name="password">123456</property>
<property name="schemas">gpmall</property>
</user>

# And delete this block at the end of the file
<user name="user">
<property name="password">user</property>
<property name="schemas">TESTDB</property>
<property name="readOnly">true</property>
</user>

The last part of /usr/local/mycat/conf/server.xml should now look like this:

$ sudo tail -16 /usr/local/mycat/conf/server.xml
<user name="root">
<property name="password">123456</property>
<property name="schemas">gpmall</property>

<!-- Table-level DML privilege settings -->
<!--
<privileges check="false">
<schema name="TESTDB" dml="0110" >
<table name="tb01" dml="0000"></table>
<table name="tb02" dml="1111"></table>
</schema>
</privileges>
-->
</user>

</mycat:server>
2.2.3 Start the Mycat service
# Start the Mycat database middleware
$ sudo /bin/bash /usr/local/mycat/bin/mycat start

# Install net-tools so netstat is available
$ sudo yum install -y net-tools

# Check that ports 8066 and 9066 are listening; if they are, Mycat started successfully
$ netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 568/rpcbind
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1536/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1041/master
tcp 0 0 127.0.0.1:32000 0.0.0.0:* LISTEN 16487/java
tcp6 0 0 :::111 :::* LISTEN 568/rpcbind
tcp6 0 0 :::34966 :::* LISTEN 16487/java
tcp6 0 0 :::22 :::* LISTEN 1536/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1041/master
tcp6 0 0 :::1984 :::* LISTEN 16487/java
tcp6 0 0 :::8066 :::* LISTEN 16487/java
tcp6 0 0 :::37093 :::* LISTEN 16487/java
tcp6 0 0 :::9066 :::* LISTEN 16487/java

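To confirm that Mycat really serves the gpmall logical schema, you can log in through its SQL port 8066 from the mycat node. This is only a sketch and assumes a MySQL/MariaDB client is installed there (the mariadb package from the CentOS base repository provides one); 9066 is Mycat's management port and accepts Mycat-specific commands such as show @@datasource.

# Install a client if needed, then query Mycat on the SQL port 8066
$ sudo yum install -y mariadb
$ mysql -h127.0.0.1 -P8066 -uroot -p123456 -e 'SHOW DATABASES;'          # should list the logical schema gpmall
$ mysql -h127.0.0.1 -P8066 -uroot -p123456 -e 'USE gpmall; SHOW TABLES;' # tables actually served by db1/db2
$ mysql -h127.0.0.1 -P9066 -uroot -p123456 -e 'show @@datasource;'       # management view of the read/write hosts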

2.3 Deploy the ZooKeeper cluster
2.3.1 Edit the ZooKeeper configuration file

Make the same changes on zookeeper1, zookeeper2, and zookeeper3.

$ cd zookeeper-3.4.14/conf/
$ sudo mv zoo_sample.cfg zoo.cfg
$ sudo vi zoo.cfg
# Append the following at the end
server.1=192.168.104.133:2888:3888
server.2=192.168.104.134:2888:3888
server.3=192.168.104.135:2888:3888
# The full contents of zoo.cfg are now as follows
$ cat zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=192.168.104.133:2888:3888
server.2=192.168.104.134:2888:3888
server.3=192.168.104.135:2888:3888

Configuration notes

  • tickTime: tickTime is the basic time unit behind the initLimit and syncLimit timeouts; for example, if initLimit is set to 5 and tickTime to 2000, the timeout is 2000 ms × 5 = 10 s
  • initLimit: in cluster mode ZooKeeper runs several zk processes (the individual server processes of the cluster); one of them is the leader and the rest are followers
    • When a follower first connects to the leader, a fairly large amount of data is transferred between them, especially if the follower is far behind the leader
    • initLimit sets the maximum time allowed for this synchronization after the connection between follower and leader is established
  • syncLimit: the maximum time allowed for a message exchange, request and response, between a follower and the leader
  • dataDir: the directory where ZooKeeper stores its data, including the transaction log and snapshots
    In cluster mode this directory also holds a myid file that identifies each node; the file contains a single line with a number from 1 to 255, which is the ID in <server.id> and identifies the zk process
  • <server.id>=<host>:<port1>:<port2>
    • <server.id>: a number that uniquely identifies each node in the ZooKeeper cluster; it must match the contents of that node's myid file.
    • <host>: the node's IP address; within the cluster this is normally the internal IP.
    • <port1>: the port used for message exchange between followers and the leader; followers synchronize data and communicate with the leader over this port.
    • <port2>: the port used for leader election; nodes talk to each other over it when a new leader has to be elected.


2.3.2 Create the myid files

On each of the 3 machines, create a myid file in the dataDir directory (here /tmp/zookeeper); each file contains a single line holding 1, 2, or 3 respectively.
That is, the file contains nothing but that number, which must match the value specified in the zoo.cfg configuration above.
ZooKeeper uses this file to decide which identity each machine takes within the cluster.

$ sudo mkdir /tmp/zookeeper
$ sudo vi /tmp/zookeeper/myid

# On zookeeper1, enter
1

# On zookeeper2, enter
2

# On zookeeper3, enter
3
2.3.3 Start ZooKeeper
$ cd /root/zookeeper-3.4.14/bin
# Start the service
$ sudo ./zkServer.sh start

# Check the status
# Run this only after all three machines have been started
$ sudo ./zkServer.sh status

Then check the status on all three machines.

zookeeper1 node

$ sudo ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower

zookeeper2 node

$ sudo ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: leader

zookeeper3 node

$ sudo ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /root/zookeeper-3.4.14/bin/../conf/zoo.cfg
Mode: follower


As you can see, of the 3 nodes zookeeper2 is the leader and the other two are followers.
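
As an extra check, ZooKeeper answers so-called four-letter-word commands on its client port 2181; assuming the stat command is enabled (it is by default in the 3.4.x line) and nc is installed (the nmap-ncat package), each node's mode can be read remotely, for example:

$ echo stat | nc 192.168.104.134 2181 | grep Mode
Mode: leader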

2.4 Deploy the Kafka service
2.4.1 Edit the configuration file

Edit the file on zookeeper1, zookeeper2, and zookeeper3.
(*Note: broker.id must be different on each node; the values for the other two nodes are sketched after the notes below)

$ cd /root/kafka_2.11-1.1.1/config/
$ sudo vi server.properties

# Find the following lines and comment them out with a leading #
broker.id=0 # line 21
zookeeper.connect=localhost:2181 # line 123

# Then add the following three settings (values shown here are for zookeeper1)
broker.id=1
zookeeper.connect=192.168.104.133:2181,192.168.104.134:2181,192.168.104.135:2181
listeners=PLAINTEXT://192.168.104.133:9092
  • Notes:
    • broker.id: different on every machine, analogous to <server.id> in ZooKeeper
    • zookeeper.connect: lists all three ZooKeeper servers, since the cluster has three of them
    • listeners: specifies the address and port the Kafka broker listens on, normally the node's internal IP and Kafka's default port 9092

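For reference, a sketch of the corresponding values on the other two nodes (zookeeper.connect stays identical on all three; only broker.id and listeners change):

# On zookeeper2 (192.168.104.134)
broker.id=2
zookeeper.connect=192.168.104.133:2181,192.168.104.134:2181,192.168.104.135:2181
listeners=PLAINTEXT://192.168.104.134:9092

# On zookeeper3 (192.168.104.135)
broker.id=3
zookeeper.connect=192.168.104.133:2181,192.168.104.134:2181,192.168.104.135:2181
listeners=PLAINTEXT://192.168.104.135:9092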

2.4.2 Start the Kafka service

On zookeeper1, zookeeper2, and zookeeper3:

# Install screen so Kafka can keep running in the background
$ sudo yum install -y screen
# Create a screen session named Kafka
$ sudo screen -S Kafka
# Go to the Kafka directory and start the broker
$ cd /root/kafka_2.11-1.1.1/bin/
$ sudo ./kafka-server-start.sh ../config/server.properties

# Then press Ctrl+a followed by d to detach from the screen session


Test whether the Kafka service works. First, on the zookeeper1 server:

$ cd /root/kafka_2.11-1.1.1/bin/
# Create a topic
$ sudo ./kafka-topics.sh --create --zookeeper 192.168.104.133:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test"
  • If it succeeds, the command prints Created topic "test"


Although the topic was created on zookeeper1 (192.168.104.133), it is visible from the other machines as well, so the topic on zookeeper1 can also be listed from the other nodes.

On the zookeeper2 and zookeeper3 nodes:

$ cd /root/kafka_2.11-1.1.1/bin/
$ sudo ./kafka-topics.sh --list --zookeeper 192.168.104.133:2181
test

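Beyond listing the topic, an end-to-end check is to produce and consume a message on it. The console tools below ship with Kafka 1.1.1 in the same bin/ directory used above; listing all three brokers is an assumption, any reachable broker would do.

# On one node: publish a test message (type a line, then press Ctrl+c to quit)
$ sudo ./kafka-console-producer.sh --broker-list 192.168.104.133:9092,192.168.104.134:9092,192.168.104.135:9092 --topic test

# On another node: read the topic from the beginning (Ctrl+c to quit)
$ sudo ./kafka-console-consumer.sh --bootstrap-server 192.168.104.133:9092,192.168.104.134:9092,192.168.104.135:9092 --topic test --from-beginning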

2.5 Deploy the Redis service

On the redis server, edit the Redis configuration file /etc/redis.conf.

$ sudo vi /etc/redis.conf
# Comment out
bind 127.0.0.1 # line 61
# Change the following from yes to no
protected-mode no # line 80

Then start the Redis service.

$ sudo systemctl start redis
$ sudo systemctl enable redis
Created symlink from /etc/systemd/system/multi-user.target.wants/redis.service to /usr/lib/systemd/system/redis.service.

# Check that Redis is running; if port 6379 is listening, Redis started successfully
$ netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1038/master
tcp 0 0 0.0.0.0:6379 0.0.0.0:* LISTEN 16276/redis-server
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 563/rpcbind
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1508/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1038/master
tcp6 0 0 :::6379 :::* LISTEN 16276/redis-server
tcp6 0 0 :::111 :::* LISTEN 563/rpcbind
tcp6 0 0 :::22 :::* LISTEN 1508/sshd

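With bind commented out and protected-mode off, Redis is reachable from the other nodes. Assuming redis-cli is available (it is installed together with the redis package), a quick functional check looks like this:

# From the redis node, or any machine that has redis-cli installed
$ redis-cli -h 192.168.104.136 ping
PONG
$ redis-cli -h 192.168.104.136 set gpmall:test hello
OK
$ redis-cli -h 192.168.104.136 get gpmall:test
"hello"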

*Note: where each service's (error) log lives
  1. MariaDB: see /etc/my.cnf; here it is configured as log-error=/var/log/mariadb/mariadb.log
  2. Mycat:
    • main log: /usr/local/mycat/logs/mycat.log
    • wrapper log, mainly recording startup and shutdown events: /usr/local/mycat/logs/wrapper.log
    • compressed historical logs: /usr/local/mycat/logs/2024-09/mycat-09-10-1.log.gz (decompress before reading)
  3. ZooKeeper: /root/zookeeper-3.4.14/bin/zookeeper.out
  4. Kafka:
    • main log: /root/kafka_2.11-1.1.1/logs/server.log
    • console output: /root/kafka_2.11-1.1.1/logs/kafkaServer.out
  5. Redis: /var/log/redis/redis.log
*Note: reading the (error) logs
1. MariaDB
$ tail -50 /var/log/mariadb/mariadb.log 
2024-09-14 1:09:37 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2024-09-14 1:09:37 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2024-09-14 1:09:37 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2024-09-14 1:09:37 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2024-09-14 1:09:37 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2024-09-14 1:09:37 0 [Note] InnoDB: Waiting for purge to start
2024-09-14 1:09:37 0 [Note] InnoDB: 10.3.18 started; log sequence number 2036727; transaction id 472
2024-09-14 1:09:37 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2024-09-14 1:09:37 0 [Note] Plugin 'FEEDBACK' is disabled.
2024-09-14 1:09:37 0 [Note] InnoDB: Buffer pool(s) load completed at 240914 1:09:37
2024-09-14 1:09:37 0 [Note] Server socket created on IP: '::'.
2024-09-14 1:09:37 0 [Note] Reading of all Master_info entries succeeded
2024-09-14 1:09:37 0 [Note] Added new Master_info '' to hash table
2024-09-14 1:09:37 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.3.18-MariaDB-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
2024-09-14 1:10:35 10 [Note] Start binlog_dump to slave_server(1), pos(moka.000006, 337)
2024-09-14 1:10:38 0 [Note] /usr/sbin/mysqld (initiated by: unknown): Normal shutdown
2024-09-14 1:10:38 0 [Note] Event Scheduler: Purging the queue. 0 events
2024-09-14 1:10:38 0 [Note] InnoDB: FTS optimize thread exiting.
2024-09-14 1:10:38 0 [Note] InnoDB: Starting shutdown...
2024-09-14 1:10:38 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
2024-09-14 1:10:38 0 [Note] InnoDB: Buffer pool(s) dump completed at 240914 1:10:38
2024-09-14 1:10:40 0 [Note] InnoDB: Shutdown completed; log sequence number 2036761; transaction id 473
2024-09-14 1:10:40 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
2024-09-14 1:10:40 0 [Note] /usr/sbin/mysqld: Shutdown complete

2024-09-14 1:10:40 0 [Note] InnoDB: Using Linux native AIO
2024-09-14 1:10:40 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2024-09-14 1:10:40 0 [Note] InnoDB: Uses event mutexes
2024-09-14 1:10:40 0 [Note] InnoDB: Compressed tables use zlib 1.2.7
2024-09-14 1:10:40 0 [Note] InnoDB: Number of pools: 1
2024-09-14 1:10:40 0 [Note] InnoDB: Using SSE2 crc32 instructions
2024-09-14 1:10:40 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2024-09-14 1:10:40 0 [Note] InnoDB: Completed initialization of buffer pool
2024-09-14 1:10:40 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2024-09-14 1:10:40 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2024-09-14 1:10:40 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2024-09-14 1:10:40 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2024-09-14 1:10:40 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2024-09-14 1:10:40 0 [Note] InnoDB: Waiting for purge to start
2024-09-14 1:10:40 0 [Note] InnoDB: 10.3.18 started; log sequence number 2036761; transaction id 472
2024-09-14 1:10:40 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2024-09-14 1:10:40 0 [Note] Plugin 'FEEDBACK' is disabled.
2024-09-14 1:10:40 0 [Note] InnoDB: Buffer pool(s) load completed at 240914 1:10:40
2024-09-14 1:10:40 0 [Note] Server socket created on IP: '::'.
2024-09-14 1:10:40 0 [Note] Reading of all Master_info entries succeeded
2024-09-14 1:10:40 0 [Note] Added new Master_info '' to hash table
2024-09-14 1:10:40 0 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.3.18-MariaDB-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server
2024-09-14 1:11:38 13 [Note] Start binlog_dump to slave_server(1), pos(moka.000007, 337)
  • Key: [Note]: informational message; [Warning]: warning message; [Error]: error message

    • InnoDB initialization and setup

      2024-09-14  1:09:37 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
      2024-09-14 1:09:37 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
      2024-09-14 1:09:37 0 [Note] InnoDB: Creating shared tablespace for temporary tables
      2024-09-14 1:09:37 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
      2024-09-14 1:09:37 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.

      MariaDB's InnoDB storage engine is initializing and setting up key resources: checking user privileges (user is authorized), activating the rollback segments, creating the shared tablespace for temporary tables, and sizing the temporary file; ibtmp1 is the temporary-table storage file, set to 12 MB

    • InnoDB startup

      2024-09-14  1:09:37 0 [Note] InnoDB: 10.3.18 started; log sequence number 2036727; transaction id 472

      InnoDB started successfully, version 10.3.18, recording the current log sequence number 2036727 and transaction id 472

    • Buffer pool loading and plugin disabled

      2024-09-14  1:09:37 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
      2024-09-14 1:09:37 0 [Note] Plugin 'FEEDBACK' is disabled.

      InnoDB is loading the previously saved buffer pool(s); the FEEDBACK plugin is disabled, so MariaDB will not anonymously collect usage statistics and report them to the MariaDB developers

    • MariaDB finished starting

      2024-09-14  1:09:37 0 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '10.3.18-MariaDB-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server

      The MariaDB server has finished starting and is ready to accept client connections, listening on port 3306, with the socket file at /var/lib/mysql/mysql.sock

    • Binary log shipped to the slave

      2024-09-14  1:10:35 10 [Note] Start binlog_dump to slave_server(1), pos(moka.000006, 337)

      MariaDB starts sending the master's binary log (binlog) to the slave with server ID 1, beginning at byte 337 of file moka.000006; this is part of master/slave replication

    • Normal MariaDB shutdown and the InnoDB shutdown log

      2024-09-14  1:10:38 0 [Note] /usr/sbin/mysqld (initiated by: unknown): Normal shutdown
      2024-09-14 1:10:38 0 [Note] InnoDB: Starting shutdown...
      2024-09-14 1:10:38 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
      2024-09-14 1:10:38 0 [Note] InnoDB: Buffer pool(s) dump completed at 240914 1:10:38
      2024-09-14 1:10:40 0 [Note] InnoDB: Shutdown completed; log sequence number 2036761; transaction id 473

      MariaDB is performing a normal shutdown; InnoDB is shutting down and dumping the buffer pool contents to disk. During shutdown InnoDB writes all active log and transaction information to disk to keep the data consistent

    • MariaDB starts again

      2024-09-14  1:10:40 0 [Note] /usr/sbin/mysqld: ready for connections.
      Version: '10.3.18-MariaDB-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server

      MariaDB restarted and is once again ready to accept connections

    • Binary log replication

      2024-09-14  1:11:38 13 [Note] Start binlog_dump to slave_server(1), pos(moka.000007, 337)

      MariaDB starts shipping the binary log to the slave again; this time the file is moka.000007, starting at byte 337

2. mycat
$ tail -25 /usr/local/mycat/logs/mycat.log
2024-09-14 01:06:31.143 INFO [$_NIOREACTOR-2-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=34, lastTime=1726275991136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=10, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:06:31.143 INFO [$_NIOREACTOR-1-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=33, lastTime=1726275991136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=11, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 INFO [$_NIOREACTOR-3-RW] (io.mycat.net.AbstractConnection.close(AbstractConnection.java:508)) - close connection,reason:no handler ,MySQLConnection [id=35, lastTime=1726276141136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=12, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 INFO [$_NIOREACTOR-0-RW] (io.mycat.net.AbstractConnection.close(AbstractConnection.java:508)) - close connection,reason:no handler ,MySQLConnection [id=32, lastTime=1726276161136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=9, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 INFO [$_NIOREACTOR-1-RW] (io.mycat.net.AbstractConnection.close(AbstractConnection.java:508)) - close connection,reason:no handler ,MySQLConnection [id=33, lastTime=1726276151136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=11, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 INFO [$_NIOREACTOR-2-RW] (io.mycat.net.AbstractConnection.close(AbstractConnection.java:508)) - close connection,reason:no handler ,MySQLConnection [id=34, lastTime=1726276171136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=10, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 WARN [$_NIOREACTOR-2-RW] (io.mycat.backend.mysql.nio.MySQLConnectionHandler.closeNoHandler(MySQLConnectionHandler.java:214)) - no handler bind in this con io.mycat.backend.mysql.nio.MySQLConnectionHandler@7907845f client:MySQLConnection [id=34, lastTime=1726276171136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=10, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 WARN [$_NIOREACTOR-1-RW] (io.mycat.backend.mysql.nio.MySQLConnectionHandler.closeNoHandler(MySQLConnectionHandler.java:214)) - no handler bind in this con io.mycat.backend.mysql.nio.MySQLConnectionHandler@59a6eb88 client:MySQLConnection [id=33, lastTime=1726276151136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=11, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 WARN [$_NIOREACTOR-0-RW] (io.mycat.backend.mysql.nio.MySQLConnectionHandler.closeNoHandler(MySQLConnectionHandler.java:214)) - no handler bind in this con io.mycat.backend.mysql.nio.MySQLConnectionHandler@2bf1c277 client:MySQLConnection [id=32, lastTime=1726276161136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=9, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:35.500 WARN [$_NIOREACTOR-3-RW] (io.mycat.backend.mysql.nio.MySQLConnectionHandler.closeNoHandler(MySQLConnectionHandler.java:214)) - no handler bind in this con io.mycat.backend.mysql.nio.MySQLConnectionHandler@33d45214 client:MySQLConnection [id=35, lastTime=1726276141136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=12, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:09:41.140 INFO [Timer0] (io.mycat.backend.datasource.PhysicalDatasource.getConnection(PhysicalDatasource.java:413)) - no ilde connection in pool,create new connection for hostM1 of schema gpmall
2024-09-14 01:10:38.623 INFO [$_NIOREACTOR-0-RW] (io.mycat.net.AbstractConnection.close(AbstractConnection.java:508)) - close connection,reason:no handler ,MySQLConnection [id=36, lastTime=1726276231136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=9, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:10:38.623 WARN [$_NIOREACTOR-0-RW] (io.mycat.backend.mysql.nio.MySQLConnectionHandler.closeNoHandler(MySQLConnectionHandler.java:214)) - no handler bind in this con io.mycat.backend.mysql.nio.MySQLConnectionHandler@51b8f13b client:MySQLConnection [id=36, lastTime=1726276231136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=9, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:10:41.140 INFO [Timer1] (io.mycat.backend.datasource.PhysicalDatasource.getConnection(PhysicalDatasource.java:413)) - no ilde connection in pool,create new connection for hostM1 of schema gpmall
2024-09-14 01:11:31.139 INFO [Timer1] (io.mycat.backend.datasource.PhysicalDatasource.createByIdleLitte(PhysicalDatasource.java:299)) - create connections ,because idle connection not enough ,cur is 1, minCon is 10 for hostM1
2024-09-14 01:11:31.143 INFO [$_NIOREACTOR-2-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=38, lastTime=1726276291136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=10, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:11:31.143 INFO [$_NIOREACTOR-3-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=39, lastTime=1726276291136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=11, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:11:31.143 INFO [$_NIOREACTOR-0-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=40, lastTime=1726276291136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=12, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:16:31.139 INFO [Timer1] (io.mycat.backend.datasource.PhysicalDatasource.createByIdleLitte(PhysicalDatasource.java:299)) - create connections ,because idle connection not enough ,cur is 4, minCon is 10 for hostM1
2024-09-14 01:16:31.143 INFO [$_NIOREACTOR-2-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=42, lastTime=1726276591136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=14, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:16:31.144 INFO [$_NIOREACTOR-1-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=41, lastTime=1726276591136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=15, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:21:31.139 INFO [Timer1] (io.mycat.backend.datasource.PhysicalDatasource.createByIdleLitte(PhysicalDatasource.java:299)) - create connections ,because idle connection not enough ,cur is 6, minCon is 10 for hostM1
2024-09-14 01:21:31.144 INFO [$_NIOREACTOR-3-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=43, lastTime=1726276891136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=16, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
2024-09-14 01:26:31.139 INFO [Timer0] (io.mycat.backend.datasource.PhysicalDatasource.createByIdleLitte(PhysicalDatasource.java:299)) - create connections ,because idle connection not enough ,cur is 7, minCon is 10 for hostM1
2024-09-14 01:26:31.144 INFO [$_NIOREACTOR-0-RW] (io.mycat.backend.mysql.nio.handler.NewConnectionRespHandler.connectionAcquired(NewConnectionRespHandler.java:45)) - connectionAcquired MySQLConnection [id=44, lastTime=1726277191136, user=root, schema=gpmall, old shema=gpmall, borrowed=false, fromSlaveDB=false, threadId=17, charset=latin1, txIsolation=3, autocommit=true, attachment=null, respHandler=null, host=192.168.104.131, port=3306, statusSync=null, writeQueue=0, modifiedSQLExecuted=false]
  • Levels: DEBUG: debugging; INFO: informational; WARN: warning; ERROR: error; FATAL: fatal error

    • Connection acquired

      2024-09-14 01:06:31.143  INFO [$_NIOREACTOR-2-RW] ... - connectionAcquired MySQLConnection [id=34, lastTime=1726275991136, user=root, schema=gpmall, ... host=192.168.104.131, port=3306, ...]

      The log shows that Mycat successfully established a connection to the MySQL database

      • MySQLConnection [id=34]: the ID of this connection is 34
      • user=root: the database user used for the connection is root
      • schema=gpmall: the database being accessed is gpmall
      • host=192.168.104.131, port=3306: the MySQL host IP is 192.168.104.131 and the port is 3306
    • Connection closed

      2024-09-14 01:09:35.500  INFO [$_NIOREACTOR-3-RW] ... - close connection,reason:no handler ,MySQLConnection [id=35, ... host=192.168.104.131, port=3306, ...]

      Mycat is closing the connection with ID 35; the stated reason is no handler

      This usually means no handler is bound to the connection; it was probably idle or unused past its timeout, so Mycat closed it proactively

    • Warning: no handler bound

      2024-09-14 01:09:35.500  WARN [$_NIOREACTOR-2-RW] ... - no handler bind in this con ... MySQLConnection [id=34, ...]

      A warning saying that the connection with ID 34 has no handler bound to it

      This can happen when the connection has been idle past its timeout, or when Mycat decides the connection is no longer needed

    • New connection created in the pool

      2024-09-14 01:09:41.140  INFO [Timer0] ... - no ilde connection in pool,create new connection for hostM1 of schema gpmall

      Mycat found no idle connection in the pool, so it created a new one

    • Not enough connections in the pool

      2024-09-14 01:11:31.139  INFO [Timer1] ... - create connections ,because idle connection not enough ,cur is 1, minCon is 10 for hostM1

      There are not enough idle connections in the pool: 1 at the moment, while the minimum is 10, so Mycat is creating more connections for hostM1 to reach the minimum

    • New connection acquired

      2024-09-14 01:16:31.144  INFO [$_NIOREACTOR-2-RW] ... - connectionAcquired MySQLConnection [id=42, ...]

      A new MySQL connection, with ID 42, was successfully acquired to the MySQL database at 192.168.104.131:3306

3. ZooKeeper
[root@zookeeper1 ~]# tail -50 /root/zookeeper-3.4.14/bin/zookeeper.out
2024-09-13 09:53:37,356 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20002 with negotiated timeout 30000 for client /192.168.104.134:37020
2024-09-13 09:53:37,414 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000024aca20002
2024-09-13 09:53:37,419 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.104.134:37020 which had sessionid 0x1000024aca20002
2024-09-13 10:03:07,718 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x20000243eba0000 type:setData cxid:0x41 zxid:0x30000004b txntype:-1 reqpath:n/a Error Path:/config/topics/user-register-succ-topic Error:KeeperErrorCode = NoNode for /config/topics/user-register-succ-topic
2024-09-13 10:03:07,958 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x20000243eba0000 type:setData cxid:0x4e zxid:0x300000051 txntype:-1 reqpath:n/a Error Path:/config/topics/__consumer_offsets Error:KeeperErrorCode = NoNode for /config/topics/__consumer_offsets
2024-09-13 10:03:12,861 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.137:34516
2024-09-13 10:03:12,882 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.104.137:34516
2024-09-13 10:03:12,887 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20003 with negotiated timeout 40000 for client /192.168.104.137:34516
2024-09-13 10:03:23,592 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.137:34524
2024-09-13 10:03:23,597 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.104.137:34524
2024-09-13 10:03:23,602 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20004 with negotiated timeout 40000 for client /192.168.104.137:34524
2024-09-13 10:03:38,987 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.138:59332
2024-09-13 10:03:38,991 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.104.138:59332
2024-09-13 10:03:38,997 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20005 with negotiated timeout 40000 for client /192.168.104.138:59332
2024-09-13 10:04:10,969 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.138:59342
2024-09-13 10:04:10,973 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.104.138:59342
2024-09-13 10:04:10,978 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20006 with negotiated timeout 40000 for client /192.168.104.138:59342
2024-09-13 10:28:14,438 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x1000024aca20004, likely client has closed socket
2024-09-13 10:28:14,440 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.104.137:34524 which had sessionid 0x1000024aca20004
2024-09-13 10:28:22,840 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x1000024aca20003, likely client has closed socket
2024-09-13 10:28:22,842 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.104.137:34516 which had sessionid 0x1000024aca20003
2024-09-13 10:28:33,688 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x1000024aca20006, likely client has closed socket
2024-09-13 10:28:33,690 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.104.138:59342 which had sessionid 0x1000024aca20006
2024-09-13 10:28:36,442 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x1000024aca20005, likely client has closed socket
2024-09-13 10:28:36,444 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.104.138:59332 which had sessionid 0x1000024aca20005
2024-09-13 10:28:55,216 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x1000024aca20004, timeout of 40000ms exceeded
2024-09-13 10:28:55,219 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000024aca20004
2024-09-13 10:28:59,216 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x3000023e26f0001, timeout of 40000ms exceeded
2024-09-13 10:28:59,217 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x3000023e26f0000, timeout of 40000ms exceeded
2024-09-13 10:28:59,218 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x3000023e26f0001
2024-09-13 10:28:59,219 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x3000023e26f0000
2024-09-13 10:29:03,216 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x1000024aca20003, timeout of 40000ms exceeded
2024-09-13 10:29:03,218 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000024aca20003
2024-09-13 10:29:07,217 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x1000024aca20006, timeout of 40000ms exceeded
2024-09-13 10:29:07,219 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000024aca20006
2024-09-13 10:29:13,216 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x20000243eba0003, timeout of 40000ms exceeded
2024-09-13 10:29:13,219 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x20000243eba0003
2024-09-13 10:29:15,217 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x1000024aca20005, timeout of 40000ms exceeded
2024-09-13 10:29:15,220 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000024aca20005
2024-09-13 10:29:21,216 [myid:1] - INFO [SessionTracker:ZooKeeperServer@355] - Expiring session 0x20000243eba0002, timeout of 40000ms exceeded
2024-09-13 10:29:21,218 [myid:1] - INFO [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x20000243eba0002
2024-09-13 10:31:18,104 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.137:34552
2024-09-13 10:31:18,109 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.104.137:34552
2024-09-13 10:31:18,115 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20007 with negotiated timeout 40000 for client /192.168.104.137:34552
2024-09-13 10:31:23,691 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.137:34564
2024-09-13 10:31:23,702 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.104.137:34564
2024-09-13 10:31:23,707 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20008 with negotiated timeout 40000 for client /192.168.104.137:34564
2024-09-13 10:31:31,210 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.138:59364
2024-09-13 10:31:31,214 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.104.138:59364
2024-09-13 10:31:31,219 [myid:1] - INFO [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20009 with negotiated timeout 40000 for client /192.168.104.138:59364
  • Levels: DEBUG: debugging; INFO: informational; WARN: warning; ERROR: error; FATAL: fatal error

    • Session established

      2024-09-13 09:53:37,356 [myid:1] - INFO  [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20002 with negotiated timeout 30000 for client /192.168.104.134:37020

      ZooKeeper established a session for client /192.168.104.134:37020, assigned it session ID 0x1000024aca20002, and negotiated a timeout of 30000 ms

    • Session terminated

      2024-09-13 09:53:37,414 [myid:1] - INFO  [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000024aca20002

      The session with ID 0x1000024aca20002 has been terminated, either because the client disconnected deliberately or because it timed out

    • Socket connection closed

      2024-09-13 09:53:37,419 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.104.134:37020 which had sessionid 0x1000024aca20002

      ZooKeeper closed the socket connection to client /192.168.104.134:37020, whose session ID was 0x1000024aca20002

    • Node-does-not-exist error

      2024-09-13 10:03:07,718 [myid:1] - INFO  [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x20000243eba0000 type:setData cxid:0x41 zxid:0x30000004b txntype:-1 reqpath:n/a Error Path:/config/topics/user-register-succ-topic Error:KeeperErrorCode = NoNode for /config/topics/user-register-succ-topic

      While processing a client's setData request, ZooKeeper hit a NoNode error: the client tried to modify the ZNode /config/topics/user-register-succ-topic, which does not exist

    • New client connection accepted

      2024-09-13 10:03:12,861 [myid:1] - INFO  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@222] - Accepted socket connection from /192.168.104.137:34516

      ZooKeeper accepted a new socket connection from 192.168.104.137:34516

    • Session expiry

      2024-09-13 10:28:55,216 [myid:1] - INFO  [SessionTracker:ZooKeeperServer@355] - Expiring session 0x1000024aca20004, timeout of 40000ms exceeded

      The session with ID 0x1000024aca20004 expired because its 40000 ms timeout was exceeded; the client went past the negotiated timeout without sending anything to keep the session alive

    • Session termination processed

      2024-09-13 10:28:55,219 [myid:1] - INFO  [ProcessThread(sid:1 cport:-1)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000024aca20004

      ZooKeeper processed the termination of session 0x1000024aca20004, which is normally done after an expiry or when a client disconnects deliberately

    • Unable to read data from a client

      2024-09-13 10:28:14,438 [myid:1] - WARN  [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@376] - Unable to read additional data from client sessionid 0x1000024aca20004, likely client has closed socket

      ZooKeeper failed while trying to read data from a client, most likely because the client had already closed the connection; the level is WARN, an abnormal condition worth noting

    • New client session established

      2024-09-13 10:31:18,115 [myid:1] - INFO  [CommitProcessor:1:ZooKeeperServer@694] - Established session 0x1000024aca20007 with negotiated timeout 40000 for client /192.168.104.137:34552

      ZooKeeper created a new session for client 192.168.104.137:34552, with a negotiated timeout of 40000 ms and session ID 0x1000024aca20007

4. Kafka
$ cat /root/kafka_2.11-1.1.1/logs/server.log
[2024-09-14 02:02:52,438] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2024-09-14 02:12:52,439] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2024-09-14 02:22:52,438] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
  • Analysis: DEBUG: debugging messages; INFO: general informational messages; WARN: warnings; ERROR: errors; FATAL: fatal errors
    • Kafka broker 1 ran its expired-offset cleanup in 1 millisecond and found no expired offsets to remove
    • Kafka broker 1 ran the cleanup again, found no expired offsets, and finished in under 1 millisecond
    • Kafka broker 1 ran the cleanup a third time, again found no expired offsets, and took 1 millisecond
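
  Beyond reading server.log, the command-line tools shipped with kafka_2.11-1.1.1 can confirm that the cluster actually holds the topics and consumer groups the application uses. A minimal sketch; the broker address kafka1.mall:9092 is an assumption based on the hosts file and Kafka's default listener port, and mail-group-id is the consumer group that appears later in the backend logs:

  # List the topics registered in ZooKeeper
  $ /root/kafka_2.11-1.1.1/bin/kafka-topics.sh --zookeeper zk1.mall:2181 --list
  # Show the members and offsets of the mail consumer group
  $ /root/kafka_2.11-1.1.1/bin/kafka-consumer-groups.sh --bootstrap-server kafka1.mall:9092 --describe --group mail-group-id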
5. redis
cat /var/log/redis/redis.log
8272:C 13 Sep 09:54:05.846 * supervised by systemd, will signal readiness
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 3.2.12 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in standalone mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
| `-._ `._ / _.-' | PID: 8272
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'

8272:M 13 Sep 09:54:05.849 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
8272:M 13 Sep 09:54:05.849 # Server started, Redis version 3.2.12
8272:M 13 Sep 09:54:05.849 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
8272:M 13 Sep 09:54:05.850 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
8272:M 13 Sep 09:54:05.850 * The server is now ready to accept connections on port 6379
  • Analysis:

    • Service startup

      8272:C 13 Sep 09:54:05.846 * supervised by systemd, will signal readiness

      This shows that the Redis process ID is 8272; the 'C' tag marks these early startup lines, before the process switches to the 'M' (master) tag seen below. Redis is running supervised by systemd and will signal readiness to it once startup completes

    • Welcome banner

                     _._                                                  
      _.-``__ ''-._
      _.-`` `. `_. ''-._ Redis 3.2.12 (00000000/0) 64 bit
      .-`` .-```. ```\/ _.,_ ''-._
      ( ' , .-` | `, ) Running in standalone mode
      |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379
      | `-._ `._ / _.-' | PID: 8272
      `-._ `-._ `-./ _.-' _.-'
      |`-._`-._ `-.__.-' _.-'_.-'|
      | `-._`-._ _.-'_.-' | http://redis.io
      `-._ `-._`-.__.-'_.-' _.-'
      |`-._`-._ `-.__.-' _.-'_.-'|
      | `-._`-._ _.-'_.-' |
      `-._ `-._`-.__.-'_.-' _.-'
      `-._ `-.__.-' _.-'
      `-._ _.-'
      `-.__.-'

      This is the banner Redis prints at startup; it shows the Redis version (3.2.12), the process ID (8272) and the listening port (6379)

    • TCP backlog warning

      8272:M 13 Sep 09:54:05.849 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

      Redis asked for a TCP backlog of 511, but the system limit (/proc/sys/net/core/somaxconn) is only 128, so the smaller value applies

    • Server started

      8272:M 13 Sep 09:54:05.849 # Server started, Redis version 3.2.12

      The Redis server has started and is running version 3.2.12

    • overcommit_memory warning

      8272:M 13 Sep 09:54:05.849 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

      The system's overcommit_memory parameter is 0, so memory allocation is constrained by the kernel and Redis background saves may fail when memory runs low

    • THP (Transparent Huge Pages) warning

      8272:M 13 Sep 09:54:05.850 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

      Transparent Huge Pages (THP) is enabled in the kernel; with Redis this causes extra latency and memory-usage problems, so the log recommends disabling it

    • Ready for connections

      8272:M 13 Sep 09:54:05.850 * The server is now ready to accept connections on port 6379

      The Redis server is now ready to accept client connections on port 6379
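
    The three warnings above are advisory rather than fatal, but they can be cleared with exactly the kernel settings the log recommends. A minimal sketch, run on the redis node and assuming the systemd unit is named redis:

    # Raise the connection backlog limit so Redis can use its requested value of 511
    $ sudo sysctl -w net.core.somaxconn=511
    # Allow memory overcommit so background saves do not fail under memory pressure
    $ sudo sysctl -w vm.overcommit_memory=1
    # Disable Transparent Huge Pages to avoid the latency issues the log warns about
    $ echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
    # Persist the sysctl values and restart Redis so everything takes effect
    $ echo -e "net.core.somaxconn = 511\nvm.overcommit_memory = 1" | sudo tee -a /etc/sysctl.conf
    $ sudo systemctl restart redis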

3. Configure the backend

Upload the four provided jar packages to the /root directory on the jar1 and jar2 nodes, then run all four jars on both machines

$ sudo wget https://moka.anitsuri.top/images/gpmall_plural/user-provider-0.0.1-SNAPSHOT.jar
$ sudo wget https://moka.anitsuri.top/images/gpmall_plural/shopping-provider-0.0.1-SNAPSHOT.jar
$ sudo wget https://moka.anitsuri.top/images/gpmall_plural/gpmall-shopping-0.0.1-SNAPSHOT.jar
$ sudo wget https://moka.anitsuri.top/images/gpmall_plural/gpmall-user-0.0.1-SNAPSHOT.jar
$ sudo nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
$ sudo nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
$ sudo nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
$ sudo nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
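
Before moving on, it is worth confirming that the services actually came up on each jar node. A minimal sketch; ports 8081 (gpmall-shopping) and 8082 (gpmall-user) are the ones visible later in nohup.out, while 8083 is the cashier port assumed by the nginx upstream configuration below:

# Four java processes should be present, one per jar ([j]ava keeps grep from matching itself)
$ ps -aux | grep [j]ava
# The HTTP-facing services should be listening on their ports
$ sudo netstat -ntpl | grep -E ':808[1-3]'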
* Check / logs (analysis)
  • Check:

    Use ps -aux | grep java to verify that the four jar packages are running

  • Log: nohup.out

3.3

  • Log analysis

    $ tail -50 nohup.out
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:host.name=jar1
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:java.version=1.8.0_412
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:java.vendor=Red Hat, Inc.
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:java.home=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.412.b08-1.el7_9.x86_64/jre
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:java.class.path=gpmall-shopping-0.0.1-SNAPSHOT.jar
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:java.io.tmpdir=/tmp
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:java.compiler=<NA>
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:os.name=Linux
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:os.arch=amd64
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:os.version=3.10.0-1160.45.1.el7.x86_64
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:user.name=root
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:user.home=/root
    2024-09-13 10:31:27.883 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Client environment:user.dir=/root
    2024-09-13 10:31:27.884 INFO 8688 --- [ main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=zk1.mall:2181,zookeeper::9090 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@4b2c5e02
    2024-09-13 10:31:27.906 INFO 8688 --- [(zk1.mall:2181)] org.apache.zookeeper.ClientCnxn : Opening socket connection to server zk1.mall/192.168.104.135:2181. Will not attempt to authenticate using SASL (unknown error)
    2024-09-13 10:31:27.914 INFO 8688 --- [(zk1.mall:2181)] org.apache.zookeeper.ClientCnxn : Socket connection established to zk1.mall/192.168.104.135:2181, initiating session
    2024-09-13 10:31:27.916 INFO 8688 --- [ main] o.a.c.f.imps.CuratorFrameworkImpl : Default schema
    2024-09-13 10:31:27.934 INFO 8688 --- [(zk1.mall:2181)] org.apache.zookeeper.ClientCnxn : Session establishment complete on server zk1.mall/192.168.104.135:2181, sessionid = 0x3000023e26f0003, negotiated timeout = 40000
    2024-09-13 10:31:27.945 INFO 8688 --- [ain-EventThread] o.a.c.f.state.ConnectionStateManager : State change: CONNECTED
    2024-09-13 10:31:28.594 INFO 8734 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:28.694 INFO 8734 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:28.726 INFO 8688 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:28.818 INFO 8734 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:28.923 INFO 8734 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:29.024 INFO 8688 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:29.193 INFO 8688 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:29.267 INFO 8688 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:29.437 INFO 8734 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
    2024-09-13 10:31:29.464 INFO 8688 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:29.931 INFO 8734 --- [ main] o.a.c.framework.CuratorFrameworkFactory : zkHosts:null,sessionTimeout:30000,connectionTimeout:30000,singleton:true,namespacenull
    2024-09-13 10:31:30.083 INFO 8688 --- [ main] o.a.d.c.s.b.f.a.ReferenceBeanBuilder : The bean[type:ReferenceBean] has been built.
    2024-09-13 10:31:30.199 INFO 8734 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8082 (http) with context path ''
    2024-09-13 10:31:30.203 INFO 8734 --- [ main] c.g.u.gpmalluser.GpmallUserApplication : Started GpmallUserApplication in 9.606 seconds (JVM running for 10.69)
    2024-09-13 10:31:30.423 INFO 8688 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator'
    2024-09-13 10:31:30.714 INFO 8688 --- [ main] pertySourcedRequestMappingHandlerMapping : Mapped URL path [/v2/api-docs] onto method [public org.springframework.http.ResponseEntity<springfox.documentation.spring.web.json.Json> springfox.documentation.swagger2.web.Swagger2Controller.getDocumentation(java.lang.String,javax.servlet.http.HttpServletRequest)]
    2024-09-13 10:31:31.020 INFO 8688 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
    2024-09-13 10:31:31.282 INFO 8688 --- [ main] o.a.c.framework.CuratorFrameworkFactory : zkHosts:null,sessionTimeout:30000,connectionTimeout:30000,singleton:true,namespacenull
    2024-09-13 10:31:31.651 INFO 8688 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Context refreshed
    2024-09-13 10:31:31.673 INFO 8688 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Found 1 custom documentation plugin(s)
    2024-09-13 10:31:31.673 INFO 8688 --- [ main] d.s.w.p.DocumentationPluginsBootstrapper : Skipping initializing disabled plugin bean swagger v2.0
    2024-09-13 10:31:31.709 INFO 8688 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8081 (http) with context path ''
    2024-09-13 10:31:31.713 INFO 8688 --- [ main] c.g.s.g.GpmallShoppingApplication : Started GpmallShoppingApplication in 13.866 seconds (JVM running for 15.049)
    2024-09-13 10:31:32.655 INFO 8640 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=mail-group-id] Attempt to heartbeat failed since group is rebalancing
    2024-09-13 10:31:32.657 INFO 8640 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=mail-group-id] Revoking previously assigned partitions [user-register-succ-topic-0]
    2024-09-13 10:31:32.657 INFO 8640 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked: [user-register-succ-topic-0]
    2024-09-13 10:31:32.657 INFO 8640 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=mail-group-id] (Re-)joining group
    2024-09-13 10:31:32.670 INFO 8640 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=mail-group-id] Successfully joined group with generation 6
    2024-09-13 10:31:32.672 INFO 8640 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=mail-group-id] Setting newly assigned partitions []
    2024-09-13 10:31:32.672 INFO 8640 --- [ntainer#0-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: []
    • Analysis:

      • Starting up

        2024-09-13 10:31:27.883 INFO 8688 --- [main] org.apache.zookeeper.ZooKeeper : Client environment:host.name=jar1
        2024-09-13 10:31:27.883 INFO 8688 --- [main] org.apache.zookeeper.ZooKeeper : Client environment:java.version=1.8.0_412
        2024-09-13 10:31:27.883 INFO 8688 --- [main] org.apache.zookeeper.ZooKeeper : Client environment:java.vendor=Red Hat, Inc.
        2024-09-13 10:31:27.883 INFO 8688 --- [main] org.apache.zookeeper.ZooKeeper : Client environment:java.home=/usr/lib/jvm/...

        The ZooKeeper client starts up and records its environment: the host name is jar1, the Java runtime version is 1.8.0_412, the Java vendor is Red Hat, Inc., and the Java installation path is /usr/lib/jvm/...

      • Connecting

        2024-09-13 10:31:27.884 INFO 8688 --- [main] org.apache.zookeeper.ZooKeeper : Initiating client connection, connectString=zk1.mall:2181
        2024-09-13 10:31:27.906 INFO 8688 --- [zk1.mall:2181] org.apache.zookeeper.ClientCnxn : Opening socket connection to server zk1.mall/192.168.104.135:2181
        2024-09-13 10:31:27.914 INFO 8688 --- [zk1.mall:2181] org.apache.zookeeper.ClientCnxn : Socket connection established to zk1.mall/192.168.104.135:2181
        2024-09-13 10:31:27.934 INFO 8688 --- [zk1.mall:2181] org.apache.zookeeper.ClientCnxn : Session establishment complete on server zk1.mall/192.168.104.135:2181

        The ZooKeeper client starts connecting to zk1.mall:2181 (the ZooKeeper server address and port), successfully opens a socket to 192.168.104.135 on port 2181, and then completes session negotiation with 192.168.104.135:2181

      • Spring Boot application startup

        2024-09-13 10:31:30.199 INFO 8734 --- [main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8082 (http)
        2024-09-13 10:31:30.203 INFO 8734 --- [main] c.g.u.gpmalluser.GpmallUserApplication : Started GpmallUserApplication in 9.606 seconds

        The embedded Tomcat server of the Spring Boot application started successfully and is listening on port 8082; GpmallUserApplication then finished starting in 9.606 seconds

      • Heartbeat failure

        2024-09-13 10:31:32.657 INFO 8640 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-2, groupId=mail-group-id] Attempt to heartbeat failed since group is rebalancing
        2024-09-13 10:31:32.657 INFO 8640 --- [ntainer#0-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-2, groupId=mail-group-id] Revoking previously assigned partitions [user-register-succ-topic-0]

        The Kafka consumer client consumer-2 failed to send a heartbeat to the mail-group-id consumer group because the group is rebalancing

        Consumer consumer-2 then revoked its previously assigned partition user-register-succ-topic-0, which is part of the group rebalance
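
      Once startup finishes, the endpoints mentioned in this excerpt can be probed directly from the jar node as an extra check. A minimal sketch; the paths come straight from the log lines above (the /actuator base path and the /v2/api-docs Swagger mapping), so only the HTTP status code is being checked here:

      # gpmall-shopping (port 8081): the Swagger description mapped in the log, expect 200
      $ curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8081/v2/api-docs
      # gpmall-user (port 8082): confirm the embedded Tomcat answers at all
      $ curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8082/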

4. Configure the frontend

Upload the provided dist folder to the /root directory on the nginx node, then copy it into the nginx web root

$ sudo rm -rf /usr/share/nginx/html/*
$ sudo cp -rvf dist/* /usr/share/nginx/html/

Then edit the nginx configuration file /etc/nginx/conf.d/default.conf

$ sudo vi /etc/nginx/conf.d/default.conf

# Fill in the following, adjusting the IP addresses to your own
upstream myuser {
    server 192.168.104.137:8082;
    server 192.168.104.138:8082;
    ip_hash;
}

upstream myshopping {
    server 192.168.104.137:8081;
    server 192.168.104.138:8081;
    ip_hash;
}

upstream mycashier {
    server 192.168.104.137:8083;
    server 192.168.104.138:8083;
    ip_hash;
}

server {
    listen 80;
    server_name localhost;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /user {
        proxy_pass http://myuser;
    }

    location /shopping {
        proxy_pass http://myshopping;
    }

    location /cashier {
        proxy_pass http://mycashier;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
  • Configuration breakdown:
    • upstream myuser / myshopping / mycashier
      • These blocks define the upstream server pools for the myuser, myshopping and mycashier services
      • Each upstream lists multiple backend addresses (192.168.104.137:8082, 192.168.104.138:8082, and so on), the backend nodes of the service cluster
      • ip_hash: an nginx load-balancing method that hashes the client IP address, so requests from the same client are always routed to the same backend node
    • server { }
      • listen 80: nginx listens for HTTP requests on port 80
      • server_name localhost: the virtual host name (here localhost; replace it with a domain name as needed)
      • location /: requests for the root path / are served from /usr/share/nginx/html, the default static-resource directory
      • location /user /shopping /cashier: requests under /user, /shopping and /cashier are proxied to the corresponding upstreams (myuser, myshopping, mycashier)
      • error_page: a custom error page; on HTTP 500, 502, 503 or 504 errors the /50x.html file is returned
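
Before starting nginx (or after any later change to this file), the syntax can be validated with the nginx -t check also mentioned in the check step below; a correct file reports ok/successful. A minimal sketch:

# Validate the configuration; expected output ends with "syntax is ok" and "test is successful"
$ sudo nginx -t
# If nginx is already running, apply the new configuration without stopping it
$ sudo systemctl reload nginx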

Start the nginx service

$ sudo systemctl start nginx
# Check whether nginx is running (i.e. whether port 80 is listening)
$ netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 604/rpcbind
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 16257/nginx: master
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1540/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1043/master
tcp6 0 0 :::111 :::* LISTEN 604/rpcbind
tcp6 0 0 :::80 :::* LISTEN 16257/nginx: master
tcp6 0 0 :::22 :::* LISTEN 1540/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1043/master
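
With port 80 listening, a quick request from any host that can reach the nginx node is a useful sanity check before opening a browser. A minimal sketch; a 200 response indicates the copied dist front end is being served:

$ curl -s -o /dev/null -w '%{http_code}\n' http://192.168.104.139/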
* Check / logs (analysis)
  • Check:

    Use nginx -t to check the nginx configuration file for syntax errors; if there are none, it reports "ok" and "successful"

  • Logs:

    • Error log: /var/log/nginx/error.log
    • Access log: /var/log/nginx/access.log
  • Log analysis:

    $ tail -3 /var/log/nginx/error.log
    2024/09/14 03:00:15 [error] 8340#8340: *7 open() "/usr/share/nginx/html/static/js/manifest.2d17a82764acff8145be.js" failed (2: No such file or directory), client: 192.168.104.36, server: localhost, request: "GET /static/js/manifest.2d17a82764acff8145be.js HTTP/1.1", host: "192.168.104.139", referrer: "http://192.168.104.139/"
    2024/09/14 03:00:15 [error] 8340#8340: *7 open() "/usr/share/nginx/html/static/js/vendor.4f07d3a235c8a7cd4efe.js" failed (2: No such file or directory), client: 192.168.104.36, server: localhost, request: "GET /static/js/vendor.4f07d3a235c8a7cd4efe.js HTTP/1.1", host: "192.168.104.139", referrer: "http://192.168.104.139/"
    2024/09/14 03:00:15 [error] 8340#8340: *7 open() "/usr/share/nginx/html/static/js/app.81180cbb92541cdf912f.js" failed (2: No such file or directory), client: 192.168.104.36, server: localhost, request: "GET /static/js/app.81180cbb92541cdf912f.js HTTP/1.1", host: "192.168.104.139", referrer: "http://192.168.104.139/"
    • Analysis:

      1. Time: 2024/09/14 03:00:15

      2. Log level: [error]

      3. Process and connection info:

        • 8340#8340: the nginx worker process ID and thread ID (usually identical, since nginx workers are single-threaded)
        • *7: the internal connection ID of the request, used to identify a particular client connection
      4. Error type:

        open() "/usr/share/nginx/html/static/js/manifest.2d17a82764acff8145be.js" failed (2: No such file or directory)

        nginx tried to open the file /usr/share/nginx/html/static/js/manifest.2d17a82764acff8145be.js and failed because the file does not exist (No such file or directory, error code 2)

      5. Client info:

        client: 192.168.104.36: the IP address of the client that made the request is 192.168.104.36

      6. Server info:

        server: localhost: the server block that handled this request is localhost

      7. Request info:

        request: "GET /static/js/manifest.2d17a82764acff8145be.js HTTP/1.1": the client sent an HTTP GET request for the file /static/js/manifest.2d17a82764acff8145be.js

      8. Host info:

        host: "192.168.104.139": the Host header of the request named 192.168.104.139 as the target host

      9. Referrer:

        referrer: "http://192.168.104.139/": the request originated from the page http://192.168.104.139/, i.e. that page tried to load this static resource

    • The other two entries are essentially the same; only the file the client tried to access differs
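
    Errors like these usually mean the dist assets never made it into (or were removed from) the web root. A minimal sketch of how one might confirm and fix that on the nginx node, assuming the uploaded dist folder is still under /root:

    # The hashed JS bundles requested by the browser should exist here
    $ ls /usr/share/nginx/html/static/js/
    # If the directory is empty or missing, re-copy the provided front-end build
    $ sudo cp -rvf /root/dist/* /usr/share/nginx/html/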

5. Accessing the site

Open Chrome and enter http://192.168.104.139 in the address bar to open the page

3.5_1

Click the avatar in the upper-right corner to log in, using test/test as the username/password

3.6.2

After logging in, you can purchase goods; click the "坚果 R1" image on the home page
3.5_2
3.6.4

After clicking the "现在购买" (Buy Now) button, the site jumps to the order submission page
3.5_NET

At this point, the cluster application system deployment is complete.

