Installing a zookeeper-3.9.1 + kafka-3.6.1 cluster on openEuler 22.03

Posted by 江晚正愁余 on 2024/01/31 16:14:25

Deploying a zookeeper + kafka cluster on openEuler 22.03

I. Environment

1. Host information

The first three hosts run zookeeper-3.9.1 and kafka_2.13-3.6.1; Java is the java-11-openjdk build.

192.168.0.11 ecs-0001
192.168.0.12 ecs-0002
192.168.0.13 ecs-0003
192.168.0.14 ecs-0004

Note: recent Kafka releases can run with a KRaft configuration and no longer need ZooKeeper.

With the Apache Kafka 3.5 release, ZooKeeper is marked as deprecated. ZooKeeper is planned for removal in the next major release, Apache Kafka 4.0, scheduled no earlier than April 2024. During the deprecation phase ZooKeeper is still supported for Kafka cluster metadata management, but it is not recommended for new deployments.
Java 8, Java 11 and Java 17 are supported. Note that Java 8 support has been deprecated since Apache Kafka 3.0 and will be removed in Apache Kafka 4.0. With TLS enabled, Java 11 and later perform noticeably better.

2. Directory layout

/data/kafka holds the Kafka files

/data/zookeeper holds the ZooKeeper files

/data/bin holds the start/stop scripts

II. Deployment steps

1. Host preparation

# Passwordless SSH between hosts
ssh-keygen

for i in {11..14};do ssh-copy-id 192.168.0.${i};done

# Install Java 11 (the openEuler package is java-11-openjdk)

for i in {11..14};do ssh 192.168.0.${i} "yum install -y java-11-openjdk";done
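To confirm the install, the major version can be parsed out of `java -version` text (which goes to stderr, hence the `2>&1` in the usage note below); the `java_major` helper is illustrative, not part of the original steps:

```shell
# Illustrative helper: extract the major version number from
# `java -version` output fed on stdin.
java_major() {
    sed -n 's/.*version "\([0-9]*\).*/\1/p' | head -1
}
```

Usage sketch: `ssh 192.168.0.11 'java -version 2>&1' | java_major` should print 11 on each host.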

# Configure /etc/hosts

# drop the existing line 3 on each host before appending the cluster entries
for i in {1..4};do ssh 192.168.0.1${i} "sed -i '3d' /etc/hosts";done

for i in {1..4};do ssh 192.168.0.1${i} "echo '192.168.0.11  ecs-0001' >> /etc/hosts";done
for i in {1..4};do ssh 192.168.0.1${i} "echo '192.168.0.12  ecs-0002' >> /etc/hosts";done
for i in {1..4};do ssh 192.168.0.1${i} "echo '192.168.0.13  ecs-0003' >> /etc/hosts";done
for i in {1..4};do ssh 192.168.0.1${i} "echo '192.168.0.14  ecs-0004' >> /etc/hosts";done
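The four append loops above are not idempotent: re-running them duplicates entries in /etc/hosts. A sketch of a quote-safe, idempotent variant (the `append_hosts` name and the file argument are illustrative; in the article the target is /etc/hosts on each remote host):

```shell
# Sketch: append the cluster entries to a hosts file only if missing,
# so the helper can be re-run safely. Function name is hypothetical.
append_hosts() {
    local file=$1
    local entry
    for entry in \
        "192.168.0.11  ecs-0001" \
        "192.168.0.12  ecs-0002" \
        "192.168.0.13  ecs-0003" \
        "192.168.0.14  ecs-0004"; do
        # match on the hostname field so re-runs do not add duplicates
        grep -q "${entry##* }" "$file" || echo "$entry" >> "$file"
    done
}
```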

  
  

2. ZooKeeper cluster installation

Installation steps

# Create the directory on every host, then download the tarballs on ecs-0001
for i in {1..4};do ssh 192.168.0.1${i} "mkdir -p /data/";done

cd /data/
wget https://mirrors.tuna.tsinghua.edu.cn/apache/kafka/3.6.1/kafka_2.13-3.6.1.tgz
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.9.1/apache-zookeeper-3.9.1-bin.tar.gz

# Extract the archives
tar -zxvf apache-zookeeper-3.9.1-bin.tar.gz -C /data/
tar -zxvf kafka_2.13-3.6.1.tgz -C /data/
mv apache-zookeeper-3.9.1-bin zookeeper
mv kafka_2.13-3.6.1 kafka

# Edit zookeeper/conf/zoo.cfg on ecs-0001 as shown in the next section
# (copy conf/zoo_sample.cfg to conf/zoo.cfg first)

# Then zip up the zookeeper directory

cd /data/
zip -r zookeeper.zip zookeeper

# Copy the archive to the other two hosts and extract it there
for i in {2..3};do scp /data/zookeeper.zip 192.168.0.1${i}:/data/;done

for i in {12..13};do ssh 192.168.0.${i} "cd /data && unzip zookeeper.zip";done
  
# Write each host's unique ZooKeeper id
for i in {1..3};do ssh 192.168.0.1${i} "mkdir -p /data/zookeeper/zkdata && echo ${i} > /data/zookeeper/zkdata/myid";done
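Each myid must be unique and match the server.N index in zoo.cfg. As a sketch, the id can also be derived from the hostname itself (helper name is illustrative, not from the original):

```shell
# Sketch: derive the ZooKeeper myid from a hostname like ecs-0003.
# Strips everything up to the last dash, then drops leading zeros.
myid_from_host() {
    local num=${1##*-}        # ecs-0003 -> 0003
    echo $((10#$num))         # force base 10 so 0003 -> 3
}
```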

The zoo.cfg settings (the lines changed or added relative to conf/zoo_sample.cfg):


clientPort=2181
dataDir=/data/zookeeper/zkdata
dataLogDir=/data/zookeeper/logs

# server.N=hostname:quorum-port:leader-election-port
server.1=ecs-0001:3188:3288
server.2=ecs-0002:3188:3288
server.3=ecs-0003:3188:3288
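The two extra ports in each server.N line are the quorum port (3188) and the leader-election port (3288). A three-node ensemble stays writable only while a strict majority of servers is up, i.e. 2 of 3 here; a small sketch of that arithmetic (helper names are illustrative):

```shell
# Sketch: count server.N entries in a zoo.cfg fed on stdin, and compute
# how many servers must be alive for quorum. Names are hypothetical.
ensemble_size() {
    grep -c '^server\.'
}
quorum_majority() {
    echo $(( $1 / 2 + 1 ))
}
```

So a 3-node ensemble tolerates one failure, and a 5-node ensemble tolerates two.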

The following batch start/stop script for ZooKeeper goes in /data/bin/zookeeper.sh:


#!/bin/bash
ZK_HOME=/data/zookeeper
case $1 in
"start") {
    for i in ecs-0001 ecs-0002 ecs-0003; do
        echo "---------- zookeeper $i start ------------"
        ssh $i "$ZK_HOME/bin/zkServer.sh start"
    done
} ;;
"stop") {
    for i in ecs-0001 ecs-0002 ecs-0003; do
        echo "---------- zookeeper $i stop ------------"
        ssh $i "$ZK_HOME/bin/zkServer.sh stop"
    done
} ;;
"status") {
    for i in ecs-0001 ecs-0002 ecs-0003; do
        echo "---------- zookeeper $i status ------------"
        ssh $i "$ZK_HOME/bin/zkServer.sh status"
    done
} ;;
*) echo "Usage: $0 {start|stop|status}" ;;
esac


3. Start ZooKeeper

# Start the ZooKeeper service

[root@ecs-0001 bin]# ./zookeeper.sh start
---------- zookeeper ecs-0001 start ------------

Authorized users only. All activities may be monitored and reported.
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper ecs-0002 start ------------

/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
---------- zookeeper ecs-0003 start ------------

/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
 
 
 
# Check the service status; ecs-0002 has become the leader
 
[root@ecs-0001 bin]# ./zookeeper.sh status
---------- zookeeper ecs-0001 status ------------

/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
---------- zookeeper ecs-0002 status ------------

/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
---------- zookeeper ecs-0003 status ------------

/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
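To check leader election programmatically rather than by eye, the Mode line can be pulled out of the `zkServer.sh status` output; a sketch (function name is illustrative):

```shell
# Sketch: extract the Mode value (leader/follower) from `zkServer.sh
# status` output fed on stdin. Function name is hypothetical.
zk_mode() {
    awk -F': ' '/^Mode: /{print $2}'
}
```

Usage sketch: `ssh ecs-0002 '/data/zookeeper/bin/zkServer.sh status' 2>/dev/null | zk_mode` should print `leader`; a healthy ensemble has exactly one leader.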
 

4. Kafka installation

Installation flow

# Edit kafka/config/kraft/server.properties on ecs-0001 as shown in the next section

# Then zip up the kafka directory

cd /data/
zip -r kafka.zip kafka

# Copy the archive to the other two hosts and extract it there
for i in {2..3};do scp /data/kafka.zip 192.168.0.1${i}:/data/;done

for i in {12..13};do ssh 192.168.0.${i} "cd /data && unzip kafka.zip";done

# Note: the other two hosts must then adjust node.id and the listener hostnames in their own copies

Kafka config changes (in config/kraft/server.properties); every host must set its own values. The shipped KRaft file already sets process.roles=broker,controller, so only the lines below need changing.

 
# node.id must be unique: 1 on ecs-0001, 2 on ecs-0002, 3 on ecs-0003
node.id=1
controller.quorum.voters=1@ecs-0001:9093,2@ecs-0002:9093,3@ecs-0003:9093
# replace ecs-0001 with the local hostname on each host
listeners=PLAINTEXT://ecs-0001:9092,CONTROLLER://ecs-0001:9093
advertised.listeners=PLAINTEXT://ecs-0001:9092
log.dirs=/data/kafka/logs
num.partitions=10
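Instead of hand-editing the file on ecs-0002 and ecs-0003, the per-host values can be rewritten with sed; a sketch (the function name and template argument are illustrative; the real target would be /data/kafka/config/kraft/server.properties on each host):

```shell
# Sketch: rewrite node.id and the listener hostnames in a copy of
# the KRaft server.properties for one node. Names are hypothetical.
render_node_cfg() {
    local id=$1 host=$2 template=$3
    sed -e "s/^node\.id=.*/node.id=${id}/" \
        -e "s#^listeners=.*#listeners=PLAINTEXT://${host}:9092,CONTROLLER://${host}:9093#" \
        -e "s#^advertised\.listeners=.*#advertised.listeners=PLAINTEXT://${host}:9092#" \
        "$template"
}
```

Usage sketch: `render_node_cfg 2 ecs-0002 server.properties > server.properties.ecs-0002`, then copy the result into place on that host.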
 

Kafka cluster initialization: generate a single cluster UUID on one node, then format every node's log directory with it

[root@ecs-0001 bin]# /data/kafka/bin/kafka-storage.sh random-uuid
xJevzljvR76d1nnM8IANMQ
 
 
[root@ecs-0001 bin]# for i in {1..3};do ssh 192.168.0.1${i} "/data/kafka/bin/kafka-storage.sh format -t xJevzljvR76d1nnM8IANMQ -c /data/kafka/config/kraft/server.properties";done
 
Formatting /data/kafka/logs with metadata.version 3.6-IV2.
 
Formatting /data/kafka/logs with metadata.version 3.6-IV2.
 
Formatting /data/kafka/logs with metadata.version 3.6-IV2.


[root@ecs-0001 bin]# cat /data/kafka/logs/meta.properties 
#
#Wed Jan 31 14:55:41 CST 2024
node.id=1
version=1
cluster.id=xJevzljvR76d1nnM8IANMQ
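Every node must end up with the same cluster.id in its meta.properties; a small sketch for pulling the value out so it can be compared across hosts (function name is illustrative):

```shell
# Sketch: read the cluster.id value from a meta.properties file.
# Function name is hypothetical.
cluster_id_of() {
    awk -F= '$1=="cluster.id"{print $2}' "$1"
}
```

Usage sketch: `for i in {1..3}; do ssh 192.168.0.1${i} 'grep ^cluster.id= /data/kafka/logs/meta.properties'; done` should print the same id three times.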

The following batch start/stop script for Kafka goes in /data/bin/kafka.sh:

#!/bin/bash

case $1 in
"start") {
    for i in ecs-0001 ecs-0002 ecs-0003; do
        echo "---------- kafka $i start ------------"
        # -daemon detaches the broker so the ssh command returns;
        # the KRaft config matches the storage-format step above
        ssh $i "/data/kafka/bin/kafka-server-start.sh -daemon /data/kafka/config/kraft/server.properties"
    done
} ;;
"stop") {
    for i in ecs-0001 ecs-0002 ecs-0003; do
        echo "---------- kafka $i stop ------------"
        ssh $i "/data/kafka/bin/kafka-server-stop.sh"
    done
} ;;
*) echo "Usage: $0 {start|stop}" ;;
esac
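After `kafka.sh start`, the brokers come up asynchronously, so a short wait before running the tests below avoids spurious connection errors. A sketch that polls a TCP port using bash's /dev/tcp (function name is illustrative; requires bash, not plain POSIX sh):

```shell
# Sketch: poll until host:port accepts TCP connections, or give up
# after a number of one-second tries. Function name is hypothetical.
wait_for_port() {
    local host=$1 port=$2 tries=${3:-30} n=0
    # the subshell opens and closes fd 3; failure means "not up yet"
    until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
        n=$((n+1))
        [ "$n" -ge "$tries" ] && return 1
        sleep 1
    done
    return 0
}
```

Usage sketch: `for h in ecs-0001 ecs-0002 ecs-0003; do wait_for_port "$h" 9092 && echo "$h up"; done`.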

5. Kafka functional test

# Create a test topic:

[root@ecs-0001 kafka]# ./bin/kafka-topics.sh --create --bootstrap-server ecs-0001:9092 --replication-factor 1 --partitions 1 --topic tp1

Created topic tp1.

# List all topics

[root@ecs-0001 kafka]# ./bin/kafka-topics.sh --list --bootstrap-server ecs-0003:9092
tp1

# Produce some test messages

[root@ecs-0001 kafka]# ./bin/kafka-console-producer.sh --bootstrap-server ecs-0001:9092 --topic tp1
>测试数据
>ceshi1

# Start a consumer (reading from the beginning to see the messages produced above):

[root@ecs-0001 kafka]# ./bin/kafka-console-consumer.sh --bootstrap-server ecs-0002:9092 --topic tp1 --from-beginning

测试数据
ceshi1


# The Kafka cluster is now up and working


[Copyright notice] This article is original content by a Huawei Cloud community user. Reposts must credit the source (Huawei Cloud community) along with the article link and author; otherwise the author and the community reserve the right to pursue liability.