
Setting Up a Kafka Cluster
Preparing the server environment
Use VMs to create three Linux hosts:
192.168.212.174
192.168.212.175
192.168.212.176
Setting up the ZooKeeper cluster
1. Install JDK 1.8 on every server node
Verify the installation with the java -version command
2. Install ZooKeeper on every server node
1. Download the ZooKeeper package:
wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
2. Extract it:
tar -zxvf zookeeper-3.4.10.tar.gz
3. Rename the directory:
mv zookeeper-3.4.10 zookeeper
3. Configure the ZooKeeper cluster
Rename and edit the zoo_sample.cfg file:
cd /usr/local/zookeeper/conf
mv zoo_sample.cfg zoo.cfg
vi zoo.cfg
Make two changes:
(1) dataDir=/usr/local/zookeeper/data (also create the data directory under /usr/local/zookeeper)
(2) append the following at the end of the file:
server.0=192.168.212.174:2888:3888
server.1=192.168.212.175:2888:3888
server.2=192.168.212.176:2888:3888
4. Create the server ID
Create the data folder: mkdir /usr/local/zookeeper/data
Create the myid file inside it and set its content to 0: vi myid (the content is this server's ID: 0)
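The myid step above can also be scripted. A minimal sketch, using a temporary directory in place of /usr/local/zookeeper/data so it can run anywhere (on a real node, write into the actual data directory instead):

```shell
# Stand-in for /usr/local/zookeeper/data (temp dir, for illustration only)
DATA_DIR=$(mktemp -d)
# Write this node's server ID into myid (0 here; use 1 and 2 on the other nodes)
echo 0 > "$DATA_DIR/myid"
cat "$DATA_DIR/myid"   # prints 0
```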
5. Copy ZooKeeper to the other nodes
Copy the zookeeper directory and the /etc/profile file to hadoop01 and hadoop02, then change the value in their myid files to 1 and 2 respectively (path: vi /usr/local/zookeeper/data/myid).
Disable the firewall on every server node: systemctl stop firewalld.service
Start ZooKeeper
Path: /usr/local/zookeeper/bin
Run: zkServer.sh start (start it on all three machines)
Check status: zkServer.sh status (verify the mode on all three nodes: one leader and two followers)
Setting up the Kafka cluster
Perform the following on all three VMs:
# Download, extract, and rename the Kafka package
cd /usr/local
wget http://mirror.bit.edu.cn/apache/kafka/1.0.0/kafka_2.11-1.0.0.tgz
tar -zxvf kafka_2.11-1.0.0.tgz
mv kafka_2.11-1.0.0 kafka
# Edit the configuration file
vi ./kafka/config/server.properties
|
需要修改的內(nèi)容如下(192.168.212.169)
|
broker.id=0
listeners=PLAINTEXT://192.168.131.130:9092
zookeeper.connect=192.168.131.130:2181,192.168.131.131:2181,192.168.131.132:2181
|
需要修改的內(nèi)容如下(192.168.212.170)
|
broker.id=1
listeners=PLAINTEXT://192.168.212.170:9092
zookeeper.connect=192.168.131.130:2181,192.168.131.131:2181,192.168.131.132:2181
|
需要修改的內(nèi)容如下(192.168.212.171)
|
broker.id=2
listeners=PLAINTEXT://192.168.212.171:9092
zookeeper.connect=192.168.131.130:2181,192.168.131.131:2181,192.168.131.132:2181
|
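Only broker.id and listeners differ between the three files; that per-host pair can be derived from a node's position in the cluster host list. A minimal sketch, assuming the three IPs above (set MY_IP to the current node's address before running it on each host):

```shell
# Ordered cluster hosts; a node's broker.id is its index in this list
HOSTS="192.168.212.174 192.168.212.175 192.168.212.176"
MY_IP=192.168.212.174   # set to this node's own IP
ID=0
for h in $HOSTS; do
  [ "$h" = "$MY_IP" ] && break
  ID=$((ID + 1))
done
# The two per-host lines for server.properties
echo "broker.id=$ID"
echo "listeners=PLAINTEXT://$MY_IP:9092"
```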
# Configure the Kafka path in the system environment
vi /etc/profile
# Append the following at the end of the file
export KAFKA_HOME=/usr/local/kafka
# With multiple entries, PATH is written as PATH=${KAFKA_HOME}/bin:$PATH
PATH=${KAFKA_HOME}/bin:$PATH
export PATH
# Reload the profile so the modified environment variables take effect
source /etc/profile
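A quick way to confirm the PATH change took effect. This sketch repeats the two profile lines so it is self-contained; on a real node, source /etc/profile has already done that:

```shell
# Repeat the profile additions so this snippet runs on its own
export KAFKA_HOME=/usr/local/kafka
PATH=${KAFKA_HOME}/bin:$PATH
export PATH
# The first PATH entry should now be Kafka's bin directory
echo "$PATH" | cut -d: -f1   # prints /usr/local/kafka/bin
```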
Testing the Kafka cluster
1. Start the ZooKeeper process on all three VMs:
/usr/local/zookeeper/bin/zkServer.sh start
After startup, check the cluster status:
/usr/local/zookeeper/bin/zkServer.sh status
Mode: follower or Mode: leader indicates success.
2. Start the Kafka process in the background on all three VMs (cd /usr/local/kafka):
./bin/kafka-server-start.sh -daemon config/server.properties
3. Create a topic on one of the VMs (192.168.212.174):
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.212.174:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
# View the created topic's details
/usr/local/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.212.174:2181 --topic my-replicated-topic
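To verify end-to-end message flow, the console producer and consumer that ship with Kafka can be used. A sketch, assuming the topic created above and the brokers reachable at the addresses configured earlier (these commands need the live cluster, so run them on the nodes, with the consumer in a separate terminal):

```shell
# On one node: start a console consumer on the test topic (the --zookeeper
# flag matches the Kafka 1.0 / ZooKeeper setup above)
/usr/local/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.212.174:2181 \
  --topic my-replicated-topic --from-beginning

# On another node: start a console producer and type a few messages;
# they should appear in the consumer's terminal
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list 192.168.212.174:9092 \
  --topic my-replicated-topic
```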
Integrating Kafka with Spring Boot
```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@SpringBootApplication
public class KafkaController {

    /**
     * Inject the KafkaTemplate
     */
    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /**
     * Send a message
     *
     * @param key  the key of the record to push
     * @param data the payload of the record to push
     */
    private void send(String key, String data) {
        // topic name, record key, record payload
        kafkaTemplate.send("my_test", key, data);
    }

    @RequestMapping("/kafka")
    public String testKafka() {
        int iMax = 6;
        for (int i = 1; i < iMax; i++) {
            send("key" + i, "data" + i);
        }
        return "success";
    }

    public static void main(String[] args) {
        SpringApplication.run(KafkaController.class, args);
    }

    /**
     * Consumer: log each received message
     */
    @KafkaListener(topics = "my_test")
    public void receive(ConsumerRecord<?, ?> consumer) {
        System.out.println("topic: " + consumer.topic() + ", key: " + consumer.key()
                + ", partition: " + consumer.partition() + ", offset: " + consumer.offset());
    }
}
```
application.yml (in src/main/resources):
```yaml
spring:
  kafka:
    # Kafka broker addresses (one or more)
    bootstrap-servers: 192.168.212.174:9092,192.168.212.175:9092,192.168.212.176:9092
    consumer:
      # Default consumer group id
      group-id: kafka2
      # earliest: if a partition has a committed offset, resume from it; otherwise consume from the beginning
      # latest: if a partition has a committed offset, resume from it; otherwise consume only newly produced records
      # none: if every partition has a committed offset, resume from it; if any partition lacks one, throw an exception
      auto-offset-reset: earliest
      # key/value deserializers
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    producer:
      # key/value serializers
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      # Batch size in bytes
      batch-size: 65536
      # Producer buffer memory in bytes
      buffer-memory: 524288
      # Broker addresses
      bootstrap-servers: 192.168.212.174:9092,192.168.212.175:9092,192.168.212.176:9092
```