1. Install the Docker environment
http://www.itdecent.cn/p/bf2735f9f4d0
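If you would rather not follow the article above, a minimal sketch for installing Docker on a typical Linux VM with internet access (get.docker.com is Docker's official convenience install script):
curl -fsSL https://get.docker.com | sh
systemctl start docker
systemctl enable docker   ## start Docker automatically on boot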
2. Install docker-compose (container orchestration tool)
1) Download the required version from GitHub:
https://github.com/docker/compose/releases/tag/v2.0.1
2) Upload the docker-compose binary to the /usr/local/bin/ directory and make it executable
sudo chmod +x /usr/local/bin/docker-compose (grants execute permission to the docker-compose binary)
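As an alternative to uploading the file by hand, it can be downloaded straight into place. A sketch assuming a Linux x86_64 VM; check the exact asset name on the v2.0.1 release page first, since the naming differs between compose versions:
sudo curl -L https://github.com/docker/compose/releases/download/v2.0.1/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose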

3) Verify the installation
docker-compose -v

3. Configure the docker-compose.yml file (it automatically pulls and runs the ZooKeeper, Kafka, kafka-manager, Elasticsearch and Kibana containers)
Replace the IP addresses in the yml file with your VM's actual IP address:
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper          ## image
    ports:
      - "2181:2181"                        ## port exposed to the host
  kafka:
    image: wurstmeister/kafka              ## image
    volumes:
      - /etc/localtime:/etc/localtime      ## mount so the kafka container stays in sync with the host clock
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.100.200    ## change to the host IP
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181        ## kafka runs on top of zookeeper
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_LOG_RETENTION_HOURS: 120
      KAFKA_MESSAGE_MAX_BYTES: 10000000
      KAFKA_REPLICA_FETCH_MAX_BYTES: 10000000
      KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS: 60000
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_DELETE_RETENTION_MS: 1000
  kafka-manager:
    image: sheepkiller/kafka-manager       ## image: an open-source web UI for managing kafka clusters
    environment:
      ZK_HOSTS: 192.168.100.200            ## change to the host IP
    ports:
      - "9001:9000"                        ## exposed port
  elasticsearch:
    image: daocloud.io/library/elasticsearch:6.5.4
    restart: always
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"   ## limit the Elasticsearch heap to 512 MB
    ports:
      - 9200:9200
  kibana:
    image: daocloud.io/library/kibana:6.5.4
    restart: always
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_URL=http://192.168.100.200:9200   ## change to the host IP; the variable name must be upper case for the Kibana image to pick it up
    depends_on:
      - elasticsearch
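YAML is indentation-sensitive, so it is worth validating the file before starting anything. docker-compose can parse and print the resolved configuration (run it from the directory that holds docker-compose.yml):
docker-compose config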
4. Create a new directory under /usr (I created dcf here) and upload the docker-compose.yml file into it

5. Adjust the VM memory settings and restart the VM
1) Set vm.max_map_count (Elasticsearch requires at least 262144)
Add the following line at the end of the /etc/sysctl.conf file:
vm.max_map_count=262144
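To apply the change immediately instead of waiting for the reboot, you can also reload sysctl and check the value (run as root):
sysctl -p
sysctl vm.max_map_count   ## should print vm.max_map_count = 262144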
2) Turn off the VM firewall (since the VM is rebooted in the next step, disable it as well so it stays off)
systemctl stop firewalld.service
systemctl disable firewalld.service
3) Restart the VM
4) Restart Docker
service docker restart
6. Go to the /usr/dcf directory and bring up the stack defined in docker-compose.yml
docker-compose up
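docker-compose up runs in the foreground and streams every container's logs. Once the stack looks healthy, you can run it detached and check the container status instead (same directory as the yml file):
docker-compose up -d
docker-compose ps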

7. Check the results
If the two pages below open successfully, Kibana, Elasticsearch and ZooKeeper are running correctly:
http://192.168.100.200:9200/

http://192.168.100.200:5601/app/kibana#/dev_tools/console?_g=()
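The services can also be checked from the command line. The curl call hits Elasticsearch directly; the second command lists Kafka topics via the service name (kafka-topics.sh ships with the wurstmeister/kafka image; run this from /usr/dcf, and the topic list may still be empty at this point):
curl http://192.168.100.200:9200/
docker-compose exec kafka kafka-topics.sh --bootstrap-server localhost:9092 --list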


8. Install Logstash
1) Upload the Logstash archive to the /usr/dcf directory and extract it

2) Go to Logstash's config directory and edit the logstash-sample.conf configuration file:

input {
  kafka {
    bootstrap_servers => "192.168.100.200:9092"
    topics => "jyb-log"
  }
}
filter {
  # Only matched data are sent to output.
}
output {
  elasticsearch {
    action => "index"
    hosts => "192.168.100.200:9200"
    index => "jyb_logs"
  }
}
3) Install the JDK (skip this step if it is already installed)
http://www.itdecent.cn/p/69883925350c
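If the JDK is not installed yet, a minimal sketch for a CentOS-style VM (the package name is an assumption; use whatever JDK 8 package your distribution provides):
yum install -y java-1.8.0-openjdk
java -version   ## confirm the JDK is on the PATH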
4) Connect Logstash to Kafka and Elasticsearch (install the input/output plugins)
Run bin/logstash-plugin install logstash-input-kafka to install the Kafka input plugin

Run bin/logstash-plugin install logstash-output-elasticsearch to install the Elasticsearch output plugin
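You can confirm both plugins are present afterwards (logstash-plugin list is part of the standard Logstash distribution):
bin/logstash-plugin list | grep -E 'kafka|elasticsearch'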

5) Go to the bin directory and start Logstash with the edited config
Run ./logstash -f ../config/logstash-sample.conf
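To test the whole pipeline end to end, push a test message into the jyb-log topic from a second terminal and then search the jyb_logs index in Elasticsearch. A sketch: kafka-console-producer.sh ships with the Kafka image, and every line typed is sent as one message (Ctrl+C to exit the producer):
cd /usr/dcf && docker-compose exec kafka kafka-console-producer.sh --broker-list localhost:9092 --topic jyb-log
curl "http://192.168.100.200:9200/jyb_logs/_search?pretty"   ## the test message should appear in the hits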

9. VM configuration requirements
For the ELK + Kafka stack, the minimum configuration is 6 GB of RAM, 4 CPU cores and 30 GB of disk space; the environment will not run properly below this.
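A quick way to confirm the VM meets this before starting (standard Linux commands):
free -h    ## total memory
nproc      ## CPU core count
df -h /    ## available disk space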

10. Download links for the installation packages
If the Baidu Cloud share link below has expired, please search for the installation packages yourself
鏈接:https://pan.baidu.com/s/1dgdpVigA876Rdg41Z-Iz8g
Extraction code: 3535