Detailed guide: containerized ELK deployment + log collection with Flume

docker pull elasticsearch:7.13.2
mkdir -p /data/elk/es/{data,config,plugins}
echo "http.host: 0.0.0.0" > /data/elk/es/config/elasticsearch.yml
chmod -R 777 /data/elk/es  (optional)
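The directory and config setup above can be sketched safely in a throwaway location (the temp path is an assumption for illustration; substitute /data/elk/es for real use):

```shell
# Safe sketch of the setup above, using a temp dir instead of /data/elk/es.
base=$(mktemp -d)
mkdir -p "$base/es/data" "$base/es/config" "$base/es/plugins"
# Quote the YAML line so the shell passes it through intact.
echo "http.host: 0.0.0.0" > "$base/es/config/elasticsearch.yml"
cat "$base/es/config/elasticsearch.yml"
```

In the docker run command that follows, these three directories are bind-mounted into the container.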
docker run --name es -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms500m -Xmx500m" \
-v /data/elk/es/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /data/elk/es/data:/usr/share/elasticsearch/data \
-v /data/elk/es/plugins:/usr/share/elasticsearch/plugins -d elasticsearch:7.13.2
Explanation:
-e "discovery.type=single-node": run as a single-node cluster
-e ES_JAVA_OPTS="-Xms500m -Xmx500m": JVM heap size
Once the container is up, access Elasticsearch via a local hosts entry or the IP address; on a cloud host, remember to open port 9200 in the security group.

Deploying Kibana

The docker0 bridge connects the host and container networks.
Everything here runs on the same host, so later containers can reach Elasticsearch through the docker0 gateway address 172.17.0.1 plus the port.
docker run -p 5601:5601 -d -e ELASTICSEARCH_URL=http://172.17.0.1:9200 \
-e ELASTICSEARCH_HOSTS=http://172.17.0.1:9200 kibana:7.13.2
The page then loads via IP + port (open port 5601 on a cloud host). Note: the Kibana version must match Elasticsearch (7.13.2 here); a mismatched Kibana will refuse to connect.

Setting up Kafka

Prerequisite: ZooKeeper
docker run -d --name zookeeper -p 2181:2181 -v /etc/localtime:/etc/localtime zookeeper
Deploy Kafka:
docker run  -d --name kafka -p 9092:9092 -e KAFKA_BROKER_ID=0  \
-e KAFKA_ZOOKEEPER_CONNECT=172.17.0.1:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://172.17.0.1:9092 \
-e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 -t wurstmeister/kafka

# KAFKA_BROKER_ID: the Kafka node ID; must be unique when running a cluster
# KAFKA_ZOOKEEPER_CONNECT: the ZooKeeper address that manages Kafka (internal IP)
# KAFKA_ADVERTISED_LISTENERS: the address and port Kafka registers in ZooKeeper and advertises to clients
# KAFKA_LISTENERS: the address and port Kafka listens on
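One caveat worth noting (an assumption beyond the original single-host setup): 172.17.0.1 is only reachable from the Docker host itself. If clients on other machines need to reach Kafka, the advertised listener must be an address they can resolve, e.g.:

```
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<host-public-ip>:9092
```

where `<host-public-ip>` is a placeholder for the host's externally reachable address.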

Use Kafka Tool to verify the connection works. Download: https://www.kafkatool.com/download.html

Deploying Logstash

mkdir /data/elk/logstash
This directory holds the two config files: logstash.yml and logstash.conf.
[root@mayi-2 elk]# cat  /data/elk/logstash/logstash.yml
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://172.17.0.1:9200" ]

[root@mayi-2 elk]# cat  /data/elk/logstash/logstash.conf
input {
  kafka {
    topics => "logkafka"
    bootstrap_servers => "172.17.0.1:9092"
    codec => "json"
  }
}
output {
  elasticsearch {
    hosts => ["172.17.0.1:9200"]
    index => "logkafka"
    #user => "elastic"
    #password => "changeme"
  }
}
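A common variation (an assumption, not part of the original setup) is to write one index per day, so old logs can be deleted by date; only the `index` line in the output block changes:

```
output {
  elasticsearch {
    hosts => ["172.17.0.1:9200"]
    index => "logkafka-%{+YYYY.MM.dd}"
  }
}
```

The index pattern in Kibana would then be `logkafka-*`.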
Start the container, mounting the local config files into it:
docker run --privileged=true -p 9600:9600 -d \
-v /data/elk/logstash/logstash.conf:/usr/share/logstash/pipeline/logstash.conf \
-v /data/elk/logstash/log/:/home/public/ \
-v /data/elk/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml logstash:7.13.2
The Logstash API on port 9600 now responds.
Testing via Kafka:
Enter the container:
docker exec -it kafka /bin/bash
Locate the Kafka CLI scripts:
root@30085794fa39:/# find ./* -name 'kafka-console-producer.sh'
./opt/kafka_2.13-2.8.1/bin/kafka-console-producer.sh
Write test data to the topic:
/opt/kafka_2.13-2.8.1/bin/kafka-console-producer.sh --broker-list 172.17.0.1:9092 --topic logkafka
Type a few test messages at the prompt.
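Because the Logstash input sets codec => "json", messages written to the topic should be JSON objects (non-JSON lines get tagged with a parse failure). A sketch of generating suitable test lines; the field names are illustrative assumptions:

```shell
# Generate sample JSON log messages into a temp file.
msgs=$(mktemp)
for i in 1 2 3 4; do
  printf '{"app":"demo","level":"INFO","msg":"test message %s"}\n' "$i"
done > "$msgs"
cat "$msgs"
# With the kafka container running, these could be piped into the producer:
#   kafka-console-producer.sh --broker-list 172.17.0.1:9092 --topic logkafka < "$msgs"
```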

Under Stack Management → Index Management you can see the index for this topic and the 4 documents that were produced.
Create an index pattern
The index pattern must match the index name (same as the topic): log*, logka*, or logkafka all work.
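The wildcard rule above can be sanity-checked with plain shell globbing, which treats these simple `*` patterns the same way Kibana index patterns do:

```shell
# Check which patterns match the index name "logkafka".
topic=logkafka
match_count=0
for pat in 'log*' 'logka*' 'logkafka' 'logstash*'; do
  case "$topic" in
    $pat) echo "'$pat' matches '$topic'"; match_count=$((match_count + 1)) ;;
    *)    echo "'$pat' does not match '$topic'" ;;
  esac
done
```

The first three patterns match; `logstash*` does not.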

Select the index's time field.
In Discover you can now see the data that was just written.

Deploying Flume:

Create the corresponding config files.

Directory layout:
mkdir -p /data/elk/flume/{logs,conf,flume_log}
Config file:
[root@mayi-2 elk]# cat flume/conf/los-flume-kakfa.conf
app.sources = r1
app.channels = c1
# Describe/configure the source
app.sources.r1.type = exec
app.sources.r1.command = tail -F /tmp/test_logs/app.log
app.sources.r1.channels = c1
# Use a channel that buffers events in Kafka
# Channel type
app.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
# Kafka brokers, comma-separated for a cluster
app.channels.c1.kafka.bootstrap.servers = 172.17.0.1:9092
# Kafka topic the channel writes to
app.channels.c1.kafka.topic = logkafka
# Do not parse messages as Flume events, since the topic may also receive non-Flume data
app.channels.c1.parseAsFlumeEvent = false
# Consumer group, so consumption resumes from the last committed offset
app.channels.c1.kafka.consumer.group.id = logkafka-consumer-group
# poll() timeout in ms
app.channels.c1.pollTimeout = 1000
Start the container (FLUME_AGENT_NAME here is only the image default; the agent is started manually below with -n app):
docker run --name flume --net=host \
-v /data/elk/flume/conf:/opt/flume-config/flume.conf \
-v /data/elk/flume/flume_log:/var/tmp/flume_log \
-v /data/elk/flume/logs:/opt/flume/logs \
-v /tmp/test_logs/:/tmp/test_logs/ \
-e FLUME_AGENT_NAME="agent" \
-d docker.io/probablyfine/flume:latest

Enter the container and start Flume:

docker exec -it flume  bash
cd /opt/flume/bin/
nohup ./flume-ng agent -c /opt/flume/conf -f /opt/flume-config/flume.conf/los-flume-kakfa.conf -n app &
Append a few test lines to /tmp/test_logs/app.log.
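Since the exec source just tails /tmp/test_logs/app.log, writing lines to that file drives the whole pipeline. A safe sketch using a temp directory (substitute /tmp/test_logs/app.log for real use; the JSON field names are illustrative assumptions):

```shell
# Append sample lines of the kind the tail -F source would pick up.
logdir=$(mktemp -d)
logfile="$logdir/app.log"
for i in 1 2 3; do
  printf '{"source":"flume-test","line":%s,"msg":"hello from app.log"}\n' "$i" >> "$logfile"
done
tail -n 2 "$logfile"
```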

Kibana now shows the collected log lines.