Filebeat + Kafka + ELK: a log collection setup in practice
[TOC]
0. Component versions used
elasticsearch-5.6.11
filebeat-6.4.2
kafka_2.12-1.1.1
kibana-5.6.11
logstash-6.4.2
zookeeper-3.4.10
Notes:
All servers already have JDK 1.8 installed.
The Elasticsearch version must match the Kibana version.
Filebeat is sensitive to the Kafka version; a mismatch will keep logs from being collected correctly.
Do not install Elasticsearch and Logstash on the same server, since both are memory-hungry.
1. Install Elasticsearch
Download the matching release:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.6.11.tar.gz
tar -xvf elasticsearch-5.6.11.tar.gz
Enter the directory and start Elasticsearch:
cd elasticsearch-5.6.11/bin
./elasticsearch
If you start it as root, it fails with:
Exception in thread "main" java.lang.RuntimeException: don't run elasticsearch as root.
at org.elasticsearch.bootstrap.Bootstrap.initializeNatives(Bootstrap.java:93)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:144)
at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:285)
at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
The message is clear: for security reasons Elasticsearch refuses to run as root, so we create a dedicated user to start it.
Create a group elsearch and a user elsearch inside it:
groupadd elsearch
useradd elsearch -g elsearch
Set a password for user elsearch:
passwd elsearch
Change the ownership of the installation directory so the new user can access it:
chown -R elsearch:elsearch elasticsearch-5.6.11
Switch to user elsearch (or log in as elsearch) and start Elasticsearch:
su elsearch
cd elasticsearch-5.6.11/bin
./elasticsearch
To run Elasticsearch in the background instead:
./bin/elasticsearch -d
Check that Elasticsearch is up; a JSON response means the installation succeeded:
curl http://127.0.0.1:9200
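On success, the curl above returns a small JSON document similar to the following sketch (node name, cluster name, and other details will differ per installation):

```json
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "5.6.11"
  },
  "tagline" : "You Know, for Search"
}
```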
2. Install Filebeat
Filebeat is the log shipper that reads the application log files.
Download the matching release:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-linux-x86_64.tar.gz
tar -zxvf filebeat-6.4.2-linux-x86_64.tar.gz
Enter the directory and edit the configuration file filebeat.yml:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/logs/xiezhu/*/*.log   # the * wildcard matches each subdirectory
  fields:
    log_topics: log
  multiline:                      # merge multi-line entries into one event
    pattern: '^\['
    negate: true
    match: after
output.kafka:                     # ship events to Kafka
  enabled: true
  hosts: ["192.168.1.182:9092"]
  topic: '%{[fields][log_topics]}'  # topic name taken from the field above
# setting name makes this shipper identifiable in the collected logs
name: 192.168.1.126-Ares-server
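The multiline settings merge continuation lines (for example Java stack traces) into one event: with `negate: true` and `match: after`, a line that matches `pattern: '^\['` starts a new event, and every non-matching line is appended to the previous event. A minimal Python sketch of that grouping logic (the sample log lines are made up):

```python
import re

def merge_multiline(lines, pattern=r'^\[', negate=True):
    # Filebeat semantics for negate: true, match: after -- a line matching
    # the pattern starts a new event; non-matching lines are appended to it.
    prog = re.compile(pattern)
    events = []
    for line in lines:
        starts_new = bool(prog.match(line)) if negate else not prog.match(line)
        if starts_new or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events

log = [
    "[2018-10-30 12:00:00] ERROR something failed",
    "java.lang.NullPointerException",
    "    at com.example.Main.run(Main.java:10)",
    "[2018-10-30 12:00:01] INFO recovered",
]
print(len(merge_multiline(log)))  # -> 2 (the stack trace is folded into event 1)
```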
Start Filebeat:
nohup ./filebeat &
Tail its output log:
tail -f logs/filebeat
3. Install Kafka
Download the matching release:
wget http://mirrors.shu.edu.cn/apache/kafka/1.1.1/kafka_2.12-1.1.1.tgz
tar -zxvf kafka_2.12-1.1.1.tgz
Enter the directory and edit the configuration file config/server.properties:
broker.id=0
listeners=PLAINTEXT://192.168.1.182:9092 # listener address and port
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/soft/kafka/kafka-logs # Kafka data directory
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.1.160:2181 # ZooKeeper address
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
Start Kafka (ZooKeeper, set up in the next section, must already be running):
nohup ./bin/kafka-server-start.sh config/server.properties &
Verify the installation
1. Create a topic
bin/kafka-topics.sh --create --zookeeper 192.168.1.160:2181 --replication-factor 1 --partitions 1 --topic test
2. List the topics that exist
bin/kafka-topics.sh --list --zookeeper 192.168.1.160:2181
test
3. Produce a test message
bin/kafka-console-producer.sh --broker-list 192.168.1.182:9092 --topic test
this is test # type the message and press Enter
4. Consume the test message
bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.182:9092 --topic test --from-beginning
this is test
4. Install ZooKeeper
Download the matching release:
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
tar -zxvf zookeeper-3.4.10.tar.gz
Enter the directory and edit the configuration file conf/zoo.cfg (copy conf/zoo_sample.cfg to conf/zoo.cfg if it does not exist yet):
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/soft/zookeeper/zookeeper-3.4.10/data
# the port at which the clients will connect
clientPort=2181
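initLimit and syncLimit are measured in ticks, so the effective timeouts are their product with tickTime. Checking the values above:

```python
# Values copied from zoo.cfg above.
tick_ms = 2000
init_limit = 10
sync_limit = 5

print(tick_ms * init_limit)  # -> 20000 (ms allowed for a follower's initial sync)
print(tick_ms * sync_limit)  # -> 10000 (ms a follower may lag before being dropped)
```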
Start ZooKeeper:
./bin/zkServer.sh start
5. Install Logstash
Download the matching release:
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.4.2.tar.gz
tar -zxvf logstash-6.4.2.tar.gz
Enter the directory and create the configuration file config/logstash.conf:
input {
  kafka {
    bootstrap_servers => "192.168.1.182:9092"
    topics => ["log"]
    codec => json {
      charset => "UTF-8"
    }
  }
  # additional inputs can simply be appended here
}
filter {
  # parse the message string as JSON
  if [type] == "log" {
    json {
      source => "message"
      target => "message"
    }
  }
}
output {
  # write the processed events to Elasticsearch
  elasticsearch {
    hosts => "192.168.1.181:9200"
    index => "logstash-%{+YYYY.MM.dd}"  # one index per day
  }
}
Start Logstash:
nohup ./bin/logstash -f config/logstash.conf &
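A minimal Python sketch of what the json filter above does: it parses the string held in the message field and replaces it with the resulting object (the sample event is invented for illustration):

```python
import json

# A made-up event as it might arrive from Kafka.
event = {"type": "log", "message": '{"level": "ERROR", "msg": "timeout"}'}

# Equivalent of: if [type] == "log" { json { source => "message" target => "message" } }
if event.get("type") == "log":
    event["message"] = json.loads(event["message"])

print(event["message"]["level"])  # -> ERROR
```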
6. Install Kibana
Download the matching release:
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.6.11-linux-x86_64.tar.gz
tar -zxvf kibana-5.6.11-linux-x86_64.tar.gz
Enter the directory and adjust config/kibana.yml:
server.port: 5601
server.host: "localhost"
server.name: "MM-LOG"
elasticsearch.url: "http://192.168.1.181:9200" # Elasticsearch address
kibana.index: ".kibana"
Start/stop commands
# start
./kibana
# stop
fuser -n tcp 5601 # find the PID of the process listening on the port
kill -9 <PID> # kill the PID reported by the previous command
With all of the above in place, the whole pipeline runs end to end.
7. Kibana tweaks
7.1 Add login authentication
# 1. install nginx
sudo apt-get install nginx
# 2. install the Apache password utilities
sudo apt-get install apache2-utils
# 3. generate the password file
mkdir -p /etc/nginx/passwd
htpasswd -c -b /etc/nginx/passwd/kibana.passwd <user> <password>
# 4. configure nginx
# /etc/nginx/conf.d/default
server {
    listen 192.168.1.182:5601;
    auth_basic "Kibana Auth";
    auth_basic_user_file /etc/nginx/passwd/kibana.passwd;
    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_redirect off;
    }
}
# 5. restart Kibana
# 6. restart nginx
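Once nginx challenges the browser, every request carries an Authorization header holding base64(user:password). A small Python illustration of that header, with made-up credentials:

```python
import base64

# Hypothetical credentials for illustration only.
user, password = "admin", "secret"

# HTTP Basic auth: the header value is "Basic " + base64 of "user:password".
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)  # -> Authorization: Basic YWRtaW46c2VjcmV0
```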
7.2 Chinese localization
Skip this if you are comfortable with the English UI.
https://github.com/anbai-inc/Kibana_Hanization
A Python runtime is required:
python main.py <Kibana directory>
8. Overall architecture
The pipeline collects and analyses logs without intruding on the applications, and each tier can be scaled out independently.