1. Background
- In a previous tutorial we already built an ELK + Filebeat + Redis log platform; see the earlier guide on building a log collection and analysis platform with ElasticSearch + Logstash + Kibana + Redis + Filebeat.
- In real-world deployments, however, Filebeat almost always needs to collect more than one log file. On a single server, nginx alone produces both access.log and error.log, and you may also need to collect Tomcat logs and so on. Filebeat therefore has to be configured with multiple inputs.
- Here I demonstrate collecting nginx's access.log and error.log at the same time and analyzing them in Kibana. Only the configuration files are shown; for installation steps, refer to the earlier ElasticSearch + Logstash + Kibana + Redis + Filebeat guide.
2. Configure Filebeat
- Edit filebeat.yml
vi /etc/filebeat/filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access-json.log   # path of the log file to read
  tags: ["nginx-access"]
- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log         # path of the log file to read
  tags: ["nginx-error"]
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Redis output ------------------------------
output.redis:
  hosts: ["192.168.1.110:6379"]   # the Redis host to ship events to
  password: "123456"
  key: "filebeat:test16"          # the Redis key (a list) that log events are pushed onto
  db: 0
  timeout: 5
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
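With this configuration, Filebeat pushes each log line onto the Redis list as a JSON document, with the `tags` list attached so Logstash can tell the two files apart. The sketch below shows the rough shape of such an event and the routing check Logstash later performs; the field values are illustrative samples, not captured output.

```python
import json

# A trimmed, illustrative example of the JSON document Filebeat pushes
# onto the "filebeat:test16" Redis list. Real events carry more
# metadata (host, agent, input, offset, ...).
sample_event = '''
{
  "@timestamp": "2019-06-01T12:00:00.000Z",
  "message": "{\\"remote_addr\\": \\"1.2.3.4\\", \\"status\\": 200}",
  "tags": ["nginx-access"],
  "log": {"file": {"path": "/var/log/nginx/access-json.log"}}
}
'''

event = json.loads(sample_event)

# Logstash routes on this tags list: "nginx-access" events get JSON-parsed,
# "nginx-error" events go through the grok filter instead.
assert "nginx-access" in event["tags"]
print(event["log"]["file"]["path"])
```

Because each input carries its own `tags` entry, a single Redis key can safely multiplex both log files.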
3. Configure Logstash
Step 1: Edit your own conf file. Since this continues from the previous tutorial, the file name here is nginx16-access.conf
vi /etc/logstash/nginx16-access.conf
input {
  redis {
    data_type => "list"
    key => "filebeat:test16"
    host => "192.168.1.110"
    port => 6379
    password => "123456"
    threads => "8"
    db => 0
    #codec => json
  }
}
filter {
  # Before parsing the message as JSON, use mutate/gsub to escape "\x"
  # sequences, preventing: ParserError: Unrecognized character escape 'x' (code 120)
  mutate {
    gsub => ["message", "\\x", "\\\x"]
  }
  if "nginx-access" in [tags] {
    json {
      source => "message"
      remove_field => ["beat","message"]
    }
  } else if "nginx-error" in [tags] {
    grok {
      match => [
        "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}(%{NUMBER:pid:int}#%{NUMBER}:\s{1,}\*%{NUMBER}|\*%{NUMBER}) %{DATA:err_message}(?:,\s{1,}client:\s{1,}(?<client_ip>%{IP}|%{HOSTNAME}))(?:,\s{1,}server:\s{1,}%{IPORHOST:server})(?:, request: %{QS:request})?(?:, host: %{QS:client_ip})?(?:, referrer: \"%{URI:referrer})?",
        "message", "(?<time>\d{4}/\d{2}/\d{2}\s{1,}\d{2}:\d{2}:\d{2})\s{1,}\[%{DATA:err_severity}\]\s{1,}%{GREEDYDATA:err_message}"]
    }
    date {
      match => ["time", "yyyy/MM/dd HH:mm:ss"]
      target => "logdate"
    }
    ruby {
      code => "event.set('logdateunix',event.get('logdate').to_i)"
    }
  }
  # Use the GeoIP database to geolocate the client IP
  geoip {
    source => "remote_addr"   # the field in the nginx log holding the external client IP
    database => "/opt/GeoLite2-City/GeoLite2-City.mmdb"
    # drop the redundant fields from the geoip result
    remove_field => ["[geoip][latitude]", "[geoip][longitude]", "[geoip][country_code]", "[geoip][country_code2]", "[geoip][country_code3]", "[geoip][timezone]", "[geoip][continent_code]", "[geoip][region_code]", "[geoip][ip]"]
    target => "geoip"
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float" ]
  }
}
}
output {
  if "nginx-access" in [tags] {
    elasticsearch {
      hosts => ["192.168.1.110:9200"]
      # Note: the index name must start with "logstash-"; otherwise the map
      # visualization will not work (it depends on the default logstash template's mapping)
      index => "logstash-test16-nginx-access-%{+yyyy.MM.dd}"
    }
  }
  if "nginx-error" in [tags] {
    elasticsearch {
      hosts => ["192.168.1.110:9200"]
      index => "logstash-test16-nginx-error-%{+yyyy.MM.dd}"   # same "logstash-" naming rule as above
    }
  }
}
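The mutate/gsub step above deserves a closer look: raw nginx access lines can contain literal `\x` byte-escape sequences (for example from URL-encoded or binary request data), which are not a valid JSON escape and make the json filter fail. The gsub rule doubles the backslash so the sequence survives parsing. A minimal Python sketch of the same substitution, on an invented sample message:

```python
import json

# Illustrative sample: an nginx access log line containing a raw "\x"
# sequence, which JSON rejects ("Unrecognized character escape 'x'").
message = '{"request": "GET /?q=\\xe4\\xb8 HTTP/1.1"}'

try:
    json.loads(message)
except json.JSONDecodeError:
    # Same effect as gsub => ["message", "\\x", "\\\x"]:
    # turn every "\x" into "\\x" before parsing.
    message = message.replace('\\x', '\\\\x')

doc = json.loads(message)
print(doc["request"])
```

After the substitution the parser sees an escaped backslash followed by `x`, so the original bytes are preserved as text in the parsed field.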
Step 2: Once both files are configured, restart filebeat and logstash.
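Before moving on to Kibana, the error-log branch of the filter (grok capture of the timestamp, the date filter, and the ruby filter that stores the Unix timestamp) can be sanity-checked outside Logstash. The sketch below mirrors that chain in Python against an invented error.log line; the log content is illustrative, not real output.

```python
import re
from datetime import datetime, timezone

# An invented but typical nginx error.log line.
line = ('2019/06/01 12:34:56 [error] 1234#0: *5678 open() failed, '
        'client: 1.2.3.4, server: example.com')

# Same leading pattern the grok filter captures into "time" and "err_severity".
m = re.match(r'(\d{4}/\d{2}/\d{2}\s+\d{2}:\d{2}:\d{2})\s+\[(\w+)\]', line)
time_str, severity = m.group(1), m.group(2)

# Equivalent of the date filter's "yyyy/MM/dd HH:mm:ss" match into logdate
# (timezone assumed UTC here for the sketch) ...
logdate = datetime.strptime(time_str, '%Y/%m/%d %H:%M:%S').replace(tzinfo=timezone.utc)
# ... and of the ruby filter that stores the epoch seconds in logdateunix.
logdateunix = int(logdate.timestamp())

print(severity, logdateunix)
```

If the regex fails to match a line in your own logs, the grok filter's second, looser pattern (timestamp + severity + GREEDYDATA) acts as the fallback.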
4. Configure Kibana to display the logs
1、創(chuàng)建索引

image.png

image.png

image.png

image.png

image.png
2、查看日志數(shù)據(jù)

image.png