ELK Log Collection

Deploying ELK (Elasticsearch, Filebeat, Kibana, Logstash, Redis, ZooKeeper, Kafka) to collect logs from nginx, Tomcat, and Docker.

Log sources to collect:

- Proxies: nginx, HAProxy
- Web: nginx, Tomcat
- Databases: MySQL, Redis, MongoDB, Elasticsearch
- Operating system: system logs (e.g. /var/log/messages)

1. Environment setup:

A single-node Elasticsearch is enough here, so the cluster configured earlier is dropped.

192.168.208.120: elasticsearch, kibana, filebeat, nginx, zookeeper, kafka/redis, logstash
192.168.208.121: nginx, filebeat, tomcat, zookeeper, kafka
192.168.208.122: nginx, filebeat, tomcat, zookeeper, kafka


Single-node Elasticsearch config:

[root@elk-server config]# egrep -v "^#|^$" elasticsearch.yml
node.name: node-1
path.data: /usr/local/data
path.logs: /usr/local/logs
network.host: 192.168.208.120
http.port: 9200
discovery.seed_hosts: ["192.168.208.120"]
http.cors.allow-origin: "/.*/"
http.cors.enabled: true
[root@elk-server config]# systemctl restart elasticsearch
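
A quick sanity check after the restart: the standard cluster health API should report status green (or yellow once indices with replicas exist on a single node):

[root@elk-server config]# curl -s 'http://192.168.208.120:9200/_cluster/health?pretty'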

Install Kibana

[root@elk-server tool]# rpm -ihv kibana-6.6.0-x86_64.rpm 
[root@elk-server tool]# rpm -qc kibana            
/etc/kibana/kibana.yml
Kibana configuration:

[root@elk-server tool]# vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.208.120"
server.name: "elk-server"
elasticsearch.hosts: ["http://192.168.208.120:9200"]
kibana.index: ".kibana"

Start Kibana

[root@elk-server tool]# systemctl restart elasticsearch
[root@elk-server tool]# systemctl start kibana
[root@elk-server tool]# netstat -luntp |grep 9200
tcp6       0      0 192.168.208.120:9200    :::*     LISTEN     22217/java          
[root@elk-server tool]# netstat -luntp |grep 5601
tcp        0      0 192.168.208.120:5601    0.0.0.0:*  LISTEN     22569/node

Test access:

http://192.168.208.120:5601



2. Install nginx

Collection nodes: 192.168.208.120/121

Configure the nginx yum repository:

[root@node1 ~]#  vim /etc/yum.repos.d/nginx.repo   
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key

Install nginx:

[root@node1 ~]# yum install nginx -y
[root@node1 ~]# systemctl start nginx
[root@node1 ~]# netstat -luntp          
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      42966/nginx: master

Load test:

[root@node1 ~]# ab -c 10 -n 1000 http://192.168.208.121/

3. Install and configure Filebeat

Collection nodes: 192.168.208.120/121

[root@elk-server tool]# rpm -ivh filebeat-6.6.0-x86_64.rpm
Back up the Filebeat config file:

[root@elk-server tool]# cp /etc/filebeat/filebeat.yml /tool/
Edit filebeat.yml:

[root@elk-server tool]# egrep -v "#|^$" /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log 
output.elasticsearch:
  hosts: ["192.168.208.120:9200"]

Start Filebeat:

[root@elk-server tool]# systemctl restart filebeat
[root@elk-server tool]# tail -f /var/log/filebeat/filebeat     # watch the filebeat log

Check the logs shipped to ES; by default they are written to an index named filebeat-<version>-<date>.
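
To confirm this from the command line, list the indices with the _cat API:

[root@elk-server tool]# curl -s 'http://192.168.208.120:9200/_cat/indices?v'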



4. Kibana configuration

Under Management → Index Patterns, add the ES index.

Copy the filebeat-6.6.0-2020.04.05 index shown in the index list into the index pattern field.

On the next step, choose @timestamp as the time filter field and click Create.

Back on the index page you can see the ES index is now registered in Kibana; next, open Discover.

Click the time picker at the top right (default Last 15 minutes) and switch to Last 1 hour or Last 4 hours to see the histogram.

In the field list on the left, click add next to message to display the fields you want.

The search bar supports full-text queries, e.g. filtering by status code, and shows the matching hit count; results can be narrowed further with filter conditions.

==================================================================================

Collecting nginx JSON logs

The log format is configured as follows:

log_format json '{ "time_local": "$time_local", '
                '"remote_addr": "$remote_addr", '
                '"referer": "$http_referer", '
                '"request": "$request", '
                '"status": $status, '
                '"bytes": $body_bytes_sent, '
                '"agent": "$http_user_agent", '
                '"x_forwarded": "$http_x_forwarded_for", '
                '"up_addr": "$upstream_addr",'
                '"up_host": "$upstream_http_host",'
                '"upstream_time": "$upstream_response_time",'
                '"request_time": "$request_time"'
                '}';

Put the log format above into nginx.conf:

1. Define the json log_format.

2. Reference it: access_log /var/log/nginx/access.log json;

3. Clear the historical log in /var/log/nginx/access.log.

4. Edit /etc/filebeat/filebeat.yml to parse the logs as JSON.

5. Restart nginx.

[root@elk-server tool]# vim /etc/nginx/nginx.conf 
user  nginx;
worker_processes  1;
error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json '{ "time_local": "$time_local", '
                           '"remote_addr": "$remote_addr", '
                           '"referer": "$http_referer", '
                           '"request": "$request", '
                           '"status": $status, '
                           '"bytes": $body_bytes_sent, '
                           '"agent": "$http_user_agent", '
                           '"x_forwarded": "$http_x_forwarded_for", '
                           '"up_addr": "$upstream_addr",'
                           '"up_host": "$upstream_http_host",'
                           '"upstream_time": "$upstream_response_time",'
                           '"request_time": "$request_time"'
        '}';
    access_log  /var/log/nginx/access.log  json;  
    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;
    include /etc/nginx/conf.d/*.conf;
}

Check that each log line is now a JSON object:

[root@elk-server nginx]# tail -f /var/log/nginx/access.log 
{ "time_local": "05/Apr/2020:19:35:37 +0800", "remote_addr": "192.168.208.1", "referer": "-", "request": "GET / HTTP/1.1", "status": 304, "bytes": 0, "agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000"}
{ "time_local": "05/Apr/2020:19:35:37 +0800", "remote_addr": "192.168.208.1", "referer": "-", "request": "GET / HTTP/1.1", "status": 304, "bytes": 0, "agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36", "x_forwarded": "-", "up_addr": "-","up_host": "-","upstream_time": "-","request_time": "0.000"}
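
As an extra check that every line parses as valid JSON, you can pipe an entry through a JSON tool (assuming jq is installed; python -m json.tool works too); it should print a bare status code such as 304:

[root@elk-server nginx]# tail -1 /var/log/nginx/access.log | jq .status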

Edit filebeat.yml:

vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true

Then delete the old index and restart Kibana.

Restart nginx and Filebeat:

[root@elk-server nginx]# echo "" >/var/log/nginx/access.log
[root@elk-server nginx]# systemctl restart nginx
[root@elk-server nginx]# systemctl restart filebeat 
[root@elk-server nginx]# systemctl restart kibana

Then re-add the index pattern in Kibana.

Filebeat offset tracking:

When Filebeat is stopped, it records the position it last read to. If, say, 100 new lines are written while it is down, on restart it resumes reading from the recorded position, so nothing is lost or duplicated.
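
The offsets live in Filebeat's registry file (by default /var/lib/filebeat/registry for the 6.x RPM install); each entry records the source file, inode, and byte offset:

[root@elk-server ~]# cat /var/lib/filebeat/registry | python -m json.tool | head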

Custom index name: replace the default filebeat index written to ES.

[root@elk-server tool]# egrep -v "#|^$" /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true

setup.kibana:
  host: "192.168.208.120:5601"

output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  index: "nginx-%{[beat.version]}-%{+yyyy.MM}"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

Restart Filebeat; at this point it errors out:

[root@elk-server nginx]# systemctl restart filebeat

Fix:

Append the template settings below at the end of the config file. Pay close attention to YAML indentation; a bad indent keeps Filebeat from starting and is hard to spot.

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false

Now generate some nginx traffic again and reconfigure Kibana:

[root@elk-server tool]#systemctl restart filebeat

The index now carries far fewer useless fields.



Collecting and analyzing multiple logs

Note: when collecting multiple log files, it is best to tag each input with tags.

Syntax:

- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  ...
  tags: ["access"]

When writing to Elasticsearch, test the tags to route events to the right index:

  indices:
    - index: "nginx-access%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"

[root@elk-server filebeat]# ls /var/log/nginx/
access.log  error.log

This configuration can be rolled out to multiple nodes.

[root@elk-server filebeat]# pwd
/etc/filebeat
[root@elk-server filebeat]# vim filebeat.yml 
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true

setup.kibana:
  host: "192.168.208.120:5601"

output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"

  indices:
    - index: "nginx-access%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

Restart after the change:

[root@elk-server tool]# systemctl restart filebeat    

The indices can now also be checked in the ES head plugin.



Re-add the index patterns in Kibana.



Collecting Tomcat logs

192.168.208.121

Install Tomcat via yum:

yum install tomcat tomcat-webapps tomcat-admin-webapps tomcat-docs-webapp tomcat-javadoc -y

Start Tomcat:

[root@node1 /]# systemctl start tomcat
[root@node1 /]# netstat -luntp |grep 8080
tcp6   0   0 :::8080       :::*      LISTEN      18009/java

Access Tomcat at 192.168.208.121:8080.

Switch the access log to JSON format.

Edit /etc/tomcat/server.xml.

Replace the pattern on line 139 of server.xml, pattern="%h %l %u %t &quot;%r&quot; %s %b" />, with the following:

[root@node1 /]#vim /etc/tomcat/server.xml
pattern="{&quot;clientip&quot;:&quot;%h&quot;,&quot;ClientUser&quot;:&quot;%l&quot;,&quot;authenticated&quot;:&quot;%u&quot;,&quot;AccessTime&quot;:&quot;%t&quot;,&quot;method&quot;:&quot;%r&quot;,&quot;status&quot;:&quot;%s&quot;,&quot;SendBytes&quot;:&quot;%b&quot;,&quot;Query?string&quot;:&quot;%q&quot;,&quot;partner&quot;:&quot;%{Referer}i&quot;,&quot;AgentVersion&quot;:&quot;%{User-Agent}i&quot;}"/>
[root@node1 /]# systemctl restart tomcat
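
To confirm the new pattern took effect, hit Tomcat once and tail today's access log (the yum-installed Tomcat writes it under /var/log/tomcat/ by default):

[root@node1 /]# curl -s 192.168.208.121:8080 >/dev/null
[root@node1 /]# tail -1 /var/log/tomcat/localhost_access_log.$(date +%Y-%m-%d).txt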

Edit the filebeat.yml configuration:

[root@node1 /]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:

############################ nginx
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true

############################ tomcat
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/localhost_access_log.*.txt
  tags: ["tomcat"]
  json.keys_under_root: true
  json.overwrite_keys: true

setup.kibana:
  host: "192.168.208.120:5601"

output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"

  indices:
    - index: "nginx-access%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "access"
    - index: "nginx-error%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "error"

    - index: "tomcat-accessr%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "tomcat"

Restart the Filebeat service:

[root@node1 /]# systemctl restart filebeat

Collecting Java logs

Java log entries can span multiple lines (stack traces, GC reports); handle them by adding the parameters below.

Example source: the Elasticsearch log on 192.168.208.120.

  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

The log looks like this:

[root@elk-server tool]# cat /var/log/elasticsearch/elasticsearch.log 
[2020-04-06T00:12:55,740][INFO ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][27738] overhead, spent [350ms] collecting in the last [1s]
[2020-04-06T00:22:10,191][INFO ][o.e.c.m.MetaDataCreateIndexService] [node-1] [tomcat-accessr6.6.0-2020.04] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
[2020-04-06T00:22:10,761][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [tomcat-accessr6.6.0-2020.04/nA93Krr4RtyjKnoayAZAUg] create_mapping [doc]
[2020-04-06T00:22:10,868][INFO ][o.e.c.m.MetaDataMappingService] [node-1] [tomcat-accessr6.6.0-2020.04/nA93Krr4RtyjKnoayAZAUg] update_mapping [doc]
[2020-04-06T00:22:14,001][WARN ][o.e.m.j.JvmGcMonitorService] [node-1] [gc][young][28292][535] duration [2.6s], collections [1]/[3.5s], total [2.6s]/[34s], memory [237.2mb]->[220mb]/[503.6mb], all_pools {[young] [47.2mb]->[21.4mb]/[66.5mb]}{[survivor] [378.5kb]->[8.3mb]/[8.3mb]}{[old] [189.5mb]->[190.4mb]/[428.8mb]}

The config below performs the multiline matching:

- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

setup.kibana:
  host: "192.168.208.120:5601"

output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  #index: "nginx-%{[beat.version]}-%{+yyyy.MM}"
  indices:
    - index: "elastic-java%{[beat.version]}-%{+yyyy.MM}"
      when.contains:
        tags: "elastic_java"
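
Once events flow in, you can confirm that multi-line entries were merged into single documents by searching the new index (index name per the config above):

[root@elk-server ~]# curl -s 'http://192.168.208.120:9200/elastic-java*/_search?size=1&pretty'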

Collecting Docker logs

(This part follows the handout the instructor provided.)

View nginx container logs in the foreground:

docker logs -f nginx    (container name)

Container logs must be separated by the service each container runs, which requires docker-compose.

https://github.com/docker/compose/releases/tag/1.25.5-rc1

https://github.com/docker/compose/releases/

Install:

[root@dorcer01 /]# yum install -y python2-pip
[root@dorcer01 /]# pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pip -U
[root@dorcer01 /]# pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
[root@dorcer01 /]# pip install docker-compose

Check the version:

[root@dorcer01 /]# docker-compose version

Write the docker-compose file:

[root@dorcer01 /]# cat docker-compose.yml
version: '3'                # compose file format version
services:                   # service group
  nginx:                    # nginx service
    image: nginx:v2         # image to start
    # set labels
    labels:
      service: nginx
    # logging: stamp each log entry with labels.service
    logging:
      options:
        labels: "service"
    ports:
      - "8080:80"
  db:
    image: nginx:latest
    # set labels
    labels:
      service: db
    # logging: stamp each log entry with labels.service
    logging:
      options:
        labels: "service"
    ports:
      - "80:80"

Remove any existing containers:

[root@dorcer01 /]# docker ps -a | awk 'NR>1{print "docker rm",$1}' | bash

Run docker-compose.yml:

docker-compose up -d
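
A quick check that both containers are up and the ports are mapped:

[root@dorcer01 /]# docker-compose ps
[root@dorcer01 /]# curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080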

Configure Filebeat:

[root@dorcer01 /]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/lib/docker/containers/*/*-json.log    # all docker container log files
  json.keys_under_root: true
  json.overwrite_keys: true
output.elasticsearch:
  hosts: ["192.168.47.175:9200"]
  indices:
    - index: "docker-nginx-access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "nginx"       # service label set in docker-compose
        stream: "stdout"             # stdout stream of the docker json log
    - index: "docker-nginx-error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "nginx"
        stream: "stderr"
    - index: "docker-db-access-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "db"
        stream: "stdout"
    - index: "docker-db-error-%{[beat.version]}-%{+yyyy.MM.dd}"
      when.contains:
        attrs.service: "db"
        stream: "stderr"
setup.template.name: "docker"
setup.template.pattern: "docker-*"
setup.template.enabled: false
setup.template.overwrite: true

Restart Filebeat:

[root@dorcer01 /]# systemctl restart filebeat

Collecting logs with Filebeat modules

Use Filebeat's bundled modules to collect plain-format nginx logs.

First, list Filebeat's config files:

[root@node1 modules.d]# rpm -qc filebeat           
/etc/filebeat/filebeat.yml
/etc/filebeat/modules.d/apache2.yml.disabled
/etc/filebeat/modules.d/auditd.yml.disabled
/etc/filebeat/modules.d/elasticsearch.yml.disabled
/etc/filebeat/modules.d/haproxy.yml.disabled
/etc/filebeat/modules.d/icinga.yml.disabled
/etc/filebeat/modules.d/iis.yml.disabled
/etc/filebeat/modules.d/kafka.yml.disabled
/etc/filebeat/modules.d/kibana.yml.disabled
/etc/filebeat/modules.d/logstash.yml.disabled
/etc/filebeat/modules.d/mongodb.yml.disabled
/etc/filebeat/modules.d/mysql.yml.disabled
/etc/filebeat/modules.d/nginx.yml.disabled
/etc/filebeat/modules.d/osquery.yml.disabled
/etc/filebeat/modules.d/postgresql.yml.disabled
/etc/filebeat/modules.d/redis.yml.disabled
/etc/filebeat/modules.d/suricata.yml.disabled
/etc/filebeat/modules.d/system.yml.disabled
/etc/filebeat/modules.d/traefik.yml.disabled

1. Enable the module config path in filebeat.yml:

[root@elk-server filebeat]# cat filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s

output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  index: "nginx-%{[beat.version]}-%{+yyyy.MM}"

2. Enable the nginx module:

[root@elk-server filebeat]# filebeat modules enable nginx
Module nginx is already enabled

[root@elk-server filebeat]# filebeat modules list
Enabled:
nginx

Disabled:
apache2
auditd
elasticsearch
haproxy
icinga
iis
kafka
kibana
logstash
mongodb
mysql
osquery
postgresql
redis
suricata
system
traefik

3. Switch nginx back to the plain main log format:

access_log  /var/log/nginx/access.log  main;
[root@elk-server filebeat]# systemctl restart nginx

4. Edit the nginx module config file enabled in Filebeat:

[root@elk-server filebeat]# vim /etc/filebeat/modules.d/nginx.yml
- module: nginx
  # Access logs
  access:
    enabled: true
    var.paths: ["/var/log/nginx/access.log"]

  # Error logs
  error:
    enabled: true
    var.paths: ["/var/log/nginx/error.log"]

5. Restarting Filebeat now logs the following error:

[root@elk-server filebeat]# systemctl restart filebeat
[root@elk-server filebeat]# tail -f /var/log/filebeat/filebeat
2020-04-08T20:55:14.205+0800    ERROR   fileset/factory.go:142  Error loading pipeline: Error loading pipeline for fileset nginx/access: This module requires the following Elasticsearch plugins: ingest-user-agent, ingest-geoip. You can install them by running the following commands on all the Elasticsearch nodes:
    sudo bin/elasticsearch-plugin install ingest-user-agent
    sudo bin/elasticsearch-plugin install ingest-geoip

Install ingest-user-agent and ingest-geoip as the error message suggests:

[root@elk-server /]# find / -name 'elasticsearch-plugin'
/usr/share/elasticsearch/bin/elasticsearch-plugin
[root@elk-server /]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-user-agent
[root@elk-server /]# /usr/share/elasticsearch/bin/elasticsearch-plugin install ingest-geoip

6. Update filebeat.yml:

[root@elk-server ~]# cat /etc/filebeat/filebeat.yml
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
  reload.period: 10s

setup.kibana:
  host: "192.168.208.120:5601"

output.elasticsearch:
  hosts: ["192.168.208.120:9200"]
  indices:
  - index: "nginx_access-%{[beat.version]}-%{+yyyy.MM.dd}"
    when.contains:
      fileset.name: "access"

  - index: "nginx_error-%{[beat.version]}-%{+yyyy.MM.dd}"
    when.contains:
      fileset.name: "error"

setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true

7. Restart Filebeat and Elasticsearch:

[root@elk-server ~]# systemctl restart filebeat 
[root@elk-server ~]# systemctl restart elasticsearch

When adding the error-log index in Kibana, use read_timestamp as the time field.



ELK: charting with Kibana

Bar charts, line charts, pie charts, gauges, and large dashboard layouts.

Official documentation:

1. nginx dashboard setup

1. Back up the Kibana assets: copy the kibana directory shipped with Filebeat to /root/:

[root@elk-server kibana]# cp -a /usr/share/filebeat/kibana /root
[root@elk-server ~]# cd /root/kibana/6/dashboard
[root@elk-server dashboard]# find . -type f ! -name "*nginx*" | xargs rm -rf

2. Replace the filebeat-* index pattern with nginx-*:

[root@elk-server dashboard]# sed -i 's#filebeat\-\*#nginx\-\*#g' Filebeat-nginx-overview.json
[root@elk-server dashboard]# sed -i 's#filebeat\-\*#nginx\-\*#g' Filebeat-nginx-logs.json 

3. Go up one level and change the index-pattern file too:

[root@elk-server 6]# cd index-pattern/
[root@elk-server index-pattern]# ls
filebeat.json
[root@elk-server index-pattern]# pwd
/root/kibana/6/index-pattern
[root@elk-server index-pattern]# sed -i 's#filebeat\-\*#nginx\-\*#g' filebeat.json 

4. Import the modified dashboards into Kibana:

[root@elk-server /]# filebeat setup --dashboards -E setup.dashboards.directory=/root/kibana/
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards

5. Restart Filebeat, Kibana, and Elasticsearch.

2. Example charts

Top 10 client IPs

Top 10 requested URLs

Most common user agents

HTTP status codes

Start drawing, then configure the visualization's shape and buckets.

Click Save at the top to store the chart (here, a pie chart) for reuse on dashboards.

Other chart types are built the same way.

Building a dashboard: add the saved charts onto a dashboard panel.

Using Redis as a buffer

When ES ingestion becomes the bottleneck, Filebeat can buffer events in Redis and Logstash can pull them from Redis into ES.

https://www.elastic.co/guide/en/beats/filebeat/6.6/redis-output.html

Architecture:



Output syntax:

output.redis:
  hosts: ["localhost"]
  password: "my_password"
  key: "filebeat"
  db: 0
  timeout: 5

Start Redis:

[root@elk-server redis]# redis-server redis.conf 
[root@elk-server redis]# ps -ef |grep redis
root      22573      1  0 20:49 ?        00:00:00 redis-server 192.168.208.120:6379
root      22586  14139  0 20:50 pts/0    00:00:00 grep --color=auto redis
[root@elk-server redis]# pwd
/tool/redis

Configure filebeat.yml:

output.redis:
  hosts: ["192.168.208.120"]
  key: "filebeat"
  db: 0
  timeout: 5

Restart Filebeat:

[root@elk-server filebeat]# systemctl restart filebeat
[root@elk-server filebeat]# redis-cli -h 192.168.208.120
192.168.208.120:6379> keys *
1) "filebeat"
192.168.208.120:6379> type filebeat
list
192.168.208.120:6379> LLEN filebeat
(integer) 26
192.168.208.120:6379> LRANGE filebeat
(error) ERR wrong number of arguments for 'lrange' command
192.168.208.120:6379> LRANGE filebeat 1 20

Install Logstash 6.6.0:

[root@elk-server tool]# yum localinstall logstash-6.6.0.rpm

Edit the Filebeat config for the nginx logs:

[root@elk-server conf.d]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true

- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

setup.kibana:
  host: "192.168.208.120:5601"

output.redis:
  hosts: ["192.168.208.120"]
  keys:
    - key: "nginx_access"   # redis key to write to
      when.contains:
        tags: "access"
    - key: "nginx_error"
      when.contains:
        tags: "error"

Configure Logstash: the input pulls from Redis, the output writes to ES.

Create redis.conf:

[root@elk-server conf.d]# pwd
/etc/logstash/conf.d

[root@elk-server conf.d]# vim redis.conf

input {
  redis {
    host => "192.168.208.120"
    port => "6379"
    db => "0"
    key => "nginx_access"     # redis key to read from
    data_type => "list"
  }
  redis {
    host => "192.168.208.120"
    port => "6379"
    db => "0"
    key => "nginx_error"
    data_type => "list"
  }
}

# parse the nginx/php timing fields as floats
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
    stdout {}
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://192.168.208.120:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://192.168.208.120:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}

Restart Filebeat and run Logstash in the foreground:

[root@elk-server logstash]# systemctl restart filebeat
[root@elk-server logstash]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf
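
To watch Logstash drain the buffer, poll the list length in Redis; it should fall back toward 0 as events are consumed (key name per the Filebeat config above):

[root@elk-server logstash]# redis-cli -h 192.168.208.120 LLEN nginx_access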

Check Redis:



A simplified pair of Filebeat/Logstash configs:

Filebeat:

[root@elk-server conf.d]# cat /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true

- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

setup.kibana:
  host: "192.168.208.120:5601"

output.redis:
  hosts: ["192.168.208.120"]
  key: "nginx"                 # store everything under a single redis key

Logstash:

[root@elk-server logstash]# vim /etc/logstash/conf.d/redis.conf
input {
  redis {
    host => "192.168.208.120"
    port => "6379"
    db => "0"
    key => "nginx"     # read everything from the single redis key
    data_type => "list"
  }
}

# parse the nginx/php timing fields as floats
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}

output {
    stdout {}
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://192.168.208.120:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://192.168.208.120:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}

Kibana monitoring of the ES cluster

Keep the versions of all ELK components identical.

Using Kafka as a buffer

Environment preparation

Download links:

http://zookeeper.apache.org/releases.html

wget https://downloads.apache.org/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

http://kafka.apache.org/downloads.html

Configure /etc/hosts on all three servers so they can reach each other by name:

vim /etc/hosts

192.168.208.120 server
192.168.208.121 node1
192.168.208.122 node2

Sync to the other nodes:

[root@elk-server ~]# cd /etc/
[root@elk-server etc]# scp hosts node1:/etc/
[root@elk-server etc]# scp hosts node2:/etc/

Install ZooKeeper

ZooKeeper is the cluster coordination service.

[root@elk-server tool]# ll zookeeper-3.4.11.tar.gz kafka-2.4.1-src.tgz
-rw-r--r-- 1 root root 7690352 Apr 15 17:22 kafka-2.4.1-src.tgz
-rw-r--r-- 1 root root 3096576 Apr 15 17:23 zookeeper-3.4.11.tar.gz
[root@elk-server opt]# tar -xzvf zookeeper-3.4.11.tar.gz -C /opt
[root@elk-server opt]# ln -s /opt/zookeeper-3.4.11 /opt/zookeeper
[root@elk-server opt]# mkdir -p /data/zookeeper
[root@elk-server opt]# cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg

Edit the config file:

[root@elk-server opt]# vim /opt/zookeeper/conf/zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.208.120:2888:3888
server.2=192.168.208.121:2888:3888
server.3=192.168.208.122:2888:3888

Create the myid file:

[root@elk-server opt]# echo "1" > /data/zookeeper/myid

The other nodes get the same config; only myid differs (recreate the /opt/zookeeper symlink on node1 and node2 as well):

rsync -avz /opt/zookeeper-3.4.11 node1:/opt/
rsync -avz /data/zookeeper node1:/data

node1:
[root@node1 ~]# echo "2" > /data/zookeeper/myid

rsync -avz /opt/zookeeper-3.4.11 node2:/opt/
rsync -avz /data/zookeeper node2:/data

node2:
[root@node2 ~]# echo "3" > /data/zookeeper/myid

Note: of the 2888/3888 pair, one port carries data synchronization and the other leader election.

myid must match the server.N id in zoo.cfg.

ZooKeeper configuration is complete.

Start ZooKeeper on all three machines:

[root@elk-server /]# /opt/zookeeper/bin/zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Check status:

[root@node1 opt]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@node2 opt]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader

With all three nodes up, the ensemble tolerates the failure of one node; the cluster is now live.

Connection test:

[root@elk_server zookeeper]#  /opt/zookeeper/bin/zkCli.sh -server 192.168.208.120:2181
[root@node1 zookeeper]#  /opt/zookeeper/bin/zkCli.sh -server 192.168.208.121:2181
[root@node2 zookeeper]#  /opt/zookeeper/bin/zkCli.sh -server 192.168.208.122:2181

After connecting, create a test znode:

create /test "hello"

[zk: 192.168.208.120:2181(CONNECTED) 0] create /test "hello"
Created /test

Read it back from another node:

get /test

[zk: 192.168.208.122:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000005
ctime = Wed Apr 15 18:57:45 CST 2020
mZxid = 0x100000005
mtime = Wed Apr 15 18:57:45 CST 2020
pZxid = 0x100000005
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0

Install Kafka

server node: 192.168.208.120

[root@elk-server tool]# tar -xzf /tool/kafka-2.11-2.4.tgz -C /opt
[root@elk-server tool]# ln -s /opt/kafka-2.11-2.4 /opt/kafka
[root@elk-server tool]# mkdir -p /data/kafka/logs/
[root@elk-server tool]# vim /opt/kafka/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.208.120:9092
log.dirs=/data/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181

Copy the same setup to the other nodes:

[root@elk-server tool]# rsync -avz /opt/kafka-2.11-2.4 node1:/opt/
[root@elk-server tool]# rsync -avz /opt/kafka-2.11-2.4 node2:/opt/

node1:

[root@node1 opt]# ln -s /opt/kafka-2.11-2.4 /opt/kafka
[root@node1 opt]# mkdir -p /data/kafka/logs
[root@node1 opt]# vim kafka/config/server.properties
broker.id=2
listeners=PLAINTEXT://192.168.208.121:9092
log.dirs=/data/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181

node2:

[root@node2 opt]# ln -s /opt/kafka-2.11-2.4 /opt/kafka
[root@node2 opt]# mkdir -p /data/kafka/logs
[root@node2 opt]# vim kafka/config/server.properties
broker.id=3
listeners=PLAINTEXT://192.168.208.122:9092
log.dirs=/data/kafka/logs
log.retention.hours=24
zookeeper.connect=192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181

Start:

-- Foreground first; once it runs without errors, switch to background mode.

/opt/kafka/bin/kafka-server-start.sh  /opt/kafka/config/server.properties

Startup log:

[2020-04-15 19:33:07,920] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-04-15 19:33:08,242] INFO [SocketServer brokerId=1] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2020-04-15 19:33:08,243] INFO Kafka version: 2.4.1 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 19:33:08,243] INFO Kafka commitId: c57222ae8cd7866b (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 19:33:08,243] INFO Kafka startTimeMs: 1586950388242 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-15 19:33:08,244] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)

-- Background start

/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties

[root@elk-server kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@node1 opt]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@node2 opt]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties

After all three brokers are up, run a test.

Create a topic on elk_server:

 /opt/kafka/bin/kafka-topics.sh  --create  --zookeeper 192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181 --partitions 3 --replication-factor 3 --topic kafkatest

Describe it from node1:

[root@node1 opt]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181 --topic kafkatest
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic: kafkatest        PartitionCount: 3       ReplicationFactor: 3    Configs: 
        Topic: kafkatest        Partition: 0    Leader: 1       Replicas: 1,3,2 Isr: 1,3,2
        Topic: kafkatest        Partition: 1    Leader: 2       Replicas: 2,1,3 Isr: 2,1,3
        Topic: kafkatest        Partition: 2    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1

Describe it from node2:

[root@node2 opt]#  /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.208.120:2181,192.168.208.121:2181,192.168.208.122:2181  --topic kafkatest
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
Topic: kafkatest        PartitionCount: 3       ReplicationFactor: 3    Configs: 
        Topic: kafkatest        Partition: 0    Leader: 1       Replicas: 1,3,2 Isr: 1,3,2
        Topic: kafkatest        Partition: 1    Leader: 2       Replicas: 2,1,3 Isr: 2,1,3
        Topic: kafkatest        Partition: 2    Leader: 3       Replicas: 3,2,1 Isr: 3,2,1
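
You can also verify end-to-end delivery with the console tools that ship with Kafka: type a message into the producer on one node and it should appear in the consumer on another.

# on elk_server: produce
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.208.120:9092 --topic kafkatest

# on node1: consume
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.208.121:9092 --topic kafkatest --from-beginning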

Configuring Kafka as the buffer

Point Filebeat's output at Kafka.

1. Modify Filebeat

Syntax:

output.kafka:
  hosts: ["192.168.208.120:9092","192.168.208.121:9092","192.168.208.122:9092"]
  topic: elklog

Full config:

[root@elk-server filebeat]# vim filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  json.keys_under_root: true
  json.overwrite_keys: true
  tags: ["access"]

- type: log
  enabled: true
  paths:
    - /var/log/nginx/error.log
  tags: ["error"]
  json.keys_under_root: true
  json.overwrite_keys: true

- type: log
  enabled: true
  paths:
    - /var/log/elasticsearch/elasticsearch.log
  tags: ["elastic_java"]
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after

setup.kibana:
  host: "192.168.208.120:5601"

output.kafka:
  hosts: ["192.168.208.120:9092","192.168.208.121:9092","192.168.208.122:9092"]
  topic: elklog  # kafka topic

2. Modify Logstash

[root@elk-server conf.d]# vim /etc/logstash/conf.d/kafka.conf

input{
  kafka{
    bootstrap_servers=>"192.168.208.120:9092"  # any broker in the kafka cluster
    topics=>["elklog"]
    group_id=>"logstash"
    codec => "json"
  }
}
filter {
  mutate {
    convert => ["upstream_time", "float"]
    convert => ["request_time", "float"]
  }
}
output {
    if "access" in [tags] {
      elasticsearch {
        hosts => "http://192.168.208.120:9200"
        manage_template => false
        index => "nginx_access-%{+yyyy.MM.dd}"
      }
    }
    if "error" in [tags] {
      elasticsearch {
        hosts => "http://192.168.208.120:9200"
        manage_template => false
        index => "nginx_error-%{+yyyy.MM.dd}"
      }
    }
}

3. Restart the services

[root@elk-server /]# systemctl restart filebeat
[root@elk-server /]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka.conf

Now refresh nginx to generate traffic; the data flows through Kafka into Elasticsearch.
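
To confirm the logstash consumer group is keeping up with the topic, describe it with Kafka's consumer-groups tool and watch the LAG column:

/opt/kafka/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.208.120:9092 --describe --group logstash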

=======================================================================================

Port summary:

elasticsearch: 9200 (HTTP/data access) and 9300 (cluster transport)
kibana: 5601
filebeat: no listening port (ships logs outbound)
logstash: no listening port in this setup (it pulls from redis/kafka)
zookeeper: 2181 (client port, used by kafka), 2888/3888 (data sync / leader election)
kafka: 9092
redis: 6379

Final architecture:

nginx+keepalived --> redis --> logstash --> es --> kibana

nginx+keepalived --> kafka --> logstash --> es --> kibana