ELK - Building a Real-Time Log Analysis System

ELK is short for Elasticsearch, Logstash, and Kibana, three open-source tools commonly combined to build a log analysis system.

  • Elasticsearch is the core: a distributed search engine with fast queries that provides storage and retrieval of the data.
  • Logstash handles data collection and processing; these days collection itself is usually delegated to the more lightweight Filebeat.
  • Kibana visualizes the data stored in Elasticsearch and offers some management operations.

Environment Preparation

Go to the official Elastic site and download the tools.
We use the latest release at the time of writing, 6.4.3:

(screenshot: the Elastic download page)

Install the JDK

JDK 1.8 or newer is required.
Check the installed version with:

java -version

The output should look like this:

java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
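The version string printed above can also be checked in a script. A minimal sketch of such a check (the parsing assumes the legacy "1.minor.patch_build" format shown above; `jdk_ok` is an illustrative helper, not a standard tool):

```shell
# jdk_ok VERSION -> succeeds if VERSION denotes JDK 1.8 or newer.
# Legacy versions look like "1.8.0_171"; JDK 9+ reports e.g. "11.0.2",
# which also passes via the major-version check.
jdk_ok() {
  major=${1%%.*}        # text before the first dot
  rest=${1#*.}
  minor=${rest%%.*}     # text between the first two dots
  [ "$major" -gt 1 ] || [ "$minor" -ge 8 ]
}

jdk_ok "1.8.0_171" && echo "JDK OK"
```

In practice, feed it the live output: `jdk_ok "$(java -version 2>&1 | awk -F '"' '/version/ {print $2}')"`.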

Install Elasticsearch

  1. Unpack the archive
    tar -zxvf elasticsearch-6.4.3.tar.gz
  2. Edit elasticsearch.yml in the config directory:
    network.host: 0.0.0.0 (the value is an IP address to bind to, not a port; 0.0.0.0 makes the node reachable from outside)
  3. Start Elasticsearch from the bin directory
    Foreground:
    sh elasticsearch
    Background:
    sh elasticsearch -d
Common errors and notes:
  • Elasticsearch must be started as a non-root account.
  • By default it occupies ports 9200 (HTTP) and 9300 (transport). If these are taken, change them in elasticsearch.yml:
    transport.tcp.port: 9301
    http.port: 9201
  • Startup error [max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]]
    Edit /etc/security/limits.conf and add:
    * soft nofile 65536
    * hard nofile 65536
  • Startup error max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    Edit /etc/sysctl.conf and add (any value of at least 262144 will do):
    vm.max_map_count = 655360
    Then log in again, or apply it immediately with sysctl -p.
  4. Once started, open http://your-ip:9200 in a browser; a response like the following means the node is up:

    (screenshot: the Elasticsearch JSON banner)
  5. Optionally install the head plugin. Installing it on 6.x is fiddly, so the elasticsearch-head Chrome extension is a convenient alternative.
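Besides the browser, the root endpoint can be checked from the shell. A sketch that pulls the version number out of the JSON banner (the sample response here is abridged and illustrative; a real check would pipe in `curl -s http://your-ip:9200`, and jq would be cleaner if installed):

```shell
# es_version JSON -> print the version.number field of the ES root
# response (crude sed parse, good enough for the banner's layout).
es_version() {
  printf '%s\n' "$1" | tr '\n' ' ' \
    | sed -nE 's/.*"number"[[:space:]]*:[[:space:]]*"([^"]+)".*/\1/p'
}

# Illustrative 6.4.3 banner (fields abridged):
sample='{"name":"node-1","cluster_name":"elasticsearch","version":{"number":"6.4.3","lucene_version":"7.4.0"},"tagline":"You Know, for Search"}'
es_version "$sample"
```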

Install Logstash

A Logstash pipeline is made up of input, filter, and output sections, configured to suit the scenario. Here the input receives data from Filebeat and the output writes to Elasticsearch.

  1. Unpack the archive
    tar -zxvf logstash-6.4.3.tar.gz
  2. Create a configuration file
    vi start.conf
    with the following content:
# Input: receive events from Beats
input {
    beats {
        port => "5044"
    }
}
# Filter the data
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
# Output to the local Elasticsearch node
output {
    elasticsearch {
        hosts => [ "localhost:9200" ]
    }
}
  3. Validate the configuration file syntax
    bin/logstash -f start.conf -t
    Configuration OK in the output means the file is valid.
  4. Start Logstash
    bin/logstash -f start.conf --config.reload.automatic
    It starts listening on port 5044:
[2018-11-19T16:28:48,946][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-11-19T16:28:48,953][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-11-19T16:28:48,959][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x16b0a9f7 sleep>"}
[2018-11-19T16:28:48,973][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
Common errors and notes:
  • Error Expected one of #, input, filter, output at line 1, column 1 (byte 1) after
    The file starts with invisible bytes; save it as UTF-8 without BOM (Notepad++ can do this), then put the file back next to the bin directory.
  • One further exception disappeared once Filebeat was started before Logstash, so starting Filebeat first is recommended.
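Roughly what the %{COMBINEDAPACHELOG} grok pattern in start.conf does is split a combined-format access-log line into named fields. A crude shell sketch of three of those fields (the real grok pattern extracts many more and is far more tolerant; the log line below is illustrative):

```shell
# One combined-format access-log line (illustrative sample).
line='127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326 "-" "Mozilla/4.08"'

clientip=${line%% *}                                                  # first field
verb=$(printf '%s\n' "$line" | sed -E 's/.*"([A-Z]+) [^"]*".*/\1/')   # request method
response=$(printf '%s\n' "$line" | sed -E 's/.*" ([0-9]{3}) .*/\1/')  # status code
echo "clientip=$clientip verb=$verb response=$response"
```

In Elasticsearch these show up as fields like clientip, verb, and response on each event, which is what makes them filterable in Kibana.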

Install Filebeat

Filebeat is installed on the server where the log files live; it reads the configured local log files and ships them to Logstash.

  1. Unpack the archive
    tar -zxvf filebeat-6.4.3-linux-x86_64.tar.gz
  2. Edit the relevant sections of filebeat.yml
#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
  # your log directory
    - c:\programdata\elasticsearch\logs\*

Edit the output section: comment out the default Elasticsearch output and enable the Logstash output instead.

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9201"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
  3. Start Filebeat as root
    ./filebeat -e -c filebeat.yml -d "publish"
    Background start: nohup ./filebeat -e -c filebeat.yml > filebeat.log &
  4. After startup the output looks like this:
2018-11-20T09:08:07.318+0800    INFO    [monitoring]    log/log.go:141  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":240,"time":{"ms":13}},"total":{"ticks":1220,"time":{"ms":30},"value":1220},"user":{"ticks":980,"time":{"ms":17}}},"info":{"ephemeral_id":"f3fa0a1c-0bdc-40e7-9666-abeb3e308b75","uptime":{"ms":60017}},"memstats":{"gc_next":58381296,"memory_alloc":33685752,"memory_total":313273096}},"filebeat":{"harvester":{"open_files":15,"running":15}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":15}},"system":{"load":{"1":0.1,"15":0.13,"5":0.18,"norm":{"1":0.025,"15":0.0325,"5":0.045}}}}}}
2018-11-20T09:08:37.318+0800    INFO    [monitoring]    log/log.go:141  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":260,"time":{"ms":21}},"total":{"ticks":1240,"time":{"ms":26},"value":1240},"user":{"ticks":980,"time":{"ms":5}}},"info":{"ephemeral_id":"f3fa0a1c-0bdc-40e7-9666-abeb3e308b75","uptime":{"ms":90017}},"memstats":{"gc_next":58381296,"memory_alloc":33998256,"memory_total":313585600}},"filebeat":{"harvester":{"open_files":15,"running":15}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":15}},"system":{"load":{"1":0.06,"15":0.13,"5":0.17,"norm":{"1":0.015,"15":0.0325,"5":0.0425}}}}}}
2018-11-20T09:09:07.318+0800    INFO    [monitoring]    log/log.go:141  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":270,"time":{"ms":13}},"total":{"ticks":1270,"time":{"ms":29},"value":1270},"user":{"ticks":1000,"time":{"ms":16}}},"info":{"ephemeral_id":"f3fa0a1c-0bdc-40e7-9666-abeb3e308b75","uptime":{"ms":120017}},"memstats":{"gc_next":58381296,"memory_alloc":34325192,"memory_total":313912536}},"filebeat":{"harvester":{"open_files":15,"running":15}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":15}},"system":{"load":{"1":0.04,"15":0.13,"5":0.15,"norm":{"1":0.01,"15":0.0325,"5":0.0375}}}}}}
2018-11-20T09:09:37.318+0800    INFO    [monitoring]    log/log.go:141  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":290,"time":{"ms":15}},"total":{"ticks":1310,"time":{"ms":40},"value":1310},"user":{"ticks":1020,"time":{"ms":25}}},"info":{"ephemeral_id":"f3fa0a1c-0bdc-40e7-9666-abeb3e308b75","uptime":{"ms":150017}},"memstats":{"gc_next":12905552,"memory_alloc":6551512,"memory_total":314231136,"rss":258048}},"filebeat":{"harvester":{"open_files":15,"running":15}},"libbeat":{"config":{"module":{"running":0}},"pipeline":{"clients":1,"events":{"active":0}}},"registrar":{"states":{"current":15}},"system":{"load":{"1":0.1,"15":0.13,"5":0.15,"norm":{"1":0.025,"15":0.0325,"5":0.0375}}}}}}
  5. At this point no logs were being picked up by Filebeat and forwarded to Logstash.
    Adding enabled: true under both the input and the output fixed it:
  # Paths that should be crawled and fetched. Glob based paths.
  enabled: true
  paths:
    - /home/appadmin/elk/logs/*
    #- c:\programdata\elasticsearch\logs\*
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
  enabled: true
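With everything running, the pipeline can be exercised end to end by appending a line to one of the watched paths and then looking for a fresh logstash-* index in Elasticsearch. A sketch (LOGDIR defaults to a local folder purely for illustration; point it at the /home/appadmin/elk/logs directory configured above):

```shell
# Append a combined-format entry to a file matched by the filebeat
# `paths` glob; filebeat should ship it to logstash within seconds.
LOGDIR=${LOGDIR:-./elk-logs}   # illustrative default; use your real log dir
mkdir -p "$LOGDIR"
echo '127.0.0.1 - - [20/Nov/2018:09:10:00 +0800] "GET /test HTTP/1.1" 200 5 "-" "curl/7.29.0"' >> "$LOGDIR/test.log"
tail -n 1 "$LOGDIR/test.log"
```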

Install Kibana

  1. Unpack the archive
    tar -zxvf kibana-6.4.3-linux-x86_64.tar.gz
  2. Edit config/kibana.yml so it is reachable from outside
    server.host: "0.0.0.0"
  3. Open ip:5601 in a browser

That completes a basic ELK stack.
For further setup, see the follow-up article, ELK - Building a Real-Time Log Analysis System (2): configuring multiline log merging, @timestamp handling, and separate indices per Beats source.
