ELK consists of three open-source tools: Elasticsearch, Logstash, and Kibana. Official site: https://www.elastic.co/products
Elasticsearch
- Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding, index replicas, a RESTful API, multiple data sources, and automatic search load balancing.
Logstash
- Logstash is a fully open-source tool that collects and parses your logs and stores them for later use (e.g. searching).
Kibana
- Kibana is a free, open-source tool that provides a friendly web interface for the log analysis done by Logstash and Elasticsearch, helping you summarize, analyze, and search important log data.
How it works:
(diagram)
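In outline: Logstash collects and transforms log events and ships them to Elasticsearch, which indexes and stores them; Kibana then queries Elasticsearch to visualize the results. A minimal end-to-end pipeline sketch (the file path and host below are placeholders, not from this setup):

```
input {
  file { path => "/var/log/app/*.log" }            # collect: tail application logs
}
output {
  elasticsearch { hosts => ["localhost:9200"] }    # store: index events into ES
  # Kibana then reads the logstash-* indices from Elasticsearch
}
```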
Installing Logstash
- Install a JDK
[root@h001 jdk1.8.0_45]# java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
- Install Logstash and add environment variables
[elk@h001 soft]$ tar -xzvf logstash-6.6.0.tar.gz -C ~/app/
[elk@h001 soft]$ cd ~/app
[elk@h001 app]$ ll
total 4
drwxrwxr-x 12 elk elk 4096 May 5 11:35 logstash-6.6.0
[elk@h001 soft]$ vi ~/.bash_profile
export LOGSTASH_HOME=/home/elk/app/logstash-6.6.0
export PATH=$LOGSTASH_HOME/bin:$PATH
- After installation, run the following command:
[elk@h001 logstash-6.6.0]$ logstash -e 'input { stdin { } } output { stdout {} }'
Sending Logstash logs to /home/elk/app/logstash-6.6.0/logs which is now configured via log4j2.properties
[2019-05-05T11:39:10,547][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-05-05T11:39:10,564][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.0"}
[2019-05-05T11:39:17,173][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-05-05T11:39:17,275][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x796505b4 run>"}
[2019-05-05T11:39:17,337][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
The stdin plugin is now waiting for input:
[2019-05-05T11:39:17,601][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
- Once it has started, type the following at the console:
hello!!!
{
"@timestamp" => 2019-05-06T01:54:24.393Z,
"message" => "hello!!!",
"@version" => "1",
"host" => "h001"
}
- Create a configuration file
Specifying the configuration on the command line with the -e flag, as above, is a common way to work, but longer configurations quickly become unwieldy there. In that case, first create a simple configuration file and tell Logstash to use it. Write a simple configuration file named test.conf in the config directory.
Logstash uses input and output sections to define where logs are collected from and where they are written. In this example, input defines an input called "stdin" and output defines an output called "stdout"; whatever characters we type, Logstash echoes them back in a structured format. Use Logstash's -f flag to read the configuration file, and run the following to test:
[elk@h001 config]$ vi test.conf
input { stdin { } }
output { stdout { }}
[elk@h001 config]$ logstash -f test.conf
Sending Logstash logs to /home/elk/app/logstash-6.6.0/logs which is now configured via log4j2.properties
[2019-05-06T10:28:01,611][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-05-06T10:28:01,625][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.6.0"}
[2019-05-06T10:28:07,493][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2019-05-06T10:28:07,627][INFO ][logstash.pipeline ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x62579fc2 run>"}
The stdin plugin is now waiting for input:
[2019-05-06T10:28:07,687][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-05-06T10:28:07,875][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
hello!!!++++
{
"host" => "h001",
"@version" => "1",
"@timestamp" => 2019-05-06T02:28:20.898Z,
"message" => "hello!!!++++"
}
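Beyond input and output, Logstash also supports a filter section for parsing events before they are shipped. A hedged sketch using the grok filter plugin (the log format and field names here are illustrative, not from this setup):

```
input { stdin { } }
filter {
  grok {
    # parse lines like "2019-05-06 10:28:07 INFO something happened"
    match => { "message" => "%{TIMESTAMP_ISO8601:ts} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output { stdout { } }
```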
Installing Elasticsearch
- Download Elasticsearch, extract it, and add environment variables
[elk@h001 soft]$ tar -xzvf elasticsearch-6.6.0.tar.gz -C ~/app
[elk@h001 soft]$ vi ~/.bash_profile
export ES_HOME=/home/elk/app/elasticsearch-6.6.0
export PATH=$ES_HOME/bin:$PATH
[elk@h001 soft]$ source ~/.bash_profile
[elk@h001 soft]$ echo $ES_HOME
/home/elk/app/elasticsearch-6.6.0
- Start Elasticsearch
[elk@h001 ~]$ nohup elasticsearch &
Then check the log; once the following lines appear, ES has started and its web endpoint can be accessed:
[elk@h001 ~]$ more nohup.out
[2019-05-06T11:33:53,769][INFO ][o.e.h.n.Netty4HttpServerTransport] [3bWHi_8] publish_address {172.26.121.32:9200}, bound_addresses {0.0.0.0:9200}
[2019-05-06T11:33:53,770][INFO ][o.e.n.Node ] [3bWHi_8] started
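A quick way to confirm the node is reachable is to fetch the root banner over HTTP. A minimal Python sketch (the default port 9200 is assumed; the function returns None when nothing answers):

```python
import json
import urllib.request
import urllib.error

def es_banner(url="http://localhost:9200"):
    """Fetch the Elasticsearch root banner; return the parsed dict, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return json.load(resp)
    except (urllib.error.URLError, OSError):
        return None

if __name__ == "__main__":
    banner = es_banner()
    if banner:
        print("ES is up, version", banner["version"]["number"])
    else:
        print("ES not reachable")
```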

- Problems encountered while starting ES
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [65535] for elasticsearch process is too low, increase to at least [65536]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Solutions:
For [1]: sudo vi /etc/security/limits.conf and add the line * soft nofile 65536; a system restart is required afterwards.
For [2]: sudo vi /etc/sysctl.conf, add vm.max_map_count=655360, then run sysctl -p to apply it.
Also note that ES cannot be started as the root account.
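Before starting ES, both limits can be checked programmatically. A minimal Linux-only sketch in Python:

```python
import resource

# vm.max_map_count is exposed via procfs on Linux
with open("/proc/sys/vm/max_map_count") as f:
    max_map_count = int(f.read())

# soft limit on open file descriptors for the current process
soft_nofile, _hard = resource.getrlimit(resource.RLIMIT_NOFILE)

print(f"vm.max_map_count = {max_map_count} (ES needs >= 262144)")
print(f"nofile soft limit = {soft_nofile} (ES needs >= 65536)")
```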
- Test writing to ES
[elk@h001 config]$ vi logstash-es.conf
input { stdin { } }
output {
elasticsearch {hosts => "172.26.121.32:9200" }
stdout { }
}
[elk@h001 config]$ logstash -f logstash-es.conf
.....
[2019-05-06T14:01:07,003][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
hello
{
"@timestamp" => 2019-05-06T06:01:13.669Z,
"message" => "hellohello",
"@version" => "1",
"host" => "h001"
}
Check whether the data reached ES:
[elk@h001 ~]$ curl 'http://localhost:9200/_search?pretty'
{
"took" : 52,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 1,
"max_score" : 1.0,
"hits" : [
{
"_index" : "logstash-2019.05.06",
"_type" : "doc",
"_id" : "g267i2oBSTfxCsN6YJri",
"_score" : 1.0,
"_source" : {
"@timestamp" : "2019-05-06T06:01:13.669Z",
"message" : "hellohello",
"@version" : "1",
"host" : "h001"
}
}
]
}
}
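The same check can be scripted. A sketch that pulls the message field out of a _search response shaped like the one above (the sample is abbreviated to the fields actually used):

```python
import json

def extract_messages(search_response):
    """Return the message field of every hit in an Elasticsearch _search response."""
    return [hit["_source"]["message"] for hit in search_response["hits"]["hits"]]

# Abbreviated version of the _search response shown above
sample = json.loads("""
{
  "hits": {
    "total": 1,
    "hits": [
      {"_index": "logstash-2019.05.06",
       "_source": {"@timestamp": "2019-05-06T06:01:13.669Z",
                   "message": "hellohello",
                   "host": "h001"}}
    ]
  }
}
""")

print(extract_messages(sample))  # -> ['hellohello']
```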
- Installing ES plugins
Reference: https://blog.csdn.net/u012332735/article/details/54946355
Installing Kibana
- Download Kibana, extract it, and configure environment variables
[elk@h001 soft]$ tar -xzvf kibana-6.6.0-linux-x86_64.tar.gz -C ~/app
[elk@h001 soft]$ vi ~/.bash_profile
export KIBANA_HOME=/home/elk/app/kibana-6.6.0-linux-x86_64
export PATH=$KIBANA_HOME/bin:$PATH
[elk@h001 soft]$ source ~/.bash_profile
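By default Kibana listens on port 5601 and expects Elasticsearch on localhost:9200; if ES runs elsewhere or Kibana should be reachable from other machines, config/kibana.yml needs adjusting. A sketch for Kibana 6.x (the IP below is this tutorial's host, an assumption for other setups):

```
# config/kibana.yml
server.host: "0.0.0.0"                           # listen on all interfaces, not just localhost
elasticsearch.url: "http://172.26.121.32:9200"   # Kibana 6.x setting (7.x renames it to elasticsearch.hosts)
```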
- Start Kibana
[elk@h001 ~]$ kibana
log [08:56:44.834] [info][listening] Server running at http://0.0.0.0:5601
log [08:56:45.125] [info][status][plugin:spaces@6.6.0] Status changed from yellow to green - Ready
Open the web page:
(screenshot)
On first opening, add an index pattern:
(screenshot)
Select the default @timestamp time field:
(screenshot)
Then, in Discover, select the corresponding index:
(screenshot)
and the collected logs are displayed:
(screenshot)