overview
In a Kubernetes cluster, the time the scheduler spends on each phase of scheduling a pod is a good indicator of scheduler performance, but I could not find a ready-made metric for it in the community. This article therefore uses the filebeat + logstash + influxdb + grafana stack to collect these metrics and put them on a dashboard.
Trace entries in kube-scheduler.INFO
I0117 18:08:43.224106 87811 trace.go:76] Trace[1863564834]: "Scheduling fat-app/xxxxxxxx-xxxxxxxx-3404-0" (started: 2019-01-17 18:08:43.106820332 +0800 CST m=+116911.416416886) (total time: 117.213043ms):
Trace[1863564834]: [117.187μs] [117.187μs] Computing predicates
Trace[1863564834]: [2.093583ms] [1.976396ms] Prioritizing
Trace[1863564834]: [117.172797ms] [115.079214ms] Selecting host
Trace[1863564834]: [117.213043ms] [40.246μs] END
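Each step line carries two bracketed durations; judging by the trace output, the first is the elapsed time since the trace started and the second is the step's own duration (e.g. Prioritizing itself took 1.976396ms of the 2.093583ms elapsed). A small Ruby helper to split a step line apart (my own illustration, not part of the pipeline below):

```ruby
# Parse one trace step line such as
#   Trace[1863564834]: [2.093583ms] [1.976396ms] Prioritizing
# First bracket: elapsed time since trace start; second: the step's own duration.
def parse_step(line)
  m = line.match(/\[([\d.]+)(\S+?)\]\s+\[([\d.]+)(\S+?)\]\s+(.+)/)
  return nil unless m
  { elapsed: m[1].to_f, elapsed_unit: m[2],
    step: m[3].to_f, step_unit: m[4], name: m[5] }
end

p parse_step('Trace[1863564834]: [2.093583ms] [1.976396ms] Prioritizing')
```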
filebeat configuration
filebeat.prospectors:
- type: log
  paths:
    - /var/log/kubernetes/kube-scheduler.*.INFO.*
  include_lines: ['trace','Trace']
  # pattern: a regular expression
  # negate: true or false (default false); false merges lines that match pattern
  #         into the previous line, true merges lines that do NOT match pattern
  #         into the previous line
  # match: after or before — append to the end or the beginning of the previous line
  multiline:
    pattern: '^Trace\[[0-9]+\]'
    negate: false
    match: after
  fields:
    tag: scheduler # add a tag
  fields_under_root: true
  scan_frequency: 10s
  ignore_older: 6h
  close_inactive: 5m
  close_removed: true
  clean_removed: true
  tail_files: false
fields:
  zone: $zone_name # add your own environment info
fields_under_root: true
output.logstash:
  enabled: true
  hosts:
    - $logstash_host:5044 # address of logstash
logging.level: info
logging.metrics.enabled: false
http.enabled: true
http.host: localhost
http.port: 5066
setup.dashboards.enabled: false
setup.template.enabled: false
path.home: /usr/share/filebeat
path.data: /var/lib/filebeat
path.logs: /var/log/filebeat
filebeat.registry_file: ${path.data}/registry
max_procs: 2
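With negate: false and match: after, every line that matches ^Trace\[[0-9]+\] is appended to the previous line, so a whole trace becomes a single event. A Ruby sketch of that merge rule (my own illustration of filebeat's multiline behavior, not its actual code):

```ruby
# Multiline merge rule used above (negate: false, match: after):
# lines matching the pattern are appended to the previous event.
PATTERN = /^Trace\[[0-9]+\]/

def merge_multiline(lines)
  events = []
  lines.each do |line|
    if line =~ PATTERN && !events.empty?
      events[-1] = events[-1] + ' ' + line # continuation line
    else
      events << line                       # a new event starts here
    end
  end
  events
end
```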
note:
- For debugging, run filebeat -e -c filebeat.yml -d "publish" to verify that the collected data is correct.
- If you hit the error below, stop filebeat, delete /var/lib/filebeat/registry, and restart filebeat so that it re-registers:
ERROR registrar/registrar.go:346 Writing of registry returned error: rename /var/lib/filebeat/registry.new /var/lib/filebeat/registry: no such file or directory. Continuing...
Installing and configuring Logstash
install logstash
- Download and install logstash
$ wget https://artifacts.elastic.co/downloads/logstash/logstash-6.5.4.rpm
$ rpm -ivh logstash-6.5.4.rpm
- Go to logstash's home directory and link /etc/logstash into it
$ cd /usr/share/logstash/config
$ ln -s /etc/logstash/* .
- Fix the ownership of the logstash directories
$ chown -R logstash:logstash /etc/logstash
$ chown -R logstash:logstash /usr/share/logstash
$ chown -R logstash:logstash /var/lib/logstash
- Install the required plugins (details below)
$ cd /usr/share/logstash/
$ bin/logstash-plugin install logstash-output-influxdb
- Check that the configuration file is valid
$ bin/logstash -f /etc/logstash/logstash.conf --config.test_and_exit
- Start
$ bin/logstash -f /etc/logstash/logstash.conf --config.reload.automatic
install-plugin
- logstash-filter-grok
- logstash-output-influxdb
The plugins have to be installed separately. If you cannot reach the external network, they can be installed via the mirror at https://gems.ruby-china.com/ as follows:
(1) Use a reasonably recent RubyGems version, 2.6.x or newer is recommended
$ yum install -y gem
$ gem update --system # this step needs an unrestricted network connection
$ gem -v
2.6.3
(2) Switch to the domestic mirror, and make sure gems.ruby-china.com is the only source
$ gem sources --add https://gems.ruby-china.com/ --remove https://rubygems.org/
$ gem sources -l
https://gems.ruby-china.com
(3) Install a single plugin
$ gem install logstash-output-influxdb
(4) Install all plugins listed in the Gemfile
# add the desired plugin to the Gemfile
gem 'logstash-output-influxdb', '~> 5.0', '>= 5.0.5'
$ bin/logstash-plugin install --no-verify
self-defined grok-pattern
KUBESCHEDULER .*Scheduling %{NOTSPACE:namespace:tag}/%{NOTSPACE:podname:tag}\".*total time: %{NUMBER:total_time}%{NOTSPACE:total_mesurement}\).*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:computing_time}%{NOTSPACE:computing_mesurement}\]\s+Computing %{WORD:ComputingPredicates:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:prioritizing_time}%{NOTSPACE:prioritizing_mesurement}\]\s+%{WORD:Prioritizing:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:SelectingHost_time}%{NOTSPACE:SelectingHost_mesurement}\]\s+%{WORD:SelectingHost:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:end_time}%{NOTSPACE:end_mesurement}\]\s+%{WORD:END:tag}
KUBESCHEDULERSHORT .*Scheduling %{NOTSPACE:namespace:tag}/%{NOTSPACE:podname:tag}\".*total time: %{NUMBER:total_time}%{NOTSPACE:total_mesurement}\).*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:computing_time}%{NOTSPACE:computing_mesurement}\]\s+Computing %{WORD:ComputingPredicates:tag}.*Trace\[\S+\].*\[\S+\].*\[%{NUMBER:end_time}%{NOTSPACE:end_mesurement}\]\s+%{WORD:END:tag}
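As a quick sanity check outside logstash, the core of KUBESCHEDULERSHORT can be approximated with a plain Ruby regex (a simplified illustration of the pattern's shape, not the grok definition itself):

```ruby
# Simplified Ruby approximation of KUBESCHEDULERSHORT (illustrative only):
# pull namespace/pod, the total time, and the END step out of a merged trace event.
SHORT = %r{"Scheduling (?<namespace>\S+)/(?<podname>\S+)".*total time: (?<total_time>[\d.]+)(?<total_mesurement>[^)]+)\).*\[(?<end_time>[\d.]+)(?<end_mesurement>\S+)\]\s+END}m

event = 'Trace[1]: "Scheduling fat-app/pod-0" (total time: 117.213043ms): ' \
        'Trace[1]: [117.213043ms] [40.246μs] END'
m = event.match(SHORT)
p m[:namespace], m[:podname], m[:total_time], m[:total_mesurement], m[:end_time]
```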
logstash.conf configuration file
input {
  beats {
    port => "5044"
    client_inactivity_timeout => "60" # default is 60s
  }
}
filter {
  grok {
    # the grok plugin must be installed, and output is produced only when a pattern matches
    patterns_dir => ["/usr/share/logstash/config/pattern"] # self-defined grok patterns
    match => {
      "message" => ["%{KUBESCHEDULER}", "%{KUBESCHEDULERSHORT}"]
    }
    remove_field => ["message"]
  }
  ruby {
    # arithmetic in logstash: convert all durations to a single unit (μs)
    code => "
      total_unit = event.get('total_mesurement')
      computing_unit = event.get('computing_mesurement')
      prioritizing_unit = event.get('prioritizing_mesurement')
      selecting_unit = event.get('SelectingHost_mesurement')
      ending_unit = event.get('end_mesurement')
      if total_unit == 'ms'
        event.set('total_time',(event.get('total_time').to_f*1000))
      elsif total_unit == 's'
        event.set('total_time',(event.get('total_time').to_f*1000000))
      else
        event.set('total_time',(event.get('total_time').to_f))
      end
      if computing_unit == 'ms'
        event.set('computing_time',(event.get('computing_time').to_f*1000))
      elsif computing_unit == 's'
        event.set('computing_time',(event.get('computing_time').to_f*1000000))
      else
        event.set('computing_time',(event.get('computing_time').to_f))
      end
      if prioritizing_unit == 'ms'
        event.set('prioritizing_time',(event.get('prioritizing_time').to_f*1000))
      elsif prioritizing_unit == 's'
        event.set('prioritizing_time',(event.get('prioritizing_time').to_f*1000000))
      else
        event.set('prioritizing_time',(event.get('prioritizing_time').to_f))
      end
      if selecting_unit == 'ms'
        event.set('SelectingHost_time',(event.get('SelectingHost_time').to_f*1000))
      elsif selecting_unit == 's'
        event.set('SelectingHost_time',(event.get('SelectingHost_time').to_f*1000000))
      else
        event.set('SelectingHost_time',(event.get('SelectingHost_time').to_f))
      end
      if ending_unit == 'ms'
        event.set('end_time',(event.get('end_time').to_f*1000))
      elsif ending_unit == 's'
        event.set('end_time',(event.get('end_time').to_f*1000000))
      else
        event.set('end_time',(event.get('end_time').to_f))
      end
    "
    remove_field => ["total_mesurement","computing_mesurement","prioritizing_mesurement","SelectingHost_mesurement","end_mesurement","prospector","offset","tags","beat","source"]
  }
}
output {
  influxdb {
    db => "$dbname"
    host => "$influxdb-host"
    port => "8086"
    user => "$username"
    password => "$yourpasswd"
    measurement => "kubeSchedulerTimeCost"
    coerce_values => {
      "total_time" => "float"
      "computing_time" => "float"
      "prioritizing_time" => "float"
      "SelectingHost_time" => "float"
      "end_time" => "float"
    }
    data_points => {
      "namespace" => "%{namespace}"
      "podname" => "%{podname}"
      "ComputingPredicates" => "%{ComputingPredicates}"
      "Prioritizing" => "%{Prioritizing}"
      "SelectingHost" => "%{SelectingHost}"
      "End" => "%{END}"
      "zone" => "%{zone}"
      "total_time" => "%{total_time}"
      "computing_time" => "%{computing_time}"
      "prioritizing_time" => "%{prioritizing_time}"
      "SelectingHost_time" => "%{SelectingHost_time}"
      "end_time" => "%{end_time}"
      "host" => "%{host}"
    }
    send_as_tags => ["host","zone","namespace","podname","ComputingPredicates","Prioritizing","SelectingHost","End"]
  }
  # elasticsearch {
  #   hosts => "$es-host:9200"
  #   manage_template => false
  #   index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  #   document_type => "%{[@metadata][type]}"
  # }
  stdout { codec => rubydebug }
}
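The five near-identical if/elsif blocks in the ruby filter all apply one rule: normalize a duration to microseconds based on its unit suffix. The rule boils down to this (a sketch with an illustrative helper name, not part of the config):

```ruby
# Normalize a scheduler trace duration to microseconds, mirroring the unit
# handling in the ruby filter above (ms → ×1000, s → ×1000000, μs unchanged).
def to_micros(value, unit)
  case unit
  when 'ms' then value.to_f * 1000
  when 's'  then value.to_f * 1_000_000
  else           value.to_f # already μs
  end
end

p to_micros('117.213043', 'ms')
```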
note:
- The pipeline can be run with bin/logstash -f ./config/conf.d/scheduler.conf --config.reload.automatic
- If the logstash service is up but the beats input port 5044 never opens, it is likely a permission problem; fix the ownership of the queue directory:
chown -R logstash:logstash /var/lib/logstash
influxdb
influxdb needs the corresponding port opened (8086, as used in the logstash output above).
grafana dashboard
The dashboard shows, per cluster, how long pods spend in each scheduling phase and when they were scheduled.
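As an example, a Grafana panel on the influxdb datasource could chart the average total scheduling time per zone with a query along these lines (InfluxQL; the measurement and field names come from the logstash output above, while $zone is an assumed dashboard template variable):

```sql
SELECT mean("total_time") FROM "kubeSchedulerTimeCost"
WHERE "zone" =~ /^$zone$/ AND $timeFilter
GROUP BY time($__interval), "zone" fill(null)
```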

scheduler.png