1 Enabling ZooKeeper's built-in metrics service
1.1 Enabling the monitoring service port
The New Metrics System, available since ZooKeeper 3.6.0, provides rich metrics that help users monitor ZooKeeper topics: znodes, network, disk, quorum, leader election, clients, security, failures, watches/sessions, the request processor, and more.
Prerequisites:
Enable it by setting the Prometheus MetricsProvider in zoo.cfg: metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
The port can also be configured via metricsProvider.httpPort (default: 7000).
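For example, a minimal zoo.cfg excerpt enabling the provider on the default port (the ZooKeeper server must be restarted for the change to take effect):

# zoo.cfg (excerpt): enable the built-in Prometheus MetricsProvider
metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=7000

The metrics are then served at http://<server>:7000/metrics.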
1.2 Configuring Prometheus
Configure the Prometheus scraper to target the ZooKeeper cluster endpoints:
- job_name: test-zk
  static_configs:
    - targets: ['192.168.10.32:7000','192.168.10.33:7000','192.168.10.34:7000']
Reload Prometheus.
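For example, either of the following reloads a locally running Prometheus; the HTTP endpoint only works if Prometheus was started with --web.enable-lifecycle:

# hot-reload via the lifecycle API (requires --web.enable-lifecycle)
curl -X POST http://localhost:9090/-/reload
# or signal the process directly
kill -HUP $(pidof prometheus)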
1.3 Alerting rules
The example alerts below highlight metrics that deserve particular attention. Note: they are for reference only and must be tuned to your actual workload and resource environment.
groups:
- name: zk-alert-example
  rules:
  - alert: ZooKeeper server is down
    expr: up == 0
    for: 1m
    labels:
      severity: critical
    annotations:
      summary: "Instance {{ $labels.instance }} ZooKeeper server is down"
      description: "{{ $labels.instance }} of job {{$labels.job}} ZooKeeper server is down: [{{ $value }}]."
  - alert: create too many znodes
    expr: znode_count > 1000000
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} create too many znodes"
      description: "{{ $labels.instance }} of job {{$labels.job}} create too many znodes: [{{ $value }}]."
  - alert: create too many connections
    expr: num_alive_connections > 50 # suppose we use the default maxClientCnxns: 60
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} create too many connections"
      description: "{{ $labels.instance }} of job {{$labels.job}} create too many connections: [{{ $value }}]."
  - alert: znode total occupied memory is too big
    expr: approximate_data_size / 1024 / 1024 > 1 * 1024 # more than 1024 MB (1 GB)
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} znode total occupied memory is too big"
      description: "{{ $labels.instance }} of job {{$labels.job}} znode total occupied memory is too big: [{{ $value }}] MB."
  - alert: set too many watch
    expr: watch_count > 10000
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} set too many watch"
      description: "{{ $labels.instance }} of job {{$labels.job}} set too many watch: [{{ $value }}]."
  - alert: a leader election happens
    expr: increase(election_time_count[5m]) > 0
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} a leader election happens"
      description: "{{ $labels.instance }} of job {{$labels.job}} a leader election happens: [{{ $value }}]."
  - alert: open too many files
    expr: open_file_descriptor_count > 300
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} open too many files"
      description: "{{ $labels.instance }} of job {{$labels.job}} open too many files: [{{ $value }}]."
  - alert: fsync time is too long
    expr: rate(fsynctime_sum[1m]) > 100
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} fsync time is too long"
      description: "{{ $labels.instance }} of job {{$labels.job}} fsync time is too long: [{{ $value }}]."
  - alert: take snapshot time is too long
    expr: rate(snapshottime_sum[5m]) > 100
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} take snapshot time is too long"
      description: "{{ $labels.instance }} of job {{$labels.job}} take snapshot time is too long: [{{ $value }}]."
  - alert: avg latency is too high
    expr: avg_latency > 100
    for: 1m
    labels:
      severity: warning
    annotations:
      summary: "Instance {{ $labels.instance }} avg latency is too high"
      description: "{{ $labels.instance }} of job {{$labels.job}} avg latency is too high: [{{ $value }}]."
  - alert: JvmMemoryFillingUp
    expr: jvm_memory_bytes_used / jvm_memory_bytes_max{area="heap"} > 0.8
    for: 5m
    labels:
      severity: warning
    annotations:
      summary: "JVM memory filling up (instance {{ $labels.instance }})"
      description: "JVM memory is filling up (> 80%)\n labels: {{ $labels }} value = {{ $value }}\n"
1.4 Discovery via a Kubernetes ServiceMonitor
ZooKeeper monitoring can also be wired up through a Kubernetes ServiceMonitor (Prometheus Operator).
1.4.1 Creating a Service for the ZooKeeper metrics port
# Service definition (for the exporter or the native metrics endpoint)
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-monitor
  labels:
    app: zookeeper-monitor
spec:
  ports:
  - name: metrics
    port: 7000 # default metrics port
  selector:
    app: zookeeper # match the label on your ZooKeeper pods
1.4.2 Creating the ZooKeeper ServiceMonitor
# ServiceMonitor definition (Prometheus Operator)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: zookeeper-monitor
  namespace: monitoring
spec:
  endpoints:
  - port: metrics
    interval: 30s
    scheme: http
  selector:
    matchLabels:
      app: zookeeper-monitor
  namespaceSelector:
    matchNames:
    - default
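A quick sanity check of the discovery chain, assuming the Service was created in the default namespace as the namespaceSelector above implies:

# the Service should list the ZooKeeper pod IPs as endpoints
kubectl -n default get endpoints zookeeper-monitor
# the ServiceMonitor should exist where the Prometheus Operator watches
kubectl -n monitoring get servicemonitor zookeeper-monitor
# scrape one endpoint manually through a port-forward
kubectl -n default port-forward svc/zookeeper-monitor 7000:7000 &
curl -s http://127.0.0.1:7000/metrics | head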
2 Running zookeeper-exporter monitoring
2.1 Downloading the Helm chart
Reference: https://github.com/feiyu563/prometheus-exporter/tree/master
Note that the following parameters need to be modified:
1. Image pull policy:
- image: reg.hrlyit.com/common/zookeeper_exporter:latest
  name: zookeeper-exporter
  imagePullPolicy: IfNotPresent
2. Container label value: app: zookeeper-exporter
3. apiVersion:
apiVersion: apps/v1
4. For a ZooKeeper cluster, it is recommended to suffix the container names with the ordinals 0, 1, 2; this can be adjusted after startup.
Start the three releases:
helm upgrade --install zookeeper-exporter-0 --namespace monitoring --set env.url='zookeeper-exporter-0' --set env.zookeeper_addr='zookeeper-0.zookeeper-headless.default:2181' ./zookeeper-exporter
helm upgrade --install zookeeper-exporter-1 --namespace monitoring --set env.url='zookeeper-exporter-1' --set env.zookeeper_addr='zookeeper-1.zookeeper-headless.default:2181' ./zookeeper-exporter
helm upgrade --install zookeeper-exporter-2 --namespace monitoring --set env.url='zookeeper-exporter-2' --set env.zookeeper_addr='zookeeper-2.zookeeper-headless.default:2181' ./zookeeper-exporter
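After installation, a quick sketch of verifying that the three releases and their pods are running (label value as set in step 2 above):

helm -n monitoring list | grep zookeeper-exporter
kubectl -n monitoring get pods -l app=zookeeper-exporter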
2.2 Creating the Service
Because three zookeeper-exporter releases were installed, three Services were also created automatically. Delete those three Services and recreate a single one from the following YAML:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-exporter
  name: zookeeper-exporter
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 9141
    protocol: TCP
    targetPort: http
  selector:
    app: zookeeper-exporter
  sessionAffinity: None
  type: ClusterIP
2.3 Creating the ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: zookeeper-exporter
  namespace: monitoring
spec:
  endpoints:
  - interval: 30s
    path: /metrics
    port: http
  namespaceSelector:
    matchNames:
    - monitoring
  selector:
    matchLabels:
      app: zookeeper-exporter
2.4 Deploying zookeeper_exporter as a binary
Download the binary release package from:
https://github.com/carlpett/zookeeper_exporter/releases
Place the zookeeper_exporter binary under /opt/soft/zookeeper_exporter:
mkdir /opt/soft/zookeeper_exporter
chmod +x /opt/soft/zookeeper_exporter/zookeeper_exporter
## Startup script: /opt/soft/zookeeper_exporter/zk_exporter.sh
#!/bin/bash
cd /opt/soft/zookeeper_exporter && ./zookeeper_exporter -zookeeper localhost:2181 -bind-addr ":9141"
## Start
nohup bash /opt/soft/zookeeper_exporter/zk_exporter.sh &
## Configure start on boot
echo 'bash /opt/soft/zookeeper_exporter/zk_exporter.sh' >> /etc/rc.local
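As a more robust alternative to rc.local, a minimal systemd unit could be used instead (a sketch: the unit name is arbitrary; the binary path and flags match the script above):

# /etc/systemd/system/zookeeper_exporter.service
[Unit]
Description=zookeeper_exporter
After=network.target

[Service]
ExecStart=/opt/soft/zookeeper_exporter/zookeeper_exporter -zookeeper localhost:2181 -bind-addr :9141
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with: systemctl daemon-reload && systemctl enable --now zookeeper_exporter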
## Verify
curl 127.0.0.1:9141/metrics
Add the scrape job to the Prometheus configuration and restart it:
- job_name: "zookeeper-exporter"
  static_configs:
    - targets: ["10.51.10.4:9141","10.51.10.5:9141","10.51.10.6:9141"]
2.5 Core monitoring metrics
Status
zk_up: node status (1 = up)
zk_server_state: node role (leader/follower)
zk_znode_count: number of znodes
zk_packets_received: packets received
zk_packets_sent: packets sent
zk_outstanding_requests: number of client requests queued for processing
Performance
zk_avg_latency: average request latency
zk_max_latency: maximum request latency
zk_min_latency: minimum request latency
Client connections
zk_num_alive_connections: number of live client connections
Data size and file descriptors
zookeeper_approximate_data_size: approximate size of the ZooKeeper data set
zookeeper_open_file_descriptor_count: number of open file descriptors
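A few illustrative PromQL queries over these exporter metrics (thresholds are examples only):

# the exporter reports the server as down
zk_up == 0
# average request latency is too high
zk_avg_latency > 100
# connections approaching the default maxClientCnxns of 60
zk_num_alive_connections > 50
# requests are piling up faster than they are processed
zk_outstanding_requests > 10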