HPA is one of the key elasticity features Kubernetes provides. In short, when an application's resource consumption is high, HPA automatically scales it out based on user-configured thresholds, reducing manual intervention; when consumption is low, it scales the application in, cutting resource usage and saving cost. This series covers HPA's features, usage, and extensions across the following topics: HPA background, how HPA works, HPA components, HPA features and algorithm details, a demo, advanced HPA usage, custom metrics, scheduled scaling, related projects, and HPA source-code analysis. Covering all of that in one article would be too long, so it is split into chapters
1. HPA Background
HPA stands for Horizontal Pod Autoscaler. Through the Kubernetes HPA controller, it automatically scales a workload (think of it as a business application) out and in. Two observations motivate it:
- A service is not under heavy load around the clock, yet to absorb daily or event-driven traffic spikes it is often deployed with many replicas. Most of the time the load is low, so resource utilization ends up low
- Teams rarely estimate the CPU, memory, and other resources a service needs accurately, and tend to over-provision, which again drives utilization down
To address low utilization and inaccurate sizing, resources need to be adjusted dynamically
Kubernetes currently offers three dimensions of autoscaling:
- Cluster Autoscaler: automatically grows and shrinks the set of cluster Nodes. When the cluster runs out of capacity, it asks the Cloud Provider (GCE, GKE, and AWS are supported) to create new Nodes, and it removes Nodes whose utilization stays low for a long time to save cost
- Vertical Pod Autoscaler: frees users from keeping the resource requests of their Pods' containers up to date. Once configured, it sets requests automatically based on observed usage, enabling proper scheduling so that each Pod gets an appropriate amount of resources. Currently in beta
- Horizontal Pod Autoscaler: automatically scales the number of Pods in a ReplicationController, Deployment, ReplicaSet, or StatefulSet based on CPU utilization. Besides CPU utilization, it can also scale on custom metrics provided by the application
This series focuses on HPA only; the other two components are not covered
2. How HPA Works
How does HPA work?
The diagram below shows the HPA workflow. HPA scales applications from a periodic control loop (the HPA controller); the period is set with the --horizontal-pod-autoscaler-sync-period flag when starting k8s (default 15 seconds).

In each period, the HPA controller reads the user-defined HPA rules (covered below) and fetches the corresponding CPU and memory metrics from the resource metrics APIs (served by metrics-server or a custom metrics server).
It helps to know the two ways the Kubernetes APIServer can be extended:
1. API Aggregation: external services are registered at specific APIServer paths through the aggregation layer (metrics-server, custom metrics servers)
2. CRDs (not involved here)
HPA consumes three metrics APIs, each serving a different class of metric:
- Resource metrics use the metrics.k8s.io API, normally served by metrics-server
- Custom metrics use the custom.metrics.k8s.io API, served by an "adapter" API server from a metrics vendor (for example prometheus-adapter)
- External metrics use the external.metrics.k8s.io API, which may be served by the same custom metrics adapter as above
The diagram below shows HPA calling these registered APIs to fetch metrics

Here is a more concrete architecture diagram from NetEase Qingzhou

HPA execution steps:
1. Every 15s the HPA controller runs one pass: it reads the user-defined HPA rules, then requests CPU utilization from metrics-server through the aggregation layer
2. metrics-server periodically scrapes the application metrics from the kubelet API
3. kubelet collects them with its built-in cAdvisor (a metrics collection component), which essentially reads each application's resource usage from /sys/fs/cgroup/
4. From the CPU utilization returned by metrics-server, the controller computes a new replica count using the formula described below
5. If the current replica count differs from the computed one, the controller scales up or down, ultimately by setting the Deployment's replicas field to the computed value
6. The Deployment controller observes the replica count change and performs the actual Pod scaling
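The steps above can be sketched as a single reconcile pass. This is a simplified model with names of my own choosing, not the controller's real code; the metric fetch is a stub standing in for the metrics-server query:

```python
import math

def fetch_cpu_utilization(deployment: str) -> float:
    """Stub standing in for a metrics.k8s.io query through the aggregation layer."""
    return 200.0  # percent of request, as metrics-server would report it

def reconcile_once(current_replicas: int, target_utilization: float,
                   deployment: str = "hpa-demo") -> int:
    """One pass of the HPA control loop: fetch the metric, compute replicas."""
    current = fetch_cpu_utilization(deployment)
    desired = math.ceil(current_replicas * current / target_utilization)
    # The real controller would now patch the Deployment's replicas field
    # and let the Deployment controller do the actual Pod scaling.
    return desired

# 1 replica running at 200% of its request against a 50% target -> 4 replicas
print(reconcile_once(current_replicas=1, target_utilization=50.0))
```

The real loop repeats this every sync period; here a single pass is enough to show the data flow from metric to replica count.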
3. HPA Components
As the HPA architecture diagram shows, using every HPA metric type requires services behind all three APIs:
- metrics-server: serves the metrics.k8s.io API
- custom-metrics-server: serves the custom.metrics.k8s.io API
- external-metrics-server: serves the external.metrics.k8s.io API
These need not be three separate components; prometheus-adapter, for example, also implements the metrics-server API, so with prometheus-adapter installed, no separate metrics-server is needed
Installing metrics-server
Installation steps:
# Install into the k8s cluster from the release manifest
$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Or install via the helm chart (helm must be installed first)
$ helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
$ helm upgrade --install metrics-server metrics-server/metrics-server
Once metrics-server is installed, the kubectl top command works; use it whenever you need a Pod's CPU and memory usage
Verify the installation
# Check that metrics-server is deployed
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl get pods -n kube-system -l k8s-app=metrics-server
NAME READY STATUS RESTARTS AGE
metrics-server-96bbfd9f5-mz75w 1/1 Running 3 (72m ago) 8d
# Check that the API is registered
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl get apiservice | grep metrics
v1beta1.metrics.k8s.io kube-system/metrics-server True 8d
# Try fetching node metrics
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "minikube",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/minikube",
        "creationTimestamp": "2022-04-10T02:46:10Z"
      },
      "timestamp": "2022-04-10T02:45:49Z",
      "window": "30s",
      "usage": {
        "cpu": "138063026n",
        "memory": "1332100Ki"
      }
    }
  ]
}
# Inspect applications' CPU and memory with kubectl top
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl top pod --all-namespaces
NAMESPACE NAME CPU(cores) MEMORY(bytes)
kube-system coredns-65c54cc984-d9w46 0m 1Mi
kube-system etcd-minikube 0m 0Mi
kube-system kube-apiserver-minikube 0m 0Mi
kube-system kube-controller-manager-minikube 0m 0Mi
kube-system kube-proxy-hbwsc 0m 2Mi
kube-system kube-scheduler-minikube 0m 2Mi
kube-system metrics-server-96bbfd9f5-mz75w 0m 0Mi
kube-system storage-provisioner 0m 3Mi
Installing custom-metrics-server
Many open-source projects and vendors ship their own custom-metrics-server implementations; here we only cover prometheus-adapter
Installation steps
# helm is the k8s package manager, similar to brew on macOS; for an introduction see: http://www.itdecent.cn/p/4bd853a8068b (Helm from beginner to practice)
$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
$ helm repo update
$ helm install my-release prometheus-community/prometheus-adapter
Verify the installation
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "custom.metrics.k8s.io/v1beta1",
  "resources": []
}
4. HPA Features and Algorithm Details
This chapter covers the features HPA provides (scaling on the various metric types), the API's evolution, and the algorithm details
HPA API evolution
HPA currently has four API versions: autoscaling/v1, autoscaling/v2beta1, autoscaling/v2beta2, and autoscaling/v2
- autoscaling/v1: only supports scaling on a single CPU metric
- autoscaling/v2beta1: adds custom metrics
- autoscaling/v2beta2: adds external metrics (used in this article)
- autoscaling/v2: the stable version (GA in Kubernetes v1.23); functionally it matches v2beta2
v1 example
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
v2beta2 example:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
  - type: Pods
    pods:
      metric:
        name: packets-per-second
      target:
        type: AverageValue
        averageValue: 1k
  - type: Object
    object:
      metric:
        name: requests-per-second
      describedObject:
        apiVersion: networking.k8s.io/v1beta1
        kind: Ingress
        name: main-route
      target:
        type: Value
        value: 10k
  - type: External
    external:
      metric:
        name: queue_messages_ready
        selector: "queue=worker_tasks"
      target:
        type: AverageValue
        averageValue: 30
Note: the metrics.k8s.io API was originally served by Heapster; metrics-server replaced it later
HPA algorithm details
The HPA controller computes a scaling ratio from the current and desired metric values; a later chapter will cover how the scaling algorithm evolved
desiredReplicas = ceil[currentReplicas * (currentMetricValue / desiredMetricValue)]
For example:
# Scale-up
current metric: 200m
desired metric: 100m
current replicas: 1
desiredReplicas = ceil(1 * (200 / 100)) = 2
# Scale-down
current metric: 50m
desired metric: 100m
current replicas: 1
desiredReplicas = ceil(1 * (50 / 100)) = ceil(0.5) = 1
Note that scaling is also gated by a global setting, horizontal-pod-autoscaler-tolerance (default 0.1): if the computed ratio falls within the tolerance band, no scaling happens. For example, with an HPA configured to scale up above 50% CPU utilization, the 0.1 tolerance means scale-up only triggers once utilization exceeds 50% + 50% * 0.1 = 55%.
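The formula plus the tolerance check can be sketched as follows (function and constant names are mine, not the controller's):

```python
import math

TOLERANCE = 0.1  # default of --horizontal-pod-autoscaler-tolerance

def desired_replicas(current_replicas: int, current_metric: float,
                     target_metric: float) -> int:
    """desiredReplicas = ceil[currentReplicas * (currentMetric / targetMetric)],
    skipped entirely when the ratio falls inside the tolerance band."""
    ratio = current_metric / target_metric
    if abs(ratio - 1.0) <= TOLERANCE:
        return current_replicas  # within 0.9..1.1: no scaling
    return math.ceil(current_replicas * ratio)

print(desired_replicas(1, 200, 100))  # scale-up example above: 2
print(desired_replicas(1, 50, 100))   # ceil(0.5) = 1, stays at 1 replica
print(desired_replicas(4, 52, 50))    # ratio 1.04 is within tolerance: 4
```

The third call shows the tolerance in action: a 4% overshoot of the target is not enough to trigger a resize.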
The formula itself is simple, but the controller layers many checks on top of it to keep HPA usable in practice
1. Global parameters that throttle scaling frequency
Older versions exposed two global parameters for throttling scaling:
- horizontal-pod-autoscaler-upscale-delay: default 3 min; after one scale-up, the next scale-up had to wait 3 min (removed in newer versions, since scale-up fundamentally needs no cooldown)
- horizontal-pod-autoscaler-downscale-delay: default 5 min; after one scale-down, the next scale-down had to wait 5 min
Since v1.12, with the adjusted HPA algorithm, scale-up no longer needs a cooldown; the scale-down cooldown is configured with
--horizontal-pod-autoscaler-downscale-stabilization, default 5 min
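For reference, a sketch of how these knobs would appear on the kube-controller-manager command line (the values shown are the defaults):

```shell
kube-controller-manager \
  --horizontal-pod-autoscaler-sync-period=15s \
  --horizontal-pod-autoscaler-downscale-stabilization=5m \
  --horizontal-pod-autoscaler-tolerance=0.1
```

In practice these are usually set in the kube-controller-manager static-pod manifest rather than typed by hand.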
2. Handling abnormal Pods during computation
HPA's metrics all come from Pods, and because a Deployment scales frequently, Pod count, state, and load keep changing, so the metrics the controller fetches can be anomalous. For example:
- a Pod is terminating (its deletionTimestamp is set) or is in the failed state
- a Pod's business code takes long to initialize and is not yet Running; metrics-server cannot collect metrics for such Pods, so their metrics are missing
HPA splits a Deployment's Pods' metrics into three sets:
- ready pods: Running Pods whose metrics can be fetched from metrics-server (or another service)
- ignored pods: Pods that are Pending, or Running with metrics available but whose start time falls inside the configured initial-readiness-delay or cpu-initialization-period grace windows
- missing pods: Running Pods (not Pending, Failed, or deleted) whose metrics cannot be fetched
These three sets are used as follows:
1. When averaging Pod metrics, ignored pods' metrics are uniformly set to the minimum, 0
2. If the scaling direction is up, missing pods' metrics are also set to the minimum, 0
3. If the direction is down, missing pods' metrics are set to the maximum (the Pod's request for Resource metrics, otherwise the target value)
This conservative strategy minimizes how often HPA churns business containers
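A minimal sketch of this averaging strategy (names and structure are mine; the real logic lives in the HPA controller's replica calculator):

```python
import math

def average_with_fallbacks(ready_metrics, ignored, missing,
                           scale_up, max_value):
    """Average pod metrics the conservative way described above:
    ignored pods always count as the minimum (0); missing pods count
    as 0 when scaling up, and as max_value (the request, for Resource
    metrics, otherwise the target) when scaling down."""
    values = list(ready_metrics)
    values += [0.0] * ignored
    values += ([0.0] if scale_up else [max_value]) * missing
    return sum(values) / len(values)

# Scale-up: 2 ready pods at 300m plus 1 missing pod (counted as 0)
print(average_with_fallbacks([300, 300], ignored=0, missing=1,
                             scale_up=True, max_value=100))   # 200.0
# Scale-down: 1 ready pod at 40m plus 1 missing pod (counted as its request)
print(average_with_fallbacks([40], ignored=0, missing=1,
                             scale_up=False, max_value=100))  # 70.0
```

Feeding the pessimistic average into the replica formula dampens both directions: missing pods pull the scale-up average down and the scale-down average up.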
In summary:
- Terminating Pods and failed Pods do not take part in the HPA calculation
- Pods with missing metrics only enter the calculation at the final replica computation
- When scaling on CPU, not-yet-ready (Pending) Pods and just-ready Pods do not take part either
- Pods with missing metrics count as 100% when scaling down and 0% when scaling up
- Not-yet-ready and just-ready Pods count as 0% by default
Note: here is the full list of HPA's global configuration parameters
kube-controller-manager exposes several startup flags related to horizontal Pod autoscaling:
| Flag | Description |
|---|---|
| horizontal-pod-autoscaler-sync-period | period of the controller's check loop (default 15s) |
| horizontal-pod-autoscaler-upscale-delay | wait after a scale-up before the next scale-up (default 3 min; removed in newer versions) |
| horizontal-pod-autoscaler-downscale-stabilization | wait after a scale-down before the next scale-down (default 5 min) |
| horizontal-pod-autoscaler-downscale-delay | wait after a scale-down before the next scale-down (default 5 min; removed in newer versions) |
| horizontal-pod-autoscaler-tolerance | tolerance on the scaling ratio (default 0.1, i.e. no scaling within 0.9–1.1) |
| horizontal-pod-autoscaler-use-rest-clients | fetch metric data via REST clients; required for custom metrics |
| horizontal-pod-autoscaler-cpu-initialization-period | Pod initialization window; CPU metrics from Pods inside it are not used |
| horizontal-pod-autoscaler-initial-readiness-delay | Pod readiness window; Pods inside it are all treated as not ready |
5. Demo
Step 1: create an nginx application as a Deployment (note: this manifest is slightly broken on purpose; HPA will be unable to compute utilization from it)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-demo
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
Step 2: deploy the nginx Deployment and create the HPA object with kubectl
# Deploy the application
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl apply -f hap-demo.yaml
deployment.apps/hpa-demo created
# Check deployment status
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
hpa-demo 0/1 1 0 31s
my-release-prometheus-adapter 1/1 1 1 9d
# Create the HPA object
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl autoscale deployment hpa-demo --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/hpa-demo autoscaled
# Check HPA status
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-demo Deployment/hpa-demo <unknown>/10% 1 10 1 22s
# Inspect the HPA in detail
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl describe hpa hpa-demo
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Name: hpa-demo
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Tue, 19 Apr 2022 20:42:29 +0800
Reference: Deployment/hpa-demo
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 10%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: missing request for cpu
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 5s (x2 over 20s) horizontal-pod-autoscaler failed to get cpu utilization: missing request for cpu
Warning FailedComputeMetricsReplicas 5s (x2 over 20s) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: missing request for cpu
As the steps above show, the HPA is created, but because no request is set, its Events report the error failed to get cpu utilization: missing request for cpu. By default HPA computes utilization as actual usage divided by the request, so check that the Pod's resources include a requests field.
Step 3: update the application's yaml as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-demo
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        resources:   # newly added
          requests:
            memory: 50Mi
            cpu: 50m
Step 4: apply the updated nginx Deployment with kubectl
# Update the nginx application
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl apply -f hpa.yaml
deployment.apps/hpa-demo configured
# Delete the HPA object
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl delete hpa hpa-demo
horizontalpodautoscaler.autoscaling "hpa-demo" deleted
# Recreate the HPA object
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl autoscale deployment hpa-demo --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/hpa-demo autoscaled
Step 5: load-test the nginx application
# Get the nginx Pod's IP
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl get pod -o wide
# Start a load-test pod, test-hpa, and hammer nginx from inside the container
╭─guoweikuang@guoweikngdeMBP2 ~
╰─$ kubectl run -it --image busybox test-hpa --restart=Never --rm /bin/sh
If you don't see a command prompt, try pressing enter.
/ # while true; do wget -q -O- http://<pod_ip>; done
Step 6: watch the HPA scale
# Watch the Pod's CPU and memory metrics change
╭─guoweikuang@guoweikngdeMBP2 ~/hpa
╰─$ kubectl top pod
NAME CPU(cores) MEMORY(bytes)
hpa-demo-6b4467b546-75dv8 138m 4Mi
test-hpa 471m 0Mi
# Inspect the scale-up: the Events show current CPU utilization at 264%, far above the 10% target, so HPA scales up
╭─guoweikuang@guoweikngdeMBP2 ~/hpa
╰─$ kubectl describe hpa hpa-demo
Name: hpa-demo
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 20 Apr 2022 00:20:00 +0800
Reference: Deployment/hpa-demo
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 264% (132m) / 10%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 4 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 4
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True ScaleUpLimit the desired replica count is increasing faster than the maximum scale rate
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 2m32s horizontal-pod-autoscaler failed to get cpu utilization: did not receive metrics for any ready pods
Warning FailedComputeMetricsReplicas 2m32s horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: did not receive metrics for any ready pods
Normal SuccessfulRescale 32s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
# With no cooldown window on scale-up, HPA keeps scaling until it hits 10 replicas
╭─guoweikuang@guoweikngdeMBP2 ~/hpa
╰─$ kubectl describe hpa hpa-demo
Name: hpa-demo
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 20 Apr 2022 00:20:00 +0800
Reference: Deployment/hpa-demo
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 10%
Min replicas: 1
Max replicas: 10
Deployment pods: 10 current / 10 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ScaleDownStabilized recent recommendations were higher than current one, applying the highest recent recommendation
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooManyReplicas the desired replica count is more than the maximum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 5m8s horizontal-pod-autoscaler failed to get cpu utilization: did not receive metrics for any ready pods
Warning FailedComputeMetricsReplicas 5m8s horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: did not receive metrics for any ready pods
Normal SuccessfulRescale 3m8s horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 2m8s horizontal-pod-autoscaler New size: 8; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 68s horizontal-pod-autoscaler New size: 10; reason:
# Watch the Pods being created
╭─guoweikuang@guoweikngdeMBP2 ~/hpa
╰─$ kubectl get pod
NAME READY STATUS RESTARTS AGE
hpa-demo-6b4467b546-22dwf 0/1 ContainerCreating 0 73s
hpa-demo-6b4467b546-4kfbc 0/1 ContainerCreating 0 73s
hpa-demo-6b4467b546-75dv8 1/1 Running 0 6m20s
hpa-demo-6b4467b546-c979v 1/1 Running 0 2m13s
hpa-demo-6b4467b546-cj8tv 0/1 ContainerCreating 0 13s
hpa-demo-6b4467b546-k2gkv 1/1 Running 0 2m13s
hpa-demo-6b4467b546-k8qb7 1/1 Running 0 2m13s
hpa-demo-6b4467b546-rqjm2 0/1 ContainerCreating 0 13s
hpa-demo-6b4467b546-tscgj 0/1 ContainerCreating 0 73s
hpa-demo-6b4467b546-zvzdz 1/1 Running 0 73s
test-hpa 1/1 Running 0 4m18s
Step 7: stop the load test and watch the HPA
# With the load gone, CPU utilization drops quickly and HPA starts scaling down
╭─guoweikuang@guoweikngdeMBP2 ~/hpa
╰─$ kubectl describe hpa
Name: hpa-demo
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Wed, 20 Apr 2022 00:20:00 +0800
Reference: Deployment/hpa-demo
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 0% (0) / 10%
Min replicas: 1
Max replicas: 10
Deployment pods: 10 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededRescale the HPA controller was able to update the target scale to 1
ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited True TooFewReplicas the desired replica count is less than the minimum replica count
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 13m horizontal-pod-autoscaler failed to get cpu utilization: did not receive metrics for any ready pods
Warning FailedComputeMetricsReplicas 13m horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: did not receive metrics for any ready pods
Normal SuccessfulRescale 11m horizontal-pod-autoscaler New size: 4; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 10m horizontal-pod-autoscaler New size: 8; reason: cpu resource utilization (percentage of request) above target
Normal SuccessfulRescale 9m4s horizontal-pod-autoscaler New size: 10; reason:
Warning FailedGetResourceMetric 6m4s horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Warning FailedComputeMetricsReplicas 6m4s horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
Normal SuccessfulRescale 4s horizontal-pod-autoscaler New size: 1; reason: All metrics below target
# Check the application's replica count after scale-down
╭─guoweikuang@guoweikngdeMBP2 ~/hpa
╰─$ kubectl get deployments.apps
NAME READY UP-TO-DATE AVAILABLE AGE
hpa-demo 1/1 1 1 16m
# Check the HPA details
╭─guoweikuang@guoweikngdeMBP2 ~/hpa
╰─$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-demo Deployment/hpa-demo 0%/10% 1 10 1 15m