In real production systems we often need to scale a service out, and, when resources are tight or the workload drops, to scale it back in by reducing the number of service instances. Kubernetes supports both through the scale mechanism of a Deployment/RC.
Kubernetes provides a manual and an automatic mode for scaling Pods. In manual mode, a single kubectl scale command sets the desired replica count of a Deployment/RC. In automatic mode, the user chooses a performance metric (or a custom business metric) and a range for the replica count, and the system adjusts the number of replicas within that range as the metric changes.
1. Manual scaling
Take the following nginx Deployment as an example:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Three Pod replicas are currently running:
$ kubectl get pods | grep nginx
nginx-deployment-76bf4969df-2zgwr 1/1 Running 0 4m15s
nginx-deployment-76bf4969df-fmcz2 1/1 Running 0 4m15s
nginx-deployment-76bf4969df-t7zrs 1/1 Running 0 4m15s
Use kubectl scale to raise the replica count from the initial 3 to 5:
$ kubectl scale deployment nginx-deployment --replicas 5
deployment.extensions/nginx-deployment scaled
Setting --replicas to a number smaller than the current replica count makes the system terminate some running Pods, scaling the application in:
$ kubectl scale deployment nginx-deployment --replicas=1
deployment.extensions/nginx-deployment scaled
2. Automatic scaling
Kubernetes v1.1 introduced a controller named Horizontal Pod Autoscaler (HPA), which automatically scales Pods up and down based on CPU utilization.
At an interval defined by the kube-controller-manager startup parameter --horizontal-pod-autoscaler-sync-period on the Master (default 30s), the HPA controller periodically checks the CPU utilization of the target Pods and, when the conditions are met, adjusts the replica count of the target Deployment/RC to match the user-defined average Pod CPU utilization.
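The core of each sync is the standard HPA scaling rule, desiredReplicas = ceil(currentReplicas × currentUtilization / targetUtilization). A minimal sketch in shell arithmetic (the function name is illustrative; the real controller additionally applies a tolerance band and scale-up/scale-down cooldown windows, which this sketch ignores):

```shell
# Simplified HPA scaling rule:
#   desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
hpa_desired_replicas() {
  local current=$1 current_util=$2 target_util=$3
  # integer ceiling division: (a + b - 1) / b
  echo $(( (current * current_util + target_util - 1) / target_util ))
}

hpa_desired_replicas 2 90 50   # 2 pods averaging 90% against a 50% target -> 4
```

For example, two Pods averaging 90% CPU against a 50% target yield ceil(2 × 90 / 50) = 4 desired replicas, while a Pod sitting below the target keeps the replica count unchanged.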
Pod CPU utilization is supplied by the Heapster and Metrics Server components, so these need to be installed in advance; for the installation steps see:
Installing the Kubernetes heapster monitoring plugin
Installing Kubernetes Metrics Server
An HPA can be created quickly with the kubectl autoscale command or from a YAML configuration file.
Before creating the HPA, a Deployment/RC object must already exist, and its Pods must declare a resources.requests.cpu value. Without it, Heapster cannot collect the Pods' CPU usage, and the HPA will not work.
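The request value matters because the utilization percentage the HPA targets is computed relative to resources.requests.cpu, not to the node's capacity. A quick sanity check of that arithmetic (values here are illustrative):

```shell
# CPU utilization as the HPA sees it: observed usage divided by the
# pod's resources.requests.cpu value, both in millicores.
usage_m=100     # observed CPU usage: 100m
request_m=200   # resources.requests.cpu: 200m
echo "$(( usage_m * 100 / request_m ))%"   # prints "50%"
```

So a Pod consuming 100m of CPU with a 200m request reports 50% utilization, exactly on the target used later in this example.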
The example below sets up an HPA for a Deployment and then runs a load test against it from a client to show how the HPA behaves.
Take the following php-apache Deployment, with the cpu request set to 200m and no limit set:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: php-apache
spec:
  replicas: 1
  template:
    metadata:
      name: php-apache
      labels:
        app: php-apache
    spec:
      containers:
      - name: php-apache
        image: siriuszg/hpa-example
        resources:
          requests:
            cpu: 200m
        ports:
        - containerPort: 80
Then create a php-apache Service for clients to access:
apiVersion: v1
kind: Service
metadata:
  name: php-apache
spec:
  ports:
  - port: 80
  selector:
    app: php-apache
Next, create an HPA controller for the Deployment "php-apache" that adjusts the replica count between 1 and 10 so that the average Pod CPU utilization stays at 50%.
Create it with the kubectl autoscale command:
kubectl autoscale deployment php-apache --min=1 --max=10 --cpu-percent=50
Alternatively, create the HPA from a YAML configuration file: point the scaleTargetRef field at the Deployment/RC to manage, then set the minReplicas, maxReplicas, and targetCPUUtilizationPercentage parameters:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
Check the newly created HPA:
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache <unknown>/50% 1 10 1 47s
Then create a busybox Pod to drive load-test requests against the php-apache service:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sleep", "3600"]
Log in to the busybox container and run an infinite wget loop against the php-apache service:
while true; do wget -q -O- http://php-apache > /dev/null; done
3. Problems encountered
When checking the HPA status with kubectl get hpa, the TARGETS column shows unknown.
Inspect the details with kubectl describe hpa:
# kubectl describe hpa
Name: php-apache
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sat, 05 Oct 2019 18:50:34 +0800
Reference: Deployment/php-apache
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetResourceMetric 4s (x11 over 2m35s) horizontal-pod-autoscaler unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
Warning FailedComputeMetricsReplicas 4s (x11 over 2m35s) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
The key error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io).
Possible causes:
- A typo in the resource requests configuration, so the requests were never actually applied.
- Metrics Server is not installed.
Solutions:
- Fix the resource requests configuration.
- Install Kubernetes Metrics Server
After waiting a while, observe the Pod CPU utilization collected by the HPA controller:
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 2156%/50% 1 10 1 47s
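Plugging these numbers into the scaling rule, one replica at 2156% against a 50% target asks for ceil(1 × 2156 / 50) = 44 replicas, which the HPA then clamps to maxReplicas, so the Deployment is scaled to 10 Pods. A quick check of that arithmetic (a sketch of the rule plus the clamp; the real controller also applies cooldowns):

```shell
current=1; util=2156; target=50; max_replicas=10

# desiredReplicas = ceil(current * util / target), clamped to maxReplicas
desired=$(( (current * util + target - 1) / target ))     # 44
if [ "$desired" -gt "$max_replicas" ]; then
  desired=$max_replicas                                   # clamp to the HPA maximum
fi
echo "$desired"   # prints 10
```

Once the load loop is stopped, utilization falls back below the target and the HPA gradually scales the Deployment back toward minReplicas.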