Background
Our service uses kubernetes.Clientset to create containers, and each container needs a PersistentVolume bound to it at creation time, so I needed to get a proper understanding of how PV binding works.
References
Configure a Pod to Use a PersistentVolume for Storage
Binding Persistent Volumes by Labels
Change the Default StorageClass
Testing approach
1. Work through the test case from the Kubernetes documentation. done.
2. Test against the current project.
Problem 1: does storageClassName have to be set?
According to the Binding Persistent Volumes by Labels document, a PersistentVolumeClaim can use

selector:
  matchLabels:
    storage-tier: gold
    aws-availability-zone: us-east-1

to match a PersistentVolume without specifying storageClassName.
I have copied the three YAML files here.
glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-volume
  labels:
    storage-tier: gold
    aws-availability-zone: us-east-1
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster
    path: myVol1
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1
Volume Endpoints
glusterfs-ep.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.122.221
    ports:
      - port: 1
  - addresses:
      - ip: 192.168.122.222
    ports:
      - port: 1
During the actual test, first check the client and server versions:
qiantao@qiant k8s % kubectl version -o yaml
clientVersion:
  buildDate: "2020-02-11T18:14:22Z"
  compiler: gc
  gitCommit: 06ad960bfd03b39c8310aaf92d1e7c12ce618213
  gitTreeState: clean
  gitVersion: v1.17.3
  goVersion: go1.13.6
  major: "1"
  minor: "17"
  platform: darwin/amd64
serverVersion:
  buildDate: "2020-01-15T08:18:29Z"
  compiler: gc
  gitCommit: e7f962ba86f4ce7033828210ca3556393c377bcc
  gitTreeState: clean
  gitVersion: v1.16.6-beta.0
  goVersion: go1.13.5
  major: "1"
  minor: 16+
  platform: linux/amd64
Get the endpoints:
qiantao@qiant k8s % kubectl get endpoints
NAME                ENDPOINTS                             AGE
glusterfs-cluster   192.168.122.221:1,192.168.122.222:1   4m32s
kubernetes          192.168.65.3:6443                     12d
Get the PVC info:
qiantao@qiant k8s % kubectl get pvc gluster-claim -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: docker.io/hostpath
  creationTimestamp: "2020-07-13T12:05:42Z"
  finalizers:
    - kubernetes.io/pvc-protection
  name: gluster-claim
  namespace: default
  resourceVersion: "201203"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/gluster-claim
  uid: 977b792d-fc2f-440c-9746-b4670250a239
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      aws-availability-zone: us-east-1
      storage-tier: gold
  storageClassName: hostpath
  volumeMode: Filesystem
  volumeName: pvc-977b792d-fc2f-440c-9746-b4670250a239
status:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
  phase: Bound
The claim was bound to a PersistentVolume named pvc-977b792d-fc2f-440c-9746-b4670250a239. What happened here?
Let's fetch both the gluster-volume we created and pvc-977b792d-fc2f-440c-9746-b4670250a239:
qiantao@qiant k8s % kubectl get pv pvc-977b792d-fc2f-440c-9746-b4670250a239 gluster-volume -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    annotations:
      docker.io/hostpath: /var/lib/k8s-pvs/gluster-claim/pvc-977b792d-fc2f-440c-9746-b4670250a239
      pv.kubernetes.io/provisioned-by: docker.io/hostpath
    creationTimestamp: "2020-07-13T12:05:42Z"
    finalizers:
      - kubernetes.io/pv-protection
    name: pvc-977b792d-fc2f-440c-9746-b4670250a239
    resourceVersion: "201200"
    selfLink: /api/v1/persistentvolumes/pvc-977b792d-fc2f-440c-9746-b4670250a239
    uid: 1269e814-3d66-4757-b379-c1d65b6bd178
  spec:
    accessModes:
      - ReadWriteMany
    capacity:
      storage: 1Gi
    claimRef:
      apiVersion: v1
      kind: PersistentVolumeClaim
      name: gluster-claim
      namespace: default
      resourceVersion: "201195"
      uid: 977b792d-fc2f-440c-9746-b4670250a239
    hostPath:
      path: /var/lib/k8s-pvs/gluster-claim/pvc-977b792d-fc2f-440c-9746-b4670250a239
      type: ""
    persistentVolumeReclaimPolicy: Delete
    storageClassName: hostpath
    volumeMode: Filesystem
  status:
    phase: Bound
- apiVersion: v1
  kind: PersistentVolume
  metadata:
    creationTimestamp: "2020-07-13T12:05:28Z"
    finalizers:
      - kubernetes.io/pv-protection
    labels:
      aws-availability-zone: us-east-1
      storage-tier: gold
    name: gluster-volume
    resourceVersion: "201169"
    selfLink: /api/v1/persistentvolumes/gluster-volume
    uid: 12f08ccd-454c-46d0-b2dd-cf5f774b9266
  spec:
    accessModes:
      - ReadWriteMany
    capacity:
      storage: 2Gi
    glusterfs:
      endpoints: glusterfs-cluster
      path: myVol1
    persistentVolumeReclaimPolicy: Retain
    volumeMode: Filesystem
  status:
    phase: Available
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Judging by the claimRef in the output, the PersistentVolume we created was never used; instead a new one was provisioned automatically. Why?
qiantao@qiant k8s % kubectl get storageclass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   12d
Listing the storage classes shows the cluster has a default hostpath class.
This is the key point: if we do not set a storageClassName on the claim, Kubernetes uses the default StorageClass to dynamically provision the PersistentVolume we asked for.
To change the default storageclass, see https://k8smeetup.github.io/docs/tasks/administer-cluster/change-default-storage-class/
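Given the behavior above, one way to make the claim bind to our pre-created gluster-volume is to opt it out of dynamic provisioning explicitly. Setting storageClassName to the empty string (as opposed to omitting the field) means the claim only matches class-less, pre-provisioned PVs, and the DefaultStorageClass admission controller leaves it untouched. A sketch of a revised glusterfs-pvc.yaml:

```yaml
# Revised PVC: storageClassName: "" is different from leaving the
# field unset. It disables dynamic provisioning for this claim, so
# only pre-created PVs without a storage class (like gluster-volume)
# are considered for binding.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gluster-claim
spec:
  storageClassName: ""   # empty string, not absent: skip the default class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    matchLabels:
      storage-tier: gold
      aws-availability-zone: us-east-1
```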
For a detailed introduction to StorageClass, see https://kubernetes.io/zh/docs/concepts/storage/storage-classes/
The official documentation says the following:
Dynamic volume provisioning can be enabled on a cluster so that all claims are dynamically provisioned when no storage class is specified. A cluster administrator can enable this behavior by:
- marking one StorageClass as the default;
- making sure the DefaultStorageClass admission controller is enabled on the API server.
An administrator can mark a specific StorageClass as the default by adding the storageclass.kubernetes.io/is-default-class annotation to it. When a default StorageClass exists in a cluster and a user creates a PersistentVolumeClaim with no storageClassName specified, the DefaultStorageClass admission controller automatically adds a storageClassName field pointing to the default storage class.
Note that there can be at most one default storage class on a cluster; otherwise PersistentVolumeClaims without an explicit storageClassName cannot be created at all.
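As a concrete illustration (using the hostpath class from this cluster), the "default" flag is nothing more than an annotation on the StorageClass object; when switching defaults, set the annotation to "false" on the old default first, since at most one class may be default at a time:

```yaml
# Sketch: marking the hostpath StorageClass as the cluster default.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: docker.io/hostpath
```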
Problem 2: re-binding a Released PV (Retain policy) to a PVC to recover its data
Reference: https://blog.51cto.com/ygqygq2/2308576
Problem:
After the PVC is deleted, the PV still appears to be claimed (stuck in the Released phase) and cannot be re-bound.
Example:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: chenjunhao1-10-26-133-27-pv-nfs-data
  selfLink: /api/v1/persistentvolumes/chenjunhao1-10-26-133-27-pv-nfs-data
  uid: 75e64e9d-94f6-4bc0-baea-9b89b5fcc7fb
  resourceVersion: '208095'
  creationTimestamp: '2020-07-10T08:48:10Z'
  labels:
    pv: chenjunhao1-10-26-133-27-pv-nfs-data
  annotations:
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 200Gi
  nfs:
    server: 10.26.133.27
    path: /home/chenjunhao1
  accessModes:
    - ReadWriteMany
  claimRef:
    kind: PersistentVolumeClaim
    namespace: default
    name: chenjunhao1-10-26-133-27-pvc-nfs-data
    uid: b1be9f21-3fec-49c5-a803-94a30783be86
    apiVersion: v1
    resourceVersion: '123603'
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-data
  volumeMode: Filesystem
status:
  phase: Released
Since the corresponding PVC has already been deleted, we can safely remove the spec.claimRef field and re-run kubectl apply -f <spec.yaml>, which gives us:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: chenjunhao1-10-26-133-27-pv-nfs-data
  selfLink: /api/v1/persistentvolumes/chenjunhao1-10-26-133-27-pv-nfs-data
  uid: 75e64e9d-94f6-4bc0-baea-9b89b5fcc7fb
  resourceVersion: '208224'
  creationTimestamp: '2020-07-10T08:48:10Z'
  labels:
    pv: chenjunhao1-10-26-133-27-pv-nfs-data
  annotations:
    pv.kubernetes.io/bound-by-controller: 'yes'
  finalizers:
    - kubernetes.io/pv-protection
spec:
  capacity:
    storage: 200Gi
  nfs:
    server: 10.26.133.27
    path: /home/chenjunhao1
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-data
  volumeMode: Filesystem
status:
  phase: Available
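Instead of editing and re-applying the manifest, the same claimRef removal can be done in place with kubectl patch (a sketch; substitute your own PV name, and note it only runs against a live cluster):

```shell
# Clear the stale claimRef so the Released PV becomes Available again.
# A merge patch with a null value deletes the field from the object.
kubectl patch pv chenjunhao1-10-26-133-27-pv-nfs-data \
  --type merge -p '{"spec":{"claimRef":null}}'
```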
Troubleshooting guide:
https://kubernetes.io/zh/docs/tasks/debug-application-cluster/debug-application/