
Creating a full-environment backup of the cluster is essential, especially in production. When the cluster crashes unexpectedly and data is lost, the backup comes into play: it can be used to rebuild the previous environment.
On the OpenShift platform, we can back up the complete state of the cluster to external storage. The full cluster environment includes:
- Cluster data files
- The etcd database
- OpenShift object definitions
- Internal registry storage
- Persistent volumes
Back up the cluster regularly to guard against data loss.
A full-environment backup is not a cure-all: make sure application data has its own separate backups.
Creating a Master Node Backup
Back up the nodes before any change to the underlying infrastructure, such as a system upgrade, a cluster upgrade, or any other major modification. By backing up data regularly, we can restore the cluster from a backup when it fails.
The master hosts run critical services such as the API server and the controllers. The /etc/origin/master directory stores many important files:
- Configuration files for the API server, controllers, and other services
- Certificates generated during installation
- Cloud-provider configuration files
- Keys and other authentication files
In addition, any extra customizations, such as log-level changes or proxy settings, live in configuration files under /etc/sysconfig/.
Master nodes are also compute nodes, so back up the entire /etc/origin directory.
Backup procedure
Run the backup procedure on every master node.
- Back up the host configuration files
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
$ sudo cp -aR /etc/sysconfig/ ${MYBACKUPDIR}/etc/sysconfig/

Note: the /etc/origin/master/ca.serial.txt file is generated only on the first master host listed in the Ansible inventory hosts file. If you decommission that host, copy the file to the remaining master hosts first.
- Back up other important files
File                                      Description
/etc/cni/*                                CNI configuration
/etc/sysconfig/iptables                   iptables firewall rules
/etc/sysconfig/docker-storage-setup       Input for the container-storage-setup command
/etc/sysconfig/docker                     docker daemon configuration
/etc/sysconfig/docker-network             docker networking configuration
/etc/sysconfig/docker-storage             docker container storage configuration
/etc/dnsmasq.conf                         Main dnsmasq configuration
/etc/dnsmasq.d/*                          Additional dnsmasq configuration
/etc/sysconfig/flanneld                   flannel configuration
/etc/pki/ca-trust/source/anchors/         Certificates trusted by the system

Back up these files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
$ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
    ${MYBACKUPDIR}/etc/sysconfig/
$ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
$ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
    ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
- If packages installed on the system are accidentally removed, cluster operation can also be affected, so record the list of installed RPM packages
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}
$ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
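The saved package list is also useful for spotting drift: comparing two dated `packages.txt` files with `comm` shows exactly which packages were added or removed between backups. A minimal sketch (the file names and sample package lines are hypothetical stand-ins for real `rpm -qa | sort` output):

```shell
# Sample data standing in for two dated `rpm -qa | sort` captures.
printf 'bash-4.2\ndocker-1.13\netcd-3.2\n'    > packages-20180101.txt
printf 'bash-4.2\ndocker-1.13\nflannel-0.7\n' > packages-20180201.txt

# comm needs sorted input (rpm -qa | sort already guarantees this).
# Lines only in the first file = packages removed since the older backup.
comm -23 packages-20180101.txt packages-20180201.txt > removed.txt
# Lines only in the second file = packages newly installed.
comm -13 packages-20180101.txt packages-20180201.txt > added.txt

cat removed.txt   # etcd-3.2
cat added.txt     # flannel-0.7
```

Running this after each backup gives an audit trail of package changes alongside the archives themselves.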
- After the steps above, the backup directory contains the files listed below; they can then be compressed into a single archive for safekeeping
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
etc/sysconfig/flanneld
etc/sysconfig/iptables
etc/sysconfig/docker-network
etc/sysconfig/docker-storage
etc/sysconfig/docker-storage-setup
etc/sysconfig/docker-storage-setup.rpmnew
etc/origin/master/ca.crt
etc/origin/master/ca.key
etc/origin/master/ca.serial.txt
etc/origin/master/ca-bundle.crt
etc/origin/master/master.proxy-client.crt
etc/origin/master/master.proxy-client.key
etc/origin/master/service-signer.crt
etc/origin/master/service-signer.key
etc/origin/master/serviceaccounts.private.key
etc/origin/master/serviceaccounts.public.key
etc/origin/master/openshift-master.crt
etc/origin/master/openshift-master.key
etc/origin/master/openshift-master.kubeconfig
etc/origin/master/master.server.crt
etc/origin/master/master.server.key
etc/origin/master/master.kubelet-client.crt
etc/origin/master/master.kubelet-client.key
etc/origin/master/admin.crt
etc/origin/master/admin.key
etc/origin/master/admin.kubeconfig
etc/origin/master/etcd.server.crt
etc/origin/master/etcd.server.key
etc/origin/master/master.etcd-client.key
etc/origin/master/master.etcd-client.csr
etc/origin/master/master.etcd-client.crt
etc/origin/master/master.etcd-ca.crt
etc/origin/master/policy.json
etc/origin/master/scheduler.json
etc/origin/master/htpasswd
etc/origin/master/session-secrets.yaml
etc/origin/master/openshift-router.crt
etc/origin/master/openshift-router.key
etc/origin/master/registry.crt
etc/origin/master/registry.key
etc/origin/master/master-config.yaml
etc/origin/generated-configs/master-master-1.example.com/master.server.crt
...[OUTPUT OMITTED]...
etc/origin/cloudprovider/openstack.conf
etc/origin/node/system:node:master-0.example.com.crt
etc/origin/node/system:node:master-0.example.com.key
etc/origin/node/ca.crt
etc/origin/node/system:node:master-0.example.com.kubeconfig
etc/origin/node/server.crt
etc/origin/node/server.key
etc/origin/node/node-dnsmasq.conf
etc/origin/node/resolv.conf
etc/origin/node/node-config.yaml
etc/origin/node/flannel.etcd-client.key
etc/origin/node/flannel.etcd-client.csr
etc/origin/node/flannel.etcd-client.crt
etc/origin/node/flannel.etcd-ca.crt
etc/pki/ca-trust/source/anchors/openshift-ca.crt
etc/pki/ca-trust/source/anchors/registry-ca.crt
etc/dnsmasq.conf
etc/dnsmasq.d/origin-dns.conf
etc/dnsmasq.d/origin-upstream-dns.conf
etc/dnsmasq.d/node-dnsmasq.conf
packages.txt

Compress the backed-up files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
$ sudo rm -Rf ${MYBACKUPDIR}
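The copy/compress/cleanup pattern above can be wrapped in a small helper function. A minimal sketch, in which the `do_backup` function name is our own and the demonstration runs against a throwaway temporary directory instead of the real /etc/origin:

```shell
#!/bin/bash
set -eu

# Archive a source directory into a date-stamped tarball under dest_root,
# then remove the staging copy -- the same pattern used in the steps above.
do_backup() {
    local src=$1 dest_root=$2
    local stamp staging
    stamp=$(date +%Y%m%d)
    staging="${dest_root}/$(hostname)/${stamp}"
    mkdir -p "${staging}"
    cp -aR "${src}" "${staging}/"
    tar -zcf "${dest_root}/$(hostname)-${stamp}.tar.gz" -C "${staging}" .
    rm -rf "${staging}"
}

# Demonstration against throwaway directories rather than /etc/origin.
workdir=$(mktemp -d)
mkdir -p "${workdir}/src"
echo "dummy config" > "${workdir}/src/master-config.yaml"
do_backup "${workdir}/src" "${workdir}/backup"
ls "${workdir}/backup"   # prints the <hostname>-<date>.tar.gz archive name
```

In real use the same function could be pointed at /etc/origin and /etc/sysconfig with a destination under /backup.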
OpenShift already provides a backup script, backup_master_node.sh, in the openshift-ansible-contrib project.
Place the script on a master host and run it; it performs the steps above automatically to back up the master host.
$ mkdir ~/git
$ cd ~/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
$ ./backup_master_node.sh -h
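Since the text stresses backing up on a regular schedule, the script lends itself to cron. A hypothetical crontab entry (the path, schedule, and log file are illustrative, not prescribed by the project):

```
# Run the master backup every Sunday at 02:00, logging output
0 2 * * 0 /root/git/openshift-ansible-contrib/reference-architecture/day2ops/scripts/backup_master_node.sh >> /var/log/backup_master_node.log 2>&1
```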
Creating a Compute Node Backup
Backing up a compute node is a different case from backing up a master. Masters hold many critical files, so backing them up is essential, whereas compute nodes generally hold no data required to run the cluster: if a node fails, other nodes take over its work without impact. Compute node backups are therefore usually unnecessary; back up a compute node only if it carries special configuration that must be preserved.
If compute nodes do need backups, then, as with masters, back them up before system upgrades, cluster upgrades, or any major cluster change, and also on a regular schedule.
The main compute node configuration files live in the /etc/origin/ and /etc/origin/node directories:
- Node service configuration
- Certificates generated during installation
- Cloud-provider configuration files
- Keys and other authentication files
In addition, any extra customizations, such as log-level changes or proxy settings, live in configuration files under /etc/sysconfig/.
Backup procedure
- Back up the node service configuration
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
$ sudo cp -aR /etc/sysconfig/origin-node ${MYBACKUPDIR}/etc/sysconfig/
- Back up other important files
File                                      Description
/etc/cni/*                                CNI configuration
/etc/sysconfig/iptables                   iptables firewall rules
/etc/sysconfig/docker-storage-setup       Input for the container-storage-setup command
/etc/sysconfig/docker                     docker daemon configuration
/etc/sysconfig/docker-network             docker networking configuration
/etc/sysconfig/docker-storage             docker container storage configuration
/etc/dnsmasq.conf                         Main dnsmasq configuration
/etc/dnsmasq.d/*                          Additional dnsmasq configuration
/etc/sysconfig/flanneld                   flannel configuration
/etc/pki/ca-trust/source/anchors/         Certificates trusted by the system

Back up these files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
$ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
$ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
    ${MYBACKUPDIR}/etc/sysconfig/
$ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
$ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
    ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
- If packages installed on the system are accidentally removed, cluster operation can also be affected, so record the list of installed RPM packages
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}
$ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
- After the steps above, the backup directory contains the files listed below; they can then be compressed into a single archive for safekeeping
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
etc/sysconfig/origin-node
etc/sysconfig/flanneld
etc/sysconfig/iptables
etc/sysconfig/docker-network
etc/sysconfig/docker-storage
etc/sysconfig/docker-storage-setup
etc/sysconfig/docker-storage-setup.rpmnew
etc/origin/node/system:node:app-node-0.example.com.crt
etc/origin/node/system:node:app-node-0.example.com.key
etc/origin/node/ca.crt
etc/origin/node/system:node:app-node-0.example.com.kubeconfig
etc/origin/node/server.crt
etc/origin/node/server.key
etc/origin/node/node-dnsmasq.conf
etc/origin/node/resolv.conf
etc/origin/node/node-config.yaml
etc/origin/node/flannel.etcd-client.key
etc/origin/node/flannel.etcd-client.csr
etc/origin/node/flannel.etcd-client.crt
etc/origin/node/flannel.etcd-ca.crt
etc/origin/cloudprovider/openstack.conf
etc/pki/ca-trust/source/anchors/openshift-ca.crt
etc/pki/ca-trust/source/anchors/registry-ca.crt
etc/dnsmasq.conf
etc/dnsmasq.d/origin-dns.conf
etc/dnsmasq.d/origin-upstream-dns.conf
etc/dnsmasq.d/node-dnsmasq.conf
packages.txt

Compress the backed-up files:
$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
$ sudo rm -Rf ${MYBACKUPDIR}
Backing Up External Registry Certificates
If an external private image registry is used, the certificates for all external registries must be backed up.
Backup procedure
$ cd /etc/docker/certs.d/
$ tar cf /tmp/docker-registry-certs-$(hostname).tar *
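Before relying on any of these tarballs, it is cheap to verify that they were written correctly: `tar -tf` lists an archive's contents without extracting and exits non-zero on a corrupt file. A small sketch, run here against a throwaway archive built from dummy data (the registry directory name is hypothetical):

```shell
# Build a throwaway archive shaped like the registry-certs backup above.
certdir=$(mktemp -d)
mkdir -p "${certdir}/registry.example.com:5000"
echo "dummy cert" > "${certdir}/registry.example.com:5000/ca.crt"
tar -cf /tmp/docker-registry-certs-demo.tar -C "${certdir}" .

# Verify: the listing succeeds and contains the expected file.
tar -tf /tmp/docker-registry-certs-demo.tar | grep -q 'ca.crt'
echo "archive OK"
```

The same `tar -tf … | grep` check applies to the master and node tarballs produced earlier.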
Backing Up Installation Files
Restoring the cluster requires a complete reinstallation, so save all the files involved in the original install, including:
- The complete Ansible playbooks and inventory hosts file
- The yum repository files
Backing Up Application Data
In most cases, application data can be backed up with the oc rsync command; this is the generic approach.
Depending on the storage backend, such as NFS, a more convenient backend-specific backup method may also be available.
The directories to back up likewise differ from application to application.
The following example backs up a Jenkins application.
Backup procedure
- Get the mount path of the Jenkins application data
$ oc get dc/jenkins -o jsonpath='{ .spec.template.spec.containers[?(@.name=="jenkins")].volumeMounts[?(@.name=="jenkins-data")].mountPath }'
/var/lib/jenkins
- Get the name of the currently running application pod
$ oc get pod --selector=deploymentconfig=jenkins -o jsonpath='{ .metadata.name }'
jenkins-1-37nux
- Back up the data with oc rsync

$ oc rsync jenkins-1-37nux:/var/lib/jenkins /tmp/
Backing Up the etcd Database
Backing up the distributed etcd database means backing up both its configuration files and its data. Either the etcd v2 or the etcd v3 API can be used to back up the etcd data.
Backup procedure
- Back up the etcd configuration files
The etcd configuration files live in the /etc/etcd directory, including the etcd.conf file and the certificates needed for cluster communication. All of these are generated during the Ansible installation.
Back up the relevant configuration files on every etcd node:
$ ssh master-0
$ mkdir -p /backup/etcd-config-$(date +%Y%m%d)/
$ cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/
- Back up the etcd data
To make it easier to call the different etcdctl versions, OpenShift Container Platform ships two aliases, etcdctl2 and etcdctl3. However, the etcdctl3 alias does not pass the complete endpoint list to the etcdctl command, so you must supply the --endpoints option and list all endpoints.
Before backing up the etcd data, make sure of the following:
- The etcdctl binary is available (on containerized installations, the etcd container must be available)
- The OpenShift Container Platform API service is running
- TCP connectivity to the etcd cluster on port 2379 works
- The client certificates for the etcd cluster are available
- Check the health of the etcd cluster, using either etcdctl2 or etcdctl3
- Using the etcd v2 API
$ etcdctl2 --cert-file=/etc/etcd/peer.crt \
    --key-file=/etc/etcd/peer.key \
    --ca-file=/etc/etcd/ca.crt \
    --endpoints="https://master-0.example.com:2379,\
https://master-1.example.com:2379,\
https://master-2.example.com:2379" \
    cluster-health
member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
cluster is healthy
- Using the etcd v3 API
$ etcdctl3 --cert="/etc/etcd/peer.crt" \
    --key=/etc/etcd/peer.key \
    --cacert="/etc/etcd/ca.crt" \
    --endpoints="https://master-0.example.com:2379,\
https://master-1.example.com:2379,\
https://master-2.example.com:2379" endpoint health
https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
- List the members
- Using the etcd v2 API
# etcdctl2 member list
2a371dd20f21ca8d: name=master-1.example.com peerURLs=https://192.168.55.12:2380 clientURLs=https://192.168.55.12:2379 isLeader=false
40bef1f6c79b3163: name=master-0.example.com peerURLs=https://192.168.55.8:2380 clientURLs=https://192.168.55.8:2379 isLeader=false
95dc17ffcce8ee29: name=master-2.example.com peerURLs=https://192.168.55.13:2380 clientURLs=https://192.168.55.13:2379 isLeader=true
- Using the etcd v3 API
# etcdctl3 member list
2a371dd20f21ca8d, started, master-1.example.com, https://192.168.55.12:2380, https://192.168.55.12:2379
40bef1f6c79b3163, started, master-0.example.com, https://192.168.55.8:2380, https://192.168.55.8:2379
95dc17ffcce8ee29, started, master-2.example.com, https://192.168.55.13:2380, https://192.168.55.13:2379
- Start the etcd data backup
The v2 API provides the etcdctl backup command for backing up etcd cluster data. The v3 API has no such command; instead it offers etcdctl snapshot save, or you can simply copy the member/snap/db file. The etcdctl backup command rewrites some of the metadata contained in the backup, specifically the node ID and cluster ID, which means that in the backup the node loses its former identity. To recreate a cluster from such a backup, create a new single-node cluster and then add the remaining nodes to it. The metadata is rewritten to prevent the new node from joining an existing cluster.
- If etcd runs on standalone hosts, back up with the etcd v2 API
- Stop the etcd service by removing the etcd pod YAML file
$ mkdir -p /etc/origin/node/pods-stopped
$ mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
- Create the etcd data backup directory and copy the etcd db file
$ mkdir -p /backup/etcd-$(date +%Y%m%d)
$ etcdctl2 backup \
    --data-dir /var/lib/etcd \
    --backup-dir /backup/etcd-$(date +%Y%m%d)
$ cp /var/lib/etcd/member/snap/db /backup/etcd-$(date +%Y%m%d)
- Reboot the host
$ reboot
- If etcd runs on standalone hosts, back up with the etcd v3 API
- Create a snapshot on the etcd node
$ systemctl show etcd --property=ActiveState,SubState
$ mkdir -p /backup/etcd-$(date +%Y%m%d)
$ etcdctl3 snapshot save /backup/etcd-$(date +%Y%m%d)/db
- Stop the etcd service by removing the etcd pod YAML file
$ mkdir -p /etc/origin/node/pods-stopped
$ mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
- Create the etcd data backup directory and copy the etcd data
$ etcdctl2 backup \
    --data-dir /var/lib/etcd \
    --backup-dir /backup/etcd-$(date +%Y%m%d)
- Reboot the host
$ reboot
- If etcd was deployed in containers, back up with the etcd v3 API
- Obtain the etcd endpoint IP from the etcd pod manifest
$ export ETCD_POD_MANIFEST="/etc/origin/node/pods/etcd.yaml"
$ export ETCD_EP=$(grep https ${ETCD_POD_MANIFEST} | cut -d '/' -f3)
- Obtain the etcd pod name
$ oc login -u system:admin
$ export ETCD_POD=$(oc get pods -n kube-system | grep -o -m 1 '\S*etcd\S*')
- Create a snapshot and save it locally
$ oc project kube-system
$ oc exec ${ETCD_POD} -c etcd -- /bin/bash -c "ETCDCTL_API=3 etcdctl \
    --cert /etc/etcd/peer.crt \
    --key /etc/etcd/peer.key \
    --cacert /etc/etcd/ca.crt \
    --endpoints $ETCD_EP \
    snapshot save /var/lib/etcd/snapshot.db"
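Whichever method produced the snapshot, it is worth checking that the file actually exists and is non-empty before treating the backup as complete (`etcdctl3 snapshot status <file>` offers a deeper integrity check when etcdctl is at hand). A minimal existence-and-size check, demonstrated here on a stand-in file since no real snapshot is available:

```shell
# Stand-in for a real /backup/etcd-<date>/db snapshot file.
snapshot=$(mktemp)
echo "fake snapshot payload" > "${snapshot}"

# A snapshot that is missing or zero bytes long means the backup failed.
if [ -s "${snapshot}" ]; then
    echo "snapshot present: $(wc -c < "${snapshot}") bytes"
else
    echo "snapshot missing or empty" >&2
    exit 1
fi
```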
Backing Up a Project
Backing up a project involves exporting all of its related objects, so that the exported files can later be used to restore them into a new project.
Backup procedure
- List all the objects to back up
$ oc get all
NAME         TYPE      FROM      LATEST
bc/ruby-ex   Source    Git       1
NAME               TYPE      FROM          STATUS     STARTED         DURATION
builds/ruby-ex-1   Source    Git@c457001   Complete   2 minutes ago   35s
NAME                 DOCKER REPO                                     TAGS      UPDATED
is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    2 minutes ago
is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    2 minutes ago
is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    2 minutes ago
is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    2 minutes ago
NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/guestbook         1          1         1         config,image(guestbook:latest)
dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)
NAME                   DESIRED   CURRENT   READY     AGE
rc/guestbook-1         1         1         1         2m
rc/hello-openshift-1   1         1         1         2m
rc/ruby-ex-1           1         1         1         2m
NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
svc/guestbook         10.111.105.84    <none>        3000/TCP            2m
svc/hello-openshift   10.111.230.24    <none>        8080/TCP,8888/TCP   2m
svc/ruby-ex           10.111.232.117   <none>        8080/TCP            2m
NAME                         READY     STATUS      RESTARTS   AGE
po/guestbook-1-c010g         1/1       Running     0          2m
po/hello-openshift-1-4zw2q   1/1       Running     0          2m
po/ruby-ex-1-build           0/1       Completed   0          2m
po/ruby-ex-1-rxc74           1/1       Running     0          2m
- Export the object definitions to YAML or JSON files
- Export to YAML
$ oc get -o yaml --export all > project.yaml
- Export to JSON
$ oc get -o json --export all > project.json
- Export the role bindings, secrets, service accounts, persistent volume claims, and other objects

$ for object in rolebindings serviceaccounts secrets imagestreamtags podpreset cms egressnetworkpolicies rolebindingrestrictions limitranges resourcequotas pvcs templates cronjobs statefulsets hpas deployments replicasets poddisruptionbudget endpoints
do
  oc get -o yaml --export $object > $object.yaml
done
Notes
- List all object kinds
$ oc api-resources --namespaced=true -o name

Some objects depend on metadata or carry unique identity information, and restoring them is affected by this. For example, when the image in a deploymentconfig points to an imagestream, the image field references a specific sha256 digest; on restore that image cannot be found, and the restore fails.
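A common workaround for that problem is to strip cluster-specific fields from the export before restoring it. A rough sketch using grep on a fabricated sample (the listed field names are the usual offenders; a real cleanup is better done with a YAML-aware tool):

```shell
# Fabricated sample of an exported object carrying cluster-specific metadata.
cat > export.yaml <<'EOF'
metadata:
  name: ruby-ex
  namespace: myproject
  uid: 8a1b2c3d-0000-1111-2222-333344445555
  resourceVersion: "123456"
  creationTimestamp: 2017-12-05T12:00:00Z
EOF

# Drop fields that are unique to the source cluster and would conflict
# (or be rejected) when the objects are recreated elsewhere.
grep -vE '^[[:space:]]*(uid|resourceVersion|creationTimestamp):' \
    export.yaml > export.clean.yaml

cat export.clean.yaml
```

Image digests pinned to the source registry (the sha256 case above) still need to be re-pointed manually or rebuilt after restore.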
Backing Up Persistent Volumes
With the persistent volume mounted into a pod, use the oc rsync command to copy the data to a backup server.
Backup procedure
- List the pods
$ oc get pods
NAME           READY     STATUS      RESTARTS   AGE
demo-1-build   0/1       Completed   0          2h
demo-2-fxx6d   1/1       Running     0          1h
- Find the directory where the pod mounts the PVC
$ oc describe pod demo-2-fxx6d
Name:            demo-2-fxx6d
Namespace:       test
Security Policy: restricted
Node:            ip-10-20-6-20.ec2.internal/10.20.6.20
Start Time:      Tue, 05 Dec 2017 12:54:34 -0500
Labels:          app=demo
                 deployment=demo-2
                 deploymentconfig=demo
Status:          Running
IP:              172.16.12.5
Controllers:     ReplicationController/demo-2
Containers:
  demo:
    Container ID:  docker://201f3e55b373641eb36945d723e1e212ecab847311109b5cee1fd0109424217a
    Image:         docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
    Image ID:      docker-pullable://docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
    Port:          8080/TCP
    State:         Running
      Started:     Tue, 05 Dec 2017 12:54:52 -0500
    Ready:         True
    Restart Count: 0
    Volume Mounts:
      */opt/app-root/src/uploaded from persistent-volume (rw)*
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8mmrk (ro)
    Environment Variables: <none>
...omitted...

Here the PVC is mounted in the pod at /opt/app-root/src/uploaded (from the persistent-volume volume).
- Back up the data with oc rsync

$ oc rsync demo-2-fxx6d:/opt/app-root/src/uploaded ./demo-app
receiving incremental file list
uploaded/
uploaded/ocp_sop.txt
uploaded/lost+found/

sent 38 bytes  received 190 bytes  152.00 bytes/sec
total size is 32  speedup is 0.14
One-Click etcd Backup Script
The following script backs up etcd in a single step:
[root@master01 ~]# cat backup_etcd.sh
#!/bin/bash
# Locate the etcd endpoint from the static pod manifest
export ETCD_POD_MANIFEST="/etc/origin/node/pods/etcd.yaml"
export ETCD_EP=$(grep https ${ETCD_POD_MANIFEST} | cut -d '/' -f3)
# Find the running etcd pod and take a v3 snapshot inside it
oc login -u system:admin
export ETCD_POD=$(oc get pods -n kube-system | grep -o -m 1 '\S*etcd\S*')
oc project kube-system
oc exec ${ETCD_POD} -c etcd -- /bin/sh -c "ETCDCTL_API=3 etcdctl --cert /etc/etcd/peer.crt --key /etc/etcd/peer.key --cacert /etc/etcd/ca.crt --endpoints $ETCD_EP snapshot save /var/lib/etcd/snapshot.db"
# Move the snapshot into a date-stamped backup directory
today_date=$(date +%Y%m%d)
mkdir -p /backup/${today_date}/etcd
mv /var/lib/etcd/snapshot.db /backup/${today_date}/etcd/snapshot.db
ls /backup/${today_date}/etcd/
echo "etcd backup succeeded"
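Scripts like this accumulate dated directories under /backup indefinitely, so it is worth pairing them with a retention policy. A sketch that keeps only the newest N date-stamped backups, demonstrated against temporary directories (the `keep` count and directory names are arbitrary; `head -n -N` is GNU coreutils behavior):

```shell
# Keep only the newest $keep date-stamped backup directories.
keep=3
backup_root=$(mktemp -d)
for d in 20180101 20180102 20180103 20180104 20180105; do
    mkdir -p "${backup_root}/${d}/etcd"
done

# YYYYMMDD names sort chronologically, so a lexical sort is enough:
# everything except the last $keep entries gets deleted.
ls -1 "${backup_root}" | sort | head -n -${keep} | while read -r old; do
    rm -rf "${backup_root:?}/${old}"
done

ls -1 "${backup_root}"   # 20180103 20180104 20180105
```

Run against the real /backup tree, the same loop would prune old master, node, and etcd archives alike.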
References
OpenShift official documentation: cluster backup