SUSE CaaS Platform 4.5 Fault-Tolerant Environment Setup v1

1. Environment Overview

    3 Master Nodes + 2 Worker Nodes + 1 Management Node + 1 Mirror Node + 1 RMT/SMT

    Management Node:

        Manages the CaaS platform and also hosts the load-balancer service

    Mirror Server:

        Hosts the Helm Chart repositories and the Container Registry

        The Container Registry provides offline image downloads for the intranet (air-gapped) environment

    RMT/SMT:

        Provides system update packages and the packages required to build CaaS

(diagram: network topology)

2. Prerequisites

Each server/VM in the k8s environment requires at least 2 (v)CPUs;

Server names must be fully qualified domain names (FQDN);

IPv6 must be disabled;

IP forwarding must be enabled (net.ipv4.ip_forward = 1);

Swap must be disabled before the cluster platform is bootstrapped;

Clocks must be synchronized (NTP);

The cluster nodes must be configured with a valid gateway;

3. Preparation Steps

3.1 Edit hosts

# vim /etc/hosts

(screenshot: /etc/hosts contents)
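The original screenshot of the hosts file is not preserved. A minimal sketch using the demo.com hostnames that appear throughout this document; all IP addresses are hypothetical examples on the 192.168.55.0/24 network used elsewhere in this guide (the SMT entry reuses the NTP server address from section 3.4 purely as an illustration):

```
# /etc/hosts (example entries; IPs are hypothetical)
192.168.55.11   master01.demo.com     master01
192.168.55.12   master02.demo.com     master02
192.168.55.13   master03.demo.com     master03
192.168.55.21   worker01.demo.com     worker01
192.168.55.22   worker02.demo.com     worker02
192.168.55.23   worker03.demo.com     worker03
192.168.55.30   management.demo.com   management
192.168.55.40   mirror.demo.com       mirror  charts.demo.com
192.168.55.131  smt.demo.com          smt
```

charts.demo.com is shown as an alias of the mirror server because the chart files are served by the Nginx vhost configured there in section 4.5.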

3.2 Kernel Parameters

# vim /etc/sysctl.conf

(screenshot: /etc/sysctl.conf contents)
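The screenshot content is missing. A sketch of the kernel settings implied by the prerequisites in section 2 (IP forwarding on, IPv6 off):

```
# /etc/sysctl.conf additions
net.ipv4.ip_forward = 1
# disable IPv6, as required by the prerequisites
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```

The settings can be applied without a reboot with `sysctl -p`.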

3.3 Disable Swap

# touch /etc/init.d/after.local

# chmod 744 /etc/init.d/after.local

# vim /etc/init.d/after.local

(screenshot: after.local contents)
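The screenshot is missing; a minimal after.local sketch that turns swap off at the end of every boot:

```
#!/bin/sh
# /etc/init.d/after.local -- executed at the end of the boot sequence
# disable all swap devices before cluster components start
swapoff -a
```

Any swap entries in /etc/fstab should also be commented out so swap is not re-enabled by the system.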

3.4 Clock Synchronization (NTP)

# sed -i '3i pool 192.168.55.131 iburst' /etc/chrony.conf

# systemctl enable chronyd.service

# systemctl restart chronyd.service

# systemctl status chronyd.service

3.5 Gateway Configuration

    Configure on: 3 Master Nodes + 2 Worker Nodes + 1 Management Node

# echo "default 192.168.55.1 - -" >> /etc/sysconfig/network/routes

# rcnetwork restart

3.6 Add Software Repositories

(screenshot: zypper repository list)

3.7 Apply System Updates

# zypper dup

3.8 Reboot

# reboot

4. Mirror Server Configuration

4.1 Package Installation

# zypper in docker helm-mirror skopeo

# systemctl enable --now docker.service    # start the service and enable it at boot

4.2 Pull the Registry Container Image from SUSE

# docker pull registry.suse.com/sles12/registry:2.6.2

To package and import the image (e.g. to transfer it into the offline environment):

# docker save -o /tmp/registry.tar registry.suse.com/sles12/registry:2.6.2

# docker load -i /tmp/registry.tar

4.3 Registry Container Configuration File

# mkdir /etc/docker/registry/

# vim /etc/docker/registry/config.yml

(screenshot: config.yml contents)
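The config.yml screenshot did not survive. A minimal sketch in the standard Docker Registry configuration format, consistent with the docker run command in section 4.4 (data under /var/lib/registry, listening on port 5000):

```
# /etc/docker/registry/config.yml
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
```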

4.4 Start the Registry Container

# docker run -d -p 5000:5000 --restart=always --name registry \
   -v /etc/docker/registry:/etc/docker/registry:ro \
   -v /var/lib/registry:/var/lib/registry registry.suse.com/sles12/registry:2.6.2

# docker ps -a


# docker stats <Container ID>

# docker start <Container ID>

# docker stop <Container ID>

4.5 Configure the Nginx Web Server

# zypper install nginx

# vim /etc/nginx/vhosts.d/charts-server-http.conf

(screenshot: charts-server-http.conf contents)
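The vhost screenshot is missing. A sketch of a plain-HTTP vhost that serves the chart files copied to /srv/www/charts in section 4.7.4 under http://charts.demo.com/charts:

```
# /etc/nginx/vhosts.d/charts-server-http.conf
server {
    listen 80;
    server_name charts.demo.com;
    root /srv/www;

    location /charts {
        autoindex on;   # allow directory listing of the chart files
    }
}
```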

# systemctl enable --now nginx.service

4.6 Push the CaaS Platform Build Images to the Registry Mirror

https://documentation.suse.com/external-tree/en-us/suse-caasp/4/skuba-cluster-images.txt

Alternatively, run the following on a server where the skuba package is installed:

# skuba cluster images

(screenshot: skuba cluster images output)

# mkdir /tmp/skuba-cluster-images

# vim /tmp/skuba-cluster-images/sync.yaml

(screenshot: sync.yaml contents)
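The sync.yaml screenshot is missing. The file follows skopeo's sync YAML format: one top-level key per source registry, with an images map of repository names to tag lists. The repositories and tags below are placeholders; the real list comes from the `skuba cluster images` output above:

```
# /tmp/skuba-cluster-images/sync.yaml (placeholder repositories and tags)
registry.suse.com:
  images:
    caasp/v4.5/coredns:
      - "1.6.7"
    caasp/v4.5/etcd:
      - "3.4.13"
```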

# cd /tmp/skuba-cluster-images

# skopeo sync --src yaml --dest dir sync.yaml /tmp/skuba-cluster-images/ --scoped

# skopeo sync --dest-tls-verify=false --src dir --dest docker /tmp/skuba-cluster-images/ mirror.demo.com:5000 --scoped

4.7 Fetch and Publish Helm Chart Data

4.7.1 Download All Charts Locally from the Repository

# mkdir /tmp/charts

# cd /tmp/charts

# helm-mirror --new-root-url http://charts.demo.com/charts https://kubernetes-charts.suse.com /tmp/charts

(screenshot: /tmp/charts/ contents)

4.7.2 Convert the Chart Image Information to skopeo Format

# helm-mirror inspect-images /tmp/charts/ -o skopeo=sync.yaml -i

Adjust the converted file:

Remove duplicate version entries and duplicate image entries

For example:

(screenshots: charts, sync1.yaml, sync2.yaml)

Note:

    Images hosted on gcr.io can only be pulled through a proxy (the registry is not reachable from mainland China)

    CaaS 4.5 installs helm 2 by default. This document replaces helm 2 with helm 3, so Tiller is no longer needed and its images do not need to be downloaded.

4.7.3 Download the Chart Images and Publish Them to the Registry Mirror

# mkdir /tmp/skopeodata

# skopeo sync --src yaml --dest dir sync.yaml /tmp/skopeodata/ --scoped

# skopeo sync --dest-tls-verify=false --src dir --dest docker /tmp/skopeodata/ mirror.demo.com:5000 --scoped

List the contents of the local registry:

# curl mirror.demo.com:5000/v2/_catalog | tr "," "\n"

4.7.4 Copy the Helm Chart Data to the Web Server's Document Root

# cp -a /tmp/charts/ /srv/www/charts/

# chown -R nginx:nginx /srv/www/charts

# chmod -R 555 /srv/www/charts

# systemctl restart nginx.service

5. Nginx Load Balancing

    Configured on the management node

5.1 Nginx Configuration

# zypper -n in nginx

# vim /etc/nginx/nginx.conf

(screenshots: nginx.conf part 1 and part 2)
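The nginx.conf screenshots are missing. A sketch of the stream-mode load-balancer configuration implied by section 5.2, which tails /var/log/nginx/k8s-masters-lb-access.log while balancing the three masters' apiserver port 6443; the upstream name and log format are illustrative:

```
# added to /etc/nginx/nginx.conf, alongside the existing http { } block
stream {
    log_format proxy '$remote_addr [$time_local] $status $upstream_addr';

    upstream k8s-masters {
        server master01.demo.com:6443;
        server master02.demo.com:6443;
        server master03.demo.com:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-masters;
        access_log /var/log/nginx/k8s-masters-lb-access.log proxy;
    }
}
```

A stream block is used rather than an http block because the Kubernetes apiserver speaks TLS end-to-end; the load balancer only forwards TCP.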

# systemctl enable --now nginx

# systemctl status nginx

5.2 Verify Load Balancing

    Perform this verification after CaaS has been deployed

management:~ # cd /root/CaaS-Cluster

management:~ # while true; do skuba cluster status; sleep 1; done

management:~ # tail -100f /var/log/nginx/k8s-masters-lb-access.log

(screenshot: access log output)

6. ssh-agent Configuration

    Configured on the management node

6.1 Generate a Key Pair

management:~ # ssh-keygen

management:~ # cd ~/.ssh

management:~ # ssh-copy-id root@management.demo.com

management:~ # ssh-copy-id root@master01.demo.com

management:~ # ssh-copy-id root@master02.demo.com

management:~ # ssh-copy-id root@master03.demo.com

management:~ # ssh-copy-id root@worker01.demo.com

management:~ # ssh-copy-id root@worker02.demo.com

management:~ # ssh-copy-id root@worker03.demo.com

6.2 Start the ssh-agent Service

management:~ # eval "$(ssh-agent -s)"

6.3 Add the Private Key to ssh-agent

management:~ # ssh-add ~/.ssh/id_rsa

management:~ # ssh-add -l

7. CaaS Deployment

7.1 Component Installation

    Install on all management / master / worker nodes

    # zypper -n in -l -t pattern SUSE-CaaSP-Management

7.2 Configure CRI-O to Use the Registry Container

    Configure on all management / master / worker nodes

    # zypper -n install cri-o-1.18

    # mv /etc/containers/registries.conf /etc/containers/registries.conf.backup

    # vim /etc/containers/registries.conf

(screenshot: registries.conf contents)
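The registries.conf screenshot is missing. A sketch in the containers-registries.conf v2 (TOML) format, pointing pulls from registry.suse.com at the local mirror from section 4; the insecure flag assumes the mirror serves plain HTTP, matching the --dest-tls-verify=false used earlier:

```
# /etc/containers/registries.conf
unqualified-search-registries = ["registry.suse.com"]

[[registry]]
prefix = "registry.suse.com"
location = "registry.suse.com"

[[registry.mirror]]
location = "mirror.demo.com:5000"
insecure = true
```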

7.3 CaaS Initialization

management:~ # cd ~/

management:~ # skuba cluster init --control-plane management.demo.com CaaS-Cluster

7.4 Bootstrap the First Master Node

management:~ # cd ~/CaaS-Cluster/

management:~ # skuba node bootstrap --target master01.demo.com master01 -v5

7.5 Join the Remaining Nodes

Syntax:

skuba node join --role <master/worker> --user <user-name> --sudo --target <IP/FQDN> <node-name>

management:~ # skuba node join --role master --target master02.demo.com master02 -v5

management:~ # skuba node join --role master --target master03.demo.com master03 -v5

management:~ # skuba node join --role worker --target worker01.demo.com worker01 -v5

management:~ # skuba node join --role worker --target worker02.demo.com worker02 -v5

management:~ # skuba node join --role worker --target worker03.demo.com worker03 -v5

7.6 Test the Cluster

management:~ # mkdir ~/.kube

management:~ # cp ~/CaaS-Cluster/admin.conf ~/.kube/config

management:~ # kubectl get nodes

(screenshot: kubectl get nodes output)

management:~ # kubectl get nodes -o wide

(screenshot: kubectl get nodes -o wide output)

8. CaaS Cluster Status

8.1 Images Downloaded on the Current Node

master01:~ # crictl images

(screenshot: image list)

8.2 Containers Running on the Current Node

master01:~ # crictl ps -a

(screenshot: container list)

8.3 Pods Running on the Node

master01:~ # crictl pods

(screenshot: pod list)

8.4 View Container Logs

master01:~ # crictl logs d506b0fb5db13

(screenshot: container logs)

8.5 Display Cluster Information

management:~ # kubectl cluster-info

(screenshot: cluster info)

Dump detailed cluster information:

management:~ # kubectl cluster-info dump | less

management:~ # kubectl version --short=true

(screenshot: cluster version)

8.6 Display Resource Information

# kubectl --namespace=kube-system get deployments -o wide

# kubectl get nodes -o wide

# kubectl get pods --all-namespaces -o wide

# kubectl get svc --all-namespaces

9. K8s Stack

9.1 Install Helm

# zypper in helm

Starting with CaaSP 4.1.2, helm is included in the cluster packages, so no separate installation is needed

9.2 Replace helm 2 with helm 3

# zypper in helm3

# update-alternatives --set helm /usr/bin/helm3

9.3 Add the Mirror Server's Chart Repository

management:~ # helm repo add mirror-local http://charts.demo.com/charts

View the generated repository configuration file:

management:~ # cat ~/.config/helm/repositories.yaml

(screenshot: repositories.yaml contents)

List the chart repositories:

# helm repo list

(screenshot: repository list)

Update the repository data:

# helm repo update

(screenshot: update output)

List the contents of the chart repos:

# helm search repo

(screenshot: search output)

Appendix A: Common Chart Repositories

● Microsoft chart repository

http://mirror.azure.cn/kubernetes/charts/

● Alibaba chart repository

https://apphub.aliyuncs.com/

https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Official site: https://developer.aliyun.com/hub#/?_k=bfaiyc

● Kubernetes official chart repository

https://hub.kubeapps.com/charts/incubator

● SUSE chart repository

https://kubernetes-charts.suse.com

● Google chart repository

http://storage.googleapis.com/kubernetes-charts-incubator

Appendix B: Reset a CaaS Deployment

swapoff -a

kubeadm reset

systemctl daemon-reload

systemctl unmask kubelet.service

systemctl restart kubelet

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

Appendix C: Handling Node Join Errors

If a master or worker node reports errors after successfully joining the k8s cluster, reboot the affected master and worker nodes one at a time.
