Kubernetes (k8s v1.24) Complete Environment Deployment with Master High Availability

Official site: https://kubernetes.io/
Official docs: https://kubernetes.io/zh-cn/docs/home/

II. Basic Environment Deployment
1) Preliminary preparation (all nodes)
1. Set hostnames and configure hosts
Deploy one master and two worker nodes first; a second master node is added later.

# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115

Configure hosts:

cat >> /etc/hosts<<EOF
192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
EOF

2. Set up SSH mutual trust

# Just press Enter through all the prompts
ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115

3. Time synchronization

yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources

4. Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

5. Disable swap

# Temporary; Kubernetes requires swap to be off (kubelet will not start by default with swap on)
swapoff -a
# Check whether swap is off
free
# Permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab

6. Disable SELinux

# Temporary
setenforce 0
# Permanent
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

7. Allow iptables to see bridged traffic (optional, all nodes)
To load the br_netfilter module explicitly, run sudo modprobe br_netfilter, and verify it is loaded with lsmod | grep br_netfilter:

sudo modprobe br_netfilter
lsmod | grep br_netfilter

For a Linux node's iptables to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
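To confirm the parameters took effect, read them back (these are exactly the three keys written above):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# Each should print '= 1'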

2) Install Docker (all nodes)
Note: Kubernetes versions before v1.24 included a direct integration with Docker Engine via a component named dockershim. That direct integration is no longer part of Kubernetes (the removal was announced with the v1.20 release). You can read "Check whether Dockershim removal affects you" to understand how the removal may affect you, and see "Migrating from dockershim" for migration guidance.

# Configure the yum repos
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# CentOS 7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

# Install the yum-config-manager tool
yum -y install yum-utils
# Add the docker repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start
systemctl start docker
# Enable at boot
systemctl enable docker
# Show the version number
docker --version
# Show detailed version info
docker version

# Docker registry mirror setup
# Edit /etc/docker/daemon.json (create it if it does not exist)
# After adding the content below, restart the docker service:
cat >/etc/docker/daemon.json<<EOF
{
   "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Restart so the mirror setting takes effect
systemctl restart docker

# Check
systemctl status docker containerd

[Note] dockerd ultimately calls containerd's API; containerd is the intermediary between dockerd and runC, so starting the docker service also starts the containerd service.
3) Configure the Kubernetes yum repo (all nodes)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

4) Point sandbox_image at the Aliyun google_containers mirror (all nodes)

# Dump the default config; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g"       /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml

5) Set containerd's cgroup driver to systemd (all nodes)
Since v1.24.0, Kubernetes no longer ships dockershim and uses containerd as the container runtime endpoint instead. containerd was already installed alongside docker above; docker acts only as a client here, while the actual container engine is containerd.

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# After applying all changes, restart containerd
systemctl restart containerd

6) Install kubeadm, kubelet, and kubectl (master node)

# Omitting the version installs the latest; the latest at the time of writing is 1.24.1
yum install -y kubelet-1.24.1  kubeadm-1.24.1  kubectl-1.24.1 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: disable all repos except kubernetes
# Enable at boot and start immediately (--now starts the service right away)
systemctl enable --now kubelet

# Check the status; wait a while before checking, as startup is a bit slow
systemctl status kubelet

Checking the logs reveals the following error:

kubelet.service: Main process exited, code=exited, status=1/FAILURE kubelet.service: Failed with result 'exit-code'.

[Explanation] On a fresh (or re-) install of k8s, before kubeadm init or kubeadm join has run, kubelet restarts in a loop. This is normal and resolves itself once init or join is executed, as the official docs describe, so kubelet.service can be ignored at this point.
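If you want to watch what kubelet is doing while it restarts, the systemd journal is the quickest view (plain systemd commands, nothing specific to this setup):

# Show the most recent kubelet log entries
journalctl -xeu kubelet --no-pager | tail -n 30
# Or follow the log live
journalctl -fu kubelet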

Check the versions:

kubectl version
yum info kubeadm

7) Initialize the cluster with kubeadm (master node)
Pull the images ahead of time if possible; it makes the install faster.

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.24.1
docker pull registry.aliyuncs.com/google_containers/pause:3.7
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.3-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6

Initialize the cluster:

kubeadm init \
  --apiserver-advertise-address=192.168.0.113 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=cluster-endpoint \
  --kubernetes-version v1.24.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5
# --image-repository string:     where to pull images from (available since 1.13). The default is k8s.gcr.io; we point it at the domestic mirror registry.aliyuncs.com/google_containers.
# --kubernetes-version string:   the Kubernetes version. The default, stable-1, triggers a download of https://dl.k8s.io/release/stable-1.txt to resolve the latest version; pinning it (v1.24.1) skips that network request.
# --apiserver-advertise-address: which of the master's interfaces to use for talking to the other cluster nodes. If the master has several interfaces, it is best to specify this explicitly; otherwise kubeadm picks the interface with the default gateway. Use your own master IP here.
# --pod-network-cidr:            the Pod network range. Kubernetes supports many network plugins, each with its own requirements for this flag; 10.244.0.0/16 is used here because the flannel plugin requires this CIDR.
# --control-plane-endpoint:      cluster-endpoint is a custom DNS name mapped to the master IP (hosts entry: 192.168.0.113 cluster-endpoint). This lets you pass the same DNS name to kubeadm init and kubeadm join, and later repoint cluster-endpoint at a load balancer in an HA setup.
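Before running init, make sure the name cluster-endpoint actually resolves; a hosts entry is enough (assuming 192.168.0.113 is your master IP, as above; this entry is later repointed at the VIP in the HA section):

# Run on all nodes
echo "192.168.0.113 cluster-endpoint" >> /etc/hosts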

[Note] kubeadm does not support converting a single control-plane cluster created without --control-plane-endpoint into a highly available cluster.

If the first attempt failed or you need to re-run initialization, reset and clean up first:

kubeadm reset
rm -fr ~/.kube/  /etc/kubernetes/* /var/lib/etcd/*
kubeadm init \
  --apiserver-advertise-address=192.168.0.113  \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=cluster-endpoint \
  --kubernetes-version v1.24.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5

Configure environment variables:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Temporary (lost when the current shell session ends)
export KUBECONFIG=/etc/kubernetes/admin.conf
# Permanent (recommended)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source  ~/.bash_profile

The node still has a problem; checking the log /var/log/messages shows:

"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"

The next step is to install a Pod network add-on.

8) Install a Pod network add-on (CNI: Container Network Interface) (master)
You must deploy a CNI-based Pod network add-on so that your Pods can communicate with one another.

Pull the image ahead of time if possible (all nodes):

docker pull quay.io/coreos/flannel:v0.14.0
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If the install above fails, download it from my Baidu drive and install offline:

Link: https://pan.baidu.com/s/1HB9xuO3bssAW7v5HzpXkeQ
Extraction code: 8888
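Either way, once the manifest is applied you can watch the flannel pods come up (the namespace differs across flannel versions, so search all namespaces):

kubectl get pods -A -o wide | grep flannel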

Check the nodes again; they are now Ready.

9) Join the worker nodes to the cluster
Install kubelet first:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable at boot and start immediately (--now starts the service right away)
systemctl enable --now kubelet
systemctl status kubelet

If you don't have a token, you can get one by running the following command on the control-plane node:

kubeadm token list

By default, tokens expire after 24 hours. To join a node after the current token has expired, create a new token on the control-plane node:

kubeadm token create
# Check again
kubeadm token list

If you don't have the value of --discovery-token-ca-cert-hash, you can get it by running the following command chain on the control-plane node:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

If you didn't record the join command printed by kubeadm init, you can regenerate it (recommended); rather than fetching the token and ca-cert-hash separately as above, one command does it all:

kubeadm token create --print-join-command
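Run the printed command on each worker node; it has this general shape (the token and hash below are placeholders, substitute your own output):

kubeadm join cluster-endpoint:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>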

Wait a while before checking node status again, since kube-proxy and flannel still need to be installed on the new nodes.

kubectl get pods -A
kubectl get nodes

10) Configure IPVS
[Problem] ClusterIPs (or Service names) cannot be pinged from inside the cluster.

1. Load the ip_vs kernel modules

modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr

Verify on all nodes that the ip_vs modules are loaded:

lsmod |grep ip_vs
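modprobe only loads the modules for the current boot. To make them persist across reboots, the same modules-load.d mechanism used earlier for br_netfilter works here too (a sketch; the file name is arbitrary):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_sh
ip_vs_rr
ip_vs_wrr
EOF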

2. Install the ipvsadm tool

yum install ipset ipvsadm -y

3. Edit the kube-proxy ConfigMap and change mode to ipvs

kubectl edit  configmap -n kube-system  kube-proxy
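For a scripted, non-interactive version of the same edit, a sed pipeline works; this assumes mode is currently the empty string, which is the kubeadm default:

kubectl get configmap kube-proxy -n kube-system -o yaml | \
    sed 's/mode: ""/mode: "ipvs"/' | \
    kubectl apply -f -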

4. Restart kube-proxy

# Check first
kubectl get pod -n kube-system | grep kube-proxy
# Then delete the pods so they are recreated
kubectl get pod -n kube-system | grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
# Check again
kubectl get pod -n kube-system | grep kube-proxy

5. Inspect the IPVS forwarding rules

ipvsadm -Ln

11) Cluster high-availability configuration
There are two ways to build a highly available (HA) Kubernetes cluster:

Stacked control-plane nodes, where the etcd members are co-located with the control-plane nodes (used in this chapter).

External etcd nodes, where etcd runs on machines separate from the control plane.

Here we add one more machine, 192.168.0.116, as a second master node. Configure it exactly like the master above, except skip the final init step.

1. Set hostnames and configure hosts
All nodes use the following uniform configuration:

# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115
# Run on 192.168.0.116
hostnamectl set-hostname k8s-master2-168-0-116

Configure hosts:

cat >> /etc/hosts<<EOF
192.168.0.113 k8s-master-168-0-113 cluster-endpoint
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
EOF

2. Set up SSH mutual trust

# Just press Enter through all the prompts
ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master2-168-0-116

3. Time synchronization

yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources

4. Disable swap (and the firewall)

# Temporary; Kubernetes requires swap to be off (kubelet will not start by default with swap on)
swapoff -a
# Check whether swap is off
free
# Permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

5. Disable SELinux

# Temporary
setenforce 0
# Permanent
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config

6. Allow iptables to see bridged traffic (optional, all nodes)
To load the br_netfilter module explicitly, run sudo modprobe br_netfilter, and verify it is loaded with lsmod | grep br_netfilter:

sudo modprobe br_netfilter
lsmod | grep br_netfilter

For a Linux node's iptables to correctly see bridged traffic, confirm that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config. For example:

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system

7. Install Docker (all nodes)
Note: the same dockershim caveat as in section 2) above applies; Kubernetes versions before v1.24 integrated with Docker Engine via dockershim, which has since been removed.

# Configure the yum repos
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# CentOS 7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# CentOS 8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

# Install the yum-config-manager tool
yum -y install yum-utils
# Add the docker repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start
systemctl start docker
# Enable at boot
systemctl enable docker
# Show the version number
docker --version
# Show detailed version info
docker version

# Docker registry mirror setup
# Edit /etc/docker/daemon.json (create it if it does not exist)
# After adding the content below, restart the docker service:
cat >/etc/docker/daemon.json<<EOF
{
   "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Restart so the mirror setting takes effect
systemctl restart docker

# Check
systemctl status docker containerd

[Note] dockerd ultimately calls containerd's API; containerd is the intermediary between dockerd and runC, so starting the docker service also starts the containerd service.

8. Configure the Kubernetes yum repo (all nodes)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF

9. Point sandbox_image at the Aliyun google_containers mirror (all nodes)

# Dump the default config; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g"       /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml

10. Set containerd's cgroup driver to systemd
Since v1.24.0, Kubernetes no longer ships dockershim and uses containerd as the container runtime endpoint; containerd was already installed alongside docker above, with docker acting only as a client.

sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# After applying all changes, restart containerd
systemctl restart containerd

11. Install kubeadm, kubelet, and kubectl (master node)

# Omitting the version installs the latest; the latest at the time of writing is 1.24.1
yum install -y kubelet-1.24.1  kubeadm-1.24.1  kubectl-1.24.1 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: disable all repos except kubernetes
# Enable at boot and start immediately (--now starts the service right away)
systemctl enable --now kubelet

# Check the status; wait a while before checking, as startup is a bit slow
systemctl status kubelet

# Check the versions

kubectl version
yum info kubeadm

12. Join the new master to the k8s cluster

# If the certificates have expired, regenerate and re-upload them with the command below; it prints a certificate key that is used later
kubeadm init phase upload-certs --upload-certs
# You can also specify a custom --certificate-key during init, for later use by join. To generate such a key (not run here; the command above is enough):
kubeadm certs certificate-key

kubeadm token create --print-join-command

kubeadm join cluster-endpoint:6443 --token wswrfw.fc81au4yvy6ovmhh --discovery-token-ca-cert-hash sha256:43a3924c25104d4393462105639f6a02b8ce284728775ef9f9c30eed8e0abc0f --control-plane --certificate-key 8d2709697403b74e35d05a420bd2c19fd8c11914eb45f2ff22937b245bed5b68

# --control-plane tells kubeadm join to create a new control plane; this flag is required when joining a master
# --certificate-key ... downloads the control-plane certificates from the kubeadm-certs Secret in the cluster and decrypts them with the given key. The value is the key printed by the command above (kubeadm init phase upload-certs --upload-certs).

Run the following as prompted:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check:

kubectl get nodes
kubectl get pods -A -owide

Although there are now two masters, there is still only a single external entry point, so a load balancer is required; when one master goes down, traffic automatically switches to the other.
12) Deploy an Nginx + Keepalived HA load balancer



1. Install Nginx and Keepalived

# Run on both master nodes
yum install nginx keepalived -y

2. Nginx configuration
Configure on both master nodes:

cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
# Layer-4 load balancing, providing load balancing for the two master apiservers
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
       # Master APISERVER IP:PORT
       server 192.168.0.113:6443;
       # Master2 APISERVER IP:PORT
       server 192.168.0.116:6443;
    }
    server {
       listen 16443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF

[Note] If you only need high availability and skip apiserver load balancing, nginx can be omitted; configuring apiserver load balancing is still recommended, though.

3. Keepalived configuration (master)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 100    # Priority; set to 90 on the backup server
    advert_int 1    # VRRP heartbeat advertisement interval; default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF

vrrp_script: specifies the nginx health-check script (failover decisions are based on nginx's state)

virtual_ipaddress: the virtual IP (VIP)

The nginx health-check script:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

4. Keepalived configuration (backup)

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP router ID; unique per instance
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF

The nginx health-check script:

cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh

5. Start the services and enable them at boot

systemctl daemon-reload
systemctl restart nginx && systemctl enable nginx && systemctl status nginx
systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived

Check the VIP:

ip a
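To exercise the failover path without shutting a machine down, stop nginx on the node currently holding the VIP (192.168.0.120 as configured above):

# On the current VIP holder
systemctl stop nginx
# check_nginx.sh now exits 1, the VRRP instance enters FAULT state, and the backup takes over
# On the backup node, the VIP should appear:
ip a | grep 192.168.0.120
# Restore
systemctl start nginx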

6. Update hosts (all nodes)
Change the IP that cluster-endpoint previously pointed to so it now points at the VIP:

192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
192.168.0.120 cluster-endpoint

7. Test and verify
Check the version (load-balancing verification):

curl -k https://cluster-endpoint:16443/version

For the HA failover test, power off the k8s-master-168-0-113 node and verify the cluster still responds:

shutdown -h now
curl -k https://cluster-endpoint:16443/version
kubectl get nodes
kubectl get pods -A

[Note] A stacked cluster carries the risk of coupled failures: if a node fails, both its etcd member and its control-plane instance are lost, and redundancy is reduced. You can mitigate this by adding more control-plane nodes.

III. Deploying the Kubernetes Dashboard Management UI
1) Deploy the dashboard
GitHub: https://github.com/kubernetes/dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
kubectl get pods -n kubernetes-dashboard

This is only reachable from inside the cluster. For external access, either deploy an ingress or set the Service type to NodePort; the NodePort approach is used here.

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml

The modified manifest is as follows:

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.6.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Redeploy:

kubectl delete -f recommended.yaml
kubectl apply -f recommended.yaml
kubectl get svc,pods -n kubernetes-dashboard

2) Create a login user

cat >ServiceAccount.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f ServiceAccount.yaml

Create and fetch a login token:

kubectl -n kubernetes-dashboard create token admin-user
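Tokens created this way are short-lived by default; kubectl on v1.24 accepts a --duration flag if you want a longer validity:

kubectl -n kubernetes-dashboard create token admin-user --duration=24h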

3) Configure hosts and log in to the dashboard web UI

192.168.0.120 cluster-endpoint

Log in at https://cluster-endpoint:31443 and enter the token created above.

IV. Deploying the Harbor Image Registry
GitHub: https://github.com/helm/helm/releases
Harbor is installed with helm here, so install helm first.

1) Install helm

mkdir -p /opt/k8s/helm && cd /opt/k8s/helm
wget https://get.helm.sh/helm-v3.9.0-rc.1-linux-amd64.tar.gz
tar -xf helm-v3.9.0-rc.1-linux-amd64.tar.gz
ln -s /opt/k8s/helm/linux-amd64/helm /usr/bin/helm
helm version
helm help

2) Configure hosts

192.168.0.120 myharbor.com

3) Create SSL/TLS certificates

mkdir /opt/k8s/helm/stl && cd /opt/k8s/helm/stl
# Generate the CA private key
openssl genrsa -out ca.key 4096
# Generate the CA certificate
openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
 -key ca.key \
 -out ca.crt
# Create the domain certificate: generate its private key
openssl genrsa -out myharbor.com.key 4096
# Generate the certificate signing request (CSR)
openssl req -sha512 -new \
    -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
    -key myharbor.com.key \
    -out myharbor.com.csr
# Generate the x509 v3 extension file
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=myharbor.com
DNS.2=*.myharbor.com
DNS.3=hostname
EOF
# Issue the Harbor access certificate
openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in myharbor.com.csr \
    -out myharbor.com.crt
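A quick sanity check that the issued certificate carries the SANs declared in v3.ext (plain openssl, using only the files created above):

openssl x509 -in myharbor.com.crt -noout -text | grep -A1 'Subject Alternative Name'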

4) Install ingress
Official site: https://kubernetes.github.io/ingress-nginx/
Repository: https://github.com/kubernetes/ingress-nginx
Deployment docs: https://kubernetes.github.io/ingress-nginx/deploy/

1. Install via helm

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

2. Install via YAML manifest (the method used in this chapter)

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml

If the image pulls fail, rewrite the image addresses as follows and then install:

# Pull the images in advance, then install
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
# Rewrite the image addresses
sed -i 's@k8s.gcr.io/ingress-nginx/controller:v1.2.0\(.*\)@registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0@' deploy.yaml
sed -i 's@k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1\(.*\)$@registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1@' deploy.yaml

### A few more changes are needed:
# 1. Change kind: to DaemonSet and comment out replicas:, since a DaemonSet runs one pod per node
# 2. Add hostNetwork: true
# 3. Change LoadBalancer to NodePort
# 4. Add - --watch-ingress-without-class=true below --validating-webhook-key
# 5. Make the master nodes schedulable
kubectl taint nodes k8s-master-168-0-113 node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint nodes k8s-master2-168-0-116 node-role.kubernetes.io/control-plane:NoSchedule-

kubectl apply -f deploy.yaml

5) Install NFS
1. Install NFS on all nodes

yum -y install  nfs-utils rpcbind

2. Create and permission the shared directory on the master node

mkdir /opt/nfsdata
# Open up permissions on the shared directory (directories need the execute bit, so use 777 rather than 666)
chmod 777 /opt/nfsdata

3. Configure the exports file

cat > /etc/exports<<EOF
/opt/nfsdata *(rw,no_root_squash,no_all_squash,sync)
EOF
# Apply the configuration
exportfs -r

The exportfs command, common options:

-a  export or unexport all directories
-r  re-export all directories
-u  unexport a specific directory
-v  verbose; list the shared directories (run these on the server)

4. Start rpcbind and nfs-server (clients only need rpcbind; note the order)

systemctl start rpcbind
systemctl start nfs-server
systemctl enable rpcbind
systemctl enable nfs-server

Check:

showmount -e
# VIP
showmount -e 192.168.0.120

-e  show the NFS server's export list
-a  show the NFS file resources mounted on this host
-v  show the version number

5. Client

# Install
yum -y install  nfs-utils rpcbind
# Start the rpc service
systemctl start rpcbind
systemctl enable rpcbind
# Create the mount point
mkdir /mnt/nfsdata
# Mount
echo "192.168.0.120:/opt/nfsdata /mnt/nfsdata     nfs    defaults  0 1">> /etc/fstab
mount -a
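Verify the mount with a quick write test from the client (the file name is arbitrary):

df -h /mnt/nfsdata
# Write a test file on the client...
touch /mnt/nfsdata/test-from-client
# ...then confirm it shows up under /opt/nfsdata on the server
ls -l /opt/nfsdata/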

6. Syncing data with rsync
[1] Install rsync

# Install on both ends
yum -y install rsync

[2] Configuration
Add the following to /etc/rsyncd.conf:

cat >/etc/rsyncd.conf<<EOF
uid = root
gid = root
# confine the daemon to the source directory
use chroot = yes
# listen address
address = 192.168.0.113
# rsync listens on tcp/udp 873; see cat /etc/services | grep rsync
port 873
# log file location
log file = /var/log/rsyncd.log
# pid file location
pid file = /var/run/rsyncd.pid
# client addresses allowed to connect
hosts allow = 192.168.0.0/16
# shared module name
[nfsdata]
# actual path of the source directory
path = /opt/nfsdata
comment = Document Root of www.kgc.com
# whether clients may upload files; defaults to true for all modules
read only = yes
# file types not compressed during transfer
dont compress = *.gz *.bz2 *.tgz *.zip *.rar *.z
# authorized accounts, space-separated; omit for anonymous access; not tied to system accounts
auth users = backuper
# data file holding the account credentials
secrets file = /etc/rsyncd_users.db
EOF

Configure rsyncd_users.db:

cat >/etc/rsyncd_users.db<<EOF
backuper:123456
EOF
# Upstream requires this file to be mode 600!
chmod 600 /etc/rsyncd_users.db

[3] Common rsyncd.conf parameters explained
uid=root             the user rsync runs as
gid=root             the group rsync runs as (the user's group)
use chroot=no        if true, the daemon chroots to the path before transferring files; a security setting that can be skipped on internal networks
max connections=200  maximum number of connections; default 0 means unlimited, a negative value disables the module
timeout=400          default 0 means no timeout; 300-600 (5-10 minutes) is recommended
pid file             the daemon writes its pid here on startup; if the file already exists, rsync terminates instead of overwriting it
lock file            the lock file backing the max connections parameter, so the total connection count never exceeds the limit
log file             if unset or set incorrectly, rsync logs through rsyslog instead
ignore errors        ignore I/O errors
read only=false      whether clients may upload files; defaults to true for all modules
list=false           whether clients may list the available modules; allowed by default
hosts allow          hostnames, IPs, or address ranges allowed to connect; absent by default, meaning everyone may connect
hosts deny           hostnames, IPs, or address ranges denied; absent by default
auth users           space- or comma-separated users allowed to use the module; they need not exist as system accounts; the default is password-less access for all users
secrets file         the file holding usernames and passwords, in the format username:password; passwords at most 8 characters
[backup]             the module name, in square brackets; any name works, but a meaningful one eases maintenance
path                 the filesystem directory the daemon serves for this module; its permissions must match the config or reads/writes will fail
[4] Common rsync command-line options
-a  archive mode (recursive; preserves permissions, times, owners, and more)
-v  verbose output
-z  compress data during transfer
--delete  delete files on the destination that no longer exist on the source

[5] Start the service (on the data-source machine)

# rsync listens on port 873
# rsync runs in client/server (C/S) mode
rsync --daemon --config=/etc/rsyncd.conf
netstat -tnlp|grep :873

[6] Run the sync command

# Run on the destination machine
# rsync -avz user@source-host:/source-dir dest-dir
rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/
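Note that the command above syncs over SSH and bypasses the rsync daemon configured earlier. To pull through the daemon module instead (using the backuper account from rsyncd_users.db; it prompts for that password), the double-colon syntax is used:

rsync -avz backuper@192.168.0.113::nfsdata /opt/nfsdata/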

[7] Scheduled sync via crontab

# Sync every five minutes with crontab; this approach is crude
*/5 * * * * rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/

[Note] crontab-based periodic syncing is not great; rsync+inotify gives real-time sync, but that is too long to cover here and may get its own article later.

6) Create the NFS provisioner and a persistent StorageClass
[Note] This differs slightly from my earlier article; the old approach no longer works with newer versions.

GitHub: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

Deploy nfs-subdir-external-provisioner with helm:

1. Add the helm repo

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/

2. Install the NFS provisioner with helm
[Note] The default image is unreachable from here, so we use willdockerhub/nfs-subdir-external-provisioner:v4.0.2 found via Docker Hub search. Also note that a StorageClass is cluster-scoped, not namespaced, so it can be used from any namespace.

helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace=nfs-provisioner \
  --create-namespace \
  --set image.repository=willdockerhub/nfs-subdir-external-provisioner \
  --set image.tag=v4.0.2 \
  --set replicaCount=2 \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true \
  --set nfs.server=192.168.0.120 \
  --set nfs.path=/opt/nfsdata

[Note] Setting nfs.server to the VIP above makes the mount highly available.
3. Check

kubectl get pods,deploy,sc -n nfs-provisioner
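A throwaway PVC is a quick end-to-end check that the StorageClass actually provisions (the PVC name here is illustrative; expect STATUS Bound within a few seconds):

cat > test-pvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client
  resources:
    requests:
      storage: 1Mi
EOF
kubectl apply -f test-pvc.yaml
kubectl get pvc test-nfs-pvc
# Clean up afterwards
kubectl delete -f test-pvc.yaml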

7) Deploy Harbor (HTTPS)
1. Create the namespace

kubectl create ns harbor

2. Create the certificate secret

kubectl create secret tls myharbor.com --key myharbor.com.key --cert myharbor.com.crt -n harbor
kubectl get secret myharbor.com -n harbor

3. Add the chart repo

helm repo add harbor https://helm.goharbor.io

4. Install Harbor with helm

helm install myharbor --namespace harbor harbor/harbor \
  --set expose.ingress.hosts.core=myharbor.com \
  --set expose.ingress.hosts.notary=notary.myharbor.com \
  --set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
  --set expose.tls.secretName=myharbor.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
  --set persistence.enabled=true \
  --set externalURL=https://myharbor.com \
  --set harborAdminPassword=Harbor12345

Wait a while, then check the resource status:

kubectl get ingress,svc,pods,pvc -n harbor

5. Fixing the ingress missing-ADDRESS problem
[Analysis] The log shows "error: endpoints "default-http-backend" not found", so create a default backend:

cat << EOF > default-http-backend.yaml
---
 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: harbor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
#        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
 
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: harbor
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
EOF
kubectl apply -f default-http-backend.yaml

6. Uninstall and redeploy

Uninstall:

helm uninstall myharbor -n harbor
kubectl get pvc -n harbor| awk 'NR!=1{print $1}' | xargs kubectl delete pvc -n harbor

Deploy:

helm install myharbor --namespace harbor harbor/harbor \
  --set expose.ingress.hosts.core=myharbor.com \
  --set expose.ingress.hosts.notary=notary.myharbor.com \
  --set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
  --set expose.tls.secretName=myharbor.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
  --set persistence.enabled=true \
  --set externalURL=https://myharbor.com \
  --set harborAdminPassword=Harbor12345

7. Access Harbor
https://myharbor.com
Username/password: admin/Harbor12345

8. Common Harbor operations
[1] Create a project named bigdata (in the Harbor web UI)

[2] Configure the private registry
Add the following to /etc/docker/daemon.json:

"insecure-registries":["https://myharbor.com"]

Restart docker:

systemctl restart docker

[3] Log in to Harbor from the server

docker login https://myharbor.com
# Username/password: admin/Harbor12345

[4] Tag an image and push it to Harbor

docker tag rancher/pause:3.6 myharbor.com/bigdata/pause:3.6
docker push myharbor.com/bigdata/pause:3.6

9. Adjust the containerd configuration
Previously, with docker-engine, editing /etc/docker/daemon.json was enough; but newer k8s talks to containerd directly, so containerd needs its own registry configuration, or pulls from Harbor will fail. The certificate (ca.crt) can be downloaded from the Harbor web UI.

Create a directory for the domain:

mkdir /etc/containerd/myharbor.com
cp ca.crt /etc/containerd/myharbor.com/

Configuration file: /etc/containerd/config.toml

[plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".tls]
          ca_file = "/etc/containerd/myharbor.com/ca.crt"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".auth]
          username = "admin"
          password = "Harbor12345"

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."myharbor.com"]
          endpoint = ["https://myharbor.com"]


Restart containerd:

# Reload the configuration
systemctl daemon-reload
# Restart containerd
systemctl restart containerd

Basic usage:

# Just replace docker with crictl; the commands are nearly identical
crictl pull myharbor.com/bigdata/mysql:5.7.38

Fixing the following error from crictl:

WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"

This error comes from the deprecated default dockershim endpoint; since docker/dockershim isn't used here, it doesn't affect anything, but it is still better to fix it by pointing crictl at containerd:

cat <<EOF> /etc/crictl.yaml 
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

Pull the image again:

crictl pull myharbor.com/bigdata/mysql:5.7.38

That wraps up the complete step-by-step Kubernetes (k8s) base environment deployment with master high availability. If you have questions, feel free to leave me a comment!
