Deploying Highly Available Kubernetes 1.19 with kubeadm
Prerequisites
Three servers with 2 CPU cores and 2 GB RAM each (virtual machines; if your computer has decent specs, VMs are recommended and save quite a bit of money)
For the preliminary setup (package sources, Docker installation, and so on), refer to my earlier blog post -> installation tutorial
Once that setup is done, stop before running kubeadm init and come back to this article
Installation
1. Installing the etcd cluster
etcd is a highly available distributed key-value database. Kubernetes stores its service and state data in etcd: if etcd goes down the cluster becomes unusable, and if its data is lost the cluster reverts to a blank state, so etcd's high availability must be guaranteed. Here we use an external etcd cluster rather than the containerized single-node etcd that kubeadm generates automatically during initialization.
First pick one server on which to generate the TLS certificates etcd needs. We use cfssl here, which lets us describe the certificates in JSON instead of typing out long command lines.
# Install cfssl and set up the PATH
yum install wget -y
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH
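A quick sanity check that the binaries are on the PATH:
# Prints the cfssl version if the install worked
cfssl version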
Create the JSON files used to generate the certificates
mkdir /root/ssl
cd /root/ssl
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# hosts must list the IPs of every etcd node
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "172.16.3.130",
    "172.16.3.131",
    "172.16.3.132"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# Sign the etcd certificate with the CA
cfssl gencert -ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd
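To double-check the signed certificate before distributing it, cfssl-certinfo (installed above) can print its contents; the hosts/SAN section should list all three node IPs:
# Inspect the signed certificate
cfssl-certinfo -cert etcd.pem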
Next, distribute the certificates to every server that will run etcd. Since I generated them on the 130 server, I only need to copy them to the other two machines.
mkdir -p /etc/etcd/ssl
cp /root/ssl/etcd.pem /root/ssl/etcd-key.pem /root/ssl/ca.pem /etc/etcd/ssl/
scp -r /etc/etcd/ 172.16.3.131:/etc/
scp -r /etc/etcd/ 172.16.3.132:/etc/
Then install etcd on every node that will run the etcd service
yum install etcd -y
# Create the data directory (this is also where backups would be taken)
mkdir -p /var/lib/etcd
Next, edit etcd's systemd unit to run in cluster mode. A few points to watch (a per-node example of the changed flags follows the unit file below):
--name must be different on each node
the IPs must be changed to the IP of the server being configured
if you hit "request sent was ignored (cluster ID mismatch: peer[f73f6335fab3c75e]=903824bb6a071282", change --initial-cluster-state to existing
the names in --initial-cluster must match each node's --name
--data-dir must be created in advance
the heredoc delimiter below is quoted ('EOF') so the shell writes the backslash line continuations into the unit file verbatim
cat <<'EOF' >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \
  --name k8s01 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://172.16.3.130:2380 \
  --listen-peer-urls https://172.16.3.130:2380 \
  --listen-client-urls https://172.16.3.130:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://172.16.3.130:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster k8s01=https://172.16.3.130:2380,k8s02=https://172.16.3.131:2380,k8s03=https://172.16.3.132:2380 \
  --initial-cluster-state new \
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
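On the other two nodes the unit file is identical except for --name and the IP-bearing flags; for example, on 172.16.3.131 (k8s02) those lines become:
--name k8s02 \
--initial-advertise-peer-urls https://172.16.3.131:2380 \
--listen-peer-urls https://172.16.3.131:2380 \
--listen-client-urls https://172.16.3.131:2379,http://127.0.0.1:2379 \
--advertise-client-urls https://172.16.3.131:2379 \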
Once every etcd node is configured, start the service; if anything goes wrong, check the unit file
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd
On any machine where etcd is installed, check the cluster status
# Use the v3 API, otherwise some subcommands are unavailable
echo "export ETCDCTL_API=3" >>/etc/profile && source /etc/profile
# Replace with your own IPs
etcdctl --endpoints=https://172.16.3.130:2379,https://172.16.3.131:2379,https://172.16.3.132:2379 --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem endpoint health
If, as in the screenshot below, every endpoint reports successfully, the cluster is up. If any endpoint fails, check systemctl status etcd for the error message; barring something unusual, the cause is a mistake in the unit file.
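As an extra check, the same flags work with member list, which should show all three members:
etcdctl --endpoints=https://172.16.3.130:2379,https://172.16.3.131:2379,https://172.16.3.132:2379 --cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem member list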
2. Installing keepalived
Here I run two master nodes and one worker node; the two masters keep a floating VIP alive through keepalived. The configuration below must be done on both masters (swapping the mcast_src_ip and unicast_peer addresses on each).
# Enable kubelet
systemctl daemon-reload
systemctl enable kubelet
Install keepalived
yum install -y keepalived
systemctl enable keepalived
cat <<EOF > /etc/keepalived/keepalived.conf
global_defs {
    router_id LVS_k8s
}
# health-check script: probes the apiserver through the VIP
vrrp_script CheckK8sMaster {
    script "curl -k https://172.16.3.110:6443"  # the VIP
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33  # usually eth0 on cloud servers; on a VM check with ifconfig
    virtual_router_id 61
    priority 120
    advert_int 1
    mcast_src_ip 172.16.3.130  # the IP of the current server
    nopreempt
    authentication {
        auth_type PASS
        auth_pass sqP05dQgMSlzrxHj
    }
    unicast_peer {
        172.16.3.131  # the other master's IP; e.g. on the 131 server put 130 here (every master IP except mcast_src_ip)
    }
    virtual_ipaddress {
        172.16.3.110/24  # the VIP
    }
    track_script {
        CheckK8sMaster
    }
}
EOF
Start keepalived and test whether the VIP fails over properly
systemctl start keepalived
systemctl status keepalived
First check which master currently holds the VIP
# look for the VIP among the interface addresses
ip a
As shown below, my VIP currently sits on 172.16.3.131. Now stop keepalived on 131 and watch whether the VIP floats to the other master, 130; if it moves over, keepalived is working.
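A minimal failover test, assuming the VIP starts out on 131 as in my case:
# On 131, the current VIP holder:
systemctl stop keepalived
# On 130, the VIP should now be attached to the interface:
ip a | grep 172.16.3.110
# Afterwards, bring keepalived on 131 back up:
systemctl start keepalived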
3. Preparing the init configuration file
Because the configuration template changes between versions, this step must be adapted to your own version's template; mine is kubeadm.k8s.io/v1beta2. A few points to note:
apiServer.certSANs must include every cluster IP, plus the corresponding host names
localAPIEndpoint.advertiseAddress is the current node's IP
controlPlaneEndpoint is the VIP address
imageRepository should be changed to a domestic mirror
etcd must list every etcd endpoint configured earlier
# First export a baseline config
kubeadm config print init-defaults >init-config.yaml
Here is the 1.19 base config after the high-availability modifications
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.16.3.130
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs:
  - "node0"
  - "node1"
  - "node2"
  - "172.16.3.110"
  - "172.16.3.130"
  - "172.16.3.131"
  - "172.16.3.132"
  - "127.0.0.1"
  extraArgs:
    etcd-cafile: /etc/etcd/ssl/ca.pem
    etcd-certfile: /etc/etcd/ssl/etcd.pem
    etcd-keyfile: /etc/etcd/ssl/etcd-key.pem
controlPlaneEndpoint: "172.16.3.110:6443"  # the VIP
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:
    caFile: /etc/etcd/ssl/ca.pem
    certFile: /etc/etcd/ssl/etcd.pem
    keyFile: /etc/etcd/ssl/etcd-key.pem
    endpoints:
    - https://172.16.3.130:2379
    - https://172.16.3.131:2379
    - https://172.16.3.132:2379
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: "10.244.0.0/16"
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
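Optionally, before the real initialization, a dry run can catch mistakes in this file; kubeadm init accepts --dry-run, which prints what would be done without modifying the host:
kubeadm init --config init-config.yaml --dry-run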
4. Initializing the cluster
Note that the VIP can be pinged from anywhere, but the cluster initialization must run on the machine that currently holds the VIP; otherwise kubeadm times out endlessly with messages like
GET https://172.16.3.110:6443/healthz?timeout=10s in 0 milliseconds
Find the master node holding the VIP and initialize the cluster there
kubeadm init --config init-config.yaml
When the installation finishes, kubeadm prints two join commands; the one with --control-plane joins additional masters. Before running it, send the certificates to the new master first
scp -r /etc/kubernetes/pki 172.16.3.130:/etc/kubernetes/
# Then run the matching join command on the second master
kubeadm join 172.16.3.110:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:c546b1e5c5cf3e587752bbd862db332c183607b6f9c48b6514e9197f25cdbe39 \
--control-plane
# After joining succeeds, configure kubectl access
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Then just check the nodes
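# All joined nodes should be listed
kubectl get nodes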
Next check the component statuses; on 1.19 the scheduler and controller-manager show as failed by default
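The check itself (kubectl get componentstatuses is deprecated as of 1.19 but still works):
# On a stock 1.19 install, scheduler and controller-manager show Unhealthy here
kubectl get cs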
If they show connection failures, you only need to adjust their manifest files under /etc/kubernetes/manifests
Delete the --port=0 line from both of those manifests (kube-controller-manager.yaml and kube-scheduler.yaml); with that, the whole high-availability cluster is up.
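A quick way to apply the fix (a sketch, assuming the standard static pod manifest names; the kubelet re-creates the pods automatically when the manifests change):
# Remove the --port=0 flag from both manifests
sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml
# Once the pods restart, the components should report Healthy
kubectl get cs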