The official documentation has a few problems: following its steps as written does not produce a working highly available K8s environment. The steps that did work are recorded here.
This installation is based on CentOS 7.6 + Docker 19.03 + Kubernetes 1.17.3 + HAProxy 1.5.18.
Prerequisites:
- Install Docker on the 3 master nodes and the 3 worker nodes
- Install Kubernetes (kubeadm/kubelet/kubectl) on the 3 master nodes and the 3 worker nodes
- Install HAProxy on the LB node
- Set up passwordless SSH trust among the 3 master nodes, to make the later scp steps easier
- Disable SELinux and firewalld on every node, and open the required ports in iptables
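The last prerequisite leaves the exact ports implicit. A minimal sketch that only generates (never applies) iptables rules for the control-plane ports listed in the Kubernetes v1.17 documentation; review the file before running it:

```shell
# Control-plane ports per the Kubernetes docs: apiserver (6443),
# etcd (2379-2380), kubelet (10250), scheduler (10251),
# controller-manager (10252). The HAProxy front end used later in this
# guide additionally needs 8443 open on the LB node.
RULES_FILE=$(mktemp /tmp/k8s-iptables.XXXXXX)
for port in 6443 2379:2380 10250 10251 10252; do
  echo "iptables -A INPUT -p tcp --dport $port -j ACCEPT" >> "$RULES_FILE"
done
cat "$RULES_FILE"   # inspect, then apply with: bash "$RULES_FILE"
```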
Steps:
- On the 3 master nodes, run kubeadm config images pull --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers to pull the required images
- Set the hostnames of the 3 master nodes to master-01, 02, 03, of the 3 worker nodes to work-01, 02, 03, and of the LB node to loadblance
hostnamectl set-hostname master-01 (the commands for the other 6 nodes are similar)
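With the SSH trust from the prerequisites in place, the renames can be driven from one node. A sketch that only prints the commands; the worker IPs are not listed in this article, so add them to the list yourself:

```shell
# Print (do not run) the rename command for each node named in this guide;
# drop the echo once the output looks right.
cmds=$(while read -r ip name; do
  echo "ssh root@$ip hostnamectl set-hostname $name"
done <<'EOF'
10.128.132.234 master-01
10.128.132.232 master-02
10.128.132.231 master-03
10.128.132.230 loadblance
EOF
)
printf '%s\n' "$cmds"
```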
- Install etcd on the 3 master nodes
yum install -y etcd && systemctl enable etcd
- Configure etcd on the masters
Run the following script on master-01 (adjust the IPs to match your environment):
etcd1=10.128.132.234
etcd2=10.128.132.232
etcd3=10.128.132.231
TOKEN=LNk8sTest
ETCDHOSTS=($etcd1 $etcd2 $etcd3)
NAMES=("infra0" "infra1" "infra2")
# Generate one configuration file per etcd member
for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  NAME=${NAMES[$i]}
  cat << EOF > /tmp/$NAME.conf
# [member]
ETCD_NAME=$NAME
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://$HOST:2380"
ETCD_LISTEN_CLIENT_URLS="http://$HOST:2379,http://127.0.0.1:2379"
# [cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://$HOST:2380"
ETCD_INITIAL_CLUSTER="${NAMES[0]}=http://${ETCDHOSTS[0]}:2380,${NAMES[1]}=http://${ETCDHOSTS[1]}:2380,${NAMES[2]}=http://${ETCDHOSTS[2]}:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="$TOKEN"
ETCD_ADVERTISE_CLIENT_URLS="http://$HOST:2379"
EOF
done
# Push each member's config into place over SSH, then clean up locally
for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  NAME=${NAMES[$i]}
  scp /tmp/$NAME.conf $HOST:
  ssh $HOST "\mv -f $NAME.conf /etc/etcd/etcd.conf"
  rm -f /tmp/$NAME.conf
done
- Start etcd on the 3 master nodes
systemctl start etcd
- Check the etcd cluster status; run the following commands on any master node
[root@master-01 ~]# etcdctl member list
30bf939e6a7c2da9: name=infra1 peerURLs=http://10.128.132.232:2380 clientURLs=http://10.128.132.232:2379 isLeader=false
49194e6617aabed9: name=infra2 peerURLs=http://10.128.132.231:2380 clientURLs=http://10.128.132.231:2379 isLeader=false
7564c96b6750649c: name=infra0 peerURLs=http://10.128.132.234:2380 clientURLs=http://10.128.132.234:2379 isLeader=true
[root@master-01 ~]# etcdctl cluster-health
member 30bf939e6a7c2da9 is healthy: got healthy result from http://10.128.132.232:2379
member 49194e6617aabed9 is healthy: got healthy result from http://10.128.132.231:2379
member 7564c96b6750649c is healthy: got healthy result from http://10.128.132.234:2379
cluster is healthy
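If this check is scripted rather than eyeballed, the output can be verified mechanically. A small helper, assuming the etcd v2 etcdctl output format shown above:

```shell
# Return success only if the given `etcdctl cluster-health` output reports
# the expected number of healthy members plus the final summary line.
etcd_all_healthy() {
  local out=$1 expected=$2
  [ "$(printf '%s\n' "$out" | grep -c ' is healthy: ')" -eq "$expected" ] &&
    printf '%s\n' "$out" | grep -q '^cluster is healthy$'
}
# Usage: etcd_all_healthy "$(etcdctl cluster-health)" 3 || echo "etcd not ready"
```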
- Configure HAProxy on the LB node
Run the following script on the LB node:
[root@loadblance ~]# cat lbconfig.sh
master1=10.128.132.234
master2=10.128.132.232
master3=10.128.132.231
yum install -y haproxy
systemctl enable haproxy
cat << EOF >> /etc/haproxy/haproxy.cfg
listen k8s-lb
  bind 0.0.0.0:8443
  mode tcp
  balance source
  timeout server 900s
  timeout connect 15s
  server master-01 $master1:6443 check
  server master-02 $master2:6443 check
  server master-03 $master3:6443 check
EOF
systemctl restart haproxy
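Since the script appends to the distribution's stock haproxy.cfg, it is worth validating the merged file and only then (re)starting the service; a sketch using HAProxy's -c configuration-check mode:

```shell
# Validate /etc/haproxy/haproxy.cfg with `haproxy -c` and only restart
# the service when the check passes.
reload_haproxy() {
  if haproxy -c -f /etc/haproxy/haproxy.cfg; then
    systemctl restart haproxy
  else
    echo "haproxy.cfg failed validation; not restarting" >&2
    return 1
  fi
}
# Usage: reload_haproxy
```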
- Initialize the cluster on master-01
Run the following script:
[root@master-01 ~]# cat initCluster.sh
proxy=10.128.132.230
etcd1=10.128.132.234
etcd2=10.128.132.232
etcd3=10.128.132.231
master1=$etcd1
master2=$etcd2
master3=$etcd3
cat << EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
apiServerCertSANs:
- "$proxy"
controlPlaneEndpoint: "$proxy:8443"
etcd:
  external:
    endpoints:
    - "http://$etcd1:2379"
    - "http://$etcd2:2379"
    - "http://$etcd3:2379"
networking:
  podSubnet: "10.244.0.0/16"
EOF
kubeadm init --config kubeadm-config.yaml --v=5
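Before running kubeadm init it is cheap to sanity-check the generated file; a minimal sketch (kubeadm_config_ok is a hypothetical helper, and the fields checked are the ones this setup depends on):

```shell
# Check kubeadm-config.yaml for a ClusterConfiguration kind, a
# controlPlaneEndpoint, and exactly 3 external etcd endpoints.
kubeadm_config_ok() {
  local f=$1
  grep -q '^kind: ClusterConfiguration$' "$f" &&
    grep -q '^controlPlaneEndpoint:' "$f" &&
    [ "$(grep -c ':2379"$' "$f")" -eq 3 ]
}
# Usage: kubeadm_config_ok kubeadm-config.yaml || echo "config incomplete"
```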
- Copy the cluster certificates to the other master nodes
Copy the following files:
/etc/kubernetes/pki/ca.crt
/etc/kubernetes/pki/ca.key
/etc/kubernetes/pki/sa.key
/etc/kubernetes/pki/sa.pub
/etc/kubernetes/pki/front-proxy-ca.crt
/etc/kubernetes/pki/front-proxy-ca.key
Create the directory /etc/kubernetes/pki on master-02 and master-03, and copy the certificates above into it.
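The copy can be scripted over the SSH trust set up in the prerequisites; a sketch (sync_pki is a hypothetical helper, not a kubeadm command):

```shell
# Copy the six control-plane certificates listed above to another master.
sync_pki() {
  local host=$1 pki=${2:-/etc/kubernetes/pki}
  local f
  ssh "$host" "mkdir -p /etc/kubernetes/pki"
  for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
    scp "$pki/$f" "$host:/etc/kubernetes/pki/"
  done
}
# Usage: for h in master-02 master-03; do sync_pki "$h"; done
```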
- Join master-02 and master-03 to the cluster
Use the join command printed when the cluster was initialized on master-01, for example:
kubeadm join 10.128.132.230:8443 --token pk72tg.30u2cs41v2i4jk0y \
--discovery-token-ca-cert-hash sha256:24b7ff6c9ca456a9155e8f1d0e72500abc71db122a1728afc9d3e14883779c9b \
--control-plane
- Join work-01, 02, 03 to the cluster; the command is the same as in the previous step without the --control-plane flag, for example:
kubeadm join 10.128.132.230:8443 --token pk72tg.30u2cs41v2i4jk0y \
--discovery-token-ca-cert-hash sha256:24b7ff6c9ca456a9155e8f1d0e72500abc71db122a1728afc9d3e14883779c9b
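The token and hash above are specific to this cluster, and tokens expire. On a master, kubeadm token create --print-join-command prints a fresh join command; the CA hash alone can also be recomputed from ca.crt with the standard openssl pipeline from the kubeadm documentation:

```shell
# Recompute the value for --discovery-token-ca-cert-hash from the cluster CA
# (hash of the DER-encoded public key, prefixed with "sha256:").
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | awk '{print "sha256:" $NF}'
}
# Usage: ca_cert_hash /etc/kubernetes/pki/ca.crt
```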
- Install the flannel network add-on as a non-root user
Create a regular user apple, grant it sudo privileges, then follow the instructions printed after cluster initialization:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Switch to the apple user and install the flannel network add-on:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
- Check node registration and cluster status
kubectl get cs # component statuses (scheduler, controller-manager, etcd)
kubectl get pods -o wide -n kube-system # system pod status
kubectl get nodes # node status
- Add a role label to the worker nodes
kubectl label nodes work-01 node-role.kubernetes.io/work=
kubectl label nodes work-02 node-role.kubernetes.io/work=
kubectl label nodes work-03 node-role.kubernetes.io/work=
Check the final state
[apple@master-03 root]$ kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
master-01   Ready    master   8d    v1.17.3
master-02   Ready    master   12h   v1.17.3
master-03   Ready    master   12h   v1.17.3
work-01     Ready    work     11h   v1.17.3
work-02     Ready    work     11h   v1.17.3
work-03     Ready    work     11h   v1.17.3
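A check like the one above can be automated; a sketch assuming the plain column layout of `kubectl get nodes`:

```shell
# Exit non-zero if any node in `kubectl get nodes` output is not Ready
# (STATUS is column 2; the header row is skipped).
all_nodes_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}
# Usage: kubectl get nodes | all_nodes_ready && echo "all nodes Ready"
```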
Notes
- Kubernetes API versions change between releases; if the installation fails because some kind cannot be found, look up the API version that kind belongs to in the Kubernetes release you are using
- If cluster initialization fails, use curl to check whether the LB node and the master nodes can reach each other
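The curl check in the last note can be generalized to a plain TCP probe, which also works on hosts without curl; a sketch using bash's /dev/tcp device:

```shell
# Probe a host:port with a 2-second timeout via bash's /dev/tcp device.
probe() {
  local ip=$1 port=$2
  if timeout 2 bash -c "exec 3<>/dev/tcp/$ip/$port" 2>/dev/null; then
    echo "$ip:$port reachable"
  else
    echo "$ip:$port unreachable"
  fi
}
# e.g. from the LB node, check every apiserver backend:
# for ip in 10.128.132.234 10.128.132.232 10.128.132.231; do probe "$ip" 6443; done
```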