Quickly Setting Up a K8s Cluster with kubeadm (continuously updated)

Basic Setup

This post describes a quick way to deploy a K8s cluster with kubeadm. The base environment is as follows:

IP            OS                   Component  Hostname  CPU      Memory  Storage
172.30.3.220  Ubuntu 16.04 Server  master     k8s-n1    4 cores  8 GB    200 GB
172.30.3.221  Ubuntu 16.04 Server  node       k8s-n2    4 cores  8 GB    200 GB
172.30.3.222  Ubuntu 16.04 Server  node       k8s-n3    4 cores  8 GB    200 GB

kubelet version: v1.14

Docker version: v18.09.2

Unless otherwise noted, all commands are run as root.

First, configure the hosts file on all three machines and turn off swap:

# Add to the hosts file on all three machines:
172.30.3.220  k8s-n1
172.30.3.221  k8s-n2
172.30.3.222  k8s-n3
# Turn off swap
$ swapoff -a
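Note that `swapoff -a` only lasts until the next reboot; to keep swap off permanently (which kubelet requires by default), the swap entry in /etc/fstab must also be commented out. A minimal sketch, run here against a demo copy so it is safe to try anywhere; the UUID and swapfile path below are made up:

```shell
# Demo fstab with a swap entry (paths/UUIDs here are made up).
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /         ext4 errors=remount-ro 0 1
/swapfile      none      swap sw                0 0
EOF
# Comment out any line whose filesystem type column is "swap".
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

On the nodes themselves, the same sed against /etc/fstab (after backing it up) makes the swapoff persistent across reboots.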

Next, set up passwordless SSH login between the three machines. This is optional, but it makes the rest of the work easier.

# Generate an SSH key pair on k8s-n1, k8s-n2, and k8s-n3
$ ssh-keygen -t rsa
# Create ~/.ssh/authorized_keys
$ touch ~/.ssh/authorized_keys
# Append the local public key to the authorized keys
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Send k8s-n1's authorized keys to k8s-n2
$ scp ~/.ssh/authorized_keys root@k8s-n2:~/.ssh/
# Log in to k8s-n2 and append its public key to the authorized keys
$ ssh root@k8s-n2
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Send k8s-n2's authorized keys to k8s-n3
$ scp ~/.ssh/authorized_keys root@k8s-n3:~/.ssh/
# Log in to k8s-n3 and append its public key to the authorized keys
$ ssh root@k8s-n3
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# Send k8s-n3's authorized keys, which now contain all three public keys, back to k8s-n1 and k8s-n2
$ scp ~/.ssh/authorized_keys root@k8s-n1:~/.ssh/
$ scp ~/.ssh/authorized_keys root@k8s-n2:~/.ssh/
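The scp/cat round-trip above can also be done non-interactively. A sketch: `-N '' -f` skips ssh-keygen's prompts, and `ssh-copy-id` (part of OpenSSH's client tools) replaces the manual copy-and-append for each peer. The demo key path below is only for illustration:

```shell
# Generate a key pair without prompts (demo path; on a node use ~/.ssh/id_rsa).
rm -f /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
ssh-keygen -t rsa -N '' -f /tmp/demo_id_rsa -q
ls -l /tmp/demo_id_rsa /tmp/demo_id_rsa.pub
# On each node you would then push the public key to every peer, e.g.:
#   ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-n2
```

ssh-copy-id prompts for the peer's password once and appends the key to its authorized_keys, so the scp/ssh/cat shuffle above collapses to one command per peer.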

Install Docker on all three machines:

# Update the apt sources (with unrestricted network access, Ubuntu's stock sources work fine)
$ apt update
# Install the latest Docker
$ apt install docker.io -y
# Enable Docker at boot and start the service
$ systemctl enable docker && systemctl start docker
# Check that Docker started successfully
$ systemctl status docker
# If it failed to start, inspect the startup logs with:
$ journalctl -xefu docker
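An alternative to patching kubelet's cgroup-driver flag (done in the master section below) is to point Docker at the systemd cgroup driver via daemon.json so both sides match. A sketch written to a demo path; on a real node the file is /etc/docker/daemon.json:

```shell
# Demo of the daemon.json content (real path: /etc/docker/daemon.json).
cat > /tmp/daemon.json.demo <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# Validate that the file is well-formed JSON.
python3 -m json.tool /tmp/daemon.json.demo
```

After writing the real file, `systemctl restart docker` picks it up, and `docker info | grep -i cgroup` should then report systemd.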

Install the kubelet, kubeadm, and kubectl components on all three machines:

# Install prerequisites
$ apt-get update && apt-get install -y apt-transport-https
# Add the signing key and the deb source
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# Update the sources and install
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl
$ systemctl enable kubelet
$ systemctl start kubelet
# Note: kubelet cannot start properly yet because its base config files are missing
# Inspect the reason with:
$ journalctl -xefu kubelet

Configuring the Master Node k8s-n1

Start configuring the master node k8s-n1:

# Check the cgroup driver first; Docker's default is usually cgroupfs
$ docker info | grep cgroup
# Then check --cgroup-driver in 10-kubeadm.conf; kubeadm's default there is systemd
# Append "--cgroup-driver=cgroupfs" to the following line
$ vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
# becomes
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=cgroupfs"

# Restart kubelet
$ systemctl daemon-reload
$ systemctl restart kubelet

Initialize the master node:

# --pod-network-cidr is used by the flannel network plugin; it sets the IP range pods will use
$ kubeadm init --apiserver-advertise-address=172.30.3.220 --pod-network-cidr=10.244.0.0/16
# On success, the command prints the join command for worker nodes:
$ kubeadm join 172.30.3.220:6443 --token jkuc3w.5sq85b4dh5f2deet \
    --discovery-token-ca-cert-hash sha256:ad953ebdc367105595ec70b9d4d9d2a17cc6c98e68bd0b8857bce34745c1a9d5
# Run that command on a node machine whenever you need to add it to the cluster
# If initialization fails, reset and start over with:
$ kubeadm reset
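The two init flags can equivalently live in a kubeadm configuration file, which is easier to keep under version control. A sketch, assuming the v1beta1 config API that shipped with kubeadm v1.14 (written to a demo path here; on the master you would run `kubeadm init --config` against the real file):

```shell
# Demo kubeadm config equivalent to the two flags above.
cat > /tmp/kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.30.3.220
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: "10.244.0.0/16"
EOF
cat /tmp/kubeadm-config.yaml
```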

At this point the kubelet service on k8s-n1 should be running, but it complains about a missing network plugin:

$ systemctl status kubelet
kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2019-04-10 21:59:46 EDT; 3h 2min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 31544 (kubelet)
    Tasks: 18
   Memory: 47.7M
      CPU: 32min 52.401s
   CGroup: /system.slice/kubelet.service
           └─31544 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml

Apr 11 01:02:40 k8s-n1 kubelet[31544]: E0411 01:02:40.342504   31544 remote_runtime.go:109] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to
Apr 11 01:02:40 k8s-n1 kubelet[31544]: E0411 01:02:40.342633   31544 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "coredns-fb8b8dccf-tx5vf_kube-system(b9daf3d4-5c04-11e9-9
Apr 11 01:02:40 k8s-n1 kubelet[31544]: E0411 01:02:40.342724   31544 kuberuntime_manager.go:693] createPodSandbox for pod "coredns-fb8b8dccf-tx5vf_kube-system(b9daf3d4-5c04-11e9-
Apr 11 01:02:40 k8s-n1 kubelet[31544]: E0411 01:02:40.342941   31544 pod_workers.go:190] Error syncing pod b9daf3d4-5c04-11e9-99bc-000c2968fc47 ("coredns-fb8b8dccf-tx5vf_kube-sys
Apr 11 01:02:40 k8s-n1 kubelet[31544]: W0411 01:02:40.437223   31544 docker_sandbox.go:384] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook
Apr 11 01:02:40 k8s-n1 kubelet[31544]: W0411 01:02:40.494044   31544 pod_container_deletor.go:75] Container "7ab3628dd31222b83406e4366847eaed26d8b5def60bd5d4a87700b215b25e0a" not
Apr 11 01:02:40 k8s-n1 kubelet[31544]: W0411 01:02:40.498937   31544 cni.go:309] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated c
Apr 11 01:02:40 k8s-n1 kubelet[31544]: W0411 01:02:40.516730   31544 docker_sandbox.go:384] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook
Apr 11 01:02:40 k8s-n1 kubelet[31544]: W0411 01:02:40.566554   31544 pod_container_deletor.go:75] Container "64449655fad7b8397bc62582bccb81cd77998b173a879b354ca29b882f4a30de" not
Apr 11 01:02:40 k8s-n1 kubelet[31544]: W0411 01:02:40.571572   31544 cni.go:309] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated c

Configuring the flannel Network

This is expected: the cluster network depends on an external network plugin. Next, install flannel:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

# Check the installation
$ kubectl -n kube-system get pod
NAME                             READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-dthq9          1/1     Running   0          158m
coredns-fb8b8dccf-tx5vf          1/1     Running   0          158m
etcd-k8s-n1                      1/1     Running   0          3h28m
kube-apiserver-k8s-n1            1/1     Running   0          3h28m
kube-controller-manager-k8s-n1   1/1     Running   0          3h28m
kube-flannel-ds-amd64-zrn9h      1/1     Running   0          32s
kube-proxy-6vswp                 1/1     Running   0          3h29m
kube-scheduler-k8s-n1            1/1     Running   0          3h28m

# The flannel image is also visible locally
$ docker image ls |grep flannel
quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        2 months ago        52.6MB

Configuring Worker Nodes (Adding/Removing Nodes)

Run the following on k8s-n2 and k8s-n3:

# Join the cluster as a worker node
# (join tokens expire after 24 hours by default; generate a fresh join command
# on the master with: kubeadm token create --print-join-command)
$ kubeadm join 172.30.3.220:6443 --token cvenka.7z3kqwjo9ca6k4js \
    --discovery-token-ca-cert-hash sha256:55e84a80cd67fb070d1484a5d28e1457cdcbe5f4d781fc8ec78de5843fce6cc8

# Remove node k8s-n2 from the cluster
$ kubectl drain k8s-n2 --delete-local-data --force --ignore-daemonsets
$ kubectl delete node k8s-n2
# Then reset on the k8s-n2 machine itself
$ kubeadm reset

Configuring kubectl Access to the Cluster

# Set the environment variable
$ vi /etc/profile
# and add:
export KUBECONFIG=/etc/kubernetes/admin.conf

# Configure access for a non-root user (optional)
$ su yzhou
$ mkdir -p ~/.kube
$ sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config

# Configure access from a local machine (install kubectl on macOS)
$ brew install kubectl
$ mkdir ~/.kube
$ vim ~/.kube/config
# Copy the contents of /etc/kubernetes/admin.conf on k8s-n1 into the local ~/.kube/config
$ sudo chown $(id -u):$(id -g) ~/.kube/config
# Test kubectl
$ kubectl cluster-info
# Output with no errors means the config is correct; on failure, inspect with:
$ kubectl cluster-info dump
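When pasting admin.conf by hand, it helps to know roughly what shape the file has. A skeleton sketch: the server address matches this cluster's master, while everything in angle brackets is a placeholder for the base64 blobs you must copy from the real admin.conf:

```shell
# Skeleton of a kubeadm-generated kubeconfig (demo path; bracketed values are placeholders).
cat > /tmp/kubeconfig.demo <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <base64 CA cert>
    server: https://172.30.3.220:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64 client cert>
    client-key-data: <base64 client key>
EOF
grep 'current-context' /tmp/kubeconfig.demo
```

If `kubectl cluster-info` fails after pasting, a truncated certificate blob or a wrong `server:` address are the usual culprits.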

Checking Cluster Status

# Check the nodes
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-n1    Ready     master    3h        v1.14.1
k8s-n2    Ready     <none>    3m        v1.14.1
k8s-n3    Ready     <none>    1m        v1.14.1

# Check the kube-system pods
$ kubectl -n kube-system get pods
NAME                             READY     STATUS    RESTARTS   AGE
coredns-fb8b8dccf-dthq9          1/1       Running   0          2h
coredns-fb8b8dccf-tx5vf          1/1       Running   0          2h
etcd-k8s-n1                      1/1       Running   0          3h
kube-apiserver-k8s-n1            1/1       Running   0          3h
kube-controller-manager-k8s-n1   1/1       Running   0          3h
kube-flannel-ds-amd64-d5rqq      1/1       Running   0          4m
kube-flannel-ds-amd64-sqfqt      1/1       Running   0          2m
kube-flannel-ds-amd64-zrn9h      1/1       Running   0          10m
kube-proxy-2bhnb                 1/1       Running   0          4m
kube-proxy-6vswp                 1/1       Running   0          3h
kube-proxy-bcnrb                 1/1       Running   0          2m
kube-scheduler-k8s-n1            1/1       Running   0          3h

That completes the kubeadm-based K8s cluster setup.
