Installing Kubernetes (Ubuntu 20.04)

Preface

This article describes how to deploy a k8s cluster on Ubuntu. The process breaks down into the following steps:

  • Adjust the Ubuntu configuration
  • Install Docker
  • Install kubeadm, kubectl, and kubelet
  • Initialize the master node
  • Join the slave nodes to the cluster

If some of these names are unfamiliar, don't worry: each is explained below. If you would like an introduction to Docker and k8s first, see "Understand Docker and K8s in 10 Minutes". Before we begin, let's look at the servers we will use; if you are interested in how this virtual machine network is assembled, see "VirtualBox VM Networking".

Install a virtual machine in VMware from the Ubuntu 20.04 image, then run the following commands:

sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
    

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

    
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


sudo apt-get install openssh-server
sudo apt install net-tools

sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

sudo systemctl enable ssh
sudo systemctl enable docker


Clone two virtual machines from it, then follow the four steps shown in the figure to change each clone's MAC address.


Hostname   Host IP         Version              CPU      Memory
master1    192.168.56.11   Ubuntu Server 18.04  2 cores  1 GB
worker1    192.168.56.21   Ubuntu Server 18.04  2 cores  1 GB

Because k8s distinguishes between management (master) nodes and worker nodes, we will deploy the management node on master1 and a worker node on worker1. To see how these two machines were created, refer to "VirtualBox VM Networking". As for hardware, k8s requires at least 2 CPU cores; with fewer, installation reports an error. The error can be ignored, but for stability's sake it is best to give the VM what k8s asks for. k8s has no hard memory requirement, so I allocated memory according to what my computer could spare.

Note that none of the software installed in this article (Docker, the k8s components, and so on) is pinned to a specific version. The versions below are what was current when the article was completed (2019/6/27); if you need particular versions, specify them yourself.

Software  Version
docker    18.09.5
kubectl   1.15.0-00 amd64
kubeadm   1.15.0-00 amd64
kubelet   1.15.0-00 amd64

1. Adjust the Ubuntu configuration

First, k8s requires some host-level configuration of Ubuntu. It is straightforward and consists of three steps: disabling swap, configuring passwordless SSH login, and setting the hostname. Perform these on both machines.

Disable swap

Swap is roughly analogous to virtual memory on Windows: it lets a server keep running (slowly) when physical memory is exhausted instead of locking up outright. However, recent versions of k8s require swap to be disabled, so let's get to it and edit /etc/fstab:

sudo vi /etc/fstab

You should see content like the following. Comment out the swap entry (the second line here) with a #, and be careful not to comment out the first line, or the system may come up with a file system read-only error after reboot.

UUID=e2048966-750b-4795-a9a2-7b477d6681bf /   ext4    errors=remount-ro 0    1
# /dev/fd0        /media/floppy0  auto    rw,user,noauto,exec,utf8 0       0

然后輸入reboot重啟即可,重啟后使用top命令查看任務(wù)管理器,如果看到如下KiB Swap后均為 0 就說(shuō)明關(guān)閉成功了。

關(guān)閉swap之后的任務(wù)管理器

The above disables swap permanently. You can also disable it temporarily with swapoff -a; that change is lost on reboot.
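The manual edit above can also be scripted. Below is a sketch (the helper name is mine) that comments out every uncommented swap entry in an fstab-style file; run it against /etc/fstab only after making a backup:

```shell
# Comment out every uncommented line whose fields mark it as a swap entry.
# sed -i.bak edits the file in place and leaves a .bak backup (GNU sed).
disable_swap_in_fstab() {
  sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^\([^#]\)/#\1/' "$1"
}

# Typical use (as root): swapoff -a && disable_swap_in_fstab /etc/fstab
```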

Configure passwordless SSH login

Strictly speaking, kubeadm does not require the master to be able to SSH into the workers, but passwordless login from the management node to the worker nodes makes administering the cluster after it is built much more convenient. The setup itself is simple and won't be detailed here; see the final chapter, "Passwordless login", of the article "VirtualBox VM Networking".
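For reference, the usual key-based setup looks like the sketch below (the scratch directory is purely for illustration; for real use generate into ~/.ssh and run ssh-copy-id against the worker address from the table above):

```shell
# Generate an ed25519 key pair with no passphrase into a scratch directory.
keydir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N "" -f "$keydir/id_ed25519"

# To install the key on a worker you would then run (not executed here):
#   ssh-copy-id -i "$keydir/id_ed25519.pub" <user>@192.168.56.21
ls "$keydir"
```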

Set the hostname

Note that hostnames must be unique within the cluster; otherwise a node will fail to join with an error like:

error execution phase kubelet-start: a Node with name "ubuntu" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node

The command:

# set the hostname to master, worker1, or worker2, as appropriate
hostnamectl set-hostname master/worker1/worker2

2. Install Docker

Docker is the foundation k8s runs on, and some settings must be changed after installation to suit k8s, so this chapter has two parts: installing Docker and configuring Docker. If you have already been using Docker for a while, check the installed version with docker -v and consult the k8s website for a k8s release that matches it. Install Docker on both machines.

Installing Docker

Installing Docker on Ubuntu could hardly be simpler: just run the command below. (Remember to switch your apt mirror to a nearby one first. Also note that if you already installed docker-ce from Docker's official repository during the preparation above, you can skip this step: docker.io is Ubuntu's own Docker package and conflicts with docker-ce.)

sudo apt install docker.io

等安裝完成之后使用docker -v來(lái)驗(yàn)證 docker是否可用。

Configuring Docker

安裝完成之后需要進(jìn)行一些配置,包括 切換docker下載源為國(guó)內(nèi)鏡像站 以及 修改cgroups

What are cgroups? You can think of them as a Linux kernel facility for grouping processes and constraining their resources; Docker uses them to implement container isolation. Docker's default cgroup driver is cgroupfs, while the kubelet expects the systemd driver, and running two different cgroup managers on one host can cause anomalies, so we switch Docker's driver to systemd.

Both settings live in /etc/docker/daemon.json, so we can configure them together. First open daemon.json for editing:

sudo vi /etc/docker/daemon.json

打開(kāi)后輸入以下內(nèi)容:

{
  "registry-mirrors": [
    "https://dockerhub.azk8s.cn",
    "https://reg-mirror.qiniu.com",
    "https://quay-mirror.qiniu.com"
  ],
  "exec-opts": [ "native.cgroupdriver=systemd" ]
}

然后:wq保存后重啟 docker:

sudo systemctl daemon-reload
sudo systemctl restart docker

然后就可以通過(guò)docker info | grep Cgroup來(lái)查看修改后的 docker cgroup 狀態(tài),發(fā)現(xiàn)變?yōu)?code>systemd即為修改成功。

3. Install the k8s components

With Docker installed, we can download the three main k8s components: kubelet, kubeadm, and kubectl. Install them on both machines. A quick introduction to the three:

  • kubelet: the core k8s node agent that runs on every machine
  • kubeadm: an integrated tool for bootstrapping k8s quickly; we will use it for the k8s deployment on both master1 and worker1
  • kubectl: the k8s command-line tool; all operations after deployment go through it

Downloading these three would normally be a plain apt-get affair, but for various reasons the default download endpoints are unreachable from mainland China, so we use a domestic mirror instead. It is just as simple: run the following five commands in order:

# make apt support transport over TLS
apt-get update && apt-get install -y apt-transport-https
# fetch the gpg key (run as root, otherwise this fails)
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 
# add the k8s mirror source (run as root, otherwise this fails)
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
# refresh the package index
apt-get update
# install kubelet, kubeadm, and kubectl
apt-get install -y kubelet kubeadm kubectl

Simply adding https://mirrors.aliyun.com/kubernetes/apt/ to /etc/apt/sources.list does not work, because this Aliyun mirror serves over TLS: apt-transport-https must be installed and the mirror's key imported before anything can be downloaded.

4. Set up the master node

下載完成后就要迎來(lái)重頭戲了,初始化master節(jié)點(diǎn),這一章節(jié)只需要在管理節(jié)點(diǎn)上配置即可,大致可以分為如下幾步:

  • 初始化master節(jié)點(diǎn)
  • 部署flannel網(wǎng)絡(luò)
  • 配置kubectl工具

Initialize the master node

The kubeadm init command handles the initialization, but it needs a few arguments, shown below. Don't copy and run it immediately: first change the IP assigned to the --apiserver-advertise-address parameter to your own master host's address, then execute it.

# pull the coredns image first, or the next step will fail
docker pull coredns/coredns:1.8.4
docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4

# initialize
kubeadm init \
--apiserver-advertise-address=192.168.196.141 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

這里介紹一下一些常用參數(shù)的含義:

  • --apiserver-advertise-address: the address on which k8s's central apiserver service is exposed; use your own management node's IP
  • --image-repository: the registry to pull Docker images from; kubeadm pulls many k8s component images during initialization, so point it at a domestic mirror or the pulls will fail
  • --pod-network-cidr: the pod network range used by k8s; since we will use flannel as the k8s network, 10.244.0.0/16 is the right value here
  • --kubernetes-version: pins the k8s version to deploy; usually unnecessary, but if initialization fails because of a version mismatch, you can set the version explicitly with this parameter
  • --ignore-preflight-errors: ignores named errors during initialization; for example, to ignore the error about having fewer than 2 CPU cores, pass --ignore-preflight-errors=NumCPU. The error names are printed when the preflight checks fail.
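The same flags can also be written as a kubeadm configuration file and passed via kubeadm init --config. The sketch below uses the v1beta2 config API (field names can differ across kubeadm versions, so verify against kubeadm config print init-defaults before relying on it):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.196.141   # --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: registry.aliyuncs.com/google_containers  # --image-repository
networking:
  podSubnet: 10.244.0.0/16                                # --pod-network-cidr
```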

When you see output like the following, initialization succeeded. Copy the final command beginning with kubeadm join: you will need it later when setting up the worker nodes. If you lose it, run kubeadm token create --print-join-command on the master node to generate a new one.

Your Kubernetes master has initialized successfully!
 
To start using your cluster, you need to run the following as a regular user:
 
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 
You can now join any number of machines by running the following on each node
as root:
 
kubeadm join 192.168.56.11:6443 --token wbryr0.am1n476fgjsno6wa --discovery-token-ca-cert-hash sha256:7640582747efefe7c2d537655e428faa6275dbaff631de37822eb8fd4c054807

If any Error aborts the initialization, run kubeadm reset to reset the node and then initialize again.
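If you captured the output to a file (e.g. kubeadm init ... | tee init.log), a tiny sketch (the helper name is mine) can recover the join command, which spans two lines in the log:

```shell
# Print the `kubeadm join` line plus its continuation line from a saved log.
print_join_command() {
  grep -A 1 'kubeadm join' "$1"
}

# usage: print_join_command init.log
```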

Full output:

root@ubuntu:/home/mico# kubeadm init --apiserver-advertise-address=192.168.196.141 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.22.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ubuntu] and IPs [10.96.0.1 192.168.196.141]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ubuntu] and IPs [192.168.196.141 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ubuntu] and IPs [192.168.196.141 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.516007 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: i9rpmm.8jqs342cmyj1hfwg
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.196.141:6443 --token i9rpmm.8jqs342cmyj1hfwg \
    --discovery-token-ca-cert-hash sha256:95f83d494d4d484945f8017a70bf2f7c6b238c8ca844d431facc4b19dc4105f2 

Configure the kubectl tool

This step is simple; as root, just run:

mkdir -p /root/.kube && \
cp /etc/kubernetes/admin.conf /root/.kube/config

執(zhí)行完成后并不會(huì)刷新出什么信息,可以通過(guò)下面兩條命令測(cè)試 kubectl是否可用:

# list the nodes that have joined
kubectl get nodes
# check cluster component status
kubectl get cs

Deploy the flannel network

flannel是什么?它是一個(gè)專門為 k8s 設(shè)置的網(wǎng)絡(luò)規(guī)劃服務(wù),可以讓集群中的不同節(jié)點(diǎn)主機(jī)創(chuàng)建的 docker 容器都具有全集群唯一的虛擬IP地址。想要部署flannel的話直接執(zhí)行下述命令即可:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

On the newer cluster (v1.22 here) this fails, because the pinned manifest uses API versions that have since been removed from k8s:

root@ubuntu:/home/mico# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "ClusterRole" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "ClusterRoleBinding" in version "rbac.authorization.k8s.io/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"
unable to recognize "https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml": no matches for kind "DaemonSet" in version "extensions/v1beta1"

Switching to the master-branch manifest works:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds created

(For reference, on the v1.15 cluster this article originally targeted, the first manifest applies cleanly; installation is complete when you see output like this:)

clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

At this point the k8s management node is fully deployed.

5. Join the slave nodes to the cluster

首先需要重復(fù)步驟 1 ~ 3 來(lái)安裝 docker 、k8s 以及修改服務(wù)器配置,之后執(zhí)行從步驟 4 中保存的命令即可完成加入,注意,這條命令每個(gè)人的都不一樣,不要直接復(fù)制執(zhí)行:

kubeadm join 192.168.56.11:6443 --token wbryr0.am1n476fgjsno6wa --discovery-token-ca-cert-hash sha256:7640582747efefe7c2d537655e428faa6275dbaff631de37822eb8fd4c054807

The node has joined successfully once the console prints:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.

隨后登錄master1查看已加入節(jié)點(diǎn)狀態(tài),可以看到worker1已加入,并且狀態(tài)均為就緒。至此,k8s 搭建完成:

root@master1:~# kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
master1   Ready    master   145m   v1.15.0
worker1   Ready    <none>   87m    v1.15.0

Fixing the default network interface

If you deployed the VMs with VirtualBox and they cannot reach each other via the IP on network adapter 1 (for example, a dual-adapter setup where adapter 1 is NAT for internet access and adapter 2 is host-only for inter-VM traffic), you need to follow this section to change the interface k8s uses by default; otherwise some commands will not work. If your default interface does allow the VMs to reach each other, this problem does not affect you.

Change the kubelet node IP

Open the kubelet drop-in configuration file:

sudo vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

在最后一行ExecStart 之前 添加如下內(nèi)容:

Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.56.21"

Restart kubelet:

systemctl stop kubelet.service && \
systemctl daemon-reload && \
systemctl start kubelet.service

That completes the change. For more background, see the article on fixing kubectl logs, exec, and port-forward failures.

Change flannel's default interface

Edit the flannel DaemonSet (with the newer master-branch manifest, the DaemonSet is named kube-flannel-ds rather than kube-flannel-ds-amd64):

sudo kubectl edit daemonset kube-flannel-ds-amd64 -n kube-system

Find the spec.template.spec.containers.args field and add --iface=<interface name>. For example, my interface is enp0s8:

- args:
  - --ip-masq
  - --kube-subnet-mgr
  # add this line
  - --iface=enp0s8

:wq保存修改后輸入以下內(nèi)容刪除所有 flannel,k8s 會(huì)自動(dòng)重建:

kubectl delete pod -n kube-system -l app=flannel

That completes the change. For more detail, see the article on fixing pods being unreachable through a Service from other nodes.

Summary

You should now have a complete, working two-node k8s cluster. From here you can continue to dig deeper into k8s.
