Installing a Kubernetes 1.16.1 Cluster with kubeadm

Author: liliming
Date: 2019-06-09 · Last Update: 2019-11-17

1. Environment Preparation

1.1 Lab Environment

Item Value
CentOS 7.7.1908
Docker 18.09.7
Kubernetes 1.16.1
Helm 3.0
Kernel 3.10.0-1062.1.2.el7.x86_64
CPU 2
Cgroup Driver systemd
Network Calico
serviceSubnet 10.1.0.0/16
podSubnet 10.2.0.0/16
apiserver apiserver.yoho8
ingress-controller ingress-nginx

1.2 Host Plan

Hostname IP
kube-node001 10.0.0.11
kube-node002 10.0.0.12
kube-node003 10.0.0.13

1.3 Configure the hosts File

$ cat <<EOF | tee -a /etc/hosts
10.0.0.11 kube-node001 apiserver.yoho8
10.0.0.12 kube-node002
10.0.0.13 kube-node003
EOF
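
A quick sanity check that the names resolve (getent reads /etc/hosts; any of the three hostnames works):

$ getent hosts apiserver.yoho8
10.0.0.11       kube-node001 apiserver.yoho8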

1.4 Disable the Firewall and SELinux

systemctl stop firewalld
systemctl disable firewalld

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

1.5 Enabling kernel IPv4 forwarding requires the br_netfilter module, so load it:

$ modprobe br_netfilter
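
Note that modprobe only loads the module until the next reboot. To load it automatically at boot, a minimal sketch using the systemd-modules-load mechanism that ships with CentOS 7:

$ cat <<EOF > /etc/modules-load.d/br_netfilter.conf
# Load br_netfilter at boot so the bridge-nf sysctls keep working
br_netfilter
EOF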

1.6 Create /etc/sysctl.d/k8s.conf with the following content:

If any of these settings already exist in /etc/sysctl.conf, simply modify them there.

$ cat <<EOF> /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

$ sysctl -p /etc/sysctl.d/k8s.conf

bridge-nf

bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets crossing a Linux bridge. For example, once net.bridge.bridge-nf-call-iptables=1 is set, packets forwarded by a layer-2 bridge are also subject to iptables FORWARD rules. The common options (a quick verification follows the list) include:

  • net.bridge.bridge-nf-call-arptables: whether bridged ARP packets are filtered by arptables' FORWARD chain
  • net.bridge.bridge-nf-call-ip6tables: whether bridged IPv6 packets are filtered by the ip6tables chains
  • net.bridge.bridge-nf-call-iptables: whether bridged IPv4 packets are filtered by the iptables chains
  • net.bridge.bridge-nf-filter-vlan-tagged: whether VLAN-tagged packets are filtered by iptables/arptables
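
After sysctl -p above, you can confirm the settings took effect (expected output shown):

$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1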

1.7 Disable swap

Swap severely degrades Kubernetes performance (by default the kubelet even refuses to start while swap is enabled), so turn it off:

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
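
To confirm swap is fully off, swapon --show should print nothing and free should report zeros:

$ swapon --show
$ free -h | grep -i swap
Swap:            0B          0B          0B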

1.8 Prerequisites for Enabling IPVS in kube-proxy

IPVS has long been part of the mainline kernel, so enabling IPVS mode for kube-proxy only requires loading the following kernel modules:

  • ip_vs
  • ip_vs_rr
  • ip_vs_wrr
  • ip_vs_sh
  • nf_conntrack_ipv4

Run the following:

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Check:

lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Install ipset, plus the management tool ipvsadm for inspecting IPVS proxy rules:

yum install -y ipset ipvsadm

If these prerequisites are not met, kube-proxy falls back to iptables mode even when its configuration enables IPVS.

2. Install Docker & Kubernetes

2.1 Install Docker

2.1.1 Add the Docker yum repository
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2.1.2 Install a specific Docker version

Mind Kubernetes compatibility: a given Kubernetes release typically supports only the last few Docker versions.

yum makecache fast

yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
2.1.3 Adjust the configuration
  • Change Docker's cgroup driver to systemd
  • Configure registry mirrors
  • A few other optimizations

Docker and the kubelet must use the same cgroup driver; here we set Docker's cgroup driver to systemd, the driver recommended on systemd-based distributions. If you leave Docker at its default instead, you must change the kubelet's startup configuration so the two still match.

mkdir /etc/docker -p
tee /etc/docker/daemon.json <<-'EOF'
{
    "registry-mirrors": [
        "https://dockerhub.azk8s.cn",
        "https://reg-mirror.qiniu.com",
        "https://registry.docker-cn.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "10"
    },

    "storage-driver": "overlay2",
    "live-restore": true
}
EOF
2.1.4 Restart Docker
systemctl daemon-reload
systemctl enable docker && systemctl start docker
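
After the restart, verify that the cgroup driver change took effect:

$ docker info 2>/dev/null | grep -i cgroup
Cgroup Driver: systemd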

2.2 Install Kubernetes

2.2.1 Add the Aliyun Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.2.2 Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.16.1 kubeadm-1.16.1 kubectl-1.16.1
2.2.3 Restart Docker and enable the kubelet
systemctl daemon-reload && systemctl restart docker
systemctl enable kubelet && systemctl start kubelet
2.2.4 kubectl shell auto-completion
yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

==Run all of the steps above on every node==

Alternatively, run this one-shot script:

curl -fsSL https://gitee.com/llmgo/shell/raw/master/kubernetes.sh | bash

3. Initialize the Kubernetes Cluster

3.1 Create the kubeadm-config Initialization File

cat <<EOF > ./kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.1
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: "apiserver.yoho8:6443"
networking:
  serviceSubnet: "10.1.0.0/16"
  podSubnet: "10.2.0.0/16"
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
  • serviceSubnet: the Service IP range
  • podSubnet: the Pod IP range
  • controlPlaneEndpoint: the control-plane IP address or domain name
  • imageRepository: the image registry; the default gcr.io is unreachable here, so point at the Aliyun mirror (you can pre-pull the images as shown below)
  • mode: run kube-proxy in ipvs mode (the default is iptables)
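
Optionally, pre-pull the control-plane images before initializing; this speeds up kubeadm init and surfaces any registry problems early:

kubeadm config images pull --config=kubeadm-config.yaml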

3.2 Run the Initialization

kubeadm init --config=kubeadm-config.yaml --upload-certs

If you see output like the following, initialization succeeded:

...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join apiserver.yoho8:6443 --token ye6lec.w890f6fbdhi0pv90 \
    --discovery-token-ca-cert-hash sha256:5e0b1fb0f0523f7e2e5d29f4a1f70be0a2f00b4ca91c0401747cdb6094fd923e \
    --control-plane --certificate-key 75cae1eb13c0f6eff5035707abc481362e6027690e5c2e6516e24a53e998106e

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join apiserver.yoho8:6443 --token ye6lec.w890f6fbdhi0pv90 \
    --discovery-token-ca-cert-hash sha256:5e0b1fb0f0523f7e2e5d29f4a1f70be0a2f00b4ca91c0401747cdb6094fd923e

Following the prompts, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join the other nodes

Run on kube-node002 and kube-node003:

kubeadm join apiserver.yoho8:6443 --token ye6lec.w890f6fbdhi0pv90 \
    --discovery-token-ca-cert-hash sha256:5e0b1fb0f0523f7e2e5d29f4a1f70be0a2f00b4ca91c0401747cdb6094fd923e

# The end of the init output above is the join command for worker nodes; for another control-plane node, use the earlier join command with --control-plane. If you forgot to save it, no problem: the command below prints a fresh join command (newly created tokens expire after 24 hours by default).

$ kubeadm token create --print-join-command
kubeadm join apiserver.yoho8:6443 --token ch5cb3.qr6q2ad9ge1h6z7o     --discovery-token-ca-cert-hash sha256:5e0b1fb0f0523f7e2e5d29f4a1f70be0a2f00b4ca91c0401747cdb6094fd923e

Remove the master taint

# Remove the taint
kubectl taint nodes kube-node001 node-role.kubernetes.io/master-

# Add the taint back (NoSchedule: new Pods will no longer be scheduled onto this node; Pods already running there are unaffected)
kubectl taint node kube-node001 node-role.kubernetes.io/master=:NoSchedule
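
To check the node's current taints (after the removal this shows <none>; after adding it back, the NoSchedule taint):

$ kubectl describe node kube-node001 | grep -i taint
Taints:             node-role.kubernetes.io/master:NoSchedule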

If cluster initialization runs into problems, clean up with the following commands:

kubeadm reset
rm -rf /var/lib/cni/
rm -rf ~/.kube
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear

3.3 Install a Network Plugin (pick one of the two)

Here we choose Calico.

3.3.1 Install Flannel

export POD_SUBNET=10.2.0.0/16
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i "s#10.244.0.0/16#${POD_SUBNET}#" kube-flannel.yml
sed -i 's/quay.io/quay.azk8s.cn/g' kube-flannel.yml
kubectl apply -f  kube-flannel.yml

GitHub: https://github.com/coreos/flannel

3.3.2 Install Calico

export POD_SUBNET=10.2.0.0/16
wget https://docs.projectcalico.org/v3.10/manifests/calico.yaml
sed -i "s#192.168.0.0/16#${POD_SUBNET}#" calico.yaml
kubectl apply -f calico.yaml

Reference: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/

3.4 Check the Cluster

Be patient; this can take a little while.

$ kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7b9dcdcc5-tphr9   1/1     Running   0          24m
kube-system   calico-node-kw2k6                         1/1     Running   0          15m
kube-system   calico-node-pnzks                         1/1     Running   0          24m
kube-system   calico-node-zgr2x                         1/1     Running   0          12s
kube-system   coredns-58cc8c89f4-ll9mn                  1/1     Running   0          28m
kube-system   coredns-58cc8c89f4-p7ckb                  1/1     Running   0          28m
kube-system   etcd-kube-node001                         1/1     Running   0          27m
kube-system   kube-apiserver-kube-node001               1/1     Running   0          27m
kube-system   kube-controller-manager-kube-node001      1/1     Running   0          27m
kube-system   kube-proxy-jtjv4                          1/1     Running   0          14m
kube-system   kube-proxy-md7f5                          1/1     Running   0          15m
kube-system   kube-proxy-tjrd4                          1/1     Running   0          28m
kube-system   kube-scheduler-kube-node001               1/1     Running   0          27m
$ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
kube-node001   Ready    master   18m     v1.16.1
kube-node002   Ready    <none>   5m31s   v1.16.1
kube-node003   Ready    <none>   4m44s   v1.16.1

Check whether IPVS was enabled successfully:

$ kubectl logs kube-proxy-jtjv4 -n kube-system
I1117 11:14:14.892257       1 node.go:135] Successfully retrieved node IP: 10.0.83.42
I1117 11:14:14.892304       1 server_others.go:176] Using ipvs Proxier.
W1117 11:14:14.892489       1 proxier.go:420] IPVS scheduler not specified, use rr by default
..

The log line "Using ipvs Proxier" confirms that IPVS mode is enabled.
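
You can also inspect the IPVS rule table directly on a node. With IPVS mode active, ipvsadm lists one virtual server per Service; for example, the kubernetes Service (10.1.0.1:443 in our serviceSubnet) should forward to the apiserver. The output below is trimmed to that Service, and the counters will differ on your cluster:

$ ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr
  -> 10.0.0.11:6443               Masq    1      0          0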

Test DNS

# Start busyboxplus
kubectl run -it --rm --restart=Never --image=radial/busyboxplus:curl --generator=run-pod/v1 curl

# Inside the pod, run nslookup kubernetes.default and confirm it resolves
[ root@curl-66959f6557-dfztk:/ ]$ nslookup kubernetes.default
Server:    10.1.0.10
Address 1: 10.1.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.1.0.1 kubernetes.default.svc.cluster.local

3.5 Removing a Node from the Cluster

Example: remove kube-node002.

On the master, run:

$ kubectl drain kube-node002 --delete-local-data --force --ignore-daemonsets
$ kubectl delete nodes kube-node002

On kube-node002, run:

$ kubeadm reset
$ rm -rf /var/lib/cni/

3.6 Adding a Roles Label to a Node

View the nodes' labels:

$ kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
kube-node001   Ready    master   54m   v1.16.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node001,kubernetes.io/os=linux,node-role.kubernetes.io/master=
kube-node002   Ready    <none>   41m   v1.16.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node002,kubernetes.io/os=linux
kube-node003   Ready    <none>   40m   v1.16.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kube-node003,kubernetes.io/os=linux

Add labels:

$ kubectl label nodes kube-node002 node-role.kubernetes.io/node2=
node/kube-node002 labeled
$ kubectl label nodes kube-node003 node-role.kubernetes.io/node3=
node/kube-node003 labeled

Check the result:

$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
kube-node001   Ready    master   58m   v1.16.1
kube-node002   Ready    node2    45m   v1.16.1
kube-node003   Ready    node3    44m   v1.16.1

Delete a label

Just append a - to the label key:

$ kubectl label nodes kube-node002 node-role.kubernetes.io/node2-
node/kube-node002 labeled
$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
kube-node001   Ready    master   59m   v1.16.1
kube-node002   Ready    <none>   46m   v1.16.1
kube-node003   Ready    node3    45m   v1.16.1

4. Install and Deploy Helm 3.0

  • Helm Charts is a subproject of Kubernetes that provides a package-management platform for Kubernetes. Helm helps you manage collections of Kubernetes applications, and Helm Charts let you define, install, and upgrade even the most complex Kubernetes application stacks.

Install:

$ wget https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
$ tar xf helm-v3.0.0-linux-amd64.tar.gz
$ mv linux-amd64/helm /usr/local/bin/
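
Verify the installation and, optionally, add a chart repository. The stable repository URL below was the one in use when Helm 3.0 shipped, and the exact version string is illustrative:

$ helm version --short
v3.0.0+ge29ce2a
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com
"stable" has been added to your repositories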

5. Install ingress-nginx

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
sed -i 's/quay.io/quay.azk8s.cn/g' mandatory.yaml
kubectl apply -f mandatory.yaml

Deploy the ingress-nginx Service:

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
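
With the controller and its NodePort Service running, a minimal Ingress looks like the sketch below. my-ingress, my.example.com, and my-service are placeholders for your own names; networking.k8s.io/v1beta1 is the Ingress API version current in Kubernetes 1.16:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Tell the ingress-nginx controller to handle this Ingress
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: my.example.com           # placeholder domain
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service  # placeholder Service in the same namespace
          servicePort: 80
EOF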

Reference: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

