The deployment covers five parts: environment information, node preparation, master setup, node setup, and verification. The detailed steps are as follows:
I. Environment information
IP address      Role     CPU    Memory   Hostname   Disks      OS
192.168.0.125   master   >=2c   >=2G     master     sda, sdb   CentOS 7.5
192.168.0.126   node     >=2c   >=2G     node1      sda, sdb   CentOS 7.5
192.168.0.127   node     >=2c   >=2G     node2      sda, sdb   CentOS 7.5
II. Node preparation (run on every node unless noted otherwise)
1. Set the hostname. The management node is named master and the worker nodes are named node1 and node2; run the matching command on each machine.
hostnamectl set-hostname master   # on 192.168.0.125
hostnamectl set-hostname node1    # on 192.168.0.126
hostnamectl set-hostname node2    # on 192.168.0.127
2. Edit the /etc/hosts file and add name resolution entries for all nodes.
cat <<EOF >>/etc/hosts
192.168.0.125 master
192.168.0.126 node1
192.168.0.127 node2
EOF
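Optionally, a quick check that the names resolve can be run from any node once the entries are in place (the hostnames are the ones defined above):
ping -c 1 master
ping -c 1 node1
ping -c 1 node2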
3. Disable the firewall, SELinux, and swap.
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
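A quick, optional sanity check that these changes took effect:
systemctl is-active firewalld   # should print "inactive"
getenforce                      # should print "Permissive" (Disabled after a reboot)
free -m | grep -i swap          # the Swap line should show 0 total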
4. Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains.
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
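If sysctl --system complains that the net.bridge.bridge-nf-call-* keys do not exist, the br_netfilter kernel module is most likely not loaded yet; loading it and re-applying the file usually resolves this:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf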
5. Configure domestic (China mirror) yum repositories.
yum install -y wget
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all && yum makecache
6. Configure a domestic Kubernetes yum repository (Aliyun mirror).
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
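Optionally, confirm the new repository is enabled before installing from it:
yum repolist enabled | grep kubernetes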
7. Configure the Docker repository.
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
8. Install Docker.
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker version
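Pulling application images from Docker Hub can also be slow from mainland China. A minimal sketch of configuring a registry mirror in /etc/docker/daemon.json follows; the mirror URL is only a placeholder and should be replaced with an accelerator endpoint you actually have (for example, one issued by your Aliyun account):
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://<your-mirror-id>.mirror.aliyuncs.com"]
}
EOF
# replace <your-mirror-id> with your own accelerator address, then restart Docker
systemctl daemon-reload && systemctl restart docker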
9. Install kubeadm, kubelet, and kubectl (run yum list kubelet --showduplicates to see the available versions).
yum install -y kubelet-1.13.0-0 kubectl-1.13.0-0
yum install -y kubeadm-1.13.0-0
systemctl enable kubelet
III. Master setup
1. Initialize the Kubernetes cluster on the master node.
kubeadm init --kubernetes-version=1.13.0 \
--apiserver-advertise-address=192.168.0.125 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
This defines the Pod network CIDR as 10.244.0.0/16 (the range expected by flannel), and the API server advertise address is the master's own IP address.
This step is critical: by default kubeadm pulls its images from k8s.gcr.io, which is unreachable from mainland China, so --image-repository is used to point at the Aliyun registry instead. Many first-time deployments get stuck at this step and cannot proceed.
When initialization succeeds, output like the following is returned. Record the kubeadm join command at the end of the output; it must be run later on the other nodes to join them to the cluster.
kubeadm join 192.168.0.125:6443 --token 6mvbri.80a7bfcda1a8gn94 --discovery-token-ca-cert-hash sha256:906f7f14bbb9dfdb675fdc76137ea76f609816ef9ceb0aedc5b2f46fc8741c77
2. Configure the kubectl tool.
mkdir -p /root/.kube
cp /etc/kubernetes/admin.conf /root/.kube/config
kubectl get nodes
kubectl get cs
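The commands above assume you are working as root. For a regular user, kubeadm init prints the equivalent steps, which also fix up file ownership:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config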
3. Deploy the flannel network (pulling the quay.io/coreos/flannel:v0.11.0-amd64 image may fail, which blocks the deployment; see the workaround after the command).
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
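If the flannel image cannot be pulled on some nodes, one workaround (a sketch, assuming you have access to some machine that can reach quay.io) is to export the image there and import it on every cluster node:
# on a machine that can reach quay.io
docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker save quay.io/coreos/flannel:v0.11.0-amd64 -o flannel-v0.11.0-amd64.tar
# copy the tar file to each cluster node, then load it locally
docker load -i flannel-v0.11.0-amd64.tar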
IV. Node setup
1. Join the worker nodes (node1 and node2) to the Kubernetes cluster.
kubeadm join 192.168.0.125:6443 --token 6mvbri.80a7bfcda1a8gn94 --discovery-token-ca-cert-hash sha256:906f7f14bbb9dfdb675fdc76137ea76f609816ef9ceb0aedc5b2f46fc8741c77
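The token embedded in the join command expires after 24 hours by default. If it has expired by the time a node joins, a fresh join command can be generated on the master:
kubeadm token create --print-join-command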
V. Verification
1. On the master node, check the cluster node status.
kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 45m v1.13.0
node1 Ready <none> 10m v1.13.0
node2 Ready <none> 10m v1.13.0
2. On the master node, check the system pods in the kube-system namespace; all of them should be Running.
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78d4cf999f-cvxdq 1/1 Running 0 44m
coredns-78d4cf999f-tprsr 1/1 Running 0 44m
etcd-master 1/1 Running 1 48m
kube-apiserver-master 1/1 Running 1 48m
kube-controller-manager-master 1/1 Running 1 48m
kube-flannel-ds-amd64-c25nl 1/1 Running 0 14m
kube-flannel-ds-amd64-crkxk 1/1 Running 0 14m
kube-flannel-ds-amd64-s9s5s 1/1 Running 0 36m
kube-proxy-b7vpw 1/1 Running 0 14m
kube-proxy-hhcdf 1/1 Running 1 49m
kube-proxy-rl875 1/1 Running 0 14m
kube-scheduler-master 1/1 Running 1 48m
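As a final, optional end-to-end check of scheduling and networking, a simple test deployment can be created and exposed (a sketch; the nginx image and the resource names are arbitrary choices, not part of the original setup):
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
# the nginx pod should reach Running and the service should list a NodePort reachable from any node
kubectl get pods,svc -o wide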