Quickly Set Up a k8s Cluster (kubeadm Method)

2020-09-03


Environment:
(at least 3 machines)

  1. k8s-master  CentOS 7.6  192.168.191.133  4 GB RAM, 2 CPUs, 40 GB disk
  2. k8s-node1   CentOS 7.6  192.168.191.134  4 GB RAM, 2 CPUs, 40 GB disk
  3. k8s-node2   CentOS 7.6  192.168.191.135  4 GB RAM, 2 CPUs, 40 GB disk

Deploying Kubernetes with kubeadm

1. Run the following on the master and on every node
#Set the hostname
[root@localhost ~]# hostnamectl set-hostname k8s-master        #192.168.191.133
[root@localhost ~]# hostnamectl set-hostname k8s-node1         #192.168.191.134
[root@localhost ~]# hostnamectl set-hostname k8s-node2         #192.168.191.135

#關(guān)閉防火墻
[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld

#Disable SELinux
[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0

#Disable swap
[root@k8s-master ~]# swapoff -a          #temporary
[root@k8s-master ~]# vim /etc/fstab        #permanent: comment out the line containing swap
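The fstab edit can also be scripted with sed instead of vim; a minimal sketch on a throwaway copy (the file contents below are a hypothetical example so the demo cannot touch the real fstab):

```shell
# Hypothetical fstab contents, written to /tmp for the demo.
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF

# Comment out every line that mentions swap (disables swap at boot).
sed -i '/swap/ s/^/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```

On the real machine the equivalent would be `sed -i.bak '/swap/ s/^/#/' /etc/fstab`, which also keeps a `.bak` backup of the original file.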

#Map hostnames to IPs
[root@k8s-master ~]# vim /etc/hosts
192.168.191.133  k8s-master
192.168.191.134  k8s-node1
192.168.191.135  k8s-node2

#Pass bridged IPv4 traffic to iptables chains (for k8s 1.14)
[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8s-master ~]# sysctl --system
(If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded; load it first with modprobe br_netfilter.)

Appendix: sysctl tuning and traffic forwarding for k8s 1.17
vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
(Note: net.ipv4.tcp_tw_recycle was removed in Linux kernel 4.12, so sysctl will reject that key on newer kernels.)

2. Install Docker, kubeadm, and kubelet on all nodes

docker 18.06.1
kubeadm 1.14.0
kubelet 1.14.0
kubectl 1.14.0

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
docker --version

Add the Alibaba Cloud Kubernetes yum repo

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl

#Versions change frequently, so pin the version number; otherwise the latest is installed (all nodes must run the same version)
yum -y install kubeadm-1.14.0
yum -y install kubelet-1.14.0
yum -y install kubectl-1.14.0
systemctl start kubelet && systemctl enable kubelet
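All three packages must be the same version on every node, or the cluster can end up in an inconsistent state. A tiny hypothetical helper for comparing the version strings collected from each node (e.g. as reported by `kubeadm version -o short`):

```shell
# Hypothetical helper: succeed only if all given version strings are identical.
same_version() {
  first="$1"; shift
  for v in "$@"; do
    [ "$v" = "$first" ] || return 1
  done
}

same_version "v1.14.0" "v1.14.0" "v1.14.0" && echo "versions match"
same_version "v1.14.0" "v1.15.0" || echo "version mismatch"
```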

3. Deploy the k8s master

Run the following on the master

#由于默認(rèn)拉取的鏡像地址為k8s.gcr.io(國(guó)內(nèi)無(wú)法訪問(wèn)),因此指向阿里云的鏡像倉(cāng)庫(kù)
[root@k8s-master ~]# kubeadm init \
>   --apiserver-advertise-address=192.168.191.133 \
>   --image-repository registry.aliyuncs.com/google_containers \
>   --kubernetes-version v1.14.0 \
>   --service-cidr=10.1.0.0/16 \
>   --pod-network-cidr=10.244.0.0/16

When the command finishes it prints instructions like the following. Don't clear the screen yet; copy them into a text file, since they include the kubeadm join command needed later.

(screenshot: kubeadm init success output)

Continue according to those instructions

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes

4. Install the Pod network add-on (CNI: flannel)
#Pull the images via the official yaml (slow, and this URL seems to be dead now)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

#The following speeds things up
Open https://github.com/mrlxxx/kube-flannel.yml in a browser and clone kube-flannel.yml (downloading the zip archive also works)
[root@k8s-master ~]# cd kube-flannel.yml && ls        #the cloned repo directory is itself named kube-flannel.yml
[root@k8s-master ~]# grep image kube-flannel.yml      #list the images the yaml will pull
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm64
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-arm
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        image: quay.io/coreos/flannel:v0.12.0-s390x
        image: quay.io/coreos/flannel:v0.12.0-s390x

#Pull the images manually
docker pull quay.io/coreos/flannel:v0.12.0-amd64
docker pull quay.io/coreos/flannel:v0.12.0-arm64
docker pull quay.io/coreos/flannel:v0.12.0-arm
docker pull quay.io/coreos/flannel:v0.12.0-ppc64le
docker pull quay.io/coreos/flannel:v0.12.0-s390x

docker images |grep coreos      #confirm the required images were pulled
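Rather than copying each tag by hand, the unique image list can be extracted from the manifest itself; a sketch on a hypothetical fragment of kube-flannel.yml (the real file lists each tag twice, once for the initContainer and once for the container):

```shell
# Hypothetical manifest fragment for the demo.
cat > /tmp/flannel-images.yml << 'EOF'
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-amd64
        image: quay.io/coreos/flannel:v0.12.0-arm64
EOF

# Print each image once; on the real file, pipe this into `xargs -n1 docker pull`.
awk '/image:/ {print $2}' /tmp/flannel-images.yml | sort -u
```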

kubectl apply -f kube-flannel.yml        #apply the kube-flannel.yml configuration

kubectl get pods -n kube-system        #see which pods were created

5. Join the nodes to the cluster

Run the same steps on node1 and node2
(have the kubeadm join ... command printed by kubeadm init on the master ready; note: each cluster's token is unique)

kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 \
    --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d
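The bootstrap token printed by kubeadm init expires after 24 hours by default; if it has lapsed, running `kubeadm token create --print-join-command` on the master prints a fresh join command. For scripting joins across several nodes, the token and CA hash can also be pulled out of a saved join line (assuming GNU grep with -P support):

```shell
# Parse the token and discovery hash out of a saved `kubeadm join` line
# (the line below is the one produced by this cluster's kubeadm init).
join_cmd='kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d'

token=$(echo "$join_cmd" | grep -oP -- '--token \K\S+')
ca_hash=$(echo "$join_cmd" | grep -oP -- '--discovery-token-ca-cert-hash \K\S+')
echo "token=$token"
echo "hash=$ca_hash"
```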
#node1 joins the cluster (same steps on node2)
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.14.0

docker pull quay.io/coreos/flannel:v0.12.0-amd64       #pull the flannel network add-on image; the version must match the master's

docker pull registry.aliyuncs.com/google_containers/pause:3.1

kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0 \
    --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d

Output like the following means the node joined the cluster successfully

[root@k8s-node1 ~]# kubeadm join 192.168.191.133:6443 --token xvnp3x.pl6i8ikcdoixkaf0     --discovery-token-ca-cert-hash sha256:9f90161043001c0c75fac7d61590734f844ee507526e948f3647d7b9cfc1362d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the cluster's node status on the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   24h   v1.14.0
k8s-node1    Ready    <none>   56m   v1.14.0
k8s-node2    Ready    <none>   16s   v1.14.0

If pulling the network add-on image is slow on a node, save the image on the master, copy it to the node, and load it there

#On the master
[root@k8s-master ~]# docker images        #list all images
[root@k8s-master ~]# docker save -o flannel-v0.12.0-amd64.tar quay.io/coreos/flannel:v0.12.0-amd64      #save the image to a tarball
[root@k8s-master ~]# scp flannel-v0.12.0-amd64.tar k8s-node2:/root/        #copy the tarball to the node
#On the node
[root@k8s-node2 ~]# docker load < flannel-v0.12.0-amd64.tar      #load the image
[root@k8s-node2 ~]# docker images
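When several images need to travel to the nodes, the save step can be looped. The hypothetical sketch below only prints the docker save commands (so it is safe to run anywhere); dropping the echo would execute them:

```shell
# Hypothetical image list; tarball names are derived by flattening '/' and ':'.
images='quay.io/coreos/flannel:v0.12.0-amd64
registry.aliyuncs.com/google_containers/pause:3.1'

for img in $images; do
  tarball="$(echo "$img" | tr '/:' '__').tar"
  echo "docker save -o $tarball $img"
done
```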

If a node fails to join the cluster, see this separate write-up: http://www.itdecent.cn/p/6a38c100e3d1

6. Test and use the cluster
[root@k8s-master ~]# kubectl create deployment nginx --image=daocloud.io/library/nginx       #create a deployment that pulls the nginx image
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort     #expose nginx via a NodePort service
service/nginx exposed
[root@k8s-master ~]# kubectl get pod,svc
[root@k8s-master ~]# kubectl get pod,svc -o wide      #see which node nginx runs on and its access port
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE   READINESS GATES
pod/nginx-5f965696dd-q5jt5   1/1     Running   0          10m   10.244.1.2   k8s-node1   <none>           <none>

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE     SELECTOR
service/kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP        25h     <none>
service/nginx        NodePort    10.1.128.102   <none>        80:31438/TCP   5m33s   app=nginx
#Browse to <any node IP>:31438; if the nginx welcome page loads from every node, the cluster is up and usable
[root@k8s-master ~]# kubectl get pod nginx-5f965696dd-q5jt5 -o yaml      #inspect the nginx pod's yaml
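The NodePort in the `80:31438/TCP` column can also be extracted for scripting health checks; a sketch using the service line captured above (assuming GNU grep with -P support):

```shell
# Pull the node port out of a captured `kubectl get svc` line.
svc_line='service/nginx        NodePort    10.1.128.102   <none>        80:31438/TCP   5m33s   app=nginx'
nodeport=$(echo "$svc_line" | grep -oP '\d+:\K\d+(?=/TCP)')
echo "$nodeport"

# Each node could then be probed with something like:
#   curl -s -o /dev/null -w '%{http_code}\n' "http://192.168.191.133:$nodeport/"
```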

7. Install the Dashboard (official version)

On the master
1. Prepare the Dashboard image
[root@k8s-master ~]# docker pull tigerfive/kubernetes-dashboard-amd64:v1.10.1
(Save this image and copy it to the other nodes too, so that if the master fails and the service moves to another machine, it won't have to re-pull the image slowly.)
2. Prepare the kubernetes-dashboard.yaml file
Copy the file contents from
https://github.com/kubernetes/dashboard/blob/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
[root@k8s-master ~]# vim kubernetes-dashboard.yaml        #use :set paste, press i, then paste the contents in
Make the following changes

[root@k8s-master ~]# grep image kubernetes-dashboard.yaml 
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
#change the default image to: image: tigerfive/kubernetes-dashboard-amd64:v1.10.1

In the Dashboard Service section, add type: NodePort and nodePort: 30001 so that it reads:

# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
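nodePort: 30001 works because NodePorts must fall inside the apiserver's service-node-port-range, which defaults to 30000-32767; a quick sanity check sketch for validating a port before applying a manifest:

```shell
# Check a port against the default NodePort range (30000-32767).
valid_nodeport() { [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]; }

valid_nodeport 30001 && echo "30001 is a valid NodePort"
valid_nodeport 8443  || echo "8443 is outside the NodePort range"
```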
[root@k8s-master ~]# ss -tunlp | grep 30001     #confirm port 30001 is not in use
[root@k8s-master ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master ~]# kubectl get pod,deployment,svc -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
pod/coredns-8686dcc4fd-bcqsm               1/1     Running   0          26h
pod/coredns-8686dcc4fd-twm87               1/1     Running   0          26h
pod/etcd-k8s-master                        1/1     Running   0          26h
pod/kube-apiserver-k8s-master              1/1     Running   0          26h
pod/kube-controller-manager-k8s-master     1/1     Running   0          26h
pod/kube-flannel-ds-amd64-cc54r            1/1     Running   0          120m
pod/kube-flannel-ds-amd64-mpqv8            1/1     Running   0          176m
pod/kube-flannel-ds-amd64-whlnx            1/1     Running   0          7h43m
pod/kube-proxy-6dhq8                       1/1     Running   0          26h
pod/kube-proxy-cmkbm                       1/1     Running   0          176m
pod/kube-proxy-f9lk5                       1/1     Running   0          120m
pod/kube-scheduler-k8s-master              1/1     Running   0          26h
pod/kubernetes-dashboard-5bbc9b8dd-t7d96   1/1     Running   0          6m16s

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.extensions/coredns                2/2     2            2           26h
deployment.extensions/kubernetes-dashboard   1/1     1            1           6m16s

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns               ClusterIP   10.1.0.10      <none>        53/UDP,53/TCP,9153/TCP   26h
service/kubernetes-dashboard   NodePort    10.1.211.192   <none>        443:30001/TCP            6m16s

Open https://192.168.191.133:30001/ in Firefox; the Dashboard login page appears

(screenshot: Dashboard login page)

On the master, create a service account and bind it to the built-in cluster-admin cluster role

#Obtain the login token with the following commands
[root@k8s-master ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created
[root@k8s-master ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
[root@k8s-master ~]# kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Name:         dashboard-admin-token-7692f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: e6acd0af-ee8e-11ea-b326-000c29e3d07b

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tNzY5MmYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTZhY2QwYWYtZWU4ZS0xMWVhLWIzMjYtMDAwYzI5ZTNkMDdiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.z65xe57EqDldRijPyux75RsW11oSotEMuH4SchFJt_FtyxmVZcr_WdBbzZd9GwIbOhAFj-Qd5UobcStGPNT1kBuGnfp7fWScFMNXsTTScS_1Oko4hDhqLDCuWdktpwEAAXmE7G5bptrk8GIEiQuj3KFNVh7Oknpl1tTnyeRfHNJO41RKHyV93y46wrpx0z9p8TdEECzNi0Sv73mAEyu1whQ0-btOmyvt1WcRSqbYQfVgRxrR2L0Ri7Cvba1DQDVkp0SZ8FF3ho5cY0whs2ADkNKF43Y-mWppp4l-tul5mh9pG4uSVLPEM9sApybQVlXY8q-6ZTBrU5oqRxRB1GX93g
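The token above is a JWT: three base64url segments separated by dots, where the middle segment is plain JSON holding the service-account claims. A round-trip sketch with a hypothetical claims object (the same decode steps apply to the real token's middle segment):

```shell
# Hypothetical claims payload, base64url-encoded the way a JWT payload is
# (standard base64, then '+/' swapped for '-_' and '=' padding stripped).
claims='{"sub":"system:serviceaccount:kube-system:dashboard-admin"}'
payload=$(printf '%s' "$claims" | base64 -w0 | tr '+/' '-_' | tr -d '=')

# Decode: restore the standard base64 alphabet and the stripped '=' padding.
p=$(printf '%s' "$payload" | tr '_-' '/+')
while [ $(( ${#p} % 4 )) -ne 0 ]; do p="${p}="; done
printf '%s' "$p" | base64 -d
echo
```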

Log in to the Dashboard with the token

(screenshot: Dashboard token login)

Done

(screenshot: Dashboard overview)

Installing Kuboard

Kuboard is another free, open-source graphical management UI for k8s, better suited to displaying resources in a microservice architecture.
See the official documentation for the detailed installation steps: https://kuboard.cn/install/install-dashboard.html#%E5%AE%89%E8%A3%85kuboard

Please credit the source when reposting, thanks~~

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請(qǐng)聯(lián)系作者
【社區(qū)內(nèi)容提示】社區(qū)部分內(nèi)容疑似由AI輔助生成,瀏覽時(shí)請(qǐng)結(jié)合常識(shí)與多方信息審慎甄別。
平臺(tái)聲明:文章內(nèi)容(如有圖片或視頻亦包括在內(nèi))由作者上傳并發(fā)布,文章內(nèi)容僅代表作者本人觀點(diǎn),簡(jiǎn)書(shū)系信息發(fā)布平臺(tái),僅提供信息存儲(chǔ)服務(wù)。
禁止轉(zhuǎn)載,如需轉(zhuǎn)載請(qǐng)通過(guò)簡(jiǎn)信或評(píng)論聯(lián)系作者。

友情鏈接更多精彩內(nèi)容