Setting up a Kubernetes environment on Ubuntu 18.04 (multi-master cluster, v1.13)

Preparation

Set the hostname and the hosts file

sudo vim /etc/cloud/cloud.cfg — set preserve_hostname to true so cloud-init does not overwrite the hostname on reboot

sudo vim /etc/hostname — set the hostname

Edit the hosts file:

sudo vim /etc/hosts

192.168.1.49    cluster.kube.com # floating virtual IP (VIP)

192.168.1.50    master1          # master node 1

192.168.1.51    master2          # master node 2

Adjust kernel parameters

sudo vim /etc/sysctl.conf

net.ipv4.ip_forward = 1

net.ipv4.ip_nonlocal_bind = 1

sudo sysctl -p

sudo vim /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_nonlocal_bind = 1

net.ipv4.ip_forward = 1

vm.swappiness=0

sudo sysctl --system
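One step the sysctl settings above do not cover: the 1.13 kubelet refuses to start while swap is enabled, and vm.swappiness=0 alone does not disable it. A minimal sketch of disabling swap persistently; the sed is demonstrated on a sample file here so it can be inspected safely, and on a real node you would run the same commands against /etc/fstab:

```shell
#!/bin/sh
# On a real node:
#   sudo swapoff -a                                # disable swap immediately
#   sudo sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab  # comment out swap entries so it stays off after reboot
# Demonstrated below on a sample fstab:
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/sda1 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF
sed -i '/\sswap\s/s/^/#/' "$fstab"   # prefix '#' on lines whose type field is 'swap'
cat "$fstab"
rm -f "$fstab"
```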

1. Install and configure keepalived (the configuration differs slightly between the active node and the backups, as noted below)

(1) Install

sudo apt install -y keepalived

(2) Edit the configuration file

sudo vim /etc/keepalived/keepalived.conf

###################################################################################

! Configuration File for keepalived

global_defs {

   router_id LVS_DEVEL

}

vrrp_script check_haproxy {

    script "killall -0 haproxy"

    interval 3

    weight -2

    fall 10

    rise 2

}

vrrp_instance VI_1 {

    state BACKUP  # MASTER on the active node, BACKUP on the others

    interface ens33 # use the actual name of the local NIC

    virtual_router_id 51

    priority 250  # must differ on each node; highest wins the VIP

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 35f18af7190d51c9f7f78f37300a0cbd

    }

    virtual_ipaddress {

        192.168.1.49 # floating virtual IP; make sure it is not already in use

    }

    track_script {

        check_haproxy

    }

}

###################################################################################

(3) Enable, start, and check status

sudo systemctl enable keepalived.service && sudo systemctl start keepalived.service && sudo systemctl status keepalived.service

ip address show ens33
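Only one node should hold the VIP at any given time. A quick sketch for checking which role this node currently has, using the interface name and VIP from the configuration above:

```shell
#!/bin/sh
# Check whether this node currently owns the floating IP from keepalived.conf.
VIP="192.168.1.49"
IFACE="ens33"
if ip -4 addr show "$IFACE" 2>/dev/null | grep -q "inet ${VIP}/"; then
    echo "VIP ${VIP} is on ${IFACE}: this node is the active (MASTER) node"
else
    echo "VIP ${VIP} not present: this node is a BACKUP (or keepalived is down)"
fi
```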

2. Install and configure haproxy (identical on the active node and the backups)

(1) Install

sudo apt install -y haproxy

(2) Edit the configuration

sudo vim /etc/haproxy/haproxy.cfg
###################################################################################

#---------------------------------------------------------------------

# kubernetes apiserver frontend which proxies to the backends

#---------------------------------------------------------------------

frontend kubernetes-apiserver

    mode                 tcp

    bind                 *:16443

    option               tcplog

    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------

# round robin balancing between the various backends

#---------------------------------------------------------------------

backend kubernetes-apiserver

    mode        tcp

    balance     roundrobin

    server  master1 192.168.1.50:6443 check

    server  master2 192.168.1.51:6443 check

#---------------------------------------------------------------------

# collection haproxy statistics message

#---------------------------------------------------------------------

listen stats

    bind                 *:1080

    stats auth           admin:awesomePassword

    stats refresh        5s

    stats realm          HAProxy\ Statistics

    stats uri            /admin?stats

###################################################################################

(3) Enable, start, and check status

sudo systemctl enable haproxy.service && sudo systemctl start haproxy.service && sudo systemctl status haproxy.service && sudo ss -lnt | grep -E "16443|1080"
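Note that the fragment above is appended to the stock /etc/haproxy/haproxy.cfg shipped by the Ubuntu package, which already contains global and defaults sections; the frontend/backend above rely on the timeouts defined there. If you start from an empty file instead, a minimal defaults section along these lines is also required (the timeout values here are illustrative, not from the original article):

```
defaults
    mode                 tcp
    log                  global
    option               tcplog
    timeout connect      5s
    timeout client       50s
    timeout server       50s
```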

3. Install kubelet

It is best to pin a specific version; see the separate note: installing a specific version of Kubernetes.

4. Initialize the cluster (on the first master)

Use kubeadm config print init-defaults > kubeadm-config.yaml to print the default configuration, then adjust it for your environment.

Use the following file contents: kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.4
controlPlaneEndpoint: "192.168.1.49:16443"
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: "192.168.0.0/16"

**Run the initialization**

sudo kubeadm init --config kubeadm-config.yaml

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get pods --all-namespaces

5. Create the pod network

No sudo is needed for kubectl here: it reads the kubeconfig copied to $HOME/.kube above, and running it through sudo would look for root's kubeconfig instead.

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml

kubectl apply -f https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Or download the files locally first:

kubectl apply -f rbac-kdd.yaml

kubectl apply -f calico.yaml # optionally change the default network segment 192.168.0.0/16 first; it must match podSubnet above
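If podSubnet in kubeadm-config.yaml is changed from the default, the CALICO_IPV4POOL_CIDR value inside calico.yaml must be changed to match before applying it. A sketch of doing this with sed, demonstrated on a sample fragment (the 10.244.0.0/16 value is hypothetical); on a real node run the same sed against the downloaded calico.yaml:

```shell
#!/bin/sh
# Keep calico.yaml's pool CIDR in sync with kubeadm's podSubnet.
POD_CIDR="10.244.0.0/16"   # hypothetical value; must equal podSubnet in kubeadm-config.yaml
f=$(mktemp)
cat > "$f" <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF
sed -i "s|192.168.0.0/16|${POD_CIDR}|" "$f"   # replace the default pool CIDR
cat "$f"
rm -f "$f"
```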

6. Join the other masters to the cluster

(1) Copy the certificates and config files to the other master machines

USER=root

CONTROL_PLANE_IPS="master2 master3"  # space-separated so the for-loop iterates over each host

for host in ${CONTROL_PLANE_IPS}; do

    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"

    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/

    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/

    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/

    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/

    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/

done

(2) Join the masters to the cluster

kubeadm join cluster.kube.com:16443 --token rhs7lq.hbnxtghe8176kbas --discovery-token-ca-cert-hash sha256:7dcd41bc338235780a4b200ee066e08392ea6f1bf0c25cd93c5295ff7c05512f --experimental-control-plane

7. Join worker nodes to the cluster

kubeadm join 192.168.1.50:6443 --token 63wyoi.svz5o3snjvies9gm --discovery-token-ca-cert-hash sha256:717caa098d581b827bf129f73fc646403c77b937541539d5fdd580fe8c313b9f # for HA, joining through the VIP (cluster.kube.com:16443) is preferable so workers survive a master failure
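Bootstrap tokens expire after 24 hours by default; on a master, kubeadm token create --print-join-command prints a fresh join command. The --discovery-token-ca-cert-hash value can also be recomputed at any time from the cluster CA. A sketch of that hash pipeline, run here against a throwaway self-signed certificate so it is self-contained; on a real master the input is /etc/kubernetes/pki/ca.crt:

```shell
#!/bin/sh
# Recompute the sha256 discovery hash of the cluster CA public key.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt here.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$dir/ca.key" \
    -out "$dir/ca.crt" -days 1 -subj "/CN=kubernetes" 2>/dev/null
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${hash}"
rm -rf "$dir"
```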

Add a private registry

sudo vim /etc/docker/daemon.json

{ "insecure-registries":["192.168.1.60:5000"] }
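Docker only reads daemon.json at startup, so restart it afterwards (sudo systemctl restart docker); a JSON syntax error in this file keeps the daemon from starting at all, so it is worth validating first. A quick sanity check, shown on a sample file and assuming python3 is available:

```shell
#!/bin/sh
# Validate daemon.json before restarting Docker.
# On a real node point this at /etc/docker/daemon.json, then: sudo systemctl restart docker
f=$(mktemp)
printf '{ "insecure-registries": ["192.168.1.60:5000"] }\n' > "$f"
if python3 -m json.tool "$f" > /dev/null 2>&1; then
    echo "daemon.json: valid JSON"
else
    echo "daemon.json: INVALID JSON, fix it before restarting docker" >&2
fi
rm -f "$f"
```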

Allow forwarded traffic through the firewall

sudo iptables -P FORWARD ACCEPT
