Advanced k8s HA Cluster Setup (Part 1)

Preface

Having had a taste of k8s, we naturally want to run it in production to speed up business iteration. Production, however, comes with one hard requirement: the k8s cluster must have no single point of failure. In other words, the cluster has to be highly available. This article introduces two currently popular ways of making the k8s masters highly available.


Introduction

The first kind of k8s HA cluster is, in my view, better described as an active-standby cluster. It consists of three masters. keepalived runs on all three and provides a single VIP that serves as the apiserver entry point; keepalived priorities decide which master holds the VIP, so the VIP lands on the node with the highest priority. Nodes reach the apiserver through the VIP, and the three masters keep their state in sync through an etcd cluster.

Drawback: high availability here rests entirely on keepalived. Until the highest-priority node fails, all traffic steered by keepalived passes through the primary master; only when the primary master fails or goes down can the VIP move to one of the two standby masters. This concentrates load on the primary master while the two standbys may never serve a request, wasting resources.

Still, it does eliminate the single point of failure.

Below is the ideal HA architecture diagram.

[diagram: ideal k8s HA architecture]

And the architecture deployed in this article:

[diagram: this article's HA architecture]

The diagram above is taken from https://www.kubernetes.org.cn/3536.html

To summarize, the stack used in this article is:

keepalived + etcd + k8s master

keepalived provides the VIP that nodes use as the apiserver entry point; etcd must run as a highly available cluster to keep data in sync; on top of that sit standard k8s master deployments.


Installation Preparation

Node and software details

Software versions:

docker17.03.2-ce
socat-1.7.3.2-2.el7.x86_64
kubelet-1.10.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.10.0-0.x86_64
kubeadm-1.10.0-0.x86_64

All of the above were introduced, with download links, in my previous post on building a basic k8s cluster.

Environment configuration

systemctl stop firewalld
systemctl disable firewalld

Set each node's hostname, then map them all in /etc/hosts:

cat <<EOF > /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.1 master1
192.168.100.2 master2
192.168.100.3 master3
EOF

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
setenforce 0
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/ipv4/ip_forward
sysctl -w net.bridge.bridge-nf-call-iptables=1

Make the settings persistent by editing /etc/sysctl.conf (vim /etc/sysctl.conf) and adding:

net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1

sysctl -p
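With the settings written out, it helps to confirm nothing was missed. Below is a small hypothetical helper (not part of this guide's toolchain) that checks a sysctl.conf-style file for the three keys this section sets; it is demonstrated on a temporary sample file so it is self-contained, but on a real node you would point it at /etc/sysctl.conf.

```shell
# Sketch: verify a sysctl.conf-style file contains every key=1 setting
# required by this guide. Prints any missing key; returns non-zero if
# something is absent.
check_sysctl_file() {
  file=$1
  rc=0
  for key in net.ipv4.ip_forward \
             net.bridge.bridge-nf-call-iptables \
             net.bridge.bridge-nf-call-ip6tables; do
    grep -q "^${key}[[:space:]]*=[[:space:]]*1" "$file" || { echo "missing: $key"; rc=1; }
  done
  return $rc
}

# Demonstrate on a sample file (on a real node: check_sysctl_file /etc/sysctl.conf)
sample=$(mktemp)
printf '%s\n' 'net.ipv4.ip_forward=1' \
              'net.bridge.bridge-nf-call-iptables=1' \
              'net.bridge.bridge-nf-call-ip6tables=1' > "$sample"
check_sysctl_file "$sample" && echo "sysctl config OK"
rm -f "$sample"
```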


keepalived installation

Download libnfnetlink-devel-1.0.1-4.el7.x86_64.rpm, then:

wget http://www.keepalived.org/software/keepalived-1.4.3.tar.gz
yum install -y libnfnetlink-devel-1.0.1-4.el7.x86_64.rpm
yum -y install libnl libnl-devel
tar -xzvf keepalived-1.4.3.tar.gz
cd keepalived-1.4.3
./configure --prefix=/usr/local/keepalived  # check the build environment

If configure completes without errors, the environment is ready. If instead it fails with:

checking openssl/ssl.h usability... no
checking openssl/ssl.h presence... no
checking for openssl/ssl.h... no
configure: error:
  !!! OpenSSL is not properly installed on your system. !!!
  !!! Can not include OpenSSL headers files.            !!!

then install the openssl and openssl-devel packages and rerun configure:

yum install openssl openssl-devel
./configure --prefix=/usr/local/keepalived

make && make install
cp keepalived/etc/init.d/keepalived /etc/init.d/
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp keepalived/etc/sysconfig/keepalived /etc/sysconfig/
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
ps -aux | grep keepalived
chkconfig keepalived on

Check keepalived's state with systemctl status keepalived.

Repeat the steps above on all three masters until keepalived is installed everywhere.

Once installation is done, write the configuration files.

master1's keepalived.conf:

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33  # this node's physical NIC; check with `ip a`
    virtual_router_id 61
    priority 120  # highest on the primary, decreasing on each standby
    advert_int 1
    mcast_src_ip 192.168.100.1  # change to the local IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out the local IP
        #192.168.100.1
        192.168.100.2
        192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22  # VIP
    }
    track_script {
        #CheckK8sMaster  # best left commented out until k8s is deployed; otherwise the check fails and causes errors
    }
}
EOF
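The three keepalived.conf files in this section differ only in state, priority, mcast_src_ip, and the unicast_peer list (every master except the local one). As a sketch under the addresses and settings used in this guide (ens33, the same auth_pass, VIP 192.168.100.4/22), the loop below renders all three variants into a temporary directory, which avoids copy-paste drift between nodes:

```shell
# Render one keepalived.conf per master from a single template.
# master1 gets state MASTER / priority 120; each standby drops 10.
masters="192.168.100.1 192.168.100.2 192.168.100.3"
vip="192.168.100.4/22"
outdir=$(mktemp -d)

prio=120
idx=1
for ip in $masters; do
  if [ "$idx" -eq 1 ]; then state=MASTER; else state=BACKUP; fi
  # unicast_peer holds every master except the local one
  peers=$(for p in $masters; do if [ "$p" != "$ip" ]; then echo "        $p"; fi; done)
  cat > "$outdir/keepalived-master$idx.conf" <<EOF
global_defs {
    router_id LVS_k8s
}
vrrp_instance VI_1 {
    state $state
    interface ens33
    virtual_router_id 61
    priority $prio
    advert_int 1
    mcast_src_ip $ip
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
$peers
    }
    virtual_ipaddress {
        $vip
    }
}
EOF
  prio=$((prio - 10))
  idx=$((idx + 1))
done
ls "$outdir"
```

Copy each generated file to the matching node's /etc/keepalived/keepalived.conf (and re-add the vrrp_script/track_script block once k8s is up).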


master2's keepalived.conf:

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33  # this node's physical NIC; check with `ip a`
    virtual_router_id 61
    priority 110  # highest on the primary, decreasing on each standby
    advert_int 1
    mcast_src_ip 192.168.100.2  # change to the local IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out the local IP
        192.168.100.1
        #192.168.100.2
        192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22  # VIP
    }
    track_script {
        #CheckK8sMaster  # best left commented out until k8s is deployed; otherwise the check fails and causes errors
    }
}
EOF

master3's keepalived.conf:

cat >/etc/keepalived/keepalived.conf <<EOF
global_defs {
    router_id LVS_k8s
}
vrrp_script CheckK8sMaster {
    script "curl -k https://192.168.100.4:6443"
    interval 3
    timeout 9
    fall 2
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33  # this node's physical NIC; check with `ip a`
    virtual_router_id 61
    priority 100  # highest on the primary, decreasing on each standby
    advert_int 1
    mcast_src_ip 192.168.100.3  # change to the local IP
    nopreempt
    authentication {
        auth_type PASS
        auth_pass awzhXylxy.T
    }
    unicast_peer {
        # comment out the local IP
        192.168.100.1
        192.168.100.2
        #192.168.100.3
    }
    virtual_ipaddress {
        192.168.100.4/22  # VIP
    }
    track_script {
        #CheckK8sMaster  # best left commented out until k8s is deployed; otherwise the check fails and causes errors
    }
}
EOF

Start keepalived:

systemctl restart keepalived

Running ip a should now show, besides the node's own IP, an extra virtual IP.

You can also ping the VIP to verify it is live.
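A quick way to confirm where the VIP currently lives is to look for it in the ip addr output. The check below is a sketch run against captured sample output (the interface lines are illustrative) so it is self-contained; on a master you would pipe the real `ip -4 addr show` instead:

```shell
# Detect whether the VIP is bound on this node by grepping addr output.
sample='2: ens33    inet 192.168.100.1/24 brd 192.168.100.255 scope global ens33
2: ens33    inet 192.168.100.4/22 scope global ens33'
vip=192.168.100.4
if printf '%s\n' "$sample" | grep -q "inet ${vip}/"; then
  echo "VIP $vip is bound on this node"
else
  echo "VIP $vip not present here (it lives on the current MASTER)"
fi
```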

Installing etcd

1: Set up the cfssl toolchain

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
export PATH=/usr/local/bin:$PATH

2: Create the CA configuration files (the IPs below are the etcd nodes' IPs)

mkdir /root/ssl
cd /root/ssl

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes-Soulmate": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes-Soulmate",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.100.1",
    "192.168.100.2",
    "192.168.100.3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "shanghai",
      "L": "shanghai",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes-Soulmate etcd-csr.json | cfssljson -bare etcd

3: From master1, distribute the etcd certificates to master2 and master3

mkdir -p /etc/etcd/ssl
cp etcd.pem etcd-key.pem ca.pem /etc/etcd/ssl/
ssh -n master2 "mkdir -p /etc/etcd/ssl && exit"
ssh -n master3 "mkdir -p /etc/etcd/ssl && exit"
scp -r /etc/etcd/ssl/*.pem master2:/etc/etcd/ssl/
scp -r /etc/etcd/ssl/*.pem master3:/etc/etcd/ssl/

Download, unpack, and install etcd-v3.3.2-linux-amd64.tar.gz:

wget https://github.com/coreos/etcd/releases/download/v3.3.2/etcd-v3.3.2-linux-amd64.tar.gz
tar -xzvf etcd-v3.3.2-linux-amd64.tar.gz
cd etcd-v3.3.2-linux-amd64
cp etcd* /bin/

# verify the installation
etcd --version
etcd Version: 3.3.2
Git SHA: c9d46ab37
Go Version: go1.9.4
Go OS/Arch: linux/amd64

etcdctl --version
etcdctl version: 3.3.2
API version: 2

Create an etcd data directory on every master: mkdir -p /u03/etcd/

You can choose a different data path, but remember to change it in the unit files below as well.

master1

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/u03/etcd/
ExecStart=/usr/bin/etcd \
  --name master1 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.100.1:2380 \
  --listen-peer-urls https://192.168.100.1:2380 \
  --listen-client-urls https://192.168.100.1:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.1:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
  --initial-cluster-state new \
  --data-dir=/u03/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


master2

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/u03/etcd/
ExecStart=/usr/bin/etcd \
  --name master2 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.100.2:2380 \
  --listen-peer-urls https://192.168.100.2:2380 \
  --listen-client-urls https://192.168.100.2:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.2:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
  --initial-cluster-state new \
  --data-dir=/u03/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


master3

cat <<EOF >/etc/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/u03/etcd/
ExecStart=/usr/bin/etcd \
  --name master3 \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem \
  --peer-cert-file=/etc/etcd/ssl/etcd.pem \
  --peer-key-file=/etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls https://192.168.100.3:2380 \
  --listen-peer-urls https://192.168.100.3:2380 \
  --listen-client-urls https://192.168.100.3:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://192.168.100.3:2379 \
  --initial-cluster-token etcd-cluster-0 \
  --initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380 \
  --initial-cluster-state new \
  --data-dir=/u03/etcd/
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
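The three etcd unit files differ only in --name and the node's own IP; every listen/advertise URL on a node must carry that node's address, while the --initial-cluster list is identical everywhere. As a sketch, the loop below renders the per-node flag sets from one template so those IPs cannot drift apart:

```shell
# Render per-node etcd flags from one template (sketch; paths and the
# membership list come from this guide).
template='--name __NAME__ \
  --initial-advertise-peer-urls https://__IP__:2380 \
  --listen-peer-urls https://__IP__:2380 \
  --listen-client-urls https://__IP__:2379,http://127.0.0.1:2379 \
  --advertise-client-urls https://__IP__:2379 \
  --initial-cluster master1=https://192.168.100.1:2380,master2=https://192.168.100.2:2380,master3=https://192.168.100.3:2380'
outdir=$(mktemp -d)
i=1
for ip in 192.168.100.1 192.168.100.2 192.168.100.3; do
  printf '%s\n' "$template" \
    | sed -e "s/__NAME__/master$i/" -e "s/__IP__/$ip/g" \
    > "$outdir/etcd-flags-master$i.txt"
  i=$((i + 1))
done
grep -- '--advertise-client-urls' "$outdir/etcd-flags-master2.txt"
```

Paste each rendered flag set into the matching node's unit file (the surrounding [Unit]/[Service]/[Install] sections are the same on all three).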


Run the following on every master to start the etcd cluster:

cd /etc/systemd/system/
mv etcd.service /usr/lib/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd
systemctl status etcd

Check that the cluster is healthy with:

etcdctl --endpoints=https://192.168.100.1:2379,https://192.168.100.2:2379,https://192.168.100.3:2379 \
  --ca-file=/etc/etcd/ssl/ca.pem \
  --cert-file=/etc/etcd/ssl/etcd.pem \
  --key-file=/etc/etcd/ssl/etcd-key.pem cluster-health
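cluster-health prints one "member ... is healthy" line per member followed by a summary line. The sketch below counts healthy members from that output; it runs against captured sample text (the member IDs are made up) so it works without a live cluster — on a master, pipe the real etcdctl output in instead:

```shell
# Count healthy etcd members from cluster-health output.
health_output='member 1a2b is healthy: got healthy result from https://192.168.100.1:2379
member 3c4d is healthy: got healthy result from https://192.168.100.2:2379
member 5e6f is healthy: got healthy result from https://192.168.100.3:2379
cluster is healthy'

healthy=$(printf '%s\n' "$health_output" | grep -c '^member .* is healthy')
echo "healthy members: $healthy"
if [ "$healthy" -eq 3 ] && printf '%s\n' "$health_output" | grep -q '^cluster is healthy$'; then
  echo "etcd cluster OK"
fi
```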


With keepalived and etcd in place, we can deploy k8s.

Install docker and the k8s rpm packages, and load the k8s images — see my previous post on building a basic k8s cluster.

On all nodes, change the kubelet cgroup driver to match docker's:

sed -i -e 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Restart kubelet:

systemctl daemon-reload && systemctl restart kubelet

Initialize the cluster by first creating the cluster configuration file.

We use CoreDNS for in-cluster DNS resolution and canal for networking.

# generate a token
# keep the token — it is needed again later
kubeadm token generate
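kubeadm token generate returns a bootstrap token of the form [a-z0-9]{6}.[a-z0-9]{16}. The same value must be pasted into config.yaml and reused by every later kubeadm join, so a quick format check (a sketch, using the token that appears later in this guide) can catch copy-paste damage early:

```shell
# Validate the bootstrap token format before pasting it into config.yaml.
token="hpobow.vw1g1ya5dre7sq06"
if printf '%s\n' "$token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "invalid token format" >&2
fi
```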

cat <<EOF > config.yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - https://192.168.100.1:2379
  - https://192.168.100.2:2379
  - https://192.168.100.3:2379
  caFile: /etc/etcd/ssl/ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem
  dataDir: /var/lib/etcd
networking:
  podSubnet: 10.244.0.0/16
kubernetesVersion: 1.10.0
api:
  advertiseAddress: "192.168.100.1"  # this node's IP
token: "hpobow.vw1g1ya5dre7sq06"  # the token saved earlier
tokenTTL: "0s"  # never expires
apiServerCertSANs:
- master1
- master2
- master3
- 192.168.100.1
- 192.168.100.2
- 192.168.100.3
- 192.168.100.4
featureGates:
  CoreDNS: true
EOF


When the file is ready, run kubeadm init --config config.yaml.

If it fails, inspect the error with journalctl -xeu kubelet (the kubelet startup log) or whatever log the error message points to, then reset with kubeadm reset.

Note: if etcd has already been written to, clear the etcd data directory before retrying.

On success you will see:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.100.1:6443 --token hpobow.vw1g1ya5dre7sq06 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847fgerbc58a6296911892662b98b1315

As the output says, even root cannot control the cluster with kubectl yet; set up the environment first.

For non-root users:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

For root:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Then source the environment:

source ~/.bash_profile

Distribute the certificates kubeadm generated to master2 and master3:

scp -r /etc/kubernetes/pki master2:/etc/kubernetes/
scp -r /etc/kubernetes/pki master3:/etc/kubernetes/

Deploy the canal network from master1:

kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml

kubectl apply -f \
https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

The image pulls may take a while. You can also download the yaml files first and edit the image paths to point at images you have already pulled:

wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/rbac.yaml
wget https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/canal/canal.yaml

Once deployed, check whether the node is Ready:

[root@master1 ~]# kubectl get node
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    31m       v1.10.0

Use kubectl get pods --all-namespaces to confirm every container is running; if any pod shows an error or crash, inspect it with kubectl describe pod <pod-name> -n kube-system.
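To spot unhealthy pods quickly, you can filter the STATUS column of that output. The awk one-liner below is a sketch demonstrated on captured sample output (the pod names are made up); on a live cluster, pipe the real kubectl command into the same awk program:

```shell
# Print name and status of every pod that is not Running/Completed.
pods='NAMESPACE     NAME          READY     STATUS             RESTARTS   AGE
kube-system   canal-x2z     3/3       Running            0          10m
kube-system   coredns-abc   0/1       CrashLoopBackOff   4          10m'

printf '%s\n' "$pods" | awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { print $2, $4 }'
```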

Run the initialization on master2 and master3

Copy the config.yaml used on master1 to the other two nodes and run kubeadm init --config config.yaml on each; you will get the same result as on master1.

Set up the environment variables the same way.

For non-root users:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

For root:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

Then source the environment:

source ~/.bash_profile

[root@master1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    1h        v1.10.0
master2   Ready     master    1h        v1.10.0
master3   Ready     master    1h        v1.10.0

View the containers running on all nodes with kubectl get pods --all-namespaces -o wide.

At this point the basic active-standby HA setup is complete. To deploy the dashboard, see my previous post on building a basic k8s cluster. Note that if you use basic auth against the apiserver for the dashboard, that setting must be applied on every master to stay highly available.

Also, to use HPA on k8s 1.10 you must add - --horizontal-pod-autoscaler-use-rest-clients=false to /etc/kubernetes/manifests/kube-controller-manager.yaml on every master; otherwise CPU usage cannot be collected and autoscaling will not trigger.
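Here is a sketch of making that edit with a script instead of by hand, demonstrated on a throwaway copy of a manifest fragment (the fragment is illustrative, not the full file). On a real master you would patch /etc/kubernetes/manifests/kube-controller-manager.yaml itself, and the kubelet restarts the static pod automatically:

```shell
# Insert the HPA flag right after the kube-controller-manager command entry.
m=$(mktemp)
cat > "$m" <<'EOF'
spec:
  containers:
  - command:
    - kube-controller-manager
    - --leader-elect=true
EOF

awk '{ print } /- kube-controller-manager$/ { print "    - --horizontal-pod-autoscaler-use-rest-clients=false" }' "$m" > "$m.patched"
cat "$m.patched"
```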

Monitoring add-on: heapster

You need three files: heapster.yaml, influxdb.yaml, and grafana.yaml.

vim heapster.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: heapster
subjects:
  - kind: ServiceAccount
    name: heapster
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: system:heapster
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: 192.168.220.84/third_party/heapster-amd64:v1.3.0  # my private registry; replace with your own
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster


vim influxdb.yaml

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: 192.168.220.84/third_party/heapster-influxdb-amd64:v1.1.1  # private registry; replace with your own
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb


vim grafana.yaml

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        image: 192.168.220.84/third_party/heapster-grafana-amd64:v4.4.1  # private registry; replace with your own
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
          # The following env variables are required to make Grafana accessible via
          # the kubernetes api-server proxy. On production clusters, we recommend
          # removing these env variables, setup auth for grafana, and expose the grafana
          # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
          value: /
      volumes:
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 31236
  selector:
    k8s-app: grafana


Apply all three with kubectl apply -f heapster.yaml -f influxdb.yaml -f grafana.yaml.

The results as shown on the dashboard:

[screenshot: heapster metrics in the dashboard]

[screenshot: heapster metrics in the dashboard]

[screenshot: grafana dashboards]

Joining worker nodes

Install the software versions listed at the beginning of the article:

docker17.03.2-ce
socat-1.7.3.2-2.el7.x86_64
kubelet-1.10.0-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.10.0-0.x86_64
kubeadm-1.10.0-0.x86_64

Environment configuration

Apply the same environment configuration as on the masters: stop and disable firewalld, set the hostname and /etc/hosts entries, disable swap and SELinux, raise the limits in /etc/security/limits.conf, and apply the sysctl settings (see the Environment configuration section above).

Then run the join command saved from the master init output:

kubeadm join 192.168.100.1:6443 --token hpobow.vw1g1ya5dre7sq06 --discovery-token-ca-cert-hash sha256:f79b68fb698c92b9336474eb3bf184e847fgerbc58a6296911892662b98b1315

Main references:

Detailed guide to installing a Kubernetes v1.10 cluster with kubeadm

Kubernetes 1.9 offline deployment
