Kubernetes can be deployed in two ways:
- online, installing with yum
- offline, installing from packages
Since not every environment has internet access, this article uses the offline approach. Deploying Kubernetes is somewhat involved, especially the networking: plan your network ranges up front, and pay attention to the details of each step to improve your odds of success. The walkthrough below is long; I have tried to include every detail so nothing is left out.
Preparation
1). Version information
| Component | Version | Notes |
|---|---|---|
| docker | 18.03.0-ce | - |
| kubernetes | 1.18.12 | - |
| etcd | 3.4.7 | API VERSION 3.4 |
| linux | centos | 3.10.0-1127.8.2.el7.x86_64 |
2). Choosing the nodes
Resources were limited, so three machines are used; besides the Kubernetes components, the etcd cluster shares the same hosts.
| IP address | Role | Components deployed |
|---|---|---|
| 173.119.126.200 | master | kube-proxy,kubelet,etcd,flanneld,kube-apiserver,kube-controller-manager,kube-scheduler |
| 173.119.126.199 | node | kube-proxy,kubelet,etcd,flanneld |
| 173.119.126.198 | node | kube-proxy,kubelet,etcd,flanneld |
3). Update the hosts configuration; do this on all three machines
# run on the 200 machine (set the matching hostname on the other two)
hostnamectl set-hostname k8s-master-216-200
# then add the host entries on every machine
vim /etc/hosts
173.119.126.200 k8s-master-216-200
173.119.126.199 k8s-worker-216-199
173.119.126.198 k8s-worker-216-198
4). Confirm that the MAC address and product_uuid are unique on every node
ifconfig -a
cat /sys/class/dmi/id/product_uuid
5). Disable the firewall
systemctl stop firewalld # stop the service
systemctl disable firewalld # keep it off after reboot
6). Disable SELinux
sestatus # check the current SELinux status
setenforce 0 # take effect immediately (permissive until reboot)
vi /etc/sysconfig/selinux
SELINUX=disabled
7). Disable swap
swapoff -a # turn swap off for the current boot
vim /etc/fstab
# comment out the following line so swap stays off after reboot
/dev/mapper/rhel-swap swap swap defaults 0 0
8). Install etcd
etcd installation is covered in a separate document.
Docker installation
Kubernetes components run on top of containers, so docker must be installed first.
Download docker-18.03.0-ce:
wget https://download.docker.com/linux/static/stable/x86_64/docker-18.03.0-ce.tgz
Extract and install the binaries
tar -xvzf docker-18.03.0-ce.tgz -C ./
cp docker/* /usr/bin/
Configure the service to start automatically; manage docker with systemd
cat > /usr/lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
Check that docker is installed
docker -v
Enable start on boot
systemctl daemon-reload
systemctl start docker
systemctl enable docker
Kubernetes installation
We finally reach the Kubernetes install itself; stay with it.
First download the Kubernetes release matching your target version:
https://dl.k8s.io/v1.18.12/kubernetes-server-linux-amd64.tar.gz
Extract and install the binaries
mkdir -p /tools/kubernetes/{bin,cfg,ssl,logs}
tar -xvzf kubernetes-server-linux-amd64.tar.gz -C ./
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager /tools/kubernetes/bin
cp kubectl /usr/bin/
1). Generate certificates
mkdir -p /tools/k8s/k8s-cert && cd /tools/k8s/k8s-cert
cat > server-csr.json<<EOF
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"173.10.0.1",
"173.119.126.200",
"173.119.126.199",
"173.119.126.198",
"localhost",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShangHai",
"ST": "ShangHai",
"O": "k8s",
"OU": "System"
}
]
}
EOF
cat > ca-config.json<<EOF
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
EOF
cat > ca-csr.json<<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShangHai",
"ST": "ShangHai",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the CA certificate
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate the server certificate signed by that CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
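Before moving on, it is worth checking that the generated server certificate really contains all of the addresses listed above in its SAN list (a quick inspection, assuming openssl is available on the machine):

```shell
# print the Subject Alternative Names embedded in server.pem
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
```

Every IP and DNS name from server-csr.json should appear; if one is missing, regenerate the certificate before starting the apiserver.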
2). Deploy kube-apiserver on the master node
cat >/tools/kubernetes/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--etcd-servers=https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379 \
--bind-address=173.119.126.200 \
--secure-port=6443 \
--advertise-address=173.119.126.200 \
--allow-privileged=true \
--service-cluster-ip-range=173.10.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth=true \
--token-auth-file=/tools/kubernetes/cfg/token.csv \
--service-node-port-range=30000-32767 \
--kubelet-client-certificate=/tools/kubernetes/ssl/server.pem \
--kubelet-client-key=/tools/kubernetes/ssl/server-key.pem \
--tls-cert-file=/tools/kubernetes/ssl/server.pem \
--tls-private-key-file=/tools/kubernetes/ssl/server-key.pem \
--client-ca-file=/tools/kubernetes/ssl/ca.pem \
--service-account-key-file=/tools/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/tools/etcd/ssl/ca.pem \
--etcd-certfile=/tools/etcd/ssl/server.pem \
--etcd-keyfile=/tools/etcd/ssl/server-key.pem \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/tools/kubernetes/logs/k8s-audit.log"
EOF
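The config above references --token-auth-file=/tools/kubernetes/cfg/token.csv, which must exist before kube-apiserver will start. A minimal sketch for creating it (the 10001 uid is an arbitrary illustrative choice; the file format is token,user,uid,"group"):

```shell
# generate a random 32-hex-character bootstrap token
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
# token.csv format: token,user,uid,"group"
echo "${TOKEN},kubelet-bootstrap,10001,\"system:node-bootstrapper\"" > token.csv
cat token.csv
# then move it into place:
# cp token.csv /tools/kubernetes/cfg/token.csv
```

Keep the token value around: the same string goes into bootstrap.kubeconfig later.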
Copy the certificates into place
cp /tools/k8s/k8s-cert/*pem /tools/kubernetes/ssl/
Configure the service to start automatically; manage kube-apiserver with systemd
cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-apiserver.conf
ExecStart=/tools/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Enable start on boot
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
3). Deploy kube-controller-manager on the master node
cat > /tools/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--leader-elect=true \
--master=127.0.0.1:8080 \
--bind-address=127.0.0.1 \
--allocate-node-cidrs=true \
--cluster-cidr=173.10.0.0/16 \
--service-cluster-ip-range=173.10.0.0/24 \
--cluster-signing-cert-file=/tools/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/tools/kubernetes/ssl/ca-key.pem \
--root-ca-file=/tools/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/tools/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
EOF
Configure the service to start automatically; manage kube-controller-manager with systemd
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/tools/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Enable start on boot
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager
Authorize the kubelet-bootstrap user to request certificates
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
4). Deploy kube-scheduler
cat > /tools/kubernetes/cfg/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/tools/kubernetes/logs \\
--leader-elect \\
--master=127.0.0.1:8080 \\
--bind-address=127.0.0.1"
EOF
Configure the service to start automatically; manage kube-scheduler with systemd
cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-scheduler.conf
ExecStart=/tools/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Enable start on boot
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
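With the apiserver, controller-manager and scheduler all running, a quick health check from the master (on 1.18 the componentstatuses API still works):

```shell
# scheduler, controller-manager and the etcd members should all report Healthy
kubectl get cs
```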
5). Deploy the kubelet
Create the working directories on the node(s):
mkdir -p /tools/kubernetes/{bin,cfg,ssl,logs}
Run on every node:
cp kubectl /usr/bin/
Run on the master node:
cat > /tools/kubernetes/cfg/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--hostname-override=k8s-master-216-200 \
--network-plugin=cni \
--kubeconfig=/tools/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/tools/kubernetes/cfg/bootstrap.kubeconfig \
--config=/tools/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/tools/kubernetes/ssl \
--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
EOF
Create the kubelet parameter file
cat > /tools/kubernetes/cfg/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 173.10.10.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /tools/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
Generate the kubelet bootstrap.kubeconfig file
Note: the TOKEN here must match the one in /tools/kubernetes/cfg/token.csv
# apiserver IP:PORT
KUBE_APISERVER="https://173.119.126.200:6443"
TOKEN=""
kubectl config set-cluster kubernetes \
--certificate-authority=/tools/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials "kubelet-bootstrap" \
--token=${TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
kubectl create clusterrolebinding system:anonymous --clusterrole=cluster-admin --user=system:anonymous
Configure the service to start automatically; manage the kubelet with systemd
cat > /usr/lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kubelet.conf
ExecStart=/tools/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Enable start on boot
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
On the master, list the kubelet certificate requests
kubectl get csr
# the output shows the generated node-csr-... NAME
Approve the request. Note: do not copy this command verbatim; replace node-csr-* with the NAME value returned by kubectl get csr
kubectl certificate approve node-csr-{generated suffix}
Note: because the network plugin is not yet deployed, the node will show NotReady.
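After approval you can watch the node register; it will sit in NotReady until the CNI network is deployed in the steps below:

```shell
kubectl get nodes
# output will look roughly like:
# NAME                 STATUS     ROLES    AGE   VERSION
# k8s-master-216-200   NotReady   <none>   1m    v1.18.12
```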
6). Deploy kube-proxy
Create the configuration file
cat > /tools/kubernetes/cfg/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/tools/kubernetes/logs \
--config=/tools/kubernetes/cfg/kube-proxy-config.yml"
EOF
Create the parameter file
cat > /tools/kubernetes/cfg/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /tools/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master-216-200
clusterCIDR: 173.10.0.0/24
EOF
Switch to the certificate directory
cd /tools/k8s/k8s-cert/
Create the certificate signing request file
cat > kube-proxy-csr.json << EOF
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "ShangHai",
"ST": "ShangHai",
"O": "k8s",
"OU": "System"
}
]
}
EOF
Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
List the generated certificates
ls kube-proxy*pem
Generate the kube-proxy kubeconfig
KUBE_APISERVER="https://173.119.126.200:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/tools/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/tools/k8s/k8s-cert/kube-proxy.pem \
--client-key=/tools/k8s/k8s-cert/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Copy the generated kube-proxy.kubeconfig to its target path
cp kube-proxy.kubeconfig /tools/kubernetes/cfg/
Configure the service to start automatically; manage kube-proxy with systemd
cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=/tools/kubernetes/cfg/kube-proxy.conf
ExecStart=/tools/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
Enable start on boot
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
Copy the certificates to the other two worker hosts
scp -r /tools/k8s/k8s-cert/kube-proxy*pem 173.119.126.199:/tools/kubernetes/ssl/
scp -r /tools/k8s/k8s-cert/kube-proxy*pem 173.119.126.198:/tools/kubernetes/ssl/
The core components are now deployed; next is the CNI network.
7). Deploy the CNI network
Replace the image registry in kube-flannel.yml. Here flannel:v0.12.0-amd64 was pushed to a private harbor; if the nodes have internet access, skip this and go straight to the next step.
sed -i -r "s#quay.io#dummy.net#g" kube-flannel.yml
Create the flanneld configuration file
cat >/tools/kubernetes/cfg/flanneld<<EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379,https://127.0.0.1:2379 -etcd-cafile=/tools/etcd/ssl/ca.pem -etcd-certfile=/tools/etcd/ssl/server.pem -etcd-keyfile=/tools/etcd/ssl/server-key.pem -etcd-prefix=/dummy.net/network"
EOF
Set the pod network CIDR and store it in etcd
ETCDCTL_API=2 /tools/etcd/bin/etcdctl --ca-file=/tools/etcd/ssl/ca.pem --cert-file=/tools/etcd/ssl/server.pem --key-file=/tools/etcd/ssl/server-key.pem --endpoints="https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379" set /dummy.net/network/config '{ "Network": "173.10.0.0/16", "Backend": {"Type": "vxlan"}}'
Configure the service to start automatically; manage flanneld with systemd
cat >/usr/lib/systemd/system/flanneld.service<<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
#EnvironmentFile=/etc/sysconfig/flanneld
#EnvironmentFile=/etc/sysconfig/docker-network
EnvironmentFile=/tools/kubernetes/cfg/flanneld
ExecStart=/tools/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/tools/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Apply the flannel manifest
kubectl apply -f kube-flannel.yml
# check that flannel is running
kubectl get pods -n kube-system
flannel must also run on the other node(s)
Configure the flanneld network on each node
cat >/tools/kubernetes/cfg/flanneld<<EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379 -etcd-cafile=/tools/etcd/ssl/ca.pem -etcd-certfile=/tools/etcd/ssl/server.pem -etcd-keyfile=/tools/etcd/ssl/server-key.pem -etcd-prefix=/dummy.net/network"
EOF
cat >/usr/lib/systemd/system/flanneld.service<<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
#EnvironmentFile=/etc/sysconfig/flanneld
#EnvironmentFile=/etc/sysconfig/docker-network
EnvironmentFile=/tools/kubernetes/cfg/flanneld
ExecStart=/tools/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/tools/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
Enable start on boot
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
Once the network is configured, check the generated file /run/flannel/subnet.env; it should look like:
DOCKER_OPT_BIP="--bip=173.10.1.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=173.10.1.1/24 --ip-masq=false --mtu=1450"
Set the network model in etcd
ETCDCTL_API=2 /tools/etcd/bin/etcdctl --ca-file=/tools/etcd/ssl/ca.pem --cert-file=/tools/etcd/ssl/server.pem --key-file=/tools/etcd/ssl/server-key.pem --endpoints="https://173.119.126.200:2379,https://173.119.126.199:2379,https://173.119.126.198:2379" set /dummy.net/network/config '{ "Network": "173.10.0.0/16", "Backend": {"Type": "vxlan"}}'
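To confirm the value was stored, read it back with the same etcdctl flags (v2 API, matching the set command above):

```shell
# print the stored network config; expect the Network/Backend JSON back
ETCDCTL_API=2 /tools/etcd/bin/etcdctl \
  --ca-file=/tools/etcd/ssl/ca.pem \
  --cert-file=/tools/etcd/ssl/server.pem \
  --key-file=/tools/etcd/ssl/server-key.pem \
  --endpoints="https://173.119.126.200:2379" \
  get /dummy.net/network/config
```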
Authorize the apiserver to access the kubelet
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
Apply it
kubectl apply -f apiserver-to-kubelet-rbac.yaml
Cleanup
On worker nodes these files are generated automatically when the certificate request is approved and differ per node; if they were copied over from another machine, delete them so they are regenerated.
rm -f /tools/kubernetes/cfg/kubelet.kubeconfig
rm -f /tools/kubernetes/ssl/kubelet*
8). Deploy CoreDNS
Fetch the coredns yaml file and rename it
mv coredns.yaml.base coredns.yaml
Change the image address (skip this step if the nodes have internet access): point spec.containers.image at a registry the nodes can reach; here the image was pushed to a private harbor
dummy.net/coredns/coredns:1.3.1
Apply it
kubectl apply -f coredns.yaml
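Once the CoreDNS pods are running, in-cluster DNS can be verified with a throwaway busybox pod (this assumes a busybox image is pullable; busybox:1.28 is commonly used because later tags have nslookup quirks):

```shell
# run nslookup inside the cluster against the kubernetes service
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
```

A successful lookup should resolve to the service IP 173.10.0.1 shown in the service table at the end.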
9). Deploy the Dashboard
Download the yaml configuration file
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml --no-check-certificate
Optionally pin the pod to a specific worker by adding nodeName: k8s-worker-216-198. The image used here was pushed to a private harbor; if the nodes have internet access, no modification is needed.
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: k8s-worker-216-198  # optional, can be left out
      containers:
        - name: kubernetes-dashboard
          image: dummy.net/kubernetesui/dashboard:v2.0.0-beta8  # no change needed if the nodes are online
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
Apply it to start the service
kubectl apply -f recommended.yaml
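The dashboard login screen asks for a token. One common approach (a sketch; the dashboard-admin account name is arbitrary, and binding cluster-admin is convenient but very broad) is to create an admin service account and read its token:

```shell
# create a service account, grant it cluster-admin, and print its token
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:dashboard-admin
kubectl describe secret -n kubernetes-dashboard \
  $(kubectl get secret -n kubernetes-dashboard | grep dashboard-admin | awk '{print $1}')
```

With the NodePort service from the final table, the dashboard is then reachable at https://<node-ip>:30001.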
That is every component installed. Here is the final running state:
| NAMESPACE | NAME | READY | STATUS | RESTARTS | AGE |
|---|---|---|---|---|---|
| default | pod/busybox | 1/1 | Running | 0 | 30h |
| kube-system | pod/coredns-79b975988-69r5p | 1/1 | Running | 0 | 30h |
| kube-system | pod/coredns-79b975988-bn4mc | 1/1 | Running | 0 | 30h |
| kube-system | pod/kube-flannel-ds-amd64-jnnwg | 1/1 | Running | 0 | 5d23h |
| kube-system | pod/kube-flannel-ds-amd64-s9hmz | 1/1 | Running | 6 | 5d23h |
| kube-system | pod/kube-flannel-ds-amd64-vt8cl | 1/1 | Running | 3 | 5d23h |
| kubernetes-dashboard | pod/dashboard-metrics-scraper-cd77fc8d-k5pm4 | 1/1 | Running | 0 | 5d22h |
| kubernetes-dashboard | pod/kubernetes-dashboard-9d8dc486-675wl | 1/1 | Running | 0 | 5d22h |
| NAMESPACE | NAME | TYPE | CLUSTER-IP | EXTERNAL-IP | PORT(S) | AGE |
|---|---|---|---|---|---|---|
| default | service/kubernetes | ClusterIP | 173.10.0.1 | <none> | 443/TCP | 8d |
| kube-system | service/coredns | ClusterIP | 173.10.0.11 | <none> | 53/UDP,53/TCP,9153/TCP | 30h |
| kube-system | service/kube-dns | ClusterIP | 173.10.0.2 | <none> | 53/UDP,53/TCP,9153/TCP | 2d |
| kubernetes-dashboard | service/dashboard-metrics-scraper | ClusterIP | 173.10.0.168 | <none> | 8000/TCP | 5d22h |
| kubernetes-dashboard | service/kubernetes-dashboard | NodePort | 173.10.0.34 | <none> | 443:30001/TCP | 5d22h |
Other useful commands
Check overall cluster state
kubectl get pods,svc --all-namespaces -o wide
View logs of a misbehaving pod
kubectl logs pod/dashboard-metrics-scraper-cd77fc8d-k5pm4 -n kubernetes-dashboard
Remove the flannel network
kubectl delete -f kube-flannel.yml
ifconfig flannel.1 down # the vxlan backend creates an interface named flannel.1
ip link delete flannel.1
Restart operations on a node
systemctl restart flanneld.service
systemctl restart kube-proxy && systemctl status kube-proxy.service
systemctl restart kubelet && systemctl status kubelet.service
Restart operations on the master
systemctl restart kube-apiserver && systemctl status kube-apiserver.service
systemctl restart kube-controller-manager && systemctl status kube-controller-manager
systemctl restart kube-scheduler && systemctl status kube-scheduler
Listing and finding resources
$ kubectl get services                       # list the services in the current namespace
$ kubectl get pods --all-namespaces          # list all pods in all namespaces
$ kubectl get pods -o wide                   # list pods with extended detail
$ kubectl get deployment my-dep              # list a particular deployment
$ kubectl get pods -n kube-system            # list the pods in a given namespace