0. Preparation

This deployment was done on arm64 hardware: the boards are RK3328 and the OS is Ubuntu 16.04.
The cluster has one Master and two Nodes, and the following components need to be installed:
| 角色 | IP | 組件 |
|---|---|---|
| k8s-master | 172.16.32.10 | etcd,apiserver,controller-manager,scheduler,flannel |
| k8s-node1 | 172.16.32.11 | kubelet,kube-proxy,flannel,docker |
| k8s-node2 | 172.16.32.12 | kubelet,kube-proxy,flannel,docker |
(Here "apiserver", "controller-manager" and "scheduler" refer to the kube-apiserver, kube-controller-manager and kube-scheduler binaries.)

Download the required packages in advance:
- etcd-v3.3.5-linux-arm64.tar.gz
- flannel-v0.10.0-linux-arm64.tar.gz
- kubernetes-node-linux-arm64.tar.gz
- kubernetes-server-linux-arm64.tar.gz
All of them can be downloaded from GitHub. If you do not know the URL, search for "xxx release" (e.g. "etcd release"); the first hit is usually the release page, then pick the matching version to download.
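For the downloads themselves, GitHub release URLs follow a predictable pattern. A small sketch that assembles them from the versions above (the org/repo paths are assumptions based on the projects' usual homes; double-check them against the actual release pages before fetching):

```shell
# Build GitHub release download URLs for the versions used in this guide.
# The org/repo paths below are assumptions -- verify on the release pages.
ARCH=arm64
ETCD_VER=v3.3.5
FLANNEL_VER=v0.10.0

ETCD_URL="https://github.com/etcd-io/etcd/releases/download/${ETCD_VER}/etcd-${ETCD_VER}-linux-${ARCH}.tar.gz"
FLANNEL_URL="https://github.com/flannel-io/flannel/releases/download/${FLANNEL_VER}/flannel-${FLANNEL_VER}-linux-${ARCH}.tar.gz"

echo "$ETCD_URL"
echo "$FLANNEL_URL"
# Then fetch each one, e.g.: wget "$ETCD_URL"
```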
1. Deploying the Master

Pre-deployment initialization

Run the following steps as root.

Disable the firewall:

```shell
ufw disable
```

Install ntp (if it is not already installed):

```shell
sudo apt-get install ntp
```
Add the hostnames and IPs to /etc/hosts:

```
172.16.32.10 k8s-master
172.16.32.11 k8s-node1
172.16.32.12 k8s-node2
```
Create a k8s-master user and grant it root privileges:

```shell
useradd -m -d /home/k8s-master -s /bin/bash k8s-master
sudo sed -i -r '/root.*ALL=\(ALL.*ALL/a \k8s-master ALL=\(ALL\) NOPASSWD: ALL' /etc/sudoers
```

Switch to the k8s-master user:

```shell
su k8s-master
```
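Editing /etc/sudoers in place with sed is easy to get wrong and can lock you out. A safer sketch, staging the rule in a temporary file first (the final install step is commented out, and the file name /etc/sudoers.d/k8s-master is an arbitrary choice):

```shell
# Safer alternative to sed-editing /etc/sudoers: stage the rule in a file,
# then (as root) syntax-check it with visudo before installing it.
RULE='k8s-master ALL=(ALL) NOPASSWD: ALL'
TMP=$(mktemp)
echo "$RULE" > "$TMP"
chmod 0440 "$TMP"
cat "$TMP"
# On the machine itself:
#   sudo visudo -c -f "$TMP" && sudo install -m 0440 "$TMP" /etc/sudoers.d/k8s-master
```

visudo -c refuses syntactically broken files, which a raw sed edit of /etc/sudoers never checks.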
Run the following steps as the k8s-master user.

Create directories to hold the binaries and the component config files:

```shell
sudo mkdir -p ~/kubernetes/bin ~/kubernetes/cfg
```

Add the directory to PATH; since the binaries live in a custom location, this makes them convenient to invoke:

```shell
echo "export PATH=\$PATH:/home/k8s-master/kubernetes/bin" >> ~/.bashrc
source ~/.bashrc
```
Installing etcd

Unpack etcd-v3.3.5-linux-arm64.tar.gz:

```shell
sudo tar -zxvf etcd-v3.3.5-linux-arm64.tar.gz
```

Copy etcd and etcdctl from the unpacked directory to ~/kubernetes/bin:

```shell
sudo cp etcd-v3.3.5-linux-arm64/etcd* ~/kubernetes/bin
```

Create the etcd config file (the name must match the EnvironmentFile path used in the service file below):

```shell
sudo vi ~/kubernetes/cfg/etcd.conf
```

```
ETCD_NAME="default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
```

Note: before starting the etcd service, create the /var/lib/etcd directory, which holds etcd's data.
Create the etcd service file:

```shell
sudo vi /lib/systemd/system/etcd.service
```

```
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd
Environment=ETCD_UNSUPPORTED_ARCH=arm64
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/etcd.conf
# set GOMAXPROCS to number of processors ($$ is systemd's escape for a literal $)
ExecStart=/bin/bash -c "GOMAXPROCS=$$(nproc) /home/k8s-master/kubernetes/bin/etcd"

[Install]
WantedBy=multi-user.target
```

Note the line Environment=ETCD_UNSUPPORTED_ARCH=arm64: etcd currently requires it to support arm, and it will refuse to start without it.
Start the etcd service:

```shell
sudo systemctl daemon-reload
sudo systemctl start etcd
sudo systemctl enable etcd
```

Check etcd's status with systemctl status etcd; if something goes wrong, look at the log: /var/log/syslog.
Create the etcd network entry for flannel:

```shell
etcdctl set /coreos.com/network/config '{"Network":"10.1.0.0/16","Backend":{"Type":"vxlan"}}'
```

If you do not set the Backend type to vxlan, flannel will later fail with a "UDP backend is not supported" error: flannel's default backend is UDP, which arm does not support, so the backend type must be specified when creating the etcd network entry.
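The shell quoting in this command is fragile; one way to sidestep it is to keep the JSON in a file. A sketch (the etcdctl calls are commented out since they need a running etcd):

```shell
# Write the flannel network config to a file to avoid shell-quoting mistakes,
# then feed it to etcdctl. Requires a running etcd (calls commented out here).
cat > /tmp/flannel-net.json <<'EOF'
{"Network":"10.1.0.0/16","Backend":{"Type":"vxlan"}}
EOF
# etcdctl set /coreos.com/network/config "$(cat /tmp/flannel-net.json)"
# etcdctl get /coreos.com/network/config   # verify what was actually stored
cat /tmp/flannel-net.json
```

Reading the key back with etcdctl get is a cheap way to confirm the JSON survived the shell intact.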
Installing the three key Master components: kube-apiserver, kube-controller-manager, kube-scheduler

Unpack kubernetes-server-linux-arm64.tar.gz (the target directory must exist before tar -C can use it):

```shell
mkdir -p kubernetes-server-linux-arm64
sudo tar -zxvf kubernetes-server-linux-arm64.tar.gz -C kubernetes-server-linux-arm64
```

Copy the binaries to ~/kubernetes/bin:

```shell
sudo cp kubernetes-server-linux-arm64/kubernetes/server/bin/kube-apiserver ~/kubernetes/bin
sudo cp kubernetes-server-linux-arm64/kubernetes/server/bin/kube-controller-manager ~/kubernetes/bin
sudo cp kubernetes-server-linux-arm64/kubernetes/server/bin/kube-scheduler ~/kubernetes/bin
```
Installing kube-apiserver

Add the kube-apiserver config file:

```shell
sudo vi ~/kubernetes/cfg/kube-apiserver
```

```
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"

# --etcd-servers=[]: List of etcd servers to watch (http://ip:port),
# comma separated. Mutually exclusive with -etcd-config
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# --insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# --insecure-port=8080: The port on which to serve unsecured, unauthenticated access.
KUBE_API_PORT="--insecure-port=8080"

# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=false"

# --service-cluster-ip-range=<nil>: A CIDR notation IP range from which to assign service cluster IPs.
# This must not overlap with any IP ranges assigned to nodes for pods.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=169.169.0.0/16"

# --admission-control="AlwaysAdmit": Ordered list of plug-ins
# to do admission control of resources into cluster.
# Comma-delimited list of:
#   LimitRanger, AlwaysDeny, SecurityContextDeny, NamespaceExists,
#   NamespaceLifecycle, NamespaceAutoProvision, AlwaysAdmit,
#   ServiceAccount, DefaultStorageClass, DefaultTolerationSeconds, ResourceQuota
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ResourceQuota"
```
Add the kube-apiserver service file:

```shell
sudo vi /lib/systemd/system/kube-apiserver.service
```

```
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/kube-apiserver
ExecStart=/home/k8s-master/kubernetes/bin/kube-apiserver ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${KUBE_ETCD_SERVERS} \
            ${KUBE_API_ADDRESS} \
            ${KUBE_API_PORT} \
            ${KUBE_ALLOW_PRIV} \
            ${KUBE_SERVICE_ADDRESSES} \
            ${KUBE_ADMISSION_CONTROL}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service:

```shell
sudo systemctl daemon-reload
sudo systemctl start kube-apiserver
sudo systemctl enable kube-apiserver
```
Installing kube-controller-manager

Add the config file:

```shell
sudo vi ~/kubernetes/cfg/kube-controller-manager
```

```
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_MASTER="--master=172.16.32.10:8080"
```

Add the service file:

```shell
sudo vi /lib/systemd/system/kube-controller-manager.service
```

```
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/kube-controller-manager
ExecStart=/home/k8s-master/kubernetes/bin/kube-controller-manager ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service:

```shell
sudo systemctl daemon-reload
sudo systemctl start kube-controller-manager
sudo systemctl enable kube-controller-manager
```
Installing kube-scheduler

Add the config file:

```shell
sudo vi ~/kubernetes/cfg/kube-scheduler
```

```
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"

KUBE_MASTER="--master=172.16.32.10:8080"
```

Add the service file:

```shell
sudo vi /lib/systemd/system/kube-scheduler.service
```

```
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/kube-scheduler
ExecStart=/home/k8s-master/kubernetes/bin/kube-scheduler ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service:

```shell
sudo systemctl daemon-reload
sudo systemctl start kube-scheduler
sudo systemctl enable kube-scheduler
```
Installing flannel

Unpack flannel-v0.10.0-linux-arm64.tar.gz (create the target directory first, since the tarball has no top-level directory):

```shell
mkdir -p flannel-v0.10.0-linux-arm64
sudo tar -zxvf flannel-v0.10.0-linux-arm64.tar.gz -C flannel-v0.10.0-linux-arm64
```

Copy the binaries to ~/kubernetes/bin:

```shell
sudo cp flannel-v0.10.0-linux-arm64/* ~/kubernetes/bin
```
Add the config file:

```shell
sudo vi ~/kubernetes/cfg/flanneld.conf
```

```
FLANNEL_ETCD_ENDPOINTS="http://172.16.32.10:2379"
FLANNEL_IFACE="eth0"
FLANNEL_ETCD_PREFIX="/coreos.com/network"
FLANNEL_OPTIONS=""
```

Note: if the machine has several NICs, pick the right one by setting FLANNEL_IFACE=xxx and passing -iface=${FLANNEL_IFACE} at startup.
Add the service file:

```shell
sudo vi /lib/systemd/system/flanneld.service
```

```
[Unit]
Description=Flanneld
Documentation=https://github.com/coreos/flannel
After=network.target
Before=docker.service

[Service]
EnvironmentFile=-/home/k8s-master/kubernetes/cfg/flanneld.conf
ExecStart=/home/k8s-master/kubernetes/bin/flanneld \
            -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
            -iface=${FLANNEL_IFACE} \
            -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
            ${FLANNEL_OPTIONS}
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
```
Start the service:

```shell
sudo systemctl daemon-reload
sudo systemctl start flanneld
sudo systemctl enable flanneld
```
Configure the Docker network, overriding Docker's own settings with flannel's subnet:

```shell
# Find the subnet flannel was assigned and note it down (e.g. 10.1.90.1/24)
grep "FLANNEL_SUBNET" /run/flannel/subnet.env | cut -d= -f2

# Create a drop-in that overrides dockerd's startup options.
# (Use tee: "sudo echo ... > file" fails because the redirect is not run as root.)
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/docker.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --bip=10.1.90.1/24 --mtu=1472
EOF

# Restart docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
```
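Copying the subnet value by hand is error-prone; it can be derived from subnet.env directly. A sketch, demonstrated on a sample file (on a real node the source would be /run/flannel/subnet.env and the generated file would be installed under /etc/systemd/system/docker.service.d/):

```shell
# Generate the Docker drop-in from flannel's subnet.env instead of
# copying the value by hand. A sample file stands in for the real one here.
SUBNET_ENV=/tmp/subnet.env.sample
cat > "$SUBNET_ENV" <<'EOF'
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.90.1/24
FLANNEL_MTU=1472
EOF

BIP=$(grep '^FLANNEL_SUBNET=' "$SUBNET_ENV" | cut -d= -f2)
MTU=$(grep '^FLANNEL_MTU=' "$SUBNET_ENV" | cut -d= -f2)

cat > /tmp/docker.conf <<EOF
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --bip=${BIP} --mtu=${MTU}
EOF
cat /tmp/docker.conf
# On the node:
#   sudo install -D -m 0644 /tmp/docker.conf /etc/systemd/system/docker.service.d/docker.conf
#   sudo systemctl daemon-reload && sudo systemctl restart docker
```

This way the drop-in always tracks whatever subnet flannel actually handed out.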
2. Deploying the Nodes

Deploying a Node is much like deploying the Master: add the config files and service files, then start the services.

The same initialization is needed before deploying.

Disable the firewall:

```shell
ufw disable
```

Install ntp (if it is not already installed):

```shell
sudo apt-get install ntp
```

Add the hostnames and IPs to /etc/hosts:

```
172.16.32.10 k8s-master
172.16.32.11 k8s-node1
172.16.32.12 k8s-node2
```
Create a k8s-node1 user and grant it root privileges:

```shell
useradd -m -d /home/k8s-node1 -s /bin/bash k8s-node1
sudo sed -i -r '/root.*ALL=\(ALL.*ALL/a \k8s-node1 ALL=\(ALL\) NOPASSWD: ALL' /etc/sudoers
```

Switch to the k8s-node1 user and run the following steps as that user:

```shell
su k8s-node1
```

Create directories to hold the binaries and the component config files:

```shell
sudo mkdir -p ~/kubernetes/bin ~/kubernetes/cfg
```
Installing kubelet and kube-proxy

Unpack kubernetes-node-linux-arm64.tar.gz:

```shell
mkdir -p kubernetes-node-linux-arm64
sudo tar -zxvf kubernetes-node-linux-arm64.tar.gz -C kubernetes-node-linux-arm64
```

Copy the binaries to ~/kubernetes/bin:

```shell
sudo cp kubernetes-node-linux-arm64/kubernetes/node/bin/kubelet ~/kubernetes/bin
sudo cp kubernetes-node-linux-arm64/kubernetes/node/bin/kube-proxy ~/kubernetes/bin
```
Installing kubelet

Add the kubeconfig file:

```shell
sudo vi ~/kubernetes/cfg/kubelet.kubeconfig
```

```
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: http://172.16.32.10:8080/
  name: local
contexts:
- context:
    cluster: local
  name: local
current-context: local
```

Add the kubelet config file:

```shell
sudo vi ~/kubernetes/cfg/kubelet
```

```
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"

# --address=0.0.0.0: The IP address for the Kubelet to serve on (set to 0.0.0.0 for all interfaces)
NODE_ADDRESS="--address=172.16.32.11"

# --port=10250: The port for the Kubelet to serve on. Note that "kubectl logs" will not work if you set this flag.
NODE_PORT="--port=10250"

# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
NODE_HOSTNAME="--hostname-override=ubuntu-node1"

# Path to a kubeconfig file, specifying how to connect to the API server.
# Use an absolute path here: systemd does not expand "~".
KUBELET_KUBECONFIG="--kubeconfig=/home/k8s-node1/kubernetes/cfg/kubelet.kubeconfig"
#KUBELET_KUBECONFIG="--api-servers=http://${MASTER_ADDRESS}:8080"

# --allow-privileged=false: If true, allow containers to request privileged mode. [default=false]
KUBE_ALLOW_PRIV="--allow-privileged=false"

# DNS info
KUBELET__DNS_IP="--cluster-dns=169.169.0.2"
KUBELET_DNS_DOMAIN="--cluster-domain=cluster.local"

KUBELET_SWAP="--fail-swap-on=false"
KUBELET_ARGS="--pod_infra_container_image=hub.c.163.com/allan1991/pause-amd64:3.0"
```

In Kubernetes 1.10, kubelet connects to the API server through a kubeconfig YAML file rather than the old --api-servers flag, so the configuration must be changed accordingly; otherwise kubelet will not start.
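Because systemd runs kubelet directly, without a shell, a `~` in any of these option values is passed through literally and the file will not be found. A quick sanity check over the environment file catches this (shown against a sample file with the problem deliberately present):

```shell
# Flag any '~' in the kubelet environment file: systemd passes option values
# to the binary verbatim, so tilde paths end up pointing at a nonexistent file.
CFG=/tmp/kubelet.sample
cat > "$CFG" <<'EOF'
KUBELET_KUBECONFIG="--kubeconfig=~/kubernetes/cfg/kubelet.kubeconfig"
KUBE_LOG_LEVEL="--v=4"
EOF

if grep -n '~' "$CFG"; then
  echo "WARNING: replace '~' with an absolute path such as /home/k8s-node1/kubernetes/cfg/kubelet.kubeconfig"
fi
```

Point CFG at the real ~/kubernetes/cfg/kubelet on the node to check the live file.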
Add the service file:

```shell
sudo vi /lib/systemd/system/kubelet.service
```

```
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/home/k8s-node1/kubernetes/cfg/kubelet
ExecStart=/home/k8s-node1/kubernetes/bin/kubelet ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${NODE_ADDRESS} \
            ${NODE_PORT} \
            ${NODE_HOSTNAME} \
            ${KUBELET_KUBECONFIG} \
            ${KUBE_ALLOW_PRIV} \
            ${KUBELET__DNS_IP} \
            ${KUBELET_DNS_DOMAIN} \
            ${KUBELET_SWAP}
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
```
Start the service:

```shell
sudo systemctl daemon-reload
sudo systemctl start kubelet
sudo systemctl enable kubelet
```
Installing kube-proxy

Add the config file:

```shell
sudo vi ~/kubernetes/cfg/kube-proxy
```

```
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=true"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"

# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
NODE_HOSTNAME="--hostname-override=k8s-node1"

# --master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
KUBE_MASTER="--master=http://172.16.32.10:8080"
```
Add the service file:

```shell
sudo vi /lib/systemd/system/kube-proxy.service
```

```
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/home/k8s-node1/kubernetes/cfg/kube-proxy
ExecStart=/home/k8s-node1/kubernetes/bin/kube-proxy ${KUBE_LOGTOSTDERR} \
            ${KUBE_LOG_LEVEL} \
            ${NODE_HOSTNAME} \
            ${KUBE_MASTER}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```
Start the service:

```shell
sudo systemctl daemon-reload
sudo systemctl start kube-proxy
sudo systemctl enable kube-proxy
```
Installing flannel

The procedure is the same as on the Master; refer to the flannel section above.
3. Problems encountered during installation

Problem: failed to start ContainerManager system validation failed - following cgroup subsystem not mounted: [memory]

Solution: edit /etc/default/grub and add:

```
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
```

Then update grub and reboot:

```shell
update-grub
sudo systemctl reboot -i
```

If grub is not installed, install it first:

```shell
sudo apt install grub-efi-arm64 grub-efi-arm64-bin grub2-common
```
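The grub change can also be scripted so it is idempotent. A sketch, demonstrated on a sample copy of the file (set GRUB=/etc/default/grub and run as root on the actual board):

```shell
# Append cgroup_enable=memory swapaccount=1 to GRUB_CMDLINE_LINUX if missing.
# A sample copy stands in for /etc/default/grub here.
GRUB=/tmp/grub.sample
cat > "$GRUB" <<'EOF'
GRUB_DEFAULT=0
GRUB_CMDLINE_LINUX=""
EOF

if ! grep -q 'cgroup_enable=memory' "$GRUB"; then
  sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory swapaccount=1"/' "$GRUB"
fi
cat "$GRUB"
# Then: sudo update-grub && sudo systemctl reboot -i
```

Running it twice is safe: the grep guard skips the sed once the parameters are present.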
Problem: etcdmain: etcd on unsupported platform without ETCD_UNSUPPORTED_ARCH=arm64 set

Solution: add this line to the etcd service file (it is mandatory on arm64):

```
Environment=ETCD_UNSUPPORTED_ARCH=arm64
```
Problem: UDP backend is not supported on this architecture

Solution: flannel's default backend type is UDP, which arm64 does not support, so the backend must be specified in etcd:

```shell
etcdctl set /coreos.com/network/config '{"Network":"10.1.0.0/16","Backend":{"Type":"vxlan"}}'
```