Deploying highly available Kubernetes with RKE

Workflow for building a Kubernetes cluster with RKE

  1. Tooling notes

    When copying content from a terminal into a configuration file, the formatting can get mangled:

    vim YAML paste formatting issue
    Run the following command before pasting:
    :set paste
    
  1. Deployment architecture

    Six servers in total.
    Set a hostname on each host, and make sure every host can reach the others.
    /etc/hosts on every server must be correct and must contain the entry 127.0.0.1 localhost.
    
    hostnamectl set-hostname lb-1
    hostnamectl set-hostname lb-2
    hostnamectl set-hostname k8s-master-1
    hostnamectl set-hostname k8s-master-2
    hostnamectl set-hostname k8s-master-3
    hostnamectl set-hostname k8s-worker-1
    
    cat >> /etc/hosts << EOF
    192.168.0.201 lb-1
    192.168.0.202 lb-2
    192.168.0.211 k8s-master-1
    192.168.0.212 k8s-master-2
    192.168.0.213 k8s-master-3
    192.168.0.221 k8s-worker-1
    EOF
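The host entries above are easy to get subtly wrong, and a mistyped IP only shows up later as flaky cluster traffic. A small sketch that checks a hosts file against the planned IP/hostname pairs; the HOSTS_FILE variable is an assumption here so the script can be pointed at a test copy:

```shell
#!/bin/sh
# Sketch: verify that the hosts file contains the planned IP/hostname pairs.
# HOSTS_FILE can point at a test copy; defaults to /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

check_entry() {
    ip="$1"; name="$2"
    # match "ip ... name" on one line, tolerating extra whitespace and aliases
    if awk -v ip="$ip" -v n="$name" \
        '$1 == ip { for (i = 2; i <= NF; i++) if ($i == n) found = 1 } END { exit !found }' \
        "$HOSTS_FILE"; then
        echo "OK   $ip $name"
    else
        echo "MISS $ip $name"
    fi
}

check_entry 192.168.0.201 lb-1
check_entry 192.168.0.202 lb-2
check_entry 192.168.0.211 k8s-master-1
check_entry 192.168.0.212 k8s-master-2
check_entry 192.168.0.213 k8s-master-3
check_entry 192.168.0.221 k8s-worker-1
```

Run it on each node after editing /etc/hosts; any MISS line means that entry is absent or mismatched.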
    

    Server roles:

    lb-1, lb-2
    Traffic entry point for the cluster; they provide load balancing.
    The LB servers use keepalived to hold the VIP 192.168.0.200; either nginx or haproxy can serve as the load balancer.
    k8s-master-1, k8s-master-2, k8s-master-3
    Highly available control-plane (master) nodes.
    k8s-worker-1 ... k8s-worker-n
    Worker nodes.
    
  1. OS and software versions

    OS: CentOS 7.9 / 8.x
    Docker: 20.10.12
    docker-compose
    RKE: 1.3.3       download: https://github.com/rancher/rke
    Rancher Kubernetes image: rancher/hyperkube:v1.20.13-rancher1
    keepalived
    nginx or haproxy
    

    Switch CentOS to a China-local mirror

    cd /etc/yum.repos.d/
    mv CentOS-Base.repo CentOS-Base.repo.bak
    wget -O CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

    # refresh the yum cache
    yum clean all
    yum makecache
    yum update
    

    Installing Docker

    yum install -y yum-utils device-mapper-persistent-data lvm2

    Add the Aliyun Docker repository:
    yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

    yum -y install docker-ce

    Create /etc/docker/daemon.json:

    sudo mkdir -p /etc/docker
    cat <<EOF > /etc/docker/daemon.json
    {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file",
        "log-opts": {
            "max-size": "100m"
        },
        "storage-driver": "overlay2",
        "registry-mirrors": ["https://cenq021s.mirror.aliyuncs.com"]
    }
    EOF
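Heredocs make it easy to introduce JSON syntax errors, and a single stray comma is enough to keep dockerd from starting. A quick validation sketch, assuming python3 is available (it ships with CentOS 8; install it separately on CentOS 7):

```shell
# Validate a Docker daemon.json file; returns 0 when the JSON parses.
# python3 -m json.tool exits non-zero on malformed JSON.
validate_daemon_json() {
    python3 -m json.tool "$1" > /dev/null 2>&1
}

# example: check the live config if it exists
if [ -f /etc/docker/daemon.json ]; then
    validate_daemon_json /etc/docker/daemon.json \
        && echo "daemon.json: valid JSON" \
        || echo "daemon.json: INVALID JSON, fix it before restarting docker" >&2
fi
```

Running this before the restart below avoids taking Docker down with a broken config.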
    
    systemctl daemon-reload && systemctl restart docker && systemctl enable docker
    
    For systems running Linux kernel 4.0 or later, or RHEL/CentOS with kernel 3.10.0-51 and later, overlay2 is the preferred storage driver.
    If Docker fails to start, remove the following from the config file:
    "storage-driver": "overlay2",
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },

    Possible problem:
    yum-config-manager: command not found
    Install yum-utils:
    yum -y install yum-utils

    For a vanilla (non-RKE) Kubernetes install, the Docker version must match the Kubernetes version; the supported combinations are listed in the Kubernetes GitHub repository.

    To install a specific Docker version:
    yum install docker-ce-19.03.* -y

    To downgrade an already-installed newer Docker to a specific version:
    yum downgrade --setopt=obsoletes=0 -y docker-ce-19.03.13 docker-ce-selinux-19.03.13
    

    Installing docker-compose

    sudo yum -y install epel-release

    yum install -y docker-compose
    
  1. System and kernel configuration

    Disable the firewall

    Since a network firewall sits in front of these servers, the OS firewall (firewalld) can be disabled:
    systemctl stop firewalld
    systemctl disable firewalld

    Common commands, kept for reference:
    # remove an open port
    firewall-cmd --zone=public --remove-port=80/tcp --permanent
    # apply changes immediately
    firewall-cmd --reload
    # check firewall status
    systemctl status firewalld
    # stop the firewall
    systemctl stop firewalld
    # start the firewall
    systemctl start firewalld
    

    If you prefer not to disable the firewall, open the following ports instead:

    Protocol  Port         Description
    TCP       32289        Node provisioning over SSH via the node driver
    TCP       2376         TLS port used by the node driver to talk to the Docker daemon
    TCP       2379         etcd client requests
    TCP       2380         etcd peer communication
    TCP       179          Calico BGP port
    UDP       8472         Canal/Flannel VXLAN overlay network
    UDP       4789         Canal/Flannel VXLAN overlay network
    TCP       9099         Canal/Flannel health checks
    TCP       9100         Default port for Monitoring to scrape metrics from Linux node-exporters
    TCP       8443         Rancher webhook
    TCP       9443         Rancher webhook
    TCP       9796         Default port used by cluster monitoring to pull node metrics
    TCP       6783         Weave port
    UDP       6783-6784    Weave UDP ports
    TCP       10250        Metrics server communication with all nodes
    TCP       10254        Ingress controller health checks
    TCP/UDP   30000-32767  NodePort range
    TCP       6443         apiserver
    TCP       80           Ingress controller
    TCP       443          Ingress controller
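Typing twenty firewall-cmd invocations by hand invites typos. A sketch that generates the commands from a protocol/port list (the list below is a subset of the table above; extend it as needed, review the output, then pipe it to sh and run firewall-cmd --reload):

```shell
# Generate firewall-cmd commands from a "proto port" list.
# Printed for review; pipe to sh to actually apply.
gen_rules() {
    while read -r proto port; do
        case "$proto" in
            tcp|udp)
                echo "firewall-cmd --permanent --add-port=${port}/${proto}" ;;
        esac
    done
}

gen_rules <<'PORTS'
tcp 2376
tcp 2379
tcp 2380
tcp 179
udp 8472
udp 4789
tcp 9099
tcp 6443
tcp 10250
tcp 10254
tcp 80
tcp 443
tcp 30000-32767
udp 30000-32767
PORTS
echo "firewall-cmd --reload"
```

firewalld accepts range syntax like 30000-32767/tcp directly, so the NodePort range is a single rule.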

    Disable SELinux

    Permanently (edit /etc/selinux/config):

    sed -i 's/SELINUX=enforcing/SELINUX=disabled/'  /etc/selinux/config

    Check the current mode:
    getenforce


    Disable the swap partition

    Comment out the swap entry in /etc/fstab, then turn swap off:
    vim /etc/fstab
    #/dev/mapper/cl-swap     swap
    swapoff -a
    

    Time synchronization on all servers

    CentOS 7 uses ntp; CentOS 8 uses chrony.

    yum install ntp -y

    Edit the config: replace time.xxx.com with your own time server; if you don't have one, Aliyun's ntp.aliyun.com works.
    vim /etc/ntp.conf
    server time.xxx.com iburst

    Run a one-off sync:
    ntpdate time.xxx.com

    Restart the service and enable it at boot:
    systemctl restart ntpd && systemctl enable ntpd

    ----------------------------------------------------------
    chrony mode (CentOS 8)

    vim /etc/chrony.conf

    Add a time server:
    server ntp.aliyun.com iburst

    Restart:
    systemctl restart chronyd.service

    Check sources and sync status:
    chronyc sources -v
    
    

    Set kernel parameters

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
    br_netfilter
    EOF

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    fs.may_detach_mounts = 1
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.inotify.max_user_watches=89100
    fs.file-max=52706963
    fs.nr_open=52706963
    net.netfilter.nf_conntrack_max=2310720

    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_keepalive_intvl = 15
    net.ipv4.tcp_max_tw_buckets = 36000
    net.ipv4.tcp_tw_reuse = 1
    net.ipv4.tcp_max_orphans = 327680
    net.ipv4.tcp_orphan_retries = 3
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_max_syn_backlog = 16384
    net.ipv4.tcp_timestamps = 0
    net.core.somaxconn = 16384
    EOF

    sudo sysctl --system
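Because the last occurrence of a sysctl key silently wins, a duplicated key in a copy-pasted conf file can mask the value you intended. A helper sketch that reports any key defined more than once in a sysctl-style file:

```shell
# Report keys that appear more than once in a sysctl conf file.
# Usage: find_dup_keys /etc/sysctl.d/k8s.conf
find_dup_keys() {
    awk -F= '
        /^[[:space:]]*(#|$)/ { next }              # skip comments and blank lines
        { gsub(/[[:space:]]/, "", $1); seen[$1]++ }
        END { for (k in seen) if (seen[k] > 1) print k }
    ' "$1"
}
```

find_dup_keys /etc/sysctl.d/k8s.conf prints nothing when the file is clean.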
    

    Install supporting packages

yum install ipvsadm ipset sysstat conntrack libseccomp -y

# Add the following. On CentOS 7, modifying ipvs.conf this way can keep the modules from loading; on CentOS 8 it works normally.
cat <<EOF > /etc/modules-load.d/ipvs.conf 
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl enable --now systemd-modules-load.service

If loading fails because the kernel is too new, replace nf_conntrack_ipv4 with nf_conntrack:

cat <<EOF > /etc/modules-load.d/ipvs.conf 
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
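After systemd-modules-load runs, it's worth cross-checking which of the listed modules actually loaded. A sketch that reads a modules-load.d conf file and compares it against /proc/modules (note a module can also be built into the kernel, which this loadable-module check won't show):

```shell
# Print load status for each module named in a modules-load.d conf file.
# Usage: module_status /etc/modules-load.d/ipvs.conf
module_status() {
    while read -r mod; do
        # skip blank lines and comments
        case "$mod" in ''|'#'*) continue ;; esac
        if grep -q "^${mod} " /proc/modules 2>/dev/null; then
            echo "loaded   $mod"
        else
            echo "missing  $mod"
        fi
    done < "$1"
}
```

Any "missing" line is a candidate for checking with modprobe by hand.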
  1. Deploying with rke

    Pick one server as the deployment node and download rke:

    Download the latest release from the rke GitHub page; this guide uses 1.3.3.
    https://github.com/rancher/rke

    wget https://github.com/rancher/rke/releases/download/v1.3.3/rke_linux-amd64

    mv rke_linux-amd64 /usr/local/bin/rke && chmod +x /usr/local/bin/rke
    

    On every server, create a dedicated user for deploying Kubernetes. The user must be allowed to run docker commands so that rke can deploy through it.

    useradd ops
    usermod -a -G docker ops


    Set up passwordless SSH from the deployment server to every node:

    su - ops
    ssh-keygen -t rsa -b 4096

    ssh-copy-id  ops@192.168.0.201
    ssh-copy-id  ops@192.168.0.202
    ssh-copy-id  ops@192.168.0.211
    ssh-copy-id  ops@192.168.0.212
    ssh-copy-id  ops@192.168.0.213
    ssh-copy-id  ops@192.168.0.221
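The ssh-copy-id calls above can be driven by one loop over the node list; this sketch adds a DRY_RUN switch (on by default here) that only prints the commands, so it can be reviewed before running for real:

```shell
# Copy the ops user's key to every node.
# DRY_RUN=1 (the default here) prints the commands; set DRY_RUN= to run them.
DRY_RUN="${DRY_RUN:-1}"
NODES="192.168.0.201 192.168.0.202 192.168.0.211 192.168.0.212 192.168.0.213 192.168.0.221"
for ip in $NODES; do
    # ${DRY_RUN:+echo} expands to "echo" only when DRY_RUN is non-empty
    ${DRY_RUN:+echo} ssh-copy-id "ops@${ip}"
done
```

Keeping the node list in one variable also makes it reusable for later scp or ssh loops.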
    

    Generate the configuration file with rke

    rke config
    This walks through a series of interactive prompts; answer each one and a cluster.yml file is generated at the end.
    Things to watch:
    1. Get the id_rsa path right.
    2. Get every node IP right.
    3. Get the SSH user and port right, and make sure the deployment user can log in to every node without a password.
    4. Get the Kubernetes version right: rancher/hyperkube:v1.20.13-rancher1 (the versions supported by each rke release are listed on GitHub).
    5. Most other options can be left at their defaults.


    If you want scheduled etcd backups, adjust the config file:
    services:
        etcd:
          snapshot: true
          creation: 6h
          retention: 24h

    Run the deployment:
    rke up --config ./cluster.yml

    A successful deployment generates these files:
    kube_config_cluster.yml   cluster.rkestate

    PS:
    ******
    kube_config_cluster.yml, cluster.rkestate and cluster.yml are critical; keep all three safe.
    ******


    The deployment may finish with the following error; running rke's update command resolves it:
    FATA[0668] Failed to get job complete status for job rke-network-plugin-deploy-job in namespace kube-system

    rke up --update-only --config ./cluster.yml
    
    
    
    
    
    Adding or removing nodes:
    1. Edit cluster.yml.
    2. Run:
    rke up --update-only --config ./cluster.yml
    
    
    
    

    Installing kubectl

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    yum install -y kubectl-1.20.0

    mkdir -p $HOME/.kube && cp kube_config_cluster.yml $HOME/.kube/config

    kubectl reads $HOME/.kube/config by default; a different file can be passed with --kubeconfig or the KUBECONFIG environment variable.

    -- kube_config_cluster.yml is the file generated on the deployment server.

    You can now inspect the cluster from the command line:
    kubectl get nodes
    kubectl get pods -A -o wide

    Force-delete a pod:
    kubectl delete pods httpd-app-6df58645c6-cxgcm --grace-period=0 --force
    
  1. Installing Rancher

    Generate the certificates Rancher needs; you can use your own certificate files, or generate a set with the script below:

    #!/bin/bash -e
    
    help ()
    {
        echo  ' ================================================================ '
        echo  ' --ssl-domain: primary domain for the SSL cert; defaults to www.rancher.local if unset; can be ignored when the server is accessed by IP;'
        echo  ' --ssl-trusted-ip: SSL certs normally only trust domain-based requests; to access the server by IP, add extension IPs here, comma-separated;'
        echo  ' --ssl-trusted-domain: to allow access via additional domains, add them here (SSL_TRUSTED_DOMAIN), comma-separated;'
        echo  ' --ssl-size: SSL key size in bits, default 2048;'
        echo  ' --ssl-date: SSL cert validity, default 10 years;'
        echo  ' --ca-date: CA validity, default 10 years;'
        echo  ' --ssl-cn: country code (2-letter abbreviation), default CN;'
        echo  ' usage example:'
        echo  ' ./create_self-signed-cert.sh --ssl-domain=www.test.com --ssl-trusted-domain=www.test2.com \ '
        echo  ' --ssl-trusted-ip=1.1.1.1,2.2.2.2,3.3.3.3 --ssl-size=2048 --ssl-date=3650'
        echo  ' ================================================================'
    }
    
    case "$1" in
        -h|--help) help; exit;;
    esac
    
    if [[ $1 == '' ]];then
        help;
        exit;
    fi
    
    CMDOPTS="$*"
    for OPTS in $CMDOPTS;
    do
        key=$(echo ${OPTS} | awk -F"=" '{print $1}' )
        value=$(echo ${OPTS} | awk -F"=" '{print $2}' )
        case "$key" in
            --ssl-domain) SSL_DOMAIN=$value ;;
            --ssl-trusted-ip) SSL_TRUSTED_IP=$value ;;
            --ssl-trusted-domain) SSL_TRUSTED_DOMAIN=$value ;;
            --ssl-size) SSL_SIZE=$value ;;
            --ssl-date) SSL_DATE=$value ;;
            --ca-date) CA_DATE=$value ;;
            --ssl-cn) CN=$value ;;
        esac
    done
    
    # CA settings
    
    CA_DATE=${CA_DATE:-3650}
    CA_KEY=${CA_KEY:-cakey.pem}
    CA_CERT=${CA_CERT:-cacerts.pem}
    CA_DOMAIN=cattle-ca
    
    # SSL settings
    
    SSL_CONFIG=${SSL_CONFIG:-$PWD/openssl.cnf}
    SSL_DOMAIN=${SSL_DOMAIN:-'www.rancher.local'}
    SSL_DATE=${SSL_DATE:-3650}
    SSL_SIZE=${SSL_SIZE:-2048}
    
    ## country code (2-letter abbreviation); default CN
    
    CN=${CN:-CN}
    
    SSL_KEY=$SSL_DOMAIN.key
    SSL_CSR=$SSL_DOMAIN.csr
    SSL_CERT=$SSL_DOMAIN.crt
    
    echo -e "\033[32m ---------------------------- \033[0m"
    echo -e "\033[32m    | Generating SSL certs |   \033[0m"
    echo -e "\033[32m ---------------------------- \033[0m"

    if [[ -e ./${CA_KEY} ]]; then
        echo -e "\033[32m ====> 1. Existing CA key found; backing up ${CA_KEY} as ${CA_KEY}-bak, then recreating \033[0m"
        mv ${CA_KEY} "${CA_KEY}"-bak
        openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
    else
        echo -e "\033[32m ====> 1. Generating new CA key ${CA_KEY} \033[0m"
        openssl genrsa -out ${CA_KEY} ${SSL_SIZE}
    fi

    if [[ -e ./${CA_CERT} ]]; then
        echo -e "\033[32m ====> 2. Existing CA cert found; backing up ${CA_CERT} as ${CA_CERT}-bak, then recreating \033[0m"
        mv ${CA_CERT} "${CA_CERT}"-bak
        openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
    else
        echo -e "\033[32m ====> 2. Generating new CA cert ${CA_CERT} \033[0m"
        openssl req -x509 -sha256 -new -nodes -key ${CA_KEY} -days ${CA_DATE} -out ${CA_CERT} -subj "/C=${CN}/CN=${CA_DOMAIN}"
    fi

    echo -e "\033[32m ====> 3. Generating openssl config ${SSL_CONFIG} \033[0m"
    cat > ${SSL_CONFIG} <<EOM
    [req]
    req_extensions = v3_req
    distinguished_name = req_distinguished_name
    [req_distinguished_name]
    [ v3_req ]
    basicConstraints = CA:FALSE
    keyUsage = nonRepudiation, digitalSignature, keyEncipherment
    extendedKeyUsage = clientAuth, serverAuth
    EOM
    
    if [[ -n ${SSL_TRUSTED_IP} || -n ${SSL_TRUSTED_DOMAIN} ]]; then
        cat >> ${SSL_CONFIG} <<EOM
    subjectAltName = @alt_names
    [alt_names]
    EOM
        IFS=","
        dns=(${SSL_TRUSTED_DOMAIN})
        dns+=(${SSL_DOMAIN})
        for i in "${!dns[@]}"; do
          echo DNS.$((i+1)) = ${dns[$i]} >> ${SSL_CONFIG}
        done
    
        if [[ -n ${SSL_TRUSTED_IP} ]]; then
            ip=(${SSL_TRUSTED_IP})
            for i in "${!ip[@]}"; do
              echo IP.$((i+1)) = ${ip[$i]} >> ${SSL_CONFIG}
            done
        fi
    
    fi
    
    echo -e "\033[32m ====> 4. Generating server SSL key ${SSL_KEY} \033[0m"
    openssl genrsa -out ${SSL_KEY} ${SSL_SIZE}
    
    echo -e "\033[32m ====> 5. Generating server SSL CSR ${SSL_CSR} \033[0m"
    openssl req -sha256 -new -key ${SSL_KEY} -out ${SSL_CSR} -subj "/C=${CN}/CN=${SSL_DOMAIN}" -config ${SSL_CONFIG}
    
    echo -e "\033[32m ====> 6. Generating server SSL cert ${SSL_CERT} \033[0m"
    openssl x509 -sha256 -req -in ${SSL_CSR} -CA ${CA_CERT} \
        -CAkey ${CA_KEY} -CAcreateserial -out ${SSL_CERT} \
        -days ${SSL_DATE} -extensions v3_req \
        -extfile ${SSL_CONFIG}
    
    echo -e "\033[32m ====> 7. Certificates generated \033[0m"
    echo
    echo -e "\033[32m ====> 8. Printing results as YAML \033[0m"
    echo "----------------------------------------------------------"
    echo "ca_key: |"
    cat $CA_KEY | sed 's/^/  /'
    echo
    echo "ca_cert: |"
    cat $CA_CERT | sed 's/^/  /'
    echo
    echo "ssl_key: |"
    cat $SSL_KEY | sed 's/^/  /'
    echo
    echo "ssl_csr: |"
    cat $SSL_CSR | sed 's/^/  /'
    echo
    echo "ssl_cert: |"
    cat $SSL_CERT | sed 's/^/  /'
    echo
    
    echo -e "\033[32m ====> 9. Appending CA cert to the cert file \033[0m"
    cat ${CA_CERT} >> ${SSL_CERT}
    echo "ssl_cert: |"
    cat $SSL_CERT | sed 's/^/  /'
    echo
    
    echo -e "\033[32m ====> 10. Renaming the server certificates \033[0m"
    echo "cp ${SSL_DOMAIN}.key tls.key"
    cp ${SSL_DOMAIN}.key tls.key
    echo "cp ${SSL_DOMAIN}.crt tls.crt"
    cp ${SSL_DOMAIN}.crt tls.crt
    

    Generate the certificates:

    Save the script above as key.sh and give it execute permission with chmod +x key.sh.

    mkdir ./rancher-ssl
    cd ./rancher-ssl
    vim ./key.sh  # paste the script above
    chmod +x key.sh


    ./key.sh --ssl-domain=rancher.xxx.com --ssl-trusted-domain=rancher2.xxx.com --ssl-trusted-ip=192.168.0.211,192.168.0.212,192.168.0.213,192.168.0.221 --ssl-size=2048 --ssl-date=36500

    This produces a set of certificate files; keep them safe.

    Notes:
    --ssl-domain        trusted domain name
    --ssl-trusted-ip    trusted node IPs
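It's worth confirming that the generated certificate actually carries the expected subjectAltName entries, since clients validate against the SANs rather than the CN. A check sketch (show_sans is a helper name introduced here; tls.crt is the file the script produces in its last step):

```shell
# Print the subjectAltName line embedded in a certificate.
# Usage: show_sans tls.crt
show_sans() {
    openssl x509 -in "$1" -noout -text \
        | grep -A1 'Subject Alternative Name' | tail -n1
}
```

show_sans tls.crt should list every domain and IP that was passed to key.sh.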
    

    Configure the certificates in the cluster:

    # create the namespace
    kubectl create namespace cattle-system

    # create the certificate secrets
    kubectl -n cattle-system create secret generic tls-ca --from-file=cacerts.pem
    cp cacerts.pem ca-additional.pem
    kubectl -n cattle-system create secret generic tls-ca-additional --from-file=ca-additional.pem
    kubectl -n cattle-system create secret tls tls-rancher-ingress --cert=tls.crt --key=tls.key

    If a secret already exists, delete it first:
    kubectl -n cattle-system delete secret tls-ca
    kubectl -n cattle-system delete secret tls-ca-additional
    kubectl -n cattle-system delete secret tls-rancher-ingress
    

    Install helm (used here to render the Rancher installation YAML)

    Download the helm binary from GitHub:
    https://github.com/helm/helm

    tar -zxvf helm-v3.3.0-linux-amd64.tar.gz
    cd linux-amd64
    mv helm /usr/local/bin/helm && chmod +x /usr/local/bin/helm

    # add the Rancher helm repository
    helm repo add rancher-stable https://releases.rancher.com/server-charts/stable


    # list all available Rancher chart versions
    helm search repo rancher -l

    helm fetch rancher-stable/rancher --version 2.5.11
    This leaves rancher-2.5.11.tgz in the current directory.

    Render the templates:
    helm template rancher ./rancher-2.5.11.tgz \
         --namespace cattle-system --output-dir . \
         --set privateCA=true \
         --set additionalTrustedCAs=true \
         --set ingress.tls.source=secret \
         --set hostname=rancher.toowe.com \
         --set useBundledSystemChart=true
    
    Rendering produces a rancher directory; the ingress manifest in it needs to be updated.
    ingress.yaml after the change:
    
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: rancher
      labels:
        app: rancher
        chart: rancher-2.5.11
        heritage: Helm
        release: rancher
      annotations:
        nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    spec:
      rules:
      - host: rancher.toowe.com  # hostname to access rancher server
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rancher
                port:
                  number: 80
    #      - backend:
    #          serviceName: rancher
    #          servicePort: 80
      tls:
      - hosts:
        - rancher.toowe.com
        secretName: tls-rancher-ingress
    
    
    Install Rancher with kubectl:
    kubectl -n cattle-system apply -R -f ./rancher/templates/

    If you see this warning about the rendered ingress:
    Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
    delete the old ingress and apply the edited networking.k8s.io/v1 version shown above:
    kubectl -n cattle-system delete -R -f ./rancher/templates/ingress.yaml


    Check the rollout:
    kubectl -n cattle-system get all

    Installation complete.
    
  1. Installing the LB (load balancer)

    Install keepalived

    yum install keepalived -y

    Edit the keepalived configuration; this is needed on every LB node, and pay attention to the commented (#) lines:
    global_defs {
       notification_email {
          user@example.com
      }

      notification_email_from mail@example.org
      smtp_server 192.168.x.x
      smtp_connect_timeout 30
      router_id LVS_MASTER  # must be unique on each node
    }

    # check the haproxy process every 2 seconds (check nginx instead if you use nginx)
    vrrp_script chk_haproxy {
        script "/bin/bash -c 'if [[ $(netstat -nlp | grep 443) ]]; then exit 0; else exit 1; fi'"  # the port haproxy listens on
        interval 2
        weight 2
    }

    vrrp_instance VI_1 {
        state MASTER  # this node's role is MASTER
        interface enp0s3
        virtual_router_id 51
        priority 101  # the MASTER priority must be higher than BACKUP's
        advert_int 1
        unicast_src_ip 192.168.0.201  # this node's own address
        unicast_peer {
          192.168.0.202            # the other LB node(s)
        }

        authentication {
            auth_type PASS  # authentication between master and backup
            auth_pass 1111
        }

        track_script {
            chk_haproxy  # track the haproxy check defined above
        }

        # VIP
        virtual_ipaddress {
            192.168.0.200  # virtual IP
        }
    }


    systemctl daemon-reload
    systemctl enable keepalived
    systemctl start keepalived
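The vrrp_script above relies on netstat, which minimal installs may lack. An alternative health-check sketch using bash's built-in /dev/tcp; the script path and default port are assumptions, so adjust them to your layout and point vrrp_script at the file instead of the inline command:

```shell
#!/bin/sh
# Health check suitable for keepalived's vrrp_script:
# succeeds when something is listening on 127.0.0.1:<port>.
# Save e.g. as /etc/keepalived/chk_haproxy.sh (path is an assumption).
port_open() {
    # bash opens /dev/tcp/<host>/<port> as a TCP connection; a refused
    # connection makes the child bash exit non-zero.
    bash -c 'exec 3<> "/dev/tcp/127.0.0.1/$0"' "$1" 2>/dev/null
}

if port_open "${1:-443}"; then
    echo "port ${1:-443} open"
else
    echo "port ${1:-443} closed"
fi
```

keepalived only looks at the exit status, so the echo lines are just for manual runs.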
    

    Install haproxy (nginx can be used instead)

    yum install haproxy -y

    Edit the haproxy configuration:
    #---------------------------------------------------------------------
    # Example configuration for a possible web application.  See the
    # full configuration options online.
    #
    #  https://www.haproxy.org/download/1.8/doc/configuration.txt
    #
    #---------------------------------------------------------------------
    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    
    global
        # to have these messages end up in /var/log/haproxy.log you will
        # need to:
        #
        # 1) configure syslog to accept network log events.  This is done
        #    by adding the '-r' option to the SYSLOGD_OPTIONS in
        #    /etc/sysconfig/syslog
        #
        # 2) configure local2 events to go to the /var/log/haproxy.log
        #  file. A line like the following can be added to
        #  /etc/sysconfig/syslog
        #
        #    local2.*                      /var/log/haproxy.log
        #
        log        127.0.0.1 local2
        chroot      /var/lib/haproxy
        pidfile    /var/run/haproxy.pid
        maxconn    40000
        user        haproxy
        group      haproxy
        daemon
    
        # turn on stats unix socket
        stats socket /var/lib/haproxy/stats
    
     # utilize system-wide crypto-policies
        ssl-default-bind-ciphers PROFILE=SYSTEM
        ssl-default-server-ciphers PROFILE=SYSTEM
    
    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
        mode                    http
        log                    global
        option                  httplog
        option                  dontlognull
        option http-server-close
        option forwardfor      except 127.0.0.0/8
        option                  redispatch
        retries                3
        timeout http-request    10s
        timeout queue          1m
        timeout connect        10s
        timeout client          1m
        timeout server          1m
        timeout http-keep-alive 10s
        timeout check          10s
        maxconn                3000
    
    #---------------------------------------------------------------------
    # rancher frontend which proxies to the backends
    #---------------------------------------------------------------------
    frontend rancher-frontend
        mode                tcp
        bind                *:443
        option              tcplog
        default_backend     rancher-backend
    #---------------------------------------------------------------------
    # round robin balancing between the various backends
    #---------------------------------------------------------------------

    backend rancher-backend
        mode        tcp
        balance     roundrobin
        server  node-0 192.168.0.211:443 check
        server  node-1 192.168.0.212:443 check
        server  node-2 192.168.0.213:443 check
        
    listen admin_stats
        bind 0.0.0.0:19198
        mode http
        log 127.0.0.1 local3 err
        # auto-refresh interval for the HAProxy stats page
        stats refresh 30s
        # stats page URL path; view it at http://IP:19198/haproxy-status
        stats uri /haproxy-status
        # prompt shown in the stats page login dialog
        stats realm welcome login\ Haproxy
        # stats page username and password
        stats auth toowe:toowe
        # hide the HAProxy version
        stats hide-version
        # when enabled, backend servers can be started/stopped from the stats page
        stats admin if TRUE
    
  1. Uninstalling Kubernetes (optional)

    cat > clear.sh << 'EOF'
    df -h | grep kubelet | awk -F % '{print $2}' | xargs umount
    rm /var/lib/kubelet/* -rf
    rm /etc/kubernetes/* -rf
    rm /var/lib/rancher/* -rf
    rm /var/lib/etcd/* -rf
    rm /var/lib/cni/* -rf

    rm -rf /var/run/calico

    iptables -F && iptables -t nat -F

    ip link del flannel.1

    docker ps -aq | xargs docker rm -f
    docker volume ls -q | xargs docker volume rm

    rm -rf /var/etcd/
    rm -rf /run/kubernetes/
    rm -rf /etc/cni
    rm -rf /opt/cni

    systemctl restart docker
    EOF
    
    
    # remove containers
    sudo docker stop $(sudo docker ps -aq)
    sudo docker rm -f $(sudo docker ps -aq)

    # remove volumes
    sudo docker volume rm $(sudo docker volume ls -q)

    # unmount kubelet tmpfs mounts
    for mount in $(sudo mount | grep tmpfs | grep '/var/lib/kubelet' | awk '{print $3}') ; do sudo umount $mount; done

    sudo umount /var/run/docker/netns/default
    
    # remove related files
    sudo rm -rf /etc/cni
    sudo rm -rf /etc/kubernetes
    sudo rm -rf /opt/cni
    sudo rm -rf /opt/rke
    sudo rm -rf /run/secrets/kubernetes.io
    sudo rm -rf /run/calico
    sudo rm -rf /var/lib/etcd
    sudo rm -rf /var/lib/cni
    sudo rm -rf /var/lib/kubelet
    sudo rm -rf /var/log/containers
    sudo rm -rf /var/log/pods
    sudo rm -rf /var/lib/rancher
    
    sudo rm -rf /var/run/calico
    sudo rm -rf /var/run/docker
    sudo rm -rf /var/lib/docker
    sudo rm -rf /app/docker
    
  1. Building a standalone etcd cluster (optional)

    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    # Note: if the links above are unreachable, use locally provided copies of the binaries.

    Make cfssl executable and install it:
    chmod +x cfssl*
    for x in cfssl*; do mv $x ${x%_linux-amd64}; done
    mv cfssl* /usr/bin


    Create a directory for the certificates:
    mkdir -p ~/etcd_tls
    cd ~/etcd_tls

    etcd certificate JSON:
    cat > ca-config.json << EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat > ca-csr.json << EOF
    {
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    EOF
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    
    cat > server-csr.json << EOF
    {
        "CN": "etcd",
        "hosts": [
        "192.168.0.179",
        "192.168.0.48",
        "192.168.0.163"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }
    EOF
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    
    
    etcd installation files:

    mkdir -p /opt/etcd/{bin,cfg,ssl}
    tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
    mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
    
    
    etcd configuration file:
    cat > /opt/etcd/cfg/etcd.conf << EOF
    #[Member]
    ETCD_NAME="etcd-1"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    # 2380 is the peer port used for cluster communication
    ETCD_LISTEN_PEER_URLS="https://192.168.0.179:2380"
    # 2379 is the client (data) port; all client reads and writes go through it
    ETCD_LISTEN_CLIENT_URLS="https://192.168.0.179:2379"

    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.179:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.179:2379"
    ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.0.179:2380,etcd-2=https://192.168.0.48:2380,etcd-3=https://192.168.0.163:2380"
    # a simple shared token; prevents accidental cross-cluster joins when several clusters share a network
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    EOF
    
    etcd systemd unit, with the certificate paths set:
    cat > /usr/lib/systemd/system/etcd.service << EOF
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/etcd/cfg/etcd.conf
    ExecStart=/opt/etcd/bin/etcd \
    --cert-file=/opt/etcd/ssl/server.pem \
    --key-file=/opt/etcd/ssl/server-key.pem \
    --trusted-ca-file=/opt/etcd/ssl/ca.pem \
    --peer-cert-file=/opt/etcd/ssl/server.pem \
    --peer-key-file=/opt/etcd/ssl/server-key.pem \
    --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
    --logger=zap
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    
    Copy the certificates:
    cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/
    
    Start etcd (the first node will wait until its peers join):
    systemctl daemon-reload
    systemctl start etcd
    systemctl enable etcd

    Copy the installation to the other nodes:
    scp -r /opt/etcd/ root@192.168.0.48:/opt/
    scp /usr/lib/systemd/system/etcd.service root@192.168.0.48:/usr/lib/systemd/system/

    scp -r /opt/etcd/ root@192.168.0.163:/opt/
    scp /usr/lib/systemd/system/etcd.service root@192.168.0.163:/usr/lib/systemd/system/
    
    On every node, edit etcd.conf:
    ETCD_NAME must be unique per config file
    ETCD_LISTEN_PEER_URLS
    ETCD_LISTEN_CLIENT_URLS
    ETCD_INITIAL_ADVERTISE_PEER_URLS
    ETCD_ADVERTISE_CLIENT_URLS
    all set to the local node's IP
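Editing five fields on every node by hand is error-prone; the per-node etcd.conf files can instead be generated from one loop. A sketch using the names and IPs from this section (it writes the files into an output directory for review before copying each one to /opt/etcd/cfg/etcd.conf on its node):

```shell
# Generate per-node etcd.conf files from the node list.
# OUT_DIR defaults to ./etcd-conf; review, then copy into place per node.
OUT_DIR="${OUT_DIR:-./etcd-conf}"
mkdir -p "$OUT_DIR"

CLUSTER="etcd-1=https://192.168.0.179:2380,etcd-2=https://192.168.0.48:2380,etcd-3=https://192.168.0.163:2380"

gen_conf() {
    name="$1"; ip="$2"
    cat > "$OUT_DIR/${name}.conf" <<EOF
#[Member]
ETCD_NAME="${name}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ip}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ip}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ip}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ip}:2379"
ETCD_INITIAL_CLUSTER="${CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
}

gen_conf etcd-1 192.168.0.179
gen_conf etcd-2 192.168.0.48
gen_conf etcd-3 192.168.0.163
```

Because ETCD_INITIAL_CLUSTER is identical everywhere, keeping it in one variable guarantees the three files agree.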
    
    
    etcd cluster health check:
    ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.179:2379,https://192.168.0.48:2379,https://192.168.0.163:2379" endpoint health --write-out=table
    
  1. To be continued