Deploying KubeSphere and a Kubernetes Cluster with KubeKey

Deploy KubeSphere 3.3.2 and Kubernetes 1.23.10 with KubeKey.

1. Environment

No.  CPU  Memory (GB)  OS          IP            Hostname       Role
1    4    16           CentOS 7.9  192.168.3.81  ks-01.tiga.cc  master
2    4    16           CentOS 7.9  192.168.3.82  ks-02.tiga.cc  worker
3    4    16           CentOS 7.9  192.168.3.83  ks-03.tiga.cc  worker
4    4    16           CentOS 7.9  192.168.3.84  ks-04.tiga.cc  worker

ks-01 serves as the control plane; the other three machines are worker nodes.
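
KubeKey matches nodes by hostname, so each machine's hostname should agree with the table above; a minimal sketch for setting them (run the matching command on the corresponding node):

# Run on the node whose IP is noted in the comment
hostnamectl set-hostname ks-01.tiga.cc   # on 192.168.3.81
hostnamectl set-hostname ks-02.tiga.cc   # on 192.168.3.82
hostnamectl set-hostname ks-03.tiga.cc   # on 192.168.3.83
hostnamectl set-hostname ks-04.tiga.cc   # on 192.168.3.84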

2. Preparation

2.1 Install base packages and configure the system

# 1. Disable the firewalld service that ships with CentOS 7
systemctl disable firewalld
systemctl stop firewalld

# 2. Install iptables
yum install -y iptables-services
systemctl enable iptables
systemctl start iptables

iptables -F
service iptables save

# 3. Install base packages
yum install -y chrony zlib zlib-devel pcre pcre-devel epel-release bash-completion wget man telnet lrzsz unzip zip

# 4. Raise the open-file limit
echo '* - nofile 65535' >> /etc/security/limits.conf

# 5. Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

# 6. Disable swap (the swap device name varies by install, so comment out
#    whatever swap entry /etc/fstab contains, then turn swap off immediately)
sed -i '/^[^#].*swap/s/^/#/' /etc/fstab
swapoff -a

# 7. Enable IP forwarding
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p

# 8. Add hosts entries for all nodes
echo '192.168.3.81 ks-01.tiga.cc ks-01' >> /etc/hosts
echo '192.168.3.82 ks-02.tiga.cc ks-02' >> /etc/hosts
echo '192.168.3.83 ks-03.tiga.cc ks-03' >> /etc/hosts
echo '192.168.3.84 ks-04.tiga.cc ks-04' >> /etc/hosts
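
A quick sanity check that the settings above took effect (note that getenforce only reports Disabled after a reboot; setenforce 0 turns SELinux off for the current session):

getenforce                  # Disabled (after reboot)
free -m                     # the Swap line should show 0 once swap is off
sysctl net.ipv4.ip_forward  # net.ipv4.ip_forward = 1
ulimit -n                   # 65535 in a new login session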

2.2 Upgrade the kernel

  1. Import the ELRepo public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
  2. Install the ELRepo yum repository
yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
  3. List the available kernels
yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
  4. Install the LTS kernel
# --enablerepo enables only the named repository for this command; elrepo-kernel is used here instead of the default elrepo.
yum --enablerepo=elrepo-kernel install -y kernel-lt kernel-lt-devel kernel-lt-headers
  5. List every kernel entry known to GRUB
awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

Output

0 : CentOS Linux (5.4.249-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-dc46cf8f5b5d4bc099d5a66232a815c8) 7 (Core)
  6. Set the new kernel as the default GRUB entry
grub2-set-default 0
  7. Reboot
reboot
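
After the reboot, confirm the machine is running the new kernel (the exact version depends on the current kernel-lt release):

uname -r
# Expected: 5.4.249-1.el7.elrepo.x86_64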

3. Install KubeKey

# Install KubeKey's dependencies
yum install -y socat conntrack ebtables ipset ipvsadm

# Download KubeKey (KKZONE=cn uses the China download mirror)
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
chmod +x kk
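
A quick check that the binary downloaded and runs:

./kk version
# Should report kk v3.0.7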

4. Create the cluster with KubeKey

4.1 Generate a sample configuration file

./kk create config --with-kubesphere v3.3.2

This creates a configuration file named config-sample.yaml in the current directory.
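
kk also accepts explicit version and output-file flags; an equivalent invocation, pinning the Kubernetes version this guide targets (verify the flag names against your kk build):

./kk create config --with-kubesphere v3.3.2 --with-kubernetes v1.23.10 -f config-sample.yaml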

4.2 Edit the configuration file

Update the hosts and roleGroups sections to match the environment described above:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-01.tiga.cc, address: 192.168.3.81, internalAddress: 192.168.3.81, user: root, password: "w123456"}
  - {name: ks-02.tiga.cc, address: 192.168.3.82, internalAddress: 192.168.3.82, user: root, password: "w123456"}
  - {name: ks-03.tiga.cc, address: 192.168.3.83, internalAddress: 192.168.3.83, user: root, password: "w123456"}
  - {name: ks-04.tiga.cc, address: 192.168.3.84, internalAddress: 192.168.3.84, user: root, password: "w123456"}
  roleGroups:
    etcd:
    - ks-01.tiga.cc
    control-plane: 
    - ks-01.tiga.cc
    worker:
    - ks-02.tiga.cc
    - ks-03.tiga.cc
    - ks-04.tiga.cc
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers 
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.2
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    # resources: {}
    jenkinsMemoryLim: 8Gi
    jenkinsMemoryReq: 4Gi
    jenkinsVolumeSize: 8Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600
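
The hosts entries above embed plaintext passwords. KubeKey also supports key-based SSH per host; a sketch of an equivalent entry, assuming root's public key has already been distributed to every node:

  - {name: ks-01.tiga.cc, address: 192.168.3.81, internalAddress: 192.168.3.81, user: root, privateKeyPath: "~/.ssh/id_rsa"}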

4.3 Create the cluster from the configuration file

./kk create cluster -f config-sample.yaml

The entire installation may take 10 to 20 minutes, depending on your machines and network environment.

4.4 Verify the installation

When the installation finishes, you should see output like the following:

#####################################################
###              Welcome to KubeSphere!           ###
#####################################################

Console: http://192.168.3.81:30880
Account: admin
Password: P@88w0rd
NOTES:
  1. After you log into the console, please check the
     monitoring status of service components in
     "Cluster Management". If any service is not
     ready, please wait patiently until all components 
     are up and running.
  2. Please change the default password after login.

#####################################################
https://kubesphere.io             2023-07-08 17:31:23
#####################################################
17:31:24 CST success: [ks-01.tiga.cc]
17:31:24 CST Pipeline[CreateClusterPipeline] execute successfully
Installation is complete.

Please check the result using the command:

    kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

4.4.1 Access KubeSphere in a browser

http://192.168.3.81:30880

Account: admin

Password: P@88w0rd

4.4.2 Check node status with kubectl

kubectl get nodes -o wide

Output

NAME            STATUS   ROLES                  AGE     VERSION    INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
ks-01.tiga.cc   Ready    control-plane,master   6m7s    v1.23.10   192.168.3.81   <none>        CentOS Linux 7 (Core)   5.4.249-1.el7.elrepo.x86_64   docker://20.10.8
ks-02.tiga.cc   Ready    worker                 5m45s   v1.23.10   192.168.3.82   <none>        CentOS Linux 7 (Core)   5.4.249-1.el7.elrepo.x86_64   docker://20.10.8
ks-03.tiga.cc   Ready    worker                 5m44s   v1.23.10   192.168.3.83   <none>        CentOS Linux 7 (Core)   5.4.249-1.el7.elrepo.x86_64   docker://20.10.8
ks-04.tiga.cc   Ready    worker                 5m44s   v1.23.10   192.168.3.84   <none>        CentOS Linux 7 (Core)   5.4.249-1.el7.elrepo.x86_64   docker://20.10.8
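
Beyond node status, it is worth confirming that every KubeSphere workload came up; pods stuck in Pending or CrashLoopBackOff usually point at the storage or monitoring components:

kubectl get pods -A
# All pods should eventually reach Running or Completed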