Building a Kubernetes Cluster from Scratch (Part 5: Setting Up K8S Ingress)

I. Foreword

The previous post, Building a Kubernetes Cluster from Scratch (Part 4: Setting Up the K8S Dashboard), covered how to set up the Dashboard. This post covers how to set up an Ingress to access the Services in a K8S cluster.

II. About Ingress

What exactly is Ingress? There is plenty of material online (the official docs are recommended), so feel free to dig in. In short, it is a load-balancing component, mainly used to solve the problem that Node IPs can drift when Services are exposed through NodePort. On top of that, exposing a large number of host ports via NodePort quickly becomes a management mess.

A better solution is to let the outside world access Services by domain name, without caring about Node IPs and ports. So why not just use Nginx directly? Because in a K8S cluster, every new service would then require adding another Nginx config entry by hand. That is repetitive manual work, and repetitive manual work is exactly what we should eliminate with technology.

Ingress solves the problems above. It consists of two components, the Ingress resource and the Ingress Controller:

  • Ingress
    Abstracts the Nginx configuration into an Ingress object; adding a new service only requires writing a new Ingress yaml file
  • Ingress Controller
    Converts newly added Ingress resources into Nginx configuration and makes it take effect

Enough talk, let's go~

III. Preparation

Official documentation

Life is short; don't reinvent the wheel. This walkthrough builds on the official deployment scripts; see the official documentation for reference. It instructs you to run the following commands in order:

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml \
    | kubectl apply -f -

The yaml files above create the Namespace and ConfigMaps used by Ingress, as well as the default backend default-http-backend. One crucial point: since we built our cluster with kubeadm earlier in this series, we must additionally run:

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml \
    | kubectl apply -f -

curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml \
    | kubectl apply -f -

This is because a kubeadm-built cluster has RBAC enabled by default, so the Ingress components must be created with the corresponding RBAC permissions.
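Rather than piping each manifest straight from curl into kubectl, you can also pull them all down locally first, which makes the edits described later possible. A small shell sketch that just prints the URLs it would fetch (the actual wget is left as a comment):

```shell
# Print the deploy-manifest URLs used in this walkthrough;
# on a connected machine, uncomment the wget to download them.
BASE=https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy
for f in namespace default-backend configmap tcp-services-configmap \
         udp-services-configmap rbac with-rbac; do
  echo "$BASE/$f.yaml"   # wget "$BASE/$f.yaml"
done
```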

Importing the images

However, if you run the commands exactly as above, the Ingress will most likely not work. Instead, wget all of the yaml files down, make a few modifications, and only then create them with kubectl apply -f. Also note that some of the images referenced by these yaml files currently cannot be pulled from within China, for example:

gcr.io/google_containers/defaultbackend:1.4
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0

I have downloaded them in advance; get them here:

Link: https://pan.baidu.com/s/1N-bK9hI7JTZZB6AzmaT8PA
Password: 1a8a

With the images in hand, import them on every node:

docker load < quay.io#kubernetes-ingress-controller#nginx-ingress-controller_0.14.0.tar
docker tag 452a96d81c30 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
docker load < gcr.io#google_containers#defaultbackend.tar
docker tag <image-id> gcr.io/google_containers/defaultbackend:1.4   # use the IMAGE ID shown by `docker images` for the loaded defaultbackend image

As shown, after importing, don't forget to tag each image (substituting the IMAGE IDs that docker images reports for the loaded images); otherwise the image name shows up as <none>.
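As a side note, these tar filenames simply encode the image reference with '#' standing in for '/' and a trailing '_tag' standing in for ':tag', so the name to pass to docker tag can be recovered mechanically. A pure string-manipulation sketch:

```shell
# Recover "registry/repo/name:tag" from a "registry#repo#name_tag.tar" filename.
f="quay.io#kubernetes-ingress-controller#nginx-ingress-controller_0.14.0.tar"
name=$(echo "${f%.tar}" | tr '#' '/')   # strip .tar, restore the slashes
image="${name%_*}:${name##*_}"          # turn the trailing _<tag> into :<tag>
echo "$image"   # quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
```

For filenames without a trailing _tag (like the defaultbackend one), tag the image explicitly instead.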



IV. Key Files

Let's first briefly walk through the important files.

default-backend.yaml

default-backend acts as the fallback: if the requested domain matches no rule, traffic is forwarded to the default-http-backend Service, which simply returns 404:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend

rbac.yaml

rbac.yaml handles RBAC authorization for Ingress: it creates the ServiceAccount, ClusterRole, Role, RoleBinding, and ClusterRoleBinding that Ingress uses. These concepts were briefly introduced in the previous post, Building a Kubernetes Cluster from Scratch (Part 4: Setting Up the K8S Dashboard).

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

with-rbac.yaml

with-rbac.yaml is the heart of the setup; it creates the ingress-controller. As mentioned earlier, the ingress-controller's job is to turn newly added Ingress resources into Nginx configuration.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          securityContext:
            runAsNonRoot: false

As shown above, nginx-ingress-controller is started with arguments referencing the default-http-backend Service and the ConfigMaps created earlier.
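For reference, the ConfigMaps those flags point at start out essentially empty: nginx-configuration, for example, amounts to little more than the following (a sketch matching the official configmap.yaml of this release; global nginx tuning such as proxy timeouts would later be added under a data: section):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
```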

V. Creating the Ingress

1. Creating the Ingress-controller

Note that the official with-rbac.yaml cannot be used as-is; two modifications are required:

Add the hostNetwork setting

As below, add hostNetwork: true just above serviceAccountName:

spec:
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io

Setting hostNetwork: true attaches the Pod directly to the host's network. With it, the Ingress-controller gets the same IP as its host k8s-node1 (192.168.56.101), and port 80 is bound on the host itself. We can therefore reach the Ingress-controller (which is really just nginx) directly at 192.168.56.101:80, and it forwards requests on to the appropriate backend.
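Incidentally, if you would rather not put the controller on the host network, an alternative (not used in this walkthrough) is to expose it through a Service of type NodePort. A sketch, with the node port number chosen arbitrarily:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx     # matches the Pod labels in with-rbac.yaml
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080        # any free port in the NodePort range
```

With this approach the controller is reached at <node-ip>:30080 instead of port 80, which is why hostNetwork is the more convenient choice here.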

Add an environment variable

Add the following environment variable to its env section:

          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: KUBERNETES_MASTER 
              value: http://192.168.56.101:8080

Otherwise, after creation the Pod fails with errors like:

[root@k8s-node1 ingress]# kubectl describe pod nginx-ingress-controller-9fbd7596d-rt9sf  -n ingress-nginx
...(earlier output omitted)
Events:
  Type     Reason                 Age                From                Message
  ----     ------                 ----               ----                -------
  Normal   Scheduled              30s                default-scheduler   Successfully assigned nginx-ingress-controller-9fbd7596d-rt9sf to k8s-node1
  Normal   SuccessfulMountVolume  30s                kubelet, k8s-node1  MountVolume.SetUp succeeded for volume "nginx-ingress-serviceaccount-token-lq2dt"
  Warning  BackOff                21s                kubelet, k8s-node1  Back-off restarting failed container
  Normal   Pulled                 11s (x3 over 29s)  kubelet, k8s-node1  Container image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0" already present on machine
  Normal   Created                11s (x3 over 29s)  kubelet, k8s-node1  Created container
  Warning  Failed                 10s (x3 over 28s)  kubelet, k8s-node1  Error: failed to start container "nginx-ingress-controller": Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/nginx-ingress-controller\": stat /nginx-ingress-controller: no such file or directory": unknown

After modifying with-rbac.yaml, run kubectl apply -f on each of the yaml files in turn (namespace.yaml, default-backend.yaml, configmap.yaml, tcp-services-configmap.yaml, udp-services-configmap.yaml, rbac.yaml, and finally with-rbac.yaml) to create the Ingress-controller.

Once everything is created successfully, you should see:

[root@k8s-node1 ingress]# kubectl get pod -n ingress-nginx -o wide
NAME                                        READY     STATUS    RESTARTS   AGE       IP              NODE
default-http-backend-5c6d95c48-pdjn9        1/1       Running   0          23s       192.168.36.81   k8s-node1
nginx-ingress-controller-547cd7d9cb-jmvpn   1/1       Running   0          8s        192.168.36.82   k8s-node1

2. Creating a custom Ingress

With the ingress-controller in place, we can create our own Ingress resources. I have a Kibana service set up beforehand; let's create an Ingress for it:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: default

spec:
  rules:
  - host: myk8s.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601

Where:

  • The host under rules must be a domain name, not an IP address. It is the domain that resolves to the Ingress-controller, i.e. to the IP of the host running the Ingress-controller Pod.
  • The path under paths defines the URL mapping. Mapping / means that a request to myk8s.com is forwarded to the kibana Service on port 5601.
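To illustrate the rules structure a little further, a single Ingress can declare several hosts and paths. A hypothetical sketch (the api service name and port are invented for illustration):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: default
spec:
  rules:
  - host: myk8s.com
    http:
      paths:
      - path: /            # myk8s.com/     -> kibana:5601
        backend:
          serviceName: kibana
          servicePort: 5601
      - path: /api         # myk8s.com/api  -> api:8080 (hypothetical service)
        backend:
          serviceName: api
          servicePort: 8080
```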

Once it is created, check it:

[root@k8s-node1 ingress]# kubectl get ingress -o wide
NAME             HOSTS       ADDRESS   PORTS     AGE
kibana-ingress   myk8s.com             80        6s

Now run kubectl exec -n ingress-nginx -it nginx-ingress-controller-5b79cbb5c6-2zr7f -- cat /etc/nginx/nginx.conf to see the generated nginx configuration. It is quite long, so pick out the relevant part:

    ## start server myk8s.com
    server {
        server_name myk8s.com ;
        
        listen 80;
        
        listen [::]:80;
        
        set $proxy_upstream_name "-";
        
        location /kibana {
            
            log_by_lua_block {
                
            }
            
            port_in_redirect off;
            
            set $proxy_upstream_name "";
            
            set $namespace      "kube-system";
            set $ingress_name   "dashboard-ingress";
            set $service_name   "kibana";
            
            client_max_body_size                    "1m";
            
            proxy_set_header Host                   $best_http_host;
            
            # Pass the extracted client certificate to the backend
            
            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            
            proxy_set_header                        Connection        $connection_upgrade;
            
            proxy_set_header X-Real-IP              $the_real_ip;
            
            proxy_set_header X-Forwarded-For        $the_real_ip;
            
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            
            proxy_set_header X-Original-URI         $request_uri;
            
            proxy_set_header X-Scheme               $pass_access_scheme;
            
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            
            # Custom headers to proxied server
            
            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;
            
            proxy_buffering                         "off";
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";
            
            proxy_http_version                      1.1;
            
            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;
            
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
            proxy_next_upstream_tries               0;
            
            # No endpoints available for the request
            return 503;
            
        }
        
        location / {
            
            log_by_lua_block {
                
            }
            
            port_in_redirect off;
            
            set $proxy_upstream_name "";
            
            set $namespace      "default";
            set $ingress_name   "kibana-ingress";
            set $service_name   "kibana";
            
            client_max_body_size                    "1m";
            
            proxy_set_header Host                   $best_http_host;
            
            # Pass the extracted client certificate to the backend
            
            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            
            proxy_set_header                        Connection        $connection_upgrade;
            
            proxy_set_header X-Real-IP              $the_real_ip;
            
            proxy_set_header X-Forwarded-For        $the_real_ip;
            
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            
            proxy_set_header X-Original-URI         $request_uri;
            
            proxy_set_header X-Scheme               $pass_access_scheme;
            
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";
            
            # Custom headers to proxied server
            
            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;
            
            proxy_buffering                         "off";
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";
            
            proxy_http_version                      1.1;
            
            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;
            
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;
            proxy_next_upstream_tries               0;
            
            # No endpoints available for the request
            return 503;
            
        }
        
    }
    ## end server myk8s.com

3. Configuring hosts

First, on the host running the Ingress-controller Pod (k8s-node1 here), append the domain myk8s.com to /etc/hosts:

192.168.56.101 myk8s.com
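Appending the entry can also be scripted; the sketch below writes to a temporary file so it is safe to try anywhere, but on the actual node the target would be /etc/hosts:

```shell
HOSTS=$(mktemp)                             # stand-in for /etc/hosts
echo "192.168.56.101 myk8s.com" >> "$HOSTS"
grep myk8s.com "$HOSTS"                     # -> 192.168.56.101 myk8s.com
```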

In addition, to reach kibana from a browser on your Windows machine, add the same line to C:\Windows\System32\drivers\etc\hosts. Afterwards, verify resolution from both k8s-node1 and the Windows machine.


VI. Testing

On the Windows machine, visit myk8s.com in Chrome, which is equivalent to visiting 192.168.56.101:80.


Visit an arbitrary nonexistent path such as myk8s.com/abc and you get the expected 404.


VII. Closing Remarks

With that, our Ingress is up, and Services in the K8S cluster can now be reached from outside by domain name. If you are interested, try configuring TLS for the Ingress so that https services such as the Dashboard can be accessed as well. Next up: Building a Kubernetes Cluster from Scratch (Part 6: Deploying a Redis Cluster on K8S). Stay tuned.
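As a starting point for that experiment: TLS on an Ingress is enabled by referencing a Secret of type kubernetes.io/tls, created beforehand with something like kubectl create secret tls myk8s-tls --cert=tls.crt --key=tls.key (the secret name and certificate files here are assumptions for illustration):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: default
spec:
  tls:
  - hosts:
    - myk8s.com
    secretName: myk8s-tls   # the kubernetes.io/tls Secret created beforehand
  rules:
  - host: myk8s.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kibana
          servicePort: 5601
```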

My knowledge is limited, so errors and omissions are inevitable; corrections and comments are welcome.
