Kubernetes: Running Nginx on a Distributed Cluster

Building on the cluster set up in the previous article, "kubeadm安裝集群K8s:v1.13.2", we next run a minimal nginx workload on that cluster and observe how Kubernetes schedules its containers:


The Nginx ReplicationController manifest is as follows:

[root@localhost ~]# cat mynginx-rc.yaml

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        ports:
        - containerPort: 80
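
As an aside, ReplicationController is a legacy API; on current clusters the same workload would normally be written as a Deployment. A hypothetical field-for-field translation of the RC above might look like this (note that `apps/v1` additionally requires an explicit `matchLabels` selector):

```yaml
# Hypothetical Deployment equivalent of the ReplicationController above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-test
spec:
  replicas: 2
  selector:
    matchLabels:        # required in apps/v1; must match the template labels
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx-test
        image: docker.io/nginx
        ports:
        - containerPort: 80
```

A Deployment additionally gives rolling updates and rollbacks, which an RC does not.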

The Nginx Service manifest is as follows:

[root@localhost ~]# cat mynginx-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-test
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30002
  selector:
    app: nginx-test
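
Since `targetPort` is omitted here, it defaults to the value of `port` (80), which happens to match the `containerPort` above. The `nodePort` value must fall inside the cluster's NodePort range (30000-32767 by default). If the service port and container port needed to differ, a hedged sketch of the `ports` section:

```yaml
# Hypothetical variant: clients hit service port 8080, while the
# container still listens on 80 (targetPort names the container port).
ports:
- port: 8080
  targetPort: 80
  nodePort: 30002
```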

Create the ReplicationController and the Service from these manifests:

[root@localhost ~]# kubectl create -f mynginx-rc.yaml
[root@localhost ~]# kubectl create -f mynginx-svc.yaml
[root@k8s-master ~]# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
mysql-v7vhb        1/1     Running   0          3h
myweb-bgzvg        1/1     Running   0          178m
nginx-test-lwttj   1/1     Running   0          25s
nginx-test-z4cht   1/1     Running   0          25s

[root@k8s-master ~]# kubectl get rc
NAME         DESIRED   CURRENT   READY   AGE
mysql        1         1         1       3h1m
myweb        1         1         1       178m
nginx-test   2         2         2       33s

[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP          24h
mysql        ClusterIP   10.96.95.56      <none>        3306/TCP         179m
myweb        NodePort    10.101.46.172    <none>        8080:30001/TCP   176m
nginx-test   NodePort    10.111.101.176   <none>        80:30002/TCP     28s

[root@k8s-master ~]# netstat -ntlp | grep 30002
tcp6       0      0 :::30002                :::*                    LISTEN      10462/kube-proxy    

Use `kubectl describe pod` to view detailed pod information, including which physical Node each pod was scheduled onto:

[root@k8s-master ~]# kubectl describe pod nginx-test
// Only part of the output is shown below:
Name:               nginx-test-4fvtm
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node1/192.168.1.130
...
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 

Name:               nginx-test-9lt2g
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               k8s-node2/192.168.1.131
...
Conditions:
  Type              Status
  Initialized       True 
  Ready             True
  ContainersReady   True 
  PodScheduled      True 

As the output shows, the two Pods were scheduled onto the two different physical Nodes. Pods on different nodes reach each other through the CNI network plugin installed earlier (calico in this case):


Accessing 192.168.1.120:30002 succeeds

Next we shut down the physical node Node2 (k8s-node2/192.168.1.131, same below) and observe how Kubernetes reschedules the containers. Immediately after the shutdown, the Kubernetes Node status shows no change at all.

Master節(jié)點(diǎn)經(jīng)過一段時(shí)間后發(fā)現(xiàn)Node2無法聯(lián)系,置為NotReady狀態(tài)。

[root@k8s-master ~]# kubectl get no
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   Ready      master   24h   v1.13.2
k8s-node1    Ready      <none>   24h   v1.13.2
k8s-node2    NotReady   <none>   24h   v1.13.2

Meanwhile Pod2 (nginx-test-9lt2g, the Pod running on Node2, same below) still shows STATUS Running, but its Ready condition flips from True to False.
After a further interval t1, Pod2's STATUS changes from Running to Terminating, and a replacement Pod is created on Node1. We did not measure t1 precisely, and it is not obvious which parameter controls it; it appears to be related to the Tolerations shown by `kubectl describe pod`:

Tolerations:    node.kubernetes.io/not-ready:NoExecute for 300s
                node.kubernetes.io/unreachable:NoExecute for 300s
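
The 300s figure above is the default `tolerationSeconds` that the DefaultTolerationSeconds admission plugin attaches to every pod, which would explain a t1 of roughly five minutes. To shorten the eviction delay, these tolerations can be overridden in the pod template; a hedged sketch (the 30s value is purely illustrative):

```yaml
# Hypothetical addition to the RC's pod template spec:
# evict this pod 30s after its node becomes NotReady/unreachable.
tolerations:
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 30
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 30
```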

[root@k8s-master ~]# kubectl get pod
NAME               READY   STATUS        RESTARTS   AGE
nginx-test-4fvtm   1/1     Running       0          10m
nginx-test-5ksxh   1/1     Running       0          3m57s
nginx-test-9lt2g   1/1     Terminating   0          10m

隨后我們開啟Node2物理節(jié)點(diǎn),他會自動向Master節(jié)點(diǎn)報(bào)備自己的信息,成功開啟后Node2 STATUS重新變回Ready狀態(tài)。Terminating狀態(tài)的Pod不會立刻被清除,而是間隔一段時(shí)間t2后被自動清除。但新創(chuàng)建的Pod3(即nginx-test-5ksxh)不會被從Node1節(jié)點(diǎn)自動部署至Node2節(jié)點(diǎn)。

[root@k8s-master ~]# kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
mysql-v7vhb        1/1     Running   0          4h16m
myweb-cnfbh        1/1     Running   0          65m
nginx-test-4fvtm   1/1     Running   0          71m
nginx-test-5ksxh   1/1     Running   0          65m
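
This is expected: Kubernetes does not rebalance already-running pods when a node comes back; the scheduler only places newly created pods. One hedged way to keep replicas spread across nodes is a preferred pod anti-affinity rule in the pod template, sketched below (illustrative, not part of the original manifest):

```yaml
# Hypothetical addition to the pod template spec: prefer not to
# co-locate two nginx-test pods on the same node.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: nginx-test
        topologyKey: kubernetes.io/hostname
```

Even with this rule, existing pods stay put; deleting one (e.g. `kubectl delete pod nginx-test-5ksxh`) lets the RC recreate it and gives the scheduler a chance to place it on the other node.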

On Node1, list the containers Docker created: both nginx Pods are now running on Node1.

[root@k8s-node1 ~]# docker ps | grep nginx
0c81fc8808f2        docker.io/nginx@sha256:56bcd35e8433343dbae0484ed5b740843dd8bff9479400990f251c13bbb94763                  "nginx -g 'daemon ..."   About an hour ago   Up About an hour                        k8s_nginx-test_nginx-test-5ksxh_default_53ae0dcb-27a1-11e9-8bd1-000c29d747fb_0
9a6caae7a4f3        k8s.gcr.io/pause:3.1                                                                                     "/pause"                 About an hour ago   Up About an hour                        k8s_POD_nginx-test-5ksxh_default_53ae0dcb-27a1-11e9-8bd1-000c29d747fb_0
c690b1d16e11        docker.io/nginx@sha256:56bcd35e8433343dbae0484ed5b740843dd8bff9479400990f251c13bbb94763                  "nginx -g 'daemon ..."   About an hour ago   Up About an hour                        k8s_nginx-test_nginx-test-4fvtm_default_624fa9c6-27a0-11e9-8bd1-000c29d747fb_0
0bc2479d8a70        k8s.gcr.io/pause:3.1                                                                                     "/pause"                 About an hour ago   Up About an hour                        k8s_POD_nginx-test-4fvtm_default_624fa9c6-27a0-11e9-8bd1-000c29d747fb_0