Building a Containerized Microservice Project with K8s

Containerized Microservice Project

Getting Familiar with the Spring Cloud Microservice Project

(figure: Spring Cloud microservice project architecture)

Compile and Build from Source

#Install JDK and Maven
[root@prometheus simple-microservice-dev3]# yum install java-1.8.0-openjdk maven

#Update the database connection URLs for the product, stock, and order services
url: jdbc:mysql://192.168.153.27:3306/tb_order?characterEncoding=utf-8
url: jdbc:mysql://192.168.153.27:3306/tb_product?characterEncoding=utf-8
url: jdbc:mysql://192.168.153.27:3306/tb_stock?characterEncoding=utf-8
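In a standard Spring Boot layout these URLs live under spring.datasource in each service's application.yml; a minimal sketch for the order service, where the username, password, and driver class are assumptions rather than values taken from the repo:

```yaml
spring:
  datasource:
    # order service URL; product and stock point at tb_product / tb_stock instead
    url: jdbc:mysql://192.168.153.27:3306/tb_order?characterEncoding=utf-8
    username: root                             # assumed credentials
    password: "123456"
    driver-class-name: com.mysql.jdbc.Driver   # MySQL 5.x driver
```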

#Compile and package
[root@prometheus simple-microservice-dev3]# mvn clean package -Dmaven.test.skip=true

Build the Project Images and Push Them to the Registry

Log in to Harbor

[root@prometheus harbor]# docker login 192.168.153.20

eureka

[root@prometheus eureka-service]# docker build -t 192.168.153.20/ms/eureka:v1 .
[root@prometheus eureka-service]# docker push 192.168.153.20/ms/eureka:v1
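Each service directory ships its own Dockerfile that this build uses; a minimal sketch of what the eureka one plausibly looks like (the base image and jar path are assumptions, not taken from the repo):

```dockerfile
# Build the eureka-service image from the jar produced by mvn package
FROM java:8-jdk-alpine
COPY ./target/eureka-service.jar /eureka-service.jar
EXPOSE 8888
CMD ["java", "-jar", "/eureka-service.jar"]
```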

Deploying the Eureka Cluster in K8s

Install the Ingress controller

[root@k8s-m1 k8s]# kubectl apply -f ingress-controller.yaml 
[root@k8s-m1 k8s]#  kubectl get pods -n ingress-nginx -o wide
NAME                                       READY   STATUS   IP               NODE     
nginx-ingress-controller-5dc64b58f-stb5j   1/1     Running  192.168.153.25   k8s-m1   

Create the registry-pull-secret

[root@k8s-m1 k8s]# kubectl create secret docker-registry registry-pull-secret --docker-username=admin --docker-password=Harbor12345 --docker-server=192.168.153.20 
secret/registry-pull-secret created
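The deployment manifests then reference this secret so the kubelet can pull images from the private Harbor registry; the fragment below is a hedged sketch of the relevant portion of a Pod spec:

```yaml
spec:
  imagePullSecrets:
  - name: registry-pull-secret       # the secret created above
  containers:
  - name: eureka
    image: 192.168.153.20/ms/eureka:v1
```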

Deploy the Eureka cluster

[root@k8s-m1 k8s]# kubectl apply -f eureka.yaml 
[root@k8s-m1 k8s]# kubectl get pod,svc -n ms
NAME           READY   STATUS    RESTARTS   AGE
pod/eureka-0   1/1     Running   0          6m55s
pod/eureka-1   1/1     Running   0          2m43s
pod/eureka-2   1/1     Running   0          95s

NAME             TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/eureka   ClusterIP   None         <none>        8888/TCP   6m55s
---------------------------------------------------------------------------------
hosts:
192.168.153.27 eureka.ctnrs.com
#Access:
http://eureka.ctnrs.com/
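The CLUSTER-IP of None above marks eureka as a headless Service, which is what gives each StatefulSet replica a stable DNS name (eureka-0.eureka.ms.svc.cluster.local, eureka-1..., and so on) that the peers use to register with each other. A sketch of such a Service, with the selector label assumed from eureka.yaml:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: eureka
  namespace: ms
spec:
  clusterIP: None        # headless: DNS resolves to the individual Pod IPs
  ports:
  - name: eureka
    port: 8888
  selector:
    app: eureka          # assumed label from eureka.yaml
```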

Deploying MySQL

Deploy MySQL

#Pull the MySQL image (:5.7 is the version tag)
docker pull mysql:5.7
#Run the MySQL container
docker run -d -p 3306:3306 --privileged=true -v /docker/mysql/conf/my.cnf:/etc/my.cnf -v /docker/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql mysql:5.7 --character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci
#Parameter notes:
run   runs a container
-d    run the container in the background
-p    map a host port to a container port
--privileged=true   grant extended privileges; without it the MySQL root user cannot log in from outside
-v /docker/mysql/conf/my.cnf:/etc/my.cnf   mount the host's /docker/mysql/conf/my.cnf as the container's /etc/my.cnf
-v /docker/mysql/data:/var/lib/mysql   likewise, mount the data directory so data survives deleting and recreating the MySQL container
-e MYSQL_ROOT_PASSWORD=123456   set the password of the MySQL root user
--name mysql   name the container "mysql"
mysql:5.7   start the container from the mysql:5.7 image
--character-set-server=utf8mb4 --collation-server=utf8mb4_general_ci   set the default database character set and collation


#Grant remote login privileges
[root@xdclass ~]# docker exec -it mysql bash  
root@ce7e026432b3:/# mysql -u root -p

mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql>  FLUSH PRIVILEGES;

#Import the data
 order.sql  stock.sql  product.sql
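One way to load the three dumps, run against the live container; the database names are inferred from the JDBC URLs above, and the dump files are assumed to sit in the current directory:

```shell
# Create each database (if missing) and load its dump into the mysql container.
for db in order stock product; do
  docker exec -i mysql mysql -uroot -p123456 \
    -e "CREATE DATABASE IF NOT EXISTS tb_${db} DEFAULT CHARACTER SET utf8mb4;"
  docker exec -i mysql mysql -uroot -p123456 "tb_${db}" < "${db}.sql"
done
```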

Deploying the Microservices in K8s

product

#Build the image
[root@prometheus product-service-biz]# docker build -t 192.168.153.20/ms/product:v1 .
#Push the image
[root@prometheus product-service-biz]# docker push 192.168.153.20/ms/product:v1
#Deploy to the K8s cluster
[root@k8s-m1 k8s]# kubectl apply -f product.yaml 

order

#Build the image
[root@prometheus order-service-biz]# docker build -t 192.168.153.20/ms/order:v1 .
#Push the image
[root@prometheus order-service-biz]# docker push 192.168.153.20/ms/order:v1
#Deploy to the K8s cluster
[root@k8s-m1 k8s]# kubectl apply -f order.yaml 

stock

#Build the image
[root@prometheus stock-service-biz]# docker build -t 192.168.153.20/ms/stock:v1 .
#Push the image
[root@prometheus stock-service-biz]# docker push 192.168.153.20/ms/stock:v1
#Deploy to the K8s cluster
[root@k8s-m1 k8s]# kubectl apply -f stock.yaml 

gateway

#Build the image
[root@prometheus gateway-service]# docker build -t 192.168.153.20/ms/gateway:v1 .
#Push the image
[root@prometheus gateway-service]# docker push 192.168.153.20/ms/gateway:v1
#Deploy to the K8s cluster
[root@k8s-m1 k8s]# kubectl apply -f gateway.yaml 
#hosts
192.168.153.27 gateway.ctnrs.com

portal

#Build the image
[root@prometheus portal-service]# docker build -t 192.168.153.20/ms/portal:v1 .
#Push the image
[root@prometheus portal-service]# docker push 192.168.153.20/ms/portal:v1
#Deploy to the K8s cluster
[root@k8s-m1 k8s]# kubectl apply -f portal.yaml
#hosts
192.168.153.27 portal.ctnrs.com

Check the deployed services

[root@k8s-m1 k8s]# kubectl get pod,svc,ing -n ms
NAME                           READY   STATUS    RESTARTS   AGE
pod/eureka-0                   1/1     Running   2          86m
pod/eureka-1                   1/1     Running   2          84m
pod/eureka-2                   1/1     Running   1          83m
pod/gateway-6c7b6f7c85-g9srj   1/1     Running   1          70m
pod/order-65b848c67c-r7stp     1/1     Running   0          6m58s
pod/portal-78ccc5768c-wvt5f    1/1     Running   1          70m
pod/product-59c88fbf7f-snrkf   1/1     Running   0          7m4s
pod/stock-c9b89d8b-p4wvd       1/1     Running   0          6m51s

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/eureka    ClusterIP   None         <none>        8888/TCP   86m
service/gateway   ClusterIP   10.0.0.101   <none>        9999/TCP   70m
service/portal    ClusterIP   10.0.0.44    <none>        8080/TCP   70m

NAME                                CLASS    HOSTS               ADDRESS   PORTS   AGE
ingress.networking.k8s.io/eureka    <none>   eureka.ctnrs.com              80      86m
ingress.networking.k8s.io/gateway   <none>   gateway.ctnrs.com             80      70m
ingress.networking.k8s.io/portal    <none>   portal.ctnrs.com              80      70m
(screenshots: Eureka dashboard and portal pages)
http://gateway.ctnrs.com/product/queryAllProduct?page=1&limit=10

{"status":200,"msg":"success","result":[{"id":1,"productName":"測試商品1","price":99.99,"stock":99},{"id":2,"productName":"美女","price":999.0,"stock":87},{"id":3,"productName":"Q幣","price":100.0,"stock":77},{"id":4,"productName":"貂皮大衣很厚很厚的那種","price":9999.0,"stock":66}]}

http://gateway.ctnrs.com/order/queryAllOrder
{"status":200,"msg":"success","result":[{"id":1,"orderNumber":"0j889r86wo0tng9x","orderProductName":"美女","orderPrice":999.0,"count":1,"buyDate":"2021-12-21T03:40:32.000+0000"}]}

Skywalking

Introduction

- Multiple monitoring options: telemetry data can come from language agents or a service mesh.
- Auto-instrumentation agents for multiple languages, including Java, .NET Core, and Node.JS.
- Lightweight and efficient: no big-data platform and no large pool of server resources required.
- Modular: multiple pluggable options for the UI, storage, and cluster management.
- Alerting support.
- An excellent visualization solution.

Architecture

(figure: SkyWalking architecture)

Deployment

Deploy the Elasticsearch database

docker run --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" -d elasticsearch:7.7.0

Deploy the SkyWalking OAP

[root@k8s-m1 ~]# yum install java-11-openjdk -y
[root@k8s-m1 ~]# tar zxvf apache-skywalking-apm-es7-8.3.0.tar.gz
[root@k8s-m1 ~]# cd apache-skywalking-apm-bin-es7/
[root@k8s-m1 ~]# vi config/application.yml
storage:
  selector: ${SW_STORAGE:elasticsearch7}  # use elasticsearch7 here
  ...
  elasticsearch7:
    nameSpace: ${SW_NAMESPACE:""}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:192.168.0.10:9200}  # set the ES address

#Start the OAP server and the UI:
[root@k8s-m1 bin]# ./startup.sh 
SkyWalking OAP started successfully!
SkyWalking Web Application started successfully!
#Access the UI:
http://192.168.153.25:8080


#collector.backend_service is the SkyWalking server address; port 11800 is responsible for receiving collected data
[root@k8s-m1 agent]# ss -antp|grep 11800
LISTEN     0      128         :::11800                   :::*                   users:(("java",pid=59156,fd=269))

(screenshot: SkyWalking UI)

Dockerfile

#Start the Java program with the agent attached (eureka as the example); add this to every service and rebuild:

java -jar -javaagent:/skywalking/skywalking-agent.jar=agent.service_name=ms-eureka,agent.instance_name=$(echo $HOSTNAME | awk -F- '{print $1"-"$NF}'),collector.backend_service=192.168.153.25:11800 -Deureka.instance.hostname=${MY_POD_NAME}.eureka.ms /eureka-service.jar
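The agent.instance_name expression above shortens the pod hostname to its first and last dash-separated fields; a quick standalone check of what that awk pipeline produces:

```shell
# A Deployment pod name like "gateway-6c7b6f7c85-g9srj" becomes "gateway-g9srj";
# a StatefulSet name like "eureka-0" stays "eureka-0".
HOSTNAME=gateway-6c7b6f7c85-g9srj
echo "$HOSTNAME" | awk -F- '{print $1"-"$NF}'
# prints: gateway-g9srj
```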


Build and Release

#Start the MySQL container
docker start mysql

#Start Elasticsearch
docker start elasticsearch

#Start the SkyWalking OAP and UI:
[root@k8s-m1 bin]# ./startup.sh 
SkyWalking OAP started successfully!
SkyWalking Web Application started successfully!
#Access the UI:
http://192.168.153.25:8080


#collector.backend_service is the SkyWalking server address; port 11800 is responsible for receiving collected data
[root@k8s-m1 agent]# ss -antp|grep 11800
LISTEN     0      128         :::11800                   :::*                   users:(("java",pid=59156,fd=269))

#Update the Dockerfile, then rebuild and push
docker build -t 192.168.153.20/ms/eureka:v2 .
docker push 192.168.153.20/ms/eureka:v2
......

#Pods were OOMKilled during the K8s rollout
Fix: set a larger memory limit

[root@k8s-m1 k8s]# kubectl get pod,svc,ing -n ms
NAME                           READY   STATUS    RESTARTS   AGE
pod/eureka-0                   1/1     Running   0          62m
pod/eureka-1                   1/1     Running   0          61m
pod/eureka-2                   1/1     Running   0          60m
pod/gateway-77776889-r29dt     1/1     Running   0          34m
pod/order-846f7c95b9-dpqh8     1/1     Running   0          30m
pod/portal-66cf475fc4-9ww57    1/1     Running   1          49m
pod/product-554d7d554c-6g87b   1/1     Running   0          30m
pod/stock-546b455df8-nblxn     1/1     Running   0          30m

NAME              TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
service/eureka    ClusterIP   None         <none>        8888/TCP   62m
service/gateway   ClusterIP   10.0.0.173   <none>        9999/TCP   34m
service/portal    ClusterIP   10.0.0.94    <none>        8080/TCP   49m

NAME                                CLASS    HOSTS               ADDRESS   PORTS   AGE
ingress.networking.k8s.io/eureka    <none>   eureka.ctnrs.com              80      62m
ingress.networking.k8s.io/gateway   <none>   gateway.ctnrs.com             80      34m
ingress.networking.k8s.io/portal    <none>   portal.ctnrs.com              80      49m

Results

(screenshots: SkyWalking topology and trace views)

Production Pitfalls and Lessons Learned

Container resources are limited, yet containers still get killed?

Before JDK 8u191 (and the container-support work that landed in JDK 10), the JVM could not detect the memory limit that Docker/cgroups set, so as application load fluctuated the heap could grow past the limits value and K8s would OOM-kill the container.

Fix:
- Set the JVM heap size explicitly

CMD java -jar $JAVA_OPTS /gateway-service.jar
env:
  - name: JAVA_OPTS
    value: "-Xmx1g"
resources:
  requests:
    cpu: 0.5
    memory: 256Mi
  limits:
    cpu: 1
    memory: 1Gi
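On JVMs that do understand cgroups (8u191+ and 10+), an alternative worth considering is letting the JVM derive the heap from the container limit instead of hard-coding -Xmx. The flags below are standard HotSpot options; 75.0 is just an illustrative ratio, not a value from the original manifests:

```yaml
env:
  - name: JAVA_OPTS
    # size the heap as a percentage of the container memory limit
    value: "-XX:+UseContainerSupport -XX:MaxRAMPercentage=75.0"
```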

Traffic loss during rolling updates

When a rolling update is triggered, kube-proxy on some nodes may not have synced its iptables rules by the time a Pod is deleted, so part of the traffic is still routed to the Terminating Pod and those requests fail.
Fix: configure a preStop hook that gracefully pauses for 5 seconds before the container terminates, giving kube-proxy a little more time to converge.

lifecycle:
  preStop:
    exec:
      command:
      - sh
      - -c
      - "sleep 5"
The hook can also run additional callback handling, e.g. a curl ......

Rolling updates: why health checks matter

Rolling update is the default release strategy. When health checks are configured, the rollout uses the probe status to decide whether to continue updating and whether a Pod may receive traffic, so available Pods exist throughout the rolling update and the upgrade stays smooth.
readinessProbe:
  tcpSocket:
    port: 9999
  initialDelaySeconds: 60
  periodSeconds: 10
livenessProbe:
  tcpSocket:
    port: 9999
  initialDelaySeconds: 60
  periodSeconds: 10