Preface:
As the front door of an Internet-facing system, the traffic-ingress proxy has many candidate implementations: the veteran proxies HAProxy and Nginx; microservice API gateways such as Kong and Zuul; and the containerized Ingress specification and its implementations. These options differ widely in functionality, performance, extensibility, and applicable scenarios. As the cloud-native wave arrived, Envoy, a CNCF-graduated data-plane component, became known to a much wider audience. So: can this outstanding "graduate" become the standard traffic-ingress component of the cloud-native era?
Background: the many options and scenarios for traffic ingress
In the Internet world, almost every externally exposed system needs a network proxy. The early arrivals HAProxy and Nginx remain popular today; in the microservice era, API gateways, with richer functionality and stronger management and control capabilities, became a required ingress component; and in the container era, Kubernetes Ingress, as the entry point of a container cluster, is the standard traffic-ingress proxy for containerized microservices. The core capabilities of these three typical layer-7 proxies compare as follows:

From the capability comparison above:
- HAProxy and Nginx provide the basic routing features, with performance and stability proven over many years. OpenResty, Nginx's downstream community, adds a mature Lua extension mechanism that lets Nginx be applied and extended much more broadly; the API gateway Kong, for example, is built on Nginx plus OpenResty.
- API gateways, the building block for exposing microservice APIs externally, offer fairly rich functionality and dynamic management and control capabilities.
- Ingress is the standard specification for Kubernetes ingress traffic; concrete capabilities depend on the implementation. An Nginx-based Ingress implementation behaves much like Nginx, while Istio Ingress Gateway, built on Envoy plus the Istio control plane, is richer in functionality (in essence Istio Ingress Gateway is more capable than a typical Ingress implementation, but it does not follow the Ingress specification).
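To make the contrast concrete, the Ingress specification only describes host/path-to-Service routing. A minimal `networking.k8s.io/v1` Ingress looks roughly like the sketch below (resource and Service names are placeholders, not from this article's cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress            # placeholder name
spec:
  ingressClassName: nginx       # which Ingress controller handles this resource
  rules:
  - host: www.example.com       # virtual host
    http:
      paths:
      - path: /                 # path-prefix match
        pathType: Prefix
        backend:
          service:
            name: demo-svc      # backend Service (placeholder)
            port:
              number: 80
```

Anything beyond this (traffic splitting, mirroring, retries) is implementation-specific, which is exactly the gap richer resources such as Contour's HTTPProxy, shown later, try to close.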
So the question is: for the same traffic-ingress role, can we find a technically comprehensive solution that standardizes the traffic entry point under the cloud-native trend?
Core capabilities of Envoy
Envoy is "an open source edge and service proxy, designed for cloud-native applications" (envoyproxy.io). It was the third project to graduate from the Cloud Native Computing Foundation (CNCF) and currently has 13k+ stars on GitHub. Its main features:
- A high-performance L4/L7 proxy written in modern C++.
- Transparent proxying.
- Traffic management: routing, traffic mirroring, traffic splitting, and more.
- Resilience features: health checking, circuit breaking, rate limiting, timeouts, retries, and fault injection.
- Multi-protocol support: proxying and management of HTTP/1.1, HTTP/2, gRPC, WebSocket, and other protocols.
- Load balancing: weighted round robin, weighted least request, ring hash, Maglev, and random algorithms, plus zone-aware routing, failover, and other features.
- Dynamic configuration APIs: robust interfaces for managing proxy behavior, enabling hot updates of Envoy's configuration.
- Designed for observability: deep visibility into layer-7 traffic, with native support for distributed tracing.
- Hot restart, enabling seamless Envoy upgrades.
- Custom extensibility: Lua, plus the multi-language WebAssembly sandbox.
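These capabilities are all driven by Envoy's declarative configuration model. As a flavor of it, here is a minimal static v3 bootstrap that proxies HTTP on port 8080 to a local backend (a hedged sketch: listener/cluster names, addresses, and ports are placeholders):

```yaml
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }          # route all paths
                route: { cluster: demo_cluster }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: demo_cluster
    type: STRICT_DNS
    connect_timeout: 1s
    load_assignment:
      cluster_name: demo_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8000 }  # placeholder backend
```

In practice the `static_resources` section is usually replaced by dynamic xDS resources served by a control plane, which is what the dynamic-configuration bullet above refers to.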
Overall, Envoy is a "straight-A student" in both functionality and performance. In real-world business ingress-proxy scenarios it has innate advantages and can serve as the standard technical solution for traffic ingress under the cloud-native trend:
Richer functionality than HAProxy and Nginx
Whereas HAProxy and Nginx provide the basic features a traffic proxy needs (more advanced features usually arrive via extension plugins), Envoy implements a large share of advanced proxy features natively in C++: advanced load balancing, circuit breaking, rate limiting, fault injection, traffic mirroring, observability, and more. The richer feature set makes Envoy applicable to many scenarios out of the box, and the native C++ implementation holds a clearer performance edge over plugin-extended alternatives.
Performance on par with Nginx, far above traditional API gateways
In terms of performance, Envoy is comparable to Nginx when proxying common protocols such as HTTP, and holds a clear advantage over traditional API gateways.
Service Mesh has now entered its second generation, represented by Istio, consisting of a data plane (the proxy) and a control plane. Istio is a productized implementation of the service mesh that helps microservices achieve layered decoupling; its architecture is shown below:

HTTPProxy resource specification
apiVersion: projectcontour.io/v1          # API group and version
kind: HTTPProxy                           # kind of the CRD resource
metadata:
  name <string>
  namespace <string>                      # namespace-scoped resource
spec:
  virtualhost <VirtualHost>               # virtual host in FQDN form, similar to host in Ingress
    fqdn <string>                         # FQDN of the virtual host
    tls <TLS>                             # enable HTTPS; HTTP requests are redirected to HTTPS with a 301 by default
      secretName <string>                 # name of the Secret holding the certificate and private key
      minimumProtocolVersion <string>     # minimum supported SSL/TLS protocol version
      passthrough <boolean>               # enable passthrough mode; when enabled the controller does not terminate HTTPS sessions
      clientValidation <DownstreamValidation> # validate client certificates (optional)
        caSecret <string>                 # CA certificate used to validate client certificates
  routes <[]Route>                        # routing rules
    conditions <[]Condition>              # traffic match conditions; PATH-prefix and header matching are supported
      prefix <string>                     # PATH prefix match, similar to the path field in Ingress
    permitInsecure <boolean>              # disable the default HTTP-to-HTTPS redirect
    services <[]Service>                  # backend services; translated into Envoy Cluster definitions
      name <string>                       # service name
      port <integer>                      # service port
      protocol <string>                   # protocol to the backend service; one of tls, h2, or h2c
      validation <UpstreamValidation>     # validate the server certificate
        caSecret <string>
        subjectName <string>              # Subject value required in the certificate
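Putting the TLS-related fields above together, a hypothetical HTTPProxy that terminates TLS and additionally validates client certificates could look like the following sketch (the resource name and the `client-ca` Secret are made up; in the Contour v1 CRD the field is spelled `clientValidation`):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: mtls-demo              # hypothetical name
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io
    tls:
      secretName: ik8s-tls     # Secret with the server certificate and key
      clientValidation:
        caSecret: client-ca    # hypothetical Secret holding the CA that signed client certificates
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp-deploy
      port: 80
```

With `clientValidation` set, Envoy requires a client certificate signed by the given CA, i.e. mutual TLS at the edge.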
HTTPProxy advanced routing specification
spec:
  routes <[]Route>                        # routing rules
    conditions <[]Condition>
      prefix <string>
      header <HeaderCondition>            # request header match
        name <string>                     # header name
        present <boolean>                 # true means the condition is met if the header exists; false is meaningless
        contains <string>                 # substring the header value must contain
        notcontains <string>              # substring the header value must not contain
        exact <string>                    # exact match on the header value
        notexact <string>                 # exact negative match, i.e. the value must not equal the given string
    services <[]Service>                  # backend services, translated into Envoy Clusters
      name <string>
      port <integer>
      protocol <string>
      weight <int64>                      # service weight, used for traffic splitting
      mirror <boolean>                    # traffic mirroring
      requestHeadersPolicy <HeadersPolicy> # header policy for requests sent to the upstream server
        set <[]HeaderValue>               # add a header or set the value of a given header
          name <string>
          value <string>
        remove <[]string>                 # remove the given headers
      responseHeadersPolicy <HeadersPolicy> # header policy for responses returned to the downstream client
    loadBalancerPolicy <LoadBalancerPolicy> # load-balancing policy to use
      strategy <string>                   # one of Random, RoundRobin, Cookie, and WeightedLeastRequest; defaults to RoundRobin
    requestHeadersPolicy <HeadersPolicy>  # route-level request header policy
    responseHeadersPolicy <HeadersPolicy> # route-level response header policy
    pathRewritePolicy <PathRewritePolicy> # URL rewriting
      replacePrefix <[]ReplacePrefix>
        prefix <string>                   # PATH prefix to match
        replacement <string>              # target path to substitute
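The rewrite fields at the bottom of this spec compose with the routing examples later in the article. A hedged sketch that strips an `/api` prefix before forwarding (the resource name and path values are illustrative; the Service name is reused from the demos below):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-rewrite      # hypothetical name
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /api
    pathRewritePolicy:
      replacePrefix:
      - prefix: /api
        replacement: /         # e.g. /api/users would be forwarded upstream as /users
    services:
    - name: demoappv11
      port: 80
```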
HTTPProxy service resilience and health check specification
spec:
  routes <[]Route>
    timeoutPolicy <TimeoutPolicy>         # timeout policy
      response <string>                   # how long to wait for the server's response
      idle <string>                       # how long Envoy keeps the client connection idle after a timeout
    retryPolicy <RetryPolicy>             # retry policy
      count <int64>                       # number of retries, defaults to 1
      perTryTimeout <string>              # timeout for each retry attempt
    healthCheckPolicy <HTTPHealthCheckPolicy> # active health checking
      path <string>                       # path (HTTP endpoint) targeted by the probe
      host <string>                       # virtual host requested during the probe
      intervalSeconds <int64>             # probe interval, i.e. frequency; defaults to 5 seconds
      timeoutSeconds <int64>              # probe timeout; defaults to 2 seconds
      unhealthyThresholdCount <int64>     # consecutive failures before the backend is marked unhealthy
      healthyThresholdCount <int64>       # consecutive successes before the backend is marked healthy
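Example 5 later in the article exercises the timeout and retry policies; for completeness, an active health check per this spec could be attached to a route as in the sketch below (the resource name is hypothetical, and the `/healthz` endpoint is assumed to exist on the backend):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-healthcheck    # hypothetical name
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    healthCheckPolicy:
      path: /healthz             # assumed health endpoint on the backend
      intervalSeconds: 5         # probe every 5 seconds
      timeoutSeconds: 2          # fail a probe after 2 seconds
      unhealthyThresholdCount: 3 # eject after 3 consecutive failures
      healthyThresholdCount: 2   # readmit after 2 consecutive successes
    services:
    - name: demoappv12
      port: 80
```

Endpoints that fail the probe are removed from the Envoy cluster's load-balancing set until they pass again.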
Deploying Envoy (via Contour)
$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml
[root@k8s-master Ingress]# kubectl get ns
NAME STATUS AGE
default Active 14d
dev Active 13d
ingress-nginx Active 29h
kube-node-lease Active 14d
kube-public Active 14d
kube-system Active 14d
kubernetes-dashboard Active 21h
longhorn-system Active 21h
projectcontour Active 39m #newly created namespace
test Active 12d
[root@k8s-master Ingress]# kubectl get pod -n projectcontour
NAME READY STATUS RESTARTS AGE
contour-5449c4c94d-mqp9b 1/1 Running 3 37m
contour-5449c4c94d-xgvqm 1/1 Running 5 37m
contour-certgen-v1.18.1-82k8k 0/1 Completed 0 39m
envoy-n2bs9 2/2 Running 0 37m
envoy-q777l 2/2 Running 0 37m
envoy-slt49 1/2 Running 2 37m
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.100.120.94 <none> 8001/TCP 39m
envoy LoadBalancer 10.97.48.41 <pending> 80:32668/TCP,443:32278/TCP 39m #not on an IaaS platform, so EXTERNAL-IP stays <pending> (the resource request hangs); this does not affect access via NodePort
[root@k8s-master Ingress]# kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
...
extensionservices extensionservice,extensionservices projectcontour.io true ExtensionService
httpproxies proxy,proxies projectcontour.io true HTTPProxy
tlscertificatedelegations tlscerts projectcontour.io true TLSCertificateDelegation
- Create the virtual host www.ik8s.io
[root@k8s-master Ingress]# cat httpproxy-demo.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-demo
  namespace: default
spec:
  virtualhost:
    fqdn: www.ik8s.io                   # virtual host
    tls:
      secretName: ik8s-tls
      minimumProtocolVersion: "tlsv1.1" # lowest supported protocol version
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoapp-deploy              # backend Service
      port: 80
    permitInsecure: true                # do not redirect plaintext HTTP access to HTTPS
[root@k8s-master Ingress]# kubectl apply -f httpproxy-demo.yaml
httpproxy.projectcontour.io/httpproxy-demo configured
- View the proxy via httpproxy or httpproxies
[root@k8s-master Ingress]# kubectl get httpproxy
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-demo www.ik8s.io ik8s-tls valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get httpproxies
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-demo www.ik8s.io ik8s-tls valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl describe httpproxy httpproxy-demo
...
Spec:
Routes:
Conditions:
Prefix: /
Permit Insecure: true
Services:
Name: demoapp-deploy
Port: 80
Virtualhost:
Fqdn: www.ik8s.io
Tls:
Minimum Protocol Version: tlsv1.1
Secret Name: ik8s-tls
Status:
Conditions:
Last Transition Time: 2021-09-13T08:44:00Z
Message: Valid HTTPProxy
Observed Generation: 2
Reason: Valid
Status: True
Type: Valid
Current Status: valid
Description: Valid HTTPProxy
Load Balancer:
Events: <none>
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.100.120.94 <none> 8001/TCP 39m
envoy LoadBalancer 10.97.48.41 <pending> 80:32668/TCP,443:32278/TCP 39m
- Add a hosts entry and test access
[root@bigyong ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
...
192.168.54.171 www.ik8s.io
[root@bigyong ~]# curl www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, ServerIP: 192.168.12.39!
[root@bigyong ~]# curl www.ik8s.io:32668
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-gw6qp, ServerIP: 192.168.113.39!
- HTTPS訪問
[root@bigyong ~]# curl https://www.ik8s.io:32278
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
[root@bigyong ~]# curl -k https://www.ik8s.io:32278 #ignore the untrusted certificate; access succeeds
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: deployment-demo-867c7d9d55-9lnpq, Se
Example 1: Access control
- Create two Pods running different versions
[root@k8s-master Ingress]# kubectl create deployment demoappv11 --image='ikubernetes/demoapp:v1.1' -n dev
deployment.apps/demoappv11 created
[root@k8s-master Ingress]# kubectl create deployment demoappv12 --image='ikubernetes/demoapp:v1.2' -n dev
deployment.apps/demoappv12 created
- Create the corresponding Services
[root@k8s-master Ingress]# kubectl create service clusterip demoappv11 --tcp=80 -n dev
service/demoappv11 created
[root@k8s-master Ingress]# kubectl create service clusterip demoappv12 --tcp=80 -n dev
service/demoappv12 created
[root@k8s-master Ingress]# kubectl get svc -n dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoappv11 ClusterIP 10.99.204.65 <none> 80/TCP 19s
demoappv12 ClusterIP 10.97.211.38 <none> 80/TCP 17s
[root@k8s-master Ingress]# kubectl describe svc demoappv11 -n dev
Name: demoappv11
Namespace: dev
Labels: app=demoappv11
Annotations: <none>
Selector: app=demoappv11
Type: ClusterIP
IP: 10.99.204.65
Port: 80 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.12.53:80
Session Affinity: None
Events: <none>
[root@k8s-master Ingress]# kubectl describe svc demoappv12 -n dev
Name: demoappv12
Namespace: dev
Labels: app=demoappv12
Annotations: <none>
Selector: app=demoappv12
Type: ClusterIP
IP: 10.97.211.38
Port: 80 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.51.79:80
Session Affinity: None
Events: <none>
- Access test
[root@k8s-master Ingress]# curl 10.99.204.65
iKubernetes demoapp v1.1 !! ClientIP: 192.168.4.170, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# curl 10.97.211.38
iKubernetes demoapp v1.2 !! ClientIP: 192.168.4.170, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
- Deploy the Envoy HTTPProxy
[root@k8s-master Ingress]# cat httpproxy-headers-routing.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-headers-routing
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:                     # routing rules
  - conditions:
    - header:
        name: X-Canary        # headers contain X-Canary: true
        present: true
    - header:
        name: User-Agent      # User-Agent header contains "curl"
        contains: curl
    services:                 # requests matching both conditions are routed to demoappv11
    - name: demoappv11
      port: 80
  - services:                 # all other requests are routed to demoappv12
    - name: demoappv12
      port: 80
[root@k8s-master Ingress]# kubectl apply -f httpproxy-headers-routing.yaml
httpproxy.projectcontour.io/httpproxy-headers-routing unchanged
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-headers-routing www.ilinux.io valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get svc -n projectcontour
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 10.100.120.94 <none> 8001/TCP 114m
envoy LoadBalancer 10.97.48.41 <pending> 80:32668/TCP,443:32278/TCP 114m
- Access test
[root@bigyong ~]# cat /etc/hosts #hosts entries added
...
192.168.54.171 www.ik8s.io www.ilinux.io
[root@bigyong ~]# curl http://www.ilinux.io #v1.2 by default
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
[root@bigyong ~]# curl http://www.ilinux.io
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
- Since access is via curl, adding the header X-Canary: true satisfies both conditions, so v1.1 is returned
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@bigyong ~]# curl -H "X-Canary:true" http://www.ilinux.io
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
[root@k8s-master Ingress]# kubectl delete -f httpproxy-headers-routing.yaml
httpproxy.projectcontour.io "httpproxy-headers-routing" deleted
Example 2: Traffic splitting (canary release)
- Release to a small share of traffic first; once it proves healthy, roll out to all of it
- Deploy an Envoy HTTPProxy splitting traffic 90% / 10%
[root@k8s-master Ingress]# cat httpproxy-traffic-splitting.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-splitting
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
      weight: 90              # 90% of the traffic goes to v1.1
    - name: demoappv12
      port: 80
      weight: 10              # 10% of the traffic goes to v1.2
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-traffic-splitting www.ilinux.io valid Valid HTTPProxy
- Access test
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done #the v1.1 to v1.2 ratio is roughly 9:1
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.2 !! ClientIP: 192.168.113.54, ServerName: demoappv12-64c664955b-lkchk, ServerIP: 192.168.51.79!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
Example 3: Traffic mirroring
[root@k8s-master Ingress]# cat httpproxy-traffic-mirror.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-traffic-mirror
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
    - name: demoappv12
      port: 80
      mirror: true            # mirror traffic to this service
[root@k8s-master Ingress]# kubectl apply -f httpproxy-traffic-mirror.yaml
[root@k8s-master Ingress]# kubectl get httpproxy -n dev
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
httpproxy-traffic-mirror www.ilinux.io valid Valid HTTPProxy
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
demoappv11-59544d568d-5gg72 1/1 Running 0 74m
demoappv12-64c664955b-lkchk 1/1 Running 0 74m
- Access test
#all responses come from v1.1
[root@bigyong ~]# while true; do curl http://www.ilinux.io; sleep .1; done
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
iKubernetes demoapp v1.1 !! ClientIP: 192.168.113.54, ServerName: demoappv11-59544d568d-5gg72, ServerIP: 192.168.12.53!
- Check the logs of the v1.2 Pod: it received the same mirrored traffic and responded normally
[root@k8s-master Ingress]# kubectl get pod -n dev
NAME READY STATUS RESTARTS AGE
demoappv11-59544d568d-5gg72 1/1 Running 0 74m
demoappv12-64c664955b-lkchk 1/1 Running 0 74m
[root@k8s-master Ingress]# kubectl logs demoappv12-64c664955b-lkchk -n dev
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
192.168.4.170 - - [13/Sep/2021 09:35:01] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:24] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:46:29] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:12] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:47:25] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 09:50:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:49] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:50] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:51] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:52] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:53] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:03:56] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:07] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:04:28] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:14] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:05:16] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:57] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:58] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
192.168.113.54 - - [13/Sep/2021 10:41:59] "GET / HTTP/1.1" 200 -
[root@k8s-master Ingress]# kubectl delete -f httpproxy-traffic-mirror.yaml
httpproxy.projectcontour.io "httpproxy-traffic-mirror" deleted
Example 4: Custom load-balancing strategy
[root@k8s-master Ingress]# cat httpproxy-lb-strategy.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-lb-strategy
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - conditions:
    - prefix: /
    services:
    - name: demoappv11
      port: 80
    - name: demoappv12
      port: 80
    loadBalancerPolicy:
      strategy: Random        # Random load-balancing strategy
Example 5: HTTPProxy service resilience (timeout and retry policies)
[root@k8s-master Ingress]# cat httpproxy-retry-timeout.yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: httpproxy-retry-timeout
  namespace: dev
spec:
  virtualhost:
    fqdn: www.ilinux.io
  routes:
  - timeoutPolicy:
      response: 2s            # wait up to 2s for a response; no response within 2s is a timeout
      idle: 5s                # keep the client connection idle for up to 5s afterwards
    retryPolicy:
      count: 3                # retry up to 3 times
      perTryTimeout: 500ms    # timeout for each retry attempt
    services:
    - name: demoappv12
      port: 80
Reference:
https://baijiahao.baidu.com/s?id=1673615010327758104&wfr=spider&for=pc