1) A brief overview of HA cluster principles
i). Definition of an HA cluster
A cluster is a group of computers that acts as a single system to provide a set of network resources to users.
Each computer in the cluster is called a cluster node. As the business grows, the cluster can be scaled up by adding new nodes. Clusters come in three types: Load Balancing, High Availability, and High Performance; the "HA cluster" we usually speak of is the High Availability kind.
Cluster types: LB (lvs/nginx (http/upstream, stream/upstream)), HA, HP
ii). Measuring HA cluster availability
Availability formula:
HA = MTBF / (MTBF + MTTR) * 100%
MTBF: mean time between failures
MTTR: mean time to repair
The computed value falls between 0 and 1; the closer to 1, the more available the cluster.
Common targets: 99%, ..., 99.999%, 99.9999%
99% allows roughly 3.65 days of downtime per year; 99.9% roughly 8.76 hours; 99.99% roughly 53 minutes; 99.999% roughly 5.3 minutes.
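These figures follow directly from the formula; a quick sketch that reproduces them (the MTBF/MTTR sample values are made up for illustration):

```shell
#!/bin/bash
# Availability from HA = MTBF / (MTBF + MTTR); sample values are assumptions.
mtbf=2000   # hours of uptime between failures
mttr=2      # hours to repair
ha=$(awk -v m="$mtbf" -v r="$mttr" 'BEGIN { printf "%.4f", m / (m + r) * 100 }')
echo "availability: ${ha}%"
# minutes of downtime per year allowed by a 99.99% target
downtime=$(awk 'BEGIN { printf "%.0f", (1 - 0.9999) * 365 * 24 * 60 }')
echo "99.99% allows about ${downtime} minutes of downtime per year"
```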
iii). HA cluster working modes
Active/standby (master/backup)
The nodes run in a master/backup arrangement: the master carries the workload while the backup stands by and monitors it. When the master goes down, the backup takes over all of its work; once the master recovers, a preconfigured policy decides whether the service is switched back to it.
Dual-master (active/active)
All nodes run as masters: each carries and maintains its own services while monitoring the others. If any master goes down, another takes over all of its work so that service stays up.
iv). How an HA cluster operates
Auto-Detect phase: software on each host monitors its peer over redundant detection links, using monitoring programs and logic checks. The items checked include host hardware (CPU and peripherals), host networking, the operating system, database engines and other applications, and the host-to-disk-array connection. To guard against false verdicts, a safety detection window can be configured (the detection interval and retry count adjust the safety margin), and the information gathered over the redundant communication links is logged for maintenance reference.
Auto-Switch phase: once a host confirms that its peer has failed, the healthy host keeps running its original tasks and, following the configured fault-tolerance/backup mode, takes over the predefined backup procedures and the services that follow. This switchover is known as failover.
Auto-Recovery phase: after the healthy host has taken over, the failed host can be taken offline for repair. Once repaired, it reconnects over the redundant communication link and service can be switched back to it automatically; the whole recovery is handled by the HA software, and can also be configured as semi-automatic or disabled. Moving resources that had failed over to other nodes back to the original node once it is repaired and online again is commonly called failback.
2) Master/backup and master/master architectures with keepalived
Test environment: 5 hosts in total
RealServer1: 192.168.10.114/24
RealServer2: 192.168.10.224/24
DirectorServer1: 192.168.10.226/24 VirtualServer: 192.168.10.10/24
DirectorServer2: 192.168.10.228/24 VirtualServer: 192.168.10.10/24
Master/backup architecture with keepalived
i). Prepare the RealServer environment
[root@rs1 ~]#ntpdate ntp1.aliyun.com
31 Dec 23:50:12 ntpdate[1617]: step time server 120.25.115.20 offset 20.688191 sec
[root@rs1 ~]#systemctl stop firewalld.service
[root@rs1 ~]#systemctl disable firewalld.service
[root@rs1 ~]#getenforce
Disabled
ii). Configure an nginx test page (RS1 and RS2 are configured alike)
[root@rs1 ~]#yum install nginx -y
[root@rs1 ~]#vim /usr/share/nginx/html/index.html
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
[root@rs1 html]#systemctl start nginx.service
[root@rs1 html]#ss -tnl
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 128 *:111 *:*
LISTEN 0 128 *:80 *:*
iii). Create the lvs-dr RS-side script
[root@rs1 html]#vim RS.sh
#!/bin/bash
#
vip=192.168.10.10
mask=255.255.255.255
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig lo:0 $vip netmask $mask broadcast $vip up
route add -host $vip dev lo:0
;;
stop)
ifconfig lo:0 down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage: $(basename $0) start|stop"
exit 1
;;
esac
[root@rs1 html]#bash -n RS.sh
[root@rs1 html]#bash -x RS.sh start
[root@rs1 html]#scp RS.sh 192.168.10.224:/root/
iv). Configure the DirectorServer side (DR1 and DR2 are configured alike)
[root@dr1 ~]#ntpdate ntp.aliyun.com
1 Jan 00:35:12 ntpdate[1653]: step time server 203.107.6.88 offset 20.667238 sec
[root@dr1 ~]#systemctl stop firewalld.service
[root@dr1 ~]#systemctl disable firewalld.service
[root@dr1 ~]#getenforce
Disabled
v). Configure the keepalived file
(On DR2 adjust the corresponding IPs, set state to BACKUP, and lower the priority.)
[root@dr1 ~]#yum install ipvsadm keepalived -y
[root@dr1 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id 192.168.10.226
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 1
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 571f97b2
}
virtual_ipaddress {
192.168.10.10/24 dev ens33 label ens33:0
}
}
virtual_server 192.168.10.10 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.10.114 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.10.224 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@dr1 ~]#systemctl start keepalived
[root@dr1 ~]#ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0c:29:66:40:a6 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.226/24 brd 192.168.10.255 scope global noprefixroute dynamic ens33
valid_lft 11937sec preferred_lft 11937sec
inet 192.168.10.10/24 scope global secondary ens33
valid_lft forever preferred_lft forever
DR2 is configured and started the same way following the settings above.
vi). Test from a client
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
關(guān)閉dr1的keepalived服務(wù),查看dr2狀態(tài)已經(jīng)發(fā)生改變
[root@dr1 ~]#systemctl stop keepalived
[root@dr2 keepalived]#systemctl status keepalived
● keepalived.service - LVS and VRRP High Availability Monitor
Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
Active: active (running) since 二 2019-01-01 02:17:37 CST; 8s ago
Process: 49596 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 49597 (keepalived)
Tasks: 3
CGroup: /system.slice/keepalived.service
├─49597 /usr/sbin/keepalived -D
├─49598 /usr/sbin/keepalived -D
└─49599 /usr/sbin/keepalived -D
1月 01 02:17:37 dr2 Keepalived_vrrp[49599]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
1月 01 02:17:42 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Transition to MASTER STATE
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Entering MASTER STATE
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) setting protocol VIPs.
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
Service scheduling is still working normally, which shows the keepalived master/backup setup has taken effect; failback works the same way in reverse.
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
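To be alerted when a failover like the one above happens, keepalived can run a notification script on VRRP state transitions (referenced from the vrrp_instance block via notify_master / notify_backup / notify_fault). A minimal sketch; the script path, the message format, and using echo instead of a real mail/logger command are illustrative assumptions:

```shell
#!/bin/bash
# Hypothetical /etc/keepalived/notify.sh: report VRRP state transitions.
vip=192.168.10.10
notify() {
    # $1 is the new state for this node: MASTER, BACKUP or FAULT
    echo "$(hostname) is now $1 for VIP $vip"   # in practice, pipe to mail or logger
}
# keepalived would invoke the script with the new state, e.g.:
notify MASTER
```

Hook it in with lines such as `notify_master "/etc/keepalived/notify.sh MASTER"` inside each vrrp_instance block on both directors.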
Master/master architecture with keepalived
Make the corresponding adjustments on top of the master/backup setup above.
i). Adjust the RS-side script accordingly
[root@rs1 html]#cat RS2.sh
#!/bin/bash
#
vip=192.168.10.99
mask=255.255.255.255
case $1 in
start)
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
ifconfig lo:1 $vip netmask $mask broadcast $vip up
route add -host $vip dev lo:1
;;
stop)
ifconfig lo:1 down
echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
;;
*)
echo "Usage: $(basename $0) start|stop"
exit 1
;;
esac
Copy the script to RS2, then check and run it on both hosts
[root@rs1 html]#scp RS2.sh 192.168.10.224:/root/
[root@rs1 html]#bash -n RS2.sh
[root@rs1 html]#bash -x RS2.sh start
ii). Add a second VRRP instance, with the master/backup roles swapped, to each DR's conf file
DR1's configuration file:
[root@dr1 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id 192.168.10.226
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 1
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 571f97b2
}
virtual_ipaddress {
192.168.10.10
}
}
virtual_server 192.168.10.10 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.10.114 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.10.224 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
vrrp_instance VI_2 {
state BACKUP
interface ens33
virtual_router_id 2
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 572f97b2
}
virtual_ipaddress {
192.168.10.99
}
}
virtual_server 192.168.10.99 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.10.114 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.10.224 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@dr1 ~]#ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.10.10:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
TCP 192.168.10.99:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
DR2's configuration file:
[root@dr2 ~]#cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id 192.168.10.228
vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 1
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 571f97b2
}
virtual_ipaddress {
192.168.10.10
}
}
virtual_server 192.168.10.10 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.10.114 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.10.224 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
vrrp_instance VI_2 {
state MASTER
interface ens33
virtual_router_id 2
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass 572f97b2
}
virtual_ipaddress {
192.168.10.99
}
}
virtual_server 192.168.10.99 80 {
delay_loop 6
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.10.114 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
real_server 192.168.10.224 80 {
weight 1
HTTP_GET {
url {
path /index.html
status_code 200
}
connect_timeout 3
nb_get_retry 3
delay_before_retry 3
}
}
}
[root@dr2 ~]#ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.10.10:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
TCP 192.168.10.99:80 rr
-> 192.168.10.114:80 Route 1 0 0
-> 192.168.10.224:80 Route 1 0 0
iii). Test from a client
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.99/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
3) HTTP caching: principles and common headers
Program execution exhibits locality:
Temporal locality: data that has been accessed is likely to be accessed again soon
Spatial locality: when a piece of data is accessed, data near it is likely to be accessed too
cache: hit
hot zone: locality
- freshness (time validity):
- cache space exhausted: LRU, Least Recently Used eviction
- expiration: cache cleanup
Cache hit ratio: hit/(hit+miss)
- ranges over (0,1)
- document hit ratio: measured by number of pages
- byte hit ratio: measured by page size (bytes served)
Cacheability:
- private data: private, private cache only
- public data: public, public or private cache
http協(xié)議緩存的原理

基于nginx的反代服務(wù)時,為了加速性能,可以開啟nginx緩存;如果這nginx為負載均衡器時,還要承擔緩存的功能,在高并發(fā)下,會面臨帶寬瓶頸;因此在規(guī)模交大時,會在反代服務(wù)器后面添加專門用于緩存的服務(wù)器,來提供緩存功能。這樣讓代理功能的服務(wù)器只負責代理,讓緩存功能的服務(wù)器只負責緩存,當前端主機請求資源時,它所指向的上游服務(wù)器就不在是真正的服務(wù)器,而是緩存服務(wù)器,他們之間是通過http請求和http響應報文來通信;因此,代理服務(wù)器取資源時緩存服務(wù)器如果本地未能命中,會到后端服務(wù)器讀取數(shù)據(jù),取到數(shù)據(jù)后按照緩存策略是否可緩存,如果可緩存就把數(shù)據(jù)緩存到本地,并響應給前端主機;如果緩存服務(wù)器能命中,則緩存服務(wù)器直接響應,省去了到后端讀取數(shù)據(jù)的過程
Common headers
- Cache-related Header Fields
- The most important caching header fields are:
- Expires: absolute expiration time
- e.g. Expires: Thu, 22 Oct 2026 06:34:30 GMT
- Cache-Control: max-age=
- Etag
- If-None-Match
- Last-Modified
- If-Modified-Since
- Vary
- Age
- Cache validity checks:
- by expiration time:
- HTTP/1.0
- Expires
- HTTP/1.1
- Cache-Control: max-age=
- Cache-Control: s-maxage=
- by conditional requests:
- Last-Modified/If-Modified-Since
- Etag/If-None-Match
- Expires: Thu, 13 Aug 2026 02:05:12 GMT
- Cache-Control: max-age=315360000
- Etag:"1ec5-502264e2ae4c0"
- Last-Modified: Wed, 03 Sep 2014 10:00:27 GMT
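The Etag/If-None-Match pair above drives a simple decision on the server (or upstream cache): if the validator the client presents still matches the resource's current ETag, answer 304 and skip the body. A toy sketch of that decision, reusing the ETag from the example headers:

```shell
#!/bin/bash
# Toy model of ETag revalidation: what an origin (or cache) answers
# to a conditional GET that carries If-None-Match.
current_etag='"1ec5-502264e2ae4c0"'
revalidate() {
    local if_none_match="$1"
    if [ "$if_none_match" = "$current_etag" ]; then
        echo "304 Not Modified"   # body omitted; the cached copy is still fresh
    else
        echo "200 OK"             # entity changed; the full body is sent
    fi
}
revalidate '"1ec5-502264e2ae4c0"'   # validator still matches
revalidate '"deadbeef"'             # validator is stale
```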
4) Origin fetch (back-to-source) and common multi-level CDN caching
I. CDN origin fetch
How origin fetch works
i). When content on the origin site is updated, the origin can proactively push it to the CDN nodes.
ii). Conventional CDNs fetch from the origin on demand: when a user requests a URL and the CDN node it resolves to has no cached copy of the content, or the cached copy has expired, the node goes back to the origin to get it. If nobody requests the content, the CDN node will not fetch it from the origin on its own.
iii). "Origin domain" is CDN-industry jargon. Nodes normally fetch from the origin by IP, but when a customer's origin has several IPs that change frequently, CDN vendors use an origin domain instead so they do not have to keep updating the origin-IP configuration: even if the origin's IPs change, the existing configuration still works.
II. Common multi-level CDN caching
1. What a CDN is
- CDN stands for Content Delivery Network. The basic idea is to route around the bottlenecks on the Internet that can hurt transfer speed and stability, so that content is delivered faster and more reliably. By placing node servers throughout the network, a CDN forms an intelligent virtual network layered on top of the existing Internet. Based on real-time information such as network traffic, each node's connections and load, and the distance and response time to the user, the CDN system redirects each user request to the service node nearest that user. The goal is to serve content from nearby, relieving Internet congestion and improving the response time users see.
2. How a CDN works
- The client browser first checks whether its local cache has expired. If it has, the browser sends a request to the CDN edge node. The edge node then checks whether its cached copy of the requested data has expired; if not, it responds to the user directly and the HTTP request completes there. If the edge copy has expired, the CDN issues a back-to-source request to pull the latest data from the origin. A typical CDN topology is shown below:

3. CDN caching
After the browser's local cache expires, the browser requests the CDN edge node. Much like the browser cache, the edge node has a caching mechanism of its own.
4. Drawbacks of CDN caching
The CDN's offloading not only reduces user latency but also reduces origin load. The drawback, though, is obvious: when the site is updated and a CDN node has not refreshed its data in time, users can get stale or broken responses even after forcing the browser cache to expire with Ctrl+F5, because the edge node has not yet synchronized the latest data.
5. CDN caching policy
Edge caching policies differ between providers, but they generally follow the HTTP standard and use the Cache-Control: max-age field of the HTTP response header to set how long edge nodes cache data.
When a client requests data from a CDN node, the node checks whether its cached copy has expired. If it has not, the cached data is returned to the client directly; otherwise the node sends a back-to-source request to the origin, pulls the latest data, refreshes its local cache, and returns the fresh data to the client.
CDN providers usually let you set cache TTLs along several dimensions such as file extension and directory, giving finer-grained cache management.
The cache TTL has a direct impact on the back-to-source rate. If the TTL is short, edge data expires often, causing frequent origin fetches that raise origin load and add latency; if the TTL is too long, content updates propagate slowly. Developers need to tune cache TTLs to their particular workload.
6. CDN cache purging
CDN edge nodes are transparent to developers. Rather than relying on the browser's Ctrl+F5 force refresh to expire the local cache, developers can call the "purge cache" API their CDN provider exposes to clear edge-node caches. After pushing an update, they can use it to force cached data on CDN nodes to expire, guaranteeing that clients pull the latest data on their next visit.
5) Caching objects and reverse-proxying backend hosts with varnish
Request-message directives tell the cache how it may use a cached response to serve the request:
cache-request-directive =
"no-cache"
"no-store"
"max-age" "=" delta-seconds
"max-stale" [ "=" delta-seconds ]
"min-fresh" "=" delta-seconds
"no-transform"
"only-if-cached"
cache-extension
Response-message directives tell the cache how it may store the upstream server's response:
cache-response-directive =
"public"
"public" [ "=" <"> 1#field-name <">]
"no-cache" [ "=" <"> 1#field-name <">], cacheable, but must be revalidated before being served to the client
"no-store", the response must not be stored in any cache
"no-transform"
"must-revalidate"
"proxy-revalidate"
"max-age" "=" delta-seconds
"s-maxage" "=" delta-seconds
cache-extension
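A cache has to pull these directives out of the header line; for instance max-age versus s-maxage, where a shared cache prefers s-maxage when both are present. A small sketch over a sample header value (the header string is made up):

```shell
#!/bin/bash
# Extract max-age and s-maxage from a sample Cache-Control response header.
header='Cache-Control: public, max-age=3600, s-maxage=600'
max_age=$(printf '%s\n' "$header" | grep -o 'max-age=[0-9]*' | cut -d= -f2)
s_maxage=$(printf '%s\n' "$header" | grep -o 's-maxage=[0-9]*' | cut -d= -f2)
# A shared cache prefers s-maxage when present; a private cache uses max-age.
ttl=${s_maxage:-$max_age}
echo "shared-cache TTL: ${ttl}s"
```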
Open-source solutions:
- squid:
- varnish:
- varnish official site: https://varnish-cache.org/
- Community
- Enterprise
- Program architecture:
- Manager process
- Cacher process, containing several thread types:
- accept, worker, expiry...
- shared memory log:
- statistics: counters
- log area: log records
- varnishlog, varnishncsa, varnishstat....
- Configuration interface: VCL
- Varnish Configuration Language
- VCL compiler --> C compiler --> shared object
- varnish program environment:
- /etc/varnish/varnish.params: work characteristics of the varnish service process, e.g. listen address and port, storage backend
- /etc/varnish/default.vcl: caching behavior of each Child/Cache process
- Main program:
- /usr/sbin/varnishd
- CLI interface:
- /usr/bin/varnishadm
- Shared Memory Log tools:
- /usr/bin/varnishhist
- /usr/bin/varnishlog
- /usr/bin/varnishncsa
- /usr/bin/varnishstat
- /usr/bin/varnishtop
- Test tool:
- /usr/bin/varnishtest
- VCL reload script:
- /usr/sbin/varnish_reload_vcl
- Systemd Unit Files:
- /usr/lib/systemd/system/varnish.service
- the varnish service itself
- /usr/lib/systemd/system/varnishlog.service
- /usr/lib/systemd/system/varnishncsa.service
- log persistence services
- varnish Storage Types:
- -s [name=]type[,options]
- malloc[,size]
- in-memory storage; [,size] sets the space; all cached objects are lost on restart
- file[,path[,size[,granularity]]]
- on-disk file storage, opaque (black box); all cached objects are lost on restart
- persistent,path,size
- file-backed storage, black box; cached objects survive restart; experimental
- varnishd program options:
- program options: the /etc/varnish/varnish.params file
- -a address[:port][,address[:port]], default port 6081
- -T address[:port], management CLI, default port 6082
- -s [name=]type[,options], defines the storage backend
- -u user
- -g group
- -f config: the VCL configuration file
- -F: run in the foreground
- ....
- runtime parameters: DAEMON_OPTS in the /etc/varnish/varnish.params file
- DAEMON_OPTS="-p thread_pool_min=5 -p thread_pool_max=500 -p thread_pool_timeout=300"
- -p param=value: set a runtime parameter and its value; may be repeated
- -r param[,param...]: mark the listed parameters read-only
- Reload the VCL configuration file:
~]# varnish_reload_vcl
- varnishadm
-S /etc/varnish/secret -T [ADDRESS:]PORT
help [<command>]
ping [<timestamp>]
auth <response>
quit
banner
status
start
stop
vcl.load <configname> <filename>
vcl.inline <configname> <quoted_VCLstring>
vcl.use <configname>
vcl.discard <configname>
vcl.list
param.show [-i] [<param>]
param.set <param> <value>
panic.show
panic.clear
storage.list
vcl.show [-v] <configname>
backend.list [<backend_expression>]
backend.set_health <backend_expression> <state>
ban <field> <operator> <arg> [&& <field> <oper> <arg>]...
ban.list
- Configuration file management:
- vcl.list
- vcl.load: load and compile
- vcl.use: activate
- vcl.discard: delete
- vcl.show [-v] <configname>: show the details of the given configuration
- Runtime parameters:
- param.show -l: list them
- param.show <PARAM>
- param.set <PARAM> <VALUE>
- Cache storage:
- storage.list
- Backend servers:
- backend.list
VCL:
- a "domain"-specific configuration language
- state engine
- VCL has multiple state engines; the states are related but isolated from one another. Each state engine can use return(x) to indicate which engine to hand off to next, and each state engine corresponds to one configuration section in the vcl file, namely a subroutine
- vcl_hash --> return(hit) --> vcl_hit
- default vcl_recv configuration:
sub vcl_recv {
if (req.method == "PRI") {
/* We do not support SPDY or HTTP/2.0 */
return (synth(405));
}
if (req.method != "GET" &&
req.method != "HEAD" &&
req.method != "PUT" &&
req.method != "POST" &&
req.method != "TRACE" &&
req.method != "OPTIONS" &&
req.method != "DELETE") {
/* Non-RFC2616 or CONNECT which is weird. */
return (pipe);
}
if (req.method != "GET" && req.method != "HEAD") {
/* We only deal with GET and HEAD by default */
return (pass);
}
if (req.http.Authorization || req.http.Cookie) {
/* Not cacheable by default */
return (pass);
}
return (hash);
}
- Client Side:
- vcl_recv, vcl_pass, vcl_hit, vcl_miss, vcl_pipe, vcl_purge, vcl_synth, vcl_deliver
- vcl_recv:
- hash: vcl_hash
- pass: vcl_pass
- pipe: vcl_pipe
- synth: vcl_synth
- purge: vcl_hash --> vcl_purge
- vcl_hash:
- lookup:
- hit: vcl_hit
- miss: vcl_miss
- pass, hit_for_pass: vcl_pass
- purge: vcl_purge
- Backend Side:
- vcl_backend_fetch, vcl_backend_response, vcl_backend_error
- Two special engines:
- vcl_init: VCL code executed before any request is processed; mainly used to initialize VMODs
- vcl_fini: called once all requests have ended, when the VCL configuration is discarded; mainly used to clean up VMODs
VCL syntax:
- (1) VCL files start with vcl 4.0
- (2) //,# and /* foo */ for comments
- (3) Subroutines are declared with the sub keyword, e.g. sub vcl_recv {...}
- (4) No loops; state-limited variables (built-in variables restricted to particular engines)
- (5) Terminating statements with a keyword for the next action as argument of the return() function, i.e. return(action); this is what drives the state-engine transitions
- (6) Domain-specific
The VCL Finite State Machine
- (1) Each request is processed separately
- (2) Each request is independent from others at any given time
- (3) States are related, but isolated
- (4) return(action); exits one state and instructs Varnish to proceed to the next state
- (5) Built-in VCL code is always present and appended below your own VCL
Three main syntax constructs
sub subroutine {
...
}
if CONDITION {
...
}else{
...
}
return(),hash_data()
VCL Built-in Functions and Keywords
- 函數(shù):
- regsub(str,regex,sub)
- regsuball(str,regex,sub)
- ban(boolean expression)
- hash_data(input)
- synthetic(str)
- keywords:
- call subroutine, return(action), new, set, unset
Operators:
- ==, !=, ~, >, >=, <, <=
- logical operators: &&, ||, !
- variable assignment: =
- Example (obj.hits, typically in vcl_deliver):
if (obj.hits > 0) {
set resp.http.X-Cache = "HIT via " + server.ip;
} else {
set resp.http.X-Cache = "MISS via " + server.ip;
}
Variable types:
Built-in variables:
req.*: request; the request message sent by the client
req.http.*
req.http.User-Agent, req.http.Referer,...
bereq.*: the HTTP request varnish sends to the BE (backend) host
bereq.http.*
beresp.*: the response message the BE host sends back to varnish
beresp.http.*
resp.*: the response varnish sends to the client
obj.*: attributes of the cached object stored in the cache space; read-only
Commonly used variables:
bereq.*, req.*:
bereq.http.HEADERS
bereq.request: the request method
bereq.url: the requested URL
bereq.proto: the protocol version of the request
bereq.backend: the backend host to route to
req.http.Cookie: the value of the Cookie header in the client's request
req.http.User-Agent ~ "chrome"
beresp.*, resp.*:
beresp.http.HEADERS
beresp.status: the response status code
beresp.proto: the protocol version
beresp.backend.name: the BE host's hostname
beresp.ttl: the remaining cacheable lifetime of the BE host's response
obj.*
obj.hits: the number of times this object has been hit in the cache
obj.ttl: the object's TTL
server.*
server.ip
server.hostname
client.*
client.ip
User-defined variables:
- set
- unset
Example 1: force requests for certain resources to bypass the cache lookup
sub vcl_recv {
if(req.url ~ "(?i)^/(login|admin)") {
return(pass);
}
}
Example 2: for particular resource types, e.g. public images, strip the private markers and force a cacheable TTL in varnish
if(beresp.http.cache-control !~ "s-maxage") {
if(bereq.url ~ "(?i)\.(jpg|jpeg|png|gif|css|js)$") {
unset beresp.http.Set-Cookie;
set beresp.ttl=3600s;
}
}
Example 3: append the client address to X-Forwarded-For
if(req.restarts == 0) {
if(req.http.X-Forwarded-For) {
set req.http.X-Forwarded-For = req.http.X-Forwarded-For + "," + client.ip;
} else {
set req.http.X-Forwarded-For = client.ip;
}
}
Trimming cached objects: purge, ban
- (1) enable the purge operation
sub vcl_purge {
return(synth(200,"Purged"));
}
- (2) 何時執(zhí)行purge操作
sub vcl_recv {
if(req.method == "PURGE") {
return(purge);
}
...
}
- add an access-control rule for such requests:
acl purgers {
"127.0.0.1";
"192.168.0.0"/24;
}
sub vcl_recv {
# allow PURGE from localhost and 192.168.0...
if (req.method == "PURGE") {
if (!client.ip ~ purgers) {
return (synth(405, "Purging not allowed for " + client.ip));
}
return (purge);
}
}
sub vcl_purge {
set req.method = "GET";
return (restart);
}
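The purgers acl above boils down to matching the client address against a whitelist. The same decision, restated as a toy shell function so the logic is easy to test (the addresses mirror the example acl; the /24 is approximated with a glob):

```shell
#!/bin/bash
# Toy re-implementation of the purgers acl decision from the VCL above.
allow_purge() {
    case "$1" in
        127.0.0.1|192.168.0.*) echo "purge allowed" ;;
        *) echo "405 Purging not allowed for $1" ;;
    esac
}
allow_purge 127.0.0.1
allow_purge 192.168.0.42
allow_purge 10.0.0.9
```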
Banning
- (1)varnishadm:
ban <field> <operator> <arg>
Example:
ban req.url ~ ^/javascripts
- (2) defined in the configuration file, using the ban() function
Example:
if (req.method == "BAN") {
ban("req.http.host == " + req.http.host + " && req.url == " + req.url);
# Throw a synthetic page so the request won't go to the backend.
return(synth(200, "Ban added"));
}


如何設(shè)定使用多個后端主機
backend default {
.host = "172.16.100.6";
.port = "80";
}
backend appsrv {
.host = "172.16.100.7";
.port = "80";
}
sub vcl_recv {
if(req.url ~ "(?i)\.php$") {
set req.backend_hint = appsrv;
}else {
set req.backend_hint = default;
}
...
}
Director
- a varnish module
- must be imported before use:
- import directors;
Example:
import directors;
backend server1 {
.host =
.port =
}
backend server2 {
.host =
.port =
}
sub vcl_init {
new GROUP_NAME = directors.round_robin();
GROUP_NAME.add_backend(server1);
GROUP_NAME.add_backend(server2);
}
sub vcl_recv {
set req.backend_hint = GROUP_NAME.backend();
}
Session stickiness based on a cookie
sub vcl_init {
new h = directors.hash();
h.add_backend(one, 1); // backend 'one' with weight '1'
h.add_backend(two, 1); // backend 'two' with weight '1'
}
sub vcl_recv {
// pick a backend based on the cookie header of the client
set req.backend_hint = h.backend(req.http.cookie);
}
BE Health Check
backend BE_NAME {
.host =
.probe = {
.url =
.timeout =
.interval =
.window =
.threshold =
}
}
- .probe: defines the health-check method
- .url: the URL requested during checks, default "/"
- .request: the exact request to send
- .request =
- "GET /.healthtest.html HTTP/1.1"
- "Host: www.magedu.com"
- "Connection: close"
- .window: how many of the most recent checks to consider when judging health
- .threshold: how many of the last .window checks must succeed for the backend to count as healthy
- .interval: check frequency
- .timeout: timeout for each check
- .expected_response: expected response code, default 200
健康狀態(tài)檢測的配置方式:
- (1) probe PB_NAME = {}
backend NAME = {
.probe = PB_NAME;
...
}
- (2) backend NAME {}
backend NAME = {
.probe = {
...
}
}
示例:
probe check {
.url = "/healthcheck.html";
.timeout = 1s;
.interval = 2s;
.window = 5;
.threshold = 4;
}
backend default {
.host = "10.1.0.68";
.port = "80";
.probe = check;
}
backend appsrv {
.host = "10.1.0.69";
.port = "80";
.probe = check;
}
設(shè)置后端的主機屬性
backend BE_NAME {
...
.connect_timeout = 0.5s;
.first_byte_timeout = 20s;
.between_bytes_timeout = 5s;
.max_connections = 50;
}
varnish runtime parameters:
- thread model:
- cache-worker
- cache-main
- ban lurker
- acceptor
- epoll/kqueue
- ...
線程相關(guān)的參數(shù):
在線程池內(nèi)部,其每一個請求由一個線程來處理,其worker線程的最大數(shù)決定了varnish的并發(fā)響應能力
- thread_pools: Number of worker thread pools. 最好小于或等于CPU核心數(shù)量
- thread_pool_max: Maximum number of worker threads per pool. 每線程池的最大線程數(shù)
- thread_pool_min: Minimum number of worker threads per pool. 額外意義為"最大空閑線程數(shù)"
- 最大并發(fā)連接數(shù)=thread_poos * thread_pool_max
- thread_pool_timeout: Period of time before idle threads are destroyed.
- thread_pool_add_delay: Period of time to wait for subsequent thread creation.
- thread_pool_destroy_delay: Added time to thread_pool_timeout.
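The concurrency ceiling is just the product of the two settings; for example, with two pools and the thread_pool_max=500 from the DAEMON_OPTS example above:

```shell
#!/bin/bash
# Maximum concurrent connections = thread_pools * thread_pool_max
thread_pools=2
thread_pool_max=500
max_conns=$((thread_pools * thread_pool_max))
echo "worker-thread concurrency ceiling: $max_conns"
```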
Timer相關(guān)的參數(shù):
- send_timeout:
- timeout_idle:
- timeout_req
- 設(shè)置方式:
- vcl.param
- param.set
- 永久有效的方法:
- varnish.params
- DEAMON_OPTS="-p PARAM1=VALUE -p PARAM2=VALUE"
- varnish.params
varnish log area
shared memory log:
- counters
- log records
varnishstat - Varnish Cache statistics
- -1
- -1 -f FIELD_NAME
- -l: lists the field names that can be used with the -f option
- MAIN.cache_hit
- MAIN.cache_miss
### varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
### varnishstat -l -f MAIN -f MEMPOOL
- varnishtop - Varnish log entry ranking
- -1
- -i taglist: may be given multiple times, and one option may carry several tags
- -I <[taglist:]regex>
- -x taglist: exclusion list
- -X <[taglist:]regex>
- varnishlog - Display Varnish logs
- varnishncsa - Display Varnish logs in Apache/NCSA combined log format