1. Nginx + Keepalived for site high availability
Linux cluster types:
LB (load balancing): nginx, varnish (director module), haproxy, lvs
HA (high availability): keepalived, heartbeat; a redundant standby device is provided for the active device and takes over its work when the active device fails
HP (high performance):
keepalived implements IP address failover mainly through VRRP, the Virtual Router Redundancy Protocol, combined with scripts hooked in through its interfaces to achieve high availability
How keepalived is set up
Prepare two machines:
192.168.1.198
192.168.1.196
Synchronize the clocks on both machines: ntpdate ntp1.aliyun.com
Disable the firewall, or adjust the firewall rules to permit keepalived's VRRP packets
keepalived is carried in the Base repository and can be installed directly:
yum install keepalived    # install keepalived on both nodes
keepalived's configuration has three major sections:
      GLOBAL CONFIGURATION    # global settings
      VRRPD CONFIGURATION     # VRRP virtual router settings
      LVS CONFIGURATION       # LVS-related settings
A simple configuration example:
! Configuration File for keepalived
global_defs {                       # global settings
    notification_email {            # alert mail recipients
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1           # mail server address
    smtp_connect_timeout 30         # connect timeout
    router_id node1.com             # host identifier
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.0.0.1     # multicast group used for VRRP advertisements
    vrrp_iptables
}
vrrp_instance VI_1 {                # one instance, i.e. one virtual router
    state MASTER                    # this node starts as the master
    interface ens33                 # bind to the node's real NIC
    virtual_router_id 51            # virtual router id (must match on both nodes)
    priority 100                    # priority
    advert_int 1                    # advertisement interval, in seconds
    authentication {                # peer authentication
        auth_type PASS              # authentication type
        auth_pass 1111              # password
    }
    virtual_ipaddress {             # the virtual router's IP address, interface and label
        192.168.1.254/24 brd 192.168.1.255 dev ens33 label ens33:1
    }
}
After configuring, copy this file to the standby machine, change state MASTER to state BACKUP, and lower the priority; start the service on both nodes and the setup takes effect
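On the standby node only the role and the priority change. A minimal sketch of the standby's instance (the priority value 90 is an assumption; anything below the master's 100 works):

```
vrrp_instance VI_1 {
    state BACKUP              # standby role on this node
    interface ens33
    virtual_router_id 51      # must match the master's id
    priority 90               # assumed value, lower than the master's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111        # must match the master
    }
    virtual_ipaddress {
        192.168.1.254/24 brd 192.168.1.255 dev ens33 label ens33:1
    }
}
```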
keepalived's message notification mechanism
Notifications are implemented by calling scripts through the notify options:
# notify scripts, alert as above
          notify_master <STRING>|<QUOTED-STRING> [username [groupname]]   # script run when this node becomes the master
          notify_backup <STRING>|<QUOTED-STRING> [username [groupname]]   # script run when this node transitions to backup
          notify_fault <STRING>|<QUOTED-STRING> [username [groupname]]    # script run when the instance enters the FAULT state
          notify_stop <STRING>|<QUOTED-STRING> [username [groupname]]     # executed when stopping vrrp
          notify <STRING>|<QUOTED-STRING> [username [groupname]]          # generic form, called on any transition
How to use a notification script:
Example notification script:
#!/bin/bash
#
contact='root@localhost'

notify() {
    local mailsubject="$(hostname) to be $1, vip floating"
    local mailbody="$(date +'%F %T'): vrrp transition, $(hostname) changed to be $1"
    echo "$mailbody" | mail -s "$mailsubject" "$contact"
}

case $1 in
master)
    notify master
    ;;
backup)
    notify backup
    ;;
fault)
    notify fault
    ;;
*)
    echo "Usage: $(basename $0) {master|backup|fault}"
    exit 1
    ;;
esac
Wiring the script into the configuration:
notify_master "/etc/keepalived/notify.sh master"
notify_backup "/etc/keepalived/notify.sh backup"
notify_fault "/etc/keepalived/notify.sh fault"
Example configuration for a highly available ipvs cluster:
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.0.100.19
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 14
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        10.1.0.93/16 dev eno16777736
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
virtual_server 10.1.0.93 80 {           # the virtual service (VIP and port)
    delay_loop 3                        # health-check the real servers every 3 seconds
    lb_algo rr                          # scheduling algorithm
    lb_kind DR                          # LVS forwarding type
    protocol TCP
    sorry_server 127.0.0.1 80           # "sorry" server used when all real servers are down
    real_server 10.1.0.69 80 {          # a backend real server
        weight 1
        HTTP_GET {                      # HTTP health check
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
    real_server 10.1.0.71 80 {
        weight 1
        HTTP_GET {
            url {
                path /
                status_code 200
            }
            connect_timeout 1
            nb_get_retry 3
            delay_before_retry 1
        }
    }
}
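keepalived programs these rules into the kernel's IPVS table itself, so no manual ipvsadm commands are needed. Once the service is up, roughly the following should be visible (a sketch of `ipvsadm -Ln` output, not verbatim):

```
# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.93:80 rr
  -> 10.1.0.69:80                 Route   1      0          0
  -> 10.1.0.71:80                 Route   1      0          0
```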
Lab walkthrough
Prepare the machines:
ipvs and keepalived are deployed on two machines, 192.168.1.196 and 192.168.1.198; the backend real servers are two nginx machines, 192.168.1.201 and 192.168.1.202
Also deploy nginx on the front-end machines, to "say sorry" when the backend machines are down
Set the real-server kernel parameters for the DR type: a script adjusts the ARP parameters and adds the VIP
Run on both real servers:
#!/bin/bash
vip=192.168.1.254          # set to the virtual router's IP address
interface="lo:0"
case $1 in
start)
        echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ifconfig $interface $vip netmask 255.255.255.255 broadcast $vip up
        route add -host $vip dev $interface
        ;;
stop)
        ifconfig $interface down
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
        echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
        ;;
*)
        echo "Usage: $(basename $0) {start|stop}"
        exit 1
        ;;
esac
Edit the keepalived configuration file and add the virtual_server block with the two real_server entries; the ipvsadm rules are generated automatically.

Stop one real server: the connection is broken, and after a few seconds all requests are scheduled to the remaining real server.

keepalived can call external helper scripts to monitor resources and dynamically adjust the priority according to the monitored state.
This takes two steps: (1) define a script; (2) reference the script from the vrrp instance.
vrrp_script <SCRIPT_NAME> {
    script ""          # the command or script to run
    interval INT       # run every INT seconds
    weight -INT        # amount subtracted from the priority while the check fails
    rise 2             # consecutive successes before the resource is considered up
    fall 3             # consecutive failures before it is considered down
}
track_script {
    SCRIPT_NAME_1
    SCRIPT_NAME_2
    ...
}
        Note:
            vrrp_script chk_down {
                script "/bin/bash -c '[[ -f /etc/keepalived/down ]]' && exit 1 || exit 0"
                interval 1
                weight -10
            }
            The test [[ -f /etc/keepalived/down ]] must specifically be run as an argument of bash!
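The reason is that `[[` is a bash builtin, not an external program, so keepalived cannot exec it directly. A quick sketch of the behavior, using a temporary flag file instead of /etc/keepalived/down:

```shell
#!/bin/bash
# Demonstrate the "[[ -f ... ]] must run inside bash" point with a temp file.
flag=$(mktemp -u)                       # a path that does not exist yet

/bin/bash -c "[[ -f $flag ]]"
echo "absent:  exit=$?"                 # non-zero: the file is not there

touch "$flag"                           # simulate creating the "down" flag
/bin/bash -c "[[ -f $flag ]]"
echo "present: exit=$?"                 # zero: the check now succeeds
rm -f "$flag"
```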
Example: a highly available nginx service
Edit the keepalived configuration file and add an external script that checks the nginx service; while the check fails, the node's priority is lowered so the VIP can fail over to the peer (restarting nginx automatically requires an extra step in the checked script)
! Configuration File for keepalived
global_defs {
    notification_email {
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id node1
    vrrp_mcast_group4 224.0.100.19
}
vrrp_script chk_nginx {
    script "killall -0 nginx && exit 0 || exit 1"   # probe: is an nginx process alive?
    interval 1
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface eno16777736
    virtual_router_id 14
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        10.1.0.93/16 dev eno16777736
    }
    track_script {
        chk_nginx
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"
}
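To actually restart nginx when the check fails, the checked command can first attempt a recovery. A minimal sketch of that pattern (the helper name and the wiring shown in the comments are assumptions, not keepalived API):

```shell
#!/bin/bash
# check_and_restart: run a health check; on failure make one restart attempt,
# then re-run the check so the final exit status reflects the real state.
check_and_restart() {
    local check_cmd="$1" restart_cmd="$2"
    if eval "$check_cmd"; then
        return 0                 # healthy: nothing to do
    fi
    eval "$restart_cmd"          # single recovery attempt
    eval "$check_cmd"            # the re-check's status is returned
}

# For keepalived this would be saved as e.g. /etc/keepalived/chk_nginx.sh with
#   check_and_restart "killall -0 nginx" "systemctl restart nginx"
# and referenced from vrrp_script via: script "/etc/keepalived/chk_nginx.sh"
check_and_restart "true" "echo restarting"   # demo call with dummy commands
echo "status=$?"
```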
2. A keepalived dual-master (master/master) setup
Dual-master model
Two virtual-router instances are configured; each host is the master of one instance and the backup of the other
! Configuration File for keepalived
global_defs {                       # global settings
    notification_email {            # alert mail recipients
        root@localhost
    }
    notification_email_from keepalived@localhost
    smtp_server 127.0.0.1           # mail server address
    smtp_connect_timeout 30         # connect timeout
    router_id node1.com             # host identifier
    vrrp_skip_check_adv_addr
    vrrp_strict
    vrrp_garp_interval 0
    vrrp_gna_interval 0
    vrrp_mcast_group4 224.0.0.1     # multicast group used for VRRP advertisements
    vrrp_iptables
}
vrrp_instance VI_1 {                # the first instance (virtual router)
    state MASTER                    # this node is the master here
    interface ens33                 # the node's real NIC
    virtual_router_id 51            # virtual router id
    priority 100                    # priority
    advert_int 1
    authentication {                # peer authentication
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {             # the first VIP, with interface and label
        192.168.1.254/24 brd 192.168.1.255 dev ens33 label ens33:1
    }
}
vrrp_instance VI_2 {                # the second virtual router
    state BACKUP                    # this node is the standby in this instance
    interface ens33                 # NIC name
    virtual_router_id 55            # must differ from the first instance's id
    priority 98                     # lower, because this node is the backup here
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {             # the second VIP; this node is its standby
        192.168.1.253/24 brd 192.168.1.255 dev ens33 label ens33:3
    }
}

After defining this on the first node, copy the file to the other node and there configure the second virtual router as the master (and the first as the backup, adjusting the priorities to match)

Configuration done
Start the service with systemctl start keepalived; here the second machine is started first
Once started, the second machine holds both addresses and sends two kinds of advertisements: one for virtual router id 55 with priority 100 (it is the master of the second virtual router), and one for id 51 with priority 99 (the first virtual router, where it is meant to be the backup)


Now start the first machine: systemctl start keepalived

Once it is up, it preempts the address of the virtual router in which it has the higher priority and takes over as its master

3. Haproxy + Keepalived for site high availability
Create the haproxy check script
Give it execute permission with chmod +x check_haproxy.sh; the script contents are as follows:
#!/bin/bash
# auto-check the haproxy process; if it has died, stop keepalived
# so that the VIP fails over to the backup node
killall -0 haproxy
if [[ $? -ne 0 ]]; then
    /etc/init.d/keepalived stop
fi
Master-side keepalived.conf for haproxy + keepalived:
! Configuration File for keepalived
global_defs {
    notification_email {
        xxx@139.com
    }
    notification_email_from wgkgood@139.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
    script "/data/sh/check_haproxy.sh"
    interval 2
    weight 2
}
# VIP1
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 151
    priority 100
    advert_int 5
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.0.133
    }
    track_script {
        chk_haproxy
    }
}
Deploy the same check_haproxy.sh script, with execute permission, on the Backup node as well.
Backup-side keepalived.conf for haproxy + keepalived:
! Configuration File for keepalived
global_defs {
    notification_email {
        xxx@139.com
    }
    notification_email_from wgkgood@139.com
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_script chk_haproxy {
    script "/data/sh/check_haproxy.sh"
    interval 2
    weight 2
}
# VIP1
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 151
    priority 90
    advert_int 5
    nopreempt
    authentication {
        auth_type PASS
        auth_pass 2222
    }
    virtual_ipaddress {
        192.168.0.133
    }
    track_script {
        chk_haproxy
    }
}
4. Set up a Tomcat server and access it through an nginx reverse proxy
Software architecture patterns:
    Layered architecture: presentation, business, persistence and database layers
    Event-driven architecture: distributed asynchronous architecture
    Microkernel architecture, i.e. plug-in architecture
    Microservices architecture
JDK: the Java Development Kit
Servlet: the Java class library for developing server-side web components
Install a JDK; here openjdk is used
yum install java-1.8.0-openjdk-devel    # the -devel package resolves the other dependencies automatically
wget http://mirrors.tuna.tsinghua.edu.cn/apache/tomcat/tomcat-8/v8.5.45/bin/apache-tomcat-8.5.45.tar.gz    # download the tomcat binary package
tar xf apache-tomcat-8.5.45.tar.gz -C /usr/local/    # extract into /usr/local
cd /usr/local
ln -s apache-tomcat-8.5.45 tomcat    # symlink the extracted directory (not the tarball), so later upgrades only need the link changed
useradd tomcat    # add the user; tomcat runs as an unprivileged user by default, so adjust ownership and permissions
cd /usr/local/tomcat
chown -R .tomcat .
chmod g+r conf/*
chmod g+rx conf/
chown -R tomcat logs/ temp/ work/
vim /etc/profile.d/cols.sh    # shell profile tweaks: a colored prompt, plus tomcat's bin directory on PATH
PS1='[\e[32;40m\u@\h \W\e[m]$ '
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/tomcat/bin
catalina.sh start    # start tomcat
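For unattended startup, a systemd unit can wrap the catalina scripts. A minimal sketch, assuming the /usr/local/tomcat layout and the tomcat user from above (save as /etc/systemd/system/tomcat.service, then systemctl daemon-reload && systemctl start tomcat):

```
[Unit]
Description=Apache Tomcat
After=network.target

[Service]
Type=forking
User=tomcat
Environment=CATALINA_HOME=/usr/local/tomcat
ExecStart=/usr/local/tomcat/bin/startup.sh
ExecStop=/usr/local/tomcat/bin/shutdown.sh

[Install]
WantedBy=multi-user.target
```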

Port 8005 is the shutdown port, 8009 is the AJP connector port, and 8080 serves HTTP

Key classes inside Tomcat
Tomcat's core components, as laid out in server.xml:
<Server>
    <Service>
        <Connector/>
        <Connector/>
        ...
        <Engine>
            <Host>
                <Context/>
                <Context/>
                ...
            </Host>
            <Host>
                ...
            </Host>
            ...
        </Engine>
    </Service>
</Server>
Each component is implemented by a Java class; the components fall roughly into these types:
Top-level component: Server
Service component: Service
Connector components: http, https, ajp (Apache JServ Protocol)
Container components: Engine, Host, Context
Nested components: valve, logger, realm, loader, manager, ...
Cluster components: listener, cluster, ...
Operations related to deploying a webapp:
deploy: place the webapp's source files into the target directory (where the web application files live), configure the tomcat server so the webapp can be accessed via the paths defined in web.xml and context.xml, and have the class loader load its own and its dependent classes into the JVM;
There are two ways to deploy:
automatic deployment: auto deploy
manual deployment:
cold deploy: copy the webapp into place, then start tomcat;
hot deploy: deploy without stopping tomcat;
deployment tools: manager, ant scripts, tcd (tomcat client deployer), etc.;
undeploy: stop the webapp and unload it from the tomcat instance;
start: start a webapp that is in the stopped state;
stop: stop the webapp so it no longer serves users; its classes remain in the JVM;
redeploy: deploy it again;
Layout of a JSP webapp:
/: the webapp's root directory
index.jsp, index.html: the home page;
WEB-INF/: the webapp's private resource path, normally holding its web.xml and context.xml configuration files;
META-INF/: similar to WEB-INF/;
classes/: class files, the classes this webapp provides;
lib/: classes this webapp provides, packaged as jar files;
tomcat's configuration files:
server.xml: the main configuration file;
web.xml: a webapp can only be accessed after being "deployed"; deployment is usually defined by its web.xml, stored in the WEB-INF/ directory; the global copy of this file provides default deployment settings for all webapps;
context.xml: each webapp can have a dedicated configuration file, usually its own context.xml stored in the WEB-INF/ directory; the global copy provides default settings for all webapps;
tomcat-users.xml: the accounts and passwords used for user authentication;
catalina.policy: the security policy applied when tomcat is started with the -security option;
catalina.properties: Java property definitions, used to set class-loader paths and some JVM tuning parameters;
logging.properties: logging configuration; log4j
Manually provide a test application and cold-deploy it:    # example
# mkdir -pv /usr/local/tomcat/webapps/myapp/{classes,lib,WEB-INF}
Create the file /usr/local/tomcat/webapps/myapp/index.jsp
<%@ page language="java" %>
<%@ page import="java.util.*" %>
<html>
    <head>
        <title>Test Page</title>
    </head>
    <body>
        <% out.println("hello world"); %>
    </body>
</html>
# With index.jsp placed in the myapp directory, the webapp is deployed automatically

The work/ directory holds the source code generated from the translated JSPs

Logging into tomcat's GUI backend
Accessing the tomcat backend prompts for an account and password; enable an account in the tomcat-users.xml file and bind it to the corresponding roles

<role rolename="admin-gui"/>    <!-- role for the GUI management interface -->
<user username="admin" password="adminadmin" roles="admin,manager,admin-gui,admin-script,manager-gui,manager-script,manager-jmx,manager-status"/>    <!-- one account bound to several roles -->



tomcat's common component configuration:
Server: represents a tomcat instance, i.e. one visible Java process; it listens on port 8005 and accepts only the string "SHUTDOWN". The ports the servers listen on must not collide, so when starting several instances on one physical host, change each instance's listening ports;
Service: associates one or more connector components with one engine component;
Connector component: an endpoint
Responsible for accepting requests; the three common kinds are http/https/ajp;
Requests entering tomcat fall into two classes:
(1) standalone: the request comes directly from a client browser;
(2) reverse-proxied by another web server: the request comes from a front-end proxy;
nginx --> http connector --> tomcat
httpd(proxy_http_module) --> http connector --> tomcat
httpd(proxy_ajp_module) --> ajp connector --> tomcat
httpd(mod_jk) --> ajp connector --> tomcat
Attributes:
port="8080"
protocol="HTTP/1.1"
connectionTimeout="20000"    # in milliseconds
address: the IP address to listen on; defaults to all addresses available on this host;
maxThreads: maximum number of concurrent connections, 200 by default;
enableLookups: whether to enable DNS lookups;
acceptCount: maximum length of the wait queue;
secure:
sslProtocol:
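Putting the attributes above together, a Connector element might look like this (the values are illustrative):

```
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           address="0.0.0.0"
           maxThreads="200"
           acceptCount="100"
           enableLookups="false"/>
```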
Engine component: the servlet engine; inside it, one or more host components define sites; the default virtual host is usually specified with the defaultHost attribute;
Attributes:
name=
defaultHost="localhost"
jvmRoute=
Host component: a host or virtual host inside the engine that receives and handles requests, for example:
<!-- tomcat distinguishes virtual hosts by host name only -->
<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="true">
</Host>
WAR: Webapp ARchive
Common attributes:
(1) appBase: the default directory for this Host's webapps, that is, the directory holding the non-archived web applications or the archived WAR files; may be a path relative to the one defined by the $CATALINA_BASE variable;
(2) autoDeploy: whether a webapp placed into the appBase directory while tomcat is running is deployed automatically;
Example:
<Host name="tc1.magedu.com" appBase="/appdata/webapps" unpackWARs="true" autoDeploy="true">
</Host>
# mkdir -pv /appdata/webapps
# mkdir -pv /appdata/webapps/ROOT/{lib,classes,WEB-INF}
Providing a test page there is enough;
Context component:
Example:
# URL path, local filesystem path, and whether reloading is supported
<Context path="/PATH" docBase="/PATH/TO/SOMEDIR" reloadable=""/>
Valve component:
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
       prefix="localhost_access_log" suffix=".txt"
       pattern="%h %l %u %t &quot;%r&quot; %s %b" />
# official access-log documentation: https://tomcat.apache.org/tomcat-7.0-doc/api/org/apache/catalina/valves/AccessLogValve.html
There are several kinds of Valve:
access logging: org.apache.catalina.valves.AccessLogValve
access control: org.apache.catalina.valves.RemoteAddrValve
<Valve className="org.apache.catalina.valves.RemoteAddrValve" deny="172\.16\.100\.67"/>
nginx as the reverse proxy
Client (http) --> nginx (reverse proxy)(http) --> tomcat (http connector)    # proxying on this host
location / {
    proxy_pass http://tc1.magedu.com:8080;
}
location ~* \.(jsp|do)$ {
    proxy_pass http://tc1.magedu.com:8080;
}

Because the images do not live under the same path as the JSPs, if the proxy has no location for the image paths, the proxied pages cannot load their images
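One way around this (the extension list and paths are assumptions; adjust to the actual site) is an extra location for the static assets:

```
# forward static assets to the same backend
location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ {
    proxy_pass http://tc1.magedu.com:8080;
}
# or serve them from a local copy instead:
# location ~* \.(jpg|jpeg|png|gif|css|js|ico)$ { root /data/static; }
```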

LAMT: Linux + Apache (httpd) + MySQL + Tomcat
httpd's proxy modules:
proxy_module
proxy_http_module: proxies over the http protocol;
proxy_ajp_module: proxies over the ajp protocol;
Client (http) --> httpd (proxy_http_module)(http) --> tomcat (http connector)
Client (http) --> httpd (proxy_ajp_module)(ajp) --> tomcat (ajp connector)
Client (http) --> httpd (mod_jk)(ajp) --> tomcat (ajp connector)
proxy_http_module configuration example:
<VirtualHost *:80>
    ServerName      tc1.magedu.com
    ProxyRequests Off
    ProxyVia        On
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / http://tc1.magedu.com:8080/
    ProxyPassReverse / http://tc1.magedu.com:8080/
    <Location />
        Require all granted
    </Location>
</VirtualHost>
To proxy only the dynamic content instead:
<LocationMatch "\.(jsp|do)$">
    ProxyPass http://tc1.magedu.com:8080/
</LocationMatch>
proxy_ajp_module configuration example:
<VirtualHost *:80>
    ServerName      tc1.magedu.com
    ProxyRequests Off
    ProxyVia        On
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / ajp://tc1.magedu.com:8009/
    ProxyPassReverse / ajp://tc1.magedu.com:8009/
    <Location />
        Require all granted
    </Location>
</VirtualHost>
Load balancing tomcat
docker pull tomcat:8.5-slim    # pull the tomcat image to serve as the backend servers
docker run --name tc1 --hostname tc1.com -d -v /data/tc1:/usr/local/tomcat/webapps/myapp tomcat:8.5-slim
docker run --name tc2 --hostname tc2.com -d -v /data/tc2:/usr/local/tomcat/webapps/myapp tomcat:8.5-slim    # start the containers with bind-mounted volumes and host names set
[root@centos7 tc1]$ mkdir -p lib classes WEB-INF    # create the directories; the index.jsp below must be created for both backends
[root@centos7 tc1]$ vim index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>

Edit the nginx configuration to define the load-balancing upstream group and the proxying:
upstream tcsrvs {
    server 172.17.0.2:8080;
    server 172.17.0.3:8080;
}
location /myapp/ {
    proxy_pass http://tcsrvs/myapp/;
}

How httpd implements session stickiness:
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://tcsrvs>
    BalancerMember http://172.18.100.67:8080 route=TomcatA loadfactor=1
    BalancerMember http://172.18.100.68:8080 route=TomcatB loadfactor=2
    ProxySet lbmethod=byrequests
    ProxySet stickysession=ROUTEID
</Proxy>
<VirtualHost *:80>
    ServerName lb.magedu.com
    ProxyVia On
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / balancer://tcsrvs/
    ProxyPassReverse / balancer://tcsrvs/
    <Location />
        Require all granted
    </Location>
</VirtualHost>
Enable the management interface:
<Location /balancer-manager>
    SetHandler balancer-manager
    ProxyPass !
    Require all granted
</Location>
Example pages:
To demonstrate the effect, provide the following page in some context on TomcatA (e.g. /test):
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
To demonstrate the effect, provide the following page in some context on TomcatB (e.g. /test):
<%@ page language="java" %>
<html>
<head><title>TomcatB</title></head>
<body>
<h1><font color="blue">TomcatB.magedu.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
The second approach, over AJP:
<Proxy balancer://tcsrvs>
    BalancerMember ajp://172.18.100.67:8009
    BalancerMember ajp://172.18.100.68:8009
    ProxySet lbmethod=byrequests
</Proxy>
<VirtualHost *:80>
    ServerName lb.magedu.com
    ProxyVia On
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / balancer://tcsrvs/
    ProxyPassReverse / balancer://tcsrvs/
    <Location />
        Require all granted
    </Location>
    <Location /balancer-manager>
        SetHandler balancer-manager
        ProxyPass !
        Require all granted
    </Location>
</VirtualHost>
Session persistence is configured the same way as in the first approach.
Tomcat Session Replication Cluster:
(1) Enable the cluster by placing the following configuration inside <Engine> or <Host>;
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.4"
                    port="45564"
                    frequency="500"
                    dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="auto"
                  port="4000"
                  autoBind="100"
                  selectorTimeout="5000"
                  maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/tmp/war-temp/"
              deployDir="/tmp/war-deploy/"
              watchDir="/tmp/war-listen/"
              watchEnabled="false"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
Make sure the Engine's jvmRoute attribute is configured correctly.
(2) Configure the webapps
Edit WEB-INF/web.xml and add the <distributable/> element;
Note: the configuration example in the documentation shipped with tomcat on CentOS 7 contains a syntax error in these two lines:
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
When the bound address is "auto", the local hostname is resolved and the resulting IP address is used;
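A minimal web.xml for step (2), with only the element that matters for replication, might look like this:

```
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <distributable/>
</web-app>
```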
5. Set up Tomcat with session sharing based on memcached
Implemented with msm (memcached-session-manager), a Java extension library; see https://github.com/magro/memcached-session-manager/wiki/SetupAndConfiguration
Build the backend tomcat cluster
Backend tomcat server addresses: 192.168.80.134, 192.168.80.130
Front-end nginx scheduler addresses: 192.168.80.133, 192.168.1.196
First download the required extension jars:
wget http://repo1.maven.org/maven2/de/javakaffee/msm/memcached-session-manager/2.3.2/memcached-session-manager-2.3.2.jar
wget http://repo1.maven.org/maven2/de/javakaffee/msm/memcached-session-manager-tc7/2.3.2/memcached-session-manager-tc7-2.3.2.jar
wget http://repo1.maven.org/maven2/net/spy/spymemcached/2.12.3/spymemcached-2.12.3.jar
wget http://repo1.maven.org/maven2/de/javakaffee/msm/msm-kryo-serializer/2.3.2/msm-kryo-serializer-2.3.2.jar
wget http://repo1.maven.org/maven2/com/esotericsoftware/kryo/4.0.2/kryo-4.0.2.jar
wget http://repo1.maven.org/maven2/de/javakaffee/kryo-serializers/0.42/kryo-serializers-0.42.jar
wget http://repo1.maven.org/maven2/com/esotericsoftware/minlog/1.3.0/minlog-1.3.0.jar
wget http://repo1.maven.org/maven2/com/esotericsoftware/reflectasm/1.11.7/reflectasm-1.11.7.jar
wget http://repo1.maven.org/maven2/org/ow2/asm/asm/6.2/asm-6.2.jar
wget http://repo1.maven.org/maven2/org/objenesis/objenesis/2.6/objenesis-2.6.jar
mv /etc/tomcat/*.jar .    # move all the downloaded jars into tomcat's extension library directory /usr/share/java/tomcat/ (run from that directory; here the jars were downloaded into /etc/tomcat)
vim /etc/tomcat/server.xml    # edit the configuration file: add a context for the app directory, and list the memcached nodes to enable shared sessions
Do the same on both backend machines, adjusting details such as the IP addresses
<Context path="/myapp" docBase="/webapps/myapp" reloadable="">
    <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
             memcachedNodes="m1:192.168.80.134:11211,m2:192.168.80.130:11211"
             failoverNodes="m1"
             requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
             transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
             />
</Context>
# start the memcached service
systemctl start memcached
Start tomcat
6. Set up an Nginx + Tomcat service
Build the backend tomcat session replication cluster
Backend tomcat server addresses: 192.168.80.132, 192.168.80.130
Front-end nginx scheduler addresses: 192.168.80.133, 192.168.1.196
Install the jdk and the tomcat packages:
yum install java-1.8.0-openjdk-devel tomcat tomcat-webapps tomcat-admin-webapps tomcat-docs-webapp -y
Create the test-page directory and the test page    # do this on both backend machines
mkdir /webapps/myapp/{lib,classes,WEB-INF} -pv
vim /webapps/myapp/index.jsp
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.com</font></h1>
<table align="centre" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
# edit the tomcat configuration file
Add the cluster configuration recommended by the official documentation:
https://tomcat.apache.org/tomcat-7.0-doc/cluster-howto.html
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
         channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
                    address="228.0.0.4"
                    port="45564"
                    frequency="500"
                    dropTime="3000"/>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                  address="192.168.80.132"
                  port="4000"
                  autoBind="100"
                  selectorTimeout="5000"
                  maxThreads="6"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve"
           filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/tmp/war-temp/"
              deployDir="/tmp/war-deploy/"
              watchDir="/tmp/war-listen/"
              watchEnabled="false"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
In the Host section, configure a context pointing at the directory just created

Following the official documentation, edit the web.xml file and add the <distributable/> element

[root@centos7 tomcat]# cp web.xml /webapps/myapp/WEB-INF/
vim web.xml

Start the service

Edit the nginx configuration file:
upstream tcsrvs {
    server 192.168.80.130:8080;
    server 192.168.80.132:8080;
}
location / {
    proxy_pass http://tcsrvs;
}


