I. Cluster Environment Planning and Configuration
Do not use a single-master topology in production; use multiple masters with multiple nodes. For this walkthrough we use three hosts: one Master (172.16.20.111) and two Nodes (172.16.20.112 and 172.16.20.113).
1. Set the hostnames
After installing CentOS 7, configure a static IP; do the same on all three hosts:
vi /etc/sysconfig/network-scripts/ifcfg-ens33
# near the bottom, change ONBOOT to yes and add a static IPADDR (172.16.20.111, 172.16.20.112, 172.16.20.113 respectively)
ONBOOT=yes
IPADDR=172.16.20.111
Once all three IPs are set, set the hostnames and update the hosts file.
# run on the master machine
hostnamectl set-hostname master
# run on the node1 machine
hostnamectl set-hostname node1
# run on the node2 machine
hostnamectl set-hostname node2
vi /etc/hosts
172.16.20.111 master
172.16.20.112 node1
172.16.20.113 node2
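The hosts edit above can also be scripted so that rerunning it never duplicates entries. A minimal sketch, exercised here against a scratch file (point `HOSTS_FILE` at /etc/hosts for real use):

```shell
# Append each cluster entry to the hosts file only if it is not already there.
HOSTS_FILE=$(mktemp)
while read -r ip name; do
    grep -q " ${name}\$" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
172.16.20.111 master
172.16.20.112 node1
172.16.20.113 node2
EOF
cat "$HOSTS_FILE"
```

Running it a second time leaves the file unchanged, which makes it safe to include in a provisioning script.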
2. Time synchronization
Start the chronyd service:
systemctl start chronyd
Enable it at boot:
systemctl enable chronyd
Verify:
date
3. Disable firewalld and iptables (test environments only)
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables
systemctl disable iptables
4. Disable SELinux
vi /etc/selinux/config
SELINUX=disabled
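The same edit can be made non-interactively with sed, which is handy when preparing all three hosts with one script. A sketch against a scratch copy of the config (use /etc/selinux/config on the servers):

```shell
# Flip SELINUX=enforcing to disabled without opening an editor.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$CFG"
grep '^SELINUX=' "$CFG"
```

Note the change only takes effect after the reboot performed at the end of this section.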
5. Disable the swap partition
Comment out the /dev/mapper/centos-swap swap entry:
vi /etc/fstab
# comment out
# /dev/mapper/centos-swap swap
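Commenting the swap line can likewise be done with one sed command. The sketch below runs against a scratch fstab with illustrative entries; run the same sed against /etc/fstab on the servers:

```shell
# Prefix any uncommented line containing " swap " with '#'.
FSTAB=$(mktemp)
printf '/dev/mapper/centos-root /    xfs  defaults 0 0\n/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$FSTAB"
sed -i '/ swap /s/^[^#]/#&/' "$FSTAB"
cat "$FSTAB"
```

Follow it with `swapoff -a` to disable swap immediately without waiting for the reboot.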
6. Adjust the Linux kernel parameters
vi /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# reload the configuration (sysctl -p alone only reads /etc/sysctl.conf, so pass the file)
sysctl -p /etc/sysctl.d/kubernetes.conf
# load the bridge netfilter module
modprobe br_netfilter
# check that the module is loaded
lsmod | grep br_netfilter
7. Configure IPVS
Install ipset and ipvsadm:
yum install ipset ipvsadm -y
Add the modules that need to be loaded (run the whole block):
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
Add execute permission:
chmod +x /etc/sysconfig/modules/ipvs.modules
Run the script:
/bin/bash /etc/sysconfig/modules/ipvs.modules
Check that the modules loaded:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
After completing all of the settings above, be sure to reboot so they take effect:
reboot
II. Docker Installation and Configuration
1. Install dependencies
Docker relies on a few system utilities:
yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the package repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache fast
3. Install docker-ce
# list the Docker versions available to install
yum list docker-ce --showduplicates
# install the version you need; to install the latest, just run yum -y install docker-ce
yum install --setopt=obsoletes=0 docker-ce-19.03.13-3.el7 -y
4. Start the service
# start the service via systemctl
systemctl start docker
# enable it at boot via systemctl
systemctl enable docker
5. Check the installed version
With the service started, check the current version:
docker version
6. Configure a registry mirror
Speed up image pulls by editing the daemon configuration file /etc/docker/daemon.json. If this host will run Kubernetes, be sure to set "exec-opts": ["native.cgroupdriver=systemd"]. The "insecure-registries" : ["172.16.20.175"] entry lets Docker pull from our Harbor registry over plain HTTP.
vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://eiov0s1n.mirror.aliyuncs.com"],
  "insecure-registries" : ["172.16.20.175"]
}
sudo systemctl daemon-reload && sudo systemctl restart docker
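A JSON typo in daemon.json leaves the Docker daemon unable to start after the restart, so it is worth syntax-checking the file first. A sketch using a scratch file (use /etc/docker/daemon.json for real):

```shell
# Write daemon.json from a heredoc, then validate it before restarting Docker.
DAEMON_JSON=$(mktemp)
cat > "$DAEMON_JSON" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "registry-mirrors": ["https://eiov0s1n.mirror.aliyuncs.com"],
  "insecure-registries": ["172.16.20.175"]
}
EOF
# exits non-zero (and prints the error position) if the JSON is malformed
python3 -m json.tool "$DAEMON_JSON" > /dev/null && echo "daemon.json OK"
```

Only run `systemctl restart docker` once the validation passes.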
7. Install docker-compose
If the download is too slow, fetch the matching release directly from https://github.com/docker/compose/releases and upload it to the server's /usr/local/bin/ directory.
sudo curl -L "https://github.com/docker/compose/releases/download/v2.0.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Note (optional): enabling remote access to the Docker daemon. This is not required and must not be enabled in production; once enabled, a development machine can connect to Docker directly.
vi /lib/systemd/system/docker.service
Edit ExecStart and append -H tcp://0.0.0.0:2375

ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
After the change, run:
systemctl daemon-reload && service docker restart
Test that it is reachable:
curl http://localhost:2375/version

III. Harbor Private Registry Installation and Configuration (set up a separate server, 172.16.20.175 — do not put it on the Kubernetes master or node machines)
Docker must first be installed on that machine, following the steps above, before Harbor can be installed.
1. Pick a suitable release and download it from:
https://github.com/goharbor/harbor/releases
2. Extract it
tar -zxf harbor-offline-installer-v2.2.4.tgz
3. Configure
cd harbor
mv harbor.yml.tmpl harbor.yml
vi harbor.yml
4. Change hostname to this server's address and comment out the https section.
......
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 172.16.20.175

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 80

# https related config
# https:
#   # https port for harbor, default is 443
#   port: 443
#   # The path of cert and key files for nginx
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path
......
5. Run the installer
mkdir /var/log/harbor/
./install.sh
6. Verify the installation
[root@localhost harbor]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de1b702759e7 goharbor/harbor-jobservice:v2.2.4 "/harbor/entrypoint.…" 13 seconds ago Up 9 seconds (health: starting) harbor-jobservice
55b465d07157 goharbor/nginx-photon:v2.2.4 "nginx -g 'daemon of…" 13 seconds ago Up 9 seconds (health: starting) 0.0.0.0:80->8080/tcp, :::80->8080/tcp nginx
d52f5557fa73 goharbor/harbor-core:v2.2.4 "/harbor/entrypoint.…" 13 seconds ago Up 10 seconds (health: starting) harbor-core
4ba09aded494 goharbor/harbor-db:v2.2.4 "/docker-entrypoint.…" 13 seconds ago Up 11 seconds (health: starting) harbor-db
647f6f46e029 goharbor/harbor-portal:v2.2.4 "nginx -g 'daemon of…" 13 seconds ago Up 11 seconds (health: starting) harbor-portal
70251c4e234f goharbor/redis-photon:v2.2.4 "redis-server /etc/r…" 13 seconds ago Up 11 seconds (health: starting) redis
21a5c408afff goharbor/harbor-registryctl:v2.2.4 "/home/harbor/start.…" 13 seconds ago Up 11 seconds (health: starting) registryctl
b0937800f88b goharbor/registry-photon:v2.2.4 "/home/harbor/entryp…" 13 seconds ago Up 11 seconds (health: starting) registry
d899e377e02b goharbor/harbor-log:v2.2.4 "/bin/sh -c /usr/loc…" 13 seconds ago Up 12 seconds (health: starting) 127.0.0.1:1514->10514/tcp harbor-log
7. Starting and stopping Harbor
docker-compose down   # stop
docker-compose up -d  # start
8. Open the Harbor admin console at the hostname configured above, http://172.16.20.175 (default credentials: admin/Harbor12345).
IV. Kubernetes Installation and Configuration
1. Switch the package mirror
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install kubeadm, kubelet, and kubectl
yum install -y kubelet kubeadm kubectl
3. Configure the kubelet cgroup driver
vi /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
4. Start kubelet and enable it at boot
systemctl start kubelet && systemctl enable kubelet
5. Initialize the cluster (run on the Master only)
Initialize:
kubeadm init --kubernetes-version=v1.22.3 \
--apiserver-advertise-address=172.16.20.111 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.20.0.0/16 --pod-network-cidr=10.222.0.0/16

Create the required kubectl config files:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
6. Join the cluster (run on the Node machines only)
On the Nodes (172.16.20.112 and 172.16.20.113), run the join command printed when the init in the previous step succeeded:
kubeadm join 172.16.20.111:6443 --token fgf380.einr7if1eb838mpe \
--discovery-token-ca-cert-hash sha256:fa5a6a2ff8996b09effbf599aac70505b49f35c5bca610d6b5511886383878f7
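If the printed join command is lost, `kubeadm token create --print-join-command` on the master regenerates a complete one. The --discovery-token-ca-cert-hash value can also be recomputed by hand from the cluster CA certificate (/etc/kubernetes/pki/ca.crt on the master); the openssl pipeline below is exercised against a throwaway self-signed certificate so it runs anywhere, and the paths are illustrative:

```shell
# Generate a scratch CA cert, then hash its public key the way kubeadm does.
CA=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=kubernetes" \
    -keyout "$CA/ca.key" -out "$CA/ca.crt" -days 1 2>/dev/null
HASH=$(openssl x509 -pubkey -noout -in "$CA/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```

On a real master, run the same pipeline against /etc/kubernetes/pki/ca.crt and the result matches the hash in the join command.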
Check the cluster state on the Master:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 2m54s v1.22.3
node1 NotReady <none> 68s v1.22.3
node2 NotReady <none> 30s v1.22.3
7. Install the network plugin (Master only)
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Mirror acceleration: edit kube-flannel.yml and change quay.io/coreos/flannel:v0.15.0 to quay.mirrors.ustc.edu.cn/coreos/flannel:v0.15.0
Install it:
kubectl apply -f kube-flannel.yml
Check the cluster state again (allow roughly 1-2 minutes); every node's STATUS becomes Ready.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 42m v1.22.3
node1 Ready <none> 40m v1.22.3
node2 Ready <none> 39m v1.22.3
8. Smoke-test the cluster
Deploy an nginx service with kubectl:
kubectl create deployment nginx --image=nginx --replicas=1
kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort
Inspect the resources:
[root@master ~]# kubectl get pod,svc
NAME READY STATUS RESTARTS AGE
pod/nginx-6799fc88d8-z5tm8 1/1 Running 0 26s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.20.0.1 <none> 443/TCP 68m
service/nginx NodePort 10.20.17.199 <none> 80:32605/TCP 9s
service/nginx shows PORT(S) 80:32605/TCP, so open port 32605 on any of the cluster addresses in a browser to check that nginx is running:
http://172.16.20.111:32605/
http://172.16.20.112:32605/
http://172.16.20.113:32605/
On success the nginx welcome page appears.

9. Install the Kubernetes Dashboard management UI
Kubernetes can be fully driven with the kubectl command-line tool, but it also provides a convenient management UI. With the Kubernetes Dashboard you can deploy containerized applications, monitor application state, troubleshoot, and manage the cluster's resources.
1. Download the install manifest recommended.yaml; note the Kubernetes / Kubernetes Dashboard version compatibility listed at https://github.com/kubernetes/dashboard/releases.

# download
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
2. Edit the manifest: in the Service section, add type: NodePort and nodePort: 30010 (nodeName is a pod-level field and is added to the Deployment below, not here)
vi recommended.yaml
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # added
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      # added
      nodePort: 30010
......
Uncomment the tolerations below; without them the Dashboard pod cannot be scheduled onto the master server:
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
Then add nodeName: master so the Dashboard is placed on the master server:
......
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      nodeName: master
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.4.0
          imagePullPolicy: Always
......
3. Apply the manifest
kubectl apply -f recommended.yaml
4. Check the status: service/kubernetes-dashboard is running and exposed on port 30010
[root@master ~]# kubectl get pod,svc -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-c45b7869d-6k87n 0/1 ContainerCreating 0 10s
pod/kubernetes-dashboard-576cb95f94-zfvc9 0/1 ContainerCreating 0 10s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 10.20.222.83 <none> 8000/TCP 10s
service/kubernetes-dashboard NodePort 10.20.201.182 <none> 443:30010/TCP 10s
5. Create an account for accessing the Kubernetes Dashboard
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin-rb --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
6. Look up the token for accessing the Kubernetes Dashboard
[root@master ~]# kubectl get secrets -n kubernetes-dashboard | grep dashboard-admin
dashboard-admin-token-84gg6 kubernetes.io/service-account-token 3 64s
[root@master ~]# kubectl describe secrets dashboard-admin-token-84gg6 -n kubernetes-dashboard
Name: dashboard-admin-token-84gg6
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: 2d93a589-6b0b-4ed6-adc3-9a2eeb5d1311
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1099 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImRmbVVfRy15QzdfUUF4ZmFuREZMc3dvd0IxQ3ItZm5SdHVZRVhXV3JpZGcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tODRnZzYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmQ5M2E1ODktNmIwYi00ZWQ2LWFkYzMtOWEyZWViNWQxMzExIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.xsDBLeZdn7IO0Btpb4LlCD1RQ2VYsXXPa-bir91VXIqRrL1BewYAyFfZtxU-8peU8KebaJiRIaUeF813x6WbGG9QKynL1fTARN5XoH-arkBTVlcjHQ5GBziLDE-KU255veVqORF7J5XtB38Ke2n2pi8tnnUUS_bIJpMTF1s-hV0aLlqUzt3PauPmDshtoerz4iafWK0u9oWBASQDPPoE8IWYU1KmSkUNtoGzf0c9vpdlUw4j0UZE4-zSoMF_XkrfQDLD32LrG56Wgpr6E8SeipKRfgXvx7ExD54b8Lq9DyAltr_nQVvRicIEiQGdbeCu9dwzGyhg-cDucULTx7TUgA
7. Open the Kubernetes Dashboard in a browser. Be sure to use https: https://172.16.20.111:30010. Log in with the token; once in the management UI, everything done earlier on the command line can also be done here.


V. GitLab Installation and Configuration
GitLab is a Git repository server that can be deployed on-premises. This section covers installing and using it: during development we push code to this local repository, and Jenkins pulls from it to build and deploy.
1. Download the package from https://packages.gitlab.com/gitlab/gitlab-ce/ . We take the latest gitlab-ce-14.4.1-ce.0.el7.x86_64.rpm here; for a real project, choose a stable release that fits your needs.

2. Click the version you want to install; the page shows the install commands, which are:
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | sudo bash
sudo yum install gitlab-ce-14.4.1-ce.0.el7.x86_64
3. Configure and start GitLab
gitlab-ctl reconfigure
4. Check GitLab's status
gitlab-ctl status
5. Set the initial login password
cd /opt/gitlab/bin
sudo ./gitlab-rails console
# inside the console, run:
u=User.where(id:1).first
u.password='root1234'
u.password_confirmation='root1234'
u.save!
quit
6. Browse to the server address; GitLab listens on port 80 by default, so open it directly and log in with the password set above (root/root1234).


7. Set the UI language
User Settings ----> Preferences ----> Language ----> 簡(jiǎn)體中文 ----> refresh the page

8. Common GitLab commands
gitlab-ctl stop
gitlab-ctl start
gitlab-ctl restart
VI. Installing Jenkins + Sonar (code quality checks) with Docker
In real projects, give the SpringCloud tooling its own ops server; do not install it on the Kubernetes machines. Install docker and docker-compose on it following the steps above, then build Jenkins and Sonar with docker-compose.
1. Create the host mount directories and grant permissions
mkdir -p /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql
chmod -R 777 /data/docker/ci/nexus /data/docker/ci/jenkins/lib /data/docker/ci/jenkins/home /data/docker/ci/sonarqube /data/docker/ci/postgresql
2. Create the Jenkins + Sonar compose file jenkins-compose.yml. The Jenkins image used here is jenkinsci/blueocean, the one recommended by the official Docker documentation; in practice it downloads plugins fine even without changing the plugin download address, which is why it is the recommended choice.
version: '3'
networks:
  prodnetwork:
    driver: bridge
services:
  sonardb:
    image: postgres:12.2
    restart: always
    ports:
      - "5433:5432"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/postgresql:/var/lib/postgresql
    environment:
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
  sonar:
    image: sonarqube:8.2-community
    restart: always
    ports:
      - "19000:9000"
      - "19092:9092"
    networks:
      - prodnetwork
    depends_on:
      - sonardb
    volumes:
      - /data/docker/ci/sonarqube/conf:/opt/sonarqube/conf
      - /data/docker/ci/sonarqube/data:/opt/sonarqube/data
      - /data/docker/ci/sonarqube/logs:/opt/sonarqube/logs
      - /data/docker/ci/sonarqube/extension:/opt/sonarqube/extensions
      - /data/docker/ci/sonarqube/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
    environment:
      - TZ=Asia/Shanghai
      - SONARQUBE_JDBC_URL=jdbc:postgresql://sonardb:5432/sonar
      - SONARQUBE_JDBC_USERNAME=sonar
      - SONARQUBE_JDBC_PASSWORD=sonar
  nexus:
    image: sonatype/nexus3
    restart: always
    ports:
      - "18081:8081"
    networks:
      - prodnetwork
    volumes:
      - /data/docker/ci/nexus:/nexus-data
  jenkins:
    image: jenkinsci/blueocean
    user: root
    restart: always
    ports:
      - "18080:8080"
    networks:
      - prodnetwork
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/localtime:/etc/localtime:ro
      - $HOME/.ssh:/root/.ssh
      - /data/docker/ci/jenkins/lib:/var/lib/jenkins/
      - /usr/bin/docker:/usr/bin/docker
      - /data/docker/ci/jenkins/home:/var/jenkins_home
    depends_on:
      - nexus
      - sonar
    environment:
      - NEXUS_PORT=8081
      - SONAR_PORT=9000
      - SONAR_DB_PORT=5432
    cap_add:
      - ALL
3. In the directory containing jenkins-compose.yml, run the install/start command
docker-compose -f jenkins-compose.yml up -d
On success it prints:
[+] Running 5/5
✔ Network root_prodnetwork  Created  0.0s
✔ Container root-sonardb-1  Started  1.0s
✔ Container root-nexus-1    Started  1.0s
✔ Container root-sonar-1    Started  2.1s
✔ Container root-jenkins-1  Started  4.2s
4. Check how the services started
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
52779025a83e jenkins/jenkins:lts "/sbin/tini -- /usr/…" 4 minutes ago Up 3 minutes 50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp root-jenkins-1
2f5fbc25de58 sonarqube:8.2-community "./bin/run.sh" 4 minutes ago Restarting (0) 21 seconds ago root-sonar-1
4248a8ba71d8 sonatype/nexus3 "sh -c ${SONATYPE_DI…" 4 minutes ago Up 4 minutes 0.0.0.0:18081->8081/tcp, :::18081->8081/tcp root-nexus-1
719623c4206b postgres:12.2 "docker-entrypoint.s…" 4 minutes ago Up 4 minutes 0.0.0.0:5433->5432/tcp, :::5433->5432/tcp root-sonardb-1
2b6852a57cc2 goharbor/harbor-jobservice:v2.2.4 "/harbor/entrypoint.…" 5 days ago Up 29 seconds (health: starting) harbor-jobservice
ebf2dea994fb goharbor/nginx-photon:v2.2.4 "nginx -g 'daemon of…" 5 days ago Restarting (1) 46 seconds ago nginx
adfaa287f23b goharbor/harbor-registryctl:v2.2.4 "/home/harbor/start.…" 5 days ago Up 7 minutes (healthy) registryctl
8e5bcca3aaa1 goharbor/harbor-db:v2.2.4 "/docker-entrypoint.…" 5 days ago Up 7 minutes (healthy) harbor-db
ebe845e020dc goharbor/harbor-portal:v2.2.4 "nginx -g 'daemon of…" 5 days ago Up 7 minutes (healthy) harbor-portal
68263dea2cfc goharbor/harbor-log:v2.2.4 "/bin/sh -c /usr/loc…" 5 days ago Up 7 minutes (healthy) 127.0.0.1:1514->10514/tcp harbor-log
Jenkins came up mapped to port 18080, but sonarqube did not start. Its log reports missing permissions on the sonarqube directories; the paths shown in the log are container paths, but it is actually the host directories that lack permissions, so grant them on the host:
chmod 777 /data/docker/ci/sonarqube/logs
chmod 777 /data/docker/ci/sonarqube/bundled-plugins
chmod 777 /data/docker/ci/sonarqube/conf
chmod 777 /data/docker/ci/sonarqube/data
chmod 777 /data/docker/ci/sonarqube/extension
Restart:
docker-compose -f jenkins-compose.yml restart
Check the services again: jenkins is mapped to port 18080 and sonarqube to 19000, and both admin UIs are now reachable from a browser.


5. Jenkins first-login initialization
The Jenkins login screen states that the initial password is in /var/jenkins_home/secrets/initialAdminPassword. That is the path inside the Docker container; on the host it corresponds to /data/docker/ci/jenkins/home/secrets/initialAdminPassword as mapped above. Open that file and enter the password to reach the Jenkins setup screens.

6. Choose "Install suggested plugins"; when installation finishes, follow the prompts until you reach the management console.



Notes:
- sonarqube default credentials: admin/admin
- uninstall command: docker-compose -f jenkins-compose.yml down -v
VII. Jenkins Automated Build and Deploy Configuration
There are many ways to deploy a project: from running a runnable jar directly on a JDK, to putting the jar in a Docker container, to the now-popular approach of running the jar and Docker inside a Kubernetes pod. Each newer approach improves on the previous one. Rather than detailing the pros and cons of each, we only note why Kubernetes is used: it provides autoscaling, service discovery, self-healing, version rollback, load balancing, storage orchestration, and more.
The basic day-to-day development and deployment steps are:
- push code to the GitLab repository
- GitLab triggers a Jenkins code-quality build via a webhook
- Jenkins is triggered manually to pull the code, compile, package, build the Docker image, publish it to the private Harbor registry, and run kubectl to pull the image from Harbor and deploy it to Kubernetes
1. Install the Kubernetes plugin, the Git Parameter plugin (for parameterized pipeline builds), the Extended Choice Parameter plugin (for picking which of several microservices to build), the Pipeline Utility Steps plugin (for reading the Maven project's .yaml, pom.xml, etc.), and Kubernetes Continuous Deploy (must be version 1.0 — download it from the plugin site and upload it manually). Jenkins --> Manage Jenkins --> Manage Plugins --> Available --> select Kubernetes plugin / Git Parameter / Extended Choice Parameter, then click "Install without restart".

Blue Ocean does not yet support the Git Parameter and Extended Choice Parameter plugins. Git Parameter reads branch information via the Git plugin. We use "Pipeline script" rather than "Pipeline script from SCM" because we prefer not to keep build configuration in the codebase; this keeps development and deployment concerns separate.
2. Configure the Kubernetes plugin: Jenkins --> Manage Jenkins --> Manage Nodes and Clouds --> Configure Clouds --> Add a new cloud --> Kubernetes

3. Add the Kubernetes certificate
cat ~/.kube/config
# The steps below are not used for now. Replace certificate-authority-data, client-certificate-data, and client-key-data with the actual values from ~/.kube/config:
#echo certificate-authority-data | base64 -d > ca.crt
#echo client-certificate-data | base64 -d > client.crt
#echo client-key-data | base64 -d > client.key
# Then run the following, choosing your own export password:
#openssl pkcs12 -export -out cert.pfx -inkey client.key -in client.crt -certfile ca.crt
Manage Jenkins --> Credentials --> System --> Global credentials
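The three commented-out decodes above all follow the same pattern: pull the base64 value of a `*-data` field out of the kubeconfig and pipe it through `base64 -d`. The extraction step can be verified locally against a mock kubeconfig (the field value here is fabricated for illustration):

```shell
# Extract and decode a kubeconfig data field, as you would for ~/.kube/config.
KUBECONFIG_FILE=$(mktemp)
printf 'certificate-authority-data: %s\n' "$(printf 'fake-ca-cert' | base64)" > "$KUBECONFIG_FILE"
awk '/certificate-authority-data:/ {print $2}' "$KUBECONFIG_FILE" | base64 -d
```

Against a real ~/.kube/config, redirect the decoded output to ca.crt (and likewise for the client cert and key fields).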

4. Add the Kubernetes access credential: paste the token created above for the Kubernetes Dashboard login. After adding it, select the new credential and run the connection test; if it reports success, Jenkins can connect to Kubernetes.


5. Configure JDK, Git, and Maven globally in Jenkins
The jenkinsci/blueocean image ships with a JDK and git; log into the container to find their paths, then fill them into the configuration.
Enter the jenkins container and check JAVA_HOME and the git path:
[root@localhost ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0520ebb9cc5d jenkinsci/blueocean "/sbin/tini -- /usr/…" 2 days ago Up 30 hours 50000/tcp, 0.0.0.0:18080->8080/tcp, :::18080->8080/tcp root-jenkins-1
[root@localhost ~]# docker exec -it 0520ebb9cc5d /bin/bash
bash-5.1# echo $JAVA_HOME
/opt/java/openjdk
bash-5.1# which git
/usr/bin/git
The queries show JAVA_HOME=/opt/java/openjdk and GIT=/usr/bin/git; enter both under Jenkins Global Tool Configuration.

Maven can be installed on the host inside the mapped /data/docker/ci/jenkins/home directory; when configuring it, use the container-side path, i.e. the Maven install path under /var/jenkins_home.

Set MAVEN_HOME in the system configuration so the Pipeline script can use it. If running the script reports a permissions error, run chmod 777 * in the bin directory of the host Maven installation.

6. Create a harbor-key secret so Kubernetes can pull images from the private registry; it is referenced in the project's k8s-deployment.yml.
kubectl create secret docker-registry harbor-key --docker-server=172.16.20.175 --docker-username='robot$gitegg' --docker-password='Jqazyv7vvZiL6TXuNcv7TrZeRdL8U9n3'
7. Create a new pipeline job

8. Configure the job's build parameters

9. Configure the pipeline deployment script
Under Pipeline, choose "Pipeline script":
pipeline {
    agent any
    parameters {
        gitParameter branchFilter: 'origin/(.*)', defaultValue: 'master', name: 'Branch', type: 'PT_BRANCH', description: 'Select the branch to build'
        choice(name: 'BaseImage', choices: ['openjdk:8-jdk-alpine'], description: 'Select the base runtime image')
        choice(name: 'Environment', choices: ['dev', 'test', 'prod'], description: 'Select the target environment: dev, test, or prod')
        extendedChoice(
            defaultValue: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
            description: 'Select the microservices to build',
            multiSelectDelimiter: ',',
            name: 'ServicesBuild',
            quoteValue: false,
            saveJSONParameterToFile: false,
            type: 'PT_CHECKBOX',
            value: 'gitegg-gateway,gitegg-oauth,gitegg-plugin/gitegg-code-generator,gitegg-service/gitegg-service-base,gitegg-service/gitegg-service-extension,gitegg-service/gitegg-service-system',
            visibleItemCount: 6)
        string(name: 'BuildParameter', defaultValue: 'none', description: 'Enter extra build parameters')
    }
    environment {
        PRO_NAME = "gitegg"
        BuildParameter = "${params.BuildParameter}"
        ENV = "${params.Environment}"
        BRANCH = "${params.Branch}"
        ServicesBuild = "${params.ServicesBuild}"
        BaseImage = "${params.BaseImage}"
        k8s_token = "7696144b-3b77-4588-beb0-db4d585f5c04"
    }
    stages {
        stage('Clean workspace') {
            steps {
                deleteDir()
            }
        }
        stage('Process parameters') {
            steps {
                script {
                    if ("${params.ServicesBuild}".trim() != "") {
                        def ServicesBuildString = "${params.ServicesBuild}"
                        ServicesBuild = ServicesBuildString.split(",")
                        for (service in ServicesBuild) {
                            println "now got ${service}"
                        }
                    }
                    if ("${params.BuildParameter}".trim() != "" && "${params.BuildParameter}".trim() != "none") {
                        BuildParameter = "${params.BuildParameter}"
                    } else {
                        BuildParameter = ""
                    }
                }
            }
        }
        stage('Pull SourceCode Platform') {
            steps {
                echo "${BRANCH}"
                git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-platform.git'
            }
        }
        stage('Install Platform') {
            steps {
                echo "==============Start Platform Build=========="
                sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install ${BuildParameter}"
                echo "==============End Platform Build=========="
            }
        }
        stage('Pull SourceCode') {
            steps {
                echo "${BRANCH}"
                git branch: "${Branch}", credentialsId: 'gitlabTest', url: 'http://172.16.20.188:2080/root/gitegg-cloud.git'
            }
        }
        stage('Build') {
            steps {
                script {
                    echo "==============Start Cloud Parent Install=========="
                    sh "${MAVEN_HOME}/bin/mvn -DskipTests=true clean install -P${params.Environment} ${BuildParameter}"
                    echo "==============End Cloud Parent Install=========="
                    def workspace = pwd()
                    for (service in ServicesBuild) {
                        stage("buildCloud${service}") {
                            echo "==============Start Cloud Build ${service}=========="
                            sh "cd ${workspace}/${service} && ${MAVEN_HOME}/bin/mvn -DskipTests=true clean package -P${params.Environment} ${BuildParameter} jib:build -Djib.httpTimeout=200000 -DsendCredentialsOverHttp=true -f pom.xml"
                            echo "==============End Cloud Build ${service}============"
                        }
                    }
                }
            }
        }
        stage('Sync to k8s') {
            steps {
                script {
                    echo "==============Start Sync to k8s=========="
                    def workspace = pwd()
                    mainpom = readMavenPom file: 'pom.xml'
                    profiles = mainpom.getProfiles()
                    def version = mainpom.getVersion()
                    def nacosAddr = ""
                    def nacosConfigPrefix = ""
                    def nacosConfigGroup = ""
                    def dockerHarborAddr = ""
                    def dockerHarborProject = ""
                    def dockerHarborUsername = ""
                    def dockerHarborPassword = ""
                    def serverPort = ""
                    def commonDeployment = "${workspace}/k8s-deployment.yaml"
                    for (profile in profiles) {
                        // pick the settings belonging to the selected environment
                        if (profile.getId() == "${params.Environment}") {
                            nacosAddr = profile.getProperties().getProperty("nacos.addr")
                            nacosConfigPrefix = profile.getProperties().getProperty("nacos.config.prefix")
                            nacosConfigGroup = profile.getProperties().getProperty("nacos.config.group")
                            dockerHarborAddr = profile.getProperties().getProperty("docker.harbor.addr")
                            dockerHarborProject = profile.getProperties().getProperty("docker.harbor.project")
                            dockerHarborUsername = profile.getProperties().getProperty("docker.harbor.username")
                            dockerHarborPassword = profile.getProperties().getProperty("docker.harbor.password")
                        }
                    }
                    for (service in ServicesBuild) {
                        stage("Sync${service}ToK8s") {
                            echo "==============Start Sync ${service} to k8s=========="
                            dir("${workspace}/${service}") {
                                pom = readMavenPom file: 'pom.xml'
                                echo "group: artifactId: ${pom.artifactId}"
                                def deployYaml = "k8s-deployment-${pom.artifactId}.yaml"
                                yaml = readYaml file: './src/main/resources/bootstrap.yml'
                                serverPort = "${yaml.server.port}"
                                if (fileExists("${workspace}/${service}/k8s-deployment.yaml")) {
                                    commonDeployment = "${workspace}/${service}/k8s-deployment.yaml"
                                } else {
                                    commonDeployment = "${workspace}/k8s-deployment.yaml"
                                }
                                script {
                                    sh "sed 's#{APP_NAME}#${pom.artifactId}#g;s#{IMAGE_URL}#${dockerHarborAddr}#g;s#{IMAGE_PROGECT}#${PRO_NAME}#g;s#{IMAGE_TAG}#${version}#g;s#{APP_PORT}#${serverPort}#g;s#{SPRING_PROFILE}#${params.Environment}#g' ${commonDeployment} > ${deployYaml}"
                                    kubernetesDeploy configs: "${deployYaml}", kubeconfigId: "${k8s_token}"
                                }
                            }
                            echo "==============End Sync ${service} to k8s=========="
                        }
                    }
                    echo "==============End Sync to k8s=========="
                }
            }
        }
    }
}
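The Sync stage renders k8s-deployment.yaml by sed-substituting the {APP_NAME}, {IMAGE_URL}, {IMAGE_PROGECT}, {IMAGE_TAG}, {APP_PORT}, and {SPRING_PROFILE} placeholders. The same substitution can be exercised locally before wiring it into the pipeline; the template and values below are illustrative, not taken from the project:

```shell
# Render a deployment template the way the pipeline's sed command does.
TPL=$(mktemp)
cat > "$TPL" <<'EOF'
image: {IMAGE_URL}/{IMAGE_PROGECT}/{APP_NAME}:{IMAGE_TAG}
containerPort: {APP_PORT}
EOF
OUT=$(mktemp)
sed 's#{APP_NAME}#gitegg-oauth#g;s#{IMAGE_URL}#172.16.20.175#g;s#{IMAGE_PROGECT}#gitegg#g;s#{IMAGE_TAG}#1.0.0#g;s#{APP_PORT}#8001#g' "$TPL" > "$OUT"
cat "$OUT"
```

Using `#` as the sed delimiter avoids having to escape the slashes in image paths.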
Common problems:
1. The first run of a Pipeline Utility Steps step fails with "Scripts not permitted to use method" or "Scripts not permitted to use staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods getProperties java.lang.Object".
Fix: Manage Jenkins --> In-process Script Approval --> click Approve.

2. Use an NFS server to collect the logs of all containers in one place on the NFS server.
3. Kubernetes Continuous Deploy: use version 1.0.0, otherwise it errors out due to incompatibility.
4. Make Docker-hosted services register their internal network address:
spring:
  cloud:
    inetutils:
      ignored-interfaces: docker0
5. Configure IPVS mode: kube-proxy watches Pod changes and creates the corresponding IPVS rules. IPVS forwards traffic more efficiently than iptables and also supports more load-balancing algorithms.
kubectl edit cm kube-proxy -n kube-system
Change mode: "ipvs"

Reload the kube-proxy configuration:
kubectl delete pod -l k8s-app=kube-proxy -n kube-system
Inspect the IPVS rules:
ipvsadm -Ln
6. Accessing services outside the cluster (nacos, redis, etc.) from inside Kubernetes
- a. Shared host network: set hostNetwork: true in the workload's spec
  spec:
    hostNetwork: true
- b. Endpoints mode: a Service without a selector, backed by a manually defined Endpoints object of the same name
kind: Endpoints
apiVersion: v1
metadata:
  name: nacos
  namespace: default
subsets:
  - addresses:
      - ip: 172.16.20.188
    ports:
      - port: 8848
---
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8848
      targetPort: 8848
      protocol: TCP
- c. Service with type: ExternalName: ExternalName works via a CNAME redirect, so port remapping is not possible; use it for plain domain names.
For the Endpoints and type: ExternalName approaches, create these YAMLs as separate external manifests rather than inside the application ones; they should be set up when preparing the environment.
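The Endpoints and Service pair above can be kept in one file (separated by `---`) and applied with a single `kubectl apply -f`. A quick local sanity check of the generated manifest, using a scratch file:

```shell
# Generate the selector-less Service plus its Endpoints in one manifest.
NACOS_YAML=$(mktemp)
cat > "$NACOS_YAML" <<'EOF'
kind: Endpoints
apiVersion: v1
metadata:
  name: nacos
  namespace: default
subsets:
  - addresses:
      - ip: 172.16.20.188
    ports:
      - port: 8848
---
apiVersion: v1
kind: Service
metadata:
  name: nacos
  namespace: default
spec:
  type: ClusterIP
  ports:
    - port: 8848
      targetPort: 8848
      protocol: TCP
EOF
# the Service and Endpoints must share the same name for them to be linked
grep -c 'name: nacos' "$NACOS_YAML"
```

Pods can then reach the external nacos simply as nacos:8848.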
7. Common kubectl commands:
View pods: kubectl get pods
View services: kubectl get svc
View endpoints: kubectl get endpoints
Apply: kubectl apply -f xxx.yaml
Delete: kubectl delete -f xxx.yaml
Delete a pod: kubectl delete pod podName
Delete a service: kubectl delete service serviceName
Enter a container: kubectl exec -it podsNamexxxxxx -n default -- /bin/sh
GitEgg-Cloud is an enterprise-grade microservice application development framework built on SpringCloud; the project is open source at:
Gitee: https://gitee.com/wmz1930/GitEgg
GitHub: https://github.com/wmz1930/GitEgg
If you find it useful, a Star is appreciated.