1. Why do we need a Service
Pods are constantly being destroyed and created, and each Pod gets its own IP address, so as Pods come and go their IPs keep changing. That makes it very hard for a client (e.g. a frontend) to keep track of where to connect. A Service solves this problem: it watches for Pod changes and exposes a single, stable access endpoint (a load-balanced entry point) on their behalf.
2. Why Services exist
- Prevent losing track of Pods (service discovery)
- Define an access policy for a group of Pods (load balancing)
3. Relationship between Pods and Services
- Associated through a label selector
- The Service load-balances traffic across the matching Pods (layer 4, TCP/UDP)
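The label-selector association above can be sketched as a minimal pair of manifests. This is an illustrative example, not from the original notes: the names `web` and the label `app: web` are assumptions; any Pod carrying that label becomes a backend of the Service.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web         # Pods are created with this label...
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # ...and the Service selects them by the same label
  ports:
  - port: 80           # Service port (the VIP's port)
    targetPort: 80     # container port on the Pods
```

Kubernetes keeps the matching Pod IPs in the Service's Endpoints object (visible via kubectl get endpoints web), which is what kube-proxy load-balances across.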

4. Three common Service types
- ClusterIP: for access inside the cluster
- NodePort: exposes the application outside the cluster
- LoadBalancer: exposes the application outside the cluster, for public clouds

ClusterIP: the default. Allocates a stable virtual IP (VIP) that is reachable only from inside the cluster (from any namespace; cross-namespace access uses the Service's full DNS name).

NodePort: opens a port on every node (pin the port explicitly rather than letting it be allocated at random) so the Service can be reached from outside the cluster. A stable internal ClusterIP is allocated as well.
Access address: <NodeIP>:<NodePort>

LoadBalancer: like NodePort, it opens a port on every node. In addition, Kubernetes asks the underlying cloud platform to provision a load balancer and registers every node (<NodeIP>:<NodePort>) as a backend.
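A NodePort Service can be sketched as below; the name `web`, label `app: web`, and port 30080 are illustrative assumptions. Note that nodePort is pinned, following the advice above to fix the port rather than let the API server pick one at random (the allocatable range defaults to 30000-32767):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # illustrative name
spec:
  type: NodePort       # omit type (or use ClusterIP) for cluster-internal only
  selector:
    app: web
  ports:
  - port: 80           # ClusterIP port, still allocated and usable in-cluster
    targetPort: 80     # container port
    nodePort: 30080    # pinned; reachable as <NodeIP>:30080 on every node
```

Changing `type: NodePort` to `type: LoadBalancer` keeps the node ports and additionally asks the cloud provider for an external load balancer in front of them.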

5. Service proxy modes
Every node in a Kubernetes cluster runs a kube-proxy process. kube-proxy is responsible for implementing a form of VIP (virtual IP) for Services.
cat /opt/kubernetes/cfg/kube-proxy.conf    # the proxy mode is configured per node
The underlying traffic forwarding and load balancing is implemented with:
- iptables
- IPVS

In iptables mode, if the first Pod selected does not respond, the connection simply fails; there is no automatic retry against another Pod (unlike the old userspace mode), which is why readiness probes matter.
iptables mode creates a large number of rules, and updates rewrite them wholesale rather than incrementally.
iptables matches rules top to bottom, one by one (higher latency as rule counts grow).
iptables -L    # list the rules

Compared with iptables mode, kube-proxy in IPVS mode redirects traffic with lower latency and syncs proxy rules with much better performance.
ipvsadm -ln    # list the IPVS rules
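The mode is selected in kube-proxy's configuration. A minimal sketch of the KubeProxyConfiguration fragment follows; only `mode` and `ipvs.scheduler` are the point here, and the scheduler value is an illustrative choice:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # or "iptables"
ipvs:
  scheduler: "rr"   # round-robin; IPVS also offers lc (least connection), sh, dh, ...
```

In a binary installation this is typically the config file that /opt/kubernetes/cfg/kube-proxy.conf points kube-proxy at; switching modes requires restarting kube-proxy on each node.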

6. In-cluster DNS service
The DNS service watches the Kubernetes API and creates a DNS record for every Service, so Services can be resolved by name.
CoreDNS is the default in-cluster DNS service in current Kubernetes.
- Installing DNS:
Contents of coredns.yaml:
# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: lizhenliang/coredns:1.6.7
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 512Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
Deploy with kubectl apply -f coredns.yaml.
Check with kubectl get pods -n kube-system.
ClusterIP A record format: <service-name>.<namespace-name>.svc.cluster.local
Example: my-svc.my-namespace.svc.cluster.local
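Resolution can be verified by running a throwaway Pod that queries such an A record. A sketch, with the Pod name dns-test and the busybox image as assumptions (busybox 1.28 is commonly chosen because nslookup output in newer busybox builds is less reliable):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dns-test
    image: busybox:1.28
    # Resolve the kube-apiserver's Service via the in-cluster DNS
    command: ["nslookup", "kubernetes.default.svc.cluster.local"]
```

Apply it with kubectl apply -f dns-test.yaml, then read the result with kubectl logs dns-test; the answer should come from the kube-dns ClusterIP (10.0.0.2 in the manifest above).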
