Installing Helm and Setting Up a Helm Test Environment

Requirements

  • You must have Kubernetes installed. We recommend version 1.4.1 or later.
  • You should also have a local configured copy of kubectl.

Helm will figure out where to install Tiller by reading your Kubernetes configuration file (usually $HOME/.kube/config). This is the same file that kubectl uses.

Download

HELM_VERSION=${HELM_VERSION:-"2.5.0"}
HELM="helm-v${HELM_VERSION}-linux-amd64"

curl -L https://storage.googleapis.com/kubernetes-helm/$HELM.tar.gz -o $HELM.tar.gz

tar -xvzf  $HELM.tar.gz -C /tmp

mv /tmp/linux-amd64/helm /usr/local/bin/helm
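The `${VAR:-default}` expansion in the snippet above falls back to 2.5.0 only when the variable is unset or empty; a quick sketch of what gets downloaded (the 2.6.0 value below is just an illustrative override):

```shell
# Default-value expansion: 2.5.0 is used only when HELM_VERSION is unset/empty.
unset HELM_VERSION
HELM_VERSION=${HELM_VERSION:-"2.5.0"}
HELM="helm-v${HELM_VERSION}-linux-amd64"
echo "$HELM"            # helm-v2.5.0-linux-amd64

# Setting the variable first overrides the default:
HELM_VERSION2="2.6.0"
HELM_VERSION2=${HELM_VERSION2:-"2.5.0"}
echo "$HELM_VERSION2"   # 2.6.0
```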

Release versions:

https://github.com/kubernetes/helm/releases

Tiller (the Helm server-side component)

Installation

  • In-cluster install (installed on the Kubernetes cluster)

    helm init
    

    If all goes well, this installs a tiller pod in the kube-system namespace of the cluster.
    By default, the CurrentContext in ~/.kube/config determines which cluster Tiller is deployed to.

    You can target a different cluster by setting the $KUBECONFIG environment variable to another kubectl config file, and by selecting a context with --context.

  • Local install

     /bin/tiller
    

    In this case Tiller connects, by default, to the cluster referenced by the CurrentContext of kubectl's default config file ($HOME/.kube/config), which it uses to store release data and so on.

    You can also point it at a different cluster's config file via $KUBECONFIG.

    You must then tell helm to connect to the locally running Tiller rather than the in-cluster one. Two ways:

    • helm --host=<host:port>
    • export HELM_HOST=localhost:44134
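A minimal sketch of the environment-variable option (44134 is the gRPC port Tiller listens on by default, as shown in the tiller startup log later in this document):

```shell
# Point every subsequent helm invocation at the locally running Tiller
# instead of the in-cluster one (44134 is Tiller's default gRPC port).
export HELM_HOST=localhost:44134
echo "helm will connect to: $HELM_HOST"
```

The per-command `--host` flag achieves the same thing for a single invocation; the environment variable is more convenient for an interactive session.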

Installing to a specific cluster

As with the rest of the Helm commands, 'helm init' discovers Kubernetes clusters
by reading $KUBECONFIG (default '~/.kube/config') and using the default context.

To have helm deploy to the cluster described by a specific context (here dev) in a specific kubectl config file:

export KUBECONFIG="/path/to/kubeconfig"
helm init --kube-context="dev"

Storage

Tiller supports two storage drivers:

  • memory
  • configmap

Both drivers are available regardless of the deployment mode. With the memory driver, releases and other data are lost when Tiller restarts.
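As a startup-flag sketch (assumes a locally built tiller binary; the `-storage` flag also appears in the warning section later in this document):

```shell
# Choose Tiller's storage driver at startup:
./tiller -storage=memory      # releases kept in RAM; lost on restart
# ./tiller                    # default: configmap driver; release data is
#                             # persisted in the cluster (one object per revision)
```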

What happens

After running helm init:

root@node01:~# helm init
Creating /root/.helm 
Creating /root/.helm/repository 
Creating /root/.helm/repository/cache 
Creating /root/.helm/repository/local 
Creating /root/.helm/plugins 
Creating /root/.helm/starters 
Creating /root/.helm/repository/repositories.yaml 
$HELM_HOME has been configured at /root/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.

This installs the deployment tiller-deploy and the service tiller-deploy in the kube-system namespace of the cluster.

Notes:

  • helm init --client-only does not install Tiller; it only creates the files under the helm home directory and configures the $HELM_HOME environment variable.
  • Files that already exist under $HELM_HOME are neither recreated nor modified; only missing files and directories are created.
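The second point can be illustrated with a scratch directory standing in for $HELM_HOME (the file contents here are hypothetical; only the create-if-missing behavior is the point):

```shell
# Simulate helm init's "create only when missing" behavior:
# an existing repositories.yaml must survive a second initialization pass.
HELM_HOME=$(mktemp -d)
mkdir -p "$HELM_HOME/repository"
echo "existing-config" > "$HELM_HOME/repository/repositories.yaml"

# A second pass creates the file only if it is absent:
if [ ! -f "$HELM_HOME/repository/repositories.yaml" ]; then
    echo "fresh-config" > "$HELM_HOME/repository/repositories.yaml"
fi

cat "$HELM_HOME/repository/repositories.yaml"   # existing-config
```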

Troubleshooting

Context deadline exceeded

root@node01:~# helm version --debug
[debug] SERVER: "localhost:44134"
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
[debug] context deadline exceeded
Error: cannot connect to Tiller

https://github.com/kubernetes/helm/issues/2409
Unresolved upstream.

After a few more attempts, it started working again:

  1. unset HELM_HOST (HELM_HOST had previously been set to 127.0.0.1:44134, and svc tiller-deploy had been changed to type NodePort)
  2. Uninstalled (removed the tiller svc, deployment, and the /root/.helm directory), then reinstalled
  3. Back to normal.

socat not found

root@node01:~# helm version
Client: &version.Version{SemVer:"v2.5.0", GitCommit:"012cb0ac1a1b2f888144ef5a67b8dab6c2d45be6", GitTreeState:"clean"}
E0711 10:09:50.160064   10916 portforward.go:332] an error occurred forwarding 33491 -> 44134: error forwarding port 44134 to pod tiller-deploy-542252878-15h67_kube-system, uid : unable to do port forwarding: socat not found.
Error: cannot connect to Tiller

Solved:
Install socat on each kubelet node. https://github.com/kubernetes/helm/issues/966

Uninstall

  • helm reset removes the pod that Tiller created on the Kubernetes cluster.

  • When the context deadline exceeded error above occurs, helm reset fails with the same error. Run helm reset -f to force-delete the pod on the cluster.

  • To also remove the directories and data created by helm init, run helm reset --remove-helm-home
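When even a forced reset cannot reach Tiller, the manual removal described in the troubleshooting section above amounts to the following (a sketch; assumes working kubectl access to the cluster that helm init targeted):

```shell
# Manually remove what helm init created, mirroring helm reset --remove-helm-home:
kubectl -n kube-system delete deployment tiller-deploy
kubectl -n kube-system delete service tiller-deploy
rm -rf "$HOME/.helm"
```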

Notes

With Tiller installed by helm 2.5, when context deadline exceeded occurs, running helm reset --remove-helm-home --force with a 2.4 client does not remove the pod and configuration that Tiller created. This is a bug in 2.4.

Test environment

Local Tiller

tiller

wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./tiller 
[main] 2017/07/26 14:59:54 Starting Tiller v2.5+unreleased (tls=false)
[main] 2017/07/26 14:59:54 GRPC listening on :44134
[main] 2017/07/26 14:59:54 Probes listening on :44135
[main] 2017/07/26 14:59:54 Storage driver is ConfigMap

References

https://docs.helm.sh/using_helm/#running-tiller-locally

When Tiller is running locally, it will attempt to connect to the Kubernetes cluster that is configured by kubectl. (Run kubectl config view to see which cluster that is.)

  • kubectl config view reads the ~/.kube/config file

helm

wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ export HELM_HOST=localhost:44134
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm init --client-only
Creating /home/wwh/.helm 
Creating /home/wwh/.helm/repository 
Creating /home/wwh/.helm/repository/cache 
Creating /home/wwh/.helm/repository/local 
Creating /home/wwh/.helm/plugins 
Creating /home/wwh/.helm/starters 
Creating /home/wwh/.helm/cache/archive 
Creating /home/wwh/.helm/repository/repositories.yaml 
$HELM_HOME has been configured at /home/wwh/.helm.
Not installing Tiller due to 'client-only' flag having been set
Happy Helming!

You must run helm init --client-only to initialize the directory structure under helm home; otherwise helm repo list fails with the following error:

wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm repo list
Error: open /home/wwh/.helm/repository/repositories.yaml: no such file or directory

Warning

Without a Kubernetes cluster, this setup cannot test commands like helm install ./testChart --dry-run,
even when storage is set to memory via ./tiller -storage=memory

Local Tiller pointed at a specific backend Kubernetes cluster

tiller

Run Tiller locally, but point it at a backend Kubernetes cluster:

  # Path to the backend cluster's kubeconfig; Tiller uses this file
  # when initializing its kube client.
  export KUBECONFIG=/tmp/k8sconfig-688597196
  ./tiller

helm

helm is configured the same way as before.

Tiller storage test

ConfigMap experiment

# Install Tiller locally, pointing at the backend cluster; storage is configmap (the default)
 export KUBECONFIG=/tmp/k8sconfig-688597196
 ./tiller 
 
# Start another shell
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ export HELM_HOST=localhost:44134
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm init --client-only
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm install stable/wordpress --debug
# The release installed successfully
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm list
NAME                REVISION    UPDATED                     STATUS      CHART           NAMESPACE
tinseled-warthog    1           Fri Aug 25 17:13:53 2017    DEPLOYED    wordpress-0.6.8 default
# Inspect the cluster's ConfigMaps
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE     NAME                                 DATA      AGE
kube-public   cluster-info                         2         6d
kube-system   calico-config                        3         6d
kube-system   extension-apiserver-authentication   6         6d
kube-system   kube-proxy                           1         6d
kube-system   tinseled-warthog.v1                  1         1m
# Delete the release; the ConfigMap still exists afterwards
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm delete tinseled-warthog
release "tinseled-warthog" deleted
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE     NAME                                 DATA      AGE
kube-public   cluster-info                         2         6d
kube-system   calico-config                        3         6d
kube-system   extension-apiserver-authentication   6         6d
kube-system   kube-proxy                           1         6d
kube-system   tinseled-warthog.v1       
# Run helm delete <release> --purge
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ ./helm delete tinseled-warthog --purge
release "tinseled-warthog" deleted
# The release data in the ConfigMap has now been removed.
wwh@wwh:~/kiongf/go/src/k8s.io/helm/bin$ kubectl --kubeconfig=/tmp/k8sconfig-688597196 get configmap --all-namespaces
NAMESPACE     NAME                                 DATA      AGE
kube-public   cluster-info                         2         6d
kube-system   calico-config                        3         6d
kube-system   extension-apiserver-authentication   6         6d
kube-system   kube-proxy                           1         6d


