ceph cluster for k8s

Background

Kubernetes needs storage that can be shared across all nodes, so we set up a Ceph cluster and expose it to Kubernetes via RBD.

OS preparation

CentOS 7.2
centos-base.repo and epel.repo configured
NTP synchronization (chrony.conf)
These are internal-network servers, so SELinux and the firewall are temporarily disabled
Optional: DNS; if skipped, use /etc/hosts instead
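The preparation above can be sketched as the following commands, to be run as root on every host (the 10.9.5.x addresses are placeholders for illustration, matching the public_network used later; substitute your own):

```shell
# Temporarily disable SELinux and the firewall (acceptable here because
# the hosts are on an internal network only)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
systemctl disable --now firewalld

# Enable NTP synchronization via chrony
systemctl enable --now chronyd

# If not using DNS, map the cluster hostnames in /etc/hosts
# (the addresses below are assumptions, not from the original setup)
cat >> /etc/hosts <<'EOF'
10.9.5.11 cloud4ourself-c1
10.9.5.12 cloud4ourself-c2
10.9.5.13 cloud4ourself-c3
EOF
```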

Cluster plan

Hostname          Role
cloud4ourself-c1  mon
cloud4ourself-c2  osd
cloud4ourself-c3  osd

The names here must match the output of hostname -s on each machine.

Installation

1. Create the ceph user (on all hosts)

useradd ceph
echo 'ceph ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
sed -i 's/Defaults requiretty/#Defaults requiretty/' /etc/sudoers

2. Configure passwordless SSH between all hosts (as the ceph user)

ssh-keygen
cat ~/.ssh/id_rsa.pub  # collect the public-key line from each of the three hosts into one text file
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
vim ~/.ssh/authorized_keys  # paste the three public-key lines into this file
ssh cloud4ourself-c1
ssh cloud4ourself-c2
ssh cloud4ourself-c3  # all three logins should now succeed without a password
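As an alternative to copying keys by hand, the same result can be reached with ssh-copy-id, assuming the ceph user has a password set on each host (a sketch; hostnames are from the cluster plan above):

```shell
# Run once on each host as the ceph user: append the local public key
# to authorized_keys on every node in the cluster
for host in cloud4ourself-c1 cloud4ourself-c2 cloud4ourself-c3; do
    ssh-copy-id ceph@"$host"
done
```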

3. Configure the cluster (on cloud4ourself-c1)

mkdir k8s && cd k8s
sudo yum install http://download.ceph.com/rpm-hammer/el7/noarch/ceph-release-1-1.el7.noarch.rpm -y
sudo yum install ceph-deploy -y

[ceph@cloud4ourself-c1 k8s]$ ceph-deploy new cloud4ourself-c1

[ceph@cloud4ourself-c1 k8s]$ ls
ceph.conf  ceph.log  ceph.mon.keyring

echo "osd pool default size = 2" >> ceph.conf
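After this edit, ceph.conf should look roughly like the fragment below. The fsid and mon_host values are placeholders: ceph-deploy new generates the real fsid, and mon_host will be cloud4ourself-c1's actual address.

```ini
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_initial_members = cloud4ourself-c1
mon_host = 10.9.5.11
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2
```

Setting osd pool default size = 2 matters here because the cluster has only two OSDs; the default of 3 would leave placement groups permanently degraded.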

4. Install Ceph

ceph-deploy install cloud4ourself-c1 cloud4ourself-c2 cloud4ourself-c3

The command above installs the latest Ceph release. To install a specific version instead:

ceph-deploy install cloud4ourself-c1 --repo-url=http://mirrors.aliyun.com/ceph/rpm-hammer/el7/

If you hit an error like the following:

[c2][WARNIN] Error: Package: 1:ceph-selinux-10.2.6-0.el7.x86_64 (Ceph)
[c2][WARNIN]            Requires: selinux-policy-base >= 3.13.1-102.el7_3.13
[c2][WARNIN]            Installed: selinux-policy-targeted-3.13.1-60.el7.noarch (@anaconda)
[c2][WARNIN]                selinux-policy-base = 3.13.1-60.el7
[c2][WARNIN]            Available: selinux-policy-minimum-3.13.1-102.el7.noarch (base)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7
[c2][WARNIN]            Available: selinux-policy-minimum-3.13.1-102.el7_3.4.noarch (updates)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7_3.4
[c2][WARNIN]            Available: selinux-policy-minimum-3.13.1-102.el7_3.7.noarch (updates)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7_3.7
[c2][WARNIN]            Available: selinux-policy-mls-3.13.1-102.el7.noarch (base)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7
[c2][WARNIN]            Available: selinux-policy-mls-3.13.1-102.el7_3.4.noarch (updates)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7_3.4
[c2][WARNIN]            Available: selinux-policy-mls-3.13.1-102.el7_3.7.noarch (updates)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7_3.7
[c2][WARNIN]            Available: selinux-policy-targeted-3.13.1-102.el7.noarch (base)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7
[c2][WARNIN]            Available: selinux-policy-targeted-3.13.1-102.el7_3.4.noarch (updates)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7_3.4
[c2][WARNIN]            Available: selinux-policy-targeted-3.13.1-102.el7_3.7.noarch (updates)
[c2][WARNIN]                selinux-policy-base = 3.13.1-102.el7_3.7
[c2][DEBUG ]  You could try running: rpm -Va --nofiles --nodigest
[c2][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: yum -y install ceph ceph-radosgw

This is likely because the base/updates repos on that node are stale; refresh the repo metadata and retry.

If you hit an error like this instead:

[c1][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[c1][WARNIN]    file /etc/yum.repos.d/ceph.repo from install of ceph-release-1-1.el7.noarch conflicts with file from package ceph-release-1-1.el7.noarch
[c1][DEBUG ] Preparing...                          ########################################
[c1][ERROR ] RuntimeError: command returned non-zero exit status: 1
[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm

Run sudo rpm -e ceph-release and then retry.

Monitor initialization

[ceph@cloud4ourself-c1 k8s]$ ceph-deploy mon create-initial
[ceph@cloud4ourself-c1 k8s]$ ls 
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.bootstrap-rgw.keyring  ceph.client.admin.keyring  ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

On cloud4ourself-c2 and cloud4ourself-c3:

sudo mkdir -p /var/local/cephfs
sudo chown ceph:ceph  /var/local/cephfs

On cloud4ourself-c1:
(Since these are virtual machines, a directory on the XFS filesystem is used in place of a dedicated disk.)

ceph-deploy osd prepare  cloud4ourself-c2:/var/local/cephfs cloud4ourself-c3:/var/local/cephfs
ceph-deploy osd activate cloud4ourself-c2:/var/local/cephfs cloud4ourself-c3:/var/local/cephfs
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph -s

Follow-up

Add two more monitors.
Edit ceph.conf and add:

public_network=10.9.5.0/24

ceph-deploy --overwrite-conf mon add cloud4ourself-c2
ceph-deploy --overwrite-conf mon add cloud4ourself-c3
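After adding the two monitors, it is worth confirming that all three have actually joined quorum (a sketch; the exact output will vary with your cluster):

```shell
# Should list all three monitors and a quorum of 3
ceph mon stat

# quorum_names should contain cloud4ourself-c1, -c2 and -c3
ceph quorum_status --format json-pretty
```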

While testing with Kubernetes, after shutting down one node, an rbd lock problem appeared:

Mar  8 17:53:18 cloud4ourself-mytest2 kubelet: E0308 17:53:18.391278    2162 nestedpendingoperations.go:262] Operation for "\"kubernetes.io/rbd/a539e906-03d7-11e7-9826-fa163eec323b-pvc-a55f4a04-0317-11e7-9826-fa163eec323b\" (\"a539e906-03d7-11e7-9826-fa163eec323b\")" failed. No retries permitted until 2017-03-08 17:55:18.391251562 +0800 CST (durationBeforeRetry 2m0s). Error: MountVolume.SetUp failed for volume "kubernetes.io/rbd/a539e906-03d7-11e7-9826-fa163eec323b-pvc-a55f4a04-0317-11e7-9826-fa163eec323b" (spec.Name: "pvc-a55f4a04-0317-11e7-9826-fa163eec323b") pod "a539e906-03d7-11e7-9826-fa163eec323b" (UID: "a539e906-03d7-11e7-9826-fa163eec323b") with: rbd: image kubernetes-dynamic-pvc-52f919af-0321-11e7-b778-fa163eec323b is locked by other nodes

The fix is to remove the lock manually:
rbd lock remove
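A rough sketch of the manual unlock, using the image name from the log above; the lock id and locker are placeholders that you read from the rbd lock list output (add -p &lt;pool&gt; if the image is not in the default pool):

```shell
# Find the current lock holder on the affected image
rbd lock list kubernetes-dynamic-pvc-52f919af-0321-11e7-b778-fa163eec323b

# Remove the stale lock using the lock ID and locker printed above
rbd lock remove kubernetes-dynamic-pvc-52f919af-0321-11e7-b778-fa163eec323b <lock-id> <locker>
```

Once the lock is gone, kubelet's next mount retry on the surviving node should succeed.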

References:
http://docs.ceph.org.cn/start/
http://www.cnblogs.com/clouding/p/6115447.html
http://tonybai.com/2017/02/17/temp-fix-for-pod-unable-mount-cephrbd-volume/
http://tonybai.com/2016/11/07/integrate-kubernetes-with-ceph-rbd/
