I. Background
1. The target machines sit on an intranet with no Internet access, and we need to deploy Ceph there to provide a Ceph file system (CephFS) for a distributed cluster.
2. The installation should eventually be automated with a shell script or an Ansible playbook, without using the ceph-deploy tool.
The plan: on a lab machine that can reach the Internet, download the main Ceph packages together with all of their dependencies in one pass and write the install scripts; then build a local yum repository on the target machines and install fully offline.
In this article we first build the local repository and install manually on the target machines.
II. Environment
OS: CentOS 7.5 Minimal
Internet-connected lab machine: 192.168.1.101
cephServer (node01): 192.168.1.103
cephServer (node01) data disk: /dev/sdb, 100 GB
cephClient: 192.168.1.106
III. Download the Ceph packages and their dependencies on the connected machine
Add the official Ceph yum mirror repository:
# vi /etc/yum.repos.d/ceph.repo
##################################################
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
##################################################

# yum clean all
# yum repolist
# yum list all | grep ceph

# yum -y install epel-release
# yum -y install yum-utils
# yum -y install createrepo
# mkdir /root/cephDeps
# repotrack ceph ceph-mgr ceph-mon ceph-mds ceph-osd ceph-fuse ceph-radosgw -p /root/cephDeps
# createrepo -v /root/cephDeps
# tar -zcf cephDeps.tar.gz /root/cephDeps
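Before shipping the archive to the offline machines, it can be worth a quick check that the createrepo metadata actually made it into the tarball. A small sketch:

```shell
# Confirm the repo metadata is inside the archive before copying it around;
# createrepo writes its index to repodata/repomd.xml.
tar -tzf cephDeps.tar.gz | grep 'repodata/repomd.xml'
```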
IV. Build the local yum repository on cephServer (node01)
Copy cephDeps.tar.gz to the cephServer (node01) machine.
# tar -zxf cephDeps.tar.gz
# vim build_localrepo.sh
##################################################
#!/bin/bash
parent_path=$( cd "$(dirname "${BASH_SOURCE}")" ; pwd -P )
cd "$parent_path"
mkdir -p /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
# create local repository
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[localrepo]
name=Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
##################################################
# sh -x build_localrepo.sh
# yum repolist

V. Offline single-node Ceph install on cephServer (node01)
Put SELinux into permissive mode:
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Configure the firewall, opening the ports Ceph needs:
# systemctl start firewalld
# systemctl enable firewalld
# firewall-cmd --zone=public --add-port=6789/tcp --permanent
# firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
# firewall-cmd --reload
Install the Ceph components from the local yum repository:
# yum -y install ceph ceph-mds ceph-mgr ceph-osd ceph-mon

# yum list installed | grep ceph

# ll /etc/ceph/
# ll /var/lib/ceph/

Configure the Ceph components
Generate a cluster id:
# uuidgen
uuidgen prints a UUID such as ee741368-4233-4cbc-8607-5d36ab314dab; this value becomes the cluster fsid.
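Since the generated fsid has to be pasted into ceph.conf by hand, it can help to capture it in a variable and substitute it into a template instead of retyping it. A minimal sketch (the template path and the FSID_PLACEHOLDER marker are assumptions of this sketch, not part of the article's workflow):

```shell
# Generate the cluster fsid once; fall back to the kernel's uuid source
# if uuidgen is not installed. Paths here are illustrative only.
FSID=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid)
printf '[global]\nfsid = FSID_PLACEHOLDER\n' > /tmp/ceph.conf.template
sed "s/FSID_PLACEHOLDER/$FSID/" /tmp/ceph.conf.template > /tmp/ceph.conf
grep fsid /tmp/ceph.conf
```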
Create the main Ceph configuration file:
# vim /etc/ceph/ceph.conf
######################################
[global]
fsid = ee741368-4233-4cbc-8607-5d36ab314dab
mon_initial_members = node01
mon_host = 192.168.1.103
mon_max_pg_per_osd = 300
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 1
osd_pool_default_min_size = 1
osd_journal_size = 1024
osd_crush_chooseleaf_type = 0
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24
[mon]
mon allow pool delete = true
###################################

1. Deploy mon
Create the mon keyring:
# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
# cat /tmp/ceph.mon.keyring

Create the admin keyring and the bootstrap-osd keyring:
#? ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd'
# cat /etc/ceph/ceph.client.admin.keyring
# cat /var/lib/ceph/bootstrap-osd/ceph.keyring

Import both keys into the mon keyring:
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
# ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
# cat /tmp/ceph.mon.keyring

Create the monitor map:
# monmaptool --create --add node01 192.168.1.103 --fsid ee741368-4233-4cbc-8607-5d36ab314dab /tmp/monmap

Create the mon data directory and initialize the monitor (the directory name must be <cluster>-<id>, i.e. ceph-node01):
# mkdir /var/lib/ceph/mon/ceph-node01
# chown -R ceph:ceph /var/lib/ceph/
# chown ceph:ceph /tmp/monmap /tmp/ceph.mon.keyring
# sudo -u ceph ceph-mon --mkfs -i node01 --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
# ll /var/lib/ceph/mon/ceph-node01/

Start the mon service:
# systemctl start ceph-mon@node01.service
# systemctl enable ceph-mon@node01.service
# systemctl status ceph-mon@node01.service

# ceph -s

2. Deploy osd
cephServer (node01) data disk: /dev/sdb, 100 GB
# lsblk

Create the OSD:
# ceph-volume lvm create --data /dev/sdb
# ll /dev/mapper/
# ll /var/lib/ceph/osd/ceph-0/


# ceph auth list

Start the osd service:
# systemctl start ceph-osd@0.service
# systemctl enable ceph-osd@0.service
# systemctl status ceph-osd@0.service

# ceph -s

3. Deploy mgr
Create the mgr keyring:
# mkdir /var/lib/ceph/mgr/ceph-node01
# ceph auth get-or-create mgr.node01 mon 'allow profile mgr' osd 'allow *' mds 'allow *' > /var/lib/ceph/mgr/ceph-node01/keyring
# chown -R ceph:ceph /var/lib/ceph/mgr

Start the mgr service:
# systemctl start ceph-mgr@node01.service
# systemctl enable ceph-mgr@node01.service
# systemctl status ceph-mgr@node01.service

# ceph -s

List the mgr modules:
# ceph mgr module ls

4. Deploy mds
Create the mds data directory:
# mkdir -p /var/lib/ceph/mds/ceph-node01
Create its keyring:
# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-node01/keyring --gen-key -n mds.node01
Register the key with the cluster:
# ceph auth add mds.node01 osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-node01/keyring
# chown -R ceph:ceph /var/lib/ceph/mds
# ceph auth list

Start the mds service:
# systemctl start ceph-mds@node01.service
# systemctl enable ceph-mds@node01.service
# systemctl status ceph-mds@node01.service

# ceph osd tree

5. Create the Ceph pools
A Ceph cluster can hold many pools. Each pool is a logical isolation unit, and different pools can be configured to handle data in completely different ways: replica size, number of placement groups, CRUSH rules, snapshots, ownership, and so on.
For choosing pg_num, see: https://ceph.com/pgcalc
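The rule of thumb behind that calculator is roughly (number of OSDs × 100) / replica count, rounded up to the next power of two. A minimal sketch of the arithmetic (an approximation for orientation only, not a substitute for the calculator):

```shell
# Suggest a pg_num: (OSDs * 100 / replicas), rounded up to a power of two.
suggest_pg_num() {
  osds=$1
  replicas=$2
  target=$(( osds * 100 / replicas ))
  pg=1
  while [ "$pg" -lt "$target" ]; do
    pg=$(( pg * 2 ))
  done
  echo "$pg"
}
suggest_pg_num 1 1   # our single-OSD, size=1 lab cluster -> 128
```

This matches the pg_num of 128 used for the two pools below.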

# ceph osd pool create cephfs_data 128
# ceph osd pool create cephfs_metadata 128
# ceph fs new cephfs cephfs_metadata cephfs_data
# ceph fs ls
# ceph -s

# ceph --show-config | grep mon_max_pg_per_osd

With only a few OSDs in the cluster, creating many pools uses up placement groups quickly: each pool consumes PGs, and Ceph by default caps each OSD at roughly 250 PGs. The cap is tunable, but setting it far too high or too low can both hurt cluster behavior.
# vim /etc/ceph/ceph.conf
################################
mon_max_pg_per_osd = 300
################################
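The arithmetic behind this adjustment can be sketched as follows: with the two 128-PG pools created above, replica size 1, and a single OSD, every PG lands on the same OSD (the numbers below mirror this lab setup):

```shell
# PGs per OSD = (sum of pg_num over pools) * replica size / number of OSDs
pool_pgs=$(( 128 + 128 ))   # cephfs_data + cephfs_metadata
size=1                      # osd_pool_default_size
osds=1
pg_per_osd=$(( pool_pgs * size / osds ))
echo "$pg_per_osd"          # 256, above the ~250 default cap
```

256 exceeds the default limit, which is why mon_max_pg_per_osd is raised to 300 here.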

# systemctl restart ceph-mgr@node01.service
# systemctl status ceph-mgr@node01.service
# ceph --show-config | grep "mon_max_pg_per_osd"

# ceph osd lspools

After the services on the cephServer node have started normally: service status, processes, log files, and listening ports at a glance.




# ll /etc/ceph/

# ll /var/lib/ceph/
# tree /var/lib/ceph/

# cd /var/lib/ceph/
# ll bootstrap-*


VI. Install and configure cephClient
A client can mount a CephFS directory in two ways:
1. the Linux kernel client
2. ceph-fuse
Each has trade-offs. The kernel client talks to Ceph mostly in kernel space, so it performs better; the drawback is that Luminous CephFS expects the client to support some newer features. ceph-fuse is simpler and also supports quotas, but it is slower: on an all-SSD cluster we measured roughly half the throughput of the kernel client.
Put SELinux into permissive mode:
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
Option 1: the Linux kernel client
Fetch the admin key on the cephServer machine:
# cat /etc/ceph/ceph.client.admin.keyring

Clusters deployed with ceph-deploy enable cephx authentication by default, so the mount needs the "key" value from /etc/ceph/ceph.client.admin.keyring on the mon node. The secretfile option would avoid exposing the key on the command line, but a bug makes it fail with: libceph: bad option at 'secretfile=/etc/ceph/admin.secret'
Bug report: https://bugzilla.redhat.com/show_bug.cgi?id=1030402
# mount -t ceph 192.168.1.103:6789:/ /mnt -o name=admin,secret=AQDZRfJcn4i0BRAAAHXMjFmkEZX2oO/ron1mRA==
# mount -l | grep ceph
# df -hT
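To make the kernel-client mount survive a reboot, an /etc/fstab entry can be added (a sketch reusing this article's monitor address and the admin key shown above; _netdev delays the mount until the network is up):

```
192.168.1.103:6789:/  /mnt  ceph  name=admin,secret=AQDZRfJcn4i0BRAAAHXMjFmkEZX2oO/ron1mRA==,noatime,_netdev  0  0
```

Keep in mind that /etc/fstab is world-readable, so embedding the key this way exposes it to local users.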

Option 2: ceph-fuse
Build the local yum repository on cephClient.
Copy cephDeps.tar.gz to the cephClient machine.
# tar -zxf cephDeps.tar.gz
# vim build_localrepo.sh
##################################################
#!/bin/bash
parent_path=$( cd "$(dirname "${BASH_SOURCE}")" ; pwd -P )
cd "$parent_path"
mkdir -p /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
# create local repository
rm -rf /tmp/localrepo
mkdir -p /tmp/localrepo
cp -rf ./cephDeps/* /tmp/localrepo
echo "
[localrepo]
name=Local Repository
baseurl=file:///tmp/localrepo
gpgcheck=0
enabled=1" > /etc/yum.repos.d/ceph.repo
yum clean all
##################################################
# sh -x build_localrepo.sh
# yum repolist

Install ceph-fuse:
# yum -y install ceph-fuse
# rpm -ql ceph-fuse

Create the Ceph config directory and copy the configuration file and keyring over from cephServer:
# mkdir /etc/ceph
# scp root@192.168.1.103:/etc/ceph/ceph.client.admin.keyring /etc/ceph
# scp root@192.168.1.103:/etc/ceph/ceph.conf /etc/ceph
# chmod 600 /etc/ceph/ceph.client.admin.keyring
Create a systemd service file for ceph-fuse:
# cp /usr/lib/systemd/system/ceph-fuse@.service /etc/systemd/system/ceph-fuse.service
# vim /etc/systemd/system/ceph-fuse.service
##############################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
########################################################

We mount CephFS at /mnt on the client:
# systemctl daemon-reload
# systemctl start ceph-fuse.service
# systemctl enable ceph-fuse.service
# systemctl status ceph-fuse.service

# systemctl start ceph-fuse.target
# systemctl enable ceph-fuse.target
# systemctl status ceph-fuse.target

# df -hT

Write a large file as a test:
# dd if=/dev/zero of=/mnt/test bs=1M count=10000
# df -hT

Mount a CephFS subdirectory
As shown above, we mounted CephFS with / as the source directory. Dedicating the whole cluster to one user would be wasteful; can the tree be split into several directories, with each user mounting and working in only their own?
# ceph-fuse --help

After / has been mounted as admin, any directory created under it becomes a subtree of CephFS, and other suitably configured users can mount those subtrees directly. The steps:
1. Mount / as admin and create /ceph
# mkdir -p /opt/tmp
# ceph-fuse /opt/tmp
# mkdir /opt/tmp/ceph
# umount /opt/tmp
# rm -rf /opt/tmp
2. Edit ceph-fuse.service to mount the subdirectory
# vim /etc/systemd/system/ceph-fuse.service
################################################
[Unit]
Description=Ceph FUSE client
After=network-online.target local-fs.target time-sync.target
Wants=network-online.target local-fs.target time-sync.target
Conflicts=umount.target
PartOf=ceph-fuse.target
[Service]
EnvironmentFile=-/etc/sysconfig/ceph
Environment=CLUSTER=ceph
ExecStart=/usr/bin/ceph-fuse -f -o rw,noexec,nosuid,nodev /mnt -r /ceph
TasksMax=infinity
Restart=on-failure
StartLimitInterval=30min
StartLimitBurst=3
[Install]
WantedBy=ceph-fuse.target
###################################################################

# systemctl daemon-reload
# systemctl start ceph-fuse.service
# systemctl enable ceph-fuse.service
# systemctl status ceph-fuse.service

# systemctl start ceph-fuse.target
# systemctl enable ceph-fuse.target
# systemctl status ceph-fuse.target

# df -hT

After the services on the cephClient node have started normally: service status, processes, log files, and listening ports at a glance.


This article has covered only CephFS, Ceph's file system. For the other two storage types, block storage and object storage, please consult the references and related material.
VII. References
Ceph basics
https://www.cnblogs.com/zywu-king/p/9064032.html
Offline deployment of Ceph block and object storage on CentOS 7
https://pianzong.club/2018/11/05/install-ceph-offline/
The Ceph distributed file system
https://blog.csdn.net/dapao123456789/article/category/2197933
Initializing disks with ceph-deploy v2.0.0
https://blog.51cto.com/3168247/2088865
Handling the Ceph warning "too many PGs per OSD"
http://www.itdecent.cn/p/f2b20a175702
Pool management in Ceph (Luminous)
https://blog.csdn.net/signmem/article/details/78594340
An OSD stays down after being added to the cluster
https://blog.51cto.com/xiaowangzai/2173309
Single-node Ceph deployment and CephFS on CentOS 7.x
http://www.itdecent.cn/p/736fc03bd164
Ceph BlueStore and ceph-volume
http://xcodest.me/ceph-bluestore-and-ceph-volume.html
Ceph PGs per Pool Calculator
https://ceph.com/pgcalc
Manual Deployment
http://docs.ceph.com/docs/master/install/manual-deployment/#manager-daemon-configuration
ceph-mgr Administrator's Guide
http://docs.ceph.com/docs/master/mgr/administrator/#mgr-administrator-guide
Create a Ceph Filesystem
http://docs.ceph.com/docs/master/cephfs/createfs
http://docs.ceph.org.cn/cephfs/createfs
Red Hat: Manually Installing Red Hat Ceph Storage
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/manually-installing-red-hat-ceph-storage
What is Red Hat Ceph Storage?
https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/installation_guide_for_red_hat_enterprise_linux/what_is_red_hat_ceph_storage