Ceph Luminous (12.2.4) Environment Setup

0. Preparing the cluster

vm: 3 x CentOS-7.2-x64 (3.10.0-693.5.2.el7.x86_64), 1 core, 1 GB RAM, 2 x 20 GB SSD each

subnet: 10.0.4.0/26

Host    IP                                                    Roles
node1   10.0.4.14 (private), 192.168.20.52 100Mbps (public)   ceph-deploy, mon, mgr, osd
node2   10.0.4.6  (private), 192.168.20.59 100Mbps (public)   mon, osd
node3   10.0.4.15 (private), 192.168.20.58 100Mbps (public)   mon, osd
  1. Set the hostname (on all nodes); see "How to change the hostname on CentOS 7"
[root@node{1,2,3} ~]# hostnamectl set-hostname node1    # use node2 / node3 on the respective nodes
[root@node{1,2,3} ~]# hostnamectl --pretty

[root@node{1,2,3} ~]# hostnamectl --static
node1
[root@node{1,2,3} ~]# hostnamectl --transient
node1
[root@node{1,2,3} ~]# cat /etc/hosts
...
127.0.0.1 node1
  2. Create a deploy user; see CREATE A CEPH DEPLOY USER
[root@node{1,2,3} ~]# sudo useradd -d /home/search -m search    # the search user
[root@node{1,2,3} ~]# sudo passwd search
[root@node{1,2,3} ~]# echo "search ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/search
search ALL = (root) NOPASSWD:ALL
[root@node{1,2,3} ~]# sudo chmod 0440 /etc/sudoers.d/search
  3. Enable password-less SSH; see ENABLE PASSWORD-LESS SSH
[root@node1 ~]# su search
[search@node1 ~]$ cat ~/.ssh/config
#~/.ssh/config

Host node2
    Hostname 10.0.4.6
    User search

Host node3
    Hostname 10.0.4.15
    User search

[search@node1 ~]$ chmod 0600 ~/.ssh/config
[search@node1 ~]$ ssh-copy-id search@node2
[search@node1 ~]$ ssh-copy-id search@node3
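
Note: ssh-copy-id assumes the search user on node1 already has a key pair; if it does not, one can be generated first (a minimal sketch, using RSA with an empty passphrase for this lab setup):

[search@node1 ~]$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # generates the key pair that ssh-copy-id distributes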

1. Preflight checks

TODO: firewall configuration (see the sketch below)
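
A hedged sketch using firewalld, assuming the default Ceph ports (6789/tcp for the monitors, 6800-7300/tcp for OSD and mgr daemons, 7000/tcp for the dashboard on node1); adjust to the actual firewall policy:

[search@node{1,2,3} ~]$ sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent        # mon
[search@node{1,2,3} ~]$ sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent   # osd / mgr
[search@node1 ~]$ sudo firewall-cmd --zone=public --add-port=7000/tcp --permanent              # dashboard
[search@node{1,2,3} ~]$ sudo firewall-cmd --reload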

  1. Configure the NTP service; see CEPH NODE SETUP
[search@node{1,2,3} ~]$ sudo yum install ntp ntpdate -y
[search@node{1,2,3} ~]$ sudo ntpdate pool.ntp.org
[search@node{1,2,3} ~]$ sudo systemctl enable ntpd.service
[search@node{1,2,3} ~]$ sudo systemctl enable ntpdate.service
[search@node{1,2,3} ~]$ sudo systemctl start ntpd.service
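
To confirm that time is actually being synchronized on each node (optional check):

[search@node{1,2,3} ~]$ ntpq -p    # the selected upstream server is marked with '*'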
  2. PRIORITIES/PREFERENCES
[search@node{1,2,3} ~]$ sudo yum install yum-plugin-priorities -y
  3. Configure the Ceph repositories; see RPM PACKAGES, INSTALLING WITH RPM
[search@node{1,2,3} ~]$ cat /etc/yum.repos.d/ceph.repo    # create this repo file on every node
[ceph]
name=Ceph packages for $basearch
# luminous, centos7
baseurl=https://download.ceph.com/rpm-luminous/el7/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://download.ceph.com/rpm-luminous/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
  4. Refresh the yum cache and install dependencies
[search@node{1,2,3} ~]$ sudo yum makecache
[search@node{1,2,3} ~]$ sudo yum update -y
[search@node{1,2,3} ~]$ sudo yum install snappy leveldb gdisk python-argparse gperftools-libs -y
  5. Install ceph-deploy (on node1); see INSTALL CEPH DEPLOY
[search@node1 ~]$ sudo yum install -y ceph-deploy
[search@node1 ~]$ ceph-deploy --version
2.0.0

2. Building the cluster

  1. Create a working directory for the cluster
[search@node1 ~]$ mkdir my-cluster
[search@node1 ~]$ cd my-cluster
  2. Create the cluster; see CREATE A CLUSTER
[search@node1 my-cluster]$ cat /etc/hosts    # configure /etc/hosts on node1
...
10.0.4.14       node1
10.0.4.6        node2
10.0.4.15       node3

[search@node1 my-cluster]$ ceph-deploy new node1 node2 node3
[search@node1 my-cluster]$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

[search@node1 my-cluster]$ cat ceph.conf     # edit the config and add the public network line
[global]
fsid = 57e12384-fd45-422b-bd0a-49da4149c1da
mon_initial_members = node1, node2, node3
mon_host = 10.0.4.14,10.0.4.6,10.0.4.15
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 10.0.4.0/26        # add
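
If a dedicated replication NIC existed, a cluster network could be declared the same way (a hypothetical sketch only; this lab has a single internal subnet, so the line is not used here):

cluster network = 10.0.5.0/26       # hypothetical replication subnet, not part of this setup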
  3. Install ceph luminous (12.2.4); see CREATE A CLUSTER
[search@node1 my-cluster]$ ceph-deploy install --release luminous node1 node2 node3

Because of slow network speeds this install may fail; in that case, ceph can be installed manually on each node instead:

[search@node{1,2,3} ~]$ sudo yum install -y epel-release
[search@node{1,2,3} ~]$ sudo yum install -y ceph ceph-radosgw

[search@node1 ~]$ ceph --version
ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)

[search@node2 ~]$ ceph --version
ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)

[search@node3 ~]$ ceph --version
ceph version 12.2.4 (52085d5249a80c5f5121a76d6288429f35e4e77b) luminous (stable)
  4. Bootstrap the initial monitors; see CREATE A CLUSTER
[search@node1 my-cluster]$ ceph-deploy mon create-initial
[search@node1 my-cluster]$ ls
ceph.bootstrap-mds.keyring  ceph.bootstrap-osd.keyring  ceph.client.admin.keyring  ceph-deploy-ceph.log
ceph.bootstrap-mgr.keyring  ceph.bootstrap-rgw.keyring  ceph.conf                  ceph.mon.keyring
  5. Copy the config and admin keyring to every node so the ceph CLI can be used there without specifying the monitor address and keyring each time
[search@node1 my-cluster]$ ceph-deploy admin node1 node2 node3
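
ceph-deploy admin places ceph.conf and ceph.client.admin.keyring under /etc/ceph on each node; to let the search user run ceph commands without sudo, the keyring can be made readable (as suggested in the upstream quick start):

[search@node{1,2,3} ~]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring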
  6. Deploy a manager daemon (required since luminous)
[search@node1 my-cluster]$ ceph-deploy mgr create node1
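
Only node1 gets a mgr here; for failover, standby managers could also be created on the other nodes (optional sketch):

[search@node1 my-cluster]$ ceph-deploy mgr create node2 node3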
  7. Add OSDs; see CREATE A CLUSTER
[search@node{1,2,3} ~]$ sudo fdisk -l       # each vm has two 20 GB SSDs attached
Disk /dev/sda: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b467e

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    41943006    20970479+  83  Linux

Disk /dev/sdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[search@node{1,2,3} ~]$ sudo lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
sda
└─sda1 xfs          6f15c206-f516-4ee8-a4b7-89ad880647db /
sdb

[search@node1 my-cluster]$ ceph-deploy osd create --data /dev/sdb node1
[search@node1 my-cluster]$ ceph-deploy osd create --data /dev/sdb node2
[search@node1 my-cluster]$ ceph-deploy osd create --data /dev/sdb node3
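
If a disk still carries old partitions or a filesystem, ceph-deploy osd create will refuse to use it; the disk can be wiped first (this destroys all data on it):

[search@node1 my-cluster]$ ceph-deploy disk zap node1 /dev/sdb    # repeat per node/disk as needed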
  8. Check the cluster status and the OSD tree
[search@node1 my-cluster]$ sudo ceph -s
  cluster:
    id:     57e12384-fd45-422b-bd0a-49da4149c1da
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node2,node1,node3
    mgr: node1(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   3164 MB used, 58263 MB / 61428 MB avail
    pgs:

[search@node1 my-cluster]$ sudo ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.05846 root default
-3       0.01949     host node1
 0   hdd 0.01949         osd.0      up  1.00000 1.00000
-5       0.01949     host node2
 1   hdd 0.01949         osd.1      up  1.00000 1.00000
-7       0.01949     host node3
 2   hdd 0.01949         osd.2      up  1.00000 1.00000

3. Dashboard configuration

  1. Create an auth key for the mgr daemon
[search@node1 my-cluster]$ sudo ceph auth get-or-create mgr.node1 mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.node1]
    key = AQC7os1ao1nFFhAANIB3V697rwKAHPc6ZiUPcw==
  2. Start the ceph-mgr daemon
[search@node1 my-cluster]$ sudo ceph-mgr -i node1
  3. Check the ceph status and confirm the mgr is active
[search@node1 my-cluster]$ sudo ceph -s
  cluster:
    id:     57e12384-fd45-422b-bd0a-49da4149c1da
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node2,node1,node3
    mgr: node1(active, starting)        # here
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 bytes
    usage:   3164 MB used, 58263 MB / 61428 MB avail
    pgs:
  4. Enable the dashboard module
[search@node1 my-cluster]$ sudo ceph mgr module enable dashboard
  5. Bind the dashboard to the IP address of the ceph-mgr node on which the module is enabled
[search@node1 my-cluster]$ sudo ceph config-key set mgr/dashboard/node1/server_addr 192.168.20.52
set mgr/dashboard/node1/server_addr

# the dashboard listens on port 7000 by default
[search@node1 my-cluster]$ sudo netstat -tunpl | grep ceph-mgr # confirm the service is listening on port 7000
tcp        0      0 10.0.4.14:6800          0.0.0.0:*               LISTEN      12499/ceph-mgr
tcp6       0      0 :::7000                 :::*                    LISTEN      12499/ceph-mgr
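
The listen port can be changed through the same config-key mechanism if 7000 is already taken (a hedged sketch; the dashboard module has to be re-enabled, or ceph-mgr restarted, for the new value to take effect):

[search@node1 my-cluster]$ sudo ceph config-key set mgr/dashboard/node1/server_port 7000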
  6. Open the dashboard at http://192.168.20.52:7000
(screenshot: dashboard.png — the Ceph dashboard)

4. Pool creation and usage

# create a pool
[search@node1 my-cluster]$ sudo ceph osd pool create rbd 128 128
[search@node1 my-cluster]$ sudo rbd pool init rbd
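
With the pool initialized, a quick smoke test is to create and list a block image (a minimal sketch; the image name test-img and the 1 GiB size are arbitrary):

[search@node1 my-cluster]$ sudo rbd create test-img --size 1024 --pool rbd    # 1 GiB test image
[search@node1 my-cluster]$ sudo rbd ls rbd
test-img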

# delete a pool
[search@node1 my-cluster]$ sudo ceph osd pool rm rbd rbd --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you
can destroy a pool

[search@node1 my-cluster]$ cat ceph.conf     # add mon_allow_pool_delete = true under [global]
...
mon_allow_pool_delete = true

[search@node1 my-cluster]$ ceph-deploy --overwrite-conf config push node1
[search@node1 my-cluster]$ ceph-deploy --overwrite-conf config push node2
[search@node1 my-cluster]$ ceph-deploy --overwrite-conf config push node3

[search@node{1,2,3} ~]$ sudo systemctl restart ceph-mon.target       # restart the monitors
[search@node1 my-cluster]$ sudo ceph osd pool rm rbd rbd --yes-i-really-really-mean-it
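
Alternatively, the setting can be injected into the running monitors without a config push and restart (a hedged sketch; an injected value is lost on the next mon restart unless ceph.conf is also updated):

[search@node1 my-cluster]$ sudo ceph tell 'mon.*' injectargs '--mon-allow-pool-delete=true'
[search@node1 my-cluster]$ sudo ceph osd pool ls    # confirm the pool is gone after removal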
