kolla-ceph 4: Supporting bcache devices and iSCSI/multipath devices

Series links

  1. http://www.itdecent.cn/p/f18a1b3a4920 Deploying a containerized Ceph cluster with kolla
  2. http://www.itdecent.cn/p/a39f226d5dfb Fixing some problems encountered during deployment
  3. http://www.itdecent.cn/p/d520fed237c0 Introducing the device classes feature into kolla ceph
  4. http://www.itdecent.cn/p/d6e047e1ad06 Supporting bcache devices and iSCSI/multipath devices
  5. http://www.itdecent.cn/p/ab8251fc991a A Ceph containerized deployment orchestration project

This post describes how to support bcache devices and iSCSI/multipath devices in kolla-ceph.

Commit URLs

kolla and kolla-ansible do not support bcache disks or multipath disks out of the box, so I submitted the two commits below to add that support. The biggest difference from the original implementation is that it uses the partition name (partname) to create the OSD partition symlinks, whereas I use the partuuid; for the concrete implementation I followed the approach taken by ceph-disk.

kolla: https://review.opendev.org/#/c/599961/

kolla-ansible: https://review.opendev.org/#/c/599962/
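
The idea behind the partuuid approach, shown here only as a minimal sketch and not the actual code of the commits above: read the partition's PARTUUID with blkid and point the OSD at the stable /dev/disk/by-partuuid/ path, which survives device renaming, instead of the raw partition name.

# Illustrative sketch only (the device path is an example from the bcache setup below)
part_uuid=$(blkid -s PARTUUID -o value /dev/bcache0p1)
# The by-partuuid symlink always resolves to the right partition,
# even if the kernel assigns a different device name after a reboot
ls -l /dev/disk/by-partuuid/"${part_uuid}"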

Using bcache disks with Kolla Ceph

Introduction to bcache

Bcache is a block-layer cache in the Linux kernel that lets multiple HDDs share a single SSD or NVMe device as a cache, making it possible to use an SSD as a cache in front of HDDs. SSDs are expensive and small, while HDDs are cheap and large; using the SSD as the cache and the HDDs as the data disks works around both the limited capacity of the SSD and the slow speed of the HDD.

Why use bcache disks under bluestore

As we know, bluestore does not use a local filesystem; it takes over the raw device directly. Since the AIO interface provided by the operating system only supports direct I/O, writes to the block device go straight to disk. Compared with filestore, this skips the journal write, turning two writes into one, so write throughput should improve in theory. Bluestore was therefore designed with fast disks in mind, but there is no way around it: the budget dictates that we mostly run on ordinary disks. For an ordinary disk, its I/O bottleneck sets the performance ceiling; to raise that ceiling we put a caching layer in front of it, and that is exactly what bcache is for.

Building bcache disks

Everything below was tested on my virtual machines, running CentOS 7.

Node         SSD disk    HDDs
ceph-node1   sdb         sdc,sdd
ceph-node2   sdb         sdc,sdd
ceph-node3   sdb         sdc,sdd
  • First partition the SSD; the goal is one SSD (sdb) acting as the cache for two ordinary disks (sdc, sdd)
sudo sgdisk --zap-all -- /dev/sdb

parted /dev/sdb -s -- mklabel gpt mkpart  bcache0  1  25000
parted /dev/sdb -s mkpart bcache1  25001  100%
  • Install bcache
# My environment was missing the following two packages, blkid and uuid; install whatever the build errors point to
yum install libblkid-devel uuid -y

# Install bcache-tools
git clone https://evilpiepirate.org/git/bcache-tools.git
cd bcache-tools
make
make install

# Load the bcache kernel module
modprobe bcache
  • Wipe old bcache data from the cache partitions
dd if=/dev/zero of=/dev/sdb1 bs=512k count=200
dd if=/dev/zero of=/dev/sdb2 bs=512k count=200

PS: bcache suggests using wipefs -a /dev/sdb1 to wipe the device, but on my environment this command hits a bug: for example, when I wanted to wipe the old bcache cache and repartition, the cache device reappeared under /sys/fs/bcache right after running wipefs, and every later operation then failed with "Device or resource busy".

  • Wipe the bcache backing-device partitions
sudo sgdisk --zap-all -- /dev/sdc
sudo sgdisk --zap-all -- /dev/sdd
  • Create the bcache devices
make-bcache -C /dev/sdb1 -B /dev/sdc --writeback
make-bcache -C /dev/sdb2 -B /dev/sdd --writeback
  • Check
[root@ceph-node1 bcache]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdd               8:48   0   50G  0 disk
└─bcache1       252:128  0   50G  0 disk
sdb               8:16   0   50G  0 disk
├─sdb2            8:18   0 26.7G  0 part
│ └─bcache1     252:128  0   50G  0 disk
└─sdb1            8:17   0 23.3G  0 part
  └─bcache0     252:0    0   50G  0 disk
sdc               8:32   0   50G  0 disk
└─bcache0       252:0    0   50G  0 disk

[root@ceph-node1 bcache]# fdisk -l

Disk /dev/bcache0: 53.7 GB, 53687083008 bytes, 104857584 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/bcache1: 53.7 GB, 53687083008 bytes, 104857584 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Now we can treat the bcache devices like ordinary disks.
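
If you want to confirm or change the caching behaviour before handing the devices to ceph, the bcache sysfs interface is enough; a small sketch, assuming the standard bcache sysfs layout:

# The active mode is shown in brackets, e.g. "writethrough [writeback] writearound none"
cat /sys/block/bcache0/bcache/cache_mode
# We created the devices with --writeback, but the mode can also be switched at runtime
echo writeback > /sys/block/bcache0/bcache/cache_mode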

Deploying kolla ceph with bcache disks

  • Prepare the kolla ceph disks (a quick label check follows the commands below)
sudo sgdisk --zap-all -- /dev/bcache0
sudo sgdisk --zap-all -- /dev/bcache1

sudo /sbin/parted  /dev/bcache0  -s  -- mklabel  gpt  mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1  1 -1
sudo /sbin/parted  /dev/bcache1  -s  -- mklabel  gpt  mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO2  1 -1
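
A quick sanity check before deploying (lsblk on CentOS 7 supports the PARTLABEL column): the KOLLA_CEPH_OSD_BOOTSTRAP_BS_* labels should show up on bcache0p1 and bcache1p1.

# Verify that the bootstrap labels landed on the bcache partitions
lsblk -o NAME,SIZE,TYPE,PARTLABEL /dev/bcache0 /dev/bcache1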

With my commits applied, the deployment succeeds.

  • If you deploy with the original kolla and kolla-ansible code, you get the following error:
"+ sudo -E kolla_set_configs\n
INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\n
INFO:__main__:Validating config file\n
INFO:__main__:Kolla config strategy set to: COPY_ALWAYS\n
INFO:__main__:Copying service configuration files\n
INFO:__main__:Copying /var/lib/kolla/config_files/ceph.conf to /etc/ceph/ceph.conf\n
INFO:__main__:Setting permission for /etc/ceph/ceph.conf\n
INFO:__main__:Copying /var/lib/kolla/config_files/ceph.client.admin.keyring to /etc/ceph/ceph.client.admin.keyring\n
INFO:__main__:Setting permission for /etc/ceph/ceph.client.admin.keyring\n
INFO:__main__:Writing out command to execute\n
++ cat /run_command\n
+ CMD='/usr/bin/ceph-osd -f  --public-addr 10.34.135.160 --cluster-addr 10.34.135.160'\n
+ ARGS=\n
+ [[ ! -n '' ]]\n
+ . kolla_extend_start\n
++ [[ ! -d /var/log/kolla/ceph ]]\n
+++ stat -c %a /var/log/kolla/ceph\n
++ [[ 2755 != \\7\\5\\5 ]]\n
++ chmod 755 /var/log/kolla/ceph\n
++ [[ -n 0 ]]\n
++ CEPH_JOURNAL_TYPE_CODE=45B0969E-9B03-4F30-B4C6-B4B80CEFF106\n
++ CEPH_OSD_TYPE_CODE=4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D\n
++ CEPH_OSD_BS_WAL_TYPE_CODE=0FC63DAF-8483-4772-8E79-3D69D8477DE4\n
++ CEPH_OSD_BS_DB_TYPE_CODE=CE8DF73C-B89D-45B0-AD98-D45332906d90\n
++ ceph quorum_status\n
++ [[ False == \\F\\a\\l\\s\\e ]]\n
++ [[ bluestore == \\b\\l\\u\\e\\s\\t\\o\\r\\e ]]\n
++ [[ /dev/bcache0 =~ /dev/loop ]]\n
++ sgdisk --zap-all -- /dev/bcache01\n
Problem opening /dev/bcache01 for reading! Error is 2.\n
The specified file does not exist!\n
Problem opening '' for writing! Program will now terminate.\n
Warning! MBR not overwritten! Error is 2!\n",

From the log we can see the disk information that kolla detected:

{
            "bs_blk_device": "",
            "bs_blk_label": "",
            "bs_blk_partition_num": "",
            "bs_db_device": "",
            "bs_db_label": "",
            "bs_db_partition_num": "",
            "bs_wal_device": "",
            "bs_wal_label": "",
            "bs_wal_partition_num": "",
            "device": "/dev/bcache0",
            "external_journal": false,
            "fs_label": "",
            "fs_uuid": "",
            "journal": "",
            "journal_device": "",
            "journal_num": 0,
            "partition": "/dev/bcache0",
            "partition_label": "KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1",
            "partition_num": "1"
        }

The error comes from this piece of code (kolla/docker/ceph/ceph-osd/extend_start.sh):

if [[ "${OSD_BS_DEV}" =~ "/dev/loop" ]]; then
    sgdisk --zap-all -- "${OSD_BS_DEV}""p${OSD_BS_PARTNUM}"
else
    sgdisk --zap-all -- "${OSD_BS_DEV}""${OSD_BS_PARTNUM}"
fi

The kolla code only prepends 'p' to the partition number when the device is a /dev/loop device. The first partition of bcache0 is bcache0p1, but kolla builds the name bcache01 instead, which is why the error occurs.
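
One way to make that name construction robust, shown purely as an illustration (my commits take the different route of partuuid-based symlinks instead), is to add the 'p' separator whenever the parent device name ends in a digit, which covers loop, nvme, bcache and similar devices:

# Illustrative sketch only, not the fix used in the commits:
# append "p" when the device name ends in a digit, so /dev/bcache0 + partition 1
# becomes /dev/bcache0p1 while /dev/sdc + partition 1 stays /dev/sdc1
if [[ "${OSD_BS_DEV}" =~ [0-9]$ ]]; then
    sgdisk --zap-all -- "${OSD_BS_DEV}p${OSD_BS_PARTNUM}"
else
    sgdisk --zap-all -- "${OSD_BS_DEV}${OSD_BS_PARTNUM}"
fi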

Removing bcache disks

# Stop the backing device
echo 1 > /sys/block/bcache<N>/bcache/stop

# Unregister the cache device
echo 1 > /sys/fs/bcache/<uuid>/unregister

PS: Mind the order: if you unregister the cache device first without stopping the backing devices bound to it, the cache device will automatically come back.
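
Putting it together, a teardown sketch for the example cluster above (device names are the ones used on ceph-node1; adjust them to your environment):

# Stop the backing devices first (bcache0 and bcache1 in this example)
echo 1 > /sys/block/bcache0/bcache/stop
echo 1 > /sys/block/bcache1/bcache/stop

# Then unregister every cache set that is still registered
for cset in /sys/fs/bcache/*-*-*; do
    [ -d "$cset" ] && echo 1 > "$cset/unregister"
done

# Finally wipe the old bcache superblocks (cache partitions and backing disks)
# so the devices can be repartitioned cleanly
dd if=/dev/zero of=/dev/sdb1 bs=512k count=200
dd if=/dev/zero of=/dev/sdb2 bs=512k count=200
dd if=/dev/zero of=/dev/sdc bs=512k count=200
dd if=/dev/zero of=/dev/sdd bs=512k count=200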

Multipath disks

Node         Disks          Role                                       IP
ceph-node1   sdb,sdc,sdd    target node (iSCSI initiator)              192.168.10.11
ceph-node2   sdb,sdc,sdd    target node (iSCSI initiator)              192.168.10.12
ceph-node3   sdb,sdc        target node (iSCSI initiator)              192.168.10.13
ceph-node4   sdb,sdc,sdd    source node (exports the LUNs), dual NICs  192.168.10.14/192.168.11.14

Initializing the source node

  • Install the required packages
yum install targetd targetcli -y

systemctl enable target && systemctl start target
  • Prepare the logical volumes
sudo sgdisk --zap-all -- /dev/sdb
sudo sgdisk --zap-all -- /dev/sdc
sudo sgdisk --zap-all -- /dev/sdd

pvcreate /dev/sdb
vgcreate vg00 /dev/sdb
lvcreate -l 100%free -n lv00 vg00

pvcreate /dev/sdc
vgcreate vg01 /dev/sdc
lvcreate -l 100%free -n lv01 vg01

pvcreate /dev/sdd
vgcreate vg02 /dev/sdd
lvcreate -l 100%free -n lv02 vg02
  • Check the logical volumes
[root@ceph-node3 irteamsu]# lvs
  LV   VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao----  45.99g                                                    
  swap centos -wi-ao----   3.00g                                                    
  lv00 vg00   -wi-a----- <50.00g                                                    
  lv01 vg01   -wi-a----- <50.00g                                                    
  lv02 vg02   -wi-a----- <50.00g
  • Enter targetcli
[root@ceph-node3 irteamsu]# targetcli
targetcli shell version 2.1.fb46
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/>
  • Create the multipath disks (and save the configuration afterwards, as noted below)
/backstores/block create disk0 /dev/vg00/lv00
iscsi/ create iqn.2017-05.con.benet:disk0
/iscsi/iqn.2017-05.con.benet:disk0/tpg1/acls create iqn.2017-05.com.benet:192.168.10.11
/iscsi/iqn.2017-05.con.benet:disk0/tpg1/luns create /backstores/block/disk0


/backstores/block create disk1 /dev/vg01/lv01
iscsi/ create iqn.2017-05.con.benet:disk1
/iscsi/iqn.2017-05.con.benet:disk1/tpg1/acls create iqn.2017-05.com.benet:192.168.10.12
/iscsi/iqn.2017-05.con.benet:disk1/tpg1/luns create /backstores/block/disk1

/backstores/block create disk2 /dev/vg02/lv02
iscsi/ create iqn.2017-05.con.benet:disk2
/iscsi/iqn.2017-05.con.benet:disk2/tpg1/acls create iqn.2017-05.com.benet:192.168.10.13
/iscsi/iqn.2017-05.con.benet:disk2/tpg1/luns create /backstores/block/disk2
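
One thing to remember before leaving targetcli: the configuration built above lives only in the kernel until it is saved, so persist it (the target service restores /etc/target/saveconfig.json at boot):

# Still inside targetcli: persist the configuration so it survives a reboot of the source node
saveconfig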
  • Check
/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 3]
  | | o- disk0 ..................................................................... [/dev/vg00/lv00 (50.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk1 ..................................................................... [/dev/vg01/lv01 (50.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk2 ..................................................................... [/dev/vg02/lv02 (50.0GiB) write-thru activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 3]
  | o- iqn.2017-05.con.benet:disk0 ....................................................................................... [TPGs: 1]
  | | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  | |   o- acls .......................................................................................................... [ACLs: 1]
  | |   | o- iqn.2017-05.com.benet:192.168.10.11 .................................................................. [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ................................................................................. [lun0 block/disk0 (rw)]
  | |   o- luns .......................................................................................................... [LUNs: 1]
  | |   | o- lun0 ................................................................ [block/disk0 (/dev/vg00/lv00) (default_tg_pt_gp)]
  | |   o- portals .................................................................................................... [Portals: 1]
  | |     o- 0.0.0.0:3260 ..................................................................................................... [OK]
  | o- iqn.2017-05.con.benet:disk1 ....................................................................................... [TPGs: 1]
  | | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  | |   o- acls .......................................................................................................... [ACLs: 1]
  | |   | o- iqn.2017-05.com.benet:192.168.10.12 .................................................................. [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ................................................................................. [lun0 block/disk1 (rw)]
  | |   o- luns .......................................................................................................... [LUNs: 1]
  | |   | o- lun0 ................................................................ [block/disk1 (/dev/vg01/lv01) (default_tg_pt_gp)]
  | |   o- portals .................................................................................................... [Portals: 1]
  | |     o- 0.0.0.0:3260 ..................................................................................................... [OK]
  | o- iqn.2017-05.con.benet:disk2 ....................................................................................... [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- iqn.2017-05.com.benet:192.168.10.13.................................................................. [Mapped LUNs: 1]
  |     |   o- mapped_lun0 ................................................................................. [lun0 block/disk2 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 1]
  |     | o- lun0 ................................................................ [block/disk2 (/dev/vg02/lv02) (default_tg_pt_gp)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]

Setting up multipath on the target nodes

  • Install packages and configure
yum -y install iscsi-initiator-utils

# Configure the InitiatorName, using ceph-node1 as an example
vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2017-05.com.benet:192.168.10.11

systemctl enable iscsi && systemctl start iscsi
  • Discover and list the devices
[root@ceph-node1 irteamsu]# iscsiadm -m discovery -t st -p 192.168.10.14
192.168.10.14:3260,1 iqn.2017-05.con.benet:disk0
192.168.10.14:3260,1 iqn.2017-05.con.benet:disk1
192.168.10.14:3260,1 iqn.2017-05.con.benet:disk2
[root@ceph-node1 irteamsu]# iscsiadm -m discovery -t st -p 192.168.11.14
192.168.11.14:3260,1 iqn.2017-05.con.benet:disk0
192.168.11.14:3260,1 iqn.2017-05.con.benet:disk1
192.168.11.14:3260,1 iqn.2017-05.con.benet:disk2
  • A problem I ran into:
# On nodes running the 3.10.0-327.el7.x86_64 kernel, discovery failed after configuration
[root@ceph-node3 ~]# iscsiadm -m discovery -t st -p 192.168.10.14
iscsiadm: Cannot perform discovery. Invalid Initiatorname.
iscsiadm: Could not perform SendTargets discovery: invalid parameter

Resolved after a reboot.
  • Connect to the devices (then log in, as shown after the commands)
# Using ceph-node1 as an example
iscsiadm -m node -T iqn.2017-05.con.benet:disk0 -p 192.168.10.14 --op update -n node.startup -v automatic
iscsiadm -m node -T iqn.2017-05.con.benet:disk0 -p 192.168.11.14 --op update -n node.startup -v automatic
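
Marking the nodes as automatic only takes effect the next time the iscsi service starts; to attach the disks right away you can also log in to both portals explicitly (again using ceph-node1 and disk0 as the example):

# Log in over both portals so the two paths appear as separate /dev/sdX devices
iscsiadm -m node -T iqn.2017-05.con.benet:disk0 -p 192.168.10.14 --login
iscsiadm -m node -T iqn.2017-05.con.benet:disk0 -p 192.168.11.14 --login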
  • Check the network disks
Disk /dev/sde: 53.7 GB, 53682896896 bytes, 104849408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes


Disk /dev/sdf: 53.7 GB, 53682896896 bytes, 104849408 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 4194304 bytes
  • Configure multipath (then reload it, as shown after the config)
yum install device-mapper-multipath -y
systemctl enable multipathd.service && systemctl restart multipathd.service

vi /etc/multipath.conf
blacklist {
    devnode "^sda"
}
defaults {
    user_friendly_names yes
    path_grouping_policy multibus
    failback immediate
    no_path_retry fail
}
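
After writing the config, reload multipathd's view of the paths and check that a single mpath device now aggregates the two iSCSI paths:

# Re-scan the paths with the new configuration and list the resulting maps
multipath -r
multipath -ll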
  • Multipath devices are not created automatically (seen on the 4.20.2-1.el7.elrepo.x86_64 kernel)
[root@ceph-node2 ~]# multipath -v3
...
Apr 28 11:11:16 | mpathc: pgfailback = -2 (config file default)
Apr 28 11:11:16 | mpathc: pgpolicy = multibus (config file default)
Apr 28 11:11:16 | mpathc: selector = service-time 0 (internal default)
Apr 28 11:11:16 | mpathc: features = 0 (config file default)
Apr 28 11:11:16 | mpathc: hwhandler = 0 (internal default)
Apr 28 11:11:16 | mpathc: rr_weight = 1 (internal default)
Apr 28 11:11:16 | mpathc: minio = 1 rq (config file default)
Apr 28 11:11:16 | mpathc: no_path_retry = -1 (config file default)
Apr 28 11:11:16 | mpathc: pg_timeout = NONE (internal default)
Apr 28 11:11:16 | mpathc: fast_io_fail_tmo = 5 (config file default)
Apr 28 11:11:16 | mpathc: retain_attached_hw_handler = 1 (config file default)
Apr 28 11:11:16 | mpathc: deferred_remove = 1 (config file default)
Apr 28 11:11:16 | delay_watch_checks = DISABLED (internal default)
Apr 28 11:11:16 | delay_wait_checks = DISABLED (internal default)
Apr 28 11:11:16 | skip_kpartx = 1 (config file default)
Apr 28 11:11:16 | unpriv_sgio = 1 (config file default)
Apr 28 11:11:16 | mpathc: remove queue_if_no_path from '0'
Apr 28 11:11:16 | mpathc: assembled map [0 0 1 1 service-time 0 2 1 8:64 1 8:80 1]
Apr 28 11:11:16 | mpathc: set ACT_CREATE (map does not exist)
Apr 28 11:11:16 | ghost_delay = -1 (config file default)
Apr 28 11:11:16 | mpathc: domap (0) failure for create/reload map
Apr 28 11:11:16 | mpathc: ignoring map
Apr 28 11:11:16 | const prioritizer refcount 2
Apr 28 11:11:16 | directio checker refcount 2
Apr 28 11:11:16 | const prioritizer refcount 1
Apr 28 11:11:16 | directio checker refcount 1
Apr 28 11:11:16 | unloading const prioritizer
Apr 28 11:11:16 | unloading directio checker

Some digging shows the main cause: the newer multipath stack requires scsi-mq to be enabled:

https://access.redhat.com/documentation/zh-cn/red_hat_enterprise_linux/7/html/7.2_release_notes/storage

# To use scsi-mq, add scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y to the kernel boot parameters; it also improves disk I/O performance

# Find the matching kernel entry in grub.cfg and append scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y
vi /boot/grub2/grub.cfg

### BEGIN /etc/grub.d/10_linux ###
menuentry 'CentOS Linux (4.20.2-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-693.el7.x86_64-advanced-be679149-35c2-4143-b8c4-34a594f1b15f' {
        load_video
        set gfxpayload=keep
        insmod gzio
        insmod part_msdos
        insmod xfs
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 --hint='hd0,msdos1'  9f370650-6e47-4d78-b54d-420c0068cf6b
        else
          search --no-floppy --fs-uuid --set=root 9f370650-6e47-4d78-b54d-420c0068cf6b
        fi
        linux16 /vmlinuz-4.20.2-1.el7.elrepo.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8 scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y
        initrd16 /initramfs-4.20.2-1.el7.elrepo.x86_64.img
}

# Then reboot the machine
reboot
# Check that the parameters took effect
[root@ceph-node2 ~]# cat /proc/cmdline
BOOT_IMAGE=/vmlinuz-4.20.2-1.el7.elrepo.x86_64 root=/dev/mapper/centos-root ro crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet LANG=en_US.UTF-8 scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y
[root@ceph-node2 ~]# cat /sys/module/scsi_mod/parameters/use_blk_mq
Y

After running multipath -v3 again, the multipath disks show up.

  • Initialize the multipath disks and they can be used to deploy ceph (with my commits applied)
sudo sgdisk --zap-all -- /dev/mapper/mpatha
sudo /sbin/parted  /dev/mapper/mpatha  -s  -- mklabel  gpt  mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS_FOO1  1 -1

PS: Both multipath disks and bcache disks use the 'p' + number suffix for their partitions, which the kolla code does not handle; kolla/docker/kolla-toolbox/find_disks.py also has no dedicated logic for discovering multipath disks.
