一: Introduction to RBD
A block is a sequence of bytes (for example, a 512-byte block of data). Block-based storage interfaces are the most common way to store data on rotating media such as hard disks, CDs, floppy disks, and even traditional 9-track tape. The ubiquity of the block device interface makes a virtual block device an ideal candidate for interacting with a mass data storage system such as Ceph.
Ceph block devices are thin-provisioned, resizable, and store their data striped across multiple OSDs in the Ceph cluster. Ceph block devices leverage RADOS capabilities such as snapshots, replication, and consistency. Ceph's RADOS Block Device (RBD) interacts with the OSDs through either a kernel module or the librbd library.
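Striping means an image is chopped into fixed-size RADOS objects. A minimal sketch of the offset-to-object arithmetic, assuming the default 4 MiB object size (the actual size is set by the image's `--order` at creation time, so this is an assumption, not the post's configuration):

```shell
#!/bin/sh
# Map a byte offset inside an RBD image to its RADOS object index,
# assuming the default object size of 4 MiB (order 22).
OBJ_SIZE=$((4 * 1024 * 1024))

obj_index() {
    # $1 = byte offset into the image
    echo $(( $1 / OBJ_SIZE ))
}

obj_index 0            # first object -> 0
obj_index 4194304      # offset 4 MiB -> object 1
obj_index 2147483647   # last byte of a 2 GiB image -> object 511
```

So a 2 GiB image such as the one created below spans at most 512 objects, and thanks to thin provisioning only the objects that have actually been written exist in the pool.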
Ceph block devices deliver high performance and virtually unlimited scalability to kernel clients, to KVMs such as QEMU, and to cloud-based computing systems such as OpenStack and CloudStack. You can use the same cluster to operate the Ceph RADOS Gateway, the Ceph File System, and Ceph block devices at the same time.
二: Creating and Using a Block Device
Create a pool and a block device image
- [[email protected] ~]# ceph osd pool create block 6
- pool 'block' created
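The `6` above is the pool's placement-group (PG) count. A common rule of thumb (my addition, not from the original post) is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two; a hypothetical sketch of that arithmetic:

```shell
#!/bin/sh
# Rough PG-count guideline: (OSDs * 100) / replicas, rounded up to a
# power of two. The cluster sizes below are hypothetical examples.
pg_count() {
    osds=$1; replicas=$2
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ $pg -lt $target ]; do pg=$(( pg * 2 )); done
    echo $pg
}

pg_count 3 3    # 3 OSDs, 3 replicas -> target 100 -> 128
pg_count 9 3    # 9 OSDs, 3 replicas -> target 300 -> 512
```

A tiny test pool like the one here gets away with 6 PGs; size it properly for production.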
Create a user for the client, and scp the keyring file to the client
- [[email protected] ~]# ceph auth get-or-create client.rbd mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=block'| tee ./ceph.client.rbd.keyring
- [client.rbd]
- key = AQA04PpdtJpbGxAAd+lCJFQnDfRlWL5cFUShoQ==
- [[email protected] ~]#scp ceph.client.rbd.keyring [email protected]:/etc/ceph
On the client, create a block device image of size 2 GB
- [[email protected] /]# rbd create block/rbd0 --size 2048 --name client.rbd
Map the block device on the client
- [[email protected] /]# rbd map --image block/rbd0 --name client.rbd
- /dev/rbd0
- [[email protected] /]# rbd showmapped --name client.rbd
- id pool image snap device
- 0 block rbd0 - /dev/rbd0
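When a script needs the device path for a particular image, it can be parsed out of `rbd showmapped`. A sketch against the sample output above (the awk field positions assume this five-column format; in practice you would pipe `rbd showmapped --name client.rbd` instead of the here-doc):

```shell
#!/bin/sh
# Extract the device node for image "rbd0" in pool "block" from
# `rbd showmapped` output. The here-doc reproduces the sample above.
showmapped_sample() {
cat <<'EOF'
id pool image snap device
0 block rbd0 - /dev/rbd0
EOF
}

showmapped_sample | awk '$2 == "block" && $3 == "rbd0" { print $5 }'
# prints /dev/rbd0
```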
Note: the following error may occur here
- [[email protected] /]# rbd map --image block/rbd0 --name client.rbd
- rbd: sysfs write failed
- In some cases useful info is found in syslog - try "dmesg | tail".
- rbd: map failed: (2) No such file or directory
There are three ways to fix this; see my post "rbd: sysfs write failed 解决办法": https://blog.51cto.com/11093860/2459916. The usual cause is that the kernel client does not support some of the image's features, which can be disabled with `rbd feature disable block/rbd0 object-map fast-diff deep-flatten --name client.rbd` (the same command the mount script below runs).
Create a filesystem and mount the block device
- [[email protected] /]# fdisk -l /dev/rbd0
- Disk /dev/rbd0: 2147 MB, 2147483648 bytes, 4194304 sectors
- Units = sectors of 1 * 512 = 512 bytes
- Sector size (logical/physical): 512 bytes / 512 bytes
- I/O size (minimum/optimal): 4194304 bytes / 4194304 bytes
- [[email protected] /]# mkfs.xfs /dev/rbd0
- meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
- = sectsz=512 attr=2, projid32bit=1
- = crc=1 finobt=0, sparse=0
- data = bsize=4096 blocks=524288, imaxpct=25
- = sunit=1024 swidth=1024 blks
- naming =version 2 bsize=4096 ascii-ci=0 ftype=1
- log =internal log bsize=4096 blocks=2560, version=2
- = sectsz=512 sunit=8 blks, lazy-count=1
- realtime =none extsz=4096 blocks=0, rtextents=0
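Note that mkfs.xfs chose sunit=1024 and swidth=1024: those values are in filesystem blocks (bsize=4096), which works out to exactly the 4 MiB optimal I/O size that fdisk reported, i.e. the RBD object size, so XFS aligns its allocation to object boundaries. Checking the arithmetic:

```shell
#!/bin/sh
# sunit/swidth are reported in filesystem blocks (bsize=4096 above).
# 1024 blocks * 4096 bytes/block should equal the 4194304-byte optimal
# I/O size fdisk reported for /dev/rbd0.
echo $(( 1024 * 4096 ))    # prints 4194304
```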
- [[email protected] /]# mount /dev/rbd0 /ceph-rbd0
- [[email protected] /]# df -Th /ceph-rbd0
- Filesystem Type Size Used Avail Use% Mounted on
- /dev/rbd0 xfs 2.0G 33M 2.0G 2% /ceph-rbd0
Test writing data
- [[email protected] /]# dd if=/dev/zero of=/ceph-rbd0/file count=100 bs=1M
- 100+0 records in
- 100+0 records out
- 104857600 bytes (105 MB) copied, 0.0674301 s, 1.6 GB/s
- [[email protected] /]# ls -lh /ceph-rbd0/file
- -rw-r--r-- 1 root root 100M Dec 19 10:50 /ceph-rbd0/file
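The rate dd prints can be sanity-checked from the byte count and the elapsed time:

```shell
#!/bin/sh
# Recompute dd's reported rate: 104857600 bytes in 0.0674301 s.
awk 'BEGIN { printf "%.1f GB/s\n", 104857600 / 0.0674301 / 1e9 }'
# prints 1.6 GB/s
```

Keep in mind that without `oflag=direct` this figure mostly measures the page cache, not the Ceph cluster's sustained write throughput.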
Set it up as a system service
- [[email protected] /]#cat /usr/local/bin/rbd-mount
- #!/bin/bash
- # Pool name where block device image is stored
- export poolname=block
- # Disk image name
- export rbdimage0=rbd0
- # Mounted Directory
- export mountpoint0=/ceph-rbd0
- # The mount (m) / unmount (u) action is passed in from the systemd
- # service as the first argument
- if [ "$1" == "m" ]; then
- modprobe rbd
- rbd feature disable $poolname/$rbdimage0 object-map fast-diff deep-flatten
- rbd map $poolname/$rbdimage0 --id rbd --keyring /etc/ceph/ceph.client.rbd.keyring
- mkdir -p $mountpoint0
- mount /dev/rbd/$poolname/$rbdimage0 $mountpoint0
- fi
- if [ "$1" == "u" ]; then
- umount $mountpoint0
- rbd unmap /dev/rbd/$poolname/$rbdimage0
- fi
- [[email protected] ~]# cat /etc/systemd/system/rbd-mount.service
- [Unit]
- Description=RADOS block device mapping for rbd0 in pool block
- Conflicts=shutdown.target
- Wants=network-online.target
- After=NetworkManager-wait-online.service
- [Service]
- Type=oneshot
- RemainAfterExit=yes
- ExecStart=/usr/local/bin/rbd-mount m
- ExecStop=/usr/local/bin/rbd-mount u
- [Install]
- WantedBy=multi-user.target
Mount automatically at boot
- [[email protected] ~]#systemctl daemon-reload
- [[email protected] ~]#systemctl enable rbd-mount.service
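As an alternative to the hand-rolled script above, Ceph ships an `rbdmap` service that maps every image listed in `/etc/ceph/rbdmap` at boot. A sketch of the equivalent configuration using the names from this post (verify that the rbdmap unit exists on your distribution; `noauto,_netdev` keeps the mount from being attempted before the network is up):

```
# /etc/ceph/rbdmap -- one image per line: pool/image  map options
block/rbd0 id=rbd,keyring=/etc/ceph/ceph.client.rbd.keyring

# /etc/fstab
/dev/rbd/block/rbd0 /ceph-rbd0 xfs noauto,_netdev 0 0
```

Then `systemctl enable rbdmap.service` replaces the custom unit file entirely.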
Source: http://www.bubuko.com/infodetail-3339799.html