While learning Ceph I ran into CRUSH rules. The smallest bucket level a CRUSH rule can choose from is osd, but does that osd mean a real OSD daemon or a single physical disk? To find out, I used a test machine to build Ceph on a single host with a single disk.
Configure the Ceph yum repository. The Aliyun mirror is used here.
- # yum install --nogpgcheck -y epel-release
- # rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
- # vim /etc/yum.repos.d/ceph.repo
- [Ceph]
- name=Ceph packages for $basearch
- baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
- enabled=1
- gpgcheck=1
- type=rpm-md
- gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
- priority=1
- [Ceph-noarch]
- name=Ceph noarch packages
- baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
- enabled=1
- gpgcheck=1
- type=rpm-md
- gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
- priority=1
- [ceph-source]
- name=Ceph source packages
- baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
- enabled=1
- gpgcheck=1
- type=rpm-md
- gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
- priority=1
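Before installing, it does no harm to refresh the yum cache and confirm the new repos are visible:
- # yum clean all
- # yum makecache
- # yum repolist | grep -i ceph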
Install Ceph
- # yum update -y
- # yum install ceph-deploy -y
- # yum install ntp ntpdate ntp-doc openssh-server yum-plugin-priorities -y
- # vim /etc/hosts
- 172.16.10.167 admin-node # the host's IP and hostname; without this entry the connection will fail, or alternatively change mon_initial_members in the Ceph config file to the hostname
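A quick check that the hostname resolves before running ceph-deploy:
- # ping -c 1 admin-node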
- # mkdir my-cluster
- # cd my-cluster
- # ceph-deploy new admin-node
- # vim ceph.conf
- osd pool default size = 3 # keep 3 replicas of each object
- public_network = 172.16.10.0/24 # public network
- cluster_network = 172.16.10.0/24 # cluster network
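After these edits the [global] section of ceph.conf should look roughly like the following. The fsid is generated by ceph-deploy new and is the same uuid that is reused later in the ceph-disk prepare commands; the values shown here are just from this test setup:
- [global]
- fsid = f453a207-a05c-475b-971d-91ff6c1f6f48
- mon_initial_members = admin-node
- mon_host = 172.16.10.167
- auth_cluster_required = cephx
- auth_service_required = cephx
- auth_client_required = cephx
- osd pool default size = 3
- public_network = 172.16.10.0/24
- cluster_network = 172.16.10.0/24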
- # ceph-deploy install admin-node
- # fdisk /dev/sdb # create three partitions of equal size
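After partitioning, lsblk should list sdb1, sdb2 and sdb3:
- # lsblk /dev/sdb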
- # ceph-deploy mon create-initial
- # ceph-deploy admin admin-node
- # chmod +r /etc/ceph/ceph.client.admin.keyring
- # ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb1
- # ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb2
- # ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb3
- The uuid above can be read with ceph -s: it is the string after "cluster" on the first line. It can also be changed in the config file.
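The fsid can also be read directly, for example:
- # ceph fsid
- # grep fsid /etc/ceph/ceph.conf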
- # ceph-disk activate /dev/sdb1
- # ceph-disk activate /dev/sdb2
- # ceph-disk activate /dev/sdb3
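At this point all three OSDs should be up, and ceph osd tree should show them under a single host bucket (root default -> host admin-node -> osd.0/1/2). Note that with the default CRUSH rule, which picks leaves by host, and osd pool default size = 3, the placement groups cannot find three separate hosts on a one-node cluster, so ceph -s will keep reporting undersized/degraded PGs. That is exactly why the CRUSH map is edited in the next steps.
- # ceph osd tree
- # ceph -s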
- # ceph osd getcrushmap -o a.map
- # crushtool -d a.map -o a
- # vim a
- rule replicated_ruleset {
- ruleset 0
- type replicated
- min_size 1
- max_size 10
- step take default
- step chooseleaf firstn 0 type osd # the default is host; change it to osd
- step emit
- }
- # crushtool -c a -o b.map
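Before injecting the new map, crushtool can simulate the placement to confirm that a rule asking for 3 replicas now yields three distinct OSDs (these are the standard crushtool test options):
- # crushtool -i b.map --test --rule 0 --num-rep 3 --min-x 0 --max-x 9 --show-mappings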
- # ceph osd setcrushmap -i b.map
- # ceph osd tree
- # ceph -s
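To double-check that replication now happens at the osd level, write a test object and look at where it maps; the acting set should contain three different OSD ids even though they all sit on the same host and disk. The pool and object names below are just examples (the rbd pool exists by default in jewel):
- # echo hello > /tmp/testfile
- # rados -p rbd put testobj /tmp/testfile
- # ceph osd map rbd testobj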
Setup complete.
From this test we can see that a Ceph cluster can be built with just one disk, and that the osd in a CRUSH rule refers to a real OSD daemon rather than a physical disk.