Ceph: Adding and Removing OSDs
Source: Original
Date: 2020-08-16
Author: 脚本小站
Category: Cloud Native
Adding an OSD
Configure name resolution (via /etc/hosts):
vim /etc/hosts
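The hostnames below follow the node names used in this article; the IP addresses are placeholders, so adjust them to your own network. A minimal sketch of the entries each node needs:

# /etc/hosts on every node (example addresses only)
192.168.1.11  ceph01
192.168.1.12  ceph02
192.168.1.13  ceph03
192.168.1.14  ceph04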
Configure chronyd:
yum install chrony -y && \
systemctl start chronyd.service && \
systemctl enable chronyd.service && \
timedatectl set-timezone Asia/Shanghai && \
chronyc -a makestep
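Optionally, verify that the new node is actually keeping time in sync (this uses the standard chronyc client and assumes the default chrony sources):

chronyc sources -v    # the source currently in use is marked with '*'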
Add the EPEL yum repository:
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Add the Ceph yum repository:
cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
EOF
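After writing the repo file, it can help to rebuild the yum cache and confirm the Ceph repositories are visible (standard yum commands):

yum clean all && yum makecache
yum repolist | grep -i ceph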
Install Ceph:
yum install snappy leveldb gdisk python-argparse gperftools-libs ceph -y
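A quick sanity check after installation is to confirm the installed release matches the rest of the cluster (Nautilus in this setup):

ceph --version    # should report a 14.2.x (nautilus) build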
Copy the configuration file and admin keyring to the new node:
scp ceph.conf ceph.client.admin.keyring root@ceph04:/etc/ceph/
Copy the bootstrap-osd keyring:
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph04:/var/lib/ceph/bootstrap-osd/
Fix the owner and group:
chown -R ceph.ceph /var/lib/ceph/bootstrap-osd/
Add the OSDs to the cluster:
ceph-volume lvm create --bluestore --data /dev/vdb
ceph-volume lvm create --bluestore --data /dev/vdc
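To confirm which logical volumes ceph-volume created and which OSD IDs they were assigned, you can list them on the same node:

ceph-volume lvm list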
Check the OSD status:
~]# ceph osd status
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| id |  host  |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | ceph01 | 1029M | 18.9G |    0   |     0   |    0   |     0   | exists,up |
| 1  | ceph01 | 1029M | 18.9G |    0   |     0   |    0   |     0   | exists,up |
| 2  | ceph02 | 1029M | 18.9G |    0   |     0   |    0   |     0   | exists,up |
| 3  | ceph02 | 1029M | 18.9G |    0   |     0   |    0   |     0   | exists,up |
| 4  | ceph03 | 1029M | 18.9G |    0   |     0   |    0   |     0   | exists,up |
| 5  | ceph03 | 1029M | 18.9G |    0   |     0   |    0   |     0   | exists,up |
| 6  | ceph04 | 1029M | 18.9G |    0   |     0   |    0   |     0   | exists,up |
+----+--------+-------+-------+--------+---------+--------+---------+-----------+
Removing an OSD
Official documentation:
http://docs.ceph.org.cn/rados/operations/add-or-rm-osds/
Mark the OSD out of the cluster:
~]# ceph osd out 3
marked out osd.3.
Watch the data migration:
~]# ceph -w
...
  data:
    pools:   7 pools, 112 pgs
    objects: 240 objects, 3.9 MiB
    usage:   33 GiB used, 67 GiB / 100 GiB avail
    pgs:     21.429% pgs not active
             14/720 objects degraded (1.944%)
             82 active+clean
             21 peering
             5  active+recovery_wait+degraded
             3  remapped+peering
             1  active+recovering             # states changing as data moves

  io:
    recovery: 0 B/s, 0 objects/s
...
2020-08-15 08:44:46.258524 mon.ceph01 [INF] Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/720 objects degraded (0.556%), 3 pgs degraded)
2020-08-15 08:44:51.444009 mon.ceph01 [WRN] Health check update: 70/720 objects misplaced (9.722%) (OBJECT_MISPLACED)
After the data migration finishes, check the cluster status again:
~]# ceph -s
  cluster:
    id:     ad554251-33fd-4dd9-89a0-45a42e8958c2
    health: HEALTH_WARN
            clock skew detected on mon.ceph02, mon.ceph03

  services:
    mon: 3 daemons, quorum ceph01,ceph02,ceph03
    mgr: ceph03(active)
    osd: 6 osds: 6 up, 5 in
    rgw: 3 daemons active

  data:
    pools:   7 pools, 112 pgs
    objects: 240 objects, 3.9 MiB
    usage:   34 GiB used, 86 GiB / 120 GiB avail
    pgs:     112 active+clean                 # PGs back to a normal state
Check the OSD status; at this point osd.3 shows as down:
~]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME       STATUS REWEIGHT PRI-AFF
-1       0.11691 root default
-3       0.03897     host ceph01
 0   hdd 0.01949         osd.0       up  1.00000 1.00000
 3   hdd 0.01949         osd.3     down        0 1.00000
-5       0.03897     host ceph02
 1   hdd 0.01949         osd.1       up  1.00000 1.00000
 4   hdd 0.01949         osd.4       up  1.00000 1.00000
-7       0.03897     host ceph03
 2   hdd 0.01949         osd.2       up  1.00000 1.00000
 5   hdd 0.01949         osd.5       up  1.00000 1.00000
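Note that marking an OSD out does not stop its daemon; the down state above implies the ceph-osd process has already been stopped on its host. If it is still running, stop it before removing the OSD from the CRUSH map (run on the node that hosts osd.3):

systemctl stop ceph-osd@3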
Remove osd.3 from the CRUSH map (alternatively, this can be done by decompiling and editing the CRUSH map):
~]# ceph osd crush remove osd.3
removed item id 3 name 'osd.3' from crush map
Looking at the CRUSH map again, the devices section shows that osd.3 has been removed:
~]# ceph osd getcrushmap -o /tmp/mycrushmap.bin
14
[root@ceph01 ~]# crushtool -d /tmp/mycrushmap.bin -o ./mycrushmap.txt
[root@ceph01 ~]# cat mycrushmap.txt
...
# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 4 osd.4 class hdd
device 5 osd.5 class hdd
...
Delete the OSD's authentication key; after this, ceph auth ls no longer lists a key for osd.3:
~]# ceph auth del osd.3
updated
Remove the OSD:
~]# ceph osd rm 3
removed osd.3
Now osd.3 no longer appears in the OSD list:
~]# ceph osd ls
0
1
2
4
5
Update the ceph.conf file: remove any osd.3 entries (see the hypothetical fragment below), then push the configuration to the other nodes.
~]# ceph-deploy --overwrite-conf admin ceph01 ceph02 ceph03
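For reference, if your ceph.conf carries a per-daemon section for the removed OSD, the block to delete would look something like this hypothetical fragment (the host value is only an example):

# hypothetical ceph.conf fragment to delete before pushing the config
[osd.3]
    host = ceph01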
Unmount the disk:
~]# umount /var/lib/ceph/osd/ceph-3
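If the freed disk will be reused, ceph-volume can wipe the leftover LVM metadata. This is destructive, and the device name below is only an assumption; double-check which device backed osd.3 before running it:

ceph-volume lvm zap /dev/vdb --destroy    # erases the old OSD's LVM data on that device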