Manual Ceph Installation
Pre-installation preparation:
Note: due to limited resources, the lab environment below starts as a single-node cluster. Each node needs the following (a minimal prep sketch follows this list):
DNS: name resolution for every node
Two NICs: eth0 (public) and eth1 (private/cluster)
Hostname set.
Time service: chronyd.
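A minimal sketch of this host preparation, assuming a CentOS 7 node named ceph01 with the 10.3.149.71 address used later in this article (names and addresses are placeholders; adjust to your environment):
# name resolution: either DNS or an /etc/hosts entry per node (example address only)
echo "10.3.149.71 ceph01" >> /etc/hosts
# set the hostname
hostnamectl set-hostname ceph01
# time synchronization via chronyd
yum install chrony -y
systemctl enable chronyd
systemctl start chronyd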
Prepare the yum repositories:
Official documentation for manual Ceph installation:
https://docs.ceph.com/docs/master/install/ https://docs.ceph.com/docs/nautilus/install/
Official documentation for configuring the yum repositories:
https://docs.ceph.com/docs/master/install/get-packages/ https://docs.ceph.com/docs/nautilus/install/get-packages/
Switch the yum repos to the Aliyun mirror. The Aliyun Ceph mirror is at https://mirrors.aliyun.com/ceph; change each baseurl to the matching directory. The result is as follows:
cat > /etc/yum.repos.d/ceph.repo <<EOF
[ceph]
name=Ceph packages for \$basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
enabled=1
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
enabled=0
priority=2
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
EOF
Configure the EPEL yum repo (installing Ceph requires packages from EPEL):
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Install some dependency packages:
yum install snappy leveldb gdisk python-argparse gperftools-libs -y
Install Ceph:
yum install ceph -y
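A quick sanity check, not part of the original steps, to confirm the packages came from the Nautilus repo configured above:
# should report a 14.2.x (Nautilus) build
ceph --version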
Create the Ceph cluster: first build a single-node cluster (mon, mgr, and optionally mds), then create an OSD; once the single node is working, add further mons, mgrs, etc. one by one.
Create the MON: just follow the official documentation for this part.
https://docs.ceph.com/docs/master/install/manual-deployment/ https://docs.ceph.com/docs/nautilus/install/manual-deployment/
The steps are as follows; see the official documentation for details.
Create the configuration file:
cat > /etc/ceph/ceph.conf <<EOF
[global]
fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993
mon initial members = ceph01
mon host = 10.3.149.71
public network = 10.3.149.0/24
cluster network = 192.168.1.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
osd pool default size = 3
osd pool default min size = 2
EOF
Note: the fsid is a unique ID generated for your cluster, for example:
~]$ uuidgen
a7f64266-0894-4f1e-a635-d0aeaca0e993
Initialize the mon:
sudo ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
sudo ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
sudo ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
sudo ceph-authtool /tmp/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
sudo chown ceph:ceph /tmp/ceph.mon.keyring
# replace the IP and fsid in the following commands with your own values
monmaptool --create --add `hostname -s` 192.168.0.1 --fsid a7f64266-0894-4f1e-a635-d0aeaca0e993 /tmp/monmap
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-`hostname -s`
sudo -u ceph ceph-mon --mkfs -i `hostname -s` --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
Enable msgr2: this is new since the Nautilus release (the command talks to the monitor, so run it once the mon started below is up):
ceph mon enable-msgr2
Enable at boot and start:
systemctl enable ceph-mon@`hostname -s`.service
systemctl start ceph-mon@`hostname -s`.service
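A quick check, added here rather than taken from the original notes, that the monitor is up and in quorum:
# the mon map should list this host, and ceph -s should show 1 mon in quorum
ceph mon stat
ceph -s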
Create the MGR:
Create the directory:
sudo -u ceph mkdir /var/lib/ceph/mgr/ceph-`hostname -s`
cd /var/lib/ceph/mgr/ceph-`hostname -s`
Create the keyring:
ceph auth get-or-create mgr.`hostname -s` mon 'allow profile mgr' osd 'allow *' mds 'allow *' > keyring   # e.g. mgr.ceph03; replace with your node name
Enable at boot and start:
systemctl enable ceph-mgr@`hostname -s`.service
systemctl start ceph-mgr@`hostname -s`.service
Create the OSD: the command output shows the creation process, the osd id, and related details.
]# ceph-volume lvm create --bluestore --data /dev/vdb
Check the osd process:
]# ps -ef | grep ceph
The OSD process has already been started; make sure it is enabled at boot:
systemctl enable ceph-osd@0.service
systemctl start ceph-osd@0.service
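To confirm the new OSD joined the cluster, an extra check not in the original notes:
# osd.0 should show as up and in
ceph osd tree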
Add an MDS:
~]# mkdir -p /var/lib/ceph/mds/ceph-`hostname -s`
~]# chown ceph.ceph /var/lib/ceph/mds/ceph-`hostname -s`
~]# chown -R ceph.ceph /var/lib/ceph/mds/ceph-`hostname -s`
~]# ceph-authtool --create-keyring /var/lib/ceph/mds/ceph-`hostname -s`/keyring --gen-key -n mds.`hostname -s`
creating /var/lib/ceph/mds/ceph-`hostname -s`/keyring
Create the auth key:
sudo -u ceph ceph auth add mds.`hostname -s` osd "allow rwx" mds "allow" mon "allow profile mds" -i /var/lib/ceph/mds/ceph-`hostname -s`/keyring
added key for mds.`hostname -s`
Enable at boot and start:
systemctl enable ceph-mds@`hostname -s`.service
systemctl start ceph-mds@`hostname -s`.service
Check the mds:
]# ps -ef | grep ceph
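The MDS can also be checked through the cluster itself (an extra check, not in the original; with no filesystem created yet it should report standby, matching the ceph -s output below):
ceph mds stat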
Check the cluster status:
]# ceph -s
  cluster:
    id:     1e8ac1a4-02c6-45cf-b57f-1e72713d0368
    health: HEALTH_WARN
            OSD count 1 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum ceph03 (age 7m)
    mgr: ceph03(active, since 2h)
    mds: 1 up:standby
    osd: 1 osds: 1 up (since 111m), 1 in (since 111m)
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   1.0 GiB used, 7.0 GiB / 8.0 GiB avail
    pgs:
Add a MON:
Official documentation:
https://docs.ceph.com/docs/master/rados/operations/add-or-rm-mons/ https://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-mons/
Copy the configuration file and admin keyring to the new node, and add the new mon node to the config file, e.g. mon host = 192.168.0.20, 192.168.0.21, 192.168.0.22.
scp ceph.conf ceph.client.admin.keyring root@ceph03:/etc/ceph/
Create the directory:
sudo -u ceph mkdir /var/lib/ceph/mon/ceph-`hostname -s`
Get the mon keyring:
ceph auth get mon. -o /tmp/keyring
Get the monmap:
ceph mon getmap -o /tmp/monmap
Initialize the mon with the monmap and keyring:
ceph-mon --cluster ceph --mkfs -i `hostname -s` --monmap /tmp/monmap --keyring /tmp/keyring
Fix the owner and group:
chown -R ceph.ceph /var/lib/ceph/mon/ceph-`hostname -s`/
Start the mon:
systemctl enable ceph-mon@`hostname -s`.service
systemctl start ceph-mon@`hostname -s`.service
Check the service:
ceph -s
Continue adding OSDs: one OSD is not enough, so keep adding more.
https://docs.ceph.com/docs/master/rados/operations/add-or-rm-osds/ https://docs.ceph.com/docs/nautilus/rados/operations/add-or-rm-osds/
Official documentation for the ceph-volume command:
https://docs.ceph.com/docs/master/man/8/ceph-volume/ https://docs.ceph.com/docs/nautilus/man/8/ceph-volume/
To add an OSD to the cluster from another node, first copy /var/lib/ceph/bootstrap-osd/ceph.keyring to the same location on the new node.
scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@ceph02:/var/lib/ceph/bootstrap-osd/
Fix the owner and group:
chown -R ceph.ceph /var/lib/ceph/bootstrap-osd/
Prepare:
~]# ceph-volume lvm prepare --bluestore --data /dev/vdb
Note: after prepare has run, the newly generated osd id can be seen with ceph auth list, and the osd fsid can be read from /var/lib/ceph/osd/ceph-1/fsid.
ceph-volume lvm activate --bluestore <osd id> <osd fsid>
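For example, the values for the activate command can be read back like this (a convenience sketch; ceph-volume lvm list also prints both the osd id and the osd fsid):
# the directory name carries the osd id, the fsid file carries the osd fsid
cat /var/lib/ceph/osd/ceph-1/fsid
ceph-volume lvm list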
Activate:
~]# ceph-volume lvm activate --bluestore 1 313fa1d5-ae86-43ed-b85d-2f0ccab57ae9
The two steps can also be combined into one; introducing new OSDs into the cluster gradually avoids rebalancing large amounts of data.
ceph-volume lvm create --bluestore --data /dev/vdc
Check the cluster:
ceph -s
Modify the crushmap: the CRUSH failure domain defaults to host and needs to be changed to osd here, otherwise the cluster will show 100.000% pgs not active after a pool is created.
ceph osd getcrushmap -o ./crushmap
crushtool -d ./crushmap -o ./crushmap.txt
sed -i 's/step chooseleaf firstn 0 type host/step chooseleaf firstn 0 type osd/' ./crushmap.txt
crushtool -c ./crushmap.txt -o ./crushmap-new
ceph osd setcrushmap -i ./crushmap-new
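To verify the change took effect, an extra check not in the original notes (file names here are arbitrary):
ceph osd getcrushmap -o ./crushmap-check
crushtool -d ./crushmap-check -o ./crushmap-check.txt
grep chooseleaf ./crushmap-check.txt    # should now read "type osd"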
Create the RGW:
Install radosgw:
yum install ceph-radosgw -y
Create the storage pools:
ceph osd pool create .rgw.root 32 32
ceph osd pool create default.rgw.control 32 32
ceph osd pool create default.rgw.meta 32 32
ceph osd pool create default.rgw.log 32 32
Create the keyring and auth entries (be sure to substitute your host name):
ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
chown ceph:ceph /etc/ceph/ceph.client.radosgw.keyring
ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.rgw.`hostname -s` --gen-key
ceph-authtool -n client.rgw.`hostname -s` --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.rgw.`hostname -s` -i /etc/ceph/ceph.client.radosgw.keyring
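To double-check that the capabilities were registered, an extra verification step not in the original:
# should list caps mon "allow rwx" and osd "allow rwx" for this client
ceph auth get client.rgw.`hostname -s`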
Modify the configuration file (add the following section to /etc/ceph/ceph.conf):
[client.rgw.ceph01]
host = ceph01
keyring = /etc/ceph/ceph.client.radosgw.keyring
log file = /var/log/radosgw/client.radosgw.gateway.log
rgw_frontends = civetweb port=8080
Create the corresponding directory:
mkdir /var/log/radosgw
chown ceph:ceph /var/log/radosgw
Start the service:
systemctl enable ceph-radosgw@rgw.`hostname -s`
systemctl start ceph-radosgw@rgw.`hostname -s`
systemctl restart ceph-radosgw@rgw.`hostname -s`
Access the service:
curl 10.3.149.71:8080
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>
Check which pools have no application enabled: if the cluster reports application not enabled on 4 pool(s), run:
ceph health detail
Enable the application on the pool:
ceph osd pool application enable .rgw.root rgw
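If the other default.rgw pools created earlier are still flagged, enable them the same way (a sketch using the pool names from the creation step above):
ceph osd pool application enable default.rgw.control rgw
ceph osd pool application enable default.rgw.meta rgw
ceph osd pool application enable default.rgw.log rgw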
RGW multi-site setup: www.scriptjc.com/article/1174 .