Ceph Installation (Part 2)
Environment: CentOS Linux release 7.7.1908 (Core)
Passwordless SSH trust among the three hosts must be configured first.
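A minimal sketch of one way to do this, run as root on ceph1 (assumes root SSH login is allowed and the hostnames resolve, e.g. via /etc/hosts):
ssh-keygen -t rsa        # accept the defaults to create the key pair
ssh-copy-id root@ceph1   # copy the public key to every node, including ceph1 itself
ssh-copy-id root@ceph2
ssh-copy-id root@ceph3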
1. Install Ceph on all server and client nodes.
yum -y install ceph
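After the install, the package can be verified on each node:
ceph --version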
2. Additionally install ceph-deploy on the ceph1 node.
yum -y install ceph-deploy
==============
The ceph-deploy command reports an error:
==============
ceph-deploy
Traceback (most recent call last):
File "/usr/bin/ceph-deploy", line 18, in <module>
from ceph_deploy.cli import main
File "/usr/lib/python2.7/site-packages/ceph_deploy/cli.py", line 1, in <module>
import pkg_resources
ImportError: No module named pkg_resources
The missing pkg_resources module is provided by setuptools, so reinstall setuptools from source:
wget --no-check-certificate https://files.pythonhosted.org/packages/b5/96/af1686ea8c1e503f4a81223d4a3410e7587fd52df03083de24161d0df7d4/setuptools-46.1.3.zip
unzip setuptools-46.1.3.zip
cd setuptools-46.1.3
python setup.py install
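With setuptools reinstalled, ceph-deploy should start normally; a quick sanity check:
ceph-deploy --version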
3. Deploy the MON nodes (run on ceph1)
Create the cluster.
mkdir /etc/ceph
cd /etc/ceph
ceph-deploy new ceph1 ceph2 ceph3
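This writes a minimal ceph.conf into the current directory. Its contents typically look like the following (the mon_host IPs shown here are illustrative placeholders, not values captured from this cluster):
cat /etc/ceph/ceph.conf
[global]
fsid = 8e333595-aa34-47b2-9275-2cba6a37c7b4
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.1.11,192.168.1.12,192.168.1.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx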
Initialize the monitors and gather the keys.
ceph-deploy mon create-initial
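On success, the collected key rings land in the working directory and can be listed:
ls -l /etc/ceph/*.keyring
# expect ceph.client.admin.keyring plus the ceph.bootstrap-{mds,mgr,osd,rgw}.keyring files and ceph.mon.keyring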
Copy ceph.client.admin.keyring to every node.
ceph-deploy --overwrite-conf admin ceph1 ceph2 ceph3
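Since the SSH trust is already in place, it is easy to confirm the keyring reached the other nodes:
ssh ceph2 ls -l /etc/ceph/ceph.client.admin.keyring
ssh ceph3 ls -l /etc/ceph/ceph.client.admin.keyring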
Check whether the configuration succeeded.
[root@ceph1 ceph]# ceph -s
  cluster:
    id:     8e333595-aa34-47b2-9275-2cba6a37c7b4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 41s)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
4. Deploy the MGR nodes
ceph-deploy mgr create ceph1 ceph2 ceph3
Check whether the MGR was deployed successfully.
[root@ceph1 ceph]# ceph -s
  cluster:
    id:     8e333595-aa34-47b2-9275-2cba6a37c7b4
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 2m)
    mgr: ceph1(active, since 17s), standbys: ceph2, ceph3
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
5. Deploy the OSD nodes
ceph-deploy osd create --data /dev/sdb ceph1
ceph-deploy osd create --data /dev/sdc ceph1
ceph-deploy osd create --data /dev/sdd ceph1
ceph-deploy osd create --data /dev/sdb ceph2
ceph-deploy osd create --data /dev/sdc ceph2
ceph-deploy osd create --data /dev/sdd ceph2
ceph-deploy osd create --data /dev/sdb ceph3
ceph-deploy osd create --data /dev/sdc ceph3
ceph-deploy osd create --data /dev/sdd ceph3
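Equivalently, the nine commands above can be collapsed into one loop (a sketch assuming every host carries the same three data disks sdb/sdc/sdd):
for host in ceph1 ceph2 ceph3; do
  for dev in sdb sdc sdd; do
    ceph-deploy osd create --data /dev/$dev $host
  done
done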
After the OSDs are created, check that the cluster is healthy, i.e. that all nine OSDs are up.
ceph -s
  cluster:
    id:     95ea3896-7692-4b2a-8e69-1b7a65a14568
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph1,ceph2,ceph3 (age 3m)
    mgr: ceph1(active, since 2m), standbys: ceph2, ceph3
    osd: 9 osds: 9 up (since 6s), 9 in (since 6s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   9.0 GiB used, 135 GiB / 144 GiB avail
    pgs:
[root@ceph1 ceph]# ceph osd tree
ID CLASS WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1       0.14035 root default
-3       0.04678     host ceph1
 0   hdd 0.01559         osd.0      up  1.00000 1.00000
 1   hdd 0.01559         osd.1      up  1.00000 1.00000
 2   hdd 0.01559         osd.2      up  1.00000 1.00000
-5       0.04678     host ceph2
 3   hdd 0.01559         osd.3      up  1.00000 1.00000
 4   hdd 0.01559         osd.4      up  1.00000 1.00000
 5   hdd 0.01559         osd.5      up  1.00000 1.00000
-7       0.04678     host ceph3
 6   hdd 0.01559         osd.6      up  1.00000 1.00000
 7   hdd 0.01559         osd.7      up  1.00000 1.00000
 8   hdd 0.01559         osd.8      up  1.00000 1.00000