Deploying Ceph with ceph-ansible
【Abstract】 Deploying Ceph with ceph-ansible
Ever since we started planning the move to Kubernetes, persistent storage for containers had been an open question. After some research I settled on Ceph. ceph-deploy does get a cluster up, but deploying both the local and the production environments by hand... manual deployment was simply not going to happen!
Environment
| hostname | ip | role | os |
| --- | --- | --- | --- |
| ceph-deploy01 | 192.168.10.1 | admin/deploy | centos_7.4_x64 / 3.10.0-862.3.3.el7.x86_64 |
| ceph-master01 | 192.168.10.2 | mon/osd/mgr | centos_7.4_x64 / 3.10.0-862.3.3.el7.x86_64 |
| ceph-master02 | 192.168.10.3 | mon/osd/mgr | centos_7.4_x64 / 3.10.0-862.3.3.el7.x86_64 |
| ceph-master03 | 192.168.10.4 | mon/osd/mgr | centos_7.4_x64 / 3.10.0-862.3.3.el7.x86_64 |
It is strongly recommended to upgrade the kernel to 4.x; otherwise all kinds of problems tend to surface. Don't make unnecessary trouble for yourself.
Upgrade the kernel
# Import the public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# Install the ELRepo repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# Load the elrepo-kernel metadata
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
# List the available kernel packages
yum --disablerepo=\* --enablerepo=elrepo-kernel list 'kernel*'
# Install the latest mainline kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64
# Inspect the boot menu entries
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (3.10.0-862.3.3.el7.x86_64) 7 (Core)
CentOS Linux (3.10.0-514.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-cacfb77f99dc43f5a7d9b51bbedf922d) 7 (Core)
# Boot entries are numbered from 0, and a newly installed kernel is inserted at the top of the list (pushing the old kernel down to 1), so select 0.
grub2-set-default 0
reboot
uname -a
# Remove the old kernel and its tools
rpm -qa | grep kernel | grep 3.10
rpm -qa | grep kernel | grep 3.10 | xargs yum remove -y
# Install the tools package matching the new kernel
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64
rpm -qa | grep kernel
I chose the ml (mainline) kernel here; if you prefer something more conservative, use the lt (long-term) branch instead.
Install ansible
pip install ansible==2.4.2
Follow the official documentation's version requirement strictly; an ansible version that is either too new or too old will produce all sorts of errors. See the ceph-ansible official installation documentation.
Passwordless SSH setup (omitted)
See the earlier ceph-deploy post for details.
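Since the passwordless-login steps are omitted here, this is a minimal sketch of the standard procedure, run from the deploy node against the hosts in the table above. The key is generated into a temporary directory purely so the sketch is safe to run as-is; in practice use ~/.ssh/id_rsa and execute the printed ssh-copy-id commands for real.

```shell
# Sketch of passwordless SSH setup (hostnames from the environment table).
# A temp dir is used so this dry run cannot clobber an existing ~/.ssh key.
keydir=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N '' -q -f "$keydir/id_rsa"
for host in 192.168.10.2 192.168.10.3 192.168.10.4; do
    # Printed instead of executed here; run these against your real hosts.
    echo "ssh-copy-id -i $keydir/id_rsa.pub root@$host"
done
```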
Installation
Download the project
wget -c https://github.com/ceph/ceph-ansible/archive/v3.1.7.tar.gz
tar xf v3.1.7.tar.gz
cd ceph-ansible-3.1.7
Inventory (hosts)
vim hosts
### ceph
[mons]
192.168.10.2
192.168.10.3
192.168.10.4
[osds]
192.168.10.2
192.168.10.3
192.168.10.4
[mgrs]
192.168.10.2
192.168.10.3
192.168.10.4
[mdss]
192.168.10.2
192.168.10.3
192.168.10.4
[clients]
192.168.10.1
192.168.10.2
192.168.10.3
192.168.10.4
192.168.10.5
192.168.10.6
Start the installation
cp group_vars/all.yml.sample group_vars/all.yml
cp group_vars/osds.yml.sample group_vars/osds.yml
cp site.yml.sample site.yml
vim group_vars/all.yml
ceph_origin: repository
ceph_repository: community
ceph_mirror: http://mirrors.aliyun.com/ceph
ceph_stable_key: http://mirrors.aliyun.com/ceph/keys/release.asc
ceph_stable_release: luminous
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"
fsid: 54d55c64-d458-4208-9592-36ce881cbcb7 ## generated with uuidgen
generate_fsid: false
cephx: true
public_network: 192.168.10.0/24
cluster_network: 192.168.10.0/24
monitor_interface: eth0
ceph_conf_overrides:
  global:
    rbd_default_features: 7
    auth cluster required: cephx
    auth service required: cephx
    auth client required: cephx
    osd journal size: 2048
    osd pool default size: 3
    osd pool default min size: 1
    mon_pg_warn_max_per_osd: 1024
    osd pool default pg num: 128
    osd pool default pgp num: 128
    max open files: 131072
    osd_deep_scrub_randomize_ratio: 0.01
  mgr:
    mgr modules: dashboard
  mon:
    mon_allow_pool_delete: true
  client:
    rbd_cache: true
    rbd_cache_size: 335544320
    rbd_cache_max_dirty: 134217728
    rbd_cache_max_dirty_age: 10
  osd:
    osd mkfs type: xfs
    osd mount options xfs: "rw,noexec,nodev,noatime,nodiratime,nobarrier"
    ms_bind_port_max: 7100
    osd_client_message_size_cap: 2147483648
    osd_crush_update_on_start: true
    osd_deep_scrub_stride: 131072
    osd_disk_threads: 4
    osd_map_cache_bl_size: 128
    osd_max_object_name_len: 256
    osd_max_object_namespace_len: 64
    osd_max_write_size: 1024
    osd_op_threads: 8
    osd_recovery_op_priority: 1
    osd_recovery_max_active: 1
    osd_recovery_max_single_start: 1
    osd_recovery_max_chunk: 1048576
    osd_recovery_threads: 1
    osd_max_backfills: 4
    osd_scrub_begin_hour: 23
    osd_scrub_end_hour: 7
#    bluestore block create: true
#    bluestore block db size: 73014444032
#    bluestore block db create: true
#    bluestore block wal size: 107374182400
#    bluestore block wal create: true
vim group_vars/osds.yml
devices:
  - /dev/vdc
  - /dev/vdd
  - /dev/vde
osd_scenario: collocated
osd_objectstore: bluestore
#osd_scenario: non-collocated
#osd_objectstore: bluestore
#devices:
#  - /dev/sdc
#  - /dev/sdd
#  - /dev/sde
#dedicated_devices:
#  - /dev/sdf
#  - /dev/sdf
#  - /dev/sdf
#bluestore_wal_devices:
#  - /dev/sdg
#  - /dev/sdg
#  - /dev/sdg
#
#monitor_address: 192.168.10.125
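Before filling in the devices list, it is worth confirming which raw block devices each OSD node actually has, since names like /dev/vdc vary between environments. One way to check:

```shell
# List whole disks (no partitions) so the devices entries in osds.yml match reality.
lsblk -d -o NAME,SIZE,TYPE
```

Any disk listed here that already carries data will be wiped when it is given to an OSD, so double-check before committing the list.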
# Comment out the components you don't need
vim site.yml
---
# Defines deployment design and assigns role to server groups
- hosts:
  - mons
#  - agents
  - osds
  - mdss
#  - rgws
#  - nfss
#  - restapis
#  - rbdmirrors
  - clients
  - mgrs
#  - iscsigws
#  - iscsi-gws # for backward compatibility only!
ansible-playbook -i hosts site.yml
At this point the Ceph deployment is complete; log in to a Ceph node and check the cluster status.
Tearing down the cluster
cp infrastructure-playbooks/purge-cluster.yml purge-cluster.yml # must be copied into the project root
ansible-playbook -i hosts purge-cluster.yml
ceph-ansible greatly improves efficiency and sharply reduces the mistakes that come with manual operations; a real labor-saving tool.
[Notice] This content comes from a blogger in the Huawei Cloud developer community and does not represent the views or positions of Huawei Cloud or the community. When reposting, you must credit the source (Huawei Cloud community) with the article link and author; otherwise the author and the community reserve the right to pursue the matter. If you find suspected plagiarism in this community, please report it with supporting evidence to cloudbbs@huaweicloud.com; confirmed infringing content will be removed immediately.