Deploying a Ceph Cluster with cephadm


1. Introduction to cephadm

Starting with Red Hat Ceph 5, cephadm replaces the earlier ceph-ansible as the tool that manages the entire cluster lifecycle: deployment, management, and monitoring.

The cephadm bootstrap process creates a small storage cluster on a single node (the bootstrap node), consisting of one Ceph Monitor and one Ceph Manager plus any required dependencies.


cephadm can log in to a container registry to pull Ceph images and use them to deploy services on the Ceph nodes. The Ceph container image is required for deploying a Ceph cluster, because the deployed Ceph containers are based on that image.
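
If the nodes cannot reach quay.io reliably, the image that bootstrap pulls can be fetched ahead of time. A minimal sketch, assuming docker is already installed and using the quay.io/ceph/ceph:v16 tag that appears in the bootstrap output later in this article:

docker pull quay.io/ceph/ceph:v16    # pre-pull the Ceph Pacific image used by bootstrap
docker images | grep ceph            # confirm the image is available locally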

cephadm uses SSH to communicate with the cluster nodes. Over these SSH connections, cephadm can add hosts to the cluster, add storage, and monitor those hosts.

The packages a node needs in order to bring the cluster up are cephadm, podman or docker, python3, and chrony. This containerized approach reduces the complexity of, and the dependencies required for, deploying a Ceph cluster.

1. python3

yum -y install python3

2. podman or docker to run containers

# Install docker-ce from the Aliyun mirror
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker --now

# Configure a registry mirror (image accelerator)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bp1bh1ga.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
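
An optional quick check that docker is running and the registry mirror was picked up (the "Registry Mirrors" field only appears if the daemon.json above was applied):

systemctl is-active docker
docker info | grep -A1 "Registry Mirrors"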

3. Time synchronization (e.g., chrony or NTP)
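
A minimal sketch for this step, assuming chrony (the default on Rocky Linux 8) with the distribution's default server pool; verify the sources afterwards:

yum -y install chrony
systemctl enable chronyd --now
chronyc sources -v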


2. Preparation before deploying the Ceph cluster

2.1 Node preparation


Node name  OS                       IP address  Ceph roles                    Disks
node1      Rocky Linux release 8.6  172.24.1.6  mon, mgr, server, admin node  /dev/vdb, /dev/vdc, /dev/vdd
node2      Rocky Linux release 8.6  172.24.1.7  mon, mgr                      /dev/vdb, /dev/vdc, /dev/vdd
node3      Rocky Linux release 8.6  172.24.1.8  mon, mgr                      /dev/vdb, /dev/vdc, /dev/vdd
node4      Rocky Linux release 8.6  172.24.1.9  client, admin node            -


2.2 Edit /etc/hosts on every node

172.24.1.6 node1
172.24.1.7 node2
172.24.1.8 node3
172.24.1.9 node4


2.3 Set up passwordless SSH login from node1

[root@node1 ~]# ssh-keygen
[root@node1 ~]# ssh-copy-id root@node2
[root@node1 ~]# ssh-copy-id root@node3
[root@node1 ~]# ssh-copy-id root@node4
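
An optional sanity check that passwordless login works before continuing (hostnames follow the table in 2.1):

[root@node1 ~]# for n in node2 node3 node4; do ssh root@$n hostname; done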

3. Install cephadm on node1

1. Install the EPEL repository
[root@node1 ~]# yum -y install epel-release

2. Install the Ceph repository
[root@node1 ~]# yum search release-ceph
Last metadata expiration check: 0:57:14 ago on Tue 14 Feb 2023 14:22:00.
================= Name Matched: release-ceph ============================================
centos-release-ceph-nautilus.noarch : Ceph Nautilus packages from the CentOS Storage SIG repository
centos-release-ceph-octopus.noarch : Ceph Octopus packages from the CentOS Storage SIG repository
centos-release-ceph-pacific.noarch : Ceph Pacific packages from the CentOS Storage SIG repository
centos-release-ceph-quincy.noarch : Ceph Quincy packages from the CentOS Storage SIG repository
[root@node1 ~]# yum -y install centos-release-ceph-pacific.noarch

3. Install cephadm
[root@node1 ~]# yum -y install cephadm

4. Install ceph-common
[root@node1 ~]# yum -y install ceph-common
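
Before bootstrapping, it can be worth confirming which cephadm version was installed (the exact output depends on the package version):

[root@node1 ~]# cephadm version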

4. Install docker-ce and python3 on the other nodes

See section 1 for the detailed steps.
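
Because passwordless SSH from node1 was set up in 2.3, one way to repeat those steps is a simple loop from node1. This is only a sketch for the python3 part; the docker-ce commands from section 1 can be pushed out the same way, or run on each node by hand:

[root@node1 ~]# for n in node2 node3 node4; do ssh root@$n "yum -y install python3"; done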


5. Deploy the Ceph cluster

5.1 Bootstrap the Ceph cluster, installing the dashboard (graphical management UI) along the way

[root@node1 ~]# cephadm bootstrap --mon-ip 172.24.1.6 --allow-fqdn-hostname --initial-dashboard-user admin --initial-dashboard-password redhat --dashboard-password-noupdate
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 0b565668-ace4-11ed-960c-5254000de7a0
Verifying IP 172.24.1.6 port 3300 ...
Verifying IP 172.24.1.6 port 6789 ...
Mon IP `172.24.1.6` is in CIDR network `172.24.1.0/24`
- internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v16...
Ceph version: ceph version 16.2.11 (3cf40e2dca667f68c6ce3ff5cd94f01e711af894) pacific (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 172.24.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host node1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://node1.domain1.example.com:8443/
            User: admin
        Password: redhat
Enabling client.admin keyring and conf on hosts with "admin" label
You can access the Ceph CLI with:
        sudo /usr/sbin/cephadm shell --fsid 0b565668-ace4-11ed-960c-5254000de7a0 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/pacific/mgr/telemetry/
Bootstrap complete.
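
At this point the cluster exists with node1 as its only host, so it can already be inspected from node1. A couple of optional checks (the output will differ from the final state shown in 5.8):

[root@node1 ~]# ceph -s
[root@node1 ~]# ceph orch ps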

5.2 Copy the cluster public key to the nodes that will become cluster members

[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node3
[root@node1 ~]# ssh-copy-id -f -i /etc/ceph/ceph.pub root@node4

5.3 Add nodes node2, node3, and node4 (docker-ce and python3 must already be installed on each node)

[root@node1 ~]# ceph orch host add node2 172.24.1.7
Added host 'node2' with addr '172.24.1.7'
[root@node1 ~]# ceph orch host add node3 172.24.1.8
Added host 'node3' with addr '172.24.1.8'
[root@node1 ~]# ceph orch host add node4 172.24.1.9
Added host 'node4' with addr '172.24.1.9'


5.4 Apply the admin label to node1 and node4, and copy the Ceph config file and keyring to node4

[root@node1 ~]# ceph orch host label add node1 _admin
Added label _admin to host node1
[root@node1 ~]# ceph orch host label add node4 _admin
Added label _admin to host node4
[root@node1 ~]# scp /etc/ceph/{*.conf,*.keyring} root@node4:/etc/ceph
[root@node1 ~]# ceph orch host ls
HOST   ADDR        LABELS  STATUS
node1  172.24.1.6  _admin
node2  172.24.1.7
node3  172.24.1.8
node4  172.24.1.9  _admin

5.5 Add mon daemons

[root@node1 ~]# ceph orch apply mon "node1,node2,node3"
Scheduled mon update...
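
The mon placement can be verified once the daemons have been deployed (it may take a short while for all three to appear):

[root@node1 ~]# ceph orch ls mon
[root@node1 ~]# ceph mon stat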

5.6 Add mgr daemons

[root@node1 ~]# ceph orch apply mgr --placement="node1,node2,node3"
Scheduled mgr update...
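
The mgr placement can be checked the same way; the active/standby split also shows up in ceph -s:

[root@node1 ~]# ceph orch ls mgr
[root@node1 ~]# ceph -s | grep mgr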

5.7 Add OSDs

[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node1:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node2:/dev/vdd
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdb
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdc
[root@node1 ~]# ceph orch daemon add osd node3:/dev/vdd

Or, equivalently:

[root@node1 ~]# for i in node1 node2 node3; do for j in vdb vdc vdd; do ceph orch daemon add osd $i:/dev/$j; done; done
Created osd(s) 0 on host 'node1'
Created osd(s) 1 on host 'node1'
Created osd(s) 2 on host 'node1'
Created osd(s) 3 on host 'node2'
Created osd(s) 4 on host 'node2'
Created osd(s) 5 on host 'node2'
Created osd(s) 6 on host 'node3'
Created osd(s) 7 on host 'node3'
Created osd(s) 8 on host 'node3'
[root@node1 ~]# ceph orch device ls
HOST   PATH      TYPE  DEVICE ID  SIZE   AVAILABLE  REFRESHED  REJECT REASONS
node1  /dev/vdb  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdc  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node1  /dev/vdd  hdd              10.7G             4m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdb  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdc  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node2  /dev/vdd  hdd              10.7G             3m ago     Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdb  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdc  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
node3  /dev/vdd  hdd              10.7G             90s ago    Insufficient space (<10 extents) on vgs, LVM detected, locked
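
With the OSDs created, the CRUSH layout and raw capacity can also be reviewed; the exact output depends on your disk sizes and hostnames:

[root@node1 ~]# ceph osd tree
[root@node1 ~]# ceph df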

5.8 At this point, the Ceph cluster deployment is complete!

[root@node1 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean


5.9 Manage Ceph from node4

# The Ceph config file and keyring were already copied to node4 in section 5.4
[root@node4 ~]# ceph -s
-bash: ceph: command not found   # ceph-common needs to be installed

# Install the Ceph repository
[root@node4 ~]# yum -y install centos-release-ceph-pacific.noarch
# Install ceph-common
[root@node4 ~]# yum -y install ceph-common

[root@node4 ~]# ceph -s
  cluster:
    id:     0b565668-ace4-11ed-960c-5254000de7a0
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node1,node2,node3 (age 7m)
    mgr: node1.cxtokn(active, since 14m), standbys: node2.heebcb, node3.fsrlxu
    osd: 9 osds: 9 up (since 59s), 9 in (since 81s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   53 MiB used, 90 GiB / 90 GiB avail
    pgs:     1 active+clean


Source: https://blog.51cto.com/zengyi/6059434


