Quick Ceph Deployment (CentOS 7 + Jewel)


Introduction to Ceph

Ceph is a unified storage system that supports three interfaces (a short command-line illustration follows this list):

Object: a native API, also compatible with the Swift and S3 APIs

Block: supports thin provisioning, snapshots, and cloning

File: a POSIX interface, with snapshot support
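
As a quick illustration of these interfaces, once the cluster built below is running, the object and block APIs can be exercised from the command line. The object name and image name used here are made up for the example:

# Object interface: store and read back an object via RADOS (using the default rbd pool)
echo "hello ceph" > /tmp/hello.txt
rados -p rbd put hello-object /tmp/hello.txt
rados -p rbd get hello-object /tmp/hello.out
# Block interface: create a 1 GB thin-provisioned image and take a snapshot of it
rbd create test-image --size 1024
rbd snap create rbd/test-image@snap1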

Ceph is also a distributed storage system, characterized by:

High scalability: built from ordinary x86 servers, scales from 10 to 1000 servers and from TB to PB of data.

High reliability: no single point of failure, multiple data replicas, automatic management and self-healing.

High performance: balanced data distribution and a high degree of parallelism. Object storage and block storage need no metadata server.


Ceph architecture

Components

(Figure: Ceph architecture diagram)


At the bottom of Ceph sits RADOS, which stands for "A reliable, autonomous, distributed object store". RADOS consists of two components (a quick way to inspect both is shown after this list):

OSD: Object Storage Device, provides the storage resources.

Monitor: maintains the global state of the whole Ceph cluster.
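
A minimal sketch for inspecting both components, run from any node that holds the admin keyring after the deployment below is finished:

ceph osd tree                              # every OSD, its host, and whether it is up/in
ceph mon stat                              # the monitor map and which mons are in quorum
ceph quorum_status --format json-pretty    # detailed quorum and election state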


Environment: three hosts running CentOS 7, each with three data disks (for virtual machines, the disks should be larger than 100 GB).

[root@mon-1 cluster]# cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[root@mon-1 cluster]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0               2:0    1    4K  0 disk
sda               8:0    0   20G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 19.5G  0 part
  ├─centos-root 253:0    0 18.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0  200G  0 disk
sdc               8:32   0  200G  0 disk
sdd               8:48   0  200G  0 disk
sr0              11:0    1  603M  0 rom


Set the hostnames and add hosts entries

vim /etc/hostname    # CentOS 7 and 6.5 set the hostname differently; this is the CentOS 7 way
mon-1
[root@osd-1 ceph-osd-1]# cat /etc/hosts
192.168.50.123 mon-1
192.168.50.124 osd-1
192.168.50.125 osd-2

Configure passwordless SSH login

[root@localhost ~]# ssh-keygen -t rsa -P ''
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
62:b0:4c:aa:e5:37:92:89:4d:db:c3:38:e2:f1:2a:d6 root@admin-node
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|                 |
|o                |
|+o               |
|+ooS             |
|BB..             |
|+.@*             |
|oooEo            |
|oo..             |
+-----------------+
ssh-copy-id mon-1
ssh-copy-id osd-1
ssh-copy-id osd-2
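
A quick sanity check (not required by ceph-deploy itself) that passwordless SSH from the deployment node really works for all three hosts defined in /etc/hosts:

for host in mon-1 osd-1 osd-2; do
    ssh "$host" hostname    # should print the remote hostname without asking for a password
done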


2. Cluster configuration

Host     IP               Role
mon-1    192.168.50.123   deployment node, mon, osd
osd-1    192.168.50.124   mon, osd
osd-2    192.168.50.125   mon, osd

3. Environment cleanup

If a previous deployment failed, there is no need to remove the Ceph packages or rebuild the virtual machines. Just run the following commands on every node and the environment will be reset to the state it was in right after the Ceph packages were installed. It is strongly recommended to clean the environment thoroughly before building on top of an old cluster, otherwise all sorts of odd problems will occur.

ps aux | grep ceph | awk '{print $2}' | xargs kill -9
ps -ef | grep ceph
# Make sure all ceph processes are gone at this point!!! If some are still running, run the kill again.
umount /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/mds/*
rm -rf /var/lib/ceph/bootstrap-mds/*
rm -rf /var/lib/ceph/bootstrap-osd/*
rm -rf /var/lib/ceph/bootstrap-mon/*
rm -rf /var/lib/ceph/tmp/*
rm -rf /etc/ceph/*
rm -rf /var/run/ceph/*
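
If you prefer to run the cleanup from the deployment node instead of logging in to every machine, the same commands can be wrapped in a loop over the passwordless SSH set up earlier. This is only a convenience sketch; the per-node commands above are what matters:

for host in mon-1 osd-1 osd-2; do
    ssh "$host" 'pkill -9 ceph;
                 umount /var/lib/ceph/osd/* 2>/dev/null;
                 rm -rf /var/lib/ceph/osd/* /var/lib/ceph/mon/* /var/lib/ceph/mds/* \
                        /var/lib/ceph/bootstrap-*/* /var/lib/ceph/tmp/* \
                        /etc/ceph/* /var/run/ceph/*'
done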


Installation and deployment procedure


Yum repositories and Ceph installation

Run the following commands on every host:

yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7.2.1511/g' /etc/yum.repos.d/CentOS-Base.repo

Add the Ceph repository

vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=ceph noarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0

Install the Ceph packages

yum -y install ceph ceph-radosgw ntp ntpdate
# Disable SELinux and firewalld
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # then reboot the machine
systemctl stop firewalld
systemctl disable firewalld
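
If you do not want to reboot right away, SELinux can also be switched to permissive mode for the current boot; the sed change above still makes the setting permanent:

setenforce 0    # takes effect immediately; /etc/selinux/config covers the next boot
getenforce      # should now report Permissive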

Synchronize the time on every node:

ntpdate -s time-a.nist.gov
echo ntpdate -s time-a.nist.gov >> /etc/rc.d/rc.local
echo "01 01 * * * /usr/sbin/ntpdate -s time-a.nist.gov >> /dev/null 2>&1" >> /etc/crontab

Start the deployment

Install ceph-deploy on the deployment node (mon-1); in the rest of this article, "deployment node" always refers to mon-1:

[root@mon-1 ~]# yum -y install ceph-deploy
[root@mon-1 ~]# ceph-deploy --version
1.5.36
[root@mon-1 ~]# ceph -v
ceph version 10.2.3 (ecc23778eb545d8dd55e2e4735b53cc93f92e65b)

Create a deployment directory on the deployment node

[root@mon-1 ~]# mkdir cluster
[root@mon-1 ~]# cd cluster
[root@mon-1 cluster]# ceph-deploy new mon-1 osd-1 osd-2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy new mon-1 osd-1 osd-2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username          : None
[ceph_deploy.cli][INFO  ]  func              : <function new at 0x1803230>
[ceph_deploy.cli][INFO  ]  verbose           : False
[ceph_deploy.cli][INFO  ]  overwrite_conf    : False
[ceph_deploy.cli][INFO  ]  quiet             : False
[ceph_deploy.cli][INFO  ]  cd_conf           : <ceph_deploy.conf.cephdeploy.Conf instance at 0x186d440>
[ceph_deploy.cli][INFO  ]  cluster           : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey       : True
[ceph_deploy.cli][INFO  ]  mon               : ['mon-1', 'osd-1', 'osd-2']
[ceph_deploy.cli][INFO  ]  public_network    : None
[ceph_deploy.cli][INFO  ]  ceph_conf         : None
[ceph_deploy.cli][INFO  ]  cluster_network   : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  fsid              : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...    # if you reach this line without errors, this step succeeded

The directory now contains:

[root@mon-1 cluster]# ll
total 12
-rw-r--r-- 1 root root  197 Dec  6 02:24 ceph.conf
-rw-r--r-- 1 root root 2921 Dec  6 02:24 ceph-deploy-ceph.log
-rw------- 1 root root   73 Dec  6 02:24 ceph.mon.keyring

Add public_network to ceph.conf according to your own IP addressing, and slightly increase the allowed clock drift between mons (the default is 0.05 s; here it is raised to 2 s):

[root@mon-1 cluster]# echo public_network=192.168.50.0/24 >> ceph.conf
[root@mon-1 cluster]# echo mon_clock_drift_allowed=2 >> ceph.conf
[root@mon-1 cluster]# cat ceph.conf
[global]
fsid = 0865fe85-1655-4208-bed6-274cae945746
mon_initial_members = mon-1, osd-1, osd-2
mon_host = 192.168.50.123,192.168.50.124,192.168.50.125
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network=192.168.50.0/24
mon_clock_drift_allowed=2

Deploy the monitor nodes

[root@mon-1 cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username          : None
[ceph_deploy.cli][INFO  ]  verbose           : False
[ceph_deploy.cli][INFO  ]  overwrite_conf    : False
[ceph_deploy.cli][INFO  ]  subcommand        : create-initial
[ceph_deploy.cli][INFO  ]  quiet             : False
[ceph_deploy.cli][INFO  ]  cd_conf           : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2023fc8>
[ceph_deploy.cli][INFO  ]  cluster           : ceph
[ceph_deploy.cli][INFO  ]  func              : <function mon at 0x201d140>
[ceph_deploy.cli][INFO  ]  ceph_conf         : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.cli][INFO  ]  keyrings          : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts mon-1 osd-1 osd-2
...... (output omitted)
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO  ] keyring 'ceph.mon.keyring' already exists
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO  ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO  ] Destroy temp directory /tmp/tmpVLiIFr

The directory now contains:

[root@mon-1 cluster]# ll
total 76
-rw------- 1 root root   113 Dec  6 22:49 ceph.bootstrap-mds.keyring
-rw------- 1 root root   113 Dec  6 22:49 ceph.bootstrap-osd.keyring
-rw------- 1 root root   113 Dec  6 22:49 ceph.bootstrap-rgw.keyring
-rw------- 1 root root   129 Dec  6 22:49 ceph.client.admin.keyring
-rw-r--r-- 1 root root   300 Dec  6 22:47 ceph.conf
-rw-r--r-- 1 root root 50531 Dec  6 22:49 ceph-deploy-ceph.log
-rw------- 1 root root    73 Dec  6 22:46 ceph.mon.keyring

Check the cluster status

[root@mon-1 ceph]# ceph -s
    cluster 1b27aaf2-8b29-49b1-b50e-7ccb1f72d1fa
     health HEALTH_ERR
            no osds
     monmap e1: 1 mons at {mon-1=192.168.50.123:6789/0}
            election epoch 3, quorum 0 mon-1
     osdmap e1: 0 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating

Start deploying the OSDs:

[root@mon-1 cluster]# ceph-deploy --overwrite-conf osd prepare mon-1:/dev/sdb mon-1:/dev/sdc mon-1:/dev/sdd osd-1:/dev/sdb osd-1:/dev/sdc osd-1:/dev/sdd osd-2:/dev/sdb osd-2:/dev/sdc osd-2:/dev/sdd --zap-disk
# If deploying the OSDs on osd-2 fails here, remove the packages and directories on that node, re-partition the disks, and then re-run prepare for osd-2; after that the osd-2 OSDs are created successfully:
ceph-deploy --overwrite-conf osd prepare osd-2:/dev/sdb osd-2:/dev/sdc osd-2:/dev/sdd
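
The article does not say exactly how the osd-2 disks were re-partitioned. One straightforward way, assuming nothing valuable is on those disks, is to let ceph-deploy wipe the partition tables before re-running prepare:

ceph-deploy disk zap osd-2:/dev/sdb osd-2:/dev/sdc osd-2:/dev/sdd    # destroys all data on these disks
ceph-deploy --overwrite-conf osd prepare osd-2:/dev/sdb osd-2:/dev/sdc osd-2:/dev/sdd
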
[root@mon-1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0               2:0    1    4K  0 disk
sda               8:0    0   20G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 19.5G  0 part
  ├─centos-root 253:0    0 18.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    2T  0 disk
├─sdb1            8:17   0    2T  0 part /var/lib/ceph/osd/ceph-0
└─sdb2            8:18   0    5G  0 part
sdc               8:32   0    2T  0 disk
├─sdc1            8:33   0    2T  0 part /var/lib/ceph/osd/ceph-1
└─sdc2            8:34   0    5G  0 part
sdd               8:48   0    2T  0 disk
├─sdd1            8:49   0    2T  0 part /var/lib/ceph/osd/ceph-2
└─sdd2            8:50   0    5G  0 part
sr0              11:0    1  603M  0 rom
[root@osd-1 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0               2:0    1    4K  0 disk
sda               8:0    0   20G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 19.5G  0 part
  ├─centos-root 253:0    0 18.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    2T  0 disk
├─sdb1            8:17   0    2T  0 part /var/lib/ceph/osd/ceph-3
└─sdb2            8:18   0    5G  0 part
sdc               8:32   0    2T  0 disk
├─sdc1            8:33   0    2T  0 part /var/lib/ceph/osd/ceph-4
└─sdc2            8:34   0    5G  0 part
sdd               8:48   0    2T  0 disk
├─sdd1            8:49   0    2T  0 part /var/lib/ceph/osd/ceph-5
└─sdd2            8:50   0    5G  0 part
sr0              11:0    1  603M  0 rom
[root@osd-2 ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
fd0               2:0    1    4K  0 disk
sda               8:0    0   20G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 19.5G  0 part
  ├─centos-root 253:0    0 18.5G  0 lvm  /
  └─centos-swap 253:1    0    1G  0 lvm  [SWAP]
sdb               8:16   0    2T  0 disk
├─sdb1            8:17   0    2T  0 part /var/lib/ceph/osd/ceph-6
└─sdb2            8:18   0    5G  0 part
sdc               8:32   0    2T  0 disk
├─sdc1            8:33   0    2T  0 part /var/lib/ceph/osd/ceph-7
└─sdc2            8:34   0    5G  0 part
sdd               8:48   0    2T  0 disk
├─sdd1            8:49   0    2T  0 part /var/lib/ceph/osd/ceph-8
└─sdd2            8:50   0    5G  0 part
sr0              11:0    1  603M  0 rom
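
Before adjusting any pool settings, it is worth confirming on the deployment node that all nine OSDs joined the cluster:

ceph osd tree    # should list osd.0 through osd.8, all up, grouped under mon-1, osd-1 and osd-2
ceph -s          # the osdmap line should read "9 osds: 9 up, 9 in"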


At this point there is a health WARN; to clear it, simply increase the number of PGs in the rbd pool:

[root@mon-1 cluster]# ceph osd pool set rbd pg_num 128
set pool 0 pg_num to 128
[root@mon-1 cluster]# ceph osd pool set rbd pgp_num 128
set pool 0 pgp_num to 128
[root@mon-1 cluster]# ceph -s
    cluster 0865fe85-1655-4208-bed6-274cae945746
     health HEALTH_OK
     monmap e3: 2 mons at {mon-1=192.168.50.123:6789/0,osd-1=192.168.50.124:6789/0}
            election epoch 30, quorum 0,1 mon-1,osd-1
     osdmap e58: 9 osds: 9 up, 9 in
            flags sortbitwise
      pgmap v161: 128 pgs, 0 objects
            310 MB used, 18377 GB / 18378 GB avail
                 128 active+clean
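
To double-check that the new PG values were actually applied to the pool:

ceph osd pool get rbd pg_num     # expected: pg_num: 128
ceph osd pool get rbd pgp_num    # expected: pgp_num: 128
ceph osd dump | grep 'pool 0'    # full definition of the rbd pool (pool 0)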

Push the config to every node

Do not edit /etc/ceph/ceph.conf directly on any individual node. Instead, edit the copy in the deployment directory on the deployment node (here mon-1:/root/cluster/ceph.conf) and push it out. Once the cluster grows to dozens of nodes, editing them one by one is impractical; pushing the config is both fast and safe.

After editing, run the following command to push the conf file to every node:

[root@mon-1 cluster]# ceph-deploy --overwrite-conf config push mon-1 osd-1 osd-2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.36): /usr/bin/ceph-deploy admin mon-1 osd-1 osd-2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username          : None
[ceph_deploy.cli][INFO  ]  verbose           : False
[ceph_deploy.cli][INFO  ]  overwrite_conf    : False
[ceph_deploy.cli][INFO  ]  quiet             : False
[ceph_deploy.cli][INFO  ]  cd_conf           : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1a34e18>
[ceph_deploy.cli][INFO  ]  cluster           : ceph
[ceph_deploy.cli][INFO  ]  client            : ['mon-1', 'osd-2']
[ceph_deploy.cli][INFO  ]  func              : <function admin at 0x1964f50>
[ceph_deploy.cli][INFO  ]  ceph_conf         : None
[ceph_deploy.cli][INFO  ]  default_release   : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to mon-1
[mon-1][DEBUG ] connected to host: mon-1
[mon-1][DEBUG ] detect platform information from remote host
[mon-1][DEBUG ] detect machine type
[mon-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osd-1
[osd-1][DEBUG ] connected to host: osd-1
[osd-1][DEBUG ] detect platform information from remote host
[osd-1][DEBUG ] detect machine type
[osd-1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to osd-2
[osd-2][DEBUG ] connected to host: osd-2
[osd-2][DEBUG ] detect platform information from remote host
[osd-2][DEBUG ] detect machine type
[osd-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf



At this point, the monitor service on each node needs to be restarted.


How to start and stop the mon and osd daemons

# mon-1 is the hostname of the node the monitor runs on
systemctl start ceph-mon@mon-1.service
systemctl restart ceph-mon@mon-1.service
systemctl stop ceph-mon@mon-1.service
# 0 is the id of the OSD on that node; look it up with `ceph osd tree`
systemctl start/stop/restart ceph-osd@0.service
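
On Jewel with systemd, aggregate targets should also be available, which is handy when a node runs several daemons; for example, to restart everything on one host and then check a single OSD:

systemctl restart ceph-mon.target      # all monitor daemons on this host
systemctl restart ceph-osd.target      # all OSD daemons on this host
systemctl restart ceph.target          # every Ceph daemon on this host
systemctl status ceph-osd@0.service    # verify one OSD afterwards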


Errors encountered

1. Monitor clock skew detected

http://www.iyunv.com/thread-157755-1-1.html


[root@mon-1 cluster]# ceph -s
    cluster f25ad2c5-fd2a-4fcc-a522-344eb498fee5
     health HEALTH_ERR
            clock skew detected on mon.osd-2
            64 pgs are stuck inactive for more than 300 seconds
            64 pgs stuck inactive
            no osds
            Monitor clock skew detected

Add the configuration parameters:

vim /etc/ceph/ceph.conf
mon clock drift allowed = 2
mon clock drift warn backoff = 30


Sync the configuration file to the other nodes

ceph-deploy --overwrite-conf admin osd-1 osd-2

Restart the mon service

systemctl restart ceph-mon@osd-2.service

Summary of the problem:

The root cause is a large time offset between the mon node servers. Since this was a test environment, the warning was worked around by raising Ceph's clock-drift threshold. In a production environment, investigate and fix the servers' time synchronization instead.
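
For a production cluster, the proper fix is to resynchronize the clocks rather than widen the threshold; a minimal sketch of that approach, reusing the NTP server configured earlier:

ceph health detail                               # shows which mon is skewed and by how much
ssh osd-2 'ntpdate -s time-a.nist.gov'           # force a one-off sync on the offending node
ssh osd-2 'systemctl restart ceph-mon@osd-2.service'
ceph health detail                               # the clock skew warning should clear shortly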


References:

http://xuxiaopang.com/2016/10/09/ceph-quick-install-el7-jewel/

http://bbs.ceph.org.cn/question/205

http://www.docoreos.com/?p=86

http://www.jb51.cc/blog/xiangpingli/article/p-5979182.html

