An Introduction to Ceph

Ceph is one of the hottest software-defined storage (SDS) technologies and has shaken up the entire storage industry. It is an open-source project that provides a unified software-defined solution for block, file, and object storage. Ceph aims to be a distributed storage system that is massively scalable, high-performing, and free of any single point of failure. From the very beginning, Ceph was designed to run on commodity hardware and to scale enormously (approaching, or even exceeding, the exabyte range).

Reference: Ceph Cookbook
I. Environment Preparation
1. Install three Ubuntu 14.04 virtual machines in VMware (node1, node2, and node3), and install the openssh-server package on each.
node1 will act as the monitor node; node2 and node3 will act as OSD nodes, each with an appropriately sized disk attached.
2. Enable two network interfaces on each VM: eth0 in host-only mode (a host-only network is isolated from the outside network, which makes the cluster more robust inside VMs) and eth1 in bridged mode. The host-only network needs static IP addresses; the bridged network can be configured however you like.
root@node1:~# vim /etc/network/interfaces
auto eth1
iface eth1 inet static
    address 10.10.11.101
    netmask 255.255.0.0
    gateway 10.10.1.1
    dns-nameservers 10.10.1.1 8.8.8.8
auto eth0
iface eth0 inet static
    address 192.168.107.21
    # netmask was missing in the original; /24 is assumed here, adjust to your host-only network
    netmask 255.255.255.0
3. On all three machines, replace the apt sources with the Aliyun mirror, which downloads much faster from inside China.
root@node1:~# cat /etc/apt/sources.list
deb http://mirrors.aliyun.com/ubuntu/ trusty main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ trusty-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ trusty-backports main restricted universe multiverse
root@node1:~# apt-get update
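Rather than editing sources.list by hand on all three nodes, the same switch can be scripted. The helper name below is hypothetical, and the sed pattern is a sketch that assumes the stock file points at an *.archive.ubuntu.com mirror:

```shell
# switch_mirror: rewrite Ubuntu archive URLs in the given file to the
# Aliyun mirror, keeping a .bak backup of the original.
# (Hypothetical helper; a scripted version of the manual edit above.)
switch_mirror() {
    sed -i.bak 's|http://[a-z.]*archive\.ubuntu\.com|http://mirrors.aliyun.com|g' "$1"
}

# On a real node (as root): switch_mirror /etc/apt/sources.list && apt-get update
```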
II. Deployment (running everything as root is recommended; the examples below use root)
1. Set the hostnames of the three nodes to node1, node2, and node3 respectively (hostnamectl set-hostname <name>; note that Ubuntu 14.04 does not ship systemd, so there you edit /etc/hostname and run hostname <name> instead). Then edit /etc/hosts on every machine to add static name-to-IP mappings, so the nodes can reach each other by name.
root@node1:~/ceph-cluster# cat /etc/hosts
127.0.0.1       localhost
127.0.1.1       ubuntu14
# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.107.21  node1
192.168.107.22  node2
192.168.107.23  node3
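The hosts-file step has to be repeated on every node, so it is worth making idempotent. The helper below is a sketch (the function name is invented; the IPs are the ones used in this guide) that only appends an entry when it is not already present:

```shell
# add_hosts: append this cluster's name-to-IP entries to a hosts file,
# skipping any entry that already exists, so the function is safe to re-run.
# The file defaults to /etc/hosts; pass another path for testing.
add_hosts() {
    hosts_file=${1:-/etc/hosts}
    for entry in "192.168.107.21 node1" "192.168.107.22 node2" "192.168.107.23 node3"; do
        grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"
    done
}

# On a real node (as root): add_hosts
```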
2. Install ceph-deploy (on the admin node only):
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://mirrors.163.com/ceph/debian-jewel $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
apt-get update
apt-get install ceph-deploy
4. Set up passwordless SSH from the ceph-deploy admin node to every Ceph node. Generate a key pair, leaving the passphrase empty:
ssh-keygen
Copy the public key to each node, including the OSD nodes:
ssh-copy-id root@node1
ssh-copy-id root@node2
ssh-copy-id root@node3
Test: you should now be able to log in to node2 with a plain "ssh node2", with no password prompt.
5. Edit the ~/.ssh/config file on the ceph-deploy admin node so that ssh connects to each node as the user you created (root here) without you having to name it on every command. The file may not exist yet; create it if needed, with contents like this:
root@node1:~/ceph-cluster# cat ~/.ssh/config
Host node1
    User root
Host node2
    User root
Host node3
    User root
6. Create a directory of your own for the cluster configuration (any name will do):
mkdir ceph-cluster
cd ceph-cluster
7. Create a new cluster:
ceph-deploy new node1
8. Edit the generated ceph.conf configuration file (vim ceph.conf) and add:
osd pool default size = 3        # replica count; should not exceed the number of OSDs (with only two OSDs here, 2 is the safer choice)
osd pool default min size = 1    # minimum number of replicas required to accept writes
osd pool default pg num = 333    # default pg_num for new pools
osd pool default pgp num = 333   # default pgp_num for new pools
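Where do numbers like 333 come from? A common rule of thumb targets roughly 100 placement groups per OSD: total PGs ≈ (OSD count × 100) / replica count, then rounded up to a power of two (333 is exactly 10 × 100 / 3 before rounding). The helper below is a hypothetical sketch of that guideline:

```shell
# pg_count: estimate pg_num from OSD count and replica size using the
# rule of thumb (osds * 100 / replicas), rounded up to a power of two.
# Hypothetical helper; the 100-PGs-per-OSD target is the usual guideline.
pg_count() {
    osds=$1
    replicas=$2
    target=$(( osds * 100 / replicas ))
    pg=1
    while [ "$pg" -lt "$target" ]; do pg=$(( pg * 2 )); done
    echo "$pg"
}

pg_count 2 2   # two OSDs, two replicas -> 128
```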
9. Install Ceph on all of the nodes:
ceph-deploy install node1 node2 node3
Note: this step can fail with all sorts of obscure errors. If you cannot resolve them, you can instead install Ceph on each node by hand and then continue with the next step:
echo deb http://mirrors.163.com/ceph/debian-jewel $(lsb_release -sc) main | tee /etc/apt/sources.list.d/ceph.list
apt-get update
apt-get install ceph
10. Create the monitor daemon on node1 and initialize it:
ceph-deploy mon create-initial
11. Make the generated keyrings readable:
chmod 777 ceph.client.admin.keyring
chmod 777 /etc/ceph/ceph.client.admin.keyring
12. List the disks available on each host:
ceph-deploy disk list node1 node2 node3
13. Zap the data disks on the OSD nodes (formatting them as xfs) and create the OSD daemons:
ceph-deploy disk zap node2:/dev/sdb
ceph-deploy osd create node2:/dev/sdb
ceph-deploy disk zap node3:/dev/sdb
ceph-deploy osd create node3:/dev/sdb
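With more OSD nodes, the zap/create pairs above become repetitive; they can be wrapped in a small loop. This is a sketch (the function name is invented), and the deploy command is passed in as arguments so the loop can be dry-run before touching real disks:

```shell
# deploy_osds: zap and create an OSD on each storage node in one pass.
# The deploy command is given as arguments so the loop can be dry-run
# (e.g. with `echo`) before running it with the real ceph-deploy.
deploy_osds() {
    for node in node2 node3; do
        "$@" disk zap "$node:/dev/sdb"
        "$@" osd create "$node:/dev/sdb"
    done
}

# Dry run:  deploy_osds echo
# Real run (from the ceph-cluster directory): deploy_osds ceph-deploy
```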
14. Push the configuration file and admin key to all nodes, and make the keyring readable:
ceph-deploy admin node1 node2 node3
chmod +r /etc/ceph/ceph.client.admin.keyring
15. Check the state of the Ceph installation. If the cluster reports health ok, the deployment is complete; if it is not healthy, adjust the relevant settings according to the specific problem it reports:
ceph -s
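If you want to script the "is it healthy yet?" check before snapshotting, a minimal sketch is shown below. The status command is passed in as arguments purely so the predicate can be exercised without a live cluster; the function name is invented:

```shell
# check_health: succeed only when the given status command reports HEALTH_OK.
# On a real node you would run: check_health ceph health
check_health() {
    "$@" 2>/dev/null | grep -q HEALTH_OK
}

# Gate the snapshot on cluster health:
# check_health ceph health && echo "safe to snapshot"
```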
16. If everything looks good, take a VM snapshot at this point so you can roll back to a known-good state later.