Installing and Configuring an RHCS Cluster with luci/ricci on CentOS 6.6




1. Prerequisites for configuring an RHCS cluster:

  1. Time synchronization
  2. Name resolution; handled here by editing /etc/hosts
  3. A working yum repository; the default CentOS 6 repositories are fine
  4. The firewall disabled (or the ports the cluster needs opened) and SELinux disabled
  5. The NetworkManager service stopped and disabled


2. The main packages RHCS needs are cman and rgmanager:

cman: the cluster infrastructure (messaging and membership) layer; on CentOS 6 it depends on corosync.

rgmanager: the cluster resource manager, functionally similar to pacemaker.

luci: provides the web interface for managing an RHCS cluster; luci manages the cluster mainly by communicating with ricci.

ricci: an agent installed on each cluster node that receives management requests from luci.

The relationship between luci and ricci is much like that between ambari-server and ambari-agent.


3. Environment:

luci:  192.168.6.31  cent1.test.com
ricci: 192.168.6.32  cent2.test.com
ricci: 192.168.6.33  cent3.test.com
ricci: 192.168.6.34  cent4.test.com
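
For name resolution, the /etc/hosts entries corresponding to this environment (this is the file the playbook below pushes out from cent1; the short aliases are only a convenience and an assumption on my part) would look like:

192.168.6.31   cent1.test.com   cent1
192.168.6.32   cent2.test.com   cent2
192.168.6.33   cent3.test.com   cent3
192.168.6.34   cent4.test.com   cent4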

I have already set the hostnames, but the other prerequisites such as time synchronization and /etc/hosts have not been done yet, so for convenience I wrote a playbook to initialize everything:

---
- hosts: hdpservers
  remote_user: root
  tasks:
    - name: add sync time cron
      cron: name='synctime' minute='*/5' job='/usr/sbin/ntpdate 192.168.6.31'
    - name: shut down iptables and NetworkManager
      service: name={{ item.name }} state={{ item.state }} enabled={{ item.enabled }}
      with_items:
        - { name: iptables, state: stopped, enabled: no }
        - { name: NetworkManager, state: stopped, enabled: no }
      tags: stopservice
    # the selinux config and hosts file are the control node's (cent1's) own copies
    - name: copy selinux conf file and hosts file
      copy: src={{ item.src }} dest={{ item.dest }} owner={{ item.owner }} group={{ item.group }} mode={{ item.mode }}
      with_items:
        - { src: /etc/selinux/config, dest: /etc/selinux/config, owner: root, group: root, mode: '0644' }
        - { src: /etc/hosts, dest: /etc/hosts, owner: root, group: root, mode: '0644' }
    - name: turn off selinux for the running system
      shell: setenforce 0

Run the playbook to initialize the nodes:

[root@cent1 yaml]# ansible-playbook base.yml
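
The playbook targets a host group named hdpservers, and the ad-hoc commands later use a group named rhcs. A minimal inventory (for example in /etc/ansible/hosts) that would support both might look like the following; the group membership here is an assumption inferred from the commands in this article:

[hdpservers]
cent2.test.com
cent3.test.com
cent4.test.com

[rhcs]
cent2.test.com
cent3.test.com
cent4.test.com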

4. Install luci on cent1. luci is a Python application and pulls in many Python package dependencies:

[root@cent1 ~]# yum install luci

Start luci:

[root@cent3 ~]# /etc/init.d/luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `cent3' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
	(none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Starting saslauthd:                                        [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://cent1.hfln.com:8084 (or equivalent) to access luci

Now you can log in to luci from the browser; note that the URL uses https:


The login credentials are simply a system account and password on the host running luci.


Login successful. Now we can use luci to configure the RHCS cluster. Keep in mind that luci is only the management interface; the actual cluster software has not been installed yet.

5. Install ricci on cent2, cent3 and cent4. ricci also has many dependencies; here ansible is used to install it on all three nodes at once (passwordless SSH from cent1 to the other nodes has already been set up):

[root@cent1 ~]# ansible rhcs -m yum -a "name=ricci"

After ricci is installed, a password must be set for the ricci user on each node (the ricci user is the account the ricci daemon runs as); this password will be needed shortly. Here I just do it the quick and dirty way; the password can also be handled with the ccs command.

[root@cent1 ~]# ansible rhcs -m shell -a "echo '123456' | passwd --stdin ricci"


Start ricci:

[root@cent1 ~]# ansible rhcs -m service -a "name=ricci state=started enabled=yes"
[root@cent2 ~]# ss -tunlp | grep ricci
tcp    LISTEN     0      5      :::11111       :::*       users:(("ricci",3237,3))


ricci listens on port 11111. Operations like these can of course also be written into a playbook; see the sketch below.
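
A minimal sketch of those three ad-hoc steps as playbook tasks (the rhcs group and the plain-text password simply mirror the commands above; in practice you would want to protect the password):

- hosts: rhcs
  remote_user: root
  tasks:
    - name: install ricci
      yum: name=ricci state=present
    - name: set the ricci user password
      shell: echo '123456' | passwd --stdin ricci
    - name: start and enable ricci
      service: name=ricci state=started enabled=yes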


6. Now the cluster can be configured from the web interface: creating/adding/deleting a cluster, and managing nodes, resources, fence devices, service groups, failover domains and so on. The whole cluster lifecycle can be handled here.
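
If you prefer the command line, roughly the same bootstrap can be done with the ccs tool mentioned earlier (it authenticates to ricci with the password set above). This is only a sketch, assuming the ccs package is installed on cent1; the option names are from the RHEL 6 ccs tool and are worth double-checking against ccs(8):

[root@cent1 ~]# yum install ccs
[root@cent1 ~]# ccs -h cent2.test.com -p 123456 --createcluster ha1
[root@cent1 ~]# ccs -h cent2.test.com -p 123456 --addnode cent2.test.com
[root@cent1 ~]# ccs -h cent2.test.com -p 123456 --addnode cent3.test.com
[root@cent1 ~]# ccs -h cent2.test.com -p 123456 --addnode cent4.test.com
[root@cent1 ~]# ccs -h cent2.test.com -p 123456 --sync --activate
[root@cent1 ~]# ccs -h cent2.test.com -p 123456 --startall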



Here we will demonstrate a highly available web service.

Manage Clusters --> Create creates a cluster.



The create-cluster form is fairly straightforward.



After clicking Create Cluster, luci starts installing the cluster software on the nodes.

On any node you can see ricci's worker processes:

[root@cent2 ~]# ps aux | grep ricci
ricci     3453  0.1  0.4  213664  4400 ?      S<s  17:18   0:00 ricci -u ricci
ricci     3489  0.0  0.1   54912  1908 ?      S<s  17:22   0:00 /usr/libexec/ricci/ricci-worker -f /var/lib/ricci/queue/1500004777
root      3490  0.2  0.5   48552  5136 ?      S    17:22   0:00 ricci-modrpm
root      3567  0.0  0.0  103252   880 pts/0  S+   17:24   0:00 grep ricci


The /var/lib/ricci/queue/ directory holds the task files that luci sends to ricci; they are XML documents:

[root@cent2 ~]# file /var/lib/ricci/queue/1500004777
/var/lib/ricci/queue/1500004777: XML document text

7. Installation succeeded


You can click into any node to inspect it.


If the services listed on the node page are not running, you can try starting them by hand (see the sketch below); normally they come up fine.
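
A minimal way to start them manually on one node, or on all nodes at once via the rhcs group, assuming the standard cman/rgmanager init scripts that the cluster installation lays down (cman must come up before rgmanager):

[root@cent2 ~]# service cman start
[root@cent2 ~]# service rgmanager start
[root@cent2 ~]# chkconfig cman on
[root@cent2 ~]# chkconfig rgmanager on

# or from cent1 for every node:
[root@cent1 ~]# ansible rhcs -m service -a "name=cman state=started enabled=yes"
[root@cent1 ~]# ansible rhcs -m service -a "name=rgmanager state=started enabled=yes"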


8. Add resources


There is no fence device here, so we will not worry about fencing. We will add two global resources, then add a service and start it.

Resources --> Add: add a resource.



Add a virtual IP. The netmask must be entered as a prefix length (for example 24), not in dotted form like 255.255.255.0; otherwise the IP cannot be brought up and rgmanager reports errors such as:

rgmanager Starting stopped service service:web1
rgmanager start on ip "192.168.6.100/255.255.255.0" returned 1 (generic error)
rgmanager #68: Failed to start service:web1; return value: 1

Add a script resource:



9. Add a Service

The resources created above are global: if the cluster runs multiple services, they can all use them. You can also add resources private to a single service under Service Groups.

Now add a Service:



Service Groups --> Add: add a Service.



Use Add Resource to attach the two resources created earlier.

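What luci ends up writing into /etc/cluster/cluster.conf for these resources and this service is roughly the following. This is an illustrative sketch based on the names used in this article; attribute details may differ from what your luci version actually generates:

<rm>
  <resources>
    <ip address="192.168.6.100/24" monitor_link="on"/>
    <script file="/etc/init.d/httpd" name="http1"/>
  </resources>
  <service autostart="1" name="web1">
    <ip ref="192.168.6.100/24"/>
    <script ref="http1"/>
  </service>
</rm>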


Now check the cluster state from the command line; this works on any cluster node:

[root@cent3 ~]# clustat
Cluster Status for ha1 @ Sun Jan  8 17:47:40 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 cent2.test.com                        1 Online, rgmanager
 cent3.test.com                        2 Online, Local, rgmanager
 cent4.test.com                        3 Online, rgmanager

 Service Name                  Owner (Last)                  State
 ------- ----                  ----- ------                  -----
 service:web1                  cent2.test.com                started

On cent2, the virtual IP and the httpd service are both up:

[root@cent2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:91:b3:11 brd ff:ff:ff:ff:ff:ff
    inet 192.168.6.32/24 brd 192.168.6.255 scope global eth0
    inet 192.168.6.100/24 scope global secondary eth0
    inet6 fe80::20c:29ff:fe91:b311/64 scope link
       valid_lft forever preferred_lft forever
[root@cent2 ~]# netstat -tunlp | grep 80
tcp        0      0 :::80                 :::*                  LISTEN      34901/httpd


10. Test failover:


The health checking that rgmanager performs on a service can be observed in /var/log/cluster/rgmanager.log:

Jan  8 18:56:59 rgmanager [ip] Checking 192.168.6.100/24, Level 10
Jan  8 18:56:59 rgmanager [ip] 192.168.6.100/24 present on eth0
Jan  8 18:56:59 rgmanager [ip] Link for eth0: Detected
Jan  8 18:56:59 rgmanager [ip] Link detected on eth0
Jan  8 18:56:59 rgmanager [ip] Local ping to 192.168.6.100 succeeded

Here you can see it verifies the address is present on the interface and pings 192.168.6.100; this is how an IP resource is monitored.

Jan  8 18:55:49 rgmanager [script] Executing /etc/rc.d/init.d/httpd status

For a script resource, as shown above, the check is simply running the script with the status argument.

After I ran /etc/init.d/httpd stop, the log showed the following:

Jan  8 18:56:59 rgmanager [script] Executing /etc/rc.d/init.d/httpd status
Jan  8 18:56:59 rgmanager [script] script:http1: status of /etc/rc.d/init.d/httpd Failed (returned 3)
# the status check failed here
Jan  8 18:56:59 rgmanager status on script "http1" returned 1 (generic error)
Jan  8 18:56:59 rgmanager Stopping service service:web1
Jan  8 18:56:59 rgmanager [script] Executing /etc/rc.d/init.d/httpd stop
Jan  8 18:56:59 rgmanager [ip] Removing IPv4 address 192.168.6.100/24 from eth0
# the steps above stopped the web1 service on this node
Jan  8 18:57:09 rgmanager Service service:web1 is recovering
Jan  8 18:57:14 rgmanager Service service:web1 is now running on member 2
# web1 was recovered on member 2, which is cent3.test.com


Check the cluster status after the failover:

[root@cent3 ~]# clustat
Cluster Status for ha1 @ Sun Jan  8 20:25:26 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 cent2.test.com                        1 Online, rgmanager

 Service Name                  Owner (Last)                  State
 ------- ----                  ----- ------                  -----
 service:web1                  cent3.test.com                started


If a script resource does not meet your needs, you can try the apache resource type instead. And even if you find the script resource's status check too simplistic, you can add logic to the script itself to achieve what you want; a sketch follows.
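
For example, here is a minimal sketch of a wrapper init script (a hypothetical /etc/init.d/httpd-check, not part of the setup above) whose status action also verifies that the site actually answers on the virtual IP, so rgmanager's periodic status call catches application-level failures:

#!/bin/bash
# /etc/init.d/httpd-check -- hypothetical wrapper used as the <script> resource.
# start/stop/restart are delegated to the real httpd init script;
# status additionally requires an HTTP response from the virtual IP.

case "$1" in
  status)
    /etc/init.d/httpd status || exit 3
    # fail the check if the VIP does not serve a page within 5 seconds
    curl -s -o /dev/null --max-time 5 http://192.168.6.100/ || exit 1
    ;;
  *)
    exec /etc/init.d/httpd "$@"
    ;;
esac
exit 0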


11. Shut nodes down and watch the Service migrate:

After shutting down cent3, the service moved to cent4:

[root@cent2 ~]# clustat
Cluster Status for ha1 @ Sun Jan  8 20:35:42 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 cent2.test.com                        1 Online, rgmanager
 cent3.test.com                        2 Offline
 cent4.test.com                        3 Online, rgmanager

 Service Name                  Owner (Last)                  State
 ------- ----                  ----- ------                  -----
 service:web1                  cent4.test.com                started

Then cent4 was shut down as well, and the Service moved again, this time to cent2:

[root@cent2 ~]# clustat
Cluster Status for ha1 @ Sun Jan  8 20:36:27 2017
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 cent2.test.com                        1 Online, rgmanager
 cent3.test.com                        2 Offline
 cent4.test.com                        3 Online

 Service Name                  Owner (Last)                  State
 ------- ----                  ----- ------                  -----
 service:web1                  cent2.test.com                started

cent4.test.com still shows Online here because it is in the middle of shutting down and has not gone fully offline yet.

A few seconds later, the following message appeared on the console:

[root@cent2 ~]#
Message from syslogd@cent2 at Jan  8 20:36:42 ...
 rgmanager[5685]: #1: Quorum Dissolved

The log shows:

Jan  8 20:35:01 rgmanager Member 2 shutting down

Jan  8 20:36:18 rgmanager Member 3 shutting down
Jan  8 20:36:18 rgmanager Starting stopped service service:web1
Jan  8 20:36:18 rgmanager [ip] Link for eth0: Detected
Jan  8 20:36:19 rgmanager [ip] Adding IPv4 address 192.168.6.100/24 to eth0
Jan  8 20:36:19 rgmanager [ip] Pinging addr 192.168.6.100 from dev eth0
Jan  8 20:36:21 rgmanager [ip] Sending gratuitous ARP: 192.168.6.100 00:0c:29:91:b3:11 brd ff:ff:ff:ff:ff:ff
Jan  8 20:36:22 rgmanager [script] Executing /etc/rc.d/init.d/httpd start
Jan  8 20:36:22 rgmanager Service service:web1 started
Jan  8 20:36:42 rgmanager #1: Quorum Dissolved

Message from syslogd@cent2 at Jan  8 20:36:42 ...
 rgmanager[5685]: #1: Quorum Dissolved
Jan  8 20:36:42 rgmanager [script] Executing /etc/rc.d/init.d/httpd stop
Jan  8 20:36:42 rgmanager [ip] Removing IPv4 address 192.168.6.100/24 from eth0

The service has stopped because the cluster no longer has enough votes for quorum.

[root@cent2 ~]# clustat
Service states unavailable: Operation requires quorum
Cluster Status for ha1 @ Sun Jan  8 20:37:00 2017
Member Status: Inquorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 cent2.test.com                        1 Online, Local
 cent3.test.com                        2 Offline
 cent4.test.com                        3 Offline
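
A quick sanity check of the arithmetic, assuming the cman default of one vote per node: quorum = floor(total_votes / 2) + 1, so with 3 nodes quorum is 2. Once cent3 and cent4 are down only 1 vote remains, the cluster goes inquorate, and rgmanager stops web1. The vote and quorum figures can be confirmed on a surviving node with:

[root@cent2 ~]# cman_tool status | egrep -i 'votes|quorum'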
