We have been running Redis 2.8.4 in production and it has been very stable, but the data volume keeps growing, so I want to upgrade to the 3.x series, which supports clustering.
These are my notes on the installation.
Download and install
cd /soft   (my usual directory for sources)
wget http://download.redis.io/releases/redis-3.2.9.tar.gz
tar xzf redis-3.2.9.tar.gz
cd redis-3.2.9
make && make install
Dependencies
yum install automake autoconf ruby rubygems -y
Create six instances
Instance directories
mkdir -p /usr/local/cluster
mkdir -p /usr/local/cluster/6379
mkdir -p /usr/local/cluster/6380
mkdir -p /usr/local/cluster/6381
mkdir -p /usr/local/cluster/6382
mkdir -p /usr/local/cluster/6383
mkdir -p /usr/local/cluster/6384
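The six directory commands above can be collapsed into a single line with bash brace expansion (same result):

```shell
# Equivalent one-liner: create the base directory and all six port
# subdirectories in one mkdir call (bash brace expansion).
mkdir -p /usr/local/cluster/{6379..6384}
```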
Modify the configuration files
cp /soft/redis-3.2.9/redis.conf /usr/local/cluster/6379/
Open redis.conf and edit it (note: the port must be different in each directory).
Change every occurrence of the port (the copied file defaults to 6379, so you can search for 6379 and replace), then set:
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
Repeat the same edits in every other directory, changing only the port.
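As a sketch of the per-directory edits, the settings listed above can be stamped out for every port in one loop. Note this is an assumption: it writes only the options the post changes, whereas in practice you would apply them on top of the full stock redis.conf.

```shell
# Generate the cluster-related settings for each instance directory.
# NOTE: a minimal sketch -- the real redis.conf carries many more
# defaults; only the options named in the post are written here.
for port in 6379 6380 6381 6382 6383 6384; do
  mkdir -p /usr/local/cluster/$port
  cat > /usr/local/cluster/$port/redis.conf <<EOF
port $port
daemonize yes
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000
appendonly yes
EOF
done
```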
Start the instances
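The post jumps straight to the resulting process list, so the exact start commands are my assumption; starting each instance from inside its own directory (so each node's nodes.conf and appendonly file land there) would look like:

```shell
# Sketch: start one redis-server per instance directory, guarding against
# a missing config or binary so the loop never aborts halfway.
started=0
for port in 6379 6380 6381 6382 6383 6384; do
  if [ -f /usr/local/cluster/$port/redis.conf ] && command -v redis-server >/dev/null 2>&1; then
    (cd /usr/local/cluster/$port && redis-server ./redis.conf) && started=$((started+1))
  else
    echo "skipping port $port (missing redis.conf or redis-server binary)"
  fi
done
echo "started $started instance(s)"
```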
root  7607  1  0 21:32 ?  00:00:02 redis-server 127.0.0.1:6379 [cluster]
root  7723  1  0 21:33 ?  00:00:02 redis-server 127.0.0.1:6380 [cluster]
root  7748  1  0 21:33 ?  00:00:02 redis-server 127.0.0.1:6381 [cluster]
root  7775  1  0 21:33 ?  00:00:02 redis-server 127.0.0.1:6382 [cluster]
root  7786  1  0 21:33 ?  00:00:02 redis-server 127.0.0.1:6383 [cluster]
root  7847  1  0 21:33 ?  00:00:02 redis-server 127.0.0.1:6384 [cluster]
Configure the cluster
The first attempt to run redis-trib.rb failed because the Ruby redis gem was missing:
/usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require': no such file to load -- redis (LoadError)
    from /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
    from ./redis-trib.rb:25
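The LoadError means redis-trib.rb cannot find the Ruby `redis` client gem. The post does not show the fix or the create invocation, so the following is an assumption — but the usual sequence, with `--replicas 1` matching the 3-master/3-slave allocation shown, is:

```shell
# Install the ruby redis client that redis-trib.rb requires, then create
# the cluster with one replica per master. The path to redis-trib.rb
# assumes the source tree was unpacked under /soft as above.
gem install redis
cd /soft/redis-3.2.9/src
./redis-trib.rb create --replicas 1 \
  127.0.0.1:6379 127.0.0.1:6380 127.0.0.1:6381 \
  127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384
```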
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
127.0.0.1:6379
127.0.0.1:6380
127.0.0.1:6381
Adding replica 127.0.0.1:6382 to 127.0.0.1:6379
Adding replica 127.0.0.1:6383 to 127.0.0.1:6380
Adding replica 127.0.0.1:6384 to 127.0.0.1:6381
M: c274b5d0921e59a50838afd5d2975f1ead774aa0 127.0.0.1:6379
   slots:0-5460 (5461 slots) master
M: 515b30be975005a102837fd75298ad4ec87cd2e7 127.0.0.1:6380
   slots:5461-10922 (5462 slots) master
M: 72bbf9cf89ae3eb64230b0b60d720c46400983fb 127.0.0.1:6381
   slots:10923-16383 (5461 slots) master
S: 1d8a5458a13b4a13b3c21f73981a463bc7dd8652 127.0.0.1:6382
   replicates c274b5d0921e59a50838afd5d2975f1ead774aa0
S: f28ecdd64376fd41cd78ec8d1ccf0b3ede2352c1 127.0.0.1:6383
   replicates 515b30be975005a102837fd75298ad4ec87cd2e7
S: df431bd756cf57b4711ae3dd64d569e7958e4acc 127.0.0.1:6384
   replicates 72bbf9cf89ae3eb64230b0b60d720c46400983fb
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: c274b5d0921e59a50838afd5d2975f1ead774aa0 127.0.0.1:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: f28ecdd64376fd41cd78ec8d1ccf0b3ede2352c1 127.0.0.1:6383
   slots: (0 slots) slave
   replicates 515b30be975005a102837fd75298ad4ec87cd2e7
S: df431bd756cf57b4711ae3dd64d569e7958e4acc 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 72bbf9cf89ae3eb64230b0b60d720c46400983fb
M: 72bbf9cf89ae3eb64230b0b60d720c46400983fb 127.0.0.1:6381
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 515b30be975005a102837fd75298ad4ec87cd2e7 127.0.0.1:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 1d8a5458a13b4a13b3c21f73981a463bc7dd8652 127.0.0.1:6382
   slots: (0 slots) slave
   replicates c274b5d0921e59a50838afd5d2975f1ead774aa0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
The CLUSTER INFO output now reports a healthy cluster:
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:552
cluster_stats_messages_received:552
And CLUSTER NODES confirms the three master/replica pairs:
f28ecdd64376fd41cd78ec8d1ccf0b3ede2352c1 127.0.0.1:6383 slave 515b30be975005a102837fd75298ad4ec87cd2e7 0 1497362977647 5 connected
df431bd756cf57b4711ae3dd64d569e7958e4acc 127.0.0.1:6384 slave 72bbf9cf89ae3eb64230b0b60d720c46400983fb 0 1497362978648 6 connected
72bbf9cf89ae3eb64230b0b60d720c46400983fb 127.0.0.1:6381 master - 0 1497362978147 3 connected 10923-16383
515b30be975005a102837fd75298ad4ec87cd2e7 127.0.0.1:6380 master - 0 1497362978147 2 connected 5461-10922
1d8a5458a13b4a13b3c21f73981a463bc7dd8652 127.0.0.1:6382 slave c274b5d0921e59a50838afd5d2975f1ead774aa0 0 1497362977146 4 connected
c274b5d0921e59a50838afd5d2975f1ead774aa0 127.0.0.1:6379 myself,master - 0 0 1 connected 0-5460
Done
A quick smoke test:
[root@src]# redis-cli -c set test001 hahaha
OK
[root@src]# redis-cli -c get test001
"hahaha"