Installing a Redis 3.0.6 Cluster



(A cluster needs at least three master nodes to work. Here we create six Redis nodes: three masters and three slaves. The IP/port mapping of the nodes is as follows; the later examples address the same six ports through the host's LAN IP, 172.16.5.240.)
127.0.0.1:6379
127.0.0.1:6380
127.0.0.1:6381
127.0.0.1:6382
127.0.0.1:6383
127.0.0.1:6384

 
1. Install Redis
 
wget http://download.redis.io/releases/redis-3.0.6.tar.gz
tar -zxvf redis-3.0.6.tar.gz
mv redis-3.0.6 redis
mv redis /data/setup/
cd /data/setup/redis
make
make install

 
 
2. Install the Ruby environment (needed by redis-trib.rb)
yum install ruby
yum install rubygems
gem install redis

 
 
3. Create the cluster directories
cd /data/setup/redis/
mkdir cluster
cd cluster
mkdir 6379
mkdir 6380
mkdir 6381
mkdir 6382
mkdir 6383
mkdir 6384
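The six `mkdir` calls above can also be collapsed into a single command (`-p` makes it safe to re-run):

```shell
# Create one working directory per instance port.
mkdir -p 6379 6380 6381 6382 6383 6384
```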

 
 
4. Edit the configuration file redis.conf
cp /data/setup/redis/redis.conf /data/setup/redis/cluster/
vi redis.conf
# Modify the following options

daemonize yes
port 6379
pidfile /data/setup/redis/cluster/6379/redis-6379.pid
dbfilename dump-6379.rdb
dir /data/setup/redis/cluster/6379/
logfile /data/setup/redis/cluster/6379/redis-6379.logs
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes

 
# Copy redis.conf into each instance directory
cp redis.conf /data/setup/redis/cluster/6379
cp redis.conf /data/setup/redis/cluster/6380
cp redis.conf /data/setup/redis/cluster/6381
cp redis.conf /data/setup/redis/cluster/6382
cp redis.conf /data/setup/redis/cluster/6383
cp redis.conf /data/setup/redis/cluster/6384
# Note: in each of the 6379/6380/6381/6382/6383/6384 directories, rename the copy to redis-<port>.conf and change the port (and the port-specific pidfile, dbfilename, dir, and logfile paths) to match that directory.
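Copying and editing six config files by hand is error-prone. A minimal sketch that automates it (assuming the template keeps the port 6379 in every port-specific line, so one substitution per file is enough; `make_instance_confs` is a hypothetical helper name, not part of Redis):

```shell
#!/bin/sh
# Copy the template redis.conf into each instance directory as
# redis-<port>.conf, rewriting the template port 6379 to the instance
# port. Since pidfile, dbfilename, dir, and logfile all embed the port,
# a single substitution updates them all.
make_instance_confs() {
    base=$1    # cluster directory, e.g. /data/setup/redis/cluster
    for port in 6379 6380 6381 6382 6383 6384; do
        mkdir -p "$base/$port"
        sed "s/6379/$port/g" "$base/redis.conf" > "$base/$port/redis-$port.conf"
    done
}
```

Run it as `make_instance_confs /data/setup/redis/cluster` after creating the template in step 4.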

 
5. Start each Redis instance
redis-server /data/setup/redis/cluster/6379/redis-6379.conf
redis-server /data/setup/redis/cluster/6380/redis-6380.conf
redis-server /data/setup/redis/cluster/6381/redis-6381.conf
redis-server /data/setup/redis/cluster/6382/redis-6382.conf
redis-server /data/setup/redis/cluster/6383/redis-6383.conf
redis-server /data/setup/redis/cluster/6384/redis-6384.conf
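The six start commands follow one pattern, so a loop can generate them. A sketch (it only prints each command; remove the `echo` to actually launch the instances; `start_cluster_nodes` is a hypothetical helper name):

```shell
#!/bin/sh
# Print the redis-server start command for every instance.
# Remove the `echo` to launch them for real.
start_cluster_nodes() {
    for port in 6379 6380 6381 6382 6383 6384; do
        echo redis-server "/data/setup/redis/cluster/$port/redis-$port.conf"
    done
}
start_cluster_nodes
```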

 
 
6. Create the cluster
# --replicas specifies how many slave nodes to attach to each master node of the Redis Cluster; here each master gets one slave.
 
[root@localhost cluster]# /data/setup/redis/src/redis-trib.rb create --replicas 1 172.16.5.240:6379 172.16.5.240:6380 172.16.5.240:6381 172.16.5.240:6382 172.16.5.240:6383 172.16.5.240:6384

 
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.16.5.240:6379
172.16.5.240:6380
172.16.5.240:6381
Adding replica 172.16.5.240:6382 to 172.16.5.240:6379
Adding replica 172.16.5.240:6383 to 172.16.5.240:6380
Adding replica 172.16.5.240:6384 to 172.16.5.240:6381
M: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots:0-5460 (5461 slots) master
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
S: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   replicates 89d58edb12b775a5be489690b1955990271af896
S: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
S: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 172.16.5.240:6379)
M: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots:0-5460 (5461 slots) master
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
M: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   slots: (0 slots) master
   replicates 89d58edb12b775a5be489690b1955990271af896
M: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   slots: (0 slots) master
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
M: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   slots: (0 slots) master
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 
 
 
 
7. Check the cluster status
[root@localhost cluster]# /data/setup/redis/src/redis-trib.rb check 172.16.5.240:6379

 
>>> Performing Cluster Check (using node 172.16.5.240:6379)
S: 89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379
   slots: (0 slots) slave
   replicates 7d138fe67343c13be4b78fe6a969088b08d48cc0
M: 7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: 4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
S: 22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383
   slots: (0 slots) slave
   replicates a795943a5cba83c8ec8cf81146c9e2e4233d2a97
S: 4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384
   slots: (0 slots) slave
   replicates 4ad16d3551d88a11edef03711a0c451ef38d89f0
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

 
 
 
 
8. Client operations
1. Log in with the client
[root@localhost cluster]# redis-cli -c -p 6379

 
 
2. View node status
 
127.0.0.1:6379> cluster nodes

 
7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382 master - 0 1452915979733 7 connected 0-5460
a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380 master - 0 1452915980741 2 connected 5461-10922
89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379 myself,slave 7d138fe67343c13be4b78fe6a969088b08d48cc0 0 0 1 connected
4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381 master - 0 1452915977717 3 connected 10923-16383
22631d93e34708b570b078733e73e3d6b584890d 172.16.5.240:6383 slave a795943a5cba83c8ec8cf81146c9e2e4233d2a97 0 1452915976709 5 connected
4d98f6f561a85278d57ab20bf9aa036fdf31bb17 172.16.5.240:6384 slave 4ad16d3551d88a11edef03711a0c451ef38d89f0 0 1452915978725 6 connected
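Each line of `cluster nodes` output is whitespace-separated: node id, address, flags, master id (`-` for masters), last ping sent, last pong received, config epoch, link state, and the slot ranges served. A small awk sketch over a subset of the sample output above pulls out the masters and their slot ranges:

```shell
#!/bin/sh
# Extract master addresses and slot ranges from `cluster nodes` output.
# The sample lines are taken from the output shown above.
nodes='7d138fe67343c13be4b78fe6a969088b08d48cc0 172.16.5.240:6382 master - 0 1452915979733 7 connected 0-5460
a795943a5cba83c8ec8cf81146c9e2e4233d2a97 172.16.5.240:6380 master - 0 1452915980741 2 connected 5461-10922
89d58edb12b775a5be489690b1955990271af896 172.16.5.240:6379 myself,slave 7d138fe67343c13be4b78fe6a969088b08d48cc0 0 0 1 connected
4ad16d3551d88a11edef03711a0c451ef38d89f0 172.16.5.240:6381 master - 0 1452915977717 3 connected 10923-16383'
# Field 3 holds the flags, field 2 the address, field 9 the slot range.
masters=$(printf '%s\n' "$nodes" | awk '$3 ~ /(^|,)master/ { print $2, $9 }')
printf '%s\n' "$masters"
```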

 
9. Run Redis as a service
 
cd /etc/init.d/ 

# Create a script for each instance (redis-6379, redis-6380, redis-6381, redis-6382, redis-6383, redis-6384), changing the port in each
 
 
vi redis-6379

   
# Add the following
# chkconfig: 2345 10 90
# description: Start and Stop redis
 
PATH=/usr/local/bin:/sbin:/usr/bin:/bin
 
REDISPORT=6379 # instance port
EXEC=/usr/local/bin/redis-server # path to redis-server
REDIS_CLI=/usr/local/bin/redis-cli # path to redis-cli
 
PIDFILE=/data/setup/redis/cluster/6379/redis-6379.pid
CONF="/data/setup/redis/cluster/6379/redis-6379.conf" # instance config file
 
case "$1" in
        start)
                if [ -f $PIDFILE ]
                then
                        echo "$PIDFILE exists, process is already running or crashed."
                else
                        echo "Starting Redis server..."
                        $EXEC $CONF
                fi
                if [ "$?" = "0" ]
                then
                        echo "Redis is running..."
                fi
                ;;
        stop)
                if [ ! -f $PIDFILE ]
                then
                        echo "$PIDFILE does not exist, process is not running."
                else
                        PID=$(cat $PIDFILE)
                        echo "Stopping..."
                        $REDIS_CLI -p $REDISPORT SHUTDOWN
                        while [ -f $PIDFILE ]
                        do
                                echo "Waiting for Redis to shutdown..."
                                sleep 1
                        done
                        echo "Redis stopped"
                fi
                ;;
        restart|force-reload)
                ${0} stop
                ${0} start
                ;;
        *)
                echo "Usage: /etc/init.d/redis-6379 {start|stop|restart|force-reload}" >&2
                exit 1
esac

  
 
# Register the service and enable it at startup
chkconfig --add redis-6379
chkconfig redis-6379 on
# Verify
chkconfig --list

 
 
10. Jedis test
package my.redis.demo;
 
import java.io.IOException;
import java.util.HashSet;
import java.util.Set;
 
import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;
 
public class ClusterTest {
  
   private static JedisCluster jc; 
     static { 
           Set<HostAndPort> jedisClusterNodes = new HashSet<HostAndPort>(); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6379)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6380)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6381)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6382)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6383)); 
           jedisClusterNodes.add(new HostAndPort("172.16.5.240", 6384));
           jc = new JedisCluster(jedisClusterNodes, 5000, 1000); // timeout 5000 ms, at most 1000 redirections
       } 
    
   public static void main(String[] args) throws IOException, InterruptedException {
      System.out.println("##########################################");
     
      Thread t0 = new Thread(new Runnable() {
 
         @Override
         public void run() {
            for (int i = 0; i < 10000; i++) {
                String key = "key:" + i;
                jc.del(key);
                System.out.println("delete:" + key);
            }
         }
      });
     
      t0.start();
      t0.join();
     
      System.out.println("##########################################");
     
      Thread t1 = new Thread(new Runnable() {
 
         @Override
         public void run() {
            for (int i = 0; i < 10000; i++) {
                String key = "key:" + i;
                jc.set(key, key);
                System.out.println("write:" + key);
            }
         }
      });
 
      t1.start();
      t1.join();
       
        System.out.println("##########################################");
       
      Thread t2 = new Thread(new Runnable() {
 
         @Override
         public void run() {
            for (int i = 0; i < 10000; i++) {
                  String key = "key:" + i; 
                  jc.get(key);
                  System.out.println("read:"+key);
            }
         }
      });
 
      t2.start();
      t2.join();
   }
}