MongoDB replication cluster

MongoDB replication functionality:
    Modes:
    master/slave: the legacy master/slave replication mode
    replica set (also called a copy set): the replication mode recommended for MongoDB today;
        members stay in sync by copying and replaying the primary's oplog, similar to MySQL's binlog replication; replication is asynchronous
        arbiter: an arbiter member; it exchanges heartbeats every 2s and votes in elections, but keeps no copy of the data

    Special member types in a replica set:
        Priority-0 member: a standby; it can never be elected primary, but it still votes; typical use: an off-site / cold standby
        Hidden member: (first of all a priority-0 member), invisible to client applications
        Delayed member: also a priority-0 member; it applies the oplog a fixed amount of time behind the primary
        arbiter: arbiter member
    # The member options below are set by editing the configuration object returned by cfg=rs.conf() (see the sketch after this list)
        arbiterOnly: true
        priority: [0-1000]
        hidden: true
        buildIndexes: whether the member builds indexes (can only be false on a priority-0 member)
        tags: {loc1:desc1, ....}
        slaveDelay:   # replication delay in seconds
        votes: 0 means the member casts no vote in elections (the default is 1)
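
    A minimal sketch of applying these options on the PRIMARY; member index 2 and all values here are only examples, not taken from the experiment below:
        cfg = rs.conf()
        cfg.members[2].priority = 0        # this member can never be elected primary
        cfg.members[2].hidden = true       # hidden members must also have priority 0
        cfg.members[2].slaveDelay = 3600   # replay the oplog 3600 seconds behind the primary
        cfg.members[2].votes = 0           # this member no longer votes in elections
        rs.reconfig(cfg)                   # apply the new member configuration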

MongoDB replication architecture:
oplog: a fixed-size (capped) collection stored in MongoDB's local database; it records every write so that the operation can be replayed on the other members (the local database itself is never replicated)
Data has to be synchronized in three situations:
    1. initial sync
    2. post-rollback catch-up
    3. sharding chunk migrations
local: holds all replication state plus the oplog; in a replica set the oplog lives in the oplog.rs collection,
    whose default size on 64-bit systems is 5% of free disk space; it can be set explicitly with --oplogSize
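
The oplog can be inspected from the shell on any member; a small sketch (read-only commands, nothing specific to this experiment):
    > use local
    > db.oplog.rs.stats()           # shows capped:true along with the collection's size and storage statistics
    > db.printReplicationInfo()     # oplog size and the time window it currently covers
    # To size it explicitly, set oplogSize=<MB> in /etc/mongod.conf (or pass --oplogSize) before the
    # member's first start; changing it afterwards requires a separate maintenance procedure.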

MongoDB data synchronization types:
    Initial sync
        performed when a member has no data at all,
        or when it has lost its replication history (its data is older than anything left in the oplog)
    Replication
        the normal, continuous copying and replaying of the primary's oplog
    Steps of an initial sync (to repeat one by hand, see the sketch after this list):
        1. Clone all databases (every collection)
        2. Apply all changes to the data set: copy the oplog and replay it locally
        3. Build indexes for every collection
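
If a member ever has to repeat the initial sync (for example after its replication history is lost), one common approach, sketched here with this experiment's paths, is to wipe its data directory and let it resync from scratch:
    # service mongod stop
    # rm -rf /data/mongodb/*        # remove only the data files; keep the directory and its mongod ownership
    # service mongod start          # on restart the member runs through steps 1-3 above automatically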

Master/slave replication options (legacy mode):
--only: on the slave, replicate only the named database (by default every database is replicated)
--slavedelay: used on the slave; apply operations from the master only after the given delay (in seconds), which gives a safety window against operator mistakes
--fastsync: start the slave from a snapshot of the master's data directory; if the snapshot is recent this is much faster than a full initial sync
--autoresync: if the slave falls out of sync with the master, resync it automatically
--oplogSize: size of the master's oplog (in MB)
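
For reference, a legacy master/slave pair is wired up entirely with these startup options; a rough sketch (the host name "master1" and all values are placeholders):
# mongod --master --oplogSize 2048 --dbpath /data/mongodb                    # on the master
# mongod --slave --source master1:27017 --only testdb --slavedelay 60 --autoresync --dbpath /data/mongodb    # on the slave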

Replica set startup options:
--replSet NAME     # name of the replica set this member belongs to
--replIndexPrefetch {none|_id_only|all}    # which indexes secondaries prefetch before applying oplog operations
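
The same options given on the command line, equivalent to the replSet=mymongo line set in the config file later in this experiment (a sketch):
# mongod --replSet mymongo --dbpath /data/mongodb --replIndexPrefetch all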

Factors that affect elections in a replica set:
    heartbeats
    member priority
    optime: the timestamp of the last operation the member applied from the oplog; only the most up-to-date reachable members can win
    connections: a member must be able to reach a majority of the set
    network partitions

Election mechanism:
    Events that trigger an election:
        a new replica set is initialized
        a secondary loses contact with the primary
        the primary "steps down"
    The primary steps down when it receives the stepDown() command (see the example below),
        or when a secondary with a higher priority has caught up and meets all other conditions for becoming primary
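
For example, a re-election can be forced by hand from the current primary (60 seconds is just an example value):
    > rs.stepDown(60)      # run on the primary: step down and do not seek re-election for 60 seconds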

Three-node MongoDB replication experiment:
192.168.2.147 node4
192.168.2.33  node3
192.168.3.6  node2

# ntpdate cn.ntp.org.cn
# scp *.rpm root@node3:/data/pkg
# scp *.rpm root@node4:/data/pkg
# yum install -y *.rpm
# mkdir -pv /data/mongodb
# chown -R mongod.mongod /data/mongodb
# vim /etc/mongod.conf
#    dbpath=/data/mongodb
#    #bind_ip=127.0.0.1
#    replSet=mymongo
# scp /etc/mongod.conf root@node3:/etc/mongod.conf
# scp /etc/mongod.conf root@node4:/etc/mongod.conf
# service mongod start

# Prepare the data directory and config file on the other two nodes the same way, then start mongod on all three nodes
# Connect to the mongo shell on node2 to build the replica set

# rs.status() shows the current state of the replica set
> rs.status()
{
    "startupStatus" : 3,
    "info" : "run rs.initiate(...) if not yet done for the set",
    "ok" : 0,
    "errmsg" : "can't get local.system.replset config from self or any seed (EMPTYCONFIG)"
}
# Initialize the replica set
> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "node2:27017",
    "info" : "Config now saved locally.  Should come online in about a minute.",
    "ok" : 1
}
# After initialization completes, the prompt changes to: mymongo:PRIMARY>
# Add the other two nodes to the replica set:
mymongo:PRIMARY> rs.add("192.168.2.147:27017")
{ "ok" : 1 }
mymongo:PRIMARY> rs.add("192.168.2.33:27017")
{ "ok" : 1 }
# View the replica set status again
mymongo:PRIMARY> rs.status()
{
    "set" : "mymongo",
    "date" : ISODate("2017-05-23T16:22:29Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "node2:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1088,
            "optime" : Timestamp(1495556519, 1),
            "optimeDate" : ISODate("2017-05-23T16:21:59Z"),
            "electionTime" : Timestamp(1495555526, 1),
            "electionDate" : ISODate("2017-05-23T16:05:26Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.2.147:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 35,
            "optime" : Timestamp(1495556519, 1),
            "optimeDate" : ISODate("2017-05-23T16:21:59Z"),
            "lastHeartbeat" : ISODate("2017-05-23T16:22:28Z"),
            "lastHeartbeatRecv" : ISODate("2017-05-23T16:22:29Z"),
            "pingMs" : 3,
            "syncingTo" : "node2:27017"
        },
        {
            "_id" : 2,
            "name" : "192.168.2.33:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 30,
            "optime" : Timestamp(1495556519, 1),
            "optimeDate" : ISODate("2017-05-23T16:21:59Z"),
            "lastHeartbeat" : ISODate("2017-05-23T16:22:29Z"),
            "lastHeartbeatRecv" : ISODate("2017-05-23T16:22:28Z"),
            "pingMs" : 0,
            "syncingTo" : "node2:27017"
        }
    ],
    "ok" : 1
}

# Insert some test data on the primary
mymongo:PRIMARY> use testdb;
mymongo:PRIMARY> for(i=1;i<=100;i++) db.testcoll.insert({name:"user"+i,age:i,book:["book"+i,"apple"]})
WriteResult({ "nInserted" : 1 })
# Query the data on a secondary. By default, secondaries do not accept read operations; run rs.slaveOk() first.
mymongo:SECONDARY> rs.slaveOk()
mymongo:SECONDARY> db.testcoll.findOne()
{
    "_id" : ObjectId("592462540c61503feddc0103"),
    "name" : "user1",
    "age" : 1,
    "book" : [
        "book1",
        "apple"
    ]
}
# Stop mongod on the primary (node2); the replica set elects a new primary automatically.
# Check the status from one of the secondaries.
mymongo:SECONDARY> rs.status()
{
    "set" : "mymongo",
    "date" : ISODate("2017-05-23T16:30:36Z"),
    "myState" : 2,
    "syncingTo" : "192.168.2.147:27017",
    "members" : [
        {
            "_id" : 0,
            "name" : "node2:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)", #      ,       
            "uptime" : 0,
            "optime" : Timestamp(1495556693, 6),
            "optimeDate" : ISODate("2017-05-23T16:24:53Z"),
            "lastHeartbeat" : ISODate("2017-05-23T16:30:35Z"),
            "lastHeartbeatRecv" : ISODate("2017-05-23T16:30:13Z"),
            "pingMs" : 0
        },
        {
            "_id" : 1,
            "name" : "192.168.2.147:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",         #node4     
            "uptime" : 516,
            "optime" : Timestamp(1495556693, 6),
            "optimeDate" : ISODate("2017-05-23T16:24:53Z"),
            "lastHeartbeat" : ISODate("2017-05-23T16:30:34Z"),
            "lastHeartbeatRecv" : ISODate("2017-05-23T16:30:35Z"),
            "pingMs" : 0,
            "electionTime" : Timestamp(1495556989, 1),
            "electionDate" : ISODate("2017-05-23T16:29:49Z")
        },
        {
            "_id" : 2,
            "name" : "192.168.2.33:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1565,
            "optime" : Timestamp(1495556693, 6),
            "optimeDate" : ISODate("2017-05-23T16:24:53Z"),
            "infoMessage" : "syncing to: 192.168.2.147:27017",
            "self" : true
        }
    ],
    "ok" : 1
}
# db.isMaster() is another way to see which member is currently the primary and to list the set's members
mymongo:SECONDARY> db.isMaster()
{
    "setName" : "mymongo",
    "setVersion" : 3,
    "ismaster" : false,
    "secondary" : true,
    "hosts" : [
        "192.168.2.33:27017",
        "192.168.2.147:27017",
        "node2:27017"
    ],
    "primary" : "192.168.2.147:27017",      #master   node4
    "me" : "192.168.2.33:27017",
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2017-05-23T16:32:33.600Z"),
    "maxWireVersion" : 2,
    "minWireVersion" : 0,
    "ok" : 1
}
# Start mongod on node2 again; it rejoins the set as a secondary.
# By raising a member's priority in the replica set configuration, you can trigger a re-election and choose which node becomes master.
mymongo:PRIMARY> cfg=rs.conf()
{
    "_id" : "mymongo",
    "version" : 3,
    "members" : [
        {
            "_id" : 0,
            "host" : "node2:27017"
        },
        {
            "_id" : 1,
            "host" : "192.168.2.147:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.2.33:27017"
        }
    ]
}
mymongo:PRIMARY> cfg.members[0].priority=2      # raise node2's priority to 2
2
mymongo:PRIMARY> rs.reconfig(cfg)               # apply the new configuration; this forces a re-election
2017-05-24T00:36:57.911+0800 DBClientCursor::init call() failed
2017-05-24T00:36:57.916+0800 trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2017-05-24T00:36:57.919+0800 reconnect 127.0.0.1:27017 (127.0.0.1) ok
2017-05-24T00:36:57.923+0800 DBClientCursor::init call() failed
2017-05-24T00:36:57.926+0800 Error: error doing query: failed at src/mongo/shell/query.js:81
2017-05-24T00:36:57.929+0800 trying reconnect to 127.0.0.1:27017 (127.0.0.1) failed
2017-05-24T00:36:57.933+0800 reconnect 127.0.0.1:27017 (127.0.0.1) ok
mymongo:SECONDARY>      # after the re-election this connection is now on a SECONDARY: the higher-priority node2 took over as primary

oplog
In a replica set, each member keeps the oplog in its local database, in the oplog.rs collection
use local
mymongo:PRIMARY> show collections
me
oplog.rs
replset.minvalid
slaves
startup_log
system.indexes
system.replset
mymongo:PRIMARY> db.oplog.rs.findOne()
{
    "ts" : Timestamp(1495555525, 1),
    "h" : NumberLong(0),
    "v" : 2,
    "op" : "n",
    "ns" : "",
    "o" : {
        "msg" : "initiating set"
    }
}
# ts: timestamp of the operation
# op: type of operation: i (insert), u (update), d (delete); "n" as here is a no-op
# ns: the namespace (database.collection) the operation applies to
# o: the operation's document, i.e. the data that was written
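
The entry above is only the "initiating set" no-op; to see what the inserts from the earlier test produced, the oplog can be filtered by operation type and namespace (a query sketch against the testdb.testcoll collection created above):
mymongo:PRIMARY> db.oplog.rs.find({op:"i", ns:"testdb.testcoll"}).limit(1)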

----- Viewing oplog information -----
mymongo:PRIMARY> db.printReplicationInfo()
configured oplog size:   2311.57470703125MB
log length start to end: 1892secs (0.53hrs)
oplog first event time:  Wed May 24 2017 00:05:25 GMT+0800 (CST)
oplog last event time:   Wed May 24 2017 00:36:57 GMT+0800 (CST)
now:                     Wed May 24 2017 09:25:52 GMT+0800 (CST)
# configured oplog size: the size of the oplog
# log length start to end: the time span the oplog currently covers
# oplog first event time: time of the oldest operation still in the oplog
# oplog last event time: time of the most recent operation in the oplog
# now: the current time

----- Viewing the replication status of each slave -----
mymongo:PRIMARY> db.printSlaveReplicationInfo()
source: 192.168.2.147:27017
    syncedTo: Wed May 24 2017 00:36:57 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary 
source: 192.168.2.33:27017
    syncedTo: Wed May 24 2017 00:36:57 GMT+0800 (CST)
    0 secs (0 hrs) behind the primary
# source: IP and port of the secondary
# syncedTo: the point up to which that secondary has replicated, and how far it lags behind the primary
----- Viewing the replica set configuration stored in local -----
mymongo:PRIMARY> db.system.replset.findOne()
{
    "_id" : "mymongo",
    "version" : 4,
    "members" : [
        {
            "_id" : 0,
            "host" : "node2:27017",
            "priority" : 2
        },
        {
            "_id" : 1,
            "host" : "192.168.2.147:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.2.33:27017"
        }
    ]
}

Adding and removing replica set members
Ways to add a new member:
1. Plain initial sync via the oplog (the oplog has a fixed size; with a large data set the sync takes a long time and fails if the oplog rolls over before it finishes)
2. Seed the new member from a database snapshot, then let it catch up from the oplog
3. Copy the data files from an existing (for example, priority-0) member and then add the node with rs.add (the method demonstrated below)
#####
Stop mongod on one of the existing members and copy its data files over to the new node:
scp -r /data/mongodb/{dbname}.* root@192.168.1.154:/data/mongodb
# Only the data files of the DBs that are actually needed have to be copied; do not copy the local database, because it holds node-specific replication state
new node1:
chown -R mongod.mongod /data/mongodb
service mongod start
# On the replica set's primary, add the new node:
rs.add("192.168.1.154:27017")
# After a short while, rs.status() shows the new node as a SECONDARY.

Removing a member:
rs.remove("192.168.1.154")

# To see which nodes are currently syncing from this member, check the local database
use local
db.slaves.find()
# Make the primary step down (demote itself to secondary) and trigger a new election
# rs.stepDown(N)    # N: seconds during which it will not try to become primary again
# Prevent a node from being elected primary for a while (run on that secondary)
# rs.freeze(N)      # N: seconds
# rs.freeze(0)      # lift the freeze immediately
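
Combining the two, a common recipe for handing the primary role to one specific member without changing priorities looks roughly like this (the 120-second window is just an example):
# rs.freeze(120)      # run on every secondary that should NOT win the coming election
# rs.stepDown(120)    # then run on the current primary; the remaining (unfrozen) secondary takes over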

MongoDB replica set cluster, key points:
1. By default, secondaries do not accept reads or writes; to read from a secondary, run rs.slaveOk() first
2. Secondaries copy the primary's oplog and replay it locally; replication is asynchronous, so reads from a secondary may return slightly stale data
3. If a secondary's data falls behind the oldest entry in the primary's oplog, it has to perform a full resync, so the oplog must be sized generously
4. When the primary goes down, the replica set automatically elects a new primary.
5. Member priorities can be adjusted to influence which node is elected primary.
6. By default, the primary and the secondaries exchange heartbeats every 2 seconds.
7. The oplog is a capped (fixed-size) collection; once it fills up, the oldest entries are overwritten
8. On 64-bit systems, the default oplog size is 5% of free disk space