Using the Ceph vstart Virtual Environment


Overview
Ceph provides official deployment documentation, but deploying a full cluster takes time, and for small experiments such as verifying a feature or a bug fix it is not really necessary; it wastes both time and company resources (unless you work for a company with money to burn). The Ceph community is very friendly about this problem and offers a quick alternative: vstart can rapidly create a virtual environment that behaves almost the same as a real one, and this article shows how to use it.
Getting started
The earlier article Cephコンパイル構築rpmパッケージ (compiling Ceph and building RPM packages) covered how to compile the ceph source code, so start by building the source with the cmake approach described there (a rough sketch of the typical steps is given below), and then take a look at the usage of vstart for Luminous version 12.2.10.
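The build sequence below is only an illustrative sketch, assuming the Luminous 12.2.10 source tree; install-deps.sh and do_cmake.sh are the helper scripts at the top of the Ceph source tree, and the make parallelism should be adjusted to your machine:

[root@localhost ceph-12.2.10]# ./install-deps.sh    # install build dependencies
[root@localhost ceph-12.2.10]# ./do_cmake.sh        # run cmake and create the build/ directory
[root@localhost ceph-12.2.10]# cd build
[root@localhost build]# make -j4                    # pick -j to match your CPU and RAM

The usage printed by vstart.sh is the following: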
usage: ../src/vstart.sh [option]...
ex: ../src/vstart.sh -n -d --mon_num 3 --osd_num 3 --mds_num 1 --rgw_num 1
options:
    -d, --debug
    -s, --standby_mds: Generate standby-replay MDS for each active
    -l, --localhost: use localhost instead of hostname
    -i : bind to specific ip
    -n, --new
    -N, --not-new: reuse existing cluster config (default)
    --valgrind[_{osd,mds,mon,rgw}] 'toolname args...'
    --nodaemon: use ceph-run as wrapper for mon/osd/mds
    --smallmds: limit mds cache size
    -m ip:port      specify monitor address
    -k keep old configuration files
    -x enable cephx (on by default)
    -X disable cephx
    --hitset  : enable hitset tracking
    -e : create an erasure pool
    -o config        add extra config parameters to all sections
    --mon_num specify ceph monitor count
    --osd_num specify ceph osd count
    --mds_num specify ceph mds count
    --rgw_num specify ceph rgw count
    --mgr_num specify ceph mgr count
    --rgw_port specify ceph rgw http listen port
    --rgw_frontend specify the rgw frontend configuration
    --rgw_compression specify the rgw compression plugin
    -b, --bluestore use bluestore as the osd objectstore backend
    --memstore use memstore as the osd objectstore backend
    --cache : enable cache tiering on pool
    --short: short object names only; necessary for ext4 dev
    --nolockdep disable lockdep
    --multimds  allow multimds with maximum active count

Creating a new cluster
The following vstart command creates a cluster with 3 monitors, 3 mds (1 active, 2 standby) and 3 osds (3 replicas); an invocation with explicit daemon counts is shown after the output.
[root@localhost build]#  sh ../src/vstart.sh -n -d

** going verbose **
rm -f core*
hostname localhost
ip 192.168.12.200
port 40385
/var/ws/ceph-12.2.10/build/bin/ceph-authtool --create-keyring --gen-key --name=mon. /var/ws/ceph-12.2.10/build/keyring --cap mon allow *
creating /var/ws/ceph-12.2.10/build/keyring
/var/ws/ceph-12.2.10/build/bin/ceph-authtool --gen-key --name=client.admin --set-uid=0 --cap mon allow * --cap osd allow * --cap mds allow * --cap mgr allow * /var/ws/ceph-12.2.10/build/keyring
/var/ws/ceph-12.2.10/build/bin/ceph-authtool --gen-key --name=client.rgw --cap mon allow rw --cap osd allow rwx --cap mgr allow rw /var/ws/ceph-12.2.10/build/keyring
/var/ws/ceph-12.2.10/build/bin/monmaptool --create --clobber --add a 192.168.12.200:40385 --add b 192.168.12.200:40386 --add c 192.168.12.200:40387 --print /tmp/ceph_monmap.31812
/var/ws/ceph-12.2.10/build/bin/monmaptool: monmap file /tmp/ceph_monmap.31812
/var/ws/ceph-12.2.10/build/bin/monmaptool: generated fsid 053bf1c1-5bea-466f-bad8-18de4e7f18cf
epoch 0
fsid 053bf1c1-5bea-466f-bad8-18de4e7f18cf
last_changed 2019-01-18 11:22:57.763781
created 2019-01-18 11:22:57.763781
0: 192.168.12.200:40385/0 mon.a
1: 192.168.12.200:40386/0 mon.b
2: 192.168.12.200:40387/0 mon.c
...

/var/ws/ceph-12.2.10/build/bin/ceph-authtool --create-keyring --gen-key --name=mds.c /var/ws/ceph-12.2.10/build/dev/mds.c/keyring
creating /var/ws/ceph-12.2.10/build/dev/mds.c/keyring
/var/ws/ceph-12.2.10/build/bin/ceph -c /var/ws/ceph-12.2.10/build/ceph.conf -k /var/ws/ceph-12.2.10/build/keyring -i /var/ws/ceph-12.2.10/build/dev/mds.c/keyring auth add mds.c mon allow profile mds osd allow * mds allow mgr allow profile mds
added key for mds.c
/var/ws/ceph-12.2.10/build/bin/ceph-mds -i c -c /var/ws/ceph-12.2.10/build/ceph.conf
starting mds.c at -
started.  stop.sh to stop.  see out/ (e.g. 'tail -f out/????') for debug output.

dashboard urls: http://192.168.12.200:41385/
  restful urls: https://192.168.12.200:42385
  w/ user/pass: admin / 84eb0e02-0034-4809-aa81-0c21a468180d

export PYTHONPATH=./pybind:/var/ws/ceph-12.2.10/src/pybind:/var/ws/ceph-12.2.10/build/lib/cython_modules/lib.2:
export LD_LIBRARY_PATH=/var/ws/ceph-12.2.10/build/lib
CEPH_DEV=1
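The defaults used above already give 3 monitors, 3 osds and 3 mds; if you want a different layout, the daemon-count options listed in the usage can be passed explicitly. The invocation below is only illustrative (it is not the run that produced the output above):

# -n asks for a brand-new cluster and -d turns on debug logging; the *_num options
# override the default daemon counts (see the usage at the top of this article)
[root@localhost build]# sh ../src/vstart.sh -n -d --mon_num 1 --osd_num 6 --mds_num 1 --rgw_num 1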
  • Check the cluster status
    [root@localhost build]# ./bin/ceph -s
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    2019-01-18 12:07:02.847675 7fcc629b4700 -1 WARNING: all dangerous and experimental features are enabled.
    2019-01-18 12:07:02.907931 7fcc629b4700 -1 WARNING: all dangerous and experimental features are enabled.
      cluster:
        id:     32d792fd-1035-4ad5-a237-0d4851efd5cb
        health: HEALTH_WARN
                no active mgr
    
      services:
        mon: 3 daemons, quorum a,b,c
        mgr: no daemons active
        mds: cephfs_a-1/1/1 up  {0=a=up:active}, 2 up:standby
        osd: 3 osds: 3 up, 3 in
    
      data:
        pools:   2 pools, 16 pgs
        objects: 21 objects, 2.19KiB
        usage:   1.99TiB used, 363GiB / 2.34TiB avail
        pgs:     16 active+clean
    
    
  • Check the mds
    [root@localhost build]# ./bin/ceph mds dump --format=json-pretty
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    2019-01-18 12:15:45.537644 7f7834519700 -1 WARNING: all dangerous and experimental features are enabled.
    2019-01-18 12:15:45.594255 7f7834519700 -1 WARNING: all dangerous and experimental features are enabled.
    dumped fsmap epoch 8
    
    {
        "epoch": 7,
        "flags": 12,
        "ever_allowed_features": 0,
        "explicitly_allowed_features": 0,
        "created": "2019-01-18 11:24:14.522559",
        "modified": "2019-01-18 11:24:52.769470",
        "tableserver": 0,
        "root": 0,
        "session_timeout": 60,
        "session_autoclose": 300,
        "max_file_size": 1099511627776,
        "last_failure": 0,
        "last_failure_osd_epoch": 0,
        "compat": {
            "compat": {},
            "ro_compat": {},
            "incompat": {
                "feature_1": "base v0.20",
                "feature_2": "client writeable ranges",
                "feature_3": "default file layouts on dirs",
                "feature_4": "dir inode in separate object",
                "feature_5": "mds uses versioned encoding",
                "feature_6": "dirfrag is stored in omap",
                "feature_8": "no anchor table",
                "feature_9": "file layout v2"
            }
        },
        "max_mds": 1,
        "in": [
            0
        ],
        "up": {
            "mds_0": 4140
        },
        "failed": [],
        "damaged": [],
        "stopped": [],
        "info": {
            "gid_4140": {
                "gid": 4140,
                "name": "a",
                "rank": 0,
                "incarnation": 4,
                "state": "up:active",
                "state_seq": 7,
                "addr": "192.168.12.200:6813/3759861531",
                "standby_for_rank": -1,
                "standby_for_fscid": -1,
                "standby_for_name": "",
                "standby_replay": false,
                "export_targets": [],
                "features": 4611087853745930235
            }
        },
        "data_pools": [
            1
        ],
        "metadata_pool": 2,
        "enabled": true,
        "fs_name": "cephfs_a",
        "balancer": "",
        "standby_count_wanted": 1
    }
    
  • Check the pool status
    [root@localhost build]# ./bin/ceph osd pool ls detail
    *** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
    2019-01-18 12:17:28.173025 7ff840996700 -1 WARNING: all dangerous and experimental features are enabled.
    2019-01-18 12:17:28.230196 7ff840996700 -1 WARNING: all dangerous and experimental features are enabled.
    pool 1 'cephfs_data_a' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 13 flags hashpspool stripe_width 0 application cephfs
    pool 2 'cephfs_metadata_a' replicated size 3 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 last_change 13 flags hashpspool stripe_width 0 application cephfs
    
  • Cluster configuration file: you can edit the file below by hand (for example the log levels) to get the behaviour you want; a runtime alternative is sketched after the file.
    [root@localhost build]# cat ./ceph.conf
    ; generated by vstart.sh on Fri Jan 18 11:22:57 UTC 2019
    [client.vstart.sh]
            num mon = 3
            num osd = 3
            num mds = 3
            num mgr = 1
            num rgw = 0
    
    [global]
            fsid = 32d792fd-1035-4ad5-a237-0d4851efd5cb
            osd pg bits = 3
            osd pgp bits = 5  ; (invalid, but ceph should cope!)
            osd pool default size = 3
            osd crush chooseleaf type = 0
            osd pool default min size = 1
            osd failsafe full ratio = .99
            mon osd nearfull ratio = .99
            mon osd backfillfull ratio = .99
            mon osd reporter subtree level = osd
            mon osd full ratio = .99
            mon data avail warn = 2
            mon data avail crit = 1
            erasure code dir = /var/ws/ceph-12.2.10/build/lib
            plugin dir = /var/ws/ceph-12.2.10/build/lib
            osd pool default erasure code profile = plugin=jerasure technique=reed_sol_van k=2 m=1 crush-failure-domain=osd
            rgw frontends = civetweb port=8000
            ; needed for s3tests
            rgw crypt s3 kms encryption keys = testkey-1=YmluCmJvb3N0CmJvb3N0LWJ1aWxkCmNlcGguY29uZgo= testkey-2=aWIKTWFrZWZpbGUKbWFuCm91dApzcmMKVGVzdGluZwo=
            rgw crypt require ssl = false
            rgw lc debug interval = 10
            filestore fd cache size = 32
            run dir = /var/ws/ceph-12.2.10/build/out
            enable experimental unrecoverable data corrupting features = *
            lockdep = true
            auth cluster required = cephx
            auth service required = cephx
            auth client required = cephx
    [client]
            keyring = /var/ws/ceph-12.2.10/build/keyring
            log file = /var/ws/ceph-12.2.10/build/out/$name.$pid.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.$pid.asok
    
    [client.rgw]
    
    [mds]
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
    
            debug ms = 1
            debug mds = 20
            debug auth = 20
            debug monc = 20
            debug mgrc = 20
            mds debug scatterstat = true
            mds verify scatter = true
            mds log max segments = 2
            mds debug frag = true
            mds debug auth pins = true
            mds debug subtrees = true
            mds data = /var/ws/ceph-12.2.10/build/dev/mds.$id
            mds root ino uid = 0
            mds root ino gid = 0
    
    [mgr]
            mgr data = /var/ws/ceph-12.2.10/build/dev/mgr.$id
            mgr module path = /var/ws/ceph-12.2.10/src/pybind/mgr
            mon reweight min pgs per osd = 4
            mon pg warn min per osd = 3
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
    
            debug ms = 1
            debug monc = 20
        debug mon = 20
            debug mgr = 20
    
    [osd]
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
            osd_check_max_object_name_len_on_startup = false
            osd data = /var/ws/ceph-12.2.10/build/dev/osd$id
            osd journal = /var/ws/ceph-12.2.10/build/dev/osd$id/journal
            osd journal size = 100
            osd class tmp = out
            osd class dir = /var/ws/ceph-12.2.10/build/lib
            osd class load list = *
            osd class default list = *
            osd scrub load threshold = 2000.0
            osd debug op order = true
            osd debug misdirected ops = true
            filestore wbthrottle xfs ios start flusher = 10
            filestore wbthrottle xfs ios hard limit = 20
            filestore wbthrottle xfs inodes hard limit = 30
            filestore wbthrottle btrfs ios start flusher = 10
            filestore wbthrottle btrfs ios hard limit = 20
            filestore wbthrottle btrfs inodes hard limit = 30
            osd copyfrom max chunk = 524288
            bluestore fsck on mount = true
            bluestore block create = true
        bluestore block db path = /var/ws/ceph-12.2.10/build/dev/osd$id/block.db.file
            bluestore block db size = 67108864
            bluestore block db create = true
        bluestore block wal path = /var/ws/ceph-12.2.10/build/dev/osd$id/block.wal.file
            bluestore block wal size = 1048576000
            bluestore block wal create = true
    
            debug ms = 1
            debug osd = 25
            debug objecter = 20
            debug monc = 20
            debug mgrc = 20
            debug journal = 20
            debug filestore = 20
            debug bluestore = 30
            debug bluefs = 20
            debug rocksdb = 10
            debug bdev = 20
            debug rgw = 20
        debug reserver = 10
            debug objclass = 20
    
    
    
    [mon]
            mgr initial modules = restful status dashboard balancer
            mon pg warn min per osd = 3
            mon osd allow primary affinity = true
            mon reweight min pgs per osd = 4
            mon osd prime pg temp = true
            crushtool = /var/ws/ceph-12.2.10/build/bin/crushtool
            mon allow pool delete = true
    
            log file = /var/ws/ceph-12.2.10/build/out/$name.log
            admin socket = /tmp/ceph-asok.ZlA7Lh/$name.asok
            chdir = ""
            pid file = /var/ws/ceph-12.2.10/build/out/$name.pid
            heartbeat file = /var/ws/ceph-12.2.10/build/out/$name.heartbeat
    
    
            debug mon = 20
            debug paxos = 20
            debug auth = 20
        debug mgrc = 20
            debug ms = 1
    
            mon cluster log file = /var/ws/ceph-12.2.10/build/out/cluster.mon.$id.log
    [global]
    
    [mon.a]
            host = localhost
            mon data = /var/ws/ceph-12.2.10/build/dev/mon.a
            mon addr = 192.168.12.200:40385
    [mon.b]
            host = localhost
            mon data = /var/ws/ceph-12.2.10/build/dev/mon.b
            mon addr = 192.168.12.200:40386
    [mon.c]
            host = localhost
            mon data = /var/ws/ceph-12.2.10/build/dev/mon.c
            mon addr = 192.168.12.200:40387
    [mgr.x]
            host = localhost
    [osd.0]
            host = localhost
    [osd.1]
            host = localhost
    [osd.2]
            host = localhost
    [mds.a]
            host = localhost
    [mds.b]
            host = localhost
    [mds.c]
            host = localhost
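    If you would rather not edit the file and restart daemons, debug levels can usually also be adjusted at runtime; the following is a minimal sketch, assuming the daemon names and admin sockets shown in the config above:

    # raise the osd.0 log level through its admin socket
    [root@localhost build]# ./bin/ceph daemon osd.0 config set debug_osd 20/20
    # or push the same change from the mon side with injectargs
    [root@localhost build]# ./bin/ceph tell osd.0 injectargs '--debug-osd 20/20'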
    
  • Log files: the logs are kept under build/out/ and are what you use to study how things behave during an experiment and to analyse problems.
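    For instance (the names mds.a and osd.0 come from this cluster, and daemon logs follow the $name.log pattern from the config above):

    [root@localhost build]# tail -f out/mds.a.log         # follow the active mds log
    [root@localhost build]# grep -i error out/osd.0.log   # search an osd log for errors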

  • Once the environment is up you can carry out your own experiments on it. This is the simplest possible cluster; vstart can also be used to test features such as multimds, bluestore and directory fragmentation, as sketched below.
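    An illustrative example, combining options from the usage at the top of this article (not a run from this article): bring up a bluestore-backed cluster with three mds of which up to two are active:

    [root@localhost build]# sh ../src/vstart.sh -n -d -b --mds_num 3 --multimds 2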
Stopping the environment
[root@localhost build]# sh ../src/stop.sh --
usage: ../src/stop.sh [all] [mon] [mds] [osd] [rgw]
    

Based on the usage shown above, just run the script with whichever components you want to stop.
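A couple of illustrative invocations, assuming (per the usage above) that no argument stops everything vstart started and that naming a daemon type limits the stop to that type:

[root@localhost build]# sh ../src/stop.sh        # stop all vstart daemons
[root@localhost build]# sh ../src/stop.sh osd    # stop only the osds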