Quickly deploy a Ceph environment with ceph-deploy


Local environment information:
    Roles:
    ceph-deploy node: 1
    MON node: 1
    OSD nodes: 2

CentOS 7:
    ceph-deploy + monitor(ceph1)
        192.168.122.18
        172.16.34.253
    osd(ceph2)
        192.168.122.38
        172.16.34.184
    osd(ceph3)
        192.168.122.158 
        172.16.34.116

Installation and configuration common to all nodes
Configure a network proxy (not required)
If the nodes can reach the Internet directly, this step can be skipped.
vim /etc/environment
export http_proxy=xxx
export https_proxy=xxx
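
A hedged aside, not part of the original steps: the exported variables are only seen by new login sessions, so you may want to load them into the current shell and also point yum at the proxy directly (xxx is the same placeholder proxy URL as above):
source /etc/environment
vim /etc/yum.conf
# add under [main]; xxx is the same placeholder proxy URL as above
proxy=xxx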

Set the hostname and edit /etc/hosts
Set the hostname on each node:
hostnamectl set-hostname ceph1
hostnamectl set-hostname ceph2
hostnamectl set-hostname ceph3

Add all three nodes to /etc/hosts on every node:
vim /etc/hosts
192.168.122.18 ceph1
192.168.122.38 ceph2
192.168.122.158 ceph3

Verify that each node is reachable by name:
ping -c 3 ceph1
ping -c 3 ceph2
ping -c 3 ceph3

Make sure the NIC starts on boot
grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-xxx
ONBOOT=yes

Add EPEL and update the packages
Add the EPEL repository:
sudo yum install -y yum-utils && sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*

Add the Ceph repository (choose the release according to your needs):
sudo vim /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc

Update the system packages:
sudo yum update -y

Disable the firewall and SELinux
Disable firewalld here (the lazy route); alternatively keep it and configure iptables rules yourself:
systemctl status firewalld.service
systemctl stop firewalld.service
systemctl disable firewalld.service

To keep the firewall and configure iptables instead, see:
http://docs.ceph.org.cn/start/quick-start-preflight/#id7
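
If you do keep the firewall, a minimal sketch of such rules, assuming the interface is eth0 and the cluster traffic stays on 192.168.122.0/24 (both assumptions to adjust): monitors listen on TCP 6789 and OSDs use TCP 6800-7300 by default.
iptables -A INPUT -i eth0 -p tcp -s 192.168.122.0/24 --dport 6789 -j ACCEPT
iptables -A INPUT -i eth0 -p tcp -s 192.168.122.0/24 --dport 6800:7300 -j ACCEPT
# persisting the rules requires the iptables-services package on CentOS 7
service iptables save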

Disable SELinux: edit the config file (takes effect after reboot) + set it off temporarily (takes effect immediately):
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

Verify that it took effect:
grep SELINUX= /etc/selinux/config
getenforce

After completing the steps above, it is recommended to reboot the machines (all nodes).
Install and configure NTP
sudo yum install ntp ntpdate ntp-doc -y
systemctl restart ntpd
systemctl status ntpd

TODO: the other nodes should still be configured to sync ntp against the MON node.
The official docs recommend configuring NTP on all nodes to avoid clock-skew problems.
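
A minimal sketch of that TODO, assuming ceph1 is the time source (not something the original configures): on ceph2 and ceph3, point ntpd at the MON node and restart it.
vim /etc/ntp.conf
# comment out the default "server N.centos.pool.ntp.org iburst" lines and add:
server ceph1 iburst

systemctl restart ntpd
ntpq -p    # ceph1 should show up as a peer after a short while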

Install and configure SSH
sudo yum install openssh-server -y

Most Linux distributions already have the ssh service installed and running by default; just confirm it:
systemctl status sshd

Installation on the ceph-deploy node
Install ceph-deploy
sudo yum install ceph-deploy -y

Configure SSH (generate keys for passwordless access)
The official docs recommend creating a dedicated non-root user for deploying Ceph (see "Create a Ceph Deploy user" in the preflight guide), but for simplicity the root user is used directly here.
Generate a key pair and copy the public key to every node:
ssh-keygen
ssh-copy-id root@ceph1
ssh-copy-id root@ceph2
ssh-copy-id root@ceph3

Verify passwordless login:
ssh ceph1
exit
ssh ceph2
exit
ssh ceph3
exit

Configure ~/.ssh/config so ceph-deploy logs in with the right user:
vim /root/.ssh/config
Host ceph1
   Hostname ceph1
   User root
Host ceph2
   Hostname ceph2
   User root
Host ceph3
   Hostname ceph3
   User root

Install the storage cluster
Create a working directory:
mkdir -p /root/my_cluster
cd /root/my_cluster

Create the cluster:
ceph-deploy new ceph1

Edit the configuration file:
vim ceph.conf
osd pool default size = 2
public network = 192.168.122.18/24
★ Note: the mon_host address must fall inside the public network range!!!
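
For reference, a sketch of what the edited ceph.conf ends up containing; the generated values come from ceph-deploy new (the fsid shown matches the monitor log later in this article, yours will differ) and only the last two lines are added by hand:
[global]
fsid = 92da5066-e973-4e7e-8524-8dcbc948c93b
mon_initial_members = ceph1
mon_host = 192.168.122.18
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

# added by hand: two OSD hosts so two replicas, and a /24 that contains mon_host
osd pool default size = 2
public network = 192.168.122.18/24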

Install Ceph
This step installs the Ceph packages on every node over the network; on a slow connection it can take a long time or even fail.
The same result can be achieved by installing the packages manually on each node (a sketch follows the command below); see [Analysis of the ceph-deploy install process] for the commands it actually runs.
ceph-deploy install ceph1 ceph2 ceph3
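
A sketch of that manual alternative, assembled from the commands listed in the analysis section below (run on every node; the release URL is whichever one ceph-deploy picks, rpm-jewel in this run):
sudo yum clean all
sudo yum -y install epel-release yum-plugin-priorities
sudo rpm --import https://download.ceph.com/keys/release.asc
sudo rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
sudo yum -y install ceph ceph-radosgw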

Deploy the initial monitor:
ceph-deploy mon create-initial

At this point the Ceph cluster has been created and the monitor is running; the next step is to configure the OSDs.
Verify that the monitor is running
Check the process:
    [root@ceph1 my_cluster]# ps -ef | grep ceph
    root     21688 16109  0 08:50 pts/0    00:00:00 grep --color=auto ceph
    ceph     29366     1  0 May17 ?        00:00:13 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph

Check with systemctl:
    [root@ceph1 my_cluster]# systemctl status | grep ceph
    ● ceph1
               │     └─21690 grep --color=auto ceph
                 ├─system-ceph\x2dmon.slice
                 │ └─ceph-mon@ceph1.service
                 │   └─29366 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph

    [root@ceph1 my_cluster]# systemctl status ceph-mon@ceph1.service
    ● ceph-mon@ceph1.service - Ceph cluster monitor daemon
       Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
       Active: active (running) since Wed 2017-05-17 16:50:48 CST; 16h ago
     Main PID: 29366 (ceph-mon)
       CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph1.service
               └─29366 /usr/bin/ceph-mon -f --cluster ceph --id ceph1 --setuser ceph --setgroup ceph
    May 17 16:50:48 ceph1 systemd[1]: Started Ceph cluster monitor daemon.
    May 17 16:50:48 ceph1 systemd[1]: Starting Ceph cluster monitor daemon...
    May 17 16:50:48 ceph1 ceph-mon[29366]: starting mon.ceph1 rank 0 at 192.168.122.18:6789/0 mon_data /var/lib/ceph/mon/ceph-ceph1 fsid 92da5066-e973-4e7e-8524-8dcbc948c93b

Analysis of the ceph-deploy install process
Summarized from its log output, this is roughly what ceph-deploy install does on each node:
connected to host
installing Ceph
    yum clean all
    yum -y install epel-release
    yum -y install yum-plugin-priorities
    rpm --import https://download.ceph.com/keys/release.asc
    rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
    yum -y install ceph ceph-radosgw

Note that the sequence includes yum -y install epel-release, so each node needs working repository access (or the proxy set up earlier) for this step to succeed.

Add OSDs, method 1 (using two raw disks)
ceph-deploy disk zap ceph2:/dev/vdb ceph3:/dev/vdb
ceph-deploy osd prepare ceph2:/dev/vdb ceph3:/dev/vdb

★ Note: [the prepare command only prepares the OSD; on most operating systems, the activate phase runs automatically once the partition is created, via Ceph's udev rules, without running activate at all]. See:
http://docs.ceph.org.cn/rados/deployment/ceph-deploy-osd/

In other words, the OSDs may already be up and running before activate is ever executed; you can confirm whether they started the same way as for the monitor. See: [Verify that the OSDs are running (same approach as for the monitor)]

ceph-deploy osd activate ceph2:/dev/vdb ceph3:/dev/vdb

Distribute the configuration and admin key, then check the cluster health:
ceph-deploy admin ceph1 ceph2 ceph3
ceph health
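
Optionally (not part of the original run), once ceph-deploy admin has copied the keyring you can also look at the overall status and where the new OSDs landed in the CRUSH tree:
ceph -s
ceph osd tree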

Add OSDs, method 2 (using two directories)
ssh ceph2
sudo mkdir /var/local/osd0
exit

ssh ceph3
sudo mkdir /var/local/osd1
exit

ceph-deploy osd prepare ceph2:/var/local/osd0 ceph3:/var/local/osd1
ceph-deploy osd activate ceph2:/var/local/osd0 ceph3:/var/local/osd1
ceph-deploy admin ceph1 ceph2 ceph3
ceph health
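
A possible pitfall, stated as an assumption rather than something hit in the original run: since the Jewel release the daemons run as the ceph user, so if prepare or activate fails on these directories with a permission error, handing the directories to that user usually fixes it.
ssh ceph2 sudo chown ceph:ceph /var/local/osd0
ssh ceph3 sudo chown ceph:ceph /var/local/osd1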

Verify that the OSDs are running (same approach as for the monitor)
[root@ceph2 ~]# ps -ef | grep ceph
root     15818 15802  0 09:17 pts/0    00:00:00 grep --color=auto ceph
ceph     24426     1  0 May17 ?        00:00:33 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

[root@ceph2 ~]# systemctl status | grep ceph
● ceph2
           │     └─15822 grep --color=auto ceph
             ├─system-ceph\x2dosd.slice
             │ └─ceph-osd@0.service
             │   └─24426 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph

[root@ceph2 ~]# systemctl status ceph-osd@0.service
● ceph-osd@0.service - Ceph object storage daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled-runtime; vendor preset: disabled)
   Active: active (running) since Wed 2017-05-17 16:56:54 CST; 16h ago
 Main PID: 24426 (ceph-osd)
   CGroup: /system.slice/system-ceph\x2dosd.slice/ceph-osd@0.service
           └─24426 /usr/bin/ceph-osd -f --cluster ceph --id 0 --setuser ceph --setgroup ceph
May 17 16:56:53 ceph2 systemd[1]: Starting Ceph object storage daemon...
May 17 16:56:53 ceph2 ceph-osd-prestart.sh[24375]: create-or-move updating item name 'osd.0' weight 0.0146 at location {host=ceph2,root=default} to crush map
May 17 16:56:54 ceph2 systemd[1]: Started Ceph object storage daemon.
May 17 16:56:54 ceph2 ceph-osd[24426]: starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
May 17 16:56:54 ceph2 ceph-osd[24426]: 2017-05-17 16:56:54.080778 7f0727d3a800 -1 osd.0 0 log_to_monitors {default=true}

Data cleanup
Remove the Ceph packages:
ceph-deploy purge ceph1 ceph2 ceph3

Remove the deployment data and keys:
ceph-deploy purgedata ceph1 ceph2 ceph3
ceph-deploy forgetkeys

If any data is left over, remove it manually:
rm -rf /var/lib/ceph/osd/*
rm -rf /var/lib/ceph/mon/*
rm -rf /var/lib/ceph/mds/*
rm -rf /var/lib/ceph/bootstrap-mds/*
rm -rf /var/lib/ceph/bootstrap-osd/*
rm -rf /var/lib/ceph/bootstrap-mon/*
rm -rf /var/lib/ceph/tmp/*
rm -rf /etc/ceph/*
rm -rf /var/run/ceph/*
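
One last hedged note, not part of the original list: purgedata does not remove the partitions created on the raw disks, so before redeploying onto /dev/vdb you may need to clear them again, for example by re-running the zap step from the OSD section once the packages are reinstalled:
ceph-deploy disk zap ceph2:/dev/vdb ceph3:/dev/vdb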