Building a Ceph Cluster


I. Pre-installation preparation:
1. System: CentOS 7.4 x64
[root@ceph-node1 ~]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

2. Hosts:
Hostname      Address       Role
ceph-node1    10.0.70.40    deploy, mon, osd*2
ceph-node2    10.0.70.41    mon, osd*2
ceph-node3    10.0.70.42    mon, osd*2
3. Three hosts, each with two data disks (each disk larger than 100 GB)
[root@ceph-node1 ~]# lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0           2:0    1     4K  0 disk
sda           8:0    0  1000G  0 disk
├─sda1        8:1    0     1G  0 part /boot
└─sda2        8:2    0   999G  0 part
  ├─cl-root 253:0    0    50G  0 lvm  /
  ├─cl-swap 253:1    0   7.9G  0 lvm  [SWAP]
  └─cl-home 253:2    0 941.1G  0 lvm  /home
sdb           8:16   0  1000G  0 disk
sdc           8:32   0  1000G  0 disk
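Before going further, it is worth confirming on every node that the two data disks are actually present and not partitioned or mounted. A minimal check, run locally on each node:

lsblk -d -n -o NAME,SIZE,TYPE /dev/sdb /dev/sdc        # both data disks should be listed as plain, unpartitioned disks
mount | grep -E '/dev/sd[bc]' || echo "sdb/sdc are not mounted"    # no grep output means they are unused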

Hostname resolution and passwordless SSH
ssh-keygen       # run on the deploy node (ceph-node1); the key is used for passwordless SSH to the ceph nodes
ssh-copy-id root@ceph-node1
ssh-copy-id root@ceph-node2
ssh-copy-id root@ceph-node3
vim /etc/hosts    # add the hostname entries below
10.0.70.40 ceph-node1
10.0.70.41 ceph-node2
10.0.70.42 ceph-node3
scp /etc/hosts root@ceph-node2:/etc/
scp /etc/hosts root@ceph-node3:/etc/
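After copying the key and distributing /etc/hosts, a quick sanity check from the deploy node confirms that every node is reachable by name without a password prompt (a minimal sketch):

for node in ceph-node1 ceph-node2 ceph-node3; do
    ssh root@$node hostname    # should print the node name without asking for a password
done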

yum repository configuration
yum clean all
rm -rf /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/aliyuncs/d' /etc/yum.repos.d/epel.repo
sed -i 's/$releasever/7/g' /etc/yum.repos.d/CentOS-Base.repo
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
scp /etc/yum.repos.d/ceph.repo root@ceph-node2:/etc/yum.repos.d/     # copy the ceph repo to the other nodes
scp /etc/yum.repos.d/ceph.repo root@ceph-node3:/etc/yum.repos.d/
yum install hdparm  ceph ceph-radosgw rdate  -y    # install ceph and related packages (run on all nodes)
hdparm -W 0 /dev/sda      # disable the disk write cache
rdate -s time-a.nist.gov        # synchronize the system clock
echo rdate -s time-a.nist.gov >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
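Once the repositories and packages are in place on all three nodes, the installed version and the clock can be checked in one pass from the deploy node (a sketch, assuming the steps above were repeated on every node):

for node in ceph-node1 ceph-node2 ceph-node3; do
    ssh root@$node "ceph --version; date"    # versions should match and clocks should agree to within a few seconds
done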

II. Cluster deployment
1. Configuring the deploy (admin) node
yum install ceph-deploy -y
mkdir ~/cluster                     # create a working directory to hold the cluster configuration files
cd  ~/cluster                   # ceph-deploy must be run from this directory
ceph-deploy new ceph-node1 ceph-node2 ceph-node3   # define the initial MON nodes and generate ceph.conf in the current directory
echo "osd_pool_default_size = 2" >> ~/cluster/ceph.conf    # set the default number of replicas to 2
echo public_network=10.0.70.20/24 >> ~/cluster/ceph.conf       # write the nodes' public subnet into ceph.conf as public_network
echo mon_clock_drift_allowed = 2 >> ~/cluster/ceph.conf    # relax the allowed clock drift between MONs
cat  ~/cluster/ceph.conf
osd_pool_default_size = 2
public_network=10.0.70.20/24
mon_clock_drift_allowed = 2
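If ceph.conf is changed again after the monitors are running, the updated file also has to reach the other nodes; ceph-deploy can push it from the working directory (a sketch):

ceph-deploy --overwrite-conf config push ceph-node1 ceph-node2 ceph-node3    # copies ~/cluster/ceph.conf to /etc/ceph/ on each node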

Deploy the monitors
ceph-deploy mon create-initial
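When create-initial completes it gathers the keyrings into the working directory, and the monitors should have formed a quorum. This can be verified with standard commands (assuming the admin keyring is readable on this node):

ceph mon stat                              # lists the monitors and which of them are in quorum
ceph quorum_status --format json-pretty    # detailed quorum information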

Deploy the OSDs with the following commands:
ceph-deploy --overwrite-conf osd prepare ceph-node1:/dev/sdb ceph-node1:/dev/sdc ceph-node2:/dev/sdb ceph-node2:/dev/sdc  ceph-node3:/dev/sdb ceph-node3:/dev/sdc   --zap-disk
ceph-deploy --overwrite-conf osd activate ceph-node1:/dev/sdb1 ceph-node1:/dev/sdc1  ceph-node2:/dev/sdb1 ceph-node2:/dev/sdc1  ceph-node3:/dev/sdb1 ceph-node3:/dev/sdc1
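After activation all six OSDs should register with the cluster; a quick check from the deploy node (a sketch) is:

ceph osd stat    # expect something like: 6 osds: 6 up, 6 in
ceph df          # shows the raw capacity contributed by the new OSDs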

Check the cluster status
[root@ceph-node1 cluster]# ceph -s
cluster 466e0a3e-f351-46f3-94a2-5ea976c26fd8
health HEALTH_WARN
15 pgs peering
2 pgs stuck unclean
too few PGs per OSD (21 < min 30)
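The "too few PGs per OSD" warning means the existing pool was created with very few placement groups; raising pg_num and pgp_num clears it. A minimal sketch, assuming the default rbd pool created by jewel and a target of 128 PGs (128 PGs x 2 replicas / 6 OSDs ≈ 42 PGs per OSD, above the warning threshold of 30):

ceph osd pool set rbd pg_num 128
ceph osd pool set rbd pgp_num 128    # pgp_num must be raised to match pg_num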

View the OSD tree

[root@ceph-node1 cluster]# ceph osd tree
ID WEIGHT  TYPE NAME           UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 5.82715 root default
-2 1.94238     host ceph-node1
0 0.97119         osd.0            up  1.00000          1.00000
1 0.97119         osd.1            up  1.00000          1.00000
-3 1.94238     host ceph-node2
2 0.97119         osd.2            up  1.00000          1.00000
3 0.97119         osd.3            up  1.00000          1.00000
-4 1.94238     host ceph-node3
4 0.97119         osd.4            up  1.00000          1.00000
5 0.97119         osd.5            up  1.00000          1.00000

Issue 1:
[ceph-1][ERROR ] admin_socket: exception getting command descriptions: [Errno 2] No such file or directory
[ceph_deploy.mon][WARNIN] mon.ceph-1 monitor is not yet in quorum, tries left: 5
[ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying
Solution:
Add the hostname to /etc/hosts so it resolves on every node.
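Concretely, the name ceph-deploy uses for a monitor must match the node's own hostname and resolve via /etc/hosts. A quick check on the affected node (a sketch, using ceph-node1 as the example):

hostname                               # should print ceph-node1
getent hosts ceph-node1                # should return the address from /etc/hosts (10.0.70.40)
hostnamectl set-hostname ceph-node1    # fix the hostname if it does not match, then re-run ceph-deploy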