Ansible + Gluster on Raspberry Pi

  • Ansible Advent Calendar Part 2, entry for 12/22

Purpose

  • Build Gluster Server, an open-source (OSS) distributed file system, on two Raspberry Pis and poke at it with Ansible

Big Warning!

  • Be very careful before running any of this against an existing system!
  • If something breaks, we cannot be held responsible!

System Configuration

Gluster Server

Hardware

  • Raspberry Pi Model B+ x2
  • 1TB USB HDD x2 per Raspberry Pi

System

No.  Hostname   IP Address
1    gluster01  192.168.100.81
2    gluster02  192.168.100.82

Software

  • Gluster Server
$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 9 (stretch)"
NAME="Raspbian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
VERSION_CODENAME=stretch
ID=raspbian
ID_LIKE=debian
HOME_URL="http://www.raspbian.org/"
SUPPORT_URL="http://www.raspbian.org/RaspbianForums"
BUG_REPORT_URL="http://www.raspbian.org/RaspbianBugs"
$ sudo gluster --version
glusterfs 3.8.8 built on Jan 11 2017 14:07:11
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
  • /data/brick1 and /data/brick2 are mounted as xfs (an Ansible sketch for reproducing these mounts follows the output below)
$ mount | grep brick
/dev/sda on /data/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/sdb on /data/brick2 type xfs (rw,relatime,attr2,inode64,noquota)
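  • A minimal sketch of reproducing those mounts with Ansible's stock filesystem and mount modules; this is an assumption about the setup (the disks may well have been prepared by hand), with device names and paths taken from the mount output above:
---
- hosts: gluster
  user: hirofumi
  become: yes
  tasks:
    - name: Create an xfs filesystem on each USB disk (no-op if one already exists)
      filesystem:
        fstype: xfs
        dev: "{{ item }}"
      loop:
        - /dev/sda
        - /dev/sdb
    - name: Mount the bricks and persist them to /etc/fstab
      mount:
        path: "{{ item.path }}"
        src: "{{ item.dev }}"
        fstype: xfs
        state: mounted
      loop:
        - { dev: /dev/sda, path: /data/brick1 }
        - { dev: /dev/sdb, path: /data/brick2 }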
  • Ansible execution environment
$ ansible --version
ansible 2.9.2
  config file = None
  configured module search path = ['/home/hirofumi/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/hirofumi/.local/lib/python3.5/site-packages/ansible
  executable location = /home/hirofumi/.local/bin/ansible
  python version = 3.5.2 (default, Oct  8 2019, 13:06:37) [GCC 5.4.0 20160609]

Reference

What is Gluster, anyway? (a quick recap)

  • https://ja.wikipedia.org/wiki/GlusterFS

    • Broadly speaking, it is a distributed file system
    • It was originally developed as open source by Gluster Inc.
    • Since the acquisition in 2011, it has been developed by Red Hat
  • Its distinctive trait is that it sits on top of local file systems (ext4, xfs, etc.) and is accessed through FUSE (see the mount sketch after this list)

  • In other words, files can be browsed directly on each Gluster server's OS without going through a dedicated client
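
  • As an illustration, mounting a volume through the FUSE client can itself be driven from Ansible's stock mount module. A minimal sketch, assuming a volume named gv0 (the one used later in this article), a client mount point of /mnt/gv0, and that glusterfs-client is installed:
    - name: Mount gv0 via the GlusterFS FUSE client (mount point is an assumption)
      mount:
        path: /mnt/gv0
        src: gluster01:/gv0
        fstype: glusterfs
        state: mounted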

Now, let's poke at it

At the time of writing, the following three modules are available.

  • gluster_heal_info – Gather information on self-heal or rebalance status
  • gluster_peer – Attach/Detach peers to/from the cluster
  • gluster_volume – Manage GlusterFS volumes

Let's try each of them.

The gluster_heal_info module

  • Let's run it against the following Distributed-Replicate volume
# gluster volume info

Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 78f9a366-e31b-4724-9c6e-c3d765b44663
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gluster01:/data/brick1/gv0
Brick2: gluster02:/data/brick1/gv0
Brick3: gluster01:/data/brick2/gv0
Brick4: gluster02:/data/brick2/gv0
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
# gluster peer status
Number of Peers: 1

Hostname: gluster02
Uuid: 124bd104-b6af-4f37-8834-1e157ddb712d
State: Peer in Cluster (Connected)
Other names:
192.168.100.82
  • Inventory File:
$ cat gluster-host
[gluster]
192.168.100.81
  • Playbook:
$ cat gluster-info.yml
---
- hosts: gluster
  user: hirofumi
  become: yes
  tasks:
    - name: Gather self-heal facts about all gluster hosts in the cluster
      gluster_heal_info:
        name: gv0
        status_filter: self-heal
      register: self_heal_status
    - debug:
        var: self_heal_status
  • Result:
$ ansible-playbook -i gluster-host gluster-info.yml

PLAY [gluster] ****************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************
ok: [192.168.100.81]

TASK [Gather self-heal facts about all gluster hosts in the cluster] **********************************************************
ok: [192.168.100.81]

TASK [debug] ******************************************************************************************************************
ok: [192.168.100.81] => {
    "self_heal_status": {
        "changed": false,
        "failed": false,
        "glusterfs": {
            "heal_info": [
                {
                    "brick": " gluster01:/data/brick1/gv0",
                    "no_of_entries": "0",
                    "status": "Connected"
                },
                {
                    "brick": " gluster02:/data/brick1/gv0",
                    "no_of_entries": "0",
                    "status": "Connected"
                },
                {
                    "brick": " gluster01:/data/brick2/gv0",
                    "no_of_entries": "0",
                    "status": "Connected"
                },
                {
                    "brick": " gluster02:/data/brick2/gv0",
                    "no_of_entries": "-",
                    "status": "Connected"
                }
            ],
            "rebalance": "",
            "status_filter": "self-heal",
            "volume": "gv0"
        }
    }
}

PLAY RECAP ********************************************************************************************************************
192.168.100.81             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
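
  • The module description above also mentions rebalance; switching status_filter gathers rebalance status instead. A minimal task variant (not run here, since no rebalance was in progress):
    - name: Gather rebalance status about all gluster hosts in the cluster
      gluster_heal_info:
        name: gv0
        status_filter: rebalance
      register: rebalance_status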

The gluster_volume module
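
  • Let's have Ansible create a new replica-2 volume named testgv001 across both bricks on both nodes: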

$ cat gluster-create-volumes.yml
---
- hosts: gluster
  user: hirofumi
  become: yes
  tasks:
    - name: create gluster volume with multiple bricks
      gluster_volume:
        state: present
        name: testgv001
        bricks: /data/brick1/testgv001,/data/brick2/testgv001
        replicas: 2
        cluster:
          - 192.168.100.81
          - 192.168.100.82
        force: true
      run_once: true
$ ansible-playbook -i gluster-host gluster-create-volumes.yml

PLAY [gluster] ****************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************
ok: [192.168.100.81]

TASK [create gluster volume with multiple bricks] *****************************************************************************
changed: [192.168.100.81]

PLAY RECAP ********************************************************************************************************************
192.168.100.81             : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
# gluster volume info testgv001

Volume Name: testgv001
Type: Distributed-Replicate
Volume ID: d96f1351-5bf4-40d0-b592-423d07c4cf67
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.100.81:/data/brick1/testgv001
Brick2: 192.168.100.82:/data/brick1/testgv001
Brick3: 192.168.100.81:/data/brick2/testgv001
Brick4: 192.168.100.82:/data/brick2/testgv001
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
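
  • To clean up, the same module can stop and then delete the test volume. A minimal sketch (destructive, so double-check the volume name; note that deleting a volume leaves the data in the brick directories on disk):
    - name: Stop the test volume
      gluster_volume:
        state: stopped
        name: testgv001
      run_once: true
    - name: Delete the test volume
      gluster_volume:
        state: absent
        name: testgv001
      run_once: true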

The gluster_peer module
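
  • Let's attach 192.168.100.82 to the trusted storage pool: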

$ cat gluster-peer-attach.yml
---
- hosts: gluster
  user: hirofumi
  become: yes
  tasks:
    - name: Create a trusted storage pool
      gluster_peer:
        state: present
        nodes:
          - 192.168.100.82
      register: peer_status
    - debug:
        var: peer_status
  • No change, since this runs against an already-attached peer
$ ansible-playbook -i gluster-host gluster-peer-attach.yml

PLAY [gluster] ****************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************
ok: [192.168.100.81]

TASK [Create a trusted storage pool] ******************************************************************************************
ok: [192.168.100.81]

TASK [debug] ******************************************************************************************************************
ok: [192.168.100.81] => {
    "peer_status": {
        "changed": false,
        "failed": false,
        "msg": ""
    }
}

PLAY RECAP ********************************************************************************************************************
192.168.100.81             : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

That's all

  • It would have been nice to add one more node and give the gluster_peer module (and friends) a fuller workout. (Lesson learned.) For reference, a detach sketch follows.
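
  • Detaching a peer is the mirror image of attaching one; a minimal sketch with a hypothetical third node (192.168.100.83 is made up):
    - name: Detach a peer from the trusted storage pool (hypothetical node)
      gluster_peer:
        state: absent
        nodes:
          - 192.168.100.83
      register: detach_status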