Integrating MySQL Exporter with Prometheus


This guide assumes MySQL is already installed.

1. Download the MySQL Exporter


https://prometheus.io/download/#mysqld_exporter
https://github.com/prometheus/mysqld_exporter
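As a sketch, the release can be fetched and unpacked under /opt; the 0.14.0 version and linux-amd64 platform here are assumptions that match the ExecStart path used later, so check the releases page for what fits your host.

```shell
# Sketch only: VERSION and the /opt install path are assumptions, not requirements.
VERSION="0.14.0"
ARCHIVE="mysqld_exporter-${VERSION}.linux-amd64.tar.gz"
URL="https://github.com/prometheus/mysqld_exporter/releases/download/v${VERSION}/${ARCHIVE}"

wget -q "$URL"                  || echo "download failed; fetch ${ARCHIVE} manually"
tar -xzf "$ARCHIVE" 2>/dev/null || echo "extract failed"
sudo mv "mysqld_exporter-${VERSION}.linux-amd64" /opt/ 2>/dev/null || echo "move to /opt failed"
```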

2. Create a MySQL account


The account created here is used by mysqld_exporter to connect to MySQL.
CREATE USER 'test'@'localhost' IDENTIFIED BY 'test123' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'test'@'localhost';
FLUSH PRIVILEGES;
EXIT;
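To confirm the grants took effect, the account can be inspected from the mysql client. This is a hypothetical check, assuming a local server reachable as root; adjust credentials to your setup.

```shell
# Hypothetical verification; the account name matches the CREATE USER above.
CHECK_SQL="SHOW GRANTS FOR 'test'@'localhost';"
mysql -u root -p -e "$CHECK_SQL" || echo "could not connect to MySQL to verify grants"
```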

3. Configure the [client] option group in my.cnf

  • /etc/mysql/my.cnf
    [client]
    socket=/var/run/mysqld/mysqld.sock
    user=test
    password=test123
  • The [client] option group configures connections for MySQL client programs, including third-party tools such as the exporter.
  • [client] group parameters in detail:
    https://dev.mysql.com/doc/refman/8.0/en/mysql-command-options.html
  • About MySQL option files:
    https://dev.mysql.com/doc/refman/8.0/en/option-files.html
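Note that credentials placed in the shared /etc/mysql/my.cnf are readable by every local program that parses that file. A common alternative (an assumption, not part of the original setup) is a dedicated option file owned by the exporter's user:

```ini
; /etc/.mysqld_exporter.cnf  (hypothetical path; chmod 600, owner prometheus)
[client]
socket=/var/run/mysqld/mysqld.sock
user=test
password=test123
```

The file would then be passed to the exporter via --config.my-cnf=/etc/.mysqld_exporter.cnf instead of /etc/mysql/my.cnf.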

4. Register the mysqld_exporter service


The ExecStart parameter defines the flags the exporter runs with.
What each flag does is listed in the flag reference linked below.
Most of the settings below are defaults, tuned to collect as much information as possible.
The my.cnf path and the web listen port are set via flags.
    [Unit]
    Description=Prometheus MySQL Exporter
    After=network.target

    [Service]
    Type=simple
    User=prometheus
    Group=prometheus
    Restart=always
    ExecStart=/opt/mysqld_exporter-0.14.0.linux-amd64/mysqld_exporter \
    --config.my-cnf=/etc/mysql/my.cnf \
    --web.listen-address=0.0.0.0:9104 \
    --collect.engine_tokudb_status \
    --collect.global_status \
    --collect.global_variables \
    --collect.info_schema.clientstats \
    --collect.info_schema.innodb_metrics \
    --collect.info_schema.innodb_tablespaces \
    --collect.info_schema.innodb_cmp \
    --collect.info_schema.innodb_cmpmem \
    --collect.info_schema.processlist \
    --collect.info_schema.processlist.min_time=0 \
    --collect.info_schema.query_response_time \
    --collect.info_schema.replica_host \
    --collect.info_schema.tables \
    --collect.info_schema.tables.databases='*' \
    --collect.info_schema.tablestats \
    --collect.info_schema.schemastats \
    --collect.info_schema.userstats \
    --collect.mysql.user \
    --collect.perf_schema.eventsstatements \
    --collect.perf_schema.eventsstatements.digest_text_limit=120 \
    --collect.perf_schema.eventsstatements.limit=250 \
    --collect.perf_schema.eventsstatements.timelimit=86400 \
    --collect.perf_schema.eventsstatementssum \
    --collect.perf_schema.eventswaits \
    --collect.perf_schema.file_events \
    --collect.perf_schema.file_instances \
    --collect.perf_schema.file_instances.remove_prefix=false \
    --collect.perf_schema.indexiowaits \
    --collect.perf_schema.memory_events \
    --collect.perf_schema.memory_events.remove_prefix=false \
    --collect.perf_schema.tableiowaits \
    --collect.perf_schema.tablelocks \
    --collect.perf_schema.replication_group_members \
    --collect.perf_schema.replication_group_member_stats \
    --collect.perf_schema.replication_applier_status_by_worker \
    --collect.slave_status \
    --collect.slave_hosts \
    --collect.heartbeat \
    --collect.heartbeat.database=heartbeat \
    --collect.heartbeat.table=heartbeat \
    --collect.heartbeat.utc
    
    
    [Install]
    WantedBy=multi-user.target
    MySQL Exporter flag reference - GitHub README:
    https://github.com/prometheus/mysqld_exporter
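Assuming the unit above is saved as /etc/systemd/system/mysqld_exporter.service (a conventional path, not stated in the original), the service can be enabled and smoke-tested:

```shell
# Path is an assumption; use wherever you saved the unit file.
UNIT="/etc/systemd/system/mysqld_exporter.service"

sudo systemctl daemon-reload                || echo "daemon-reload failed"
sudo systemctl enable --now mysqld_exporter || echo "enable/start failed"

# The exporter should answer on the port given to --web.listen-address.
curl -s http://localhost:9104/metrics | head -n 5
```

A `mysql_up 1` sample in the metrics output indicates the exporter could reach MySQL with the [client] credentials.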

5. Register monitoring targets in prometheus.yml

    # my global config
    global:
      scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
      evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
      # scrape_timeout is set to the global default (10s).
    
    # Alertmanager configuration
    alerting:
      alertmanagers:
        - static_configs:
            - targets:
              # - alertmanager:9093
    
    # Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
    rule_files:
      # - "first_rules.yml"
      # - "second_rules.yml"
    
    # A scrape configuration containing exactly one endpoint to scrape:
    # Here it's Prometheus itself.
    scrape_configs:
      # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
      - job_name: "node-prometheus"
    
        # metrics_path defaults to '/metrics'
        # scheme defaults to 'http'.
    
        static_configs:
          - targets: ["192.168.0.1:9090"]
            labels:
              group: 'prometheus'
      - job_name: 'mysql-dbaas hansu test'
        scrape_interval: 5s
        static_configs:
          - targets: ["192.168.0.2:9104"]
            labels:
              group: 'mysql'
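After editing, the file can be validated with promtool (shipped in the Prometheus release) and the server reloaded without a full restart; the config path and the `prometheus` service name here are assumptions.

```shell
# Adjust to your install layout.
CONFIG="/etc/prometheus/prometheus.yml"

promtool check config "$CONFIG"  || echo "promtool missing or config invalid"
sudo systemctl reload prometheus || echo "reload failed; restart the service instead"
```

Prometheus also reloads its configuration on SIGHUP, or via an HTTP POST to /-/reload when started with --web.enable-lifecycle.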