Spark Streaming integrated with Flume (Flume-based push mode) and MySQL: saving data to MySQL in real time


The cluster layout is as follows:
192.168.58.11	spark01
192.168.58.12	spark02
192.168.58.13	spark03
spark: spark-2.1.0-bin-hadoop2.7
flume: apache-flume-1.7.0-bin

The Flume configuration (a4.conf) is as follows:
# Start the agent with:
# bin/flume-ng agent -n a4 -f conf/a4.conf -c conf -Dflume.root.logger=INFO,console

# Name the agent's source, channel, and sink
a4.sources = r1
a4.channels = c1
a4.sinks = k1

# Configure the source: watch a spooling directory.
# Files dropped into this directory are ingested and then renamed with a .COMPLETED suffix.
a4.sources.r1.type = spooldir
a4.sources.r1.spoolDir = /opt/kevin/log

# Configure the channel
a4.channels.c1.type = memory
a4.channels.c1.capacity = 10000
a4.channels.c1.transactionCapacity = 100

# Configure the sink: push events over Avro to the Spark Streaming receiver.
# hostname and port must match the arguments passed to FlumeUtils.createStream
# in the program below (i.e. the machine the Spark Streaming job runs on).
a4.sinks.k1.type = avro
a4.sinks.k1.hostname = 192.168.58.11
a4.sinks.k1.port = 1234

# Bind the source and the sink to the channel
a4.sources.r1.channels = c1
a4.sinks.k1.channel = c1

The Spark Streaming program:
package com.kk.sparkstreaming.flume

import org.apache.spark.SparkConf
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.Seconds
import org.apache.spark.storage.StorageLevel
import org.apache.log4j.Logger
import org.apache.log4j.Level
import org.apache.spark.streaming.flume.FlumeUtils
import java.sql.Connection
import java.sql.PreparedStatement
import java.sql.DriverManager

object FlumePush {

  def main(args: Array[String]): Unit = {
    // Reduce log noise from Spark and Jetty
    Logger.getLogger("org.apache.spark").setLevel(Level.ERROR)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)

    // Create the StreamingContext: StreamingContext(conf: SparkConf, batchDuration: Duration)
    val sparkConf = new SparkConf().setAppName("TestFlume")
    val streamConf = new StreamingContext(sparkConf, Seconds(3)) // one batch every 3 seconds

    // Push mode: Spark Streaming runs an Avro receiver that Flume's avro sink pushes to.
    // Host and port must match a4.sinks.k1.hostname / a4.sinks.k1.port in the Flume config.
    val flumeStream = FlumeUtils.createStream(streamConf, "192.168.58.11", 1234, StorageLevel.DISK_ONLY)

    // Each element is a SparkFlumeEvent; getBody returns the raw payload bytes.
    val data = flumeStream.map { e =>
      new String(e.event.getBody.array())
    }

    val datas = data.map(line => {
      // Sample record: 1,201.105.101.108,http://mystore.jsp/?productid=1,2017020029,2,1
      val fields: Array[String] = line.split(",")
      val ip = fields(1) // the second field is the client IP
      (ip, 1)
    })

    datas.print() // print a sample of each batch to the console for debugging

    // Write each batch to MySQL. The Connection and PreparedStatement are created
    // inside foreachPartition: JDBC objects are not serializable, so creating them
    // on the driver and referencing them in the closure would fail with the
    // serialization problem mentioned in the notes at the end.
    datas.foreachRDD(rdd => {
      rdd.foreachPartition(partition => {
        Class.forName("com.mysql.jdbc.Driver")
        var conn: Connection = null
        var ps: PreparedStatement = null
        try {
          conn = DriverManager.getConnection("jdbc:mysql://192.168.58.11:3306/storm?useUnicode=true&characterEncoding=utf8", "root", "kevin")
          ps = conn.prepareStatement("insert into result values(?,?)")
          partition.foreach(s => {
            ps.setString(1, s._1)
            ps.setInt(2, s._2)
            ps.executeUpdate()
          })
        } catch {
          case t: Throwable => t.printStackTrace()
        } finally {
          if (ps != null) ps.close()
          if (conn != null) conn.close()
        }
      })
    })
    streamConf.start()
    streamConf.awaitTermination()
  }
}
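
The job writes into a two-column table named result in the storm database; the insert statement above only fixes the column count and types (a string and an integer). Below is a minimal one-off sketch to create the table, where the column names ip and cnt are assumptions, not from the original:

import java.sql.DriverManager

object CreateResultTable {
  def main(args: Array[String]): Unit = {
    // Same connection settings as the streaming job above.
    val conn = DriverManager.getConnection(
      "jdbc:mysql://192.168.58.11:3306/storm?useUnicode=true&characterEncoding=utf8",
      "root", "kevin")
    try {
      // Column names are assumed; the streaming job binds a String and an Int.
      conn.createStatement().executeUpdate(
        "CREATE TABLE IF NOT EXISTS result (ip VARCHAR(64), cnt INT)")
    } finally {
      conn.close()
    }
  }
}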

The pom.xml configuration:

<properties>
	<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
	<spark.version>2.2.1</spark.version>
	<scala.version>2.11.1</scala.version>
</properties>

<dependencies>
	<!-- Scala -->
	<dependency>
		<groupId>org.scala-lang</groupId>
		<artifactId>scala-library</artifactId>
		<version>${scala.version}</version>
	</dependency>
	<dependency>
		<groupId>org.scala-lang</groupId>
		<artifactId>scala-compiler</artifactId>
		<version>${scala.version}</version>
	</dependency>
	<dependency>
		<groupId>org.scala-lang</groupId>
		<artifactId>scala-reflect</artifactId>
		<version>${scala.version}</version>
	</dependency>
	<!-- Spark -->
	<dependency>
		<groupId>org.apache.spark</groupId>
		<artifactId>spark-core_2.11</artifactId>
		<version>${spark.version}</version>
	</dependency>
	<dependency>
		<groupId>org.apache.spark</groupId>
		<artifactId>spark-streaming_2.11</artifactId>
		<version>${spark.version}</version>
	</dependency>
	<dependency>
		<groupId>org.apache.spark</groupId>
		<artifactId>spark-sql_2.11</artifactId>
		<version>${spark.version}</version>
	</dependency>
	<!-- Spark Streaming Flume connector -->
	<dependency>
		<groupId>org.apache.spark</groupId>
		<artifactId>spark-streaming-flume_2.11</artifactId>
		<version>2.2.2</version>
	</dependency>
</dependencies>

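The JDBC writes also need the MySQL driver on the classpath (the program loads com.mysql.jdbc.Driver, i.e. Connector/J 5.x). A dependency along these lines should work; the exact version below is an assumption, not from the original:

<dependency>
	<groupId>mysql</groupId>
	<artifactId>mysql-connector-java</artifactId>
	<version>5.1.47</version>
</dependency>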

Required jars: to run on the cluster, spark-streaming-flume_2.10-2.1.0.jar must be copied into the jars directory of the Spark installation on every machine in the cluster; the MySQL driver jar is needed as well.

Test procedure:
1. Start the Spark Streaming program first.
2. Start Flume.
3. Copy a log file into /opt/kevin/log.

Notes: watch out for the serialization problem (JDBC objects must be created inside foreachPartition, as in the code above) and for the startup order (in push mode the Spark Streaming receiver must already be listening on 192.168.58.11:1234 before Flume's avro sink starts pushing).
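
For reference, a minimal spark-submit invocation might look like the following sketch; the application jar name and the MySQL driver jar name are assumptions, not from the original:

bin/spark-submit \
  --class com.kk.sparkstreaming.flume.FlumePush \
  --master spark://spark01:7077 \
  --jars spark-streaming-flume_2.10-2.1.0.jar,mysql-connector-java-5.1.47.jar \
  sparkstreaming-flume.jar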