Spark Streaming integrated with Kafka: parameter setup, with message offsets written to Redis


Kafka is consumed by Spark as an advanced (direct, receiver-less) data source; the offsets are maintained by the application itself and written to Redis, through a Redis connection pool. The following dependencies need to be imported:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.2.1</version>
</dependency>
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.9.0</version>
</dependency>
Example:

import java.{lang, util}

import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.log4j.{Level, Logger}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.{Seconds, StreamingContext}
import redis.clients.jedis.Jedis

object WCKafkaRedisApp {

  // Suppress noisy INFO logging from Spark
  Logger.getLogger("org").setLevel(Level.WARN)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("xx")
      // Throttle: max records per second pulled from each Kafka partition
      .set("spark.streaming.kafka.maxRatePerPartition", "100")
      // Use Kryo serialization
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // Compress serialized RDDs to save memory
      .set("spark.rdd.compress", "true")
    val ssc = new StreamingContext(conf, Seconds(2))

    // Consumer group and Kafka connection parameters
    val groupId = "test002"
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "hdp01:9092,hdp02:9092,hdp03:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> groupId,
      "auto.offset.reset" -> "earliest",
      "enable.auto.commit" -> (false: lang.Boolean)
    )
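    // With enable.auto.commit = false, Kafka will not commit offsets for this
    // group on its own; the job below persists them to Redis instead.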
    val topics = Array("test")

    // Read any previously saved offsets for this group from Redis
    var formdbOffset: Map[TopicPartition, Long] = JedisOffset(groupId)

    // Create the Kafka direct stream: subscribe from scratch if Redis holds
    // no offsets yet, otherwise resume from the saved offsets
    val stream: InputDStream[ConsumerRecord[String, String]] = if (formdbOffset.size == 0) {
      KafkaUtils.createDirectStream[String, String](
        ssc,
        LocationStrategies.PreferConsistent,
        ConsumerStrategies.Subscribe[String, String](topics, kafkaParams)
      )
    } else {
      KafkaUtils.createDirectStream(
        ssc,
        LocationStrategies.PreferConsistent,
        ConsumerStrategies.Assign[String, String](formdbOffset.keys, kafkaParams, formdbOffset)
      )
    }


    // Process each micro-batch.
    stream.foreachRDD({
      rdd =>
        // Capture this batch's offset ranges from the direct stream's RDD,
        // before any transformation discards the Kafka metadata
        val offsetRange: Array[OffsetRange] = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

        // Word count; accumulate the counts into a Redis hash, one connection per partition
        rdd.flatMap(_.value().split(" ")).map((_, 1)).reduceByKey(_ + _).foreachPartition({
          it =>
            val jedis = RedisUtils.getJedis
            it.foreach({
              v =>
                jedis.hincrBy("wordcount", v._1, v._2.toLong)
            })
            jedis.close()
        })

        // Persist this batch's offsets back to Redis
        val jedis: Jedis = RedisUtils.getJedis
        for (or <- offsetRange) {
          // Field format "topic-partition" matches what JedisOffset splits on when reading
          jedis.hset(groupId, or.topic + "-" + or.partition, or.untilOffset.toString)
        }
        jedis.close()
    })

    ssc.start()
    ssc.awaitTermination()
  }
}
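Note that the counts (hincrBy) and the offsets (hset) go to Redis in two separate, non-atomic steps, so a crash between them will replay the batch on restart; the pipeline is therefore at-least-once, and counts can be slightly inflated in that window.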

The JedisOffset helper reads the saved offsets back from Redis:
import java.util

import org.apache.kafka.common.TopicPartition

object JedisOffset {


  def apply(groupId: String): Map[TopicPartition, Long] = {
    var formdbOffset = Map[TopicPartition, Long]()
    val jedis1 = RedisUtils.getJedis
    val topicPartitionOffset: util.Map[String, String] = jedis1.hgetAll(groupId)
    import scala.collection.JavaConversions._
    val topicPartitionOffsetlist: List[(String, String)] = topicPartitionOffset.toList
    for (topicPL <- topicPartitionOffsetlist) {
      // Fields were stored as "topic-partition"; split them back apart
      val split = topicPL._1.split("-")
      formdbOffset += (new TopicPartition(split(0), split(1).toInt) -> topicPL._2.toLong)
    }
    formdbOffset
  }
}
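To sanity-check what ends up stored, the hash can be read back directly. A small sketch; the host/port and the values shown in the comment are assumptions for illustration:

import redis.clients.jedis.Jedis

object OffsetInspect {
  def main(args: Array[String]): Unit = {
    // Assumption: Redis is reachable on hdp01:6379
    val jedis = new Jedis("hdp01", 6379)
    // One hash per consumer group, fields like "test-0" -> "100" (values illustrative)
    println(jedis.hgetAll("test002"))
    jedis.close()
  }
}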

....There is also the creation of the Redis connection pool (the RedisUtils object used above).
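RedisUtils itself is not shown in the post; here is a minimal sketch of what it could look like, assuming a single Redis node on hdp01:6379 and placeholder pool sizes:

import redis.clients.jedis.{Jedis, JedisPool, JedisPoolConfig}

object RedisUtils {
  private val config = new JedisPoolConfig()
  config.setMaxTotal(20)   // assumption: at most 20 pooled connections
  config.setMaxIdle(10)    // assumption: keep at most 10 idle connections
  // Assumption: Redis runs on hdp01:6379
  private lazy val pool = new JedisPool(config, "hdp01", 6379)

  def getJedis: Jedis = pool.getResource
}

Because each Jedis instance comes from a pool, the jedis.close() calls in the job return connections to the pool rather than tearing them down, so grabbing a connection per partition stays cheap.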