Apache Spark: two Spark Streaming jobs with the same consumer group id

Tags: apache-spark, apache-kafka, spark-streaming

I am trying to experiment with consumer groups.

Here is my code snippet:

import java.util.Arrays;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Duration;
import org.apache.spark.streaming.api.java.JavaInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka010.ConsumerStrategies;
import org.apache.spark.streaming.kafka010.KafkaUtils;
import org.apache.spark.streaming.kafka010.LocationStrategies;
import scala.Tuple2;

import static org.apache.kafka.clients.consumer.ConsumerConfig.GROUP_ID_CONFIG;

public final class App {

    private static final int INTERVAL = 5000;

    public static void main(String[] args) throws Exception {

        // Kafka consumer configuration; every instance of this job joins
        // the same consumer group, "mygroup".
        Map<String, Object> kafkaParams = new HashMap<>();
        kafkaParams.put("bootstrap.servers", "xxx:9092");
        kafkaParams.put("key.deserializer", StringDeserializer.class);
        kafkaParams.put("value.deserializer", StringDeserializer.class);
        kafkaParams.put("auto.offset.reset", "earliest");
        kafkaParams.put("enable.auto.commit", true);
        kafkaParams.put("auto.commit.interval.ms", "1000");
        kafkaParams.put("security.protocol", "SASL_PLAINTEXT");
        kafkaParams.put("sasl.kerberos.service.name", "kafka");
        kafkaParams.put("retries", "3");
        kafkaParams.put(GROUP_ID_CONFIG, "mygroup");
        kafkaParams.put("request.timeout.ms", "210000");
        kafkaParams.put("session.timeout.ms", "180000");
        kafkaParams.put("heartbeat.interval.ms", "3000");
        Collection<String> topics = Arrays.asList("venkat4");

        SparkConf conf = new SparkConf();
        JavaStreamingContext ssc = new JavaStreamingContext(conf, new Duration(INTERVAL));

        // Direct stream subscribed to the topic; the topic's partitions are
        // spread over the executors by the PreferConsistent location strategy.
        final JavaInputDStream<ConsumerRecord<String, String>> stream =
                KafkaUtils.createDirectStream(
                        ssc,
                        LocationStrategies.PreferConsistent(),
                        ConsumerStrategies.<String, String>Subscribe(topics, kafkaParams)
                );

        // Print each record as a (key, value) pair.
        stream.mapToPair(
                new PairFunction<ConsumerRecord<String, String>, String, String>() {
                    @Override
                    public Tuple2<String, String> call(ConsumerRecord<String, String> record) {
                        return new Tuple2<>(record.key(), record.value());
                    }
                }).print();

        ssc.start();
        ssc.awaitTermination();
    }
}

When I run two of these Spark Streaming jobs concurrently, it fails with:

Exception in thread "main" java.lang.IllegalStateException: No current assignment for partition venkat4-1
    at org.apache.kafka.clients.consumer.internals.SubscriptionState.assignedState(SubscriptionState.java:251)
    at org.apache.kafka.clients.consumer.internals.SubscriptionState.needOffsetReset(SubscriptionState.java:315)
    at org.apache.kafka.clients.consumer.KafkaConsumer.seekToEnd(KafkaConsumer.java:1170)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.latestOffsets(DirectKafkaInputDStream.scala:197)
    at org.apache.spark.streaming.kafka010.DirectKafkaInputDStream.compute(DirectKafkaInputDStream.scala:214)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1$$anonfun$apply$7.apply(DStream.scala:341)
    at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1$$anonfun$1.apply(DStream.scala:340)
    at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:335)
    at org.apache.spark.streaming.dstream.DStream$$anonfun$getOrCompute$1.apply(DStream.scala:333)
    at scala.Option.orElse(Option.scala:289)

Every time a separate consumer instance is created with the same group id, a rebalance of the partitions takes place. I believe the consumer does not tolerate this rebalance. How should I fix this?

Below is the command used:

SPARK_KAFKA_VERSION=0.10 spark2-submit --num-executors 2 --master yarn --deploy-mode client --files jaas.conf#jaas.conf,hive.keytab#hive.keytab --driver-java-options "-Djava.security.auth.login.config=./jaas.conf" --class Streaming.App --conf "spark.executor.extraJavaOptions=-Djava.security.auth.login.config=./jaas.conf" --conf spark.streaming.kafka.consumer.cache.enabled=false 1-1.0-SNAPSHOT.jar


Right now, all the partitions are consumed by a single consumer. If the data ingestion rate is high, a single consumer may fall behind the speed at which data is ingested.

Adding more consumers to the same consumer group lets you consume data from the topic in parallel and increase the consumption rate. With this approach, Spark Streaming has 1:1 parallelism between Kafka partitions and Spark partitions, and Spark handles the distribution internally, as the sketch just below illustrates.
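
As a rough illustration of that 1:1 mapping, here is a minimal sketch (the broker address is hypothetical; the topic name is taken from the question, and the spark-streaming-kafka-0-10 integration is assumed) that prints the number of Spark partitions in each batch RDD. For a topic with N partitions it should print N:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

object ParallelismCheck {
  def main(args: Array[String]): Unit = {
    val ssc = new StreamingContext(new SparkConf().setAppName("ParallelismCheck"), Seconds(5))
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "broker:9092", // hypothetical broker address
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "mygroup")
    val stream = KafkaUtils.createDirectStream(
      ssc, PreferConsistent, Subscribe[String, String](Array("venkat4"), kafkaParams))
    // One Spark partition per Kafka partition: for a topic with N partitions,
    // each batch RDD reports N partitions here.
    stream.foreachRDD(rdd => println(s"Spark partitions in this batch: ${rdd.getNumPartitions}"))
    ssc.start()
    ssc.awaitTermination()
  }
}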

If you have more consumers than topic partitions, the extra consumers will sit idle and your resources will be underutilized. It is generally recommended to keep the number of consumers less than or equal to the partition count; you can look the count up first, as in the sketch below.
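
A minimal sketch for checking the partition count, assuming kafka-clients 0.11+ on the classpath (the AdminClient API does not exist in the 0.10 client pinned in the question) and a hypothetical broker address:

import java.util.{Arrays, Properties}
import org.apache.kafka.clients.admin.AdminClient

object PartitionCount {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "broker:9092") // hypothetical broker address
    val admin = AdminClient.create(props)
    try {
      // Fetch the topic metadata and report how many partitions it has,
      // i.e. the maximum number of useful consumers in one group.
      val desc = admin.describeTopics(Arrays.asList("venkat4")).all().get().get("venkat4")
      println(s"venkat4 has ${desc.partitions().size()} partitions")
    } finally {
      admin.close()
    }
  }
}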

Kafka rebalances whenever more processes/threads are added, and the cluster can reconfigure itself via ZooKeeper if any consumer or broker fails to send its heartbeat to ZooKeeper.

Kafka also rebalances partition storage whenever a broker fails or a new partition is added to an existing topic. This is Kafka-internal and concerns how data is balanced across partitions within the brokers.

Spark Streaming provides a simple 1:1 parallelism between Kafka partitions and Spark partitions. If you do not provide any partition details via ConsumerStrategies.Assign, it consumes from all partitions of the given topic.

Kafka assigns the partitions of a topic to the consumers in a group, so that each partition is consumed by exactly one consumer in the group. Kafka guarantees that a message is only ever read by a single consumer in the group.

When you start the second Spark Streaming job, another consumer tries to consume the same partitions under the same consumer group id. That is why it throws the error.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent

// sparkContext and sparkJobConfig are assumed to be provided by the
// surrounding application.
val alertTopics = Array("testtopic")

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> sparkJobConfig.kafkaBrokers,
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> sparkJobConfig.kafkaConsumerGroup,
  "auto.offset.reset" -> "latest"
)

val streamContext = new StreamingContext(sparkContext, Seconds(sparkJobConfig.streamBatchInterval.toLong))

// Subscribe consumes from every partition of the given topics.
val streamData = KafkaUtils.createDirectStream(streamContext, PreferConsistent, Subscribe[String, String](alertTopics, kafkaParams))
If you want a job that consumes only specific partitions, assign them explicitly with ConsumerStrategies.Assign:
import org.apache.kafka.common.TopicPartition
import org.apache.spark.streaming.kafka010.ConsumerStrategies

val topicPartitionsList = List(new TopicPartition("topic", 3), new TopicPartition("topic", 4))

val alertReqStream2 = KafkaUtils.createDirectStream(streamContext, PreferConsistent, ConsumerStrategies.Assign[String, String](topicPartitionsList, kafkaParams))
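
Following that pattern, a second job sharing the same group.id can be given the remaining partitions explicitly, so the two jobs never compete for an assignment. A sketch continuing the snippet above (partition numbers 0-2 are hypothetical and assume a five-partition topic; streamContext and kafkaParams come from the snippet above):

// First job: explicitly assigned partitions 0, 1 and 2 of the same topic,
// disjoint from the partitions 3 and 4 assigned to alertReqStream2.
val firstJobPartitions = List(
  new TopicPartition("topic", 0),
  new TopicPartition("topic", 1),
  new TopicPartition("topic", 2))

val alertReqStream1 = KafkaUtils.createDirectStream(streamContext, PreferConsistent,
  ConsumerStrategies.Assign[String, String](firstJobPartitions, kafkaParams))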