
Apache Kafka: lifecycle of the 0.8 high-level consumer API


I can't find a description of the high-level consumer's lifecycle. I'm on 0.8.2.2 and can't use the modern consumer from kafka-clients. Here is my code:

def consume(numberOfEvents: Int, await: Duration = 100.millis): List[MessageEnvelope] = {
    val consumerProperties = new Properties()
    consumerProperties.put("zookeeper.connect", kafkaConfig.zooKeeperConnectString)
    consumerProperties.put("group.id", consumerGroup)
    consumerProperties.put("auto.offset.reset", "smallest")

    val consumer = Consumer.create(new ConsumerConfig(consumerProperties))

    try {
      val messageStreams = consumer.createMessageStreams(
        Predef.Map(kafkaConfig.topic -> 1),
        new DefaultDecoder,
        new MessageEnvelopeDecoder)

      val receiveMessageFuture = Future[List[MessageEnvelope]] {
        messageStreams(kafkaConfig.topic)
          .flatMap(stream => stream.take(numberOfEvents).map(_.message()))
      }

      Await.result(receiveMessageFuture, await)
    } finally {
      consumer.shutdown()
    }
  }
This is unclear to me: should I shut the consumer down after every message retrieval, or can I keep the instance and reuse it for subsequent fetches? I assume reusing the instance is the right approach, but I can't find any articles or best practices on it.

I've tried reusing the consumer and/or the message streams. It doesn't work for me, and I can't figure out why.

If I try to reuse messageStreams, I get an exception:

2017-04-17_19:57:57.088 ERROR MessageEnvelopeConsumer - Error while awaiting for messages java.lang.IllegalStateException: Iterator is in failed state
java.lang.IllegalStateException: Iterator is in failed state
    at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:54)
    at scala.collection.IterableLike$class.take(IterableLike.scala:134)
    at kafka.consumer.KafkaStream.take(KafkaStream.scala:25)
It happens here:

def consume(numberOfEvents: Int, await: Duration = 100.millis): List[MessageEnvelope] = {
    try {
      val receiveMessageFuture = Future[List[MessageEnvelope]] {
        messageStreams(kafkaConfig.topic)
          .flatMap(stream => stream.take(numberOfEvents).map(_.message()))
      }
      Try(Await.result(receiveMessageFuture, await)) match {
        case Success(result) => result
        case Failure(_: TimeoutException) => List.empty
        case Failure(e) =>
          // ===> never got any message from topic
          logger.error(s"Error while awaiting for messages ${e.getClass.getName}: ${e.getMessage}", e)
          List.empty
      }
    } catch {
      case e: Exception =>
        logger.warn(s"Error while consuming messages", e)
        List.empty
    }
  }
Then I tried creating messageStreams on every call:

No luck:

2017-04-17_20:02:44.236 WARN  MessageEnvelopeConsumer - Error while consuming messages
kafka.common.MessageStreamsExistException: ZookeeperConsumerConnector can create message streams at most once
    at kafka.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:151)
    at MessageEnvelopeConsumer.consume(MessageEnvelopeConsumer.scala:47)
It happens here:

def consume(numberOfEvents: Int, await: Duration = 100.millis): List[MessageEnvelope] = {
    try {

      val messageStreams = consumer.createMessageStreams(
        Predef.Map(kafkaConfig.topic -> 1),
        new DefaultDecoder,
        new MessageEnvelopeDecoder)

      val receiveMessageFuture = Future[List[MessageEnvelope]] {
        messageStreams(kafkaConfig.topic)
          .flatMap(stream => stream.take(numberOfEvents).map(_.message()))
      }
      Try(Await.result(receiveMessageFuture, await)) match {
        case Success(result) => result
        case Failure(_: TimeoutException) => List.empty
        case Failure(e) =>
          logger.error(s"Error while awaiting for messages ${e.getClass.getName}: ${e.getMessage}", e)
          List.empty
      }
    } catch {
      case e: Exception =>
        // ===> now exception raised here
        logger.warn(s"Error while consuming messages", e)
        List.empty
    }
  }
UPD

I switched to an iterator-based approach. It looks like this:

// consumerProperties.put("consumer.timeout.ms", "100")    

private lazy val consumer: ConsumerConnector = Consumer.create(new ConsumerConfig(consumerProperties))

  private lazy val messageStreams: Seq[KafkaStream[Array[Byte], MessageEnvelope]] =
    consumer.createMessageStreamsByFilter(Whitelist(kafkaConfig.topic), 1, new DefaultDecoder, new MessageEnvelopeDecoder)


  private lazy val iterator: ConsumerIterator[Array[Byte], MessageEnvelope] = {
    val stream = messageStreams.head
    stream.iterator()
  }

  def consume(): List[MessageEnvelope] = {
    try {
      if (iterator.hasNext) {
        val fromKafka: MessageAndMetadata[Array[Byte], MessageEnvelope] = iterator.next
        List(fromKafka.message())
      } else {
        List.empty
      }

    } catch {
      case _: ConsumerTimeoutException =>
        List.empty

      case e: Exception =>
        logger.warn(s"Error while consuming messages", e)
        List.empty
    }
  }

Now I'm trying to figure out whether it automatically commits offsets to ZK…
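As far as I know, the old high-level consumer auto-commits offsets to ZooKeeper by default, controlled by the `auto.commit.enable` and `auto.commit.interval.ms` properties; manual commits go through `ConsumerConnector.commitOffsets()`. A minimal sketch making this explicit (property values here are illustrative, not recommendations):

```scala
import java.util.Properties
import kafka.consumer.{Consumer, ConsumerConfig}

val props = new Properties()
props.put("zookeeper.connect", "localhost:2181") // assumption: local ZooKeeper
props.put("group.id", "my-group")                // hypothetical group name
// Auto-commit to ZK is on by default in the 0.8 high-level consumer;
// these lines just make the behaviour explicit.
props.put("auto.commit.enable", "true")
props.put("auto.commit.interval.ms", "10000")    // commit every 10 seconds

val connector = Consumer.create(new ConsumerConfig(props))
// To control commits manually instead, set auto.commit.enable=false and call:
// connector.commitOffsets()
```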

Constant shutdowns cause unwanted consumer group rebalances, which have a significant performance impact. See this article for best practices:
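One pattern consistent with that advice is to create the connector once per process and shut it down only on application exit, e.g. via a shutdown hook. A minimal sketch, reusing the `consumerProperties`, `kafkaConfig`, and `MessageEnvelopeDecoder` from the question:

```scala
import kafka.consumer.{Consumer, ConsumerConfig, ConsumerConnector, Whitelist}
import kafka.serializer.DefaultDecoder

object LongLivedConsumer {
  // Created lazily, exactly once, on first use.
  lazy val connector: ConsumerConnector =
    Consumer.create(new ConsumerConfig(consumerProperties))

  // createMessageStreams* may be called at most once per connector,
  // so the streams must be memoized as well.
  lazy val streams =
    connector.createMessageStreamsByFilter(
      Whitelist(kafkaConfig.topic), 1,
      new DefaultDecoder, new MessageEnvelopeDecoder)

  // Shut down only when the JVM exits, never after each fetch,
  // to avoid triggering a group rebalance per call.
  sys.addShutdownHook(connector.shutdown())
}
```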

My answer is in the latest question update: the iterator-based approach worked as expected for me.

Hi, I've tried to implement your approach and ran into other problems; could you check my update? So far it only works correctly in one scenario: create the consumer and messageStreams, fetch a message, and close the consumer.