Scala: How to catch java.net.ConnectException: Connection refused on an Akka Stream?

Tags: scala, apache-kafka, akka-stream, alpakka

I have a Kafka consumer that looks like this:

import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.StringDeserializer

import scala.util.{Failure, Success}

object App {
  def main(args: Array[String]): Unit = {


    implicit val system = ActorSystem("SAP-SENDER")
    implicit val executor = system.dispatcher
    implicit val materializer = ActorMaterializer()

    val config = system.settings.config.getConfig("akka.kafka.consumer")

    val consumerSettings: ConsumerSettings[String, String] =
      ConsumerSettings(config, new StringDeserializer, new StringDeserializer)
        .withBootstrapServers("localhost:9003")
        .withGroupId("SAPSENDER")
        .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")

    Consumer
      .plainSource(
        consumerSettings,
        Subscriptions.topics("TEST-TOPIC")
      )
      .runWith(Sink.foreach(println))
      .onComplete{
        case Success(_) => println("Good")
        case Failure(ex) =>
          println(s"I am failed ==============> ${ex.getMessage}")
          system.terminate()
      }

  }
} 
The Kafka server is not running, and I simply want the consumer to terminate. Instead, it keeps trying to connect and prints the following messages:

19:03:47.342 [SAP-SENDER-akka.kafka.default-dispatcher-15] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=SAPSENDER] Pausing partitions []
19:03:47.342 [SAP-SENDER-akka.kafka.default-dispatcher-15] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=SAPSENDER] No broker available to send FindCoordinator request
19:03:47.342 [SAP-SENDER-akka.kafka.default-dispatcher-15] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=SAPSENDER] Give up sending metadata request since no node is available
19:03:47.342 [SAP-SENDER-akka.kafka.default-dispatcher-15] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=SAPSENDER] Coordinator discovery failed, refreshing metadata
19:03:47.342 [SAP-SENDER-akka.kafka.default-dispatcher-15] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=SAPSENDER] Give up sending metadata request since no node is available
19:03:47.412 [SAP-SENDER-akka.kafka.default-dispatcher-17] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=SAPSENDER] Pausing partitions []
19:03:47.412 [SAP-SENDER-akka.kafka.default-dispatcher-17] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=SAPSENDER] No broker available to send FindCoordinator request
19:03:47.412 [SAP-SENDER-akka.kafka.default-dispatcher-17] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=SAPSENDER] Give up sending metadata request since no node is available
19:03:47.412 [SAP-SENDER-akka.kafka.default-dispatcher-17] DEBUG org.apache.kafka.clients.consumer.internals.AbstractCoordinator - [Consumer clientId=consumer-1, groupId=SAPSENDER] Coordinator discovery failed, refreshing metadata
19:03:47.412 [SAP-SENDER-akka.kafka.default-dispatcher-17] DEBUG org.apache.kafka.clients.NetworkClient - [Consumer clientId=consumer-1, groupId=SAPSENDER] Give up sending metadata request since no node is available
19:03:47.478 [SAP-SENDER-akka.kafka.default-dispatcher-20] DEBUG org.apache.kafka.clients.consumer.KafkaConsumer - [Consumer clientId=consumer-1, groupId=SAPSENDER] Pausing partitions []   
It also logs:

java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.kafka.common.network.PlaintextTransportLayer.finishConnect(PlaintextTransportLayer.java:50)
    at org.apache.kafka.common.network.KafkaChannel.finishConnect(KafkaChannel.java:173)
    at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:515)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:467)
    at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:535)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:265)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:236)
    at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:215)
    at org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorReady(AbstractCoordinator.java:231)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:316)
    at org.apache.kafka.clients.consumer.KafkaConsumer.updateAssignmentMetadataIfNeeded(KafkaConsumer.java:1214)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1179)
    at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1164)
    at akka.kafka.internal.KafkaConsumerActor.poll(KafkaConsumerActor.scala:380)
    at akka.kafka.internal.KafkaConsumerActor.akka$kafka$internal$KafkaConsumerActor$$receivePoll(KafkaConsumerActor.scala:360)
    at akka.kafka.internal.KafkaConsumerActor$$anonfun$receive$1.applyOrElse(KafkaConsumerActor.scala:221)
    at akka.actor.Actor.aroundReceive(Actor.scala:539)
    at akka.actor.Actor.aroundReceive$(Actor.scala:537)
    at akka.kafka.internal.KafkaConsumerActor.akka$actor$Timers$$super$aroundReceive(KafkaConsumerActor.scala:142)
    at akka.actor.Timers.aroundReceive(Timers.scala:51)
    at akka.actor.Timers.aroundReceive$(Timers.scala:40)
    at akka.kafka.internal.KafkaConsumerActor.aroundReceive(KafkaConsumerActor.scala:142)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:610)
    at akka.actor.ActorCell.invoke(ActorCell.scala:579)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:268)
    at akka.dispatch.Mailbox.run(Mailbox.scala:229)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)  
How can I catch the ConnectException inside the stream and stop the consumer from trying to connect to Kafka?

The code is hosted here.

Looking at this, and at the work that went into the upgrade to Kafka client 2.0, I guess that much of the retry responsibility has been delegated to the Kafka client. For example, I tried passing these properties:

val consumerSettings: ConsumerSettings[String, String] =
  ConsumerSettings(config, new StringDeserializer, new StringDeserializer)
    .withProperties(
      "reconnect.backoff.ms" -> "10000",     // wait 10 s before reconnecting
      "reconnect.backoff.max.ms" -> "20000"  // cap the reconnect backoff at 20 s
    )
    .withBootstrapServers("localhost:9099")
    .withGroupId("SAPSENDER")
    .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
The exception then showed up a second time, after 10 seconds. I found these properties here.

With that in mind, I think the recent changes to the Kafka client may be missing a piece, because the KafkaConsumerActor does not expose the exception to the stream. I tried various combinations in the repo, but I still get the continuous stream of debug messages.


I hope this provides some hints in the right direction. If you solve the problem, please let us know.

With Kafka client 2.0 + Alpakka Kafka, the consumer is not able to notice that there is no Kafka broker available at the given address.


See:

You should monitor the stream and restart it if an error occurs. For example, you can run the stream inside an actor and use actor supervision to handle the failing connection, as in the sketch below.
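
A minimal sketch of that actor-based approach, assuming classic actors and the consumerSettings from the question; the ConsumerSupervisor actor and its restart-on-throw behavior are illustrative, not taken from the original thread:

import akka.actor.{Actor, Props, Status}
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.pattern.pipe
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink

class ConsumerSupervisor(settings: ConsumerSettings[String, String]) extends Actor {
  implicit val materializer = ActorMaterializer()
  import context.dispatcher

  // Start the stream whenever the actor (re)starts, and pipe its
  // completion (Success or Status.Failure) back to this actor.
  override def preStart(): Unit =
    Consumer
      .plainSource(settings, Subscriptions.topics("TEST-TOPIC"))
      .runWith(Sink.foreach(println))
      .pipeTo(self)

  def receive: Receive = {
    case Status.Failure(ex) =>
      // Rethrow so the parent's supervision strategy decides whether to
      // restart this actor (and with it the stream) or stop it for good.
      throw ex
    case _ =>
      context.stop(self) // the stream completed normally
  }
}

// Usage: system.actorOf(Props(new ConsumerSupervisor(consumerSettings)), "consumer-supervisor")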

A connection error can last several seconds (maybe the network is overloaded), so you should retry with a backoff strategy rather than reconnecting immediately.

Akka Streams already gives you an easy way to do this with RestartSource. See:

This solution only works if the broker is down when you start the stream, because in that case the consumer throws an exception while it is being created. If you manage to create the consumer and the whole Kafka cluster goes down afterwards, the internal Kafka consumer keeps reconnecting according to the reconnect.backoff.ms and reconnect.backoff.max.ms settings mentioned above, and your stream does not fail.
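
If you also need the stream to fail in that second scenario, one possible workaround (my own suggestion, not part of the original answer) is an idleTimeout stage, which fails the stream with a TimeoutException when no elements arrive within the given duration; note that it cannot distinguish a dead broker from a topic that is simply quiet:

import scala.concurrent.duration._

Consumer
  .plainSource(consumerSettings, Subscriptions.topics("TEST-TOPIC"))
  .idleTimeout(60.seconds) // fail with a TimeoutException after 60 s without records
  .runWith(Sink.foreach(println))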

If you want to limit the number of retries, you can do something like this:

import akka.Done
import akka.stream.scaladsl.{RestartSource, Sink}
import scala.concurrent.Future
import scala.concurrent.duration._

val result: Future[Done] = RestartSource
  .onFailuresWithBackoff(
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2
  ) { () =>
    // your consumer, e.g.
    Consumer.plainSource(consumerSettings, Subscriptions.topics("TEST-TOPIC"))
  }
  .take(3) // retries limit
  .runWith(Sink.ignore)

result.onComplete {
  case _ => println("Max retries reached")
}

In the code shown above, where should I call control.get().shutdown()? I want to call it after at most 4 retries, but where? Is it possible to pass a callback to RestartSource that fires when the retries are used up? I tried this, but it does not stop properly while the Kafka server is down.
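
One way to get there (a sketch following the pattern in the Alpakka documentation for restarting a stream with a backoff stage; the AtomicReference holder and the reuse of consumerSettings and system from the question are assumptions): keep the most recently materialized Consumer.Control in an AtomicReference and shut it down once the restart source completes.

import java.util.concurrent.atomic.AtomicReference

import akka.Done
import akka.kafka.Subscriptions
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.{RestartSource, Sink}

import scala.concurrent.Future
import scala.concurrent.duration._

// Holds the control of the consumer that is currently materialized.
val control = new AtomicReference[Consumer.Control](Consumer.NoopControl)

val result: Future[Done] = RestartSource
  .onFailuresWithBackoff(
    minBackoff = 3.seconds,
    maxBackoff = 30.seconds,
    randomFactor = 0.2
  ) { () =>
    Consumer
      .plainSource(consumerSettings, Subscriptions.topics("TEST-TOPIC"))
      .mapMaterializedValue { c => control.set(c); c } // remember the latest control
  }
  .take(3)
  .runWith(Sink.ignore)

// When the restart source is done (retries used up or stream completed),
// shut the consumer down and terminate the system.
result.onComplete { _ =>
  control.get().shutdown()
  system.terminate()
}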