Apache Kafka: SimpleConsumer not working after port change

I have a Hortonworks Hadoop cluster stack (HDP 2.5), which includes Kafka 0.10, with two brokers running and listening on PLAINTEXT://localhost:6667. Because of company port restrictions I changed the listener to PLAINTEXT://localhost:9092, and now a Spark job that I could previously start (against port 6667) no longer starts, failing with the following error:

16/10/19 16:30:23 INFO SimpleConsumer: Reconnect due to socket error: java.nio.channels.ClosedChannelException
Exception in thread "main" org.apache.spark.SparkException: java.nio.channels.ClosedChannelException
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
        at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$checkErrors$1.apply(KafkaCluster.scala:366)
        at scala.util.Either.fold(Either.scala:97)
        at org.apache.spark.streaming.kafka.KafkaCluster$.checkErrors(KafkaCluster.scala:365)
        at org.apache.spark.streaming.kafka.DirectKafkaInputDStream$DirectKafkaInputDStreamCheckpointData.restore(DirectKafkaInputDStream.scala:197)
        at org.apache.spark.streaming.dstream.DStream.restoreCheckpointData(DStream.scala:515)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$restoreCheckpointData$2.apply(DStream.scala:516)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$restoreCheckpointData$2.apply(DStream.scala:516)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.streaming.dstream.DStream.restoreCheckpointData(DStream.scala:516)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$restoreCheckpointData$2.apply(DStream.scala:516)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$restoreCheckpointData$2.apply(DStream.scala:516)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.streaming.dstream.DStream.restoreCheckpointData(DStream.scala:516)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$restoreCheckpointData$2.apply(DStream.scala:516)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$restoreCheckpointData$2.apply(DStream.scala:516)
        at scala.collection.immutable.List.foreach(List.scala:318)
        at org.apache.spark.streaming.dstream.DStream.restoreCheckpointData(DStream.scala:516)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$restoreCheckpointData$2.apply(DStreamGraph.scala:151)
        at org.apache.spark.streaming.DStreamGraph$$anonfun$restoreCheckpointData$2.apply(DStreamGraph.scala:151)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.streaming.DStreamGraph.restoreCheckpointData(DStreamGraph.scala:151)
        at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:158)
        at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:877)
        at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:877)
        at scala.Option.map(Option.scala:145)
        at org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:877)
        at org.apache.spark.streaming.api.java.JavaStreamingContext$.getOrCreate(JavaStreamingContext.scala:775)
        at org.apache.spark.streaming.api.java.JavaStreamingContext.getOrCreate(JavaStreamingContext.scala)
        at com.ncr.dataplatform.Runner.main(Runner.java:48)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/10/19 16:30:24 INFO SparkContext: Invoking stop() from shutdown hook
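
Reading the trace, the failure happens while restoring from a Spark Streaming checkpoint (StreamingContext.getOrCreate -> DStreamGraph.restoreCheckpointData -> DirectKafkaInputDStreamCheckpointData.restore), so I wonder whether the old broker address (port 6667) is still serialized inside the checkpoint. The job uses the standard getOrCreate pattern; here is a minimal sketch of that pattern (the checkpoint path, app name, topic, and broker list are placeholders, not my real values):

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class Runner {
    public static void main(String[] args) throws Exception {
        final String checkpointDir = "/tmp/checkpoint"; // placeholder path

        // getOrCreate: if checkpointDir already holds a checkpoint, the factory
        // below is SKIPPED and the whole DStream graph -- including the
        // kafkaParams it was originally built with -- is deserialized from it.
        JavaStreamingContext jssc = JavaStreamingContext.getOrCreate(checkpointDir, () -> {
            SparkConf conf = new SparkConf().setAppName("kafka-direct"); // placeholder name
            JavaStreamingContext ctx = new JavaStreamingContext(conf, Durations.seconds(10));

            Map<String, String> kafkaParams = new HashMap<>();
            // This broker list is captured inside the checkpoint; a run restored
            // from an old checkpoint could still be dialing port 6667 even
            // though the configuration now says 9092.
            kafkaParams.put("metadata.broker.list", "localhost:9092");

            KafkaUtils.createDirectStream(ctx,
                    String.class, String.class,
                    StringDecoder.class, StringDecoder.class,
                    kafkaParams,
                    Collections.singleton("events")); // placeholder topic

            ctx.checkpoint(checkpointDir);
            return ctx;
        });

        jssc.start();
        jssc.awaitTermination();
    }
}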
On all cluster nodes I have disabled iptables and selinux, and the new Kafka port is reachable from outside. From any datanode that will run the Spark job I can telnet to the Kafka brokers on both ports (6667 and 9092), and to ZooKeeper on the port I configured.


Any idea why this is happening? I can't get any more information out of this error message, and I'm running out of ideas.
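
To get more signal than a bare ClosedChannelException, one thing I can try is a direct metadata probe against the broker on the new port, using the same old SimpleConsumer API that the failing log line comes from. A minimal sketch (the IP and port are from my nc test in the comments below; the topic name is a placeholder):

import java.util.Collections;

import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;

public class MetadataProbe {
    public static void main(String[] args) {
        // Broker host/port under test; the clientId string is arbitrary.
        SimpleConsumer consumer =
                new SimpleConsumer("153.77.130.48", 9092, 10000, 64 * 1024, "metadata-probe");
        try {
            // Ask the broker for metadata about one topic ("events" is a placeholder).
            TopicMetadataRequest request =
                    new TopicMetadataRequest(Collections.singletonList("events"));
            TopicMetadataResponse response = consumer.send(request);
            for (TopicMetadata metadata : response.topicsMetadata()) {
                System.out.println(metadata.topic() + ": "
                        + metadata.partitionsMetadata().size() + " partition(s)");
            }
        } finally {
            consumer.close();
        }
    }
}

If TCP connects but this request fails, the problem is past the socket layer.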

Are you able to netcat the URL and port from the executor machines? I assume you are running Spark on the cluster nodes.

It seems to connect:

# nc -v -n 153.77.130.48 9092
Ncat: Version 6.40 ( http://nmap.org/ncat )
Ncat: Connected to 153.77.130.48:9092.
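
For completeness, the same reachability check can be run from inside a JVM on the node (useful where nc is not installed); this is just a plain TCP connect to the address above, nothing Kafka-specific:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket()) {
            // Same broker IP/port as the nc test, with a 5 s connect timeout.
            socket.connect(new InetSocketAddress("153.77.130.48", 9092), 5000);
            System.out.println("Connected to " + socket.getRemoteSocketAddress());
        }
    }
}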