Java: starting a batch job from a streaming job
Hi, I have a Maven project that does Flink stream processing. Based on a message I receive from the stream, I launch a batch job, but at the moment I am getting an error. I am very new to the Flink world, so please let me know if you have any ideas. Below is the code I run against the standalone cluster:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
KafkaConsumerService kafkaConsumerService = new KafkaConsumerService();
FlinkKafkaConsumer010<String> kafkaConsumer = kafkaConsumerService.getKafkaConsumer(settings);
DataStream<String> messageStream = env.addSource(kafkaConsumer).setParallelism(3);

messageStream
        .filter(new MyFilter()).setParallelism(3).name("Filter")
        .map(new ProcessFile(arg)).setParallelism(3).name("start batch")
        .addSink(new DiscardingSink()).setParallelism(3).name("DiscardData");

env.execute("Stream processor");
The error below was copied from the JobManager web portal. The error I get is: org.apache.flink.client.program.ProgramInvocationException: Failed to retrieve the JobManager gateway.
at org.apache.flink.client.program.ClusterClient.runDetached(ClusterClient.java:497)
at org.apache.flink.client.program.StandaloneClusterClient.submitJob(StandaloneClusterClient.java:103)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:442)
at org.apache.flink.client.program.DetachedEnvironment.finalizeExecute(DetachedEnvironment.java:76)
at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:387)
at cw.supply.data.parser.maps.ProcessFileMessage.map(ProcessFileMessage.java:47)
at cw.supply.data.parser.maps.ProcessFileMessage.map(ProcessFileMessage.java:25)
at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:528)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:503)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:483)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:891)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:869)
at org.apache.flink.streaming.api.operators.StreamFilter.processElement(StreamFilter.java:40)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:528)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:503)
at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:483)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:891)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:869)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:103)
at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:110)
at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:269)
at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:86)
at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:152)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:483)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:87)
at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:55)
at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:95)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:263)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:702)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.util.FlinkException: Could not connect to the leading JobManager. Please check that the JobManager is running.
at org.apache.flink.client.program.ClusterClient.getJobManagerGateway(ClusterClient.java:789)
at org.apache.flink.client.program.ClusterClient.runDetached(ClusterClient.java:495)
... 30 more
Caused by: org.apache.flink.runtime.leaderretrieval.LeaderRetrievalException: Could not retrieve the leader gateway.
at org.apache.flink.runtime.util.LeaderRetrievalUtils.retrieveLeaderGateway(LeaderRetrievalUtils.java:79)
at org.apache.flink.client.program.ClusterClient.getJobManagerGateway(ClusterClient.java:784)
... 31 more
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:190)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.result(package.scala:190)
at scala.concurrent.Await.result(package.scala)
at org.apache.flink.runtime.util.LeaderRetrievalUtils.retrieveLeaderGateway(LeaderRetrievalUtils.java:77)
... 32 more

After getting access to the environment and checking it, I found the problem. I was using the JobManager's public address, on which the port was not open. I switched to its private IP instead, since all nodes are in the same subnet and there is no need to open the port to the world. Hope this helps someone else too. This is the map function that submits the batch job:
public ProcessFile(String arg) { }

@Override
public String map(String message) throws Exception {
    MessageType typedMessage = ParseMessage(message);
    if (isWhatIwant()) {
        String[] batchArgs = createBatchArgs();
        // Point the client at the JobManager (must be an address that is
        // actually reachable from the TaskManager running this operator).
        Configuration config = new Configuration();
        config.setString(JobManagerOptions.ADDRESS, jobMasterHost);
        config.setInteger(JobManagerOptions.PORT, jobMasterPort);
        StandaloneClusterClient client = new StandaloneClusterClient(config);
        client.setDetached(true);
        // Submit the packaged batch job jar in detached mode with parallelism 7.
        PackagedProgram program = new PackagedProgram(new File(jarLocation), SupplyBatchJob.class.getName(), batchArgs);
        client.run(program, 7);
    }
    return message;
}
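For reference, the fix amounts to making jobMasterHost resolve to an address whose RPC port is reachable from inside the cluster. A minimal flink-conf.yaml sketch of the relevant settings (the IP and port here are placeholders, not values from the original setup):

```yaml
# flink-conf.yaml (submitting side)
# Use the JobManager's private/subnet IP, not its public address:
# the RPC port is typically only open inside the subnet.
jobmanager.rpc.address: 10.0.0.12
# Default JobManager RPC port; must match what the JobManager listens on.
jobmanager.rpc.port: 6123
```

The same pair of values is what JobManagerOptions.ADDRESS and JobManagerOptions.PORT set programmatically in the map function above, so a quick sanity check is to confirm the host and port are reachable (e.g. with telnet or nc) from the TaskManager machines.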