Apache Flink History Server: empty job logs sent to HDFS

Tags: apache-flink, flink-streaming, flink-sql, flink-batch

Following the latest History Server documentation:

The jobmanager.archive.fs.dir directory is created on HDFS after the session cluster is started.
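
For reference, the JobManager archiving side is driven by a single key in flink-conf.yaml. A minimal sketch, with the namenode host left as a placeholder and the path matching the directory that appears in the log further down:

# flink-conf.yaml -- placeholder namenode host
# Directory into which the JobManager uploads an archive of every completed job:
jobmanager.archive.fs.dir: hdfs://<namenode-host>:9000/user/cyzhang/flink-history-server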

After a job finishes running, an empty file named after the job ID is created under jobmanager.archive.fs.dir.
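
A quick way to confirm the archive really is empty is to inspect it directly on HDFS. A sketch, reusing the job ID from the log below and leaving the namenode host as a placeholder; a healthy archive is a non-empty JSON file rather than a zero-byte one:

# list the archive directory and check file sizes
hdfs dfs -ls hdfs://<namenode-host>:9000/user/cyzhang/flink-history-server
# dump the archive of the affected job; prints nothing when the file is empty
hdfs dfs -cat hdfs://<namenode-host>:9000/user/cyzhang/flink-history-server/2daf03dd7f9129637ced43d9237c1328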

The History Server is up and running.
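
For completeness, the History Server side is configured roughly as below (placeholder host; port and refresh interval shown at their defaults), and the process itself is started with bin/historyserver.sh start from the Flink distribution:

# flink-conf.yaml -- placeholder namenode host
# Comma-separated list of directories the History Server polls for new job archives:
historyserver.archive.fs.dir: hdfs://<namenode-host>:9000/user/cyzhang/flink-history-server
# Polling interval in milliseconds (10000 is the default):
historyserver.archive.fs.refresh-interval: 10000
# Port of the History Server web UI (8082 is the default):
historyserver.web.port: 8082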

From the History Server log:

2020-09-24 22:39:43,270 DEBUG org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFetcher [] - Starting archive fetching.
2020-09-24 22:39:43,270 DEBUG org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFetcher [] - Checking archive directory hdfs://ltx1-holdemnn01.grid.linkedin.com:9000/user/cyzhang/flink-history-server.
2020-09-24 22:39:43,272 INFO  org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFetcher [] - Processing archive hdfs://hostname/user/cyzhang/flink-history-server/2daf03dd7f9129637ced43d9237c1328.
2020-09-24 22:39:43,272 ERROR org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFetcher [] - Critical failure while fetching/processing job archives.
java.lang.NullPointerException: null
        at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:919) ~[hadoop-common-2.10.0.123.jar:?]
        at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.open(HadoopFileSystem.java:120) ~[flink-hadoop-fs-1.11.1.jar:1.11.1]
        at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.open(HadoopFileSystem.java:37) ~[flink-hadoop-fs-1.11.1.jar:1.11.1]
        at org.apache.flink.runtime.history.FsJobArchivist.getArchivedJsons(FsJobArchivist.java:108) ~[flink-runtime_2.11-1.11.1.jar:1.11.1]
        at org.apache.flink.runtime.webmonitor.history.HistoryServerArchiveFetcher$JobArchiveFetcherTask.run(HistoryServerArchiveFetcher.java:225) ~[flink-dist_2.11-1.11.1.jar:1.11.1]
        at org.apache.flink.runtime.util.Runnables.lambda$withUncaughtExceptionHandler$0(Runnables.java:40) ~[flink-runtime_2.11-1.11.1.jar:1.11.1]
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:1.8.0_172]
        at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [?:1.8.0_172]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) [?:1.8.0_172]
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) [?:1.8.0_172]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_172]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_172]
        at java.lang.Thread.run(Thread.java:748) [?:1.8.0_172]
However, the error above seems to be caused by the earlier job's archive being empty, since the History Server is trying to process the archives of previously completed jobs.

I think the main question here is why the job sends an empty log to its jobmanager.archive.fs.dir on HDFS after it finishes running.

Am I missing some configuration setting?