Java Flink: job won't run with a higher taskmanager.heap.mb

A simple job:
kafka->flatmap->reduce->map
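
For context, here is a minimal sketch of what such a pipeline might look like (assuming a Flink 0.10-era streaming API to match the stack trace below; the topic name, Kafka properties, and function bodies are illustrative placeholders, not the actual job):

    import java.util.Properties;

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.api.common.functions.MapFunction;
    import org.apache.flink.api.common.functions.ReduceFunction;
    import org.apache.flink.api.java.tuple.Tuple2;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer082;
    import org.apache.flink.streaming.util.serialization.SimpleStringSchema;
    import org.apache.flink.util.Collector;

    public class KafkaPipeline {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            Properties props = new Properties();
            props.setProperty("zookeeper.connect", "localhost:2181");  // placeholder
            props.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
            props.setProperty("group.id", "demo");

            env.addSource(new FlinkKafkaConsumer082<>("events", new SimpleStringSchema(), props))
                // flatMap: split each record into (key, 1) pairs
                .flatMap(new FlatMapFunction<String, Tuple2<String, Integer>>() {
                    @Override
                    public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
                        for (String word : line.split("\\s+")) {
                            out.collect(new Tuple2<>(word, 1));
                        }
                    }
                })
                .keyBy(0)
                // reduce: keep a running count per key
                .reduce(new ReduceFunction<Tuple2<String, Integer>>() {
                    @Override
                    public Tuple2<String, Integer> reduce(Tuple2<String, Integer> a,
                                                          Tuple2<String, Integer> b) {
                        return new Tuple2<>(a.f0, a.f1 + b.f1);
                    }
                })
                // map: format the result for output
                .map(new MapFunction<Tuple2<String, Integer>, String>() {
                    @Override
                    public String map(Tuple2<String, Integer> t) {
                        return t.f0 + ": " + t.f1;
                    }
                })
                .print();

            env.execute("kafka-flatmap-reduce-map");
        }
    }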

The job runs fine with the default taskmanager.heap.mb (512 MB). According to:

this value should be as large as possible

Since the machine in question has 96 GB of RAM, I set it to 75000 (an arbitrary value).
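
For reference, the change amounts to a single line in conf/flink-conf.yaml (value as described above; everything else was left at the defaults):

    # conf/flink-conf.yaml
    taskmanager.heap.mb: 75000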

Starting the job produces the following error:

Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.   
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply$mcV$sp(JobManager.scala:563)   
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply(JobManager.scala:509)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$handleMessage$1$$anonfun$applyOrElse$5.apply(JobManager.scala:509)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:401)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Caused by: org.apache.flink.runtime.jobmanager.scheduler.NoResourceAvailableException: Not enough free slots available to run the job. You can decrease the operator parallelism or increase the number of slots per TaskManager in the configuration. Task to schedule: < Attempt #0 (Source: Custom Source (1/1)) @ (unassigned) - [SCHEDULED] > with groupID < 95b239d1777b2baf728645df9a1c4232 > in sharing group < SlotSharingGroup [772c9ff1cf0b6cb3a361e3352f75fcee, d4f856f13654f424d7c49d0f00f6ecca, 81bb8c4310faefe32f97ebd6baa4c04f, 95b239d1777b2baf728645df9a1c4232] >. Resources available to scheduler: Number of instances=0, total number of slots=0, available slots=0
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleTask(Scheduler.java:255)
at org.apache.flink.runtime.jobmanager.scheduler.Scheduler.scheduleImmediately(Scheduler.java:131)
at org.apache.flink.runtime.executiongraph.Execution.scheduleForExecution(Execution.java:298)
at org.apache.flink.runtime.executiongraph.ExecutionVertex.scheduleForExecution(ExecutionVertex.java:458)
at org.apache.flink.runtime.executiongraph.ExecutionJobVertex.scheduleAll(ExecutionJobVertex.java:322)
at org.apache.flink.runtime.executiongraph.ExecutionGraph.scheduleForExecution(ExecutionGraph.java:686)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply$mcV$sp(JobManager.scala:982)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$org$apache$flink$runtime$jobmanager$JobManager$$submitJob$1.apply(JobManager.scala:962)
... 8 more
Rolling this parameter back to the default (512), the job runs fine. It works at 5000 -> fails at 10000.

What am I missing?



EDIT: this is flakier than I thought. Setting the value to 50000 and re-submitting the job succeeds. The cluster was stopped and restarted for every test.

What you are probably experiencing is that the job is submitted before the workers have registered with the master.

A 5 GB JVM heap initializes quickly, and the TaskManager can register almost immediately. With a 70 GB heap, the JVM takes a while to initialize and boot, so the workers register later, and the job cannot be executed at submission time because of the missing workers.

That is also why it works once you re-submit the job.


JVMs initialize faster if you start the cluster in "streaming" mode (standalone via start-cluster-streaming.sh), because then at least Flink's internal memory is initialized lazily.

Did you check in the JobManager web interface that all TaskManagers have connected? With such a large amount of memory they may take a while to start up, because they allocate almost all of the memory you give them as byte[] arrays. One way to check this is the web dashboard, which shows the number of registered TaskManagers. If you want to automate the check, the REST monitoring API supports a request for this, e.g. "/overview" (a rough sketch follows below). For details, see:
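
As an illustration, a small poller against that endpoint could look like this (a sketch only: the JobManager address, port 8081, and the "taskmanagers" JSON field name are assumptions based on the dashboard's cluster-overview response, and a real client would use a proper JSON library):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Polls the JobManager's REST API until the expected number of
    // TaskManagers has registered, then returns.
    public class WaitForTaskManagers {
        public static void main(String[] args) throws Exception {
            int expected = 1; // however many TaskManagers you started
            URL overview = new URL("http://localhost:8081/overview"); // hypothetical JobManager address

            Pattern tmCount = Pattern.compile("\"taskmanagers\"\\s*:\\s*(\\d+)");
            while (true) {
                HttpURLConnection conn = (HttpURLConnection) overview.openConnection();
                StringBuilder body = new StringBuilder();
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream()))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        body.append(line);
                    }
                }
                Matcher m = tmCount.matcher(body);
                if (m.find() && Integer.parseInt(m.group(1)) >= expected) {
                    System.out.println("All " + expected + " TaskManager(s) registered.");
                    return;
                }
                Thread.sleep(1000); // not registered yet -- wait and retry
            }
        }
    }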