Scala cluster running on a single machine takes up too much space in /dev/shm


I am running the example provided by the official Akka samples:

My OS is Linux Mint 19 with the latest kernel.

For the worker dial-in example (the transformation sample), I cannot run the example in full because there is not enough space in /dev/shm, although I have more than 2 GB available there.

The problem is that when I start the first frontend node, it takes up a few KB of space. When I start the second one, it takes a few MB. When I start the third, it consumes several hundred MB. And I cannot even start a fourth one; it just throws an error that brings down the whole cluster:

[info] Warning: space is running low in /dev/shm (tmpfs) threshold=167,772,160 usable=95,424,512
[info] Warning: space is running low in /dev/shm (tmpfs) threshold=167,772,160 usable=45,088,768
[info] [ERROR] [11/05/2018 21:03:56.156] [ClusterSystem-akka.actor.default-dispatcher-12] [akka://ClusterSystem@127.0.0.1:57246/] swallowing exception during message send
[info] io.aeron.exceptions.RegistrationException: IllegalStateException : Insufficient usable storage for new log of length=50335744 in /dev/shm (tmpfs)
[info]  at io.aeron.ClientConductor.onError(ClientConductor.java:174)
[info]  at io.aeron.DriverEventsAdapter.onMessage(DriverEventsAdapter.java:81)
[info]  at org.agrona.concurrent.broadcast.CopyBroadcastReceiver.receive(CopyBroadcastReceiver.java:100)
[info]  at io.aeron.DriverEventsAdapter.receive(DriverEventsAdapter.java:56)
[info]  at io.aeron.ClientConductor.service(ClientConductor.java:660)
[info]  at io.aeron.ClientConductor.awaitResponse(ClientConductor.java:696)
[info]  at io.aeron.ClientConductor.addPublication(ClientConductor.java:371)
[info]  at io.aeron.Aeron.addPublication(Aeron.java:259)
[info]  at akka.remote.artery.aeron.AeronSink$$anon$1.<init>(AeronSink.scala:103)
[info]  at akka.remote.artery.aeron.AeronSink.createLogicAndMaterializedValue(AeronSink.scala:100)
[info]  at akka.stream.impl.GraphStageIsland.materializeAtomic(PhasedFusingActorMaterializer.scala:630)
[info]  at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:450)
[info]  at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:415)
[info]  at akka.stream.impl.PhasedFusingActorMaterializer.materialize(PhasedFusingActorMaterializer.scala:406)
[info]  at akka.stream.scaladsl.RunnableGraph.run(Flow.scala:588)
[info]  at akka.remote.artery.Association.runOutboundOrdinaryMessagesStream(Association.scala:726)
[info]  at akka.remote.artery.Association.runOutboundStreams(Association.scala:657)
[info]  at akka.remote.artery.Association.associate(Association.scala:649)
[info]  at akka.remote.artery.AssociationRegistry.association(Association.scala:989)
[info]  at akka.remote.artery.ArteryTransport.association(ArteryTransport.scala:724)
[info]  at akka.remote.artery.ArteryTransport.send(ArteryTransport.scala:710)
[info]  at akka.remote.RemoteActorRef.$bang(RemoteActorRefProvider.scala:591)
[info]  at akka.actor.ActorRef.tell(ActorRef.scala:124)
[info]  at akka.actor.ActorSelection$.rec$1(ActorSelection.scala:265)
[info]  at akka.actor.ActorSelection$.deliverSelection(ActorSelection.scala:269)
[info]  at akka.actor.ActorSelection.tell(ActorSelection.scala:46)
[info]  at akka.actor.ScalaActorSelection.$bang(ActorSelection.scala:280)
[info]  at akka.actor.ScalaActorSelection.$bang$(ActorSelection.scala:280)
[info]  at akka.actor.ActorSelection$$anon$1.$bang(ActorSelection.scala:198)
[info]  at akka.cluster.ClusterCoreDaemon.gossipTo(ClusterDaemon.scala:1330)
[info]  at akka.cluster.ClusterCoreDaemon.gossip(ClusterDaemon.scala:1047)
[info]  at akka.cluster.ClusterCoreDaemon.gossipTick(ClusterDaemon.scala:1010)
[info]  at akka.cluster.ClusterCoreDaemon$$anonfun$initialized$1.applyOrElse(ClusterDaemon.scala:496)
[info]  at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
[info]  at akka.actor.Actor.aroundReceive(Actor.scala:517)
[info]  at akka.actor.Actor.aroundReceive$(Actor.scala:515)
[info]  at akka.cluster.ClusterCoreDaemon.aroundReceive(ClusterDaemon.scala:295)
[info]  at akka.actor.ActorCell.receiveMessage(ActorCell.scala:588)
[info]  at akka.actor.ActorCell.invoke(ActorCell.scala:557)
[info]  at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
[info]  at akka.dispatch.Mailbox.run(Mailbox.scala:225)
[info]  at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
[info]  at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
[info]  at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
[info]  at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
[info]  at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
It seems that it is sending huge messages to every single node (48 MB+?).
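
For reference, the length in the exception looks less like an application message and more like an Aeron log buffer. A minimal sketch of the arithmetic, assuming the embedded media driver uses Aeron's default 16 MiB term buffer length (the 16 MiB figure is my assumption, not something read from the logs):

// Rough breakdown of the 50 335 744 bytes from the RegistrationException,
// assuming each Aeron publication pre-allocates a log in /dev/shm consisting
// of 3 term buffers plus a small metadata section.
object AeronLogLengthBreakdown extends App {
  val reportedLogLength = 50335744          // value taken from the error above
  val termBufferLength  = 16 * 1024 * 1024  // assumed Aeron default term buffer size
  val metadataLength    = reportedLogLength - 3 * termBufferLength

  println(s"3 term buffers = ${3 * termBufferLength} bytes") // 50331648
  println(s"metadata       = $metadataLength bytes")         // 4096
}

If that is right, the ~48 MB is a fixed-size buffer allocated per Aeron publication rather than the size of any cluster message, and every additional node creates more node pairs and therefore more of these logs, which would explain why /dev/shm fills up so quickly even though the actual gossip messages are tiny.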


What is going on there? What is the root cause, and how can I solve it?
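
One workaround I am considering (untested, and assuming my Akka 2.5.x version already ships the TCP transport for Artery) is to bypass Aeron, and with it /dev/shm, entirely:

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

object TcpArteryNode extends App {
  // Hypothetical override: run Artery over plain TCP instead of the default
  // aeron-udp transport, so no Aeron log buffers are allocated in /dev/shm.
  // The keys below assume an Akka version whose akka.remote.artery.transport
  // setting accepts "tcp".
  val overrides = ConfigFactory.parseString("""
    akka.remote.artery {
      enabled   = on
      transport = tcp
      canonical.hostname = "127.0.0.1"
      canonical.port     = 0
    }
  """)

  // "ClusterSystem" matches the actor system name seen in the log above.
  val system = ActorSystem("ClusterSystem", overrides.withFallback(ConfigFactory.load()))
}

If aeron-udp has to stay, lowering Aeron's term buffer length (for example with -Daeron.term.buffer.length=2097152 on the JVM that starts the embedded media driver) or simply mounting a larger /dev/shm might also help, but I have not verified either of these.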

Did you ever figure out what the problem was? I am getting the same error.