Java: running Future.map inside a Future with a single-thread pool


I have a piece of code that was built using an ExecutionContext (EC) from akka (an ActorSystem). This code does something quite particular: it uses an AkkaForkJoinPool with parallelism max = 1, and does something like this:

implicit val ec: ExecutionContext = ??? // akka EC backed by AkkaForkJoinPool with parallelism = 1

Future { // (1)
  // (2) get data from DB, which uses a separate ExecutionContext for IO
  val data: Future[Data] = getData()

  // (3) use the data
  data.map { whatEver }

  // etc ...
}
[Edit: I know it looks strange to do this inside the top-level future (1). But the code isn't actually mine; it spans several functions and uses more complex operations, such as several nested for-comprehensions. So I won't change that.]

Now, I moved this code and replaced the implicit ExecutionContext (EC) provided by akka following the same rule: I use a (java) ForkJoinPool with parallelism 1.

With this, the code gets stuck at the map (3). My understanding is that when map (3) is called, it needs a thread, but the EC cannot provide one because its only available thread is taken by future (1).
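To make the situation concrete, here is a minimal, self-contained sketch of it with a plain (java) ForkJoinPool of parallelism 1. The names `getData` and the `Int` payload are hypothetical stand-ins, and the inner `Await` models the fact that the real code needs the mapped result inside the outer future:

```scala
import java.util.concurrent.ForkJoinPool
import scala.concurrent.{Await, ExecutionContext, Future, TimeoutException}
import scala.concurrent.duration._

object DeadlockSketch {
  def run(): String = {
    // the pool from the question: a plain java ForkJoinPool, parallelism = 1
    implicit val ec: ExecutionContext =
      ExecutionContext.fromExecutorService(new ForkJoinPool(1))

    // stand-in for the separate IO context used by getData() (point 2)
    val ioEc = ExecutionContext.fromExecutorService(new ForkJoinPool(1))
    def getData(): Future[Int] = Future(42)(ioEc)

    val outer = Future { // (1) takes the pool's only thread
      val data = getData()           // (2) completes on ioEc, no problem
      val mapped = data.map(_ + 1)   // (3) needs a thread from ec: none is free
      Await.result(mapped, 1.second) // blocks the only thread waiting for (3)
    }

    try Await.result(outer, 2.seconds).toString
    catch { case _: TimeoutException => "stuck at map (3)" }
  }
}
```

The callback registered by map (3) is queued on `ec`, but its single worker thread is parked inside future (1), so the inner wait times out instead of completing.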

I'm not clear on how ForkJoinPool is supposed to work. So my question is whether my understanding is correct, and:

  • if not, am I misusing the java pool? That is, is there a way to make this work?
  • if yes, how does akka manage it?

  • I'm using akka 2.3.15, scala 2.11.12 and java 8.

    Instead of wrapping everything in a Future, use a for-comprehension on the result of the first future, since everything depends on it:

    for {
      data <- getData()
    } yield whatEver(data)
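A runnable sketch of this approach (same hypothetical `getData`/`whatEver` stand-ins as above), showing that the parallelism-1 pool is enough once nothing holds its only thread while waiting:

```scala
import java.util.concurrent.ForkJoinPool
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object ForComprehensionSketch {
  def run(): Int = {
    implicit val ec: ExecutionContext =
      ExecutionContext.fromExecutorService(new ForkJoinPool(1)) // parallelism = 1

    // hypothetical stand-ins, as in the question
    val ioEc = ExecutionContext.fromExecutorService(new ForkJoinPool(1))
    def getData(): Future[Int] = Future(42)(ioEc)
    def whatEver(data: Int): Int = data + 1

    // No outer Future holds ec's single thread: the yield is only
    // scheduled on ec once getData() has completed on ioEc.
    val result = for {
      data <- getData()
    } yield whatEver(data)

    Await.result(result, 2.seconds)
  }
}
```

The for-comprehension desugars to `getData().map(whatEver)`, so the continuation only claims a thread from `ec` when the data is already available.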
    

    Looking at akka's code, I think I found what it does. I'm not completely sure, but roughly: the akka ActorSystem creates a Dispatchers, which creates a MessageDispatcherConfigurator, which creates a Dispatcher, which creates the ExecutorService (I'm skipping parts of the class hierarchy). There are several possible implementations, but I think this is the most common one, and it is what happens when a ForkJoinPool is used.
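    For reference, this is roughly how such a dispatcher would be pinned to one thread in configuration (a hedged sketch of a typical akka fork-join dispatcher block; the dispatcher name is made up):

```
my-single-thread-dispatcher {
  type = Dispatcher
  executor = "fork-join-executor"
  fork-join-executor {
    parallelism-min = 1
    parallelism-factor = 1.0
    parallelism-max = 1
  }
}
```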

    Now, the Dispatcher extends BatchingExecutor, which can batch nested tasks together onto the current thread (such as the map (3) in the question, which needs a thread to run).

    Once again, the code is too complex for me to be certain, and I won't investigate further. But effectively, the akka EC can run nested map calls on the parent thread, unlike a standard (java) ForkJoinPool.

    I think this is a clever trick of akka's, not a typical implementation. The documentation of BatchingExecutor says:

    /**
     * Mixin trait for an Executor
     * which groups multiple nested `Runnable.run()` calls
     * into a single Runnable passed to the original
     * Executor. This can be a useful optimization
     * because it bypasses the original context's task
     * queue and keeps related (nested) code on a single
     * thread which may improve CPU affinity. However,
     * if tasks passed to the Executor are blocking
     * or expensive, this optimization can prevent work-stealing
     * and make performance worse. Also, some ExecutionContext
     * may be fast enough natively that this optimization just
     * adds overhead.
     * The default ExecutionContext.global is already batching
     * or fast enough not to benefit from it; while
     * `fromExecutor` and `fromExecutorService` do NOT add
     * this optimization since they don't know whether the underlying
     * executor will benefit from it.
     * A batching executor can create deadlocks if code does
     * not use `scala.concurrent.blocking` when it should,
     * because tasks created within other tasks will block
     * on the outer task completing.
     * This executor may run tasks in any order, including LIFO order.
     * There are no ordering guarantees.
     *
     * WARNING: The underlying Executor's execute-method must not execute the submitted Runnable
     * in the calling thread synchronously. It must enqueue/handoff the Runnable.
     */
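The `scala.concurrent.blocking` remark in that doc is worth illustrating. Here is a sketch (not akka code) on the default `ExecutionContext.global`, whose BlockContext spawns compensation threads for sections marked `blocking`, which is how such pools avoid starving when tasks block:

```scala
import java.util.concurrent.CountDownLatch
import scala.concurrent.{Await, Future, blocking}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object BlockingSketch {
  def run(): Boolean = {
    // more mutually-blocked tasks than the pool's parallelism (number of cores)
    val n = Runtime.getRuntime.availableProcessors * 2
    val latch = new CountDownLatch(n)

    // Every task blocks until all n have started. Without blocking { ... }
    // this could starve the pool; with it, global spawns extra threads.
    val futures = (1 to n).map { _ =>
      Future {
        blocking {
          latch.countDown()
          latch.await()
        }
        1
      }
    }
    Await.result(Future.sequence(futures), 30.seconds).sum == n
  }
}
```

With a plain `ForkJoinPool` created via `fromExecutorService`, `blocking` has no such effect, which matches the behavior observed in the question.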
    

    I was just writing an edit. As I said, it is too complex to change now. But I still want to know how it works with akka.

    When you changed the execution context, is it possible that you suddenly used the same execution context that getData uses? In that case, that would cause the deadlock.

    If I use the same EC for the map (3) as for getData (2), it actually works. So no, I am not mixing ECs.

    What about (1)? Does (1) also use the same ec? That is what would cause the deadlock, i.e. (2) or (3) using the same ec as (1).

    (3) uses the same ec as (1), and that causes the lock. But for some reason it works with akka. My thinking is that either akka does something clever to make it work, or I am doing something wrong (I don't know how ForkJoinPool is supposed to work, but I wouldn't be surprised if, well coded, it could handle this).

    I don't think akka does anything special. Maybe it isn't really parallelism max 1? I think if (1) and (3) used the same ec with a single thread, they would surely deadlock. Can you add your akka configuration?