Java Spark: forEachPartition not working


I want to use foreachPartition to save data to a database, but I have noticed that this function never runs:

RDD2.foreachRDD(new VoidFunction<JavaRDD<Object>>() {
    @Override
    public void call(JavaRDD<Object> rdd) throws Exception {
        rdd.foreachPartition(new VoidFunction<Iterator<Object>>() {
            @Override
            public void call(Iterator<Object> partition) throws Exception {
                System.out.println("test");
            }
        });
    }
});
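
Since the stated goal is saving each partition to a database, here is a minimal sketch of the usual per-partition write pattern: open one connection per partition on the executor, drain the iterator, then close the connection. The JDBC URL, table name, and INSERT statement below are hypothetical placeholders, not taken from the original code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Iterator;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.VoidFunction;

RDD2.foreachRDD(new VoidFunction<JavaRDD<Object>>() {
    @Override
    public void call(JavaRDD<Object> rdd) throws Exception {
        rdd.foreachPartition(new VoidFunction<Iterator<Object>>() {
            @Override
            public void call(Iterator<Object> records) throws Exception {
                // One connection per partition, created on the executor
                // (connections are not serializable, so they cannot be
                // created on the driver and shipped to the workers).
                // The URL and SQL below are placeholders.
                Connection conn = DriverManager.getConnection("jdbc:postgresql://host/db");
                try {
                    PreparedStatement stmt =
                        conn.prepareStatement("INSERT INTO events(value) VALUES (?)");
                    while (records.hasNext()) {
                        stmt.setString(1, records.next().toString());
                        stmt.executeUpdate();
                    }
                    stmt.close();
                } finally {
                    conn.close();
                }
            }
        });
    }
});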

As you can see in my log, it says Spark is waiting for a receiver to be stopped. But my receiver cannot be stopped, and if the sender has to be stopped before anything is processed, what is the point of Spark Streaming?
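
The "Waiting for receiver to be stopped" line is the receiver supervisor's normal lifecycle message, not an error. A common reason batches produce no output, though, is giving the application too few cores: a receiver permanently occupies one core, so a local master needs at least two (the Spark Streaming guide warns against local or local[1] for receiver-based streams). A minimal sketch of such a setup; the app name and batch interval are placeholders:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// "local[2]" or higher: one core for the receiver,
// at least one core left over for batch processing.
SparkConf conf = new SparkConf().setAppName("BrokerSpout").setMaster("local[2]");
JavaStreamingContext context = new JavaStreamingContext(conf, Durations.seconds(1));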

Can you verify the values you are getting inside foreachRDD()? I have tested your code locally and it works fine for me.

I don't know of a way to print the values, but I tried adding System.out.println(t); right after the call(JavaRDD<Object> t) method, and it printed "MapPartitionsRDD[4] at map at BrokerSpout.java:191". I think my problem is related to foreachPartition.

There is a print() method on DStream, so just call it before context.start(); and context.awaitTermination(); to make sure values are being fed into it.

Ah yes, I tried RDD2.print(); but it shows nothing either when I run it with foreachPartition.

Just for the sake of testing, and to make sure data is flowing through the pipeline, comment out all the logic in foreachRDD() and leave only that print() call.
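
Building on that suggestion, a quick way to confirm that data actually reaches the pipeline is to print each batch and its size before starting the context. Note also that a System.out.println inside foreachPartition executes on the executors, so on a cluster its output lands in the executor logs rather than the driver console. A minimal debugging sketch, assuming RDD2 is a JavaDStream<Object> and context is the JavaStreamingContext:

// Prints the first ten elements of each batch to the driver's stdout.
RDD2.print();

RDD2.foreachRDD(new VoidFunction<JavaRDD<Object>>() {
    @Override
    public void call(JavaRDD<Object> rdd) throws Exception {
        // count() is an action, so it forces evaluation and
        // reports each batch's size on the driver.
        System.out.println("batch size = " + rdd.count());
    }
});

context.start();
context.awaitTermination();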
16/05/30 10:18:41 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
16/05/30 10:18:41 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
16/05/30 10:18:41 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2946 bytes)
16/05/30 10:18:41 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
16/05/30 10:18:41 INFO SparkContext: Starting job: foreachPartition at BrokerSpout.java:265
16/05/30 10:18:41 INFO RecurringTimer: Started timer for BlockGenerator at time 1464596321600
-------------------------------------------
Time: 1464596321500 ms
-------------------------------------------

16/05/30 10:18:41 INFO ReceivedBlockTracker: Deleting batches ArrayBuffer()
16/05/30 10:18:41 INFO ReceiverTracker: Registered receiver for stream 0 from 10.25.30.41:59407
16/05/30 10:18:41 INFO InputInfoTracker: remove old batch metadata: 
16/05/30 10:18:41 INFO ReceiverSupervisorImpl: Starting receiver
16/05/30 10:18:41 INFO ReceiverSupervisorImpl: Called receiver onStart
16/05/30 10:18:41 INFO ReceiverSupervisorImpl: Waiting for receiver to be stopped
16/05/30 10:18:42 INFO SparkContext: Starting job: foreachPartition at BrokerSpout.java:265
16/05/30 10:18:42 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
16/05/30 10:18:42 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks