
MongoDB Spark - How to create a new RDD inside map()? (SparkContext is null on the executors)


I have an application that connects to MongoDB via the MongoDB Spark connector. My code crashes because the SparkContext is null on the executors. Essentially, I read data from MongoDB and process it, and that processing produces additional queries that need to be sent to MongoDB. The final step is to save the data returned by those additional queries. The code I am using is:

    JavaMongoRDD<Document> rdd = MongoSpark.load(sc);
    JavaMongoRDD<Document> aggregatedRdd = rdd.withPipeline(...);
    JavaPairRDD<String, Document> pairRdd = aggregatedRdd
            .mapToPair((document) -> new Tuple2<>(document.getString("_id"), document));
    JavaPairRDD<String, List<Document>> mergedRdd = pairRdd.aggregateByKey(new LinkedList<Document>(),
            combineFunction, mergeFunction);

    JavaRDD<Tuple2<String, List<Tuple2<Date, Date>>>> dateRdd = mergedRdd.map(...);

    //at this point dateRdd contains key/value pairs of:
    //Key: a MongoDB document ID (String)
    //Value: List of Tuple<Date, Date> which are date ranges (start time and end time). 

    //For each of that date ranges I want to retrieve the data out of MongoDB
    //and, for now, I just want to save that data

    dateRdd.foreachPartition(new VoidFunction<Iterator<Tuple2<String, List<Tuple2<Date, Date>>>>>() {
        @Override
        public void call(Iterator<Tuple2<String, List<Tuple2<Date, Date>>>> partitionIterator) throws Exception {
            while (partitionIterator.hasNext()) {
                Tuple2<String, List<Tuple2<Date, Date>>> tuple = partitionIterator.next();
                String fileName = tuple._1;
                List<Tuple2<Date, Date>> dateRanges = tuple._2;

                for (Tuple2<Date, Date> dateRange : dateRanges) {
                    Date startDate = dateRange._1;
                    Date endDate = dateRange._2;

                    Document aggregationDoc = Document.parse("{ $match: { ts: {$lt: new Date(" + startDate.getTime()
                            + "), $gt: new Date(" + endDate.getTime() + ")}, root_document: \"" + fileName
                            + "\", signals: { $elemMatch: { signal: \"SomeValue\" } } } }");


                    //this call will use the initial MongoSpark rdd with the aggregation pipeline that just got created.
                    //this will get sent to MongoDB 
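                    //NOTE: this is the call that fails. The MongoRDD captured by this closure is
                    //deserialized on the executors with a null SparkContext (the context is not
                    //serializable), so withPipeline() throws the exception shown at the end of this post.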
                    JavaMongoRDD<Document> filteredSignalRdd = rdd.withPipeline(Arrays.asList(aggregationDoc));

                    String outputFileName = String.format("output_data_%s_%d-%d", fileName,
                            startDate.getTime(), endDate.getTime());
                    filteredSignalRdd.saveAsTextFile(outputFileName);
                }
            }
        }
    }); 
What I expect from my application is illustrated in the diagram below:

What is the problem here, and how can I achieve this "nested", asynchronous creation of new RDDs?

How can I access the MongoSpark "context" from the executors? The MongoSpark library needs access to the SparkContext, which is not available on the executors.


Or do I need to bring all the data back to the driver again and have the driver send the new calls to the MongoSpark "context"? I can see how that could work, but it would have to happen asynchronously, i.e. whenever a partition has finished processing its data and has a query ready, it should be pushed to the driver, which then starts the new query. How can that be done?

This is expected and it is not going to change. Spark does not support:

  • nested RDDs
  • nested transformations
  • nested actions
  • accessing the SparkContext or SparkSession from actions or transformations

In a case like this you can probably use a standard Mongo client instead.
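
For example, a rough sketch of what that could look like for the foreachPartition step above, using the plain MongoDB Java driver instead of MongoSpark. The connection string, database and collection names are placeholders, a driver version providing com.mongodb.client.MongoClients (3.7+) is assumed, and the filter simply mirrors the $match stage from the question:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;

    dateRdd.foreachPartition(partitionIterator -> {
        //One client per partition, created directly on the executor, so no SparkContext is needed.
        try (MongoClient client = MongoClients.create("mongodb://mongo-host:27017")) {
            MongoCollection<Document> collection =
                    client.getDatabase("mydb").getCollection("signals");

            while (partitionIterator.hasNext()) {
                Tuple2<String, List<Tuple2<Date, Date>>> tuple = partitionIterator.next();
                String fileName = tuple._1;

                for (Tuple2<Date, Date> dateRange : tuple._2) {
                    //Same conditions as the aggregation pipeline in the question.
                    for (Document doc : collection.find(Filters.and(
                            Filters.lt("ts", dateRange._1),
                            Filters.gt("ts", dateRange._2),
                            Filters.eq("root_document", fileName),
                            Filters.elemMatch("signals", Filters.eq("signal", "SomeValue"))))) {
                        //write doc wherever the results need to go, e.g. a file or another collection
                    }
                }
            }
        }
    });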

I'm not quite sure I understand what you mean. Are you suggesting that each executor use a standard Mongo client inside the foreachPartition block, or, better, inside mapPartitions, and then return the newly loaded data so that I can work with it afterwards?

Yes, if you need to query based on data that is already in Spark, I think I would create one MongoDB connection per executor environment.

Do you have an example of how to set up such an executor-local variable? I don't think I should open one connection to the sharded cluster per partition, since I might end up with a lot of partitions. Please see:

I'm not familiar with Mongo on the Java side. If the client is thread-safe, or provides connection pooling, you can try a singleton; it will be shared between all executor threads. Otherwise: rdd.mapPartitions(iter => { createClient; processIterWithClient })
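
In Java, a hedged sketch of that mapPartitions idea might look like the following, with the same placeholder connection details as above; "createClient" and "processIterWithClient" from the comment are simply inlined, and Spark 2.x is assumed, where the function returns an Iterator:

    JavaRDD<Document> loadedRdd = dateRdd.mapPartitions(partitionIterator -> {
        //createClient: one connection per partition (or a lazily initialised singleton
        //shared by all threads of the executor, if the client is thread-safe).
        MongoClient client = MongoClients.create("mongodb://mongo-host:27017");
        MongoCollection<Document> collection = client.getDatabase("mydb").getCollection("signals");

        //processIterWithClient: run the per-range queries and collect the matching documents.
        List<Document> results = new ArrayList<>();
        while (partitionIterator.hasNext()) {
            Tuple2<String, List<Tuple2<Date, Date>>> tuple = partitionIterator.next();
            for (Tuple2<Date, Date> dateRange : tuple._2) {
                collection.find(Filters.and(
                        Filters.lt("ts", dateRange._1),
                        Filters.gt("ts", dateRange._2),
                        Filters.eq("root_document", tuple._1)))
                    .into(results);
            }
        }
        client.close();
        return results.iterator(); //on Spark 1.x the function returns an Iterable: return results;
    });

The resulting loadedRdd is an ordinary RDD, so it can be saved afterwards with the usual actions (for example saveAsTextFile); note that this sketch buffers each partition's results in memory before returning them.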

For reference, the exception thrown by the original code is:

Job aborted due to stage failure: Task 23 in stage 2.0 failed 4 times, most recent failure: Lost task 23.3 in stage 2.0 (TID 501, hadoopb24): java.lang.IllegalArgumentException: requirement failed: RDD transformation requires a non-null SparkContext.
Unfortunately SparkContext in this MongoRDD is null.
This can happen after MongoRDD has been deserialized.
SparkContext is not Serializable, therefore it deserializes to null.
RDD transformations are not allowed inside lambdas used in other RDD transformations.
    at scala.Predef$.require(Predef.scala:233)
    at com.mongodb.spark.rdd.MongoRDD.checkSparkContext(MongoRDD.scala:170)
    at com.mongodb.spark.rdd.MongoRDD.copy(MongoRDD.scala:126)
    at com.mongodb.spark.rdd.MongoRDD.withPipeline(MongoRDD.scala:116)
    at com.mongodb.spark.rdd.api.java.JavaMongoRDD.withPipeline(JavaMongoRDD.scala:46)