
How do I save null values from a Dataset to MongoDB?

Tags: mongodb, apache-spark, apache-spark-sql

I have a hard requirement to save null values to MongoDB. I know NoSQL discourages storing nulls, but my business case has one scenario that needs them.

Sample CSV file with a null value:

a,b,c,id
,2,3,A
4,4,4,B

My requirement is that the null in the "a" field is preserved when the row is written. The code that saves the CSV to MongoDB, and the documents it actually produces, are shown below.

Answer:

Saving a Dataset with MongoSpark drops keys whose value is null by default. My workaround is to convert the Dataset to a JavaPairRDD<Object, BSONObject> and write it through the mongo-hadoop connector. The imports, the save method, the driver code, and the Maven dependency follow the question's code and output below.

Drawback: dropping from the high-level Dataset API to low-level RDDs gives up Spark's ability to optimize query plans, so performance is a trade-off.
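(Aside: if the business rules ever allow a sentinel value in place of a true null, the write can stay in the Dataset API and keep those optimizations. A minimal sketch, assuming a hypothetical sentinel of -1 for "a"; it does not satisfy the strict-null requirement and is shown only to illustrate the trade-off.)

    import java.util.Collections;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    // Replace nulls in column "a" with the hypothetical sentinel -1 before
    // MongoSpark.save; with no nulls left, no keys are dropped.
    Dataset<Row> withSentinel = g.na().fill(Collections.<String, Object>singletonMap("a", -1));

The question's original read-and-save code: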

    // Schema for the CSV; "a" must be nullable because the file contains an empty value for it.
    StructType schema = DataTypes.createStructType(new StructField[] {
            DataTypes.createStructField("a",  DataTypes.IntegerType, true),
            DataTypes.createStructField("b",  DataTypes.IntegerType, true),
            DataTypes.createStructField("c",  DataTypes.IntegerType, true),
            DataTypes.createStructField("id", DataTypes.StringType,  true)
    });

    Dataset<Row> g = spark.read()
            .format("csv")
            .schema(schema)
            .option("header", "true")
            .option("inferSchema", "false")
            .load("/home/Documents/SparkLogs/a.csv");

    MongoSpark.save(g.write()
            .option("database", "A")
            .option("collection", "b")
            .mode("overwrite"));

Documents actually written by MongoSpark.save; the null-valued "a" key is silently dropped from the second document:

/* 1 */
{
    "_id" : ObjectId("5d663b6bec20c94c990e6d0c"),
    "a" : 4,
    "b" : 4,
    "c" : 4,
    "id" : "B"
}

/* 2 */
{
    "_id" : ObjectId("5d663b6bec20c94c990e6d0d"),
    "b" : 2,
    "c" : 3,
    "id" : "A"
}
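To confirm that the "a" key is genuinely absent from the second document (rather than stored as a BSON null), the collection can be queried for documents without the field. A minimal sketch using the MongoDB Java driver (assumed to be on the classpath, 3.8+ for countDocuments, connection string assumed to match the mongod used here):

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import org.bson.Document;

    try (MongoClient client = MongoClients.create("mongodb://192.168.0.19:27017")) {
        MongoCollection<Document> coll = client.getDatabase("A").getCollection("b");
        // Matches documents in which no "a" field exists at all.
        long missing = coll.countDocuments(Filters.exists("a", false));
        System.out.println("documents without an \"a\" key: " + missing); // 1 for the sample CSV
    }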


The workaround code:

/** imports ***/
import java.util.UUID;

import org.apache.hadoop.conf.Configuration;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;
import org.bson.BSONObject;
import org.bson.BasicBSONObject;

import com.mongodb.hadoop.MongoOutputFormat;

import scala.Tuple2;
/** imports ***/



private static void saveToMongoDB_With_Null(Dataset<Row> ds, Configuration outputConfig, String[] cols) {
        // Build each document by hand so that null column values are kept as
        // explicit null keys instead of being dropped.
        JavaPairRDD<Object, BSONObject> documents = ds
                .toJavaRDD()
                .mapToPair(row -> {
                    BSONObject doc = new BasicBSONObject();
                    for (String col : cols)
                        doc.put(col, row.getAs(col));
                    // The pair key becomes the document's _id; a random UUID string
                    // matches the _id values in the saved output shown below.
                    return new Tuple2<Object, BSONObject>(UUID.randomUUID().toString(), doc);
                });

        // MongoOutputFormat ignores the path argument; the target database and
        // collection come from outputConfig (mongo.output.uri).
        documents.saveAsNewAPIHadoopFile(
                  "file:///this-is-completely-unused"
                , Object.class
                , BSONObject.class
                , MongoOutputFormat.class
                , outputConfig);
    }



Driver code that wires the mongo-hadoop configuration to the helper:

    Configuration outputConfig = new Configuration();
    outputConfig.set("mongo.output.uri",
                     "mongodb://192.168.0.19:27017/database.collection");
    outputConfig.set("mongo.output.format",
                     "com.mongodb.hadoop.MongoOutputFormat");

    Dataset<Row> g = spark.read()
            .format("csv")
            .schema(schema)
            .option("header", "true")
            .option("inferSchema", "false")
            .load("/home/Documents/SparkLogs/a.csv");

    saveToMongoDB_With_Null(g, outputConfig, g.columns());




Maven dependency for the mongo-hadoop connector:

<!-- https://mvnrepository.com/artifact/org.mongodb.mongo-hadoop/mongo-hadoop-core -->
<dependency>
    <groupId>org.mongodb.mongo-hadoop</groupId>
    <artifactId>mongo-hadoop-core</artifactId>
    <version>2.0.2</version>
</dependency>

Documents written by the workaround; the null in "a" is preserved, and the _id values are the UUID strings supplied as keys:

{
    "_id" : "a62e9b02-da97-493b-9563-fc19054df60e",
    "a" : null,
    "b" : 2,
    "c" : 3,
    "id" : "A"
}

{
    "_id" : "fed373a8-e671-44a4-8b85-7c7e2ff59585",
    "a" : 4,
    "b" : 4,
    "c" : 4,
    "id" : "B"
}
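The explicit nulls can be verified the same way: Filters.type with BsonType.NULL matches only fields stored as BSON null, whereas a query like {a: null} would also match documents missing the field entirely. A sketch under the same driver assumptions, pointed at the database and collection from mongo.output.uri above:

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.model.Filters;
    import org.bson.BsonType;
    import org.bson.Document;

    try (MongoClient client = MongoClients.create("mongodb://192.168.0.19:27017")) {
        MongoCollection<Document> coll = client.getDatabase("database").getCollection("collection");
        // Matches only documents whose "a" field is an explicit BSON null.
        long explicitNulls = coll.countDocuments(Filters.type("a", BsonType.NULL));
        System.out.println("documents with a == null: " + explicitNulls); // 1 for the sample CSV
    }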