Read from multiple MongoDB databases to form Datasets

mongodb, apache-spark, apache-spark-sql

I want to create two Datasets from two different Mongo databases. I am currently using the official MongoDB Spark connector. The SparkSession is started as follows:

SparkConf sparkConf = new SparkConf().setMaster("yarn").setAppName("test")
                        .set("spark.mongodb.input.partitioner", "MongoShardedPartitioner")
                        .set("spark.mongodb.input.uri", "mongodb://192.168.77.62/db1.coll1")
                        .set("spark.sql.crossJoin.enabled", "true");
SparkSession sparkSession = SparkSession.builder().appName("test1").config(sparkConf).getOrCreate();
How would I go about changing spark.mongodb.input.uri for the second database? I have already tried changing the SparkSession's runtime config, and also using a ReadConfig with readOverrides, but neither works.

Method 1:

sparkSession.conf().set("spark.mongodb.input.uri", "mongodb://192.168.77.63/db1.coll2");
Method 2:

Map<String, String> readOverrides = new HashMap<String, String>();
readOverrides.put("uri", "192.168.77.63/db1.coll2");
ReadConfig readConfig = ReadConfig.create(sparkSession).withOptions(readOverrides);
Dataset<Position> ds = MongoSpark.load(sparkSession, readConfig, Position.class);
Edit 2:

public static void main(String[] args) {
    SparkSession sparkSession = SparkSession.builder().appName("test")
            .config("spark.worker.cleanup.enabled", "true").config("spark.scheduler.mode", "FAIR").getOrCreate();
    String mongoURI2 = "mongodb://192.168.77.63:27017/db1.coll1";
    Map<String, String> readOverrides1 = new HashMap<String, String>();
    readOverrides1.put("uri", mongoURI2);
    ReadConfig readConfig1 = ReadConfig.create(sparkSession).withOptions(readOverrides1);
    MongoSpark.load(sparkSession,readConfig1,Position.class).show();
}
However, this throws the same exception as in the previous edit.

build.sbt:
libraryDependencies += "org.mongodb.spark" % "mongo-spark-connector_2.11" % "2.0.0"
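
For reference, a minimal Java sketch of the combination that seems to be needed with connector 2.0.x: keep a default spark.mongodb.input.uri on the SparkSession (ReadConfig.create(sparkSession) appears to require one) and put a full mongodb://host/db.collection string, scheme included, into the override map. The hosts, databases, collections and class names below are placeholders, and the two-argument MongoSpark.load(SparkSession, ReadConfig) overload (returning a Dataset<Row>) is assumed.

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import com.mongodb.spark.MongoSpark;
import com.mongodb.spark.config.ReadConfig;

public class TwoMongoReads {
    public static void main(String[] args) {
        // Default input uri so that ReadConfig.create(sparkSession) can be built at all.
        SparkSession sparkSession = SparkSession.builder().appName("test")
                .config("spark.mongodb.input.uri", "mongodb://192.168.77.62/db1.coll1")
                .getOrCreate();

        // First dataset: simply uses the default uri from the session config.
        ReadConfig readConfig1 = ReadConfig.create(sparkSession);
        Dataset<Row> ds1 = MongoSpark.load(sparkSession, readConfig1);

        // Second dataset: override with a full connection string (scheme, host,
        // database and collection), not just "host/db.coll".
        Map<String, String> readOverrides = new HashMap<String, String>();
        readOverrides.put("uri", "mongodb://192.168.77.63/db1.coll2");
        ReadConfig readConfig2 = ReadConfig.create(sparkSession).withOptions(readOverrides);
        Dataset<Row> ds2 = MongoSpark.load(sparkSession, readConfig2);

        ds1.show();
        ds2.show();
    }
}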

Now you can pass uri1 and uri2 as arguments to spark-submit, for example /usr/local/spark/bin/spark-submit path-to-myjar.app.jar MongoUri1 MongoUri2 sparkMasterUri, and then create a config for each uri and read it with spark.read.mongo(READdb); the full Scala example is shown further down.

Setting the uri in the ReadConfig alone is of no use: the Spark Mongo connector uses this information when the ReadConfig.create() method is called. So try setting it on the SparkContext before using it, as shown below:

SparkContext context = spark.sparkContext();
context.conf().set("spark.mongodb.input.uri","mongodb://host:ip/database.collection");
JavaSparkContext jsc = new JavaSparkContext(context);
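
Applied to the two-database case, that suggestion would look roughly like this in Java. This is a sketch only: it assumes, as the answer states, that the connector reads the conf each time ReadConfig.create() is called, and it reuses the placeholder hosts from the question.

import org.apache.spark.SparkContext;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import com.mongodb.spark.MongoSpark;
import com.mongodb.spark.config.ReadConfig;

public class ReadTwoDatabases {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("test").getOrCreate();
        SparkContext context = spark.sparkContext();

        // Point the input uri at the first database, then build its ReadConfig.
        context.conf().set("spark.mongodb.input.uri", "mongodb://192.168.77.62/db1.coll1");
        Dataset<Row> ds1 = MongoSpark.load(spark, ReadConfig.create(spark));

        // Switch the uri to the second database and build a second ReadConfig.
        context.conf().set("spark.mongodb.input.uri", "mongodb://192.168.77.63/db1.coll2");
        Dataset<Row> ds2 = MongoSpark.load(spark, ReadConfig.create(spark));

        ds1.show();
        ds2.show();
    }
}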

But the SparkSession needs a MongoDB uri when the ReadConfig is initialized. I tried using two ReadConfigs, but it fails at runtime saying it needs a uri for the SparkSession, and if I pass a uri to the SparkSession I cannot override it with a ReadConfig. Yet if I do not set the mongo config on the SparkSession, my ReadConfig is not created either. Please look again at the edits I made in the question: I am not setting any mongo config on the Spark session. I have edited the code so you can see the full application; the mongo config is set independently of the Spark config. I made a new edit, please take a look. I am using Java, and I just want to supply the uri through the ReadConfig without making any change to the SparkSession; could you check that? The code also uses the sparkSession.read.mongo() method and I could not find a Java equivalent for it. Could you help me with that?
package com.example.app

import com.mongodb.spark.config.{ReadConfig, WriteConfig}
import com.mongodb.spark.sql._
import org.apache.spark.sql.SparkSession

object App {

  def main(args: Array[String]): Unit = {

    val MongoUri1 = args(0).toString
    val MongoUri2 = args(1).toString
    val SparkMasterUri = args(2).toString

    // Build a "mongodb://host:port/database.collection" uri string.
    def makeMongoURI(uri: String, database: String, collection: String) =
      s"${uri}/${database}.${collection}"

    val mongoURI1 = s"mongodb://${MongoUri1}:27017"
    val mongoURI2 = s"mongodb://${MongoUri2}:27017"

    val CONFdb1 = makeMongoURI(s"${mongoURI1}", "MyCollection1", "df")
    val CONFdb2 = makeMongoURI(s"${mongoURI2}", "MyCollection2", "df")

    // One ReadConfig/WriteConfig per database, each carrying its own uri.
    val WRITEdb1: WriteConfig = WriteConfig(scala.collection.immutable.Map("uri" -> CONFdb1))
    val READdb1: ReadConfig = ReadConfig(Map("uri" -> CONFdb1))

    val WRITEdb2: WriteConfig = WriteConfig(scala.collection.immutable.Map("uri" -> CONFdb2))
    val READdb2: ReadConfig = ReadConfig(Map("uri" -> CONFdb2))

    val spark = SparkSession
      .builder
      .appName("AppMongo")
      .config("spark.worker.cleanup.enabled", "true")
      .config("spark.scheduler.mode", "FAIR")
      .getOrCreate()

    val df1 = spark.read.mongo(READdb1)
    val df2 = spark.read.mongo(READdb2)
    df1.write.mode("overwrite").mongo(WRITEdb1)
    df2.write.mode("overwrite").mongo(WRITEdb2)
  }

}
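
Since the comment thread above asks for a Java counterpart of spark.read.mongo(), here is a rough Java sketch of the same idea. It assumes that ReadConfig.create(java.util.Map) builds the config purely from the options map (mirroring ReadConfig(Map("uri" -> ...)) in the Scala code) and that the two-argument MongoSpark.load(SparkSession, ReadConfig) overload returns a Dataset<Row>; the database and collection names are the same placeholders as above.

package com.example.app;

import java.util.HashMap;
import java.util.Map;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import com.mongodb.spark.MongoSpark;
import com.mongodb.spark.config.ReadConfig;

public class AppJava {

    public static void main(String[] args) {
        // Same idea as the Scala answer: the uris come in as program arguments,
        // so no mongo settings are needed on the SparkSession itself.
        String mongoUri1 = "mongodb://" + args[0] + ":27017/MyCollection1.df";
        String mongoUri2 = "mongodb://" + args[1] + ":27017/MyCollection2.df";

        SparkSession spark = SparkSession.builder()
                .appName("AppMongo")
                .config("spark.worker.cleanup.enabled", "true")
                .config("spark.scheduler.mode", "FAIR")
                .getOrCreate();

        Map<String, String> options1 = new HashMap<String, String>();
        options1.put("uri", mongoUri1);
        Map<String, String> options2 = new HashMap<String, String>();
        options2.put("uri", mongoUri2);

        // Build each ReadConfig directly from its options map, one per database.
        ReadConfig readConfig1 = ReadConfig.create(options1);
        ReadConfig readConfig2 = ReadConfig.create(options2);

        // MongoSpark.load(SparkSession, ReadConfig) plays the role of
        // spark.read.mongo(readConfig) and returns a Dataset<Row>.
        Dataset<Row> df1 = MongoSpark.load(spark, readConfig1);
        Dataset<Row> df2 = MongoSpark.load(spark, readConfig2);

        df1.show();
        df2.show();
    }
}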