SQL Server: computing upper and lower bounds for Spark JDBC partitions
I am reading data from MS SQL Server with Spark JDBC and Scala, and I would like to partition this data by a specified column. I don't want to set the lower and upper bounds of the partition column manually. Can I read some kind of minimum and maximum value of this field and set those as the lower/upper bounds? Also, I want this query to read all of the data from the database. At the moment, the querying mechanism looks like this:
def jdbcOptions() = Map[String,String](
"driver" -> "db.driver",
"url" -> "db.url",
"user" -> "db.user",
"password" -> "db.password",
"customSchema" -> "db.custom_schema",
"dbtable" -> "(select * from TestAllData where dayColumn > 'dayValue') as subq",
"partitionColumn" -> "db.partitionColumn",
"lowerBound" -> "1",
"upperBound" -> "30",
"numPartitions" -> "5"
)
val dataDF = sparkSession
.read
.format("jdbc")
.options(jdbcOptions())
.load()
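For context on what these options do: lowerBound and upperBound do not filter rows; Spark uses them, together with numPartitions, only to split the range of partitionColumn into strides and issue one query per partition. Below is a rough Scala illustration of that stride logic (a sketch, not Spark's exact internal code; the column name partitionCol is a placeholder):

// Rough sketch of how Spark derives per-partition WHERE clauses from
// lowerBound/upperBound/numPartitions (simplified, not the exact internals)
val lowerBound = 1L
val upperBound = 30L
val numPartitions = 5
val stride = (upperBound - lowerBound) / numPartitions
val predicates = (0 until numPartitions).map { i =>
  val start = lowerBound + i * stride
  if (i == 0) s"partitionCol < ${start + stride} or partitionCol is null"
  else if (i == numPartitions - 1) s"partitionCol >= $start"
  else s"partitionCol >= $start and partitionCol < ${start + stride}"
}
predicates.foreach(println) // one WHERE clause per JDBC partition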
If the partition column (db.partitionColumn here) is a numeric or date field, you can retrieve its bounds with the following code:
def jdbcBoundOptions() = Map[String,String](
"driver" -> "db.driver",
"url" -> "db.url",
"user" -> "db.user",
"password" -> "db.password",
"customSchema" -> "db.custom_schema",
"dbtable" -> "(select max(db.partitionColumn), min(db.partitionColumn) from TestAllData where dayColumn > 'dayValue') as subq",
"numPartitions" -> "1"
)
val boundRow = sparkSession
.read
.format("jdbc")
.options(jdbcBoundOptions())
.load()
.first()
val maxDay = boundRow.getInt(0)
val minDay = boundRow.getInt(1)
Note that numPartitions must be 1 here; in that case we don't need to specify the partitioning details described in the Spark documentation.
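The snippet above reads the bounds with getInt because the column is an int (see the comments at the end). If the partition column were a date instead, the same pattern would apply with getDate; a minimal sketch under that assumption (the max/min subquery would then have to target that date column):

import java.sql.Date

val dateBoundRow = sparkSession
   .read
   .format("jdbc")
   .options(jdbcBoundOptions()) // assumes the max/min subquery targets the date column
   .load()
   .first()
val maxDate: Date = dateBoundRow.getDate(0) // max comes first in the subquery
val minDate: Date = dateBoundRow.getDate(1)
// java.sql.Date.toString yields "yyyy-MM-dd", which the Spark JDBC source
// accepts as lowerBound/upperBound for date partition columns (Spark 2.4+)
val lower = minDate.toString
val upper = maxDate.toString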
Finally, you can use the retrieved bounds in the original query:
def jdbcOptions() = Map[String,String](
"driver" -> "db.driver",
"url" -> "db.url",
"user" -> "db.user",
"password" -> "db.password",
"customSchema" -> "db.custom_schema",
"dbtable" -> "(select * from TestAllData where dayColumn > 'dayValue') as subq",
"partitionColumn" -> "db.partitionColumn",
"lowerBound" -> minDay.toString,
"upperBound" -> maxDay.toString,
"numPartitions" -> "5"
)
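Putting it together, the final load is identical to the original one, just with the computed bounds (a sketch; the db.* values are the same placeholders used throughout):

val dataDF = sparkSession
   .read
   .format("jdbc")
   .options(jdbcOptions()) // now built with minDay/maxDay as the bounds
   .load()
// With a range wider than numPartitions, this typically yields 5 partitions
println(s"Number of partitions: ${dataDF.rdd.getNumPartitions}")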
Hi @Cassie, is db.partitionColumn a numeric column?
@AlexandrosBiratsis Yes, the data type of partitionColumn is int.