Scala: joining multiple DataFrames horizontally


I have the following DataFrames:

val count: DataFrame = spark.sql(s"select 1, '$database_name', '$table_name', count(*) from $table_name")

Output:

1,stock,T076p,4332

val truecount: DataFrame = spark.sql(s"select 1, count(*) from $table_name where flag = true")

Output:

4112 or 4332 (they can be the same)

val falsecount: DataFrame = spark.sql(s"select 1, count(*) from $table_name where flag = false")

Output:

4330

and a fourth count DataFrame:

Output:

4332

Question: how do I join the DataFrames above into one result DataFrame that produces the output below?

stock,T076p,4332,4332,4330

Here the commas represent column separators.

Also, I added the constant 1 to each DataFrame so that I have a key to join the DataFrames on (so the 1 is not mandatory in the result). A minimal sketch of this trick follows.
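
For illustration (not from the original post), here is a minimal sketch of the join-on-a-constant-key idea; it assumes a SparkSession named spark, and the frame and column names are made up:

import spark.implicits._

// hypothetical single-row frames standing in for the real count queries
val a = Seq((1, 4332)).toDF("one", "truecount")
val b = Seq((1, 4330)).toDF("one", "falsecount")

// the constant key matches in both frames, so the rows line up side by side
a.join(b, "one").show()
// +---+---------+----------+
// |one|truecount|falsecount|
// +---+---------+----------+
// |  1|     4332|      4330|
// +---+---------+----------+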


Take a look at this example. I have simulated your requirement with the dummy DataFrames below.


Have you tried the following? If you have any questions, feel free to ask.
package com.examples

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.SparkSession

object MultiDFJoin {
  def main(args: Array[String]) {
    import org.apache.spark.sql.functions._
    Logger.getLogger("org").setLevel(Level.OFF)

    val spark = SparkSession.builder
      .master("local")
      .appName(this.getClass.getName)
      .getOrCreate()
    import spark.implicits._
    val columns = Array("column1", "column2", "column3", "column4")
    val df1 = Seq((1, "stock", "T076p", 4332)).toDF(columns: _*).as("first")
    df1.show()
    val df2 = Seq((1, 4332)).toDF(columns.slice(0, 2): _*).as("second")
    df2.show()
    val df3 = Seq((1, 4330)).toDF(columns.slice(0, 2): _*).as("third")
    df3.show()
    val df4 = Seq((1, 4332)).toDF(columns.slice(0, 2): _*).as("four")
    df4.show()
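    // join the four single-row frames on the constant key "column1"; the
    // aliases (first, second, third, four) keep each frame's column2 addressable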
    val finalcsv = df1.join(df2, col("first.column1") === col("second.column1")).selectExpr("first.*", "second.column2")
      .join(df3, Seq("column1")).selectExpr("first.*", "third.column2")
      .join(df4, Seq("column1"))
      .selectExpr("first.*", "third.column2", "four.column2")
      .drop("column1").collect.mkString(",") // this column used for just joining hence dropping
    print(finalcsv)
  }
}
Result:

+-------+-------+-------+-------+
|column1|column2|column3|column4|
+-------+-------+-------+-------+
|      1|  stock|  T076p|   4332|
+-------+-------+-------+-------+

+-------+-------+
|column1|column2|
+-------+-------+
|      1|   4332|
+-------+-------+

+-------+-------+
|column1|column2|
+-------+-------+
|      1|   4330|
+-------+-------+

+-------+-------+
|column1|column2|
+-------+-------+
|      1|   4332|
+-------+-------+

[stock,T076p,4332,4330,4332]
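
As a side note (not part of the original answer): because every frame here has exactly one row, an explicit crossJoin produces the same horizontal concatenation without needing the dummy key at all. A minimal sketch, assuming the same df1 to df4 and imports as in the code above; the truecount/falsecount/zerocount labels are made up:

val alt = df1.drop("column1")
  .crossJoin(df2.select(col("column2").as("truecount")))
  .crossJoin(df3.select(col("column2").as("falsecount")))
  .crossJoin(df4.select(col("column2").as("zerocount")))
alt.show()
// +-------+-------+-------+---------+----------+---------+
// |column2|column3|column4|truecount|falsecount|zerocount|
// +-------+-------+-------+---------+----------+---------+
// |  stock|  T076p|   4332|     4332|      4330|     4332|
// +-------+-------+-------+---------+----------+---------+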