
Apache Spark: joining the same dataset multiple times on different columns


I have the following two datasets. The first maps country codes to country names:


code,name
IN,India
US,United States
UK,United Kingdom
SG,Singapore 
The second has three code columns per row:

id,name,code1,code2,code3
1,abc,UK,SG,US
2,efg,SG,UK,US

Can code1, code2 and code3 each be joined against the first dataset, so that we get the corresponding name for every column? The expected output is:


id,name,code1desc,code2desc,code3desc
1,abc,United Kingdom,Singapore,United States
2,efg,Singapore,United Kingdom,United States
The join on the first column works, but the join on the second column fails. This first join runs fine:

Dataset<Row> code1 = people.join(countries, people.col("code1").equalTo(countries.col("code")),"left_outer").withColumnRenamed("name","code1desc");
    code1.show();
The following second join fails:

Dataset<Row> code2 = code1.join(countries, code1.col("code2").equalTo(countries.col("code")),"left_outer");
    code2.show();
For each "code[i]" column of people, a join with countries is needed; this can be done in a loop over the columns in Scala. Renaming countries' "name" column before each join and dropping both key columns afterwards avoids the ambiguous column references that make a second join on the same countries DataFrame fail:

// imports (assuming a SparkSession named spark, as in spark-shell)
import org.apache.spark.sql.functions.col
import spark.implicits._

// data
val countries = List(
  ("IN", "India"),
  ("US", "United States"),
  ("UK", "United Kingdom"),
  ("SG", "Singapore")
).toDF("code", "name")

val people = List(
  (1, "abc", "UK", "SG", "US"),
  (2, "efg", "SG", "UK", "US")
).toDF("id", "name", "code1", "code2", "code3")

// action: join countries once per code column, renaming countries' "name"
// to "<column>desc" and dropping the key columns after each join
val countryColumns = List("code1", "code2", "code3")
val result = countryColumns.foldLeft(people)((people, column) =>
  people.alias("p")
    .join(countries.withColumnRenamed("name", column + "desc").alias("c"),
      col("p." + column) === $"c.code",
      "left_outer")
    .drop(column, "code")
)
The result is:

+---+----+--------------+--------------+-------------+
|id |name|code1desc     |code2desc     |code3desc    |
+---+----+--------------+--------------+-------------+
|1  |abc |United Kingdom|Singapore     |United States|
|2  |efg |Singapore     |United Kingdom|United States|
+---+----+--------------+--------------+-------------+
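The foldLeft starts from people and threads the intermediate DataFrame through one join per code column; because the name column is renamed and both key columns are dropped on every iteration, the column names stay unambiguous at each step.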

Note: if the countries DataFrame is small, a broadcast join can be used for better performance.
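For illustration, a minimal sketch of the same fold with an explicit broadcast hint (the name resultBroadcast is just illustrative; broadcast comes from org.apache.spark.sql.functions):

import org.apache.spark.sql.functions.broadcast

// same fold as above, but hinting Spark to broadcast the small countries side
val resultBroadcast = countryColumns.foldLeft(people)((people, column) =>
  people.alias("p")
    .join(broadcast(countries.withColumnRenamed("name", column + "desc")).alias("c"),
      col("p." + column) === col("c.code"),
      "left_outer")
    .drop(column, "code")
)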

If your country code DataFrame is small enough, you can use a udf instead. First we collect the codes into a map, then apply the udf on each code column. Here code_df is your country code DataFrame and data_df is your data:

import org.apache.spark.sql.functions._
import spark.implicits._ // for the $"..." column syntax

// collect the country codes into a driver-side map: code -> Row(code, name)
val mapcode = code_df.rdd.keyBy(row => row(0)).collectAsMap()
println("Showing 10 rows of mapcode")

for ((k, v) <- mapcode) {
  printf("key: %s, value: %s\n", k, v)
}

// look up a code and return the country name (column 1 of the matching row)
def getCode(code: String): String =
  mapcode(code).getAs[String](1)

val getcode_udf = udf(getCode _)

// add one description column per code column
val newdatadf = data_df.withColumn("code1desc", getcode_udf($"code1"))
  .withColumn("code2desc", getcode_udf($"code2"))
  .withColumn("code3desc", getcode_udf($"code3"))

println("Showing 10 rows of final result")
newdatadf.show(10, truncate = false)
Showing 10 rows of mapcode
key: IN, value: [IN,India]
key: SG, value: [SG,Singapore]
key: UK, value: [UK,United Kingdom]
key: US, value: [US,United States]
Showing 10 rows of final result
+---+----+-----+-----+-----+--------------+--------------+-------------+
|id |name|code1|code2|code3|code1desc     |code2desc     |code3desc    |
+---+----+-----+-----+-----+--------------+--------------+-------------+
|1  |abc |UK   |SG   |US   |United Kingdom|Singapore     |United States|
|2  |efg |SG   |UK   |US   |Singapore     |United Kingdom|United States|
+---+----+-----+-----+-----+--------------+--------------+-------------+
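One caveat compared with the left_outer join approach: mapcode(code) throws a NoSuchElementException for codes that have no match in code_df. A minimal null-safe variant (the names getCodeSafe and getcode_safe_udf are just illustrative) could look like:

// return null for unknown codes instead of throwing
def getCodeSafe(code: String): String =
  mapcode.get(code).map(_.getAs[String](1)).orNull

val getcode_safe_udf = udf(getCodeSafe _)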