Comparing two array columns in Scala Spark


I have a DataFrame in the format given below:

movieId1 | genreList1              | genreList2
--------------------------------------------------
1        |[Adventure,Comedy]       |[Adventure]
2        |[Animation,Drama,War]    |[War,Drama]
3        |[Adventure,Drama]        |[Drama,War]
and I am trying to create another flag column that shows whether genreList2 is a subset of genreList1:

movieId1 | genreList1              | genreList2        | Flag
---------------------------------------------------------------
1        |[Adventure,Comedy]       | [Adventure]       |1
2        |[Animation,Drama,War]    | [War,Drama]       |1
3        |[Adventure,Drama]        | [Drama,War]       |0
I tried this:

def intersect_check(a: Array[String], b: Array[String]): Int = {
  if (b.sameElements(a.intersect(b))) 1 else 0
}

def intersect_check_udf =
  udf((colvalue1: Array[String], colvalue2: Array[String]) => intersect_check(colvalue1, colvalue2))

data = data.withColumn("Flag", intersect_check_udf(col("genreList1"), col("genreList2")))
but this throws an

org.apache.spark.SparkException: Failed to execute user defined function

error. Any ideas on how to solve this? Note: the function above (intersect_check) works fine for plain Scala Arrays.
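
For reference, a quick check outside Spark (plain Scala, e.g. in the REPL) confirms that the function itself behaves as expected on ordinary Arrays:

// No Spark involved: both calls return the expected flag value
intersect_check(Array("Adventure", "Comedy"), Array("Adventure"))   // 1
intersect_check(Array("Adventure", "Drama"), Array("Drama", "War")) // 0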

We can define a udf that computes the length of the intersection of the two Array columns and checks whether it equals the length of the second column. If so, the second array is a subset of the first.

Also, the input of the udf needs to be of class WrappedArray[String], not Array[String]:

import scala.collection.mutable.WrappedArray
import org.apache.spark.sql.functions.{col, udf}

val same_elements = udf { (a: WrappedArray[String], b: WrappedArray[String]) =>
  if (a.intersect(b).length == b.length) 1 else 0
}

df.withColumn("test",same_elements(col("genreList1"),col("genreList2")))
  .show(truncate = false)
+--------+-----------------------+------------+----+
|movieId1|genreList1             |genreList2  |test|
+--------+-----------------------+------------+----+
|1       |[Adventure, Comedy]    |[Adventure] |1   |
|2       |[Animation, Drama, War]|[War, Drama]|1   |
|3       |[Adventure, Drama]     |[Drama, War]|0   |
+--------+-----------------------+------------+----+
Data:

import spark.implicits._ // needed for toDF on a local collection

val df = List(
  (1, Array("Adventure", "Comedy"), Array("Adventure")),
  (2, Array("Animation", "Drama", "War"), Array("War", "Drama")),
  (3, Array("Adventure", "Drama"), Array("Drama", "War"))
).toDF("movieId1", "genreList1", "genreList2")
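
As a side note, an equivalent and arguably more direct check is whether every element of the second array occurs in the first. A minimal sketch of the same udf written that way (the column name test2 is just for illustration):

val subset_check = udf { (a: WrappedArray[String], b: WrappedArray[String]) =>
  // A subset relation holds when every element of b occurs somewhere in a
  if (b.forall(a.contains)) 1 else 0
}

df.withColumn("test2", subset_check(col("genreList1"), col("genreList2")))
  .show(truncate = false)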


Here is another solution using subsetOf:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.udf

val spark =
  SparkSession.builder().master("local").appName("test").getOrCreate()

import spark.implicits._

val data = spark.sparkContext.parallelize(
  Seq(
    (1, Array("Adventure", "Comedy"), Array("Adventure")),
    (2, Array("Animation", "Drama", "War"), Array("War", "Drama")),
    (3, Array("Adventure", "Drama"), Array("Drama", "War"))
  )
).toDF("movieId1", "genreList1", "genreList2")

// Set-based subset check: parameters typed as Seq[String] work because
// Spark hands array columns to Scala UDFs as WrappedArray, which is a Seq.
val subsetOf = udf((col1: Seq[String], col2: Seq[String]) => {
  if (col2.toSet.subsetOf(col1.toSet)) 1 else 0
})

data.withColumn("flag", subsetOf(data("genreList1"), data("genreList2"))).show()
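
If the genre columns can contain nulls, the UDF above can fail with a NullPointerException inside the lambda, since Spark passes null for null array values. A null-safe variant is a small extension (a sketch, assuming a null array should simply be flagged 0):

val subsetOfSafe = udf((col1: Seq[String], col2: Seq[String]) => {
  // Assumption: treat a missing array on either side as "not a subset"
  if (col1 == null || col2 == null) 0
  else if (col2.toSet.subsetOf(col1.toSet)) 1 else 0
})

data.withColumn("flag", subsetOfSafe(data("genreList1"), data("genreList2"))).show()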

Hope this helps.


One solution could be to leverage Spark's built-in array functions: genreList2 is a subset of genreList1 if and only if the intersection of the two equals genreList2. In the code below, a sort_array operation is added to avoid a mismatch between two arrays that have different ordering but the same elements.

val spark = {
  SparkSession
    .builder()
    .master("local")
    .appName("test")
    .getOrCreate()
}

import spark.implicits._
import org.apache.spark.sql._
import org.apache.spark.sql.functions._

val df = Seq(
  (1, Array("Adventure", "Comedy"), Array("Adventure")),
  (2, Array("Animation", "Drama", "War"), Array("War", "Drama")),
  (3, Array("Adventure", "Drama"), Array("Drama", "War"))
).toDF("movieId1", "genreList1", "genreList2")

df.withColumn(
    "flag",
    sort_array(array_intersect($"genreList1", $"genreList2"))
      .equalTo(sort_array($"genreList2"))
      .cast("integer")
  )
  .show()
The output is:

+--------+--------------------+------------+----+
|movieId1|          genreList1|  genreList2|flag|
+--------+--------------------+------------+----+
|       1| [Adventure, Comedy]| [Adventure]|   1|
|       2|[Animation, Drama...|[War, Drama]|   1|
|       3|  [Adventure, Drama]|[Drama, War]|   0|
+--------+--------------------+------------+----+
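
For reference, the same comparison can be written as a single SQL expression via expr, which also makes it usable from pure Spark SQL (a sketch; array_intersect and sort_array require Spark 2.4+):

df.withColumn(
  "flag",
  expr("cast(sort_array(array_intersect(genreList1, genreList2)) = sort_array(genreList2) as int)")
).show()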


Please check the two examples against the same dataset; they don't match. Thanks. Updated.