Apache Spark PySpark - counting new items


In a PySpark (< 2.4) DataFrame I have two list columns. I want to count the new items in List1, i.e. the items that are not in List2.

data = [(("ID1", ['A', 'B'], ['A', 'C'])), (("ID2", ['A', 'B'], ['A', 'B'])), (("ID2", ['A', 'B'], None))]
df = spark.createDataFrame(data, ["ID", "List1", "List2"])
df.show(truncate=False)

+---+------+------+
|ID |List1 |List2 |
+---+------+------+
|ID1|[A, B]|[A, C]|
|ID2|[A, B]|[A, B]|
|ID2|[A, B]|null  |
+---+------+------+
Currently I have written a UDF that gives me the answer. I am checking whether this can be done without a UDF.

Current solution

from pyspark.sql.functions import udf
from pyspark.sql.types import IntegerType

def sum_list(x, y):
    total = 0
    if y is None:
        total = 0
    elif x is None and y is not None:
        total = len(y)
    else:
        lst = [1 for item in y if item not in x]
        total = len(lst)
    return total

new_udf = udf(sum_list, IntegerType())
df = df.withColumn('new_count', new_udf('List2', 'List1'))
df.show()

+---+------+------+---------+
| ID| List1| List2|new_count|
+---+------+------+---------+
|ID1|[A, B]|[A, C]|        1|
|ID2|[A, B]|[A, B]|        0|
|ID2|[A, B]|  null|        2|
+---+------+------+---------+
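
As a quick sanity check (a small sketch, just calling the plain Python function on the driver rather than the UDF), the first row gives the expected count:

list1, list2 = ['A', 'B'], ['A', 'C']
# same argument order as new_udf('List2', 'List1'): x = List2, y = List1
print(sum_list(list2, list1))  # -> 1, only 'B' from List1 is missing from List2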

Besides that: you can use array_except, but it requires Spark >= 2.4.0.

from pyspark.sql import SparkSession
from pyspark.sql.functions import array_except, when, size

spark = SparkSession.builder.appName("test").getOrCreate()
data = [(("ID1", ['A', 'B'], ['A', 'C'])), (("ID2", ['A', 'B'], ['A', 'B'])), (("ID2", ['A', 'B'], None))]
df = spark.createDataFrame(data, ["ID", "List1", "List2"])
df.show()

df.withColumn('new_count',
              when(df.List2.isNull(), size(df.List1))
              .otherwise(size(array_except('List1', 'List2')))).show()
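
The same null handling can also be written as a single SQL expression via expr; a minimal sketch under the same assumptions (same DataFrame, Spark >= 2.4):

from pyspark.sql.functions import expr

# CASE WHEN handles a null List2 the same way the when/otherwise version does
df.withColumn(
    'new_count',
    expr("CASE WHEN List2 IS NULL THEN size(List1) "
         "ELSE size(array_except(List1, List2)) END")
).show()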

With PySpark < 2.4, you can combine explode, groupby and array_contains:

from pyspark.sql import functions as F

df = df.select('ID', 'List1', 'List2', F.explode('List1').alias('List1_explode'))
df = df.groupby('ID', 'List1', 'List2').agg(F.sum(F.when(F.expr("array_contains(List2, List1_explode)"), 0).otherwise(1)).alias('new_count'))
df.show()
+---+------+------+---------+
| ID| List1| List2|new_count|
+---+------+------+---------+
|ID2|[A, B]|[A, B]|        0|
|ID2|[A, B]|  null|        2|
|ID1|[A, B]|[A, C]|        1|
+---+------+------+---------+

I have millions of records, won't the explode cause OOM problems? That's a reasonable concern, but honestly I don't know. I would at least give it a try ;) I'm running
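
One mitigation worth trying (an assumption on my part, not something confirmed in this thread) is to spread the rows over more partitions before exploding, so the blown-up rows land in many small partitions instead of a few large ones:

from pyspark.sql import functions as F

# Hypothetical tuning step: 200 is an arbitrary example value, adjust for the real data volume
repartitioned = df.repartition(200, 'ID')
exploded = repartitioned.select('ID', 'List1', 'List2',
                                F.explode('List1').alias('List1_explode'))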