Python: intersect each row of a pyspark DataFrame (list of strings) with a master list of strings?

Suppose I have a DataFrame like this:

[Row(case_number='5307793179', word_list=['n', 'b', 'c']),
 Row(case_number='5307793171', word_list=['w', 'e', 'c']),
 Row(case_number='5307793172', word_list=['1', 'f', 'c']),
 Row(case_number='5307793173', word_list=['a', 'k', 'c']),
 Row(case_number='5307793174', word_list=['z', 'l', 'c']),
 Row(case_number='5307793175', word_list=['b', 'r', 'c'])]
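
For reference, a minimal sketch of how this sample DataFrame could be constructed (assuming an active SparkSession; the variable names here are illustrative):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Each tuple is one row: (case_number, word_list)
df = spark.createDataFrame(
    [
        ("5307793179", ["n", "b", "c"]),
        ("5307793171", ["w", "e", "c"]),
        ("5307793172", ["1", "f", "c"]),
        ("5307793173", ["a", "k", "c"]),
        ("5307793174", ["z", "l", "c"]),
        ("5307793175", ["b", "r", "c"]),
    ],
    ("case_number", "word_list"),
)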
And a master word list like this:

master_word_list = ['b', 'c']
Is there a slick way to filter each row's word_list against master_word_list, so that the resulting pyspark DataFrame looks like this? (By "slick" I mean without using a UDF; if a UDF is the best/only approach, I would accept that as a solution too.)

[Row(case_number='5307793179', word_list=['b', 'c']),
 Row(case_number='5307793171', word_list=['c']),
 Row(case_number='5307793172', word_list=['c']),
 Row(case_number='5307793173', word_list=['c']),
 Row(case_number='5307793174', word_list=['c']),
 Row(case_number='5307793175', word_list=['b', 'c'])]

Available since Spark 2.4:

pyspark.sql.functions.array_intersect(col1, col2)

Collection function: returns an array of the elements in the intersection of col1 and col2, without duplicates.

Parameters:

  • col1 – name of a column that contains an array
  • col2 – name of a column that contains an array
from pyspark.sql.functions import array, array_intersect, lit

# Turn the Python list into an array column of literals
master_word_list_col = array(*[lit(x) for x in master_word_list])

df = spark.createDataFrame(
    [("5307793179", ["n", "b", "c"])],
    ("case_number", "word_list")
)

# Replace word_list with its intersection against the master list
df.withColumn("word_list", array_intersect("word_list", master_word_list_col)).show()
+-----------+---------+
|case_number|word_list|
+-----------+---------+
| 5307793179|   [b, c]|
+-----------+---------+
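
If your Spark version is older than 2.4 and array_intersect is unavailable, a UDF is a reasonable fallback. A minimal sketch (the function name intersect_with_master is illustrative, not part of any API):

from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

master_set = set(master_word_list)

@udf(returnType=ArrayType(StringType()))
def intersect_with_master(words):
    # Keep only the words that appear in the master list,
    # preserving their original order within each row
    return [w for w in words if w in master_set]

df.withColumn("word_list", intersect_with_master("word_list")).show()

Note that array_intersect also removes duplicates from the result, while this sketch keeps any duplicates present in word_list; adjust to your needs.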