Searching a DataFrame for keywords in Apache Spark

I have a Spark dataframe and a list of "keywords".
For 4 of the columns I need to check whether the value is in the list and, if so, fill a new column "result" with a specific label (not necessarily the column name).
Then I need to search all remaining columns; when there is a match there, the result is 'other'. Example dataframe:
df = spark.createDataFrame([
["apple", "Null","Null","alcatel","Aalst","123","01-01-2016","blu"],
["apple", "apple","Lorem ipsum dolor sit amet","Null","Excepteur sint occaecat","543","07-12-2010","cat"],
["asus","apple","nisi ut aliquid ex ea commodi consequatur?","","Null","578","06-04-2020","htc"],
["samsung","fugiat quo voluptas nulla pariatur","apple","Null","Antwerp","285","04-08-2018","asus"],
["sony","magni dolores","Null","asus","quis nostrud exercitation","386","06-06-2009","huawei"],
["vivo","laborum","Null","Veriatis","adipisci ","389","23-12-2005","oppo"],
["alcatel","laboriosam","Contains Apple","Null","Asus","104","02-03-2018","zte"],
["sharp","null","null","apple","Asus","333","07-09-2017","alcatel"]
]).toDF("a-val","b-val","c-val","d-val","e-val","f-val","g-val","h-val")
keywords = ['apple', 'asus', 'alcatel']
df = df.withColumn('result', when(col('a-val').isin(keywords), concat(lit('a'), col('result'))))
df = df.withColumn('result', when(col('b-val').isin(keywords), concat(lit('b'), col('result'))))
df = df.withColumn('result', when(col('c-val').isin(keywords), concat(lit('c'), col('result'))))
df = df.withColumn('result', when(col('d-val').isin(keywords), concat(lit('d'), col('result'))))
Possible results:
result
-------
a
b
c
d
a;b
b;d
a;c;d
a;other
c;d;other
...
Not sure whether concat is the ideal approach, or whether it would be better to build a list first and append to it.
Searching column by column works, but I can't manage to combine the results and also search the remaining columns.
I would really appreciate any help.

IIUC, this can be done as follows.
Hope it helps.
@anky I added an example dataframe. The result may be more than a single letter (a, b, c or d); in the end it is a word.
So you want to search every column, row by row, for the keywords?
@anky Yes, search all columns. Only when the match is found in a-val, b-val, c-val or d-val is the result a column-specific word (like a for a-val, b for b-val, etc.); if it is found in any other column, the result is always 'other'.
Nice answer, I had something similar in mind earlier but couldn't get it to work. I'll delete my answer, it was too long; the idea for one small piece came only from your answer ;) @JohnDoe maybe something like this:
{i: i[0] if i in ['a-val', 'b-val', 'c-val', 'd-val'] else 'other' for i in df.columns}
evalCol={i:i[0] if i.startswith(('a','b','c','d')) else 'other' for i in df.columns}
{'a-val': 'a',
'b-val': 'b',
'c-val': 'c',
'd-val': 'd',
'e-val': 'other',
'f-val': 'other',
'g-val': 'other',
'h-val': 'other'}
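The comprehension itself is plain Python, so the mapping can be checked without a Spark session (column names hard-coded here just for the check):

```python
# Column names copied from the example dataframe above.
df_columns = ["a-val", "b-val", "c-val", "d-val",
              "e-val", "f-val", "g-val", "h-val"]

# Same rule as evalCol: first letter for the four "search" columns,
# 'other' for everything else.
evalCol = {c: c[0] if c.startswith(('a', 'b', 'c', 'd')) else 'other'
           for c in df_columns}

print(evalCol)
```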
import pyspark.sql.functions as f

keywords = ['apple', 'asus', 'alcatel']
df.withColumn('result', f.concat_ws(';', *[f.when(f.col(k).isin(keywords), v).otherwise(None) for k, v in evalCol.items()])).show(10, False)
+-------+----------------------------------+------------------------------------------+--------+-------------------------+-----+----------+-------+-------+
|a-val |b-val |c-val |d-val |e-val |f-val|g-val |h-val |result |
+-------+----------------------------------+------------------------------------------+--------+-------------------------+-----+----------+-------+-------+
|apple |Null |Null |alcatel |Aalst |123 |01-01-2016|blu |a;d |
|apple |apple |Lorem ipsum dolor sit amet |Null |Excepteur sint occaecat |543 |07-12-2010|cat |a;b |
|asus |apple |nisi ut aliquid ex ea commodi consequatur?| |Null |578 |06-04-2020|htc |a;b |
|samsung|fugiat quo voluptas nulla pariatur|apple |Null |Antwerp |285 |04-08-2018|asus |c;other|
|sony |magni dolores |Null |asus |quis nostrud exercitation|386 |06-06-2009|huawei |d |
|vivo |laborum |Null |Veriatis|adipisci |389 |23-12-2005|oppo | |
|alcatel|laboriosam |Contains Apple |Null |Asus |104 |02-03-2018|zte |a |
|sharp |null |null |apple |Asus |333 |07-09-2017|alcatel|d;other|
+-------+----------------------------------+------------------------------------------+--------+-------------------------+-----+----------+-------+-------+
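Since the comments mention that the final result should be a word rather than a single letter, the mapping can instead be built from an explicit label dict. The label words below are purely illustrative placeholders, not taken from the question:

```python
# Hypothetical per-column labels -- placeholders, not from the question.
labels = {'a-val': 'alpha', 'b-val': 'bravo',
          'c-val': 'charlie', 'd-val': 'delta'}

# Column names copied from the example dataframe above.
df_columns = ["a-val", "b-val", "c-val", "d-val",
              "e-val", "f-val", "g-val", "h-val"]

# Any column without an explicit label falls back to 'other'.
evalCol = {c: labels.get(c, 'other') for c in df_columns}

print(evalCol)
```

Plugging this `evalCol` into the same `f.concat_ws(';', ...)` expression as above would then produce e.g. `alpha;delta` instead of `a;d`.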