Apache Spark: calling a PySpark UDF with multiple columns


The UDF below does not work. Am I passing the two columns in correctly and calling the function the right way?

Thanks.

def shield(x, y):
    if x == '':
        shield = y
    else:
        shield = x
    return shield

df3.withColumn("shield", shield(df3.custavp1, df3.custavp1))

I don't think I am passing the arguments to the udf correctly.

The correct way to do it is shown below:

>>> ls
[[1, 2, 3, 4], [5, 6, 7, 8]]
>>> from pyspark.sql import Row
>>> R = Row("A1", "A2")
>>> df = sc.parallelize([R(*r) for r in zip(*ls)]).toDF()
>>> df.show
<bound method DataFrame.show of DataFrame[A1: bigint, A2: bigint]>
>>> df.show()
+---+---+
| A1| A2|
+---+---+
|  1|  5|
|  2|  6|
|  3|  7|
|  4|  8|
+---+---+

>>> def foo(x,y):
...     if x%2 == 0:
...             return x
...     else:
...             return y
... 
>>> 
>>> from pyspark.sql.functions import udf, col
>>> from pyspark.sql.types import IntegerType
>>> 
>>> custom_udf = udf(foo, IntegerType())
>>> df1 = df.withColumn("res", custom_udf(col("A1"), col("A2")))
>>> df1.show()
+---+---+---+
| A1| A2|res|
+---+---+---+
|  1|  5|  5|
|  2|  6|  2|
|  3|  7|  7|
|  4|  8|  4|
+---+---+---+
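
Applying the same pattern to the shield function from the question might look like the sketch below (df3 and custavp1 are the names used in the question; the return type is assumed to be StringType, and the same column is passed twice just as in the original call):

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

# plain Python logic; it only sees ordinary row values once wrapped in udf()
def shield(x, y):
    return y if x == '' else x

# register the function as a UDF with an explicit return type
shield_udf = udf(shield, StringType())

# call the udf-wrapped version on the columns, not the raw Python function
df3 = df3.withColumn("shield", shield_udf(col("custavp1"), col("custavp1")))

The key difference from the original attempt is that the raw Python function is never called on Column objects directly; only the udf-wrapped version is.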

Let me know if this helps.

Well, the script runs for a long time and then throws this error. If I remove the UDF, the whole script runs, so I can't really make sense of the Python worker error: /usr/bin/python: No module named pyspark, PYTHONPATH was: /data3/yarn/local/filecache/10/spark2-hdp-yarn-archive.tar.gz/spark-core_2.11-2.3.0.2.6.5.0-292.jar ... java.io.EOFException

Are you using a Jupyter notebook?

No, I am using the terminal directly. It is 100% linked to the UDF, which now follows your guidelines but still produces the error. When I remove that line, it does not error.
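
One way to rule out the Python worker entirely is to express the same logic with Spark's built-in when/otherwise, so that no UDF (and no Python process on the executors) is involved. A sketch, reusing the A1/A2 DataFrame from the answer above:

from pyspark.sql.functions import when, col

# same logic as foo(), but as a native column expression: no Python worker runs
df2 = df.withColumn("res", when(col("A1") % 2 == 0, col("A1")).otherwise(col("A2")))
df2.show()

If this version runs cleanly while the UDF version still fails with the EOFException, the problem is more likely the Python environment on the executors than the way the UDF is called.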