
Python PySpark - passing a list as an argument to a UDF


I need to pass a list into a UDF; the list will determine the score/category for each distance. For now, I am hardcoding all distances to the fourth score.

a= spark.createDataFrame([("A", 20), ("B", 30), ("D", 80)],["Letter", "distances"])

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType
def cate(label, feature_list):
    if feature_list == 0:
        return label[4]
label_list = ["Great", "Good", "OK", "Please Move", "Dead"]
udf_score=udf(cate, StringType())
a.withColumn("category", udf_score(label_list,a["distances"])).show(10)
When I try something like this, I get the following error:

Py4JError: An error occurred while calling z:org.apache.spark.sql.functions.col. Trace:
py4j.Py4JException: Method col([class java.util.ArrayList]) does not exist
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:318)
    at py4j.reflection.ReflectionEngine.getMethod(ReflectionEngine.java:339)
    at py4j.Gateway.invoke(Gateway.java:274)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:214)
    at java.lang.Thread.run(Thread.java:745)

Try currying the function, so that the only argument in the DataFrame call is the name of the column you want the function to act on:

udf_score=udf(lambda x: cate(label_list,x), StringType())
a.withColumn("category", udf_score("distances")).show(10)
Hope this helps.
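Spelled out end to end, the currying idea looks roughly like this (a sketch assuming an active SparkSession named `spark`; the Spark wiring is kept inside a helper so the plain-Python logic stands on its own):

```python
from functools import partial

label_list = ["Great", "Good", "OK", "Please Move", "Dead"]

def cate(label, feature_list):
    # Distance 0 maps to the last label; everything else gets a fallback
    # (without the fallback, the UDF would return null for non-zero rows)
    if feature_list == 0:
        return label[4]
    return "I am not sure!"

def make_udf_score(labels):
    # Imported here so cate() stays usable and testable without Spark
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType
    # partial() freezes the label list; the UDF receives only the column value
    return udf(partial(cate, labels), StringType())

# Usage (requires a SparkSession):
# a = spark.createDataFrame([("A", 20), ("B", 30), ("D", 80)], ["Letter", "distances"])
# a.withColumn("category", make_udf_score(label_list)("distances")).show()
```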

from pyspark.sql.functions import udf, col

#sample data
a = sqlContext.createDataFrame([("A", 20), ("B", 30), ("D", 80)], ["Letter", "distances"])
label_list = ["Great", "Good", "OK", "Please Move", "Dead"]

def cate(label, feature_list):
    if feature_list == 0:
        return label[4]
    else:  #you may need an 'else' branch as well, otherwise 'null' is added in this case
        return 'I am not sure!'

def udf_score(label_list):
    return udf(lambda l: cate(label_list, l))

a.withColumn("category", udf_score(label_list)(col("distances"))).show()
The output is:

+------+---------+--------------+
|Letter|distances|      category|
+------+---------+--------------+
|     A|       20|I am not sure!|
|     B|       30|I am not sure!|
|     D|       80|I am not sure!|
+------+---------+--------------+

I think passing the list as the default value of an argument may help:

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

#sample data
a= sqlContext.createDataFrame([("A", 20), ("B", 30), ("D", 80),("E",0)],["Letter", "distances"])
label_list = ["Great", "Good", "OK", "Please Move", "Dead"]

#Passing the list as the default value of an argument
def cate(feature_list, label=label_list):
    if feature_list == 0:
        return label[4]
    else:  #an 'else' branch is needed as well, otherwise 'null' would be added in this case
        return 'I am not sure!'

udfcate = udf(cate, StringType())

a.withColumn("category", udfcate("distances")).show()
Output:

+------+---------+--------------+
|Letter|distances|      category|
+------+---------+--------------+
|     A|       20|I am not sure!|
|     B|       30|I am not sure!|
|     D|       80|I am not sure!|
|     E|        0|          Dead|
+------+---------+--------------+
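Neither answer actually scores by distance yet; both only special-case 0. The question's stated goal (the list determines the score/category of a distance) could be sketched with `bisect` over a set of band boundaries. The thresholds below are hypothetical, made up purely for illustration:

```python
from bisect import bisect_right

label_list = ["Great", "Good", "OK", "Please Move", "Dead"]
# Hypothetical bands: <10 Great, <25 Good, <50 OK, <75 Please Move, else Dead
bounds = [10, 25, 50, 75]

def score(distance, labels=label_list, cuts=bounds):
    # bisect_right returns how many boundaries the distance has passed,
    # which is exactly the index into the label list
    return labels[bisect_right(cuts, distance)]

def make_score_udf(labels, cuts):
    # Spark-specific wiring, isolated so score() works without Spark installed
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType
    return udf(lambda d: labels[bisect_right(cuts, d)], StringType())

# Usage (requires a SparkSession and the DataFrame 'a' from above):
# a.withColumn("category", make_score_udf(label_list, bounds)("distances")).show()
```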

People say that from Spark 2.20 onwards we can pass a list directly to a UDF using pyspark.sql.functions.array(). How would I rewrite the example above using array()?

Sorry for the downvote; I felt the question was more about how to send both arguments into the function, rather than always defaulting one of them. With label_list not defined in global scope, this solution cannot handle the case where you need to send that list dynamically. ags29 and @Prem answered it precisely.

Even I was looking for a similar solution. In this example the list is static, so for a static list this is a reasonably workable solution.
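To answer the first comment: with `pyspark.sql.functions.array()` you wrap each element of the Python list in `lit()` and bundle them into a single array column, which then arrives inside the UDF as a regular Python list. A sketch (the comment's "Spark 2.20" presumably means 2.2.0; the Spark calls are untested here and isolated in a helper):

```python
labels = ["Great", "Good", "OK", "Please Move", "Dead"]

def cate(label, feature):
    # 'label' arrives as a plain Python list inside the UDF
    if feature == 0:
        return label[4]
    return "I am not sure!"

def with_category(df, label_list):
    # Spark wiring isolated; array(*[lit(x) ...]) builds an ArrayType column
    from pyspark.sql.functions import udf, array, lit, col
    from pyspark.sql.types import StringType
    udf_score = udf(cate, StringType())
    label_col = array(*[lit(x) for x in label_list])
    return df.withColumn("category", udf_score(label_col, col("distances")))

# Usage (requires a SparkSession and the DataFrame 'a' from above):
# with_category(a, labels).show()
```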