How do I create an array column in PySpark by repeating a value to the size of another column?

I want to add a new column score: an array whose length equals the size of another column, values, with every element set to 2. Using the size of the column raises an error, but the same code works fine if I replace it with a hard-coded number.
Data
columns = ["id","values"]
data = [("sample1", [12.0,10.0]), ("sample2", [1.0,2.0,3.0,4.0])]
rdd = spark.sparkContext.parallelize(data)
df = rdd.toDF(columns)
Source dataframe
+-------+--------------------+
|     id|              values|
+-------+--------------------+
|sample1|        [12.0, 10.0]|
|sample2|[1.0, 2.0, 3.0, 4.0]|
+-------+--------------------+
Code

from pyspark.sql.functions import *
df.withColumn("score",array([lit(x) for x in [2]*(size(col("values")))])).show()
Expected output
+-------+--------------------+--------------------+
|     id|              values|               score|
+-------+--------------------+--------------------+
|sample1|        [12.0, 10.0]|              [2, 2]|
|sample2|[1.0, 2.0, 3.0, 4.0]|        [2, 2, 2, 2]|
+-------+--------------------+--------------------+
This gives the error below:

: java.lang.RuntimeException: Unsupported literal type class java.util.ArrayList [2]
You can't multiply a Python list by a Spark column. In [2] * size(col("values")), Python falls back to the Column's multiplication operator, which then tries to turn the list [2] into a Spark literal, hence the "Unsupported literal type class java.util.ArrayList" error. You can use the array_repeat function instead:
import pyspark.sql.functions as F
# build an array of 2s whose length is the size of the values column
df2 = df.withColumn('score', F.expr('array_repeat(2, size(values))'))
df2.show()
+-------+--------------------+------------+
|     id|              values|       score|
+-------+--------------------+------------+
|sample1|        [12.0, 10.0]|      [2, 2]|
|sample2|[1.0, 2.0, 3.0, 4.0]|[2, 2, 2, 2]|
+-------+--------------------+------------+
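If you prefer to stay in the DataFrame API, newer PySpark releases also accept a Column for array_repeat's count argument, so the expr can be dropped. A minimal sketch, assuming a PySpark version (3.x) where count may be a column; on 2.4 the Python wrapper only takes a plain integer, which is why the expr form above is the safe choice there:

import pyspark.sql.functions as F

# Same result via the DataFrame API; assumes array_repeat accepts a
# Column count (newer PySpark) rather than only a Python int (2.4.x).
df2 = df.withColumn('score', F.array_repeat(F.lit(2), F.size('values')))
df2.show()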
This function only works for Spark 2.4+. For older versions, you can do it with a UDF:
from pyspark.sql.functions import udf, size, lit
from pyspark.sql.types import ArrayType, IntegerType

# repeat value v exactly n times; declare the return type as array<int>
array_repeat_udf = udf(lambda v, n: [v for _ in range(n)], ArrayType(IntegerType()))

df1 = df.withColumn('score', array_repeat_udf(lit(2), size("values")))
df1.show()
#+-------+--------------------+------------+
#|     id|              values|       score|
#+-------+--------------------+------------+
#|sample1|        [12.0, 10.0]|      [2, 2]|
#|sample2|[1.0, 2.0, 3.0, 4.0]|[2, 2, 2, 2]|
#+-------+--------------------+------------+
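Note that the UDF pins the return type to ArrayType(IntegerType()). If you wanted the score elements to be doubles, matching the values column, a hypothetical variant (the repeat_double_udf name is mine, not from the answer) just swaps the element type and the fill value:

from pyspark.sql.functions import udf, size, lit
from pyspark.sql.types import ArrayType, DoubleType

# Hypothetical variant: same repeat-to-size idea, typed as array<double>;
# [v] * n is the idiomatic Python way to repeat a value n times.
repeat_double_udf = udf(lambda v, n: [v] * n, ArrayType(DoubleType()))
df1 = df.withColumn('score', repeat_double_udf(lit(2.0), size("values")))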