Apache Spark: expanding a vector column into ordinary columns in a DataFrame


I want to expand a vector column into ordinary columns in a DataFrame. The transform creates a single column, but something is wrong with the data type or the "nullable" flag, and I get an error when I try to show the result. See the sample code below. How can I fix this?

from pyspark.sql import SparkSession
from pyspark.sql.types import *
from pyspark.ml.feature import VectorAssembler
from pyspark.sql.functions import udf

spark = SparkSession\
        .builder\
        .config("spark.driver.maxResultSize", "40g") \
        .config('spark.sql.shuffle.partitions', '2001') \
        .getOrCreate()

data = [(0.2, 53.3, 0.2, 53.3),
        (1.1, 43.3, 0.3, 51.3),
        (2.6, 22.4, 0.4, 43.3),
        (3.7, 25.6, 0.2, 23.4)]     
df = spark.createDataFrame(data, ['A','B','C','D'])
df.show(3)
df.printSchema() 

vecAssembler = VectorAssembler(inputCols=['C','D'], outputCol="features")
new_df = vecAssembler.transform(df)
new_df.printSchema()
new_df.show(3)

split1_udf = udf(lambda value: value[0], DoubleType())
split2_udf = udf(lambda value: value[1], DoubleType())
new_df = new_df.withColumn('c1', split1_udf('features')).withColumn('c2', split2_udf('features'))
new_df.printSchema()
new_df.show(3)

The features column contains values of type pyspark.ml.linalg.DenseVector, and the elements of the feature vector are of type numpy.float64. The UDFs therefore return numpy values, which Spark cannot pickle into a DoubleType column; convert the numpy data type to the native Python type with value.item().
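
A minimal sketch of the corrected UDFs (the same code as in the question, with .item() applied to each vector element):

split1_udf = udf(lambda value: value[0].item(), DoubleType())  # .item() converts numpy.float64 to a native Python float
split2_udf = udf(lambda value: value[1].item(), DoubleType())

new_df = new_df.withColumn('c1', split1_udf('features')).withColumn('c2', split2_udf('features'))
new_df.show(3)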

With this fix, the code produces the following output:

+---+----+---+----+----------+---+----+
|  A|   B|  C|   D|  features| c1|  c2|
+---+----+---+----+----------+---+----+
|0.2|53.3|0.2|53.3|[0.2,53.3]|0.2|53.3|
|1.1|43.3|0.3|51.3|[0.3,51.3]|0.3|51.3|
|2.6|22.4|0.4|43.3|[0.4,43.3]|0.4|43.3|
|3.7|25.6|0.2|23.4|[0.2,23.4]|0.2|23.4|
+---+----+---+----+----------+---+----+

I don't know what is wrong with the UDF, but I found another solution, shown below.

data = [(0.2, 53.3, 0.2, 53.3),
        (1.1, 43.3, 0.3, 51.3),
        (2.6, 22.4, 0.4, 43.3),
        (3.7, 25.6, 0.2, 23.4)]      
df = spark.createDataFrame(data, ['A','B','C','D'])  

vecAssembler = VectorAssembler(inputCols=['C','D'], outputCol="features")
new_df = vecAssembler.transform(df)

def extract(row):
    # toArray().tolist() turns the DenseVector into native Python floats
    return (row.A, row.B, row.C, row.D) + tuple(row.features.toArray().tolist())

extracted_df = new_df.rdd.map(extract).toDF(['A','B','C','D', 'col1', 'col2'])
extracted_df.show()

What is the error?

The error message is a mile long; the key line, as far as I understand it, is:

Caused by: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype)

Py4JJavaError Traceback (most recent call last)
----> 1 new_df.show(3)
/opt/cloudera/parcels/SPARK2/lib/SPARK2/python/pyspark/sql/dataframe.py in show(self, n, truncate, vertical)
--> 350     print(self._jdf.showString(n, 20, vertical))
/usr/local/lib/python2.7/site-packages/py4j/java_gateway.pyc in __call__(self, *args)
   1256     return_value = get_return_value(
-> 1257         answer, self.gateway_client, self.target_id, self.name)
/opt/cloudera/parcels/SPARK2/lib/SPARK2/python/pyspark/sql/utils.py in deco(*a, **kw)
--> 63     return f(*a, **kw)
/usr/local/lib/python2.7/site-packages/py4j/protocol.pyc in get_return_value(answer, gateway_client, target_id, name)
--> 328     format(target_id, ".", name), value)

@Goodfithuser Don't paste error messages into comments. Instead, your question should include the full traceback. I know it will be long, but that's the way it is.