Apache Spark: SparseVector vs. DenseVector when using StandardScaler


I am using the following code to normalize a PySpark DataFrame:

from pyspark.ml.feature import StandardScaler, VectorAssembler
from pyspark.ml import Pipeline

cols = ["a", "b", "c"]
df = spark.createDataFrame([(1, 0, 3), (2, 3, 2), (1, 3, 1), (3, 0, 3)], cols)

Pipeline(stages=[
    VectorAssembler(inputCols=cols, outputCol='features'),
    StandardScaler(withMean=True, inputCol='features', outputCol='scaledFeatures')
]).fit(df).transform(df).select(cols + ['scaledFeatures']).head()
This produces the expected result:

Row(a=1, b=0, c=3, scaledFeatures=DenseVector([-0.7833, -0.866, 0.7833]))
However, when I run the pipeline on a (much larger) dataset loaded from Parquet files, I get the following exception:

16/12/21 09:47:50 WARN TaskSetManager: Lost task 0.0 in stage 60.0 (TID 6370, 10.231.153.67): org.apache.spark.SparkException: Failed to execute user defined function($anonfun$2: (vector) => vector)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply2_2$(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
        at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
        at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:121)
        at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.next(SerDeUtil.scala:112)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at org.apache.spark.api.python.SerDeUtil$AutoBatchedPickler.foreach(SerDeUtil.scala:112)
        at org.apache.spark.api.python.PythonRDD$.writeIteratorToStream(PythonRDD.scala:504)
        at org.apache.spark.api.python.PythonRunner$WriterThread$$anonfun$run$3.apply(PythonRDD.scala:328)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1877)
        at org.apache.spark.api.python.PythonRunner$WriterThread.run(PythonRDD.scala:269)
Caused by: java.lang.IllegalArgumentException: Do not support vector type class org.apache.spark.mllib.linalg.SparseVector
        at org.apache.spark.mllib.feature.StandardScalerModel.transform(StandardScaler.scala:160)
        at org.apache.spark.ml.feature.StandardScalerModel$$anonfun$2.apply(StandardScaler.scala:167)
        at org.apache.spark.ml.feature.StandardScalerModel$$anonfun$2.apply(StandardScaler.scala:167)
        ... 13 more
I noticed that in this case VectorAssembler converts my columns into mllib.linalg.SparseVector rather than the DenseVector used in the first case.
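
A quick way to confirm which vector type the assembler produced is to inspect the assembled column directly (a minimal check, reusing the cols and df defined above; on a wide, mostly-zero dataset the assembler may emit SparseVector because that is the more compact representation):

assembled = VectorAssembler(inputCols=cols, outputCol='features').transform(df)
# Prints the first assembled row; the vector shows up as either a DenseVector
# or a SparseVector depending on which representation is more compact.
print(assembled.select('features').head())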


Is there any way to work around this?

I noticed that you would like this as a custom transformation, so that it can be included directly in the pipeline.

This should do the trick:

from pyspark import keyword_only
from pyspark.ml.pipeline import Transformer
from pyspark.ml.param.shared import HasInputCol, HasOutputCol
from pyspark.ml.linalg import SparseVector, DenseVector, VectorUDT
from pyspark.sql.functions import udf


class AsDenseTransformer(Transformer, HasInputCol, HasOutputCol):
    """Pipeline stage that converts a vector column into DenseVector form."""

    @keyword_only
    def __init__(self, inputCol=None, outputCol=None):
        super(AsDenseTransformer, self).__init__()
        # Spark 2.0/2.1 style; newer PySpark exposes this as self._input_kwargs.
        kwargs = self.__init__._input_kwargs
        self.setParams(**kwargs)

    @keyword_only
    def setParams(self, inputCol=None, outputCol=None):
        kwargs = self.setParams._input_kwargs
        return self._set(**kwargs)

    def _transform(self, dataset):
        out_col = self.getOutputCol()
        in_col = dataset[self.getInputCol()]

        # Densify whatever vector type comes in (sparse or dense).
        asDense = udf(lambda s: DenseVector(s.toArray()), VectorUDT())

        return dataset.withColumn(out_col, asDense(in_col))
Once it is defined, you can instantiate it as a transformation and include it in the pipeline after the VectorAssembler:

Pipeline(stages=[
    VectorAssembler(inputCols=cols, outputCol='features'),
    AsDenseTransformer(inputCol='features', outputCol='features'),
    StandardScaler(withMean=True, inputCol='features', outputCol='scaledFeatures')
]).fit(df).transform(df).select(cols + ['scaledFeatures']).head()

Which version of Spark are you using?
Spark 2.0.1. Pretty sure this answer is the key. I am currently trying to convert the SparseVector to a DenseVector, but that is not straightforward either.
Isn't b = DenseVector(a.toArray()) the straightforward solution?
It may be; I am still fairly new to Spark. I am working out how to apply that conversion to a column of the DataFrame. Is a udf the best way? For example, asdensevector = udf(lambda s: DenseVector(s.toArray()), VectorUDT()) and df = df.withColumn('features', asdensevector(df.features)). It could also be added to the pipeline as a transformation, but I am not yet sure how to add an arbitrary transformation....
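
For completeness, here is a minimal, self-contained sketch of the one-off udf conversion discussed in the comments above (assuming df already holds the assembled features column from the question):

from pyspark.ml.linalg import DenseVector, VectorUDT
from pyspark.sql.functions import udf

# Replace the (possibly sparse) assembled vector column with a dense copy
# before handing it to StandardScaler(withMean=True).
asdensevector = udf(lambda s: DenseVector(s.toArray()), VectorUDT())
df = df.withColumn('features', asdensevector(df.features))

The custom AsDenseTransformer above is essentially this same udf wrapped as a pipeline stage, so it can run between the VectorAssembler and the StandardScaler.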