TypeError in Gradient Boosted Trees in PySpark MLlib

I am trying to run the gradient boosted trees algorithm on some data with mixed types:

[('feature1', 'bigint'),
 ('feature2', 'int'),
 ('label', 'double')]
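
For a self-contained reproduction, a toy DataFrame with that schema can be built roughly as follows (the rows here are made up, only the dtypes matter):

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Made-up rows, cast so the dtypes match the schema above.
data = spark.createDataFrame(
    [(100, 0, 1.5), (200, 1, 2.5), (300, 0, 3.5), (400, 1, 4.5)],
    ["feature1", "feature2", "label"],
).select(
    F.col("feature1").cast("bigint"),
    F.col("feature2").cast("int"),
    F.col("label").cast("double"),
)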
I tried the following:

from pyspark.sql import functions as F
from pyspark.mllib.tree import GradientBoostedTrees, GradientBoostedTreesModel
from pyspark.ml.feature import VectorAssembler
from pyspark.mllib.linalg import Vector as MLLibVector, Vectors as MLLibVectors
from pyspark.mllib.regression import LabeledPoint

vectorAssembler = VectorAssembler(inputCols = ["feature1", "feature2"], outputCol = "features")

data_assembled = vectorAssembler.transform(data)
data_assembled = data_assembled.select(['features', 'label'])
data_assembled = data_assembled.select(F.col("features"), F.col("label"))\
  .rdd\
  .map(lambda row: LabeledPoint(MLLibVectors.fromML(row.label), MLLibVectors.fromML(row.features)))

(trainingData, testData) = data_assembled.randomSplit([0.9, 0.1])

model = GradientBoostedTrees.trainRegressor(trainingData,
                                            categoricalFeaturesInfo={}, numIterations=100)
However, I get the following error:

TypeError: Unsupported vector type


None of my types are actually float, though. Also, feature2 is binary, in case that's relevant.
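
As far as I can tell from the pyspark source, Vectors.fromML only accepts pyspark.ml.linalg vectors, so wrapping row.label (a plain double) in it may be what triggers the error. A sketch of the mapping with the label passed through as a float (untested; it assumes data_assembled is the two-column DataFrame from the select above, before the .rdd step):

from pyspark.mllib.linalg import Vectors as MLLibVectors
from pyspark.mllib.regression import LabeledPoint

# Label passed through as a plain float; only the assembled features
# vector is converted from the ml to the mllib representation.
labeled_rdd = data_assembled.rdd.map(
    lambda row: LabeledPoint(float(row.label), MLLibVectors.fromML(row.features))
)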

I ended up avoiding the mllib implementation and using Spark ML instead:

from pyspark.sql import functions as F
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import GBTRegressor

vectorAssembler = VectorAssembler(inputCols = ["feature1", "feature2"], outputCol = "features")

data_assembled = vectorAssembler.transform(data)
data_assembled = data_assembled.select(F.col("label"), F.col("features"))

(trainingData, testData) = data_assembled.randomSplit([0.7, 0.3])

gbt_model = GBTRegressor(featuresCol="features", maxIter=10).fit(trainingData)
Python doesn't have the double-precision type that the LabeledPoint object expects, so my assumption is that the map in pyspark ends up converting to float.
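
To sanity-check the Spark ML model on the held-out split, something like the following should work (a sketch; the choice of RMSE as the metric is mine, not part of the original code):

from pyspark.ml.evaluation import RegressionEvaluator

# Score the held-out split and report root-mean-squared error.
predictions = gbt_model.transform(testData)
evaluator = RegressionEvaluator(labelCol="label", predictionCol="prediction",
                                metricName="rmse")
print("Test RMSE:", evaluator.evaluate(predictions))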