Apache Spark: error when converting Vectors to a DataFrame


The code shown in the first part below works fine, but it is an unintuitive way to turn vector data into a DataFrame.

I would like to solve this with an approach I already understand, i.e. the code shown in the second part. Can you help?

    import org.apache.spark.ml.linalg.Vectors
    import spark.implicits._  // needed for toDF; spark is the active SparkSession

    val data = Seq(
      Vectors.sparse(4, Seq((0, 1.0), (3, -2.0))),
      Vectors.dense(4.0, 5.0, 0.0, 3.0),
      Vectors.dense(6.0, 7.0, 0.0, 8.0),
      Vectors.sparse(4, Seq((0, 9.0), (3, 1.0)))
    )

    // Wrap each vector in a Tuple1 so toDF can infer a one-column schema
    val tupleList = data.map(Tuple1.apply)
    val df = tupleList.toDF("features")
Can't we simply do it like this instead?

    val rdd = sc.parallelize(data).map(a => Row(a))
rdd.take(1)

val fields = "features".split(" ").map(fields => StructField(fields,DoubleType, nullable =true))
val df = spark.createDataFrame(rdd, StructType(fields))
df.count()
But I get the following error:

df: org.apache.spark.sql.DataFrame = [features: double]
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 357.0 failed 4 times, most recent failure: Lost task 1.3 in stage 357.0 (TID 1243, datacouch, executor 3): java.lang.RuntimeException: Error while encoding: java.lang.RuntimeException: org.apache.spark.ml.linalg.DenseVector is not a valid external type for schema of double
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, features), DoubleType) AS features#6583
    at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:290)
    at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:586)
    at org.apache.spark.sql.SparkSession$$anonfun$4.apply(SparkSession.scala:586)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.agg_doAggregateWithoutKey$(Unknown Source)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$8$$anon$1.hasNext(WholeStageCodegenExec.scala:395)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
As the exception itself clearly explains, the correct data type for a Vector column is org.apache.spark.ml.linalg.SQLDataTypes.VectorType, not DoubleType:

    spark.createDataFrame(
      rdd,
      StructType(Seq(
        StructField("features", org.apache.spark.ml.linalg.SQLDataTypes.VectorType)
      ))
    )
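For completeness, here is a minimal end-to-end sketch of the RDD[Row] route with that schema, assuming an active SparkSession named spark; the imports are spelled out so the snippet runs on its own:

    import org.apache.spark.ml.linalg.{SQLDataTypes, Vectors}
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.types.{StructField, StructType}

    val data = Seq(
      Vectors.sparse(4, Seq((0, 1.0), (3, -2.0))),
      Vectors.dense(4.0, 5.0, 0.0, 3.0),
      Vectors.dense(6.0, 7.0, 0.0, 8.0),
      Vectors.sparse(4, Seq((0, 9.0), (3, 1.0)))
    )

    // Each Row wraps a single Vector, so the schema must declare VectorType --
    // exactly the mismatch the exception above complains about
    val rdd = spark.sparkContext.parallelize(data).map(a => Row(a))
    val schema = StructType(Seq(
      StructField("features", SQLDataTypes.VectorType, nullable = true)
    ))

    val df = spark.createDataFrame(rdd, schema)
    df.count()  // now succeeds: 4

Declaring the column as SQLDataTypes.VectorType lets the row converter accept both DenseVector and SparseVector values, which is why df.count() no longer fails.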

@user8371915 Please read my question first.