Apache Spark: mapping a DataFrame to arrays

I am using the Spark MLlib PrefixSpan algorithm. I originally wrote the code on Spark 1.6, but we recently moved to Spark 2.2.

I have a DataFrame like this:

viewsPurchasesGrouped: org.apache.spark.sql.DataFrame = [session_id: decimal(29,0), view_product_ids: array<bigint> ... 1 more field]

root
 |-- session_id: decimal(29,0) (nullable = true)
 |-- view_product_ids: array (nullable = true)
 |    |-- element: long (containsNull = true)
 |-- purchase_product_ids: array (nullable = true)
 |    |-- element: long (containsNull = true)
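(As an aside, since the comments below ask about reproducing this without the Hive table: a DataFrame with an equivalent schema can be built in-memory. This is a minimal sketch, and the session and product values are made up for illustration.)

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.types.DecimalType

// Hypothetical stand-in for the Hive-backed viewsPurchasesGrouped table.
val spark = SparkSession.builder().master("local[*]").appName("repro").getOrCreate()
import spark.implicits._

val viewsPurchasesGrouped = Seq(
  ("14545234113341303814564569524", Seq(123L, 234L, 456L), Seq(678L, 789L)),
  ("14545234113341303814564569525", Seq(111L, 222L),       Seq(333L))
).toDF("session_id", "view_product_ids", "purchase_product_ids")
  // cast the string key to decimal(29,0) to match the original schema
  .withColumn("session_id", $"session_id".cast(DecimalType(29, 0)))
```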
Since the switch, this code no longer works.

I tried this:

val viewsPurchasesRddString2 = viewsPurchasesGrouped.select("view_product_ids","purchase_product_ids").rdd.map( row =>
  Array(
    row.getSeq[Long](0).toArray, 
    row.getSeq[Long](1).toArray
  )
)
and saw this puzzling error message, which suggests it picked up session_id and purchase_product_ids rather than view_product_ids and purchase_product_ids from the original DataFrame:

Job aborted due to stage failure: [...] scala.MatchError: [14545234113341303814564569524,WrappedArray(123, 234, 456, 678, 789)]
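For what it is worth, a MatchError like this is what the schema predicts: at runtime Spark surfaces decimal(29,0) as java.math.BigDecimal and array&lt;bigint&gt; as a Seq (a WrappedArray), so a pattern expecting Long and Array[Long] can never match the row. A sketch of a pattern that lines up with the runtime types (the variable names simply mirror the column names):

```scala
import org.apache.spark.sql.Row

val viewsPurchasesPairs = viewsPurchasesGrouped.rdd.map {
  // decimal(29,0) arrives as java.math.BigDecimal; array<bigint> as Seq[Long]
  case Row(session_id: java.math.BigDecimal,
           view_product_ids: Seq[Long @unchecked],
           purchase_product_ids: Seq[Long @unchecked]) =>
    (view_product_ids.toArray, purchase_product_ids.toArray)
}
```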
I also tried:

val viewsPurchasesRddString = viewsPurchasesGrouped.map {
   case Row(session_id: Long, view_product_ids: Array[Long], purchase_product_ids: Array[Long]) => 
     (view_product_ids, purchase_product_ids)
}
which fails with:

viewsPurchasesRddString: org.apache.spark.sql.Dataset[(Array[Long], Array[Long])] = [_1: array<bigint>, _2: array<bigint>]
prefixSpan: org.apache.spark.mllib.fpm.PrefixSpan = org.apache.spark.mllib.fpm.PrefixSpan@10d69876
<console>:67: error: overloaded method value run with alternatives:
  [Item, Itemset <: Iterable[Item], Sequence <: Iterable[Itemset]](data: org.apache.spark.api.java.JavaRDD[Sequence])org.apache.spark.mllib.fpm.PrefixSpanModel[Item] <and>
  [Item](data: org.apache.spark.rdd.RDD[Array[Array[Item]]])(implicit evidence$1: 
scala.reflect.ClassTag[Item])org.apache.spark.mllib.fpm.PrefixSpanModel[Item] cannot be applied to (org.apache.spark.sql.Dataset[(Array[Long], Array[Long])])
   val model = prefixSpan.run(viewsPurchasesRddString)
                          ^
Your DataFrame shows that the columns are of type array&lt;bigint&gt;, so you should not access them with Seq[Long]. In Spark 1.6, map on a DataFrame automatically switched to the RDD API; in Spark 2, you need to use rdd.map to do the same. I would therefore suggest that this should work:

import scala.collection.mutable.WrappedArray

val viewsPurchasesRddString = viewsPurchasesGrouped.rdd.map( row =>
  Array(
    Array(row.getAs[WrappedArray[Long]](1).toArray), 
    Array(row.getAs[WrappedArray[Long]](2).toArray)
  )
)
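With an RDD of the Array[Array[Long]] shape that run expects (note the comments below: the element type must be Long, matching array&lt;bigint&gt;), the MLlib call from the question might then look along these lines. The minSupport and maxPatternLength values here are hypothetical placeholders:

```scala
import org.apache.spark.mllib.fpm.PrefixSpan

// Hypothetical parameter values, chosen only for illustration.
val prefixSpan = new PrefixSpan()
  .setMinSupport(0.1)
  .setMaxPatternLength(5)

val model = prefixSpan.run(viewsPurchasesRddString)

// Each frequent sequence is an array of itemsets with its frequency.
model.freqSequences.collect().foreach { fs =>
  println(s"${fs.sequence.map(_.mkString("[", ",", "]")).mkString(", ")} : ${fs.freq}")
}
```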

Comments:

Steven Black: It is hard to answer this without the data schema and more context. Ideally you could give us a notebook with the data and runnable code (appreciated, but not expected).

Asker: @StevenBlack Thanks for your answer. I have added the schema of viewsPurchasesGrouped. There is no more context beyond the origin of viewsPurchasesGrouped (a Hive table). If you can point me to a resource on how to build a notebook that is independent of the Hive table but still contains data with the same schema, I would be glad to provide a working notebook!

Answerer: I just realized it said Array[String]; that was copied from the wrong place. I have fixed it!

Asker: Your version works! This is the adapted code: val viewsPurchasesRddString = viewsPurchasesGrouped.rdd.map(row => Array(Array(row.getAs[WrappedArray[Long]](1).toArray), Array(row.getAs[WrappedArray[Long]](2).toArray))). It also works like this: val viewsPurchasesRddString = viewsPurchasesGrouped.rdd.map(row => Array(row.getAs[WrappedArray[Long]](1).toArray, row.getAs[WrappedArray[Long]](2).toArray))