
MongoDB Scala - creating an IndexedDatasetSpark object


I want to run Spark's RowSimilarity recommender on data obtained from MongoDB. To do that, I wrote the code below, which reads the input from Mongo and converts it into an RDD of objects. This needs to be passed to IndexedDatasetSpark and then to SimilarityAnalysis.rowSimilarityIDS:

import org.apache.hadoop.conf.Configuration
import org.apache.mahout.math.cf.SimilarityAnalysis
import org.apache.mahout.sparkbindings.indexeddataset.IndexedDatasetSpark
import org.apache.spark.rdd.{NewHadoopRDD, RDD}
import org.apache.spark.{SparkConf, SparkContext}
import org.bson.BSONObject
import com.mongodb.hadoop.MongoInputFormat

object SparkExample extends App {
  val mongoConfig = new Configuration()
  mongoConfig.set("mongo.input.uri", "mongodb://my_mongo_ip:27017/db.collection")

  val sparkConf = new SparkConf()
  val sc = new SparkContext("local", "SparkExample", sparkConf)

  // Read the collection as (ObjectId, BSONObject) pairs via the mongo-hadoop connector.
  val documents: RDD[(Object, BSONObject)] = sc.newAPIHadoopRDD(
    mongoConfig,
    classOf[MongoInputFormat],
    classOf[Object],
    classOf[BSONObject]
  )

  // Flatten each document into (product_id, "attr-1 attr-2 ...") pairs.
  val new_doc: RDD[(String, String)] = documents.map { doc1 =>
    (
      doc1._2.get("product_id").toString,
      doc1._2.get("product_attribute_value").toString
        .replace("[ \"", "")
        .replace("\"]", "")
        .split("\" , \"")
        .map(value => value.toLowerCase.replace(" ", "-"))
        .mkString(" ")
    )
  }

  val myIDs = IndexedDatasetSpark(new_doc)(sc)

  // Write the row-similarity results to HDFS.
  SimilarityAnalysis.rowSimilarityIDS(myIDs).dfsWrite("hdfs://myhadoop:9000/myfile", readWriteSchema)
}
I'm unable to create an IndexedDatasetSpark that can be passed to SimilarityAnalysis.rowSimilarityIDS. Please help me with this.

Edit1:

I managed to create the IndexedDatasetSpark object and the code now compiles correctly. I had to add (sc) as the implicit parameter to IndexedDatasetSpark to get past this error:

Error: could not find implicit value for parameter sc: org.apache.spark.SparkContext
Now, when I run it, it gives the following error:

Error: could not find implicit value for parameter sc: org.apache.mahout.math.drm.DistributedContext
java.io.NotSerializableException: org.apache.mahout.math.DenseVector
Serialization stack:
- object not serializable (class: org.apache.mahout.math.DenseVector, value: {3:1.0,8:1.0,10:1.0})
- field (class: scala.Some, name: x, type: class java.lang.Object)
- object (class scala.Some, Some({3:1.0,8:1.0,10:1.0}))
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
I don't know how to supply the DistributedContext.

Is this the right way to create the RDD and convert it into an IndexedDatasetSpark so that rowSimilarityIDS can process it?

More background: this is the situation I started from.

My build.sbt:

name := "scala-mongo"

version := "1.0"

scalaVersion := "2.10.6"

libraryDependencies += "org.mongodb" %% "casbah" % "3.1.1"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1"
libraryDependencies += "org.mongodb.mongo-hadoop" % "mongo-hadoop-core" % "1.4.2"

libraryDependencies ++= Seq(
  "org.apache.hadoop" % "hadoop-client" % "2.6.0" exclude("javax.servlet", "servlet-api") exclude ("com.sun.jmx", "jmxri") exclude ("com.sun.jdmk", "jmxtools") exclude ("javax.jms", "jms") exclude ("org.slf4j", "slf4j-log4j12") exclude("hsqldb","hsqldb"),
  "org.scalatest" % "scalatest_2.10" % "1.9.2" % "test"
)

libraryDependencies += "org.apache.mahout" % "mahout-math-scala_2.10" % "0.11.2"
libraryDependencies += "org.apache.mahout" % "mahout-spark_2.10" % "0.11.2"
libraryDependencies += "org.apache.mahout" % "mahout-math" % "0.11.2"
libraryDependencies += "org.apache.mahout" % "mahout-hdfs" % "0.11.2"

resolvers += "typesafe repo" at "http://repo.typesafe.com/typesafe/releases/"

resolvers += Resolver.mavenLocal
Edit2: I temporarily removed the dfsWrite call so the code would execute, and stumbled upon the following error:

Error: could not find implicit value for parameter sc: org.apache.mahout.math.drm.DistributedContext
java.io.NotSerializableException: org.apache.mahout.math.DenseVector
Serialization stack:
- object not serializable (class: org.apache.mahout.math.DenseVector, value: {3:1.0,8:1.0,10:1.0})
- field (class: scala.Some, name: x, type: class java.lang.Object)
- object (class scala.Some, Some({3:1.0,8:1.0,10:1.0}))
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Is there a serialization step I might have skipped?
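For context, the stack trace shows Spark's plain Java serializer choking on a Mahout DenseVector. Below is a hedged sketch of the Kryo settings that Mahout's Spark bindings generally rely on when the SparkConf is built by hand; the registrator class name is an assumption for Mahout 0.11.x and should be verified against the version in use:

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: register Kryo so Mahout math objects (e.g. DenseVector) can be
// serialized between executors. mahoutSparkContext() normally sets this up;
// these keys show what a hand-built SparkConf would need.
val kryoConf = new SparkConf()
  .setMaster("local")
  .setAppName("SparkExample")
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Assumed registrator for Mahout 0.11.x Spark bindings -- verify the class name.
  .set("spark.kryo.registrator", "org.apache.mahout.sparkbindings.io.MahoutKryoRegistrator")

val kryoSc = new SparkContext(kryoConf)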

I would put back what you removed; the second error is self-inflicted.

The initial error is because you haven't created the SparkContext yet, which can be done with:

implicit val mc = mahoutSparkContext()
After that, I think the implicit conversion from mc (a SparkDistributedContext) to sc (a SparkContext) is handled by the package helper functions. If sc is still missing, try:

implicit val sc = sdc2sc(mc)
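
Putting those pieces together, here is a minimal end-to-end sketch, assuming Mahout 0.11.x Spark bindings. The tiny in-memory RDD stands in for the Mongo-derived new_doc from the question; the exact mahoutSparkContext arguments should be checked against your version, and by default it also tries to ship the Mahout jars, which typically relies on MAHOUT_HOME being set.

import org.apache.mahout.math.cf.SimilarityAnalysis
import org.apache.mahout.sparkbindings._
import org.apache.mahout.sparkbindings.indexeddataset.IndexedDatasetSpark
import org.apache.spark.SparkContext
import org.apache.spark.rdd.RDD

object RowSimilaritySketch extends App {
  // Mahout-aware context instead of a bare SparkContext; it also configures
  // Kryo serialization for Mahout's math classes (DenseVector etc.).
  implicit val mc: SparkDistributedContext = mahoutSparkContext("local", "SparkExample")

  // The sparkbindings package object can convert SparkDistributedContext to
  // SparkContext implicitly; made explicit here so IndexedDatasetSpark.apply finds its sc.
  implicit val sc: SparkContext = sdc2sc(mc)

  // Stand-in for the (product_id, "attr-1 attr-2 ...") RDD built from Mongo in the question.
  val new_doc: RDD[(String, String)] = sc.parallelize(Seq(
    ("p1", "red cotton"),
    ("p2", "blue cotton")
  ))

  val myIDs = IndexedDatasetSpark(new_doc)(sc)

  // rowSimilarityIDS returns another IndexedDataset; calling dfsWrite on it needs
  // an implicit DistributedContext, which mc satisfies (it can also be passed
  // explicitly, e.g. dfsWrite(path, schema)(mc), as in the question's code).
  val similarities = SimilarityAnalysis.rowSimilarityIDS(myIDs)
}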

Did you forget to show the error?

@pferrel: I've edited the question with the latest error. Please let me know whether I'm following the right process for doing this in Scala/Spark/Mahout.

@pferrel: After removing dfsWrite and getting rowSimilarity to run, I ran into a new problem. I've updated the question.

Thanks @pferrel. I did figure out the ins and outs of Mahout, but I still had to pass (mc) explicitly to both functions to make it work. Should I post the final code?

Sounds like you can answer your own question?

After going through this I did manage to get the code working, but I still don't know whether it's the right way to do it. Should I post my code as an answer?