Spark Word2VecModel exceeds max RPC size for saving


I am training a Word2Vec model that has quite a lot of individual terms (~100k), on a 200-dimension basis.

Spark's typical memory usage for a W2V model is currently dominated by the vector for each word, i.e.:
numberOfDimensions * sizeof(float) * numberOfWords
Do the math and the above is on the order of 100MB, give or take.
Considering I am still working on the tokenizer and still benchmarking for the optimal vector size, I am actually doing these computations on a dictionary of 75k-150k words and anywhere from 100 to 300 dimensions, so let's just say the model can reach ~500MB.
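
For a quick sanity check of those numbers, here is a back-of-the-envelope sketch (my own figures; it counts only the raw float payload and ignores the wordIndex map and any serialization overhead):

// Raw size of the flat float array holding all word vectors:
// numberOfWords * numberOfDimensions * sizeof(float)
def rawVectorBytes(numberOfWords: Long, numberOfDimensions: Long): Long =
  numberOfWords * numberOfDimensions * 4L

println(rawVectorBytes(100000L, 200L) / (1024 * 1024)) // ~76 MB, i.e. "order of 100MB"
println(rawVectorBytes(150000L, 300L) / (1024 * 1024)) // ~171 MB, before any overhead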

Now everything is fine, up until this model gets saved. This is currently implemented this way:

override protected def saveImpl(path: String): Unit = {
  DefaultParamsWriter.saveMetadata(instance, path, sc)
  val data = Data(instance.wordVectors.wordIndex, instance.wordVectors.wordVectors.toSeq)
  val dataPath = new Path(path, "data").toString
  sparkSession.createDataFrame(Seq(data)).repartition(1).write.parquet(dataPath)
}
That is: a dataframe of 1 row is created, the row containing one big float array of all the vectors. The dataframe is saved as parquet. That is fine... unless... it has to be shipped to an executor. Which is what happens in cluster mode.
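
To see why shipping that single row goes wrong, compare its size against the RPC limit (a rough sketch with my own numbers; the 204136673-byte task in the trace below is in the same ballpark):

// Illustrative only: one row carrying every vector vs. the RPC message limit.
val singleRowBytes = 150000L * 300L * 4L // ~180 MB of raw floats in a single row
val rpcLimitBytes  = 134217728L          // the 128 MB spark.rpc.message.maxSize seen in the error below
println(singleRowBytes > rpcLimitBytes)  // true: the one-row task cannot be shipped to an executor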

This ends up blowing the job up, with a stack trace like the following:

16/11/28 11:29:00 INFO scheduler.DAGScheduler: Job 3 failed: parquet at Word2Vec.scala:311, took 5,208453 s  
16/11/28 11:29:00 ERROR datasources.InsertIntoHadoopFsRelationCommand: Aborting job.
org.apache.spark.SparkException: Job aborted due to stage failure: 
    Serialized task 32:5 was 204136673 bytes, 
    which exceeds max allowed: spark.rpc.message.maxSize (134217728 bytes).
    Consider increasing spark.rpc.message.maxSize or using broadcast variables for large values.
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1454)
Simple code to reproduce it (you cannot spark-shell it locally, though, you need to ship it to a cluster):

import org.apache.spark.sql.SparkSession

object TestW2V {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("TestW2V").getOrCreate()
    import spark.implicits._

    // Alphabet
    val randomChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTYVWXTZ".toCharArray
    val random = new java.util.Random()

    // Dictionary
    def makeWord(wordLength: Int): String = new String((0 until wordLength).map(_ => randomChars(random.nextInt(randomChars.length))).toArray)
    val randomWords = for (wordIndex <- 0 to 100000) // Make approx 100 thousand distinct words
                      yield makeWord(random.nextInt(10) + 5)

    // Corpus (make it fairly non trivial)
    def makeSentence(numberOfWords: Int): Seq[String] = (0 until numberOfWords).map(_ => randomWords(random.nextInt(randomWords.length)))
    val allWordsDummySentence = randomWords // all words at least once
    val randomSentences = for (sentenceIndex <- 0 to 100000)
                          yield makeSentence(random.nextInt(10) + 5)
    val corpus: Seq[Seq[String]] = allWordsDummySentence +: randomSentences

    // Train a W2V model on the corpus
    val df = spark.createDataFrame(corpus.map(Tuple1.apply))
    import org.apache.spark.ml.feature.Word2Vec
    val w2v = new Word2Vec().setVectorSize(250).setMinCount(1).setInputCol("_1").setNumPartitions(4)
    val w2vModel = w2v.fit(df)
    w2vModel.save("/home/Documents/w2v")

    spark.stop()
  }
}

I had the exact same experience as you: it works fine locally, but in cluster mode it dies unless the RPC size is bumped to 512 MB as you suggested.

i.e. passing spark.rpc.message.maxSize=512 gets me through.
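
For reference, here is one way to pass that setting (a minimal sketch, assuming the SparkSession import from the example above; the value is in MB and the appName is just a placeholder):

// Equivalent to spark-submit --conf spark.rpc.message.maxSize=512
val spark = SparkSession.builder()
  .appName("TestW2V")
  .config("spark.rpc.message.maxSize", "512") // 512 MB, up from the default 128 MB
  .getOrCreate()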

I also agree the save implementation looks suspicious, especially the repartition(1).
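
If bumping the RPC size is not an option, one workaround sketch (my own, not the built-in writer, so the result cannot be read back with Word2VecModel.load) is to persist the vectors one row per word via getVectors, which avoids building one giant row in the first place:

// getVectors exposes a DataFrame with columns "word" and "vector", one row per word,
// so the parquet write is spread over many small rows instead of a single ~200 MB one.
w2vModel.getVectors
  .write
  .mode("overwrite")
  .parquet("/home/Documents/w2v_vectors") // hypothetical path, mirroring the example above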
