Building an inverted index in Scala exceeds the Java heap size


This may be a very specific case, but after banging my head against it for a while, I'd like to get help from the Stack Overflow community.

I am building an inverted index for a large data set (one day's worth of data from a big system). The inverted index is built as a MapReduce job on Hadoop, and the index itself is built with the help of Scala. The structure of the inverted index looks like this:

{key: "New", ProductID: [1, 2, 3, 4, 5, ...]}

These records are written to Avro files.
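For reference, a record of that shape corresponds to an Avro schema along the following lines. This is only a sketch: the question never shows the real IndexRecord schema, so the field names token and ids are assumptions inferred from the constructor call in the code below.

  import org.apache.avro.Schema

  // Hypothetical schema for IndexRecord; the real field names and
  // namespace may differ from these assumptions.
  val indexRecordSchema: Schema = new Schema.Parser().parse(
    """{
      |  "type": "record",
      |  "name": "IndexRecord",
      |  "fields": [
      |    {"name": "token", "type": "string"},
      |    {"name": "ids", "type": {"type": "array", "items": "long"}}
      |  ]
      |}""".stripMargin)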

During this process I ran into a Java heap size problem. I think the reason is that terms like "New" above carry a huge number of product IDs. Here is roughly where the problem can arise in my Scala code:

  import scala.collection.JavaConverters._ // provides .asJava

  def toIndexedRecord(ids: List[Long], token: String): IndexRecord = {
    // Box each scala.Long as java.lang.Long, then wrap the list for Avro.
    val javaList = ids.map(l => l: java.lang.Long).asJava
    new IndexRecord(token, javaList)
  }
This is how I use the method (it is called in many places, but the code structure and logic are the same everywhere):

textPipeDump is a Scalding MultipleTextLineFiles object:

case class MultipleTextLineFiles(p : String*) extends FixedPathSource(p:_*) with TextLineScheme
I have a case class that splits each text line and grabs the fields I want; that is the object ss.
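The call site itself did not survive in the question, but the stack trace (GroupBuilder, KeyedList.sum, ListMonoid.plus) points at a group-by followed by a per-key list aggregation. Below is a hedged reconstruction of that shape against Scalding's typed API; extract is a hypothetical stand-in for the ss splitter, not the author's actual code.

  import com.twitter.scalding.typed.TypedPipe

  // Hypothetical stand-in for the author's `ss` case-class splitter;
  // the real field layout is not shown in the question.
  def extract(line: String): (String, Long) = {
    val Array(token, id) = line.split("\t")
    (token, id.toLong)
  }

  // Hedged reconstruction of the pipeline shape the stack trace implies:
  // group by token, then collect every productId for that token into one
  // in-memory List before writing the Avro record.
  def buildIndex(lines: TypedPipe[String]): TypedPipe[IndexRecord] =
    lines
      .map(extract)                 // (token, productId)
      .group                        // shuffle: one reduce group per token
      .toList                       // per key: all productIds in one List
      .toTypedPipe
      .map { case (token, ids) => toIndexedRecord(ids, token) }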

This is my stack trace:

Exception in thread "IPC Client (47) connection to /127.0.0.1:55977 from job_201306241658_232590" java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.io.IOUtils.closeStream(IOUtils.java:226)
    at org.apache.hadoop.ipc.Client$Connection.close(Client.java:903)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:800)
28079664 [main] ERROR cascading.flow.stream.TrapHandler - caught Throwable, no trap available, rethrowing
cascading.pipe.OperatorException: [WritableSequenceFile(h...][com.twitter.scalding.GroupBuilder$$anonfun$1.apply(GroupBuilder.scala:189)] operator Every failed executing operation: MRMAggregator[decl:'value']
    at cascading.flow.stream.AggregatorEveryStage.receive(AggregatorEveryStage.java:136)
    at cascading.flow.stream.AggregatorEveryStage.receive(AggregatorEveryStage.java:39)
    at cascading.flow.stream.OpenReducingDuct.receive(OpenReducingDuct.java:49)
    at cascading.flow.stream.OpenReducingDuct.receive(OpenReducingDuct.java:28)
    at cascading.flow.hadoop.stream.HadoopGroupGate.run(HadoopGroupGate.java:90)
    at cascading.flow.hadoop.FlowReducer.reduce(FlowReducer.java:133)
    at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:520)
    at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:421)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1178)
    at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.OutOfMemoryError: Java heap space
    at scala.collection.mutable.ListBuffer.$plus$eq(ListBuffer.scala:168)
    at scala.collection.mutable.ListBuffer.$plus$eq(ListBuffer.scala:45)
    at scala.collection.generic.Growable$$anonfun$$plus$plus$eq$1.apply(Growable.scala:48)
    at scala.collection.generic.Growable$$anonfun$$plus$plus$eq$1.apply(Growable.scala:48)
    at scala.collection.immutable.List.foreach(List.scala:318)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ListBuffer.$plus$plus$eq(ListBuffer.scala:176)
    at scala.collection.immutable.List.$colon$colon$colon(List.scala:127)
    at scala.collection.immutable.List.$plus$plus(List.scala:193)
    at com.twitter.algebird.ListMonoid.plus(Monoid.scala:86)
    at com.twitter.algebird.ListMonoid.plus(Monoid.scala:84)
    at com.twitter.scalding.KeyedList$$anonfun$sum$1.apply(TypedPipe.scala:264)
    at com.twitter.scalding.MRMAggregator.aggregate(Operations.scala:279)
    at cascading.flow.stream.AggregatorEveryStage.receive(AggregatorEveryStage.java:128)
    ... 12 more
I don't get the error when I run the MapReduce job over a small data set. That means that as the data grows, the number of items/product IDs indexed for words like New or old gets large enough to overflow the heap.
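The Caused by frames show where: com.twitter.algebird.ListMonoid.plus is concatenating immutable lists inside the reducer, which is what Scalding's toList does per key, each value wrapped in a one-element List and then summed. A tiny sketch of that behavior, using only the classes named in the trace:

  // ListMonoid.plus is list concatenation, so summing per key builds
  // one immutable List holding every product ID for that key in
  // reducer memory at once; hot tokens like "New" blow the heap.
  val listMonoid = new com.twitter.algebird.ListMonoid[Long]
  val posting = (1L to 5L).toList
    .map(List(_))                   // what toList does per value
    .reduce(listMonoid.plus)        // what sum does per key: List(1, 2, 3, 4, 5)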


So the question is: how can I avoid overflowing the Java heap and still complete this task?

How many product IDs are there per keyword? I don't have an exact number, because some keywords yield only 1 or 2 items while others can yield 100,000. If you can make it lazy, that should do it. Otherwise, you may have to do the work in several passes so that you don't accumulate too much data in memory (as far as I can tell, you currently hold all product IDs for all keys in memory at the same time). I can't help with the details, though, only the strategy.
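To make that strategy concrete: besides simply raising the reducer heap (mapred.child.java.opts in this Hadoop 1.x setup), the structural fix is to stop materializing one giant List per key. Below is a hedged sketch using Scalding's mapValueStream, which hands the reducer an Iterator over the values instead of a prebuilt List; the chunk size of 100000 is an arbitrary illustration, and extract is the same hypothetical splitter as above.

  // Stream values per key instead of collecting them with toList.
  // Each token may now emit several IndexRecord chunks, which bounds
  // the memory held for any single key.
  def buildChunkedIndex(lines: TypedPipe[String]): TypedPipe[IndexRecord] =
    lines
      .map(extract)
      .group
      .mapValueStream(_.grouped(100000).map(_.toList)) // lazy, bounded chunks
      .toTypedPipe
      .map { case (token, ids) => toIndexedRecord(ids, token) }

The trade-off is that anything consuming the index must tolerate, or merge, multiple records for the same token; in exchange, no single reduce key ever holds more than one chunk in memory.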