Apache Spark GraphX EdgeRDD count takes a long time to compute


I am running standalone Spark, and the code below relates to an EdgeRDD. The edges of the graph are loaded from a text file; there are about 67 million records.

val edges: RDD[Edge[Int]] = edge_file.map { line =>
  val x = line.split("\\s+")
  Edge(x(0).toLong, x(1).toLong, x(2).toInt)
}
val edges1: EdgeRDD[Int] = EdgeRDD.fromEdges(edges)

println(edges1.count)
The problem is just counting them; it struggles during RDD creation. I have a machine with 24 GB of RAM. What would be the best settings for executor and driver memory, or do I need to set any other configuration in spark-env.sh? I am running Spark 1.4.0.

spark-1.4.0-bin-hadoop2.6/bin/spark-submit --executor-memory 10g --driver-memory 10g --class "GraphParser" --master local[6] target/scala-2.10/simple-project_2.10-1.0.jar 100
Here is the output:

    15/06/17 02:32:42 INFO SparkContext: Starting job: reduce at EdgeRDDImpl.scala:89
    15/06/17 02:32:42 INFO DAGScheduler: Got job 1 (reduce at EdgeRDDImpl.scala:89) with 6 output partitions (allowLocal=false)
    15/06/17 02:32:42 INFO DAGScheduler: Final stage: ResultStage 1(reduce at EdgeRDDImpl.scala:89)
    15/06/17 02:32:42 INFO DAGScheduler: Parents of final stage: List()
    15/06/17 02:32:42 INFO DAGScheduler: Missing parents: List()
    15/06/17 02:32:42 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[11] at map at EdgeRDDImpl.scala:89), which has no missing parents
    15/06/17 02:32:42 INFO MemoryStore: ensureFreeSpace(2904) called with curMem=507670, maxMem=8890959790
    15/06/17 02:32:42 INFO MemoryStore: Block broadcast_3 stored as values in memory (estimated size 2.8 KB, free 8.3 GB)
    15/06/17 02:32:42 INFO MemoryStore: ensureFreeSpace(1766) called with curMem=510574, maxMem=8890959790
    15/06/17 02:32:42 INFO MemoryStore: Block broadcast_3_piece0 stored as bytes in memory (estimated size 1766.0 B, free 8.3 GB)
    15/06/17 02:32:42 INFO BlockManagerInfo: Added broadcast_3_piece0 in memory on localhost:55287 (size: 1766.0 B, free: 8.3 GB)
    15/06/17 02:32:42 INFO SparkContext: Created broadcast 3 from broadcast at DAGScheduler.scala:874
    15/06/17 02:32:42 INFO DAGScheduler: Submitting 6 missing tasks from ResultStage 1 (MapPartitionsRDD[11] at map at EdgeRDDImpl.scala:89)
    15/06/17 02:32:42 INFO TaskSchedulerImpl: Adding task set 1.0 with 6 tasks
    15/06/17 02:32:42 INFO FairSchedulableBuilder: Added task set TaskSet_1 tasks to pool default
    15/06/17 02:32:47 WARN TaskSetManager: Stage 1 contains a task of very large size (140947 KB). The maximum recommended task size is 100 KB.
    15/06/17 02:32:47 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 2, localhost, PROCESS_LOCAL, 144329897 bytes)
    15/06/17 02:32:53 INFO TaskSetManager: Starting task 1.0 in stage 1.0 (TID 3, localhost, PROCESS_LOCAL, 145670467 bytes)
    15/06/17 02:32:58 INFO TaskSetManager: Starting task 2.0 in stage 1.0 (TID 4, localhost, PROCESS_LOCAL, 145674593 bytes)
    15/06/17 02:33:03 INFO TaskSetManager: Starting task 3.0 in stage 1.0 (TID 5, localhost, PROCESS_LOCAL, 145687533 bytes)
    15/06/17 02:33:08 INFO TaskSetManager: Starting task 4.0 in stage 1.0 (TID 6, localhost, PROCESS_LOCAL, 145694247 bytes)
    15/06/17 02:33:12 INFO TaskSetManager: Starting task 5.0 in stage 1.0 (TID 7, localhost, PROCESS_LOCAL, 145686985 bytes)
    15/06/17 02:33:12 INFO Executor: Running task 0.0 in stage 1.0 (TID 2)
    15/06/17 02:33:12 INFO Executor: Running task 2.0 in stage 1.0 (TID 4)
    15/06/17 02:33:12 INFO Executor: Running task 1.0 in stage 1.0 (TID 3)
    15/06/17 02:33:12 INFO Executor: Running task 5.0 in stage 1.0 (TID 7)
    15/06/17 02:33:12 INFO Executor: Running task 4.0 in stage 1.0 (TID 6)
    15/06/17 02:33:12 INFO Executor: Running task 3.0 in stage 1.0 (TID 5)

After going through the logs, I found that my tasks were very large and were taking a long time to schedule. Spark itself warns about it:

 15/06/17 02:32:47 WARN TaskSetManager: Stage 1 contains a task of very large size (140947 KB). The maximum recommended task size is 100 KB.
This led me to partition the data with the code below:

val graphDocs = EdgeRDD.fromEdges(sc.parallelize(docList, 200))
That solved the problem; I got the result in 45 seconds. Hope this helps someone.
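
For reference, here is a minimal sketch of how the same idea (more, smaller partitions) might be applied when the edges come straight from a text file rather than a local collection. The file path and the partition count of 200 are assumptions for illustration; `minPartitions` is the standard second argument of `sc.textFile`, and `repartition` is the usual alternative when the RDD already exists.

import org.apache.spark.graphx.{Edge, EdgeRDD}
import org.apache.spark.rdd.RDD

// Hypothetical path; ask for more input partitions up front so that no single
// task has to carry a huge slice of the ~67 million edges.
val edge_file = sc.textFile("edges.txt", minPartitions = 200)

val edges: RDD[Edge[Int]] = edge_file.map { line =>
  val x = line.split("\\s+")
  Edge(x(0).toLong, x(1).toLong, x(2).toInt)
}

// Or, if the edge RDD is already built, repartition it before wrapping it:
// val edges1 = EdgeRDD.fromEdges(edges.repartition(200))
val edges1: EdgeRDD[Int] = EdgeRDD.fromEdges(edges)
println(edges1.count)

The goal in both cases is to avoid single tasks carrying hundreds of megabytes, which is what the TaskSetManager warning above points at.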