
Understanding an Apache Spark message (Scala, Apache Spark)


I'd like help understanding this message:

```
INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 2 is 2202921 bytes
```
What does 2202921 mean here?

My job performs a shuffle, and while reading the shuffle files from the previous stage it first prints that message and then at some point fails with the following error:

```
14/11/12 11:09:46 WARN scheduler.TaskSetManager: Lost task 224.0 in stage 4.0 (TID 13938, ip-xx-xxx-xxx-xx.ec2.internal): FetchFailed(BlockManagerId(11, ip-xx-xxx-xxx-xx.ec2.internal, 48073, 0), shuffleId=2, mapId=7468, reduceId=224)
14/11/12 11:09:46 INFO scheduler.DAGScheduler: Marking Stage 4 (coalesce at <console>:49) as failed due to a fetch failure from Stage 3 (map at <console>:42)
14/11/12 11:09:46 INFO scheduler.DAGScheduler: Stage 4 (coalesce at <console>:49) failed in 213.446 s
14/11/12 11:09:46 INFO scheduler.DAGScheduler: Resubmitting Stage 3 (map at <console>:42) and Stage 4 (coalesce at <console>:49) due to fetch failure
14/11/12 11:09:46 INFO scheduler.DAGScheduler: Executor lost: 11 (epoch 2)
14/11/12 11:09:46 INFO storage.BlockManagerMasterActor: Trying to remove executor 11 from BlockManagerMaster.
14/11/12 11:09:46 INFO storage.BlockManagerMaster: Removed 11 successfully in removeExecutor
14/11/12 11:09:46 INFO scheduler.Stage: Stage 3 is now unavailable on executor 11 (11893/12836, false)
14/11/12 11:09:46 INFO scheduler.DAGScheduler: Resubmitting failed stages
14/11/12 11:09:46 INFO scheduler.DAGScheduler: Submitting Stage 3 (MappedRDD[13] at map at <console>:42), which has no missing parents
14/11/12 11:09:46 INFO storage.MemoryStore: ensureFreeSpace(25472) called with curMem=474762, maxMem=11113699737
14/11/12 11:09:46 INFO storage.MemoryStore: Block broadcast_6 stored as values in memory (estimated size 24.9 KB, free 10.3 GB)
14/11/12 11:09:46 INFO storage.MemoryStore: ensureFreeSpace(5160) called with curMem=500234, maxMem=11113699737
14/11/12 11:09:46 INFO storage.MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 5.0 KB, free 10.3 GB)
14/11/12 11:09:46 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on ip-xx.ec2.internal:35571 (size: 5.0 KB, free: 10.4 GB)
14/11/12 11:09:46 INFO storage.BlockManagerMaster: Updated info of block broadcast_6_piece0
14/11/12 11:09:46 INFO scheduler.DAGScheduler: Submitting 943 missing tasks from Stage 3 (MappedRDD[13] at map at <console>:42)
14/11/12 11:09:46 INFO cluster.YarnClientClusterScheduler: Adding task set 3.1 with 943 tasks
```

I chose 1280 because I have 20 nodes with 32 cores each; I derived it as 2 * 32 * 20, as in the sketch below.
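As a minimal sketch, that sizing rule (a common rule of thumb of roughly two tasks per core) looks like:

```scala
// Partition count = tasks-per-core * cores-per-node * nodes.
val tasksPerCore  = 2
val coresPerNode  = 32
val nodes         = 20
val numPartitions = tasksPerCore * coresPerNode * nodes  // = 1280
```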

For the shuffle stage, Spark creates a number of `ShuffleMapTask`s that write intermediate results to disk. The location of each output is recorded in a `MapStatus` and sent to the `MapOutputTrackerMaster` (on the driver).

When the next stage starts running, it needs those location statuses, so the executors ask the `MapOutputTrackerMaster` for them. The `MapOutputTrackerMaster` serializes the statuses into bytes and sends them to the executors; the log line above reports the size of those serialized statuses, in bytes.
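One way to get intuition for that number is a rough size model. This is a back-of-envelope assumption about how the payload scales, not Spark's exact wire format:

```scala
// Hypothetical model: roughly one size entry per (map task, reduce partition)
// pair, before the tracker compresses the whole payload. The task counts are
// taken from the log and the question above.
val numMapTasks = 12836L  // Stage 3 total tasks, from "(11893/12836, false)"
val numReducers = 1280L   // the groupByKey(1280) partition count
val approxRawBytes = numMapTasks * numReducers  // ≈ 16.4 MB uncompressed
// After compression this can shrink to the ~2.2 MB the log reports, and that
// compressed payload is what must fit inside a single Akka frame.
println(approxRawBytes)
```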

These statuses are sent via Akka, and Akka enforces a maximum message size. You can set that limit with `spark.akka.frameSize`:

> Maximum message size to allow in "control plane" communication (for serialized tasks and task results), in MB. Increase this if your tasks need to send back large results to the driver (e.g. using collect() on a large dataset).


If the size exceeds `spark.akka.frameSize`, Akka will refuse to deliver the message and your job will fail, so it helps to tune `spark.akka.frameSize` to a suitable value.
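As a quick sanity check against the limit (assuming the Spark 1.x default of 10 MB for `spark.akka.frameSize`; verify your cluster's actual setting):

```scala
// Compare the reported status size with the Akka frame limit.
val statusBytes    = 2202921L            // from the INFO log line above
val frameSizeMB    = 10L                 // assumed Spark 1.x default; verify locally
val frameSizeBytes = frameSizeMB * 1024 * 1024
println(statusBytes < frameSizeBytes)    // true here: ~2.2 MB fits under 10 MB
```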

Could you give an example of setting frameSize in spark-shell? I can't find the syntax.

Which version are you using? Since Spark 1.1 you can set it with `./spark-shell --conf spark.akka.frameSize=30`.

Does 30 mean 30 MB, and is that documented anywhere? I'm using 1.0.1, but we will upgrade. I don't want to seem ungrateful, but this is hard to find in the docs; thanks for your time. Also, how do I set this inside an application? `System.setProperty("spark.akka.frameSize", "30")`?

In an application, set the value on a `SparkConf` when creating the `SparkContext`, e.g. `val conf = new SparkConf(); conf.set("spark.akka.frameSize", "30")`.
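For convenience, the two settings mentioned in this exchange as one runnable sketch (the value 30 and the app name are just illustrative):

```scala
// From the shell (Spark 1.1+):
//   ./spark-shell --conf spark.akka.frameSize=30
//
// Inside an application, set it on SparkConf before creating the SparkContext:
import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("frame-size-example")    // illustrative name
  .set("spark.akka.frameSize", "30")   // in MB
val sc = new SparkContext(conf)
```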
```scala
// The job that triggers the shuffle (rdd1, rdd2, the Fact case class and
// PARTITION_KEY are defined elsewhere). Note: `type` is a reserved word in
// Scala, so the timestamp is read by field access instead of destructuring.
(rdd1 ++ rdd2).map { t => (t.id, t) }.groupByKey(1280).map {
  case (id, sequence) =>
    // Keep the most recently updated record for each id.
    val newrecord = sequence.maxBy(_.dw_last_updated.toLong)
    (PARTITION_KEY + "=" + newrecord.day.toString + "/part", newrecord)
}.coalesce(2048, true).saveAsTextFile("s3://myfolder/PT/test20nodes/")
```