Apache Spark exceeding the physical memory limit

Below is my spark-submit command:

spark2-submit --class my.class \
--master yarn \
--deploy-mode cluster \
--queue queue-name \
--executor-memory 10G \
--driver-memory 20G \
--num-executors 60 \
--conf spark.executor.memoryOverhead=4G \
--conf spark.yarn.maxAppAttempts=1 \
--conf spark.dynamicAllocation.maxExecutors=480 \
$HOME/myjar.jar param1 param2 param3
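
For reference, the 14 GB limit in the error below is simply the sum of the two executor memory settings above. YARN sizes each executor container roughly like this (the request is also rounded up to the cluster's yarn.scheduler.minimum-allocation-mb increment):

    container memory = spark.executor.memory + spark.executor.memoryOverhead
                     = 10 GB + 4 GB
                     = 14 GB   <- the "14 GB physical memory" YARN reports
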
Error:

    Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 50 in stage 27.0 failed 4 times, 
    most recent failure: Lost task 50.4 in stage 27.0 (TID 20899, cdts13hdfc07p.rxcorp.com, executor 962): 
    ExecutorLostFailure (executor 962 exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 
    15.7 GB of 14 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.

My questions:

  • I am allocating 10G of executor memory, so where does the 14 GB come from?
  • I have already set the overhead to 4G, which is 40% of the executor memory, yet the error still suggests increasing it (see the worked note after this list).
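
For context on that 40% figure, Spark's default executor memory overhead is only a tenth of the heap (a back-of-the-envelope check, not output from this cluster):

    default overhead = max(384 MiB, 0.10 * spark.executor.memory)
                     = max(384 MiB, 0.10 * 10 GB) ≈ 1 GB
    configured here  = 4 GB   (40% of the 10 GB heap)

The "Consider boosting spark.yarn.executor.memoryOverhead" hint is part of the fixed diagnostic Spark attaches whenever YARN kills a container for exceeding its limit, however large the overhead already is.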

Answer:

You have already allocated 10 GB to each Spark executor; you also need to make sure that the machine/node the executors run on has enough resources to meet their other needs.
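
One way to act on that, and on the error's own suggestion, is sketched below. It re-issues the submit from the question with a larger overhead; the 6G value is an illustrative guess, not something from the original post, and it only helps if each node has at least 16 GB available to YARN (yarn.nodemanager.resource.memory-mb) per executor container.

# Sketch only: same submit as above, with the overhead raised so that
# executor memory + overhead (10G + 6G = 16G) covers the ~15.7 GB the tasks were using.
spark2-submit --class my.class \
--master yarn \
--deploy-mode cluster \
--queue queue-name \
--executor-memory 10G \
--driver-memory 20G \
--num-executors 60 \
--conf spark.executor.memoryOverhead=6G \
--conf spark.yarn.maxAppAttempts=1 \
--conf spark.dynamicAllocation.maxExecutors=480 \
$HOME/myjar.jar param1 param2 param3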
