
Out of memory error in Hive with Tez


I have a script that runs fine on Hive 13 (on YARN). I am now trying to run it with Tez. When running the query on a large dataset, I hit the following error:

0 FATAL [Socket Reader #1 for port 55739] org.apache.hadoop.yarn.YarnUncaughtExceptionHandler: Thread Thread[Socket Reader #1 for port 55739,5,main] threw an Error.  Shutting down now...
            java.lang.OutOfMemoryError: GC overhead limit exceeded
                at java.nio.ByteBuffer.allocate(ByteBuffer.java:331)
                at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:1510)
                at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:750)
                at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:624)
                at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:595)
            2015-12-07 20:31:32,859 FATAL [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Error in dispatcher thread
            java.lang.OutOfMemoryError: GC overhead limit exceeded
            2015-12-07 20:31:30,590 WARN [IPC Server handler 0 on 55739] org.apache.hadoop.ipc.Server: IPC Server handler 0 on 55739, call heartbeat({  containerId=container_1449516549171_0001_01_000100, requestId=10184, startIndex=0, maxEventsToGet=0, taskAttemptId=null, eventCount=0 }), rpc version=2, client version=19, methodsFingerPrint=557389974 from 10.10.30.35:47028 Call#11165 Retry#0: error: java.lang.OutOfMemoryError: GC overhead limit exceeded
            java.lang.OutOfMemoryError: GC overhead limit exceeded
                at javax.security.auth.SubjectDomainCombiner.optimize(SubjectDomainCombiner.java:464)
                at javax.security.auth.SubjectDomainCombiner.combine(SubjectDomainCombiner.java:267)
                at java.security.AccessControlContext.goCombiner(AccessControlContext.java:499)
                at java.security.AccessControlContext.optimize(AccessControlContext.java:407)
                at java.security.AccessController.getContext(AccessController.java:501)
                at javax.security.auth.Subject.doAs(Subject.java:412)
                at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
                at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
            2015-12-07 20:32:53,495 INFO [Thread-60] amazon.emr.metrics.MetricsSaver: Saved 4:3 records to /mnt/var/em/raw/i-782f08c8_20151207_7921_07921_raw.bin
            2015-12-07 20:32:53,495 INFO [AsyncDispatcher event handler] org.apache.hadoop.yarn.event.AsyncDispatcher: Exiting, bbye..
            2015-12-07 20:32:50,435 INFO [IPC Server handler 20 on 55739] org.apache.hadoop.ipc.Server: IPC Server handler 20 on 55739, call getTask(org.apache.tez.common.ContainerContext@409a6aa9), rpc version=2, client version=19, methodsFingerPrint=557389974 from 10.10.30.33:33644 Call#11094 Retry#0: error: java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded
            java.io.IOException: java.lang.OutOfMemoryError: GC overhead limit exceeded
            2015-12-07 20:32:29,117 WARN [IPC Server handler 23 on 55739] org.apache.hadoop.ipc.Server: IPC Server handler 23 on 55739, call getTask(org.apache.tez.common.ContainerContext@7c7e6992), rpc version=2, client version=19, methodsFingerPrint=557389974 from 10.10.30.38:44218 Call#11260 Retry#0: error: java.lang.OutOfMemoryError: GC overhead limit exceeded
            java.lang.OutOfMemoryError: GC overhead limit exceeded
            2015-12-07 20:32:53,497 INFO [Thread-60] amazon.emr.metrics.MetricsSaver: Saved 1:1 records to /mnt/var/em/raw/i-782f08c8_20151207_7921_07921_raw.bin
            2015-12-07 20:32:53,498 INFO [Thread-61] amazon.emr.metrics.MetricsSaver: Saved 1:1 records to /mnt/var/em/raw/i-782f08c8_20151207_7921_07921_raw.bin
            2015-12-07 20:32:53,498 INFO [Thread-2] org.apache.tez.dag.app.DAGAppMaster: DAGAppMaster received a signal. Signaling TaskScheduler
            2015-12-07 20:32:53,498 INFO [Thread-2] org.apache.tez.dag.app.rm.TaskSchedulerEventHandler: TaskScheduler notified that iSignalled was : true
            2015-12-07 20:32:53,499 INFO [Thread-2] org.apache.tez.dag.history.HistoryEventHandler: Stopping HistoryEventHandler
            2015-12-07 20:32:53,499 INFO [Thread-2] org.apache.tez.dag.history.recovery.RecoveryService: Stopping RecoveryService
            2015-12-07 20:32:53,499 INFO [Thread-2] org.apache.tez.dag.history.recovery.RecoveryService: Closing Summary Stream
            2015-12-07 20:32:53,499 INFO [LeaseRenewer:hadoop@10.10.30.148:9000] org.apache.hadoop.util.ExitUtil: Halt with status -1 Message: HaltException

Some specs on the EMR cluster: an m1.xlarge master node, 4 r3.8xlarge core nodes, and 2 r3.8xlarge task nodes (roughly 1.3 TB of memory in total).

I tried the following settings, but they did not help:

SET tez.task.resource.memory.mb=8000;
SET hive.tez.container.size=30208;
SET hive.tez.java.opts=-Xmx24168m;
Also, since Amazon ships Tez version 0.4.1 on EMR, that is what I am running right now (maybe that is the problem?).


Can anyone help fix this? I have tried tweaking some memory-related properties such as mapreduce.map.memory.mb, but without much luck.

Try changing tez.task.resource.memory.mb.

In the Tez context, this is the parameter used for map memory and reduce memory.
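For example, a minimal sketch of a Hive session that sizes the Tez task containers explicitly. The values below are placeholders, not recommendations for the cluster in the question, and they have to fit within the YARN allocation limits discussed next:

SET tez.task.resource.memory.mb=4096;   -- memory requested per Tez task container (hypothetical value)
SET hive.tez.container.size=4096;       -- container size Hive requests for its Tez tasks (hypothetical value)
SET hive.tez.java.opts=-Xmx3276m;       -- JVM heap, roughly 80% of the container size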

Check your yarn-site.xml for the following properties:

<configuration>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
    <description>Whether virtual memory limits will be enforced for containers</description>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
    <description>Ratio between virtual memory to physical memory when setting memory limits for containers</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>2048</value>
  </property>
</configuration>
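Keep in mind that these values have to be mutually consistent: a yarn-site.xml like the snippet above caps any single container at 2048 MB, so a request such as hive.tez.container.size=30208 could never be granted under it. As a rough sketch of the ordering (this is a general guideline, not something stated in the thread):

yarn.nodemanager.resource.memory-mb   >=  yarn.scheduler.maximum-allocation-mb
yarn.scheduler.maximum-allocation-mb  >=  hive.tez.container.size and tez.task.resource.memory.mb
hive.tez.java.opts (-Xmx)             ~=  0.8 * hive.tez.container.size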


Hmm... I would start with hive.tez.container.size first, and deal with that pesky "task" property afterwards.

What do you mean by "the parameter used for map memory and reduce memory"? Do the map and reduce tasks simply use the memory and CPU of the container allocated to them? Also, is there any best practice for deriving tez.task.resource.memory.mb?