Buffer size too small. size=131072 needed=5569380, but I have the Hive config set hive.exec.orc.default.buffer.size=5600000

Tags: hive, amazon-emr

I am struggling to get a Hive query to complete. My Hive session is set with:

set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.dynamic.partition=true;
set hive.exec.max.dynamic.partitions=10000;
set mapreduce.map.memory.mb=10000;
set mapreduce.reduce.memory.mb=10000;
set mapreduce.reduce.memory.mb=10000;
set hive.exec.max.dynamic.partitions.pernode=10000;
set hive.execution.engine=tez;
set hive.exec.stagingdir=/tmp/hive/;
set hive.exec.scratchdir=/tmp/hive/;
set hive.groupby.orderby.position.alias=true;
set hive.vectorized.execution.enabled=true;
set hive.exec.parallel=true;
set hive.llap.io.enabled=false;
set hive.exec.orc.default.buffer.size=5600000;
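
In case it helps, this is roughly how I check what value the session actually picked up; I am assuming here that issuing set with just the property name echoes the current value, as in the Hive CLI/Beeline:

-- Print the value currently in effect for this session (property name only, no value).
set hive.exec.orc.default.buffer.size;
-- I would expect it to come back as: hive.exec.orc.default.buffer.size=5600000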
After running the Hive file with nohup, if I scroll back through the output after the query fails, I see the following message many times:

java.io.IOException: java.lang.IllegalArgumentException: Buffer size too small. size = 131072 needed = 5569380

And then right at the end:

[Killed/Failed due to:OTHER_VERTEX_FAILURE] DAG did not succeed due to VERTEX_FAILURE. failedVertices:1 killedVertices:14

I am not sure what other information to provide. It looks like my last Hive setting (below) is not working, since I only added it after first seeing that error output:

set hive.exec.orc.default.buffer.size=5600000;
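
For completeness, in the script that setting sits with the other set statements before the query itself, roughly as sketched below with a placeholder query (the table and column names are made up, not my real ones; the real statement is a dynamic-partition insert, as the settings above suggest):

set hive.exec.orc.default.buffer.size=5600000;

-- Placeholder standing in for my real query; these names are illustrative only.
insert overwrite table my_orc_table partition (dt)
select col_a, col_b, dt
from my_source_table;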
Is there anything I can add or edit to try this again?