Hadoop: why does a Hive query without an ORDER BY / SORT BY clause end up with a single reducer?

I have a simple query associated with a streaming job, and it contains no ORDER BY statement:

set hive.exec.max.dynamic.partitions.pernode=100;
set hive.exec.max.dynamic.partitions=100;
set hive.exec.max.created.files=100;
set hive.exec.dynamic.partition.mode=nonstrict;
set mapred.reduce.tasks=20;
add file /home/devo/c1166313/pafvalid.py ;
add file /home/devo/c1166313/paf-rules.properties ;
from (
  from (select * from mz_paf_errors_dummy_v) p
  select transform (p.*)
  row format delimited fields terminated by '|'
  using 'pafvalid.py paf-rules.properties 10'
  as (<column list>)
  row format delimited fields terminated by '|'
) b
insert overwrite table mytab partition (passfail, batch_sk)
select <col list>;
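For context on how the streaming step works: Hive feeds each input row to the TRANSFORM script on stdin, with fields joined by the declared delimiter ('|' here), and reads output rows from the script's stdout in the same format. The following is a minimal sketch of such a script; the validation logic and the appended pass/fail column are hypothetical stand-ins, since the real pafvalid.py is not shown in the question.

```python
import sys

def transform_row(line, delimiter="|"):
    """Split one input row, apply (placeholder) validation, and return
    the output row. Appending a pass/fail column is a hypothetical
    stand-in for whatever pafvalid.py actually does."""
    fields = line.rstrip("\n").split(delimiter)
    fields.append("PASS")  # hypothetical pass/fail flag
    return delimiter.join(fields)

def main():
    # Hive streams each input row on stdin and reads output rows from
    # stdout, using the delimiters declared in the TRANSFORM clause.
    for line in sys.stdin:
        print(transform_row(line))

if __name__ == "__main__":
    main()
```

Note that this mapper-side transform by itself does not require a reduce phase; the single reducer in the plan comes from the insert stage.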

That sounds strange, because you set
mapred.reduce.tasks
, which should override any other setting. Try checking the
hive.exec.reducers.bytes.per.reducer
parameter, because by default the number of reducers is computed as:
#reducers = (input bytes to the mappers) / (hive.exec.reducers.bytes.per.reducer)
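That estimate can be sketched as follows. This is a simplified model, not Hive's exact code: real Hive rounds up, caps the result at hive.exec.reducers.max, and the details vary between versions; the 256 MB and 1009 defaults below are assumptions drawn from common Hive configurations.

```python
import math

def estimate_reducers(input_bytes,
                      bytes_per_reducer=256 * 1024 * 1024,  # hive.exec.reducers.bytes.per.reducer
                      max_reducers=1009):                    # hive.exec.reducers.max
    """Simplified sketch of Hive's compile-time reducer estimate:
    ceil(input size / bytes per reducer), capped at a maximum and
    floored at one reducer."""
    if input_bytes <= 0:
        return 1
    return max(1, min(max_reducers, math.ceil(input_bytes / bytes_per_reducer)))

# With the 268435456 (256 MB) value mentioned in the comments below,
# a 100 MB input yields a single reducer, while 10 GB yields 40:
print(estimate_reducers(100 * 1024 * 1024))  # -> 1
print(estimate_reducers(10 * 1024 ** 3))     # -> 40
```

So an input smaller than one bytes-per-reducer chunk is enough to produce a single-reducer plan.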
- What is the size of the input? Hmm, good suggestion; hive.exec.reducers.bytes.per.reducer=268435456. I'll keep that in mind if I see a single reducer again. I believe we had more data than that in this case, but I will keep it in mind.
Number of reduce tasks determined at compile time: 1