
Hadoop Hive job stuck at map = 100%, reduce = 0%


I am running hive-0.12.0 on hadoop-2.2.0. After submitting this query:

select  i_item_desc
  ,i_category
  ,i_class
  ,i_current_price
  ,i_item_id
  ,sum(ws_ext_sales_price) as itemrevenue
  ,sum(ws_ext_sales_price)*100/sum(sum(ws_ext_sales_price)) over
      (partition by i_class) as revenueratio
from item JOIN web_sales ON (web_sales.ws_item_sk = item.i_item_sk) JOIN date_dim ON (web_sales.ws_sold_date_sk = date_dim.d_date_sk)
where item.i_category in ('Jewelry', 'Sports', 'Books')
    and date_dim.d_date between '2001-01-12' and '2001-02-11'
    and ws_sold_date between '2001-01-12' and '2001-02-11'
group by
    i_item_id
    ,i_item_desc
    ,i_category
    ,i_class
    ,i_current_price
order by
    i_category
    ,i_class
    ,i_item_id
    ,i_item_desc
    ,revenueratio
limit 100;
I see the following in the logs:

Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 1
2014-07-07 15:26:16,893 Stage-3 map = 0%,  reduce = 0%
2014-07-07 15:26:22,033 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 1.32 sec
Then that last line repeats indefinitely, about once a second. If I look at the container logs, I see:

2014-07-07 17:12:17,477 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1404402886929_0036_m_000000_0: Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
I have searched for exit code 143, but most of what turns up is about memory problems, and I already set fairly large memory values following the suggestions in the linked post. I even tried adding 6GB on top of each setting from that post, and still had no luck.
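For reference, exit code 143 is 128 + 15, i.e. the container was terminated with SIGTERM, which is what YARN does when it kills a container, for example for exceeding its memory allocation. The settings in question are the usual MRv2 container and heap sizes; a minimal sketch with illustrative numbers (not my actual values), set per-session in Hive:

-- illustrative values only, not my actual configuration;
-- the -Xmx heap is kept below the container size so the JVM fits inside it
set mapreduce.map.memory.mb=4096;
set mapreduce.map.java.opts=-Xmx3276m;
set mapreduce.reduce.memory.mb=8192;
set mapreduce.reduce.java.opts=-Xmx6553m;

On the YARN side, yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb in yarn-site.xml have to be at least as large as the requested container sizes, or the containers can be killed or never granted.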

I have also run the job with:

hive -hiveconf hive.root.logger=DEBUG,console
That does produce a lot more information, but I cannot see anything wrong in it.
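Besides the client-side console, the aggregated task logs can be pulled for the whole application; a sketch, assuming log aggregation is enabled, with the application id inferred from the attempt id shown above:

# fetch all container logs for the job (id derived from attempt_1404402886929_0036_m_000000_0)
yarn logs -applicationId application_1404402886929_0036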


I don't know where else to look…

What query are you executing?
Oops. Fixed now. Thanks.
Ran into it again on hadoop 2.3.0.