MapReduce Hive count query does not complete, runs forever

Tags: mapreduce, hive, hbase, hadoop2

I am new to Hive. I am using HBase 1.1.0, Hadoop 2.5.1, and Hive 0.13 for my requirements.

The setup works fine, and I can run Hive queries using Beeline.

Query: select count(*) from X_table.

The query completes in 37.848 seconds.

I set up the same environment in a Maven project and tried executing a few SELECT queries through the Hive client, and those work fine. But when I try to execute the same count query, the MapReduce job never completes. It looks as though the job keeps starting over. How can I fix this?

Code:

Log details:

Total jobs = 1 
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):  
    set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:  
    set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
    set mapreduce.job.reduces=<number>
Starting Job = job_1429243611915_0030, Tracking URL = http://master:8088/proxy/application_1429243611915_0030/
Kill Command = /usr/local/pcs/hadoop/bin/hadoop job  -kill job_1429243611915_0030
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-04-20 09:28:02,616 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:29:02,728 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:30:03,432 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:31:04,054 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:32:04,675 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:33:05,298 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:34:05,866 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:35:06,419 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:36:06,985 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:37:07,551 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:38:08,289 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:39:09,184 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:40:09,780 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:41:10,367 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:42:10,965 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:43:11,595 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:44:12,181 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:45:12,952 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:46:13,590 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:47:14,218 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:48:14,790 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:49:15,378 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:50:16,014 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:51:16,808 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:52:17,378 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:53:17,928 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:54:18,491 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:55:19,049 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:56:19,797 Stage-1 map = 0%,  reduce = 0%
2015-04-20 09:57:20,344 Stage-1 map = 0%,  reduce = 0%
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
    set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
    set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
    set mapreduce.job.reduces=<number>
Starting Job = job_1429243611915_0031, Tracking URL = http://master:8088/proxy/application_1429243611915_0031/
Kill Command = /usr/local/pcs/hadoop/bin/hadoop job  -kill job_1429243611915_0031
2015-04-20 09:58:20,858 Stage-1 map = 0%,  reduce = 0%

If you increase the memory for these two settings in the
yarn-site.xml
file, it will run quickly:

yarn.scheduler.maximum-allocation-mb

yarn.nodemanager.resource.memory-mb
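
As an illustration, such an increase might look like the following yarn-site.xml fragment. The 8192 MB values are placeholder assumptions, not recommendations; size them to the RAM actually available on each NodeManager:

```xml
<!-- yarn-site.xml: example values only; tune to your nodes -->
<property>
  <!-- Total memory, in MB, that one NodeManager may hand out to containers -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <!-- Largest single container the scheduler will ever allocate, in MB -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
```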

The answer above works and helped me a lot. I was trying to run a simple count(*) query in Hive, and it would neither error out nor complete; it just hung there until I killed the job at the command prompt. I was going crazy, and I could not find a proper answer on Google. So, the memory settings we need to increase are:

  • yarn.scheduler.maximum-allocation-mb
  • yarn.nodemanager.resource.memory-mb

This can be done in
yarn-site.xml
, or even in Cloudera Manager under the YARN service. After increasing the memory, restart all stale services. That will resolve the issue.
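
For context on why this fixes a job stuck at map = 0%: YARN can only launch a container if the request fits both under the scheduler's per-container cap and within what a single NodeManager advertises; otherwise the ApplicationMaster or map task is never placed and the job waits forever. A back-of-the-envelope check, sketched below with assumed example numbers (read the real values from yarn-site.xml and mapred-site.xml on your cluster):

```python
# Sanity check: can YARN ever place a container of the requested size?
# All numbers are example assumptions; the real values come from
# yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb
# in yarn-site.xml, and the per-task requests in mapred-site.xml.

def container_fits(request_mb, node_total_mb, scheduler_max_mb):
    """A container request is placeable only if it is within the
    per-container scheduler cap AND within the total memory a single
    NodeManager advertises to the ResourceManager."""
    return request_mb <= scheduler_max_mb and request_mb <= node_total_mb

# A 1536 MB request against a node advertising only 1024 MB can never be
# satisfied, so the job sits at 0% indefinitely.
print(container_fits(1536, node_total_mb=1024, scheduler_max_mb=8192))  # False
# After raising the node's advertised memory, the same request fits.
print(container_fits(1536, node_total_mb=8192, scheduler_max_mb=8192))  # True
```

If every request fails this check, no amount of waiting helps; the two yarn-site.xml properties above must be raised and the services restarted.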

Did you find a solution??