Hadoop: I can't see the output for some Hive queries


My query is:

SELECT txnno, product FROM txnrecsbycat TABLESAMPLE(BUCKET 2 OUT OF 10) ORDER BY txnno;
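As background, TABLESAMPLE(BUCKET x OUT OF y) only reads a real bucket file if the table was bucketed when it was created and loaded. A minimal sketch of what that requires (the column types here are assumptions, not taken from the question):

```sql
-- Hypothetical DDL: for TABLESAMPLE(BUCKET 2 OUT OF 10) to sample an
-- actual bucket, the table must have been created CLUSTERED BY, e.g.:
CREATE TABLE txnrecsbycat (txnno INT, product STRING)
CLUSTERED BY (txnno) INTO 10 BUCKETS;

-- and bucketing must have been enforced when the data was inserted
-- (required on older Hive versions such as the 0.x/1.x line used here):
SET hive.enforce.bucketing = true;
```

If the table was not created this way, Hive falls back to scanning a fraction of the input, and a given bucket may simply contain no rows.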
The job runs successfully, but I can't see my output. My output is:

Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>

Starting Job = job_1500975292039_0005, Tracking URL = http://localhost:8088/proxy/application_1500975292039_0005/
Kill Command = /usr/lib/hadoop-2.2.0/bin/hadoop job  -kill job_1500975292039_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-07-25 20:26:48,640 Stage-1 map = 0%,  reduce = 0%
2017-07-25 20:27:05,179 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 3.61 sec
2017-07-25 20:27:20,461 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.47 sec
MapReduce Total cumulative CPU time: 5 seconds 470 msec
Ended Job = job_1500975292039_0005
MapReduce Jobs Launched:
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 5.47 sec   HDFS Read: 2498 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 470 msec
OK
Time taken: 51.819 seconds

The OK in the output (right where the result rows would appear), together with HDFS Write: 0, means this query returned no rows. Check the data in the bucket.

The query itself looks fine.
Does SELECT txnno, product FROM txnrecsbycat LIMIT 10 return anything?
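The sanity checks suggested above can be run directly in Hive; a sketch, using the table from the question:

```sql
-- Does the table contain any rows at all?
SELECT txnno, product FROM txnrecsbycat LIMIT 10;

-- How many rows fall into the sampled bucket? If bucket 2 is empty,
-- the original query legitimately returns nothing (OK, zero rows).
SELECT COUNT(*) FROM txnrecsbycat TABLESAMPLE(BUCKET 2 OUT OF 10);
```

If the first query returns rows but the second returns 0, the data simply hashes into other buckets; trying a different bucket number, or verifying the table's CLUSTERED BY definition with DESCRIBE FORMATTED txnrecsbycat, would narrow it down.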