
Hive: make Hive return only the value


I want Hive to return only the value, not all the other output such as the job-progress information:

hive> select max(temp) from temp where dtime like '2014-07%' ;
Query ID = hduser_20170608003255_d35b8a43-8cc5-4662-89ce-9ee5f87d3ba0
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1496864651740_0008, Tracking URL = http://localhost:8088/proxy/application_1496864651740_0008/
Kill Command = /home/hduser/hadoop/bin/hadoop job  -kill job_1496864651740_0008
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2017-06-08 00:33:01,955 Stage-1 map = 0%,  reduce = 0%
2017-06-08 00:33:08,187 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 4.13 sec
2017-06-08 00:33:14,414 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 5.91 sec
MapReduce Total cumulative CPU time: 5 seconds 910 msec
Ended Job = job_1496864651740_0008
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 5.91 sec   HDFS Read: 853158 HDFS Write: 5 SUCCESS
Total MapReduce CPU Time Spent: 5 seconds 910 msec
OK
44.4
Time taken: 20.01 seconds, Fetched: 1 row(s)
I want it to return only the value 44.4.


Thanks in advance…

You can capture the result in a variable in a shell script. The max_temp variable will then contain only the result:

max_temp=$(hive -e " set hive.cli.print.header=false; select max(temp) from temp where dtime like '2014-07%';")

echo "$max_temp"

You can also use the -S (silent) flag:

hive -S -e  "select max(temp) from temp where dtime like '2014-07%';"

Step one: learn about the stdout/stderr streams of command-line utilities. Step two: learn how the Linux shell can redirect a specific command's stderr to "nothing". A next step might be filtering stdout with regular expressions (RegEx), but that requires some basic computing skills…

@user3446106: if the answer worked for you, please make sure to accept it.
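The redirection idea described in the comment above can be sketched without a Hive installation; `noisy_query` below is a hypothetical stand-in for `hive -e "..."`, assuming (as the Hive CLI does) that job progress goes to stderr while the query result goes to stdout:

```shell
#!/bin/sh
# Hypothetical stand-in for the Hive CLI: progress chatter on stderr,
# the actual query result on stdout.
noisy_query() {
  echo "Launching Job 1 out of 1" >&2
  echo "Total MapReduce CPU Time Spent: 5 seconds 910 msec" >&2
  echo "44.4"
}

# Redirect stderr to /dev/null so only the result survives.
max_temp=$(noisy_query 2>/dev/null)
echo "$max_temp"
```

Running this prints only `44.4`; the two progress lines are discarded by `2>/dev/null`.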