Hive variable usage in a SELECT query

Tags: performance, hadoop, hive, hiveql

I am using Hive 0.9.0 on Cloudera-Training-VM-4.1.1.c. I am running into a problem when using a Hive variable in a SELECT query. Please take a look and let me know where I went wrong.

1) This is my table, named rushi_target

hive> desc rushi_target;
OK
load_ts    timestamp
id         int
name       string
loc        string
data_dt    string
Time taken: 0.154 seconds
2) This is the data

hive> select * from rushi_target;
OK
2015-12-01 00:02:34    1    rushi     pune         2015-12-01
2015-12-02 04:02:34    2    komal     pune         2015-12-02
2015-12-03 00:03:34    3    bhanu     bangalore    2015-12-03
2015-12-04 00:03:34    4    sachin    pune         2015-12-04
Time taken: 0.258 seconds
3) Setting the Hive variable

hive> set maxtimevar= select max(load_ts) from rushi_target;
4) Displaying the value of the Hive variable

hive> ${hiveconf:maxtimevar};
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201512211113_0011, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201512211113_0011
Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=0.0.0.0:8021 -kill job_201512211113_0011
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2015-12-22 08:51:35,401 Stage-1 map = 0%,  reduce = 0%
2015-12-22 08:51:37,409 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2015-12-22 08:51:38,416 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.83 sec
2015-12-22 08:51:39,423 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.43 sec
2015-12-22 08:51:40,429 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.43 sec
2015-12-22 08:51:41,439 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.43 sec
MapReduce Total cumulative CPU time: 1 seconds 430 msec
Ended Job = job_201512211113_0011
MapReduce Jobs Launched: 
Job 0: Map: 1  Reduce: 1   Cumulative CPU: 1.43 sec   HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 430 msec
OK
2015-12-04 00:03:34
Time taken: 8.632 seconds
6) This does not throw any error, but it gives no output either:

hive> select * from rushi_target where load_ts = '${hiveconf:maxtimevar}';
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201512211113_0025, Tracking URL = http://0.0.0.0:50030/jobdetails.jsp?jobid=job_201512211113_0025
Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=0.0.0.0:8021 -kill job_201512211113_0025
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2015-12-22 10:16:56,666 Stage-1 map = 0%,  reduce = 0%
2015-12-22 10:16:58,675 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.39 sec
2015-12-22 10:16:59,686 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 0.39 sec
2015-12-22 10:17:00,696 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 0.39 sec
MapReduce Total cumulative CPU time: 390 msec
Ended Job = job_201512211113_0025
MapReduce Jobs Launched: 
Job 0: Map: 1   Cumulative CPU: 0.39 sec   HDFS Read: 0 HDFS Write: 0 SUCCESS
Total MapReduce CPU Time Spent: 390 msec
OK
Time taken: 6.568 seconds
hive> 

I want to use the Hive variable maxtimevar in a SELECT query. Please suggest how to do this.

The purpose of a Hive variable is to pass a value, not a query.

Your query
select * from rushi_target where load_ts = '${hiveconf:maxtimevar}' evaluates to
select * from rushi_target where load_ts = 'select max(load_ts) from rushi_target'.
It is looking for rows whose load_ts equals that query text as a string, and obviously no row will have such a value.

As for the variant in 5),
select * from rushi_target where load_ts = ${hiveconf:maxtimevar} (without the quotes) is evaluated as
select * from rushi_target where load_ts = select max(load_ts) from rushi_target. That is not a valid query, so it fails with a syntax error.
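The substitution described above is purely textual: Hive pastes the variable's raw text into the statement before parsing it. The same mechanism can be sketched with ordinary shell expansion (no Hive involved; the variable name and value mirror the question):

```shell
# Hive-style variable substitution is plain text replacement, sketched here
# with shell expansion. No Hive is involved.
maxtimevar='select max(load_ts) from rushi_target'

# With quotes: the whole subquery text becomes a string literal.
echo "select * from rushi_target where load_ts = '${maxtimevar}'"

# Without quotes: the subquery text is pasted in bare, yielding invalid HiveQL.
echo "select * from rushi_target where load_ts = ${maxtimevar}"
```

In both cases the substituted text is never executed as a query, which is why the quoted form silently matches nothing and the unquoted form fails to parse.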

You need to write the query like this:

select rt.* from rushi_target rt 
join 
(select max(load_ts) max_load_ts from rushi_target) rt1 
on 1=1 
where rt.load_ts = rt1.max_load_ts;
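Alternatively, you can compute the aggregate first and then pass its scalar result into the second statement as a plain value, which is exactly what Hive variables are meant for, and which avoids the join entirely. A shell sketch, with the hive CLI stubbed out by a function so the example runs anywhere (in real use, delete the stub so `$(hive -S -e …)` captures the actual result):

```shell
# Two-step approach: run the aggregate first, then substitute the scalar
# result into the second query as a plain value.
# Stub standing in for the real CLI; it returns the max load_ts from the
# question's data. Delete this function to run against a real cluster.
hive() { echo '2015-12-04 00:03:34'; }

# Step 1: capture the aggregate result (-S silent mode, -e execute a query).
maxtime=$(hive -S -e "select max(load_ts) from rushi_target;")

# Step 2: the captured value is an ordinary string, safe to inline.
echo "select * from rushi_target where load_ts = '${maxtime}';"
```

The second statement could then be run as `hive -e "…"` or with `-hiveconf maxtime="$maxtime"` and `'${hiveconf:maxtime}'` inside the script.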

Comments:

- Please format the question; it is almost unreadable.
- Thanks Damian for the formatting. This works on my local desktop with a small amount of data, but it does not work when applied to millions of records; the job gets stuck partway through. I will post a new question. (Posted the follow-up question based on this scenario.)
- I upvoted you. Has it been reflected? / No, it has not been reflected yet. / That's fine; I believe you need enough reputation for it to count.
- Note that the query above introduces a cross join. While that will not hurt performance on a small dataset, on large datasets it will cause considerable problems.