Partitioning does not work in Hive 2.3.0
I created a table as follows:
create table emp (
  eid int,
  fname string,
  lname string,
  salary double,
  city string,
  dept string )
row format delimited fields terminated by ',';
Then, to enable dynamic partitioning, I set the following properties:
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
Next, I created the partitioned table as follows:
create table part_emp (
  eid int,
  fname string,
  lname string,
  salary double,
  dept string )
partitioned by ( city string )
row format delimited fields terminated by ',';
After creating the tables, I issued the insert query:
insert into table part_emp partition(city)
select eid,fname,lname,salary,dept,city from emp;
But it does not work:
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = max_20180311015337_5a67813d-dcc5-46c0-ac4b-a54c11ffb912
Total jobs = 3
Launching Job 1 out of 3
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_1520757649534_0004, Tracking URL = http://ubuntu:8088/proxy/application_1520757649534_0004/
Kill Command = /home/max/bigdata/hadoop-3.0.0/bin/hadoop job -kill job_1520757649534_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2018-03-11 01:53:44,996 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1520757649534_0004 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Stage-Stage-1: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
The same query succeeds on Hive 1.x.

I had the same problem. Setting hive.exec.max.dynamic.partitions.pernode=1000 (the default is 100) solved it for me; you can try that.
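A minimal sketch of that suggested fix, assuming the same emp/part_emp tables as in the question (the value 1000 is the answerer's suggestion, not a required number):

```sql
-- Raise the per-node dynamic-partition limit (default is 100),
-- then retry the dynamic-partition insert.
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.exec.max.dynamic.partitions.pernode=1000;

insert into table part_emp partition(city)
select eid, fname, lname, salary, dept, city from emp;
```

Note that the partition column (city) must come last in the select list, as it does here.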
PS: This setting means the maximum number of dynamic partitions allowed to be created in each mapper/reducer node.

Does insert overwrite ... work?

I also tried insert overwrite, but it does not work either.

As the warning says, try setting the execution engine to tez or spark and run again. Also, check the MR job logs for application_1520757649534_0004 for more details on the failure.
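The engine switch the warning suggests can be tried per session, assuming Tez (or Spark) is installed and configured on the cluster:

```sql
-- Switch off the deprecated Hive-on-MR path for this session,
-- then rerun the failing insert.
set hive.execution.engine=tez;   -- or: set hive.execution.engine=spark;

insert into table part_emp partition(city)
select eid, fname, lname, salary, dept, city from emp;
```

If the insert still fails, the YARN application logs (the Tracking URL printed in the job output) usually contain the real mapper-side error, which the Hive console's "return code 2" message hides.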