Hive on Tez (java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support)

My script keeps failing with the error

java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support
The code itself looks like this:

WITH step1 AS(
  SELECT columns
  FROM t1 stg
  WHERE time_key < '2017-04-08' AND time_key >= DATE_ADD('2017-04-08', -31)
  GROUP BY columns
  HAVING conditions1
)
, step2 AS(
  SELECT columns
  FROM t2
  WHERE conditions2
)
, step3 AS(
  SELECT columns
  FROM stg
  JOIN comverse_sub
  ON conditions3
)
INSERT INTO TABLE t1 PARTITION(time_key = '2017-04-08')
SELECT columns
FROM step3
WHERE conditions4
Checking the native libraries, I get

snappy:  true /usr/hdp/2.5.0.0-1245/hadoop/lib/native/libsnappy.so.1
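The line above looks like output from Hadoop's native-library check; it can be reproduced on a cluster node with the following command (assuming the `hadoop` CLI from the same HDP 2.5 install is on the PATH):

```
hadoop checknative -a
```

If `snappy: true` is reported there but Tez still throws the error, the native library is visible to the client shell but not necessarily inside the Tez containers.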
My Tez settings are:

set tez.queue.name=adhoc;
set hive.execution.engine=tez;
set hive.tez.container.size=4096;
set hive.auto.convert.join=true;
set hive.exec.parallel=true;
set hive.tez.auto.reducer.parallelism=true;
SET hive.exec.compress.output=true;
SET tez.runtime.compress=true;
SET tez.runtime.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
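One way to narrow the failure down to the codec is to temporarily switch intermediate compression to the default codec, which can fall back to a pure-Java implementation and needs no native libsnappy (a diagnostic sketch, not a recommended production setting):

```
SET hive.exec.compress.output=false;
SET tez.runtime.compress.codec=org.apache.hadoop.io.compress.DefaultCodec;
```

If the query then succeeds, the problem is confined to loading the native Snappy library inside the Tez containers.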
I should also note that not all scripts fail on Tez; some work, like this one:

WITH hist AS(
  SELECT columns
  FROM t1
  WHERE conditions1
)
INSERT INTO TABLE t1 PARTITION(time_key)
SELECT columns
FROM hist
INNER JOIN t2
  on conditions2
INNER JOIN t3
  ON conditions3
WHERE conditions4
Why is this happening?


I checked this; it didn't help. Also, all of the scripts work when I run them on MR.
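A fix commonly suggested for this error (not verified in this question, so treat it as an assumption) is to pass the native-library path to the Tez application master and tasks explicitly, using the directory that the native-library check reported:

```
set tez.am.launch.env=LD_LIBRARY_PATH=/usr/hdp/2.5.0.0-1245/hadoop/lib/native;
set tez.task.launch.env=LD_LIBRARY_PATH=/usr/hdp/2.5.0.0-1245/hadoop/lib/native;
```

These are standard Tez properties. On MR the equivalent path is typically injected via `mapreduce.admin.user.env`, which could explain why the same scripts succeed there.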

OK, I solved it. I just kept adding settings until it worked. I still don't know what the problem was, and the solution isn't great, but it will do for now:

set tez.queue.name=adhoc;
set hive.execution.engine=tez;
set hive.tez.container.size=4096;
set hive.auto.convert.join=true;
set hive.exec.parallel=true;
set hive.tez.auto.reducer.parallelism=true;
SET hive.exec.compress.output=true;
SET tez.runtime.compress=true;
SET tez.runtime.compress.codec=org.apache.hadoop.io.compress.SnappyCodec;
SET das.reduce-tasks-per-node=12;
SET das.map-tasks-per-node=12;
SET das.job.map-task.memory=4096;
SET das.job.reduce-task.memory=4096;
SET das.job.application-manager.memory=4096;
SET tez.runtime.io.sort.mb=512;
SET tez.runtime.io.sort.factor=100;
set hive.tez.min.partition.factor=0.25;
set hive.tez.max.partition.factor=2.0;
set mapred.reduce.tasks = -1;