
Hadoop Hive: How to compare two columns with complex data types in a WHERE clause?

Tags: hadoop, hive, hiveql, hadoop2, beeline

I have a Hive table as my source table. I also have a Hive table as my target. The DDLs of the source and target tables are identical, except that some audit columns are added in the target table. Here is the DDL:
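A minimal sketch of the two tables, inferred from the query output shown below (the column types are assumptions):

-- Hypothetical reconstruction; column names are copied from the output below
-- (including the audit_insterted_ts spelling), types are guessed.
CREATE TABLE source.customer_detail (
  id               INT,
  name             STRING,
  city             STRING,
  properties_owned ARRAY<STRUCT<property_addr:STRING, location:STRING>>
);

-- The target table adds the audit columns visible in the output.
CREATE TABLE target.customer_detail (
  id                 INT,
  name               STRING,
  city               STRING,
  properties_owned   ARRAY<STRUCT<property_addr:STRING, location:STRING>>,
  audit_insterted_ts TIMESTAMP,
  audit_dml_action   STRING,
  audit_active_flag  STRING
);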

Target data:

+---------------------+--------------------------+-------------------------+------------------------------------------------------------------+--------------------------------------+-----------------------------------+------------------------------------+
| customer_detail.id  |   customer_detail.name   |  customer_detail.city   |              customer_detail.properties_owned                    |  customer_detail.audit_insterted_ts  | customer_detail.audit_dml_action  | customer_detail.audit_active_flag  |
+---------------------+--------------------------+-------------------------+------------------------------------------------------------------+--------------------------------------+-----------------------------------+------------------------------------+
| 1                   | Aiman Sarosh             |      kolkata            |  [{"property_addr":"H1 Block Saltlake","location":"kolkata"}]    | 2018-09-04 06:55:12.361              | I                                 | A                                  |
| 2                   | Justin                   |      delhi              |  [{"property_addr":"some address in delhi","location":"delhi"}]  | 2018-09-05 08:36:39.023              | I                                 | A                                  |
+---------------------+--------------------------+-------------------------+------------------------------------------------------------------+--------------------------------------+-----------------------------------+------------------------------------+
When I run the query below, it should fetch me 1 modified record, namely:

+---------------------+--------------------------+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+-----------------------------------+------------------------------------+
| customer_detail.id  |   customer_detail.name   |  customer_detail.city   |                                                                  customer_detail.properties_owned                                              |  customer_detail.audit_insterted_ts  | customer_detail.audit_dml_action  | customer_detail.audit_active_flag  |
+---------------------+--------------------------+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+-----------------------------------+------------------------------------+
| 1                   | Aiman Sarosh             |      kolkata            |  [{"property_addr":"H1 Block Saltlake","location":"kolkata"},{"property_addr":"New Property Added Saltlake","location":"kolkata"}]             | 2018-09-05 07:15:10.321              | U                                 | A                                  |
+---------------------+--------------------------+-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------+--------------------------------------+-----------------------------------+------------------------------------+
Basically, the element {"property_addr":"New Property Added Saltlake","location":"kolkata"} has been added to the properties_owned array column for record ID 1 in the source.

The query:

SELECT  --fetch modified/updated records in source
   source.id AS id,
   source.name AS name,
   source.city AS city,
   source.properties_owned AS properties_owned,
   current_timestamp() AS audit_insterted_ts,
   'U' AS audit_dml_action,
   'A' AS audit_active_flag
FROM source.customer_detail source
INNER JOIN target.customer_detail jrnl
ON source.id=jrnl.id
WHERE source.name!=jrnl.name
OR source.city!=jrnl.city
OR source.properties_owned!=jrnl.properties_owned
But this throws an error:

Error: Error while compiling statement: FAILED: SemanticException [Error 10016]: Line 14:3 Argument type mismatch 'properties_owned': The 1st argument of NOT EQUAL  is expected to a primitive type, but list is found (state=42000,code=10016)
How can I compare two columns with complex data types in the WHERE clause when using a join?

I could use .POS and .ITEM, but that won't help, because my column is an array of structs and the arrays can have different lengths.

The issue: you are trying to compare lists rather than primitive types.

Currently, lists of complex objects cannot be compared directly with built-in Hive UDFs (there are some workarounds for lists of strings).
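For arrays of plain strings, one such workaround is to flatten each array into a single string before comparing, e.g. with the built-in concat_ws. A sketch, using a hypothetical ARRAY<STRING> column named tags (not one of your columns):

-- Only works for ARRAY<STRING>; element order still matters,
-- since the flattened strings are compared as-is.
SELECT source.id
FROM source.customer_detail source
INNER JOIN target.customer_detail jrnl
ON source.id = jrnl.id
WHERE concat_ws(',', source.tags) != concat_ws(',', jrnl.tags);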


The workaround: you will need a third-party UDF to help you with this. There are a couple of interesting ones (which I haven't tested myself).

One way to handle complex types is to convert them to strings, e.g. JSON strings. The Brickhouse project is a collection of useful third-party Hive UDFs, and it has a to_json function that can convert any complex type to a JSON string. First, clone and build the jar:

git clone https://github.com/klout/brickhouse.git
cd brickhouse
mvn clean package
Then copy the Brickhouse jar to HDFS, add it in Hive, and register the to_json function:

add jar hdfs://<your_path>/brickhouse-0.7.1-SNAPSHOT.jar;
create temporary function to_json as 'brickhouse.udf.json.ToJsonUDF';
Now you can use it, for example:

hive> select to_json(ARRAY(MAP('a',1), MAP('b',2)));
OK
[{"a":1},{"b":2}]
So in your case, you need to convert the columns to JSON strings and then compare those in the where clause. Keep in mind that to_json converts complex values as-is. For example, in your case the two arrays

[{"property_addr":"H1 Block Saltlake","location":"kolkata"},{"property_addr":"New Property Added Saltlake","location":"kolkata"}]
[{"property_addr":"New Property Added Saltlake","location":"kolkata"},{"property_addr":"H1 Block Saltlake","location":"kolkata"}]

will compare as different, even though they contain the same elements, because the element order differs.
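For illustration, your original query could then be rewritten along these lines (a sketch, untested; it assumes to_json has been registered as shown above):

SELECT  --fetch modified/updated records in source
   source.id AS id,
   source.name AS name,
   source.city AS city,
   source.properties_owned AS properties_owned,
   current_timestamp() AS audit_insterted_ts,
   'U' AS audit_dml_action,
   'A' AS audit_active_flag
FROM source.customer_detail source
INNER JOIN target.customer_detail jrnl
ON source.id=jrnl.id
WHERE source.name!=jrnl.name
OR source.city!=jrnl.city
-- compare the complex columns via their JSON string representations
OR to_json(source.properties_owned)!=to_json(jrnl.properties_owned);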

I fixed this using LATERAL VIEW explode(), then applied concat_ws() and collect_list() to the exploded columns, which finally gave me a single string that I could compare:

SELECT  --fetch modified/updated records in source
   source.id AS id,
   source.name AS name,
   source.city AS city,
   source.properties_owned AS properties_owned,
   current_timestamp() AS audit_insterted_ts,
   'U' AS audit_dml_action,
   'A' AS audit_active_flag
FROM source.customer_detail source
INNER JOIN target.customer_detail jrnl
ON source.id=jrnl.id
WHERE source.id IN
(
SELECT t1.id
FROM
(
   SELECT src.id,concat_ws(',', collect_list(src.property_addr),collect_list(src.location)) newcol
   FROM
   (
      SELECT id, prop_owned.property_addr AS property_addr, prop_owned.location AS location
      FROM source.customer_detail LATERAL VIEW explode(properties_owned) exploded_tab AS prop_owned
   ) src
   GROUP BY src.id
) t1
INNER JOIN
(
   SELECT trg.id,concat_ws(',', collect_list(trg.property_addr),collect_list(trg.location)) newcol
   FROM
   (
      SELECT id, prop_owned.property_addr AS property_addr, prop_owned.location AS location
      FROM target.customer_detail LATERAL VIEW explode(properties_owned) exploded_tab AS prop_owned
   ) trg
   GROUP BY trg.id
) t2
ON t1.id=t2.id
WHERE t1.newcol!=t2.newcol
)
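One caveat: collect_list does not guarantee element order, so two rows holding the same properties in a different order could still compare as different. A variant of the aggregation step that tolerates ordering differences would sort the collected values with Hive's built-in sort_array before concatenating (a sketch, going beyond what I actually ran):

   -- order-insensitive version of the aggregation step
   SELECT src.id,
          concat_ws(',', sort_array(collect_list(src.property_addr)),
                         sort_array(collect_list(src.location))) newcol
   FROM
   (
      SELECT id, prop_owned.property_addr AS property_addr, prop_owned.location AS location
      FROM source.customer_detail LATERAL VIEW explode(properties_owned) exploded_tab AS prop_owned
   ) src
   GROUP BY src.id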

Hope someone finds this useful and helpful. :-)

You can use lateral view explode to explode the arrays and then perform a join.

I have tried that approach, but I don't know which join I should apply. Any example query would be a real help :)

Thanks serge_k, that's a good approach. But I implemented it using lateral view explode() with a subquery to get a single column, then compared using collect_list and concat_ws.