Hive beeline query output appears as JSON instead of a CSV table
I'm running a beeline query like the one below; the underlying data in HDFS comes from a mainframe server. I just want to execute one query and dump the result to CSV (or any tabular format). My problems are:
The format is not clean; there are extra rows at the top and bottom.
It appears as JSON, not a table.
Some numbers seem to be in hexadecimal format.
+-----------------------------------------------------------------------------------------------------------------------------+
| col1:{"col1_a":"00000" col1_b:"0" col1_c:{"col11_a":"00000" col11_tb:{"mo_acct_tp":"0" col11_c:"0"}} col1_d:"0"}|
+-----------------------------------------------------------------------------------------------------------------------------+
I want a regular CSV with column names at the top and no nesting. You have to pass
--showHeader=true
and you will get the desired result:
beeline -u 'jdbc:hive2://server.com:port/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;transportMode=binary' --showHeader=true --outputformat=csv2 -e "SELECT * FROM tbl LIMIT 2;" > tables1.csv
You can also try the table format with --outputformat=table. This will not produce CSV output, but it gives a clean tabular structure like this:
+-----+---------+-----------------+
| id | value | comment |
+-----+---------+-----------------+
| 1 | Value1 | Test comment 1 |
| 2 | Value2 | Test comment 2 |
| 3 | Value3 | Test comment 3 |
+-----+---------+-----------------+
Please help us understand your data better. When you run a select query in beeline or Hive, does your table contain data like this?
> select * from test;
+------------------------------------------------------------------------------------------------------------------------+--+
| test.col |
+------------------------------------------------------------------------------------------------------------------------+--+
| {"col1_a":"00000","col1_b":"0","col1_c":{"col11_a":"00000","col11_tb":{"mo_acct_tp":"0","col11_c":"0"}},"col1_d":"0"} |
+------------------------------------------------------------------------------------------------------------------------+--+
If so, you may have to parse the data out of the JSON object, like this:
select
get_json_object(tbl.col, '$.col1_a') col1_a
, get_json_object(tbl.col, '$.col1_b') col1_b
, get_json_object(tbl.col, '$.col1_c.col11_a') col1_c_col11_a
, get_json_object(tbl.col, '$.col1_c.col11_tb.col11_c') col1_c_col11_tb_col11_c
, get_json_object(tbl.col, '$.col1_c.col11_tb.mo_acct_tp') col1_c_col11_tb_mo_acct_tp
, get_json_object(tbl.col, '$.col1_d') col1_d
from test tbl
INFO : Completed executing command(queryId=hive_20180918182457_a2d6230d-28bc-4839-a1b5-0ac63c7779a5); Time taken: 1.007 seconds
INFO : OK
+---------+---------+-----------------+--------------------------+-----------------------------+---------+--+
| col1_a | col1_b | col1_c_col11_a | col1_c_col11_tb_col11_c | col1_c_col11_tb_mo_acct_tp | col1_d |
+---------+---------+-----------------+--------------------------+-----------------------------+---------+--+
| 00000 | 0 | 00000 | 0 | 0 | 0 |
+---------+---------+-----------------+--------------------------+-----------------------------+---------+--+
1 row selected (2.058 seconds)
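Each `get_json_object` call above walks a JSONPath-style dotted expression into the nested document and returns the leaf value as a string. As a rough sanity check outside Hive, the same flattening can be sketched in Python (a minimal sketch; the helper name and sample record just mirror the example above):

```python
import json

# Sample nested record, mirroring the single JSON column in the example table
raw = ('{"col1_a":"00000","col1_b":"0",'
       '"col1_c":{"col11_a":"00000","col11_tb":{"mo_acct_tp":"0","col11_c":"0"}},'
       '"col1_d":"0"}')

def get_json_object(doc, path):
    """Walk a dotted path like '$.col1_c.col11_a' through a JSON document,
    returning None when a key is missing (loosely like Hive's get_json_object)."""
    node = json.loads(doc)
    for key in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

# Flatten the nested object into one CSV row, one column per leaf path
row = {
    "col1_a": get_json_object(raw, "$.col1_a"),
    "col1_b": get_json_object(raw, "$.col1_b"),
    "col1_c_col11_a": get_json_object(raw, "$.col1_c.col11_a"),
    "col1_c_col11_tb_col11_c": get_json_object(raw, "$.col1_c.col11_tb.col11_c"),
    "col1_c_col11_tb_mo_acct_tp": get_json_object(raw, "$.col1_c.col11_tb.mo_acct_tp"),
    "col1_d": get_json_object(raw, "$.col1_d"),
}
print(",".join(row))            # header line
print(",".join(row.values()))   # data row: 00000,0,00000,0,0,0
```

This reproduces the same header and data row as the Hive query, which is a quick way to confirm the JSONPath expressions before running them server-side.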
You can then use this query from the command line to export the result to a file:
>beeline -u 'jdbc:hive2://server.com:port/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;transportMode=binary' --showHeader=false --outputformat=csv2 -e "select
get_json_object(tbl.col, '$.col1_a') col1_a
, get_json_object(tbl.col, '$.col1_b') col1_b
, get_json_object(tbl.col, '$.col1_c.col11_a') col1_c_col11_a
, get_json_object(tbl.col, '$.col1_c.col11_tb.col11_c') col1_c_col11_tb_col11_c
, get_json_object(tbl.col, '$.col1_c.col11_tb.mo_acct_tp') col1_c_col11_tb_mo_acct_tp
, get_json_object(tbl.col, '$.col1_d') col1_d
from corpde_commops.test tbl;" > test.csv
If you need the column names in the file, set --showHeader=true.
The final output will be:
>cat test.csv
col1_a,col1_b,col1_c_col11_a,col1_c_col11_tb_col11_c,col1_c_col11_tb_mo_acct_tp,col1_d
00000,0,00000,0,0,0
Obviously, I don't see anything wrong with your beeline statement.
If your data is different from the example above, the solution may be something else.
All the best.
outputformat=table does not convert the nested JSON into a table. Thank you.