Hadoop Ambari Hive UTF-8 problem

There is a problem with Cyrillic symbols in Hive tables. Installed versions:

ambari-server 2.4.2.0-136
hive-2-5-3-0-37 1.2.1000.2.5.3.0-37
Ubuntu 14.04
What the problem is:

  • Set the locale to ru_RU.UTF-8:

    spark@hadoop:~$ locale
    LANG=ru_RU.UTF-8
    LANGUAGE=ru_RU:ru
    LC_CTYPE="ru_RU.UTF-8"
    LC_NUMERIC="ru_RU.UTF-8"
    LC_TIME="ru_RU.UTF-8"
    LC_COLLATE="ru_RU.UTF-8"
    LC_MONETARY="ru_RU.UTF-8"
    LC_MESSAGES="ru_RU.UTF-8"
    LC_PAPER="ru_RU.UTF-8"
    LC_NAME="ru_RU.UTF-8"
    LC_ADDRESS="ru_RU.UTF-8"
    LC_TELEPHONE="ru_RU.UTF-8"
    LC_MEASUREMENT="ru_RU.UTF-8"
    LC_IDENTIFICATION="ru_RU.UTF-8"
    LC_ALL=ru_RU.UTF-8
    
  • Connect to Hive and create a test table:

    spark@hadoop:~$ beeline -n spark -u jdbc:hive2://spark@hadoop.domain.com:10000/
    
    Connecting to jdbc:hive2://spark@hadoop.domain.com:10000/
    Connected to: Apache Hive (version 1.2.1000.2.5.3.0-37)
    Driver: Hive JDBC (version 1.2.1000.2.5.3.0-37)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    Beeline version 1.2.1000.2.5.3.0-37 by Apache Hive
    
    0: jdbc:hive2://spark@hadoop.domain.com> CREATE TABLE `test`(`name` string) ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' WITH SERDEPROPERTIES ( 'serialization.encoding'='UTF-8');
    No rows affected (0,127 seconds)
    
  • Insert Cyrillic symbols:

    0: jdbc:hive2://spark@hadoop.domain.com> insert into test values('привет');
    
    INFO  : Tez session hasn't been created yet. Opening session
    INFO  : Dag name: insert into test values('привет')(Stage-1)
    INFO  :
    
    INFO  : Status: Running (Executing on YARN cluster with App id application_1490211406894_2481)
    
    INFO  : Map 1: -/-
    INFO  : Map 1: 0/1
    INFO  : Map 1: 0(+1)/1
    INFO  : Map 1: 1/1
    INFO  : Loading data to table default.test from hdfs://hadoop.domain.com:8020/apps/hive/warehouse/test/.hive-staging_hive_2017-03-23_13-41-46_215_3133047104896717605-116/-ext-10000
    INFO  : Table default.test stats: [numFiles=1, numRows=1, totalSize=7, rawDataSize=6]
    No rows affected (6,652 seconds)
    
  • Select from the table:

    0: jdbc:hive2://spark@hadoop.domain.com> select * from test;
    +------------+--+
    | test.name  |
    +------------+--+
    | ?@825B     |
    +------------+--+
    1 row selected (0,162 seconds)
    
  • I have read about a lot of bugs in Apache Hive and have tested unicode, utf-8, utf-16 and some ISO encodings, but with no luck (see the encoding-switching sketch after this question).

    Can anybody help me?

    Thanks
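
For context on that last test: the encoding of a LazySimpleSerDe table is switched through its SERDEPROPERTIES. A minimal sketch, assuming the `test` table from above; the `windows-1251` charset and the `test_cp1251` table name are illustrative examples, not the exact values tried:

    -- Change the declared encoding of an existing table; the SerDe will
    -- decode/encode the underlying files with this charset from now on.
    ALTER TABLE test SET SERDEPROPERTIES ('serialization.encoding'='windows-1251');

    -- Or declare the encoding up front on a fresh table:
    CREATE TABLE test_cp1251 (name string)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
    WITH SERDEPROPERTIES ('serialization.encoding'='windows-1251');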

The folks from Hortonworks helped me to solve this problem. It seems to be a bug:


Cyrillic small letter Pe п (Unicode U+043F) ==> question mark ? (Unicode U+003F), Cyrillic small letter Er р (Unicode U+0440) ==> commercial at @ (Unicode U+0040), etc., etc.: the high Unicode byte is lost in every character…

@JosefZ thx - gives me some new info. Do you know how to deal with it?
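
That byte-loss pattern is easy to verify from a shell. A minimal check, assuming iconv and xxd are available: encode the inserted string as UTF-16BE and compare the low byte of each code unit with the garbled SELECT output.

    # UTF-16BE representation of the string that was inserted
    $ echo -n 'привет' | iconv -f UTF-8 -t UTF-16BE | xxd -p
    043f04400438043204350442

Dropping the high 0x04 byte of every code unit leaves 3f 40 38 32 35 42, which read as ASCII is exactly ?@825B, the value returned by the SELECT above.

As a client-side mitigation (my assumption, not something confirmed in this thread), the Beeline JVM can be forced to UTF-8 before connecting; file.encoding is a standard JVM property, and HADOOP_CLIENT_OPTS is honored by the Hadoop/Hive client launcher scripts:

    # Assumed workaround: make the client JVM default to UTF-8.
    export HADOOP_CLIENT_OPTS="-Dfile.encoding=UTF-8"
    beeline -n spark -u jdbc:hive2://spark@hadoop.domain.com:10000/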