HiveServer2: Thrift SASL related exception when using a custom PasswordAuthenticationProvider


I have created a custom implementation of the PasswdAuthenticationProvider interface based on OAuth2. I think the code is not relevant to the problem I am experiencing; nevertheless, it can be found.
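For reference, such a provider is just a class implementing a single method; below is a minimal sketch of what it can look like (the class name matches the one configured below, but the OAuth2 check is only a hypothetical placeholder, not the actual code mentioned above):

package com.telefonica.iot.cosmos.hive.authprovider;

import javax.security.sasl.AuthenticationException;

import org.apache.hive.service.auth.PasswdAuthenticationProvider;

// Minimal sketch: HiveServer2 instantiates this class and calls Authenticate()
// once per connection; throwing AuthenticationException rejects the login.
public class OAuth2AuthenticationProviderImpl implements PasswdAuthenticationProvider {

    @Override
    public void Authenticate(String user, String password) throws AuthenticationException {
        if (!validateWithOAuth2(user, password)) {
            throw new AuthenticationException("OAuth2 validation failed for user " + user);
        }
    }

    // Hypothetical placeholder for the call to the OAuth2 server.
    private boolean validateWithOAuth2(String user, String token) {
        return false; // replace with a real token validation request
    }
}

The jar containing this class has to be on the HiveServer2 classpath so the configuration below can load it.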

I have configured hive-site.xml with the following properties:

<property>
   <name>hive.server2.authentication</name>
   <value>CUSTOM</value>
</property>
<property>
   <name>hive.server2.custom.authentication.class</name>
   <value>com.telefonica.iot.cosmos.hive.authprovider.OAuth2AuthenticationProviderImpl</value>
</property>
The problem is that the following error appears recurrently:

2016-02-01 11:52:48,227 ERROR [pool-5-thread-4]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
    at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    ... 4 more
2016-02-01 11:53:18,323 ERROR [pool-5-thread-5]: server.TThreadPoolServer (TThreadPoolServer.java:run(215)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:189)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
    at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
    at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
    at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
    at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
    at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
    ... 4 more
Why? I have seen in several other questions that this occurs when the default value of hive.server2.authentication, i.e. SASL, is used and the client does not perform the handshake. But in my case the value of that property is CUSTOM. I cannot understand it, and any help would be highly appreciated.
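For context, CUSTOM is not the same as NOSASL: HiveServer2 still wraps the connection in a SASL PLAIN transport and only delegates the password check to the custom class, so any client that opens the port without starting the SASL exchange produces exactly the stack trace above. A rough sketch of such a "bad" client (purely illustrative, not one of the real clients involved here):

import java.net.Socket;

// Purely illustrative: a client that connects to the Thrift port and closes
// without sending a SASL START message. When the server then tries to read
// the handshake, it hits end-of-stream and logs the TTransportException above.
public class NoSaslHandshakeProbe {
    public static void main(String[] args) throws Exception {
        // Assumes HiveServer2 listens on localhost:10000 (the default port).
        try (Socket socket = new Socket("localhost", 10000)) {
            socket.getOutputStream().write(new byte[] {0x00, 0x01, 0x02}); // arbitrary non-SASL bytes
        } // closing here makes the server's read of the SASL start message fail
    }
}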

EDIT 1

I have found there are periodic requests to HiveServer2... from HiveServer2 itself! These are the requests that lead to the Thrift SASL errors:

$ sudo tcpdump -i lo port 10000
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on lo, link-type EN10MB (Ethernet), capture size 65535 bytes
...
...
10:18:48.183469 IP dev-fiwr-bignode-11.hi.inet.ndmp > dev-fiwr-bignode-11.hi.inet.55758: Flags [.], ack 7, win 512, options [nop,nop,TS val 1034162147 ecr 1034162107], length 0
^C
21 packets captured
42 packets received by filter
0 packets dropped by kernel
[fiware-portal@dev-fiwr-bignode-11 ~]$ sudo netstat -nap | grep 55758
tcp        0      0 10.95.76.91:10000           10.95.76.91:55758           CLOSE_WAIT  7190/java           
tcp        0      0 10.95.76.91:55758           10.95.76.91:10000           FIN_WAIT2   -                   
[fiware-portal@dev-fiwr-bignode-11 ~]$ ps -ef | grep 7190
hive      7190     1  1 10:10 ?        00:00:10 /usr/java/jdk1.7.0_71//bin/java -Xmx1024m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/var/log/hadoop/hive -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/usr/lib/hadoop -Dhadoop.id.str=hive -Dhadoop.root.logger=INFO,console -Djava.library.path=:/usr/lib/hadoop/lib/native/Linux-amd64-64:/usr/lib/hadoop/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx1024m -Xmx4096m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /usr/lib/hive/lib/hive-service-0.13.0.2.1.7.0-784.jar org.apache.hive.service.server.HiveServer2 -hiveconf hive.metastore.uris=" " -hiveconf hive.log.file=hiveserver2.log -hiveconf hive.log.dir=/var/log/hive
1011     14158 12305  0 10:19 pts/1    00:00:00 grep 7190
Any ideas?

EDIT 2

Some more research about the connections HiveServer2 sends to itself. The packets sent are always 5 bytes long, the following ones (in hexadecimal):
22 41 30 31

Any ideas about these connections?

I finally "fixed" this. Since the messages were being sent by the Ambari agent running on the HiveServer2 machine (some kind of weird ping), I simply added an iptables rule blocking all connections to TCP port 10000 on the loopback interface:

iptables -A INPUT -i lo -p tcp --dport 10000 -j DROP
Of course, now Ambari warns that HiveServer2 is not alive (the pings are being dropped). If I want to restart the server from Ambari, the above rule has to be removed first (there is another liveness check in the startup script); after the restart I can enable the rule again. Well, I can live with that.
