Apache Spark: HiveServer2 cannot run SQL on Spark

Tags: apache-spark, hive, yarn

Here are my versions: Hive 1.2, Hadoop CDH 5.3, Spark 1.4.1

Hive on Spark works when I use the Hive CLI, but after starting HiveServer2 and trying to run SQL through Beeline, it fails.
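
For reference, this is roughly how I start HiveServer2 and connect with Beeline (a sketch; 10000 is the assumed default HiveServer2 port):

# Start HiveServer2 (Hive 1.2) in the foreground
$HIVE_HOME/bin/hiveserver2

# In another shell, connect with Beeline; hd-master-001 hosts HiveServer2,
# and 10000 is the default port (an assumption here)
beeline -u jdbc:hive2://hd-master-001:10000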

The error is:

2015-11-29 21:49:42,786 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:42 INFO spark.SparkContext: Added JAR file:/root/cdh/apache-hive-1.2.1-bin/lib/hive-exec-1.2.1.jar at http://10.96.30.51:10318/jars/hive-exec-1.2.1.jar with timestamp 1448804982784
2015-11-29 21:49:43,336 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm297
2015-11-29 21:49:43,356 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm297 after 1 fail over attempts. Trying to fail over immediately.
2015-11-29 21:49:43,357 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO client.ConfiguredRMFailoverProxyProvider: Failing over to rm280
2015-11-29 21:49:43,359 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/11/29 21:49:43 INFO retry.RetryInvocationHandler: Exception while invoking getClusterMetrics of class ApplicationClientProtocolPBClientImpl over rm280 after 2 fail over attempts. Trying to fail over after sleeping for 477ms.
2015-11-29 21:49:43,359 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - java.net.ConnectException: Call From hd-master-001/10.96.30.51 to hd-master-001:8032 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
2015-11-29 21:49:43,359 INFO  [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) -    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

My setup is that hd-master-002 is the active ResourceManager and hd-master-001 is the standby. Port 8032 on hd-master-001 is not open, so of course connecting to port 8032 on hd-master-001 fails with a connection error.
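
To confirm which ResourceManager is actually active, the YARN HA admin command can be queried directly (a minimal sketch, assuming rm280 and rm297 from the log above are the configured yarn.resourcemanager.ha.rm-ids):

# Query the HA state of each ResourceManager
# (rm280 / rm297 are the IDs that appear in the failover log above)
yarn rmadmin -getServiceState rm280
yarn rmadmin -getServiceState rm297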

But why does it try to connect to the standby ResourceManager at all? If I use the Hive CLI shell with Spark on YARN, everything works fine.

PS: I did not rebuild the Spark assembly jar without Hive; I just deleted the "org.apache.hive" and "org.apache.hadoop.hive" classes from the prebuilt assembly jar (roughly as sketched below). But I don't think that is the problem, because the Hive CLI works fine with Spark on YARN.
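
For clarity, stripping the Hive classes from the assembly looked roughly like this (a sketch; the assembly jar filename below is a placeholder for whatever your Spark 1.4.1 build produced):

# Delete bundled Hive classes from the Spark assembly jar in place;
# the jar name is a placeholder, not my exact filename
zip -d spark-assembly-1.4.1.jar 'org/apache/hive/*' 'org/apache/hadoop/hive/*'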

I found that the error happens when the SQL is "select count(*)". If the SQL is "select *", it works even through HiveServer2. We also found that it works if we use "beeline -n root".
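
Presumably "select *" can be answered by a simple fetch task inside HiveServer2, while "select count(*)" has to submit a Spark job to YARN as the connected user, which would be where the -n root difference matters. The working invocation looks roughly like this (a sketch; src is a placeholder table name and 10000 the assumed HiveServer2 port):

# Connecting as root lets the aggregate query run;
# src is a placeholder table name
beeline -u jdbc:hive2://hd-master-001:10000 -n root -e 'select count(*) from src;'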