Apache Spark: Hive on Spark timeout

Tags: apache-spark, hadoop, hive, cloudera

Under the Cloudera distribution, I configured Hive on Spark following the online documentation.
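(As a sanity check, the effective values can be read back from beeline; a minimal sketch, using SET with a bare property name to print its current value, property names as in the Hive-on-Spark docs, placeholders as below:)

beeline -u "jdbc:hive2://<HOST_NAME>.<DOMAIN>:10000/default" -n mehditazi -p <PASSWORD> \
    -e "SET hive.execution.engine; SET spark.master; SET hive.spark.client.connect.timeout;"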

When I run a simple query to test it:

beeline -u "jdbc:hive2://<HOST_NAME>.<DOMAIN>:10000/default" -n mehditazi -p <PASSWORD>  -e "SET hive.execution.engine=spark;SET spark.dynamicAllocation.enabled=true;SET spark.executor.memory=4g;SET spark.executor.cores=4;SET hive.spark.client.connect.timeout=5000;select count(*) from default.sample_07";
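(The same session settings can also be passed up front with beeline's --hiveconf flag instead of inline SET statements; a sketch of an equivalent invocation, placeholders as above:)

beeline -u "jdbc:hive2://<HOST_NAME>.<DOMAIN>:10000/default" -n mehditazi -p <PASSWORD> \
    --hiveconf hive.execution.engine=spark \
    --hiveconf spark.dynamicAllocation.enabled=true \
    --hiveconf spark.executor.memory=4g \
    --hiveconf spark.executor.cores=4 \
    --hiveconf hive.spark.client.connect.timeout=5000 \
    -e "select count(*) from default.sample_07"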
I get the following error:

2018-05-31 18:29:51,625 WARN  [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
scan complete in 3ms
Connecting to jdbc:hive2://<HOST_NAME>.<DOMAIN>:10000/default
Connected to: Apache Hive (version 1.1.0-cdh5.8.0)
Driver: Hive JDBC (version 1.1.0-cdh5.8.0)
Transaction isolation: TRANSACTION_REPEATABLE_READ
No rows affected (0.101 seconds)
No rows affected (0.005 seconds)
No rows affected (0.005 seconds)
No rows affected (0.005 seconds)
No rows affected (0.005 seconds)
INFO  : Compiling command(queryId=hive_20180531182929_1e4bd43e-df8a-4b87-b898-dc73eebfbda3): select count(*) from default.sample_07
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:_c0, type:bigint, comment:null)], properties:null)
INFO  : Completed compiling command(queryId=hive_20180531182929_1e4bd43e-df8a-4b87-b898-dc73eebfbda3); Time taken: 0.463 seconds
INFO  : Executing command(queryId=hive_20180531182929_1e4bd43e-df8a-4b87-b898-dc73eebfbda3): select count(*) from default.sample_07
INFO  : Query ID = hive_20180531182929_1e4bd43e-df8a-4b87-b898-dc73eebfbda3
INFO  : Total jobs = 1
INFO  : Launching Job 1 out of 1
INFO  : Starting task [Stage-1:MAPRED] in serial mode
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
ERROR : Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:64)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
        at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:125)
        at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:97)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1782)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1539)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1318)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1127)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1120)
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
        at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
        at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:245)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
        at com.google.common.base.Throwables.propagate(Throwables.java:156)
        at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:120)
        at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80)
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:99)
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:95)
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:65)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:62)
        ... 22 more
Caused by: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
        at io.netty.util.concurrent.AbstractFuture.get(AbstractFuture.java:37)
        at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:104)
        ... 27 more
Caused by: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
        at org.apache.hive.spark.client.rpc.RpcServer$2.run(RpcServer.java:141)
        at io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
        at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
        ... 1 more

ERROR : Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
org.apache.hadoop.hive.ql.metadata.HiveException: Failed to create spark client.
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:64)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionManagerImpl.getSession(SparkSessionManagerImpl.java:114)
        at org.apache.hadoop.hive.ql.exec.spark.SparkUtilities.getSparkSession(SparkUtilities.java:125)
        at org.apache.hadoop.hive.ql.exec.spark.SparkTask.execute(SparkTask.java:97)
        at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
        at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
        at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1782)
        at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1539)
        at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1318)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1127)
        at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1120)
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:178)
        at org.apache.hive.service.cli.operation.SQLOperation.access$100(SQLOperation.java:72)
        at org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:232)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
        at org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:245)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.util.concurrent.TimeoutException: Timed out waiting for client connection.
        at com.google.common.base.Throwables.propagate(Throwables.java:156)
        at org.apache.hive.spark.client.SparkClientImpl.<init>(SparkClientImpl.java:120)
        at org.apache.hive.spark.client.SparkClientFactory.createClient(SparkClientFactory.java:80)
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.createRemoteClient(RemoteHiveSparkClient.java:99)
        at org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.<init>(RemoteHiveSparkClient.java:95)
        at org.apache.hadoop.hive.ql.exec.spark.HiveSparkClientFactory.createHiveSparkClient(HiveSparkClientFactory.java:65)
        at org.apache.hadoop.hive.ql.exec.spark.session.SparkSessionImpl.open(SparkSessionImpl.java:62)
        ... 22 more
ERROR : Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

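For what it's worth, the "Timed out waiting for client connection" raised in RpcServer means HiveServer2 launched the remote Spark driver, but the driver never connected back within the handshake window. The two properties that govern that window are sketched below (standard Hive-on-Spark property names; the values are illustrative, and depending on the CDH release they may only take effect from hive-site.xml / Cloudera Manager rather than a session-level SET):

SET hive.spark.client.connect.timeout=30000;          -- ms the remote Spark driver has to connect back to HiveServer2
SET hive.spark.client.server.connect.timeout=300000;  -- ms allowed for the HiveServer2 <-> driver handshake to complete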