Apache Spark: Spark interpreter times out reading the Hive metastore

Tags: apache-spark, hive, apache-zeppelin, cloudera-cdh, hive-metastore

Folks, I am using the Spark interpreter in Zeppelin to read Hive data. My version information:

  • Zeppelin: 0.8.2
  • CDH: Cloudera Express 5.13.1
  • Hive: 1.1.0

When I run a SQL statement like the following in a paragraph of a Spark-interpreter notebook (the table does not exist):

%sql select * from some_table

even though the table does not exist, the paragraph runs for 300 s. I believe the 300 s comes from this setting:

hive.metastore.client.socket.timeout=300
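For reference, this is the client-side socket timeout for Thrift calls to the metastore. A minimal sketch of where it lives, assuming you can edit the `hive-site.xml` used by the node running the Zeppelin Spark interpreter (the value 60 here is illustrative, not a recommendation):

```xml
<!-- hive-site.xml: client-side socket timeout for metastore Thrift calls -->
<property>
  <name>hive.metastore.client.socket.timeout</name>
  <!-- CDH 5 defaults this to 300 seconds; a lower value makes hung calls fail faster -->
  <value>60</value>
</property>
```

In Spark, Hive client properties can usually also be passed through the interpreter settings as `spark.hadoop.hive.metastore.client.socket.timeout=60`; note that this only shortens how long a hung metastore call waits, it does not address why the call hangs in the first place.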

The log is below; the interval between the INFO and WARN lines is exactly 300 seconds:

INFO [2020-10-16 11:32:42,887] ({pool-2-thread-7} SchedulerFactory.java[jobStarted]:114) - Job 20200723-090353_389722279 started by scheduler org.apache.zeppelin.spark.SparkSqlInterpreter113305554
 WARN [2020-10-16 11:37:42,930] ({pool-2-thread-7} RetryingMetaStoreClient.java[invoke]:184) - MetaStoreClient lost connection. Attempting to reconnect.
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
...
Caused by: java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
    at java.net.SocketInputStream.read(SocketInputStream.java:171)
    at java.net.SocketInputStream.read(SocketInputStream.java:141)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127)
    ... 93 more
 INFO [2020-10-16 11:37:43,933] ({pool-2-thread-7} HiveMetaStoreClient.java[open]:376) - Trying to connect to metastore with URI thrift://host:9083
 INFO [2020-10-16 11:37:43,948] ({pool-2-thread-7} HiveMetaStoreClient.java[open]:472) - Connected to metastore.
 INFO [2020-10-16 11:37:43,956] ({pool-2-thread-7} SchedulerFactory.java[jobFinished]:120) - Job 20200723-090353_389722279 finished by scheduler org.apache.zeppelin.spark.SparkSqlInterpreter113305554
Then it reconnects to the Hive metastore. Now I don't know how to fix this; can anyone help? Thanks in advance.