Unable to access the Hive default database from pyspark


When I try to execute the code below, I run into the exception that follows:

# sc is the SparkContext that the pyspark shell provides automatically.
from pyspark.sql import HiveContext
sqlContext = HiveContext(sc)
depts = sqlContext.sql("select * from departments")

17/09/13 03:37:12 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.1.0
17/09/13 03:37:12 WARN metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException
17/09/13 03:37:14 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
Traceback (most recent call last):
File "", line 1, in 
File "/usr/lib/spark/python/pyspark/sql/context.py", line 580, in sql
return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in call
File "/usr/lib/spark/python/pyspark/sql/utils.py", line 51, in deco
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: u'Table not found: departments; line 1 pos 14'
I am using the Cloudera VM, version 5.10, with Spark version 1.6.0.

Solution for this exception:
  • Removed the linked file with sudo rm -r /etc/spark/conf/hive.xml
  • Linked the file again with sudo ln -s /etc/hive/conf/hive-site.xml /etc/spark/conf/hive-site.xml (a quick sanity check is sketched below)
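A minimal sanity check, assuming you open a fresh pyspark shell after recreating the symlink; this is only an illustration, and the departments table name is the one from the question:

from pyspark.sql import HiveContext

# sc is the SparkContext that the pyspark shell creates automatically.
sqlContext = HiveContext(sc)

# With a valid hive-site.xml on the Spark conf path this lists the real Hive
# databases; with the Derby fallback you only see an empty "default".
sqlContext.sql("show databases").show()

# The original query should now resolve the table instead of raising
# AnalysisException.
depts = sqlContext.sql("select * from departments")
depts.show()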

  • It looks like Spark cannot find a proper hive-site.xml in its conf dir, so it spins up a dummy metastore with Derby instead.
  • How do I fix that? I linked hive-site.xml with sudo ln -s /etc/hive/conf/hive-site.xml /etc/spark/conf/hive.xml, but I am still facing the same issue (see the conf-dir check below).
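Regarding the comment above, a rough way to confirm from the same pyspark shell whether Spark can actually see a hive-site.xml is sketched below; the /etc/spark/conf default and the SPARK_CONF_DIR variable are assumptions based on this thread (Cloudera VM layout) and may differ on other installs:

import os

# Use SPARK_CONF_DIR if it is set, otherwise the /etc/spark/conf path used in
# this thread; adjust for your installation.
conf_dir = os.environ.get("SPARK_CONF_DIR", "/etc/spark/conf")
hive_site = os.path.join(conf_dir, "hive-site.xml")

# os.path.isfile follows symlinks, so a dangling link, or a link created under
# the wrong name such as hive.xml, shows up as False here.
print("conf dir %s exists: %s" % (conf_dir, os.path.isdir(conf_dir)))
print("%s is a regular file: %s" % (hive_site, os.path.isfile(hive_site)))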