Apache Spark: Spark 3.x on HDP 3.1 in headless mode with Hive - Hive tables not found

How can Spark 3.x be configured on HDP 3.1 using the headless (without-hadoop) version of Spark to interact with Hive?

First, I downloaded and unpacked headless Spark 3.x:

cd ~/development/software/spark-3.0.0-bin-without-hadoop
export HADOOP_CONF_DIR=/etc/hadoop/conf/
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk
export SPARK_DIST_CLASSPATH=$(hadoop --config /usr/hdp/current/spark2-client/conf classpath)
 
ls /usr/hdp # note the version and add it below, replacing 3.1.x.x-xxx with it

./bin/spark-shell --master yarn --queue myqueue --conf spark.driver.extraJavaOptions='-Dhdp.version=3.1.x.x-xxx' --conf spark.yarn.am.extraJavaOptions='-Dhdp.version=3.1.x.x-xxx' --conf spark.hadoop.metastore.catalog.default=hive --files /usr/hdp/current/hive-client/conf/hive-site.xml
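
As a side check (a sketch, assuming only the environment variables set above), it can help to confirm whether any Hive jars actually ended up on the distribution classpath:

# sketch: list classpath entries that mention hive; assumes SPARK_DIST_CLASSPATH was exported as above
echo "$SPARK_DIST_CLASSPATH" | tr ':' '\n' | grep -i hive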

spark.sql("show databases").show
// only showing default namespace, existing hive tables are missing
+---------+
|namespace|
+---------+
|  default|
+---------+

spark.conf.get("spark.sql.catalogImplementation")
res2: String = in-memory # I want to see hive here - how? How to add hive jars onto the classpath?
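
For reference, the Hive catalog can also be requested explicitly; this is only a sketch of the intent, since it still requires the Hive classes to actually be on the classpath (spark.sql.catalogImplementation and enableHiveSupport() are standard Spark options, the rest of the invocation mirrors the one above):

# sketch: ask for the hive catalog explicitly when launching the shell
./bin/spark-shell --master yarn --queue myqueue \
  --conf spark.sql.catalogImplementation=hive \
  --files /usr/hdp/current/hive-client/conf/hive-site.xml

// in an application submitted with spark-submit, the equivalent is to request Hive support on the builder
val spark = org.apache.spark.sql.SparkSession.builder()
  .enableHiveSupport()   // fails fast if the spark-hive classes are not on the classpath
  .getOrCreate()
spark.sql("show databases").show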
NOTE: this is an updated version of an earlier question, now for Spark 3.x and HDP 3.1.

Furthermore: I am aware of the problems with ACID Hive tables in Spark. For now, I simply want to be able to see the existing databases.

Edit: the Hive jars must be put onto the classpath. Trying it as follows:

 export SPARK_DIST_CLASSPATH="/usr/hdp/current/hive-client/lib*:${SPARK_DIST_CLASSPATH}"
Now using spark-sql:

./bin/spark-sql --master yarn --queue myqueue --conf spark.driver.extraJavaOptions='-Dhdp.version=3.1.x.x-xxx' --conf spark.yarn.am.extraJavaOptions='-Dhdp.version=3.1.x.x-xxx' --conf spark.hadoop.metastore.catalog.default=hive --files /usr/hdp/current/hive-client/conf/hive-site.xml
This fails with:

Error: Failed to load class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.
Failed to load main class org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.
That is, the line:
export SPARK_DIST_CLASSPATH="/usr/hdp/current/hive-client/lib*:${SPARK_DIST_CLASSPATH}"
had no effect (the problem is the same if it is not set at all).
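
One small sanity check worth doing at this point (a sketch; the directory is the same hive-client path used above, and whether this changes the outcome here was not verified): confirm the directory actually holds the jars, and note that the JVM only expands a classpath wildcard when it is the whole final path element, i.e. lib/* rather than lib*.

# sketch: confirm the hive-client lib directory actually contains jars
ls /usr/hdp/current/hive-client/lib/*.jar | head
# note: a JVM classpath wildcard must be the entire last element, e.g. .../hive-client/lib/*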

As noted above, the Hive jars are needed. They are not included in the headless build.

I was unable to retrofit them.

Solution: don't worry, simply use the Spark build with Hadoop 3.2 (on HDP 3.1).
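
For completeness, a minimal sketch of that route, assuming the standard spark-3.0.0-bin-hadoop3.2 download from the Apache archive; the queue name and the hdp.version placeholder are the same as above:

# sketch: the Hadoop 3.2 build bundles the Hive jars, so SPARK_DIST_CLASSPATH is not needed
wget https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop3.2.tgz
tar -xzf spark-3.0.0-bin-hadoop3.2.tgz
cd spark-3.0.0-bin-hadoop3.2

export HADOOP_CONF_DIR=/etc/hadoop/conf/
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk

./bin/spark-shell --master yarn --queue myqueue \
  --conf spark.driver.extraJavaOptions='-Dhdp.version=3.1.x.x-xxx' \
  --conf spark.yarn.am.extraJavaOptions='-Dhdp.version=3.1.x.x-xxx' \
  --conf spark.hadoop.metastore.catalog.default=hive \
  --files /usr/hdp/current/hive-client/conf/hive-site.xml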