Apache Spark can't find table or view registered with registerTempTable


So I'm running the following on the pyspark shell:

>>> data = spark.read.csv("annotations_000", header=False, mode="DROPMALFORMED", schema=schema)
>>> data.show(3)
+----------+--------------------+--------------------+---------+---------+--------+-----------------+
|   item_id|           review_id|                text|   aspect|sentiment|comments| annotation_round|
+----------+--------------------+--------------------+---------+---------+--------+-----------------+
|9999900031|9999900031/custom...|Just came back to...|breakfast|        3|    null|ASE_OpeNER_round2|
|9999900031|9999900031/custom...|Just came back to...|    staff|        3|    null|ASE_OpeNER_round2|
|9999900031|9999900031/custom...|The hotel was loc...|    noise|        2|    null|ASE_OpeNER_round2|
+----------+--------------------+--------------------+---------+---------+--------+-----------------+
>>> data.registerTempTable("temp")
>>> df = sqlContext.sql("select first(item_id), review_id, first(text), concat_ws(';', collect_list(aspect)) as aspect from temp group by review_id")
>>> df.show(3)
+---------------------+--------------------+--------------------+--------------------+
|first(item_id, false)|           review_id|  first(text, false)|              aspect|
+---------------------+--------------------+--------------------+--------------------+
|               100012|100012/tripadviso...|We stayed here la...|          staff;room| 
|               100013|100013/tripadviso...|We stayed for two...|           breakfast|
|               100031|100031/tripadviso...|We stayed two nig...|noise;breakfast;room|
+---------------------+--------------------+--------------------+--------------------+
and it works perfectly with the shell's sqlContext variable.

When I write it as a script:

from pyspark import SparkContext
from pyspark.sql import SparkSession, SQLContext

sc = SparkContext(appName="AspectDetector")
spark = SparkSession(sc)
sqlContext = SQLContext(sc)

data = spark.read.csv("annotations_000", header=False, mode="DROPMALFORMED", schema=schema)
data.registerTempTable("temp")
df = sqlContext.sql("select first(item_id), review_id, first(text), concat_ws(';', collect_list(aspect)) as aspect from temp group by review_id")
and run it, I get the following:

pyspark.sql.utils.AnalysisException: u'Table or view not found: temp; line 1 pos 99'


How is that possible? Am I doing something wrong when setting up sqlContext?

First off, you need to initialize Spark with Hive support, for example:

spark = SparkSession.builder \
    .master("yarn") \
    .appName("AspectDetector") \
    .enableHiveSupport() \
    .getOrCreate()

sqlContext = SQLContext(spark)
However, you will want to use spark.sql() to run your queries, not sqlContext.sql().
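Putting it together, a minimal sketch of the corrected script (the input path and the schema variable are carried over from the question; adjust master for your cluster):

from pyspark.sql import SparkSession

spark = SparkSession.builder \
    .master("yarn") \
    .appName("AspectDetector") \
    .enableHiveSupport() \
    .getOrCreate()

# schema is assumed to be defined beforehand, as in the question.
data = spark.read.csv("annotations_000", header=False,
                      mode="DROPMALFORMED", schema=schema)
data.registerTempTable("temp")

# Query through the same session that registered the view.
df = spark.sql("select first(item_id), review_id, first(text), "
               "concat_ws(';', collect_list(aspect)) as aspect "
               "from temp group by review_id")
df.show(3)

Note that on Spark 2.0+, data.createOrReplaceTempView("temp") is the non-deprecated equivalent of registerTempTable.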


I found this confusing as well, but I think it is because when you do data.registerTempTable("temp") you are actually in the spark context rather than the sqlContext context. If you want to query a Hive table, you should still use sqlContext.sql().
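One quick way to verify this (my own check, not part of the original answer): the view registered by registerTempTable lives in the session's catalog, which you can inspect:

data.registerTempTable("temp")
# The temp view appears here with isTemporary=True (Spark >= 2.0 catalog API).
print(spark.catalog.listTables())
spark.sql("select count(*) from temp").show()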

Initialize it with a Hive context.
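For older clusters, a minimal sketch assuming Spark 1.x, where the Hive-enabled entry point was HiveContext rather than SparkSession (HiveContext is deprecated as of Spark 2.0):

from pyspark import SparkContext
from pyspark.sql import HiveContext

sc = SparkContext(appName="AspectDetector")
# HiveContext is the Hive-enabled counterpart of SQLContext in Spark 1.x.
sqlContext = HiveContext(sc)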