Python PySpark: DataFrame throws an error when displaying its contents

Tags: python, pyspark, hive, pyspark-sql, pyspark-dataframes

I am using Spark 2.3.2 with PySpark to read data from Hive. Here is my code:

from pyspark import SparkContext
from pyspark.sql import SQLContext
sql_sc = SQLContext(sc)
SparkContext.setSystemProperty("hive.metastore.uris", "thrift://17.20.24.186:9083").enableHiveSupport().getOrCreate()
df=sql_sc.sql("SELECT * FROM mtsods.model_result_abt")
df.show() ## here is where error occurs
When I try to display the contents of the DataFrame, I get the error shown below:

Py4JJavaError                             Traceback (most recent call last)
<ipython-input-32-1a6ce2362cd4> in <module>()
----> 1 df.show()

C:\spark-2.3.2-bin-hadoop2.7\python\pyspark\sql\dataframe.py in show(self, n, truncate, vertical)
    348         """
    349         if isinstance(truncate, bool) and truncate:
--> 350             print(self._jdf.showString(n, 20, vertical))
    351         else:
    352             print(self._jdf.showString(n, int(truncate), vertical))

C:\spark-2.3.2-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258 
   1259         for temp_arg in temp_args:

C:\spark-2.3.2-bin-hadoop2.7\python\pyspark\sql\utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

C:\spark-2.3.2-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o419.showString.
: java.lang.AssertionError: assertion failed: No plan for HiveTableRelation `mtsods`.`model_result_abt`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [feature#319, profile_id#320, model_id#321, value#322, score#323, rank#324, year_d#325, taxpayer#326, it_ref_no#327]

    at scala.Predef$.assert(Predef.scala:170)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
    at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
    at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1336)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
    at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
    at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
    at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
    at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
    at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3254)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2489)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2703)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:254)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Even df.count(), df.head(), and df.first() raise the same error. How can I view the contents of the DataFrame I created?


Note: the same query works fine from Hue (Cloudera) against Hive.

This is not caused by the show or count operation itself. Spark uses a lazy evaluation model, so the error only surfaces once you apply an action.
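The deferred failure can be illustrated without Spark at all: a plain Python generator behaves the same way, in that building the pipeline does no work, and the error only appears once something consumes it (this is an analogy only, not Spark code):

```python
def build_pipeline():
    # Building the "pipeline" does no work yet, much like sql()/select() in Spark.
    return (1 // (x - 2) for x in range(5))  # will divide by zero when x == 2

gen = build_pipeline()   # no error here: nothing has executed yet

results = []
error_seen = False
try:
    for value in gen:    # the "action": consuming the generator forces evaluation
        results.append(value)
except ZeroDivisionError:
    error_seen = True    # the failure only surfaces now

print(results, error_seen)  # [-1, -1] True
```

This is why df.show(), df.count(), df.head(), and df.first() all report the same underlying planning error: each of them is the first point at which the query actually has to execute.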

When using spark-submit, pass the following configuration up front:

--conf spark.sql.catalogImplementation=hive 
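A full invocation might look like this (the script name is a placeholder; note that --conf must come before the application file):

```shell
spark-submit \
  --conf spark.sql.catalogImplementation=hive \
  your_script.py
```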

Where exactly should this be written? Could you guide me? I am coding in the Spyder IDE. @Rahul I have also updated the code above to enable Hive support, but I still get the same error, and I am not using spark-shell.
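For a plain Python session (e.g. in Spyder, with no spark-shell or spark-submit involved), the equivalent is to set the options while building the SparkSession, before running any query. A minimal sketch, assuming pyspark 2.3.x is installed and the metastore URI from the question is reachable:

```python
from pyspark.sql import SparkSession

# enableHiveSupport() sets spark.sql.catalogImplementation=hive, which registers
# the Hive catalog; without it the planner has no strategy for HiveTableRelation
# and fails with the assertion error shown in the question.
spark = (SparkSession.builder
         .appName("read-from-hive")
         .config("hive.metastore.uris", "thrift://17.20.24.186:9083")
         .enableHiveSupport()
         .getOrCreate())

df = spark.sql("SELECT * FROM mtsods.model_result_abt")
df.show()
```

Since spark.sql.catalogImplementation is a static configuration, it must be in place when the session is first created; calling enableHiveSupport() after a non-Hive session already exists has no effect.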