PySpark: Hive support is required to CREATE Hive TABLE (AS SELECT)
I plan to save my Spark DataFrames into Hive tables so that I can query them and extract latitude and longitude from them, since Spark DataFrames are immutable. Using PySpark in Jupyter, I wrote the following code to create the Spark session:
import findspark
findspark.init()
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
#readmultiple csv with pyspark
spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.sql.catalogImplementation=hive").enableHiveSupport() \
    .getOrCreate()
df = spark.read.csv("Desktop/train/train.csv",header=True);
Pickup_locations=df.select("pickup_datetime","Pickup_latitude",
"Pickup_longitude")
print(Pickup_locations.count())
Then I run this HiveQL:
df.createOrReplaceTempView("mytempTable")
spark.sql("create table hive_table as select * from mytempTable");
And I get this error:
Py4JJavaError: An error occurred while calling o24.sql.
: org.apache.spark.sql.AnalysisException: Hive support is required to CREATE Hive TABLE (AS SELECT);;
'CreateTable `hive_table`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, ErrorIfExists
+- Project [id#311, vendor_id#312, pickup_datetime#313, dropoff_datetime#314, passenger_count#315, pickup_longitude#316, pickup_latitude#317, dropoff_longitude#318, dropoff_latitude#319, store_and_fwd_flag#320, trip_duration#321]
I have run into this before. You need to pass a configuration parameter to the spark-submit command so that it treats Hive as the catalog implementation for Spark SQL. Here is what the spark-submit looks like:
spark-submit --deploy-mode cluster --master yarn --conf spark.sql.catalogImplementation=hive --class harri_sparkStreaming.com_spark_streaming.App ./target/com-spark-streaming-2.3.0-jar-with-dependencies.jar
The trick is in: --conf spark.sql.catalogImplementation=hive
Hope this helps.
Do you have control over how you run Spark? I mean, are you able to change the spark-submit command parameters?