Apache Spark PySpark HiveContext error


I'm trying to refresh a table's partitions with PySpark using the command below. I can issue any other SQL command, but MSCK REPAIR TABLE is giving me trouble.

Code:

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

conf = SparkConf().setAppName("PythonHiveExample")\
                  .set("spark.executor.memory", "3g")\
                  .set("spark.driver.memory", "3g")\
                  .set("spark.driver.cores", "2")\
                  .set("spark.storage.memoryFraction", "0.4")
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)
sqlContext.sql("MSCK REPAIR TABLE testdatabase.testtable;")
Error:

            py4j.protocol.Py4JJavaError: An error occurred while calling o43.sql.
            : org.apache.spark.sql.AnalysisException: missing EOF at 'MSCK' near 'testdatabase'; line 1 pos 17
                    at org.apache.spark.sql.hive.HiveQl$.createPlan(HiveQl.scala:254)
                    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:41)
                    at org.apache.spark.sql.hive.ExtendedHiveQlParser$$anonfun$hiveQl$1.apply(ExtendedHiveQlParser.scala:40)
                    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:136)
                    at scala.util.parsing.combinator.Parsers$Success.map(Parsers.scala:135)
                    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
                    at scala.util.parsing.combinator.Parsers$Parser$$anonfun$map$1.apply(Parsers.scala:242)
New error:

            File "/usr/hdp/2.3.0.0-2557/spark/python/pyspark/sql/context.py", line 488, in sql
                return DataFrame(self._ssql_ctx.sql(sqlQuery), self)
            File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
            File "/usr/hdp/2.3.0.0-2557/spark/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
            py4j.protocol.Py4JJavaError: An error occurred while calling o43.sql.
            : org.apache.spark.sql.AnalysisException: missing EOF at ';' near '10'; line 1 pos 41


I'm currently using Spark 1.6, and the statement below updates the partitions in the Hive metastore:


sqlContext.sql("alter table schema.table_name add partition(key=value)")
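
Since the Spark 1.6 parser rejects MSCK REPAIR TABLE, each new partition has to be registered explicitly. A minimal sketch of that workaround, reusing the question's testdatabase.testtable; the partition column dt and the date values are hypothetical placeholders:

from pyspark import SparkConf, SparkContext
from pyspark.sql import HiveContext

conf = SparkConf().setAppName("AddPartitionsExample")
sc = SparkContext(conf=conf)
sqlContext = HiveContext(sc)

# One ALTER TABLE per new partition; IF NOT EXISTS makes re-runs safe
# because already-registered partitions are silently skipped.
for dt in ["2016-01-01", "2016-01-02", "2016-01-03"]:
    sqlContext.sql(
        "ALTER TABLE testdatabase.testtable "
        "ADD IF NOT EXISTS PARTITION (dt='%s')" % dt
    )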
You can try the following command:

ALTER TABLE table_name ADD PARTITION (partition_column='value')
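
Hive's DDL also accepts an explicit LOCATION, which matters when the partition directories don't sit under the table's default warehouse path. A hedged sketch with hypothetical table names and HDFS paths:

# Register a partition whose data lives at a non-default path
# (table, column, and path below are placeholders for illustration).
sqlContext.sql(
    "ALTER TABLE testdatabase.testtable ADD IF NOT EXISTS "
    "PARTITION (dt='2016-01-02') "
    "LOCATION 'hdfs:///data/testtable/dt=2016-01-02'"
)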

Comments:

Can you try removing the ; at the end of the query? I remember a case where that fixed this problem.

Try this: sqlContext.sql("use testdatabase;") followed by sqlContext.sql("MSCK REPAIR TABLE testtable;")

Tried the above suggestions. Still getting the error; I've added it above.

This should be a comment.
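
Combining the two comment suggestions (select the database first, and drop the trailing semicolon) would look like the sketch below; note the asker reports the error persisted on their setup:

# Suggestions from the comments combined: switch to the database first
# instead of qualifying the table, and omit the trailing semicolon.
sqlContext.sql("use testdatabase")
sqlContext.sql("MSCK REPAIR TABLE testtable")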