Apache Spark MSCK not working through Spark SQL

Tags: apache-spark, hive, apache-spark-sql

I have created a HiveContext object and tried to execute the MSCK command, which should add the partitions to the Hive table, but it throws the following exception:

Exception in thread "main" org.apache.spark.sql.catalyst.parser.ParseException:
Operation not allowed: msck repair table(line 1, pos 0)

== SQL ==
msck repair table table_name
^^^

        at org.apache.spark.sql.catalyst.parser.ParserUtils$.operationNotAllowed(ParserUtils.scala:43)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder$$anonfun$visitFailNativeCommand$1.apply(SparkSqlParser.scala:837)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder$$anonfun$visitFailNativeCommand$1.apply(SparkSqlParser.scala:828)
        at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:96)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder.visitFailNativeCommand(SparkSqlParser.scala:828)
        at org.apache.spark.sql.execution.SparkSqlAstBuilder.visitFailNativeCommand(SparkSqlParser.scala:53)
        at org.apache.spark.sql.catalyst.parser.SqlBaseParser$FailNativeCommandContext.accept(SqlBaseParser.java:900)
        at org.antlr.v4.runtime.tree.AbstractParseTreeVisitor.visit(AbstractParseTreeVisitor.java:42)
        at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:64)
        at org.apache.spark.sql.catalyst.parser.AstBuilder$$anonfun$visitSingleStatement$1.apply(AstBuilder.scala:64)
        at org.apache.spark.sql.catalyst.parser.ParserUtils$.withOrigin(ParserUtils.scala:96)
        at org.apache.spark.sql.catalyst.parser.AstBuilder.visitSingleStatement(AstBuilder.scala:63)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:54)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser$$anonfun$parsePlan$1.apply(ParseDriver.scala:53)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parse(ParseDriver.scala:82)
        at org.apache.spark.sql.execution.SparkSqlParser.parse(SparkSqlParser.scala:46)
        at org.apache.spark.sql.catalyst.parser.AbstractSqlParser.parsePlan(ParseDriver.scala:53)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:582)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:682)
        at com.mcd.spark.driver.R2D2Driver$.main(R2D2Driver.scala:321)
        at com.mcd.spark.driver.R2D2Driver.main(R2D2Driver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:729)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
The Spark context and Hive context are created as shown below:

val conf = new SparkConf().setAppName(appName).setMaster(master)
val sc = new SparkContext(conf)
val hqlContext = new org.apache.spark.sql.hive.HiveContext(sc)

hqlContext.sql("msck repair table table_name")

Can someone help me figure out how to add partitions to the Hive table?

Regards,
Aswin
Try it with "runSqlHive", which hands the statement to the underlying Hive client rather than Spark's SQL parser, like:

hqlContext.runSqlHive("msck repair table table_name")

try {
    // Load the Hive JDBC driver and run MSCK directly against HiveServer2
    Class.forName("org.apache.hive.jdbc.HiveDriver");
    Connection connection = DriverManager.getConnection("jdbc:hive2://<host>:<port>/<db>", "<user>", "<password>");
    Statement stmt = connection.createStatement();
    stmt.execute("msck repair table table_name");
} catch (final ClassNotFoundException | SQLException e) {
    throw new RuntimeException(e);
}
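
If neither "runSqlHive" nor going through JDBC is an option, another workaround that often works is to register the missing partitions explicitly, since ALTER TABLE ... ADD PARTITION is a statement Spark SQL generally accepts even when MSCK is rejected. The following is only a minimal sketch; the partition column, values and paths are hypothetical placeholders, not from the original question:

 val partitions = Seq("2017-01-01", "2017-01-02")
 partitions.foreach { dt =>
   // Add each partition explicitly instead of relying on MSCK REPAIR TABLE.
   // "dt" and the HDFS location are assumed placeholders for illustration.
   hqlContext.sql(
     s"ALTER TABLE table_name ADD IF NOT EXISTS PARTITION (dt='$dt') " +
     s"LOCATION 'hdfs:///path/to/table_name/dt=$dt'"
   )
 }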

I think I have run into this before. Support for MSCK was only added to Spark recently, along with fast stats support. Could you try whether the alternative syntax works for you:

ALTER TABLE {table_name} RECOVER PARTITIONS;


Also, you may want to look into how to prevent side effects.
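
For reference, a minimal sketch of issuing that statement from the HiveContext created in the question (whether it parses depends on the Spark build in use; "table_name" is the placeholder from the question):

 // Run the alternative recovery syntax through the existing Hive context.
 hqlContext.sql("ALTER TABLE table_name RECOVER PARTITIONS")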


HiveContext does not show this method. Do I need to import anything? Shouldn't it be included in org.apache.spark.sql.hive.HiveContext?
By the way, what are your Spark and Hive versions?
The versions are mentioned below: Hive 2.1.0-amzn-0 and Spark 2.0.0. I don't know why HiveContext doesn't show this method.
It seems that in Spark 2.0 SparkSession is enabled and the runSqlHive method was removed from HiveContext (compare v1.6.1 with v2.0.0). Is there any other alternative?
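
Since the comments note that runSqlHive is no longer available in Spark 2.0, here is a minimal sketch (not from the original thread) of the SparkSession route; the builder settings are assumptions about the environment, and which of the statements parses depends on the exact Spark version:

 import org.apache.spark.sql.SparkSession

 // Sketch for Spark 2.x, where HiveContext/runSqlHive is superseded by SparkSession.
 // Assumes Hive support is available and the table already exists in the metastore.
 val spark = SparkSession.builder()
   .appName("msck-repair-example")
   .enableHiveSupport()
   .getOrCreate()

 // On builds where MSCK is still rejected by the parser, the ALTER TABLE form
 // suggested above may be accepted instead:
 spark.sql("ALTER TABLE table_name RECOVER PARTITIONS")

 // Later releases also expose a catalog API for the same operation:
 // spark.catalog.recoverPartitions("table_name")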