
Java Spark Hive error: how do I solve it?


I am trying to write a simple program that accesses a Hive table using Spark SQL:

SparkSession spark = SparkSession.builder()
                                 .appName("Java Spark Hive Example")
                                 .master("local[*]")
                                 .config("hive.metastore.uris", "thrift://localhost:9083")
                                 .enableHiveSupport()
                                 .getOrCreate();

try {
    Dataset<Row> df = spark.sql("select survey_response_value from health");
    df.show();
} catch (AnalysisException e) { // catch Spark's org.apache.spark.sql.AnalysisException specifically
    System.out.print("\nTable is not found\n");
}
I have run this particular program many times on my machine and it worked fine, but it suddenly stopped working and started throwing an error. Here is the complete error and stack trace:

I am using IntelliJ IDEA.

I have not changed the dependencies or the code, so I do not understand why it no longer works. How can I get rid of this error? Please help. The problem is:

17:22:50.442 [main] INFO  org.apache.spark.SparkContext - Created broadcast 0 from show at hivespark.java:29
Exception in thread "main" java.lang.ExceptionInInitializerError
    at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:132)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:113)
    at org.apache.spark.sql.execution.SparkPlan.getByteArrayRdd(SparkPlan.scala:225)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:308)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2371)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
    at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2765)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2370)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2377)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2113)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2112)
    at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2795)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2112)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2327)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:636)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:595)
    at org.apache.spark.sql.Dataset.show(Dataset.scala:604)
    at sparky.hivespark.main(hivespark.java:29)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Jackson version is too old 2.5.1
    at com.fasterxml.jackson.module.scala.JacksonModule$class.setupModule(JacksonModule.scala:56)
    at com.fasterxml.jackson.module.scala.DefaultScalaModule.setupModule(DefaultScalaModule.scala:19)
    at com.fasterxml.jackson.databind.ObjectMapper.registerModule(ObjectMapper.java:651)
    at org.apache.spark.rdd.RDDOperationScope$.<init>(RDDOperationScope.scala:82)
    at org.apache.spark.rdd.RDDOperationScope$.<clinit>(RDDOperationScope.scala)
    ... 25 more
17:22:50.612 [Thread-2] INFO  org.apache.spark.SparkContext - Invoking stop() from shutdown hook

If you are using SBT, add the following. I use 2.8.x here; any version above 2.5 that is compatible with your environment will also work:

// https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-core
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-core" % "2.8.7"

// https://mvnrepository.com/artifact/com.fasterxml.jackson.core/jackson-databind
libraryDependencies += "com.fasterxml.jackson.core" % "jackson-databind" % "2.8.7"
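If the stale Jackson 2.5.1 is being pulled in transitively by some other dependency, adding the library dependencies above may not change which version is resolved. A minimal sketch using SBT's `dependencyOverrides` to force one version across the whole tree, assuming sbt 1.x (sbt 0.13 expects a `Set` instead of a `Seq`) and 2.8.7 as an example target version:

```scala
// Force every transitive Jackson artifact to one consistent version,
// regardless of which dependency requested the old 2.5.1.
dependencyOverrides ++= Seq(
  "com.fasterxml.jackson.core"   %  "jackson-core"         % "2.8.7",
  "com.fasterxml.jackson.core"   %  "jackson-databind"     % "2.8.7",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.8.7"
)
```

Note that the `JacksonModule.setupModule` frame in the trace is jackson-module-scala performing the version check, so overriding only jackson-core and jackson-databind while an old jackson-module-scala remains on the classpath can still trigger the same error.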
If you are using Maven:



Could you post your build file, specifically the fasterxml dependencies and their scopes? Are you facing any other issue after this change? Please reply.
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.8.7</version>
</dependency>



<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.8.7</version>
</dependency>
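In Maven, too, if declaring the dependencies above does not change which Jackson version is actually resolved, a `<dependencyManagement>` section pins the version for all transitive uses as well; a sketch using the same example version 2.8.7:

```xml
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
      <version>2.8.7</version>
    </dependency>
    <dependency>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
      <version>2.8.7</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```

You can then confirm which version wins the resolution with `mvn dependency:tree -Dincludes=com.fasterxml.jackson.core`.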