
Java Spark insert into a dynamic partition of a Hive table raises an exception


In the code below I am inserting data into the table txnaggr_rt_fact, which is partitioned by two columns, txninterval and intervaltype. I have enabled dynamic partitioning in Spark SQL. If the partition already exists there is no problem:

SparkSession spark = SparkSession.builder().appName("Java Spark Hive Example")
                .config("spark.sql.warehouse.dir", "hdfs://localhost:8020/user/hive/warehouse")
                .config("hive.exec.dynamic.partition", "true").config("hive.exec.dynamic.partition.mode", "nonstrict")
                .enableHiveSupport().getOrCreate();
        spark.sql("use nadb");
        spark.sql("show tables").show();
        spark.sql("insert into table txnaggr_rt_fact partition(txninterval='2018-09-03',intervaltype='test') values('1','2','3',4)"); //(Line number 113) Exception raises here as partition doesn't exist
The data gets inserted into the table, but if the partition does not exist an exception is raised; if the partition already exists, there is no issue.

Here is the exception:

Exception in thread "main" org.apache.spark.sql.AnalysisException: java.lang.NullPointerException: null;
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:106)
        at org.apache.spark.sql.hive.HiveExternalCatalog.loadPartition(HiveExternalCatalog.scala:843)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.processInsert(InsertIntoHiveTable.scala:249)
        at org.apache.spark.sql.hive.execution.InsertIntoHiveTable.run(InsertIntoHiveTable.scala:99)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
        at org.apache.spark.sql.execution.command.DataWritingCommandExec.executeCollect(commands.scala:115)
        at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
        at org.apache.spark.sql.Dataset$$anonfun$6.apply(Dataset.scala:190)
        at org.apache.spark.sql.Dataset$$anonfun$52.apply(Dataset.scala:3253)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
        at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3252)
        at org.apache.spark.sql.Dataset.<init>(Dataset.scala:190)
        at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:75)
        at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:638)
        at com.cw.na.spark.HiveSqlTest.main(HiveSqlTest.java:113)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.metadata.Hive.replaceFiles(Hive.java:3412)
        at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1650)
        at org.apache.hadoop.hive.ql.metadata.Hive.loadPartition(Hive.java:1579)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.spark.sql.hive.client.Shim_v0_14.loadPartition(HiveShim.scala:836)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply$mcV$sp(HiveClientImpl.scala:741)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:739)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$loadPartition$1.apply(HiveClientImpl.scala:739)
        at org.apache.spark.sql.hive.client.HiveClientImpl$$anonfun$withHiveState$1.apply(HiveClientImpl.scala:272)
        at org.apache.spark.sql.hive.client.HiveClientImpl.liftedTree1$1(HiveClientImpl.scala:210)
        at org.apache.spark.sql.hive.client.HiveClientImpl.retryLocked(HiveClientImpl.scala:209)
        at org.apache.spark.sql.hive.client.HiveClientImpl.withHiveState(HiveClientImpl.scala:255)
        at org.apache.spark.sql.hive.client.HiveClientImpl.loadPartition(HiveClientImpl.scala:739)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply$mcV$sp(HiveExternalCatalog.scala:855)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
        at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$loadPartition$1.apply(HiveExternalCatalog.scala:843)
        at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
        ... 25 more
The property hive.exec.dynamic.partition.mode was set to strict; once I realised this I changed it to nonstrict. I did not restart Spark afterwards, but I did stop the metastore and start it again. Do I need to restart Spark as well? Is there anything else missing in my code?
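
As a side note (not something stated in the question): Hive settings like these can usually also be applied per session with SET statements, which sidesteps the question of restarting anything. A minimal sketch, assuming a SparkSession like the one above; the class name is made up for illustration:

import org.apache.spark.sql.SparkSession;

// Hypothetical helper, for illustration only: apply and verify the dynamic-partition
// settings inside the running session instead of relying on builder configs alone.
public class DynamicPartitionConfigCheck {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("Dynamic partition config check")
                .enableHiveSupport().getOrCreate();
        spark.sql("SET hive.exec.dynamic.partition=true");
        spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict");
        // Print the value the session actually sees, to rule out a stale setting.
        spark.sql("SET hive.exec.dynamic.partition.mode").show(false);
        spark.stop();
    }
}

Whether this removes the NullPointerException in this particular case is a separate matter; the fix that was eventually accepted (see the answer below) was pre-creating the partition.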

Here is the schema of txnaggr_rt_fact:

channelid               string
chaincodeid             string
chaincodefcn            string
count                   int
txninterval             date
intervaltype            string

# Partition Information
# col_name              data_type               comment

txninterval             date
intervaltype            string
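
For reference, a Hive table matching the layout above could be declared roughly as follows. This is only a sketch inferred from the describe output; the actual storage format, location and table properties are not shown in the question:

import org.apache.spark.sql.SparkSession;

// Sketch only: a table with the same data columns and partition columns as the
// describe output above. Names come from the question; everything else is assumed.
public class CreateTxnaggrRtFactSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("Create txnaggr_rt_fact sketch")
                .enableHiveSupport().getOrCreate();
        spark.sql("use nadb");
        // Data columns first, partition columns declared separately.
        spark.sql("create table if not exists txnaggr_rt_fact ("
                + "channelid string, chaincodeid string, chaincodefcn string, `count` int) "
                + "partitioned by (txninterval date, intervaltype string)");
        spark.stop();
    }
}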
Any help is appreciated. Thanks.

If you are sure that the cause is the missing partition you are inserting into, you can issue the following query before inserting the data:

alter table txnaggr_rt_fact add if not exists partition (txninterval=<value>, intervaltype=<value>)
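
In the Java code from the question, that pre-create step could look roughly like this (a sketch; the class name is made up, everything else is taken from the question):

import org.apache.spark.sql.SparkSession;

public class HiveSqlTestWithPartitionPreCreate {  // hypothetical class name
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder().appName("Java Spark Hive Example")
                .config("spark.sql.warehouse.dir", "hdfs://localhost:8020/user/hive/warehouse")
                .config("hive.exec.dynamic.partition", "true")
                .config("hive.exec.dynamic.partition.mode", "nonstrict")
                .enableHiveSupport().getOrCreate();
        spark.sql("use nadb");
        // Create the target partition up front; this is a no-op if it already exists.
        spark.sql("alter table txnaggr_rt_fact add if not exists "
                + "partition (txninterval='2018-09-03', intervaltype='test')");
        // The insert no longer fails with the NullPointerException for a missing partition.
        spark.sql("insert into table txnaggr_rt_fact "
                + "partition (txninterval='2018-09-03', intervaltype='test') values ('1','2','3',4)");
        spark.stop();
    }
}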

What is the schema of txnaggr_rt_fact? The number of values in the insert's values list might not match the number of columns.

Here is the schema, and I am inserting accordingly: channelid string, chaincodeid string, chaincodefcn string, count int, txninterval date, intervaltype string; partition columns: txninterval date, intervaltype string.

If you are sure the exception is caused by the missing partition, which is not obvious just from looking at the log, how about running alter table ... add if not exists partition (txninterval=<value>, intervaltype=<value>) before the insert?

I will try it and let you know.

Hey, thanks a lot, that worked. Please add this as an answer and I will mark it as the correct one. Thank you very much for your time; this saved me.