
Scala: Inserting a Spark DataFrame into a Hive-managed ACID table not working, HDP 3.0


I'm having a problem inserting a Spark DataFrame into a Hive table. Can anyone help me? HDP version 3.1, Spark version 2.3. Thanks in advance.

// Original code section

import org.apache.spark.SparkContext
import com.hortonworks.spark.sql.hive.llap.HiveWarehouseSessionImpl
import org.apache.spark.sql.DataFrame
import com.hortonworks.hwc.HiveWarehouseSession
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
spark.sparkContext.setLogLevel("ERROR")
val hive = com.hortonworks.spark.sql.hive.llap.HiveWarehouseBuilder.session(spark).build()
/*
Some transformation operations happened, and the output of the transformation is stored in val result.
*/
val result = {
  num_records
  .union(df.transform(profile(heatmap_cols2type)))
}

result.createOrReplaceTempView("out_temp") // create a temp view

scala> result.show()
+-----+--------------------+-----------+------------------+------------+-------------------+
| type|              column|      field|             value|       order|               date|
+-----+--------------------+-----------+------------------+------------+-------------------+
|TOTAL|                 all|num_records|               737|           0|2019-12-05 18:10:12|
|  NUM|available_points_...|    present|               737|           0|2019-12-05 18:10:12|

hive.setDatabase("EXAMPLE_DB")
hive.createTable("EXAMPLE_TABLE")
  .ifNotExists()
  .column("`type`", "String")
  .column("`column`", "String")
  .column("`field`", "String")
  .column("`value`", "String")
  .column("`order`", "bigint")
  .column("`date`", "TIMESTAMP")
  .create()
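For reference, the same table could also be created with plain DDL through hive.executeUpdate. A sketch, assuming HDP 3.x defaults, where a managed table is created as a full ACID ORC table:

hive.executeUpdate(
  """CREATE TABLE IF NOT EXISTS EXAMPLE_DB.EXAMPLE_TABLE (
    |  `type` string,
    |  `column` string,
    |  `field` string,
    |  `value` string,
    |  `order` bigint,
    |  `date` timestamp
    |)""".stripMargin)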

hive.executeUpdate("INSERT INTO TABLE EXAMPLE_DB.EXAMPLE_TABLE SELECT * FROM out_temp");

----- Error of the original code -----
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: SemanticException [Error 10001]: Line 1:86 Table not found 'out_temp'
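The cause: hive.executeUpdate is executed by HiveServer2, which only sees the Hive catalog, while out_temp is a Spark temp view that exists only inside the Spark session, so Hive cannot resolve it. One way around this is to skip the temp view and write the DataFrame through the connector directly. A minimal sketch, assuming the HWC jar is on the Spark classpath and EXAMPLE_DB.EXAMPLE_TABLE exists as created above:

import com.hortonworks.hwc.HiveWarehouseSession

// HIVE_WAREHOUSE_CONNECTOR resolves to the data source class name
// "com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector"
result.write
  .format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR)
  .option("database", "EXAMPLE_DB")
  .option("table", "EXAMPLE_TABLE")
  .mode("append")
  .save()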
Another option I tried, after checking the documentation for HWC write operations (since Hive and Spark use independent catalogs):

spark.sql("SELECT type, column, field, value, order, date FROM out_temp").write.format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR).option("table", "wellington_profile").save()

---- Error of the alternate step ----
java.lang.ClassNotFoundException: Failed to find data source: HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR. Please find packages at ...
  at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:639)
  at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:241)
  ... 58 elided
Caused by: java.lang.ClassNotFoundException: HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR.DefaultSource
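This ClassNotFoundException means the text HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR reached Spark as a literal data source name instead of as the value of the constant, so Spark tried to load a class by that literal name. A sketch of the two equivalent fixes, assuming the standard HDP 3.x connector class:

// Either import and use the constant (unquoted)...
import com.hortonworks.hwc.HiveWarehouseSession

result.write
  .format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR)
  .option("table", "wellington_profile")
  .save()

// ...or spell out the class name the constant resolves to:
result.write
  .format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector")
  .option("table", "wellington_profile")
  .save()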

My questions are:

Apart from saving out_temp as a temp view in Spark, is there another way to create the table directly in Hive? Is there a way to insert into a Hive table from a Spark DataFrame?


Thanks everyone for your time.

result.write.save("example_table.parquet")

result.write.mode(SaveMode.Overwrite).saveAsTable("EXAMPLE_TABLE")
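Worth noting (an assumption based on HDP 3.x's split catalogs): saveAsTable writes into the Spark catalog, so the table it creates is not the Hive-managed ACID table. A quick sketch for checking which catalog a table landed in:

spark.catalog.listTables().show() // tables in the Spark catalog
hive.showTables().show()          // tables in the Hive catalog, via HWC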

You can read more details in the documentation.

Comments:

I tried result.write, but here is the error: scala> result.write.mode(SaveMode.Overwrite).saveAsTable("EXAMPLE_DB.EXAMPLE_TABLE") org.apache.spark.sql.catalyst.analysis.NoSuchDatabaseException: Database 'dfa60358_wellington_fz_db' not found; at org.apache.spark.sql.catalyst.catalog.SessionCatalog.org$apache$spark$sql$catalyst$catalog$SessionCatalog$$requireDbExists(SessionCatalog.scala:177). EXAMPLE_TABLE was created using the Hive catalog. Is there still a way to insert into the Hive table from a Spark DF?

Thanks for the reply. I tried the following: result.write.format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector").option("database", "EXAMPLE_DB").option("table", "EXAMPLE_TABLE").mode("append").save(). Justification: since the DataFrame result is in the Spark catalog, we cannot insert into the Hive catalog directly; on Hadoop 3.0 we need to use the HiveWarehouseConnector write operation. But the problem is that the result DataFrame is stored as delta files in the Hive table's HDFS location EXAMPLE_TABLE, whereas in HDP 2.6 we could insert a DF into a Hive table with spark.sql, which created part files in the table's HDFS location.

@Venka how did you solve this problem? I am facing the same issue.

You need to use the HiveWarehouseConnector to save your Spark DF: result.write.format("com.hortonworks.spark.sql.hive.llap.HiveWarehouseConnector").option("database", "EXAMPLE_DB").option("table", "EXAMPLE_TABLE").mode("append").save()
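Pulling the comment thread together, a consolidated sketch of the working approach, assuming the HWC assembly jar is passed to spark-shell/spark-submit (e.g. via --jars) and the cluster's HiveServer2/LLAP settings for HWC are configured:

import com.hortonworks.hwc.HiveWarehouseSession

val hive = HiveWarehouseSession.session(spark).build()
hive.setDatabase("EXAMPLE_DB")

// Write the Spark DataFrame into the Hive-managed ACID table through HWC;
// no temp view is needed, and the rows land in the Hive catalog.
result.write
  .format(HiveWarehouseSession.HIVE_WAREHOUSE_CONNECTOR)
  .option("database", "EXAMPLE_DB")
  .option("table", "EXAMPLE_TABLE")
  .mode("append")
  .save()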