
SQL: How do I update a local table built from a DataFrame by joining it to another local table?


I have two local tables. I want to update the first one based on values in the second by joining them in SQL like this, and store the result in another local table:

val a=spark.sql(""" UPDATE PC
SET PC.ComponentCode = 'UN'
,PC.LegacyCategoryCode = 'UN'
FROM tbllabortemp PC
JOIN (
            SELECT CL.ContractScheduleId
                ,CL.WarehouseId
                ,CL.ComponentCode
            FROM tblContractLineitemUnicorn CL
            INNER JOIN tbllabortemp CP ON CP.ContractSchedule_ID = CL.ContractScheduleId
                AND CP.warehouse_id = CL.WarehouseId
                AND CP.Component_Code = CL.ComponentCode
            GROUP BY CL.ContractScheduleId
                ,CL.WarehouseId
                ,CL.ComponentCode
            HAVING SUM(CL.LineItemTotalPurchasedUnits) < 1
            ) LI ON PC.ContractScheduleID = LI.ContractScheduleId
            AND PC.warehouse_id = LI.WarehouseId
            AND PC.Component_Code = LI.ComponentCode """)

a.createOrReplaceTempView("MergeTable")

But it gives me a `mismatched input 'FROM' expecting` error. Please help with this. Thanks.

Here is the syntax of the Delta Lake UPDATE statement:

UPDATE [db_name.]table_name [AS alias] SET col1 = value1 [, col2 = value2 ...] [WHERE predicate]
And here are the limitations on the UPDATE statement in Databricks:

Note

The following types of subqueries are not supported:

Nested subqueries, that is, a subquery inside another subquery
A NOT IN subquery inside an OR, for example, a = 3 OR b NOT IN (SELECT c from t)
In most cases, you can rewrite NOT IN subqueries using NOT EXISTS. We recommend using NOT EXISTS whenever possible, as UPDATE with NOT IN subqueries can be slow.
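As a minimal sketch of that recommendation, runnable with a local SparkSession (the tables `s` and `t` and their sample rows are invented for the example; only the `a = 3 OR b NOT IN (SELECT c from t)` predicate comes from the docs), the `NOT IN` form can be rewritten as a correlated `NOT EXISTS`:

```scala
import org.apache.spark.sql.SparkSession

object NotInRewrite {
  def rewrittenRows(spark: SparkSession): Set[(Int, Int)] = {
    import spark.implicits._
    // Tiny stand-in tables (invented data, matching the doc's a/b/c names)
    Seq((3, 1), (4, 2)).toDF("a", "b").createOrReplaceTempView("s")
    Seq(2, 5).toDF("c").createOrReplaceTempView("t")

    // Original predicate from the docs: a = 3 OR b NOT IN (SELECT c FROM t)
    // Rewritten with a correlated NOT EXISTS (equivalent when t.c has no NULLs):
    spark.sql("""
      SELECT a, b FROM s
      WHERE a = 3 OR NOT EXISTS (SELECT 1 FROM t WHERE t.c = s.b)
    """).as[(Int, Int)].collect().toSet
  }
}
```

The two forms only agree when `t.c` contains no NULLs; if it does, `NOT IN` evaluates to unknown for every unmatched row, which is another reason to prefer `NOT EXISTS`.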
It looks like you need to rewrite the query so that the UPDATE statement has no FROM clause, since nested subqueries and joins are not supported:


val inner = spark.sql("""SELECT CL.ContractScheduleId
                ,CL.WarehouseId
                ,CL.ComponentCode
            FROM tblContractLineitemUnicorn CL
            INNER JOIN tbllabortemp CP ON CP.ContractSchedule_ID = CL.ContractScheduleId
                AND CP.warehouse_id = CL.WarehouseId
                AND CP.Component_Code = CL.ComponentCode
            GROUP BY CL.ContractScheduleId
                ,CL.WarehouseId
                ,CL.ComponentCode
            HAVING SUM(CL.LineItemTotalPurchasedUnits) < 1
            """)

// Register the aggregated result so the UPDATE below can reference it by name
inner.createOrReplaceTempView("inner_results")

val a = spark.sql(""" UPDATE tbllabortemp AS PC
SET PC.ComponentCode = 'UN'
    ,PC.LegacyCategoryCode = 'UN'
WHERE EXISTS (SELECT 1 FROM inner_results LI
            WHERE PC.ContractScheduleID = LI.ContractScheduleId
            AND PC.warehouse_id = LI.WarehouseId
            AND PC.Component_Code = LI.ComponentCode) """)


a.createOrReplaceTempView("MergeTable")

Comments: I tried this approach without the subquery, but the problem is that my table is a local table derived from a DataFrame, not a Delta table. I don't think Databricks supports UPDATE on local tables, only on Delta tables; when I execute the statement it gives me this error: `UPDATE destination only supports Delta sources.` Do you know how to change a local table created by a statement like `a.createOrReplaceTempView("MergeTable")` into a Delta table? — I have updated the answer to handle DataFrames that are not Delta tables. Please accept the answer if it helped.
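The updated handling for non-Delta DataFrames is not shown above; one plausible sketch of it (the column names are guesses based on the question's join keys, and the sample rows are invented) is to express the UPDATE as a left join plus `when`/`otherwise`, then re-register the result as the temp view:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, lit, when}

object UpdateLocalView {
  def updatedRows(spark: SparkSession): Set[(Int, Int, String, String)] = {
    import spark.implicits._
    // Stand-ins for tbllabortemp and the aggregated subquery result (schemas assumed):
    val labor = Seq((1, 10, "X", "CAT1"), (2, 20, "Y", "CAT2"))
      .toDF("ContractSchedule_ID", "warehouse_id", "Component_Code", "LegacyCategoryCode")
    val toFlag = Seq((1, 10, "X"))
      .toDF("ContractScheduleId", "WarehouseId", "ComponentCode")

    // Left join on the three keys; where a match exists, overwrite the two columns.
    // This is the DataFrame equivalent of UPDATE ... WHERE EXISTS, and needs no Delta table.
    val matched = col("m.ContractScheduleId").isNotNull
    val updated = labor.alias("l")
      .join(toFlag.alias("m"),
        col("l.ContractSchedule_ID") === col("m.ContractScheduleId") &&
          col("l.warehouse_id") === col("m.WarehouseId") &&
          col("l.Component_Code") === col("m.ComponentCode"),
        "left")
      .select(
        col("l.ContractSchedule_ID"),
        col("l.warehouse_id"),
        when(matched, lit("UN")).otherwise(col("l.Component_Code")).as("Component_Code"),
        when(matched, lit("UN")).otherwise(col("l.LegacyCategoryCode")).as("LegacyCategoryCode"))

    // Re-register the result under the same view name, replacing the "updated" table:
    updated.createOrReplaceTempView("MergeTable")
    updated.as[(Int, Int, String, String)].collect().toSet
  }
}
```

Alternatively, if Delta Lake is available, persisting the DataFrame with `df.write.format("delta").saveAsTable("tbllabortemp")` produces a table that the UPDATE statement can target directly.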