PySpark: add new rows to an empty dataframe located in a data lake


I created an empty dataframe table pointed at a Delta location using the code below:

deltaResultPath = "/ml/streaming-analysis/delta/Result"

# Create Delta Lake table
sqlq = "CREATE TABLE stockDailyPrices_delta USING DELTA LOCATION '" + deltaResultPath + "'"
spark.sql(sqlq)
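Note that `CREATE TABLE ... USING DELTA LOCATION` without a column list only works if Delta data already exists at that path; for a genuinely empty table the schema has to be declared. A minimal sketch of building such a DDL string, assuming the column names and types (`Time`, `cpu_temp`, `dsp_temp`) implied by the insert shown later in the question:

```python
deltaResultPath = "/ml/streaming-analysis/delta/Result"

# Assumed schema, taken from the columns used in the insert attempt below.
create_sql = (
    "CREATE TABLE IF NOT EXISTS Result_delta "
    "(Time TIMESTAMP, cpu_temp DOUBLE, dsp_temp DOUBLE) "
    "USING DELTA LOCATION '{}'".format(deltaResultPath)
)
# spark.sql(create_sql)  # would create the empty Delta table at that path
```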

I am new to Spark and do not fully understand the Spark SQL code. What I want to do is not insert values from another dataframe, but add values generated in a Python script. Something like modifying this code:

insert_sql = "insert into stockDailyPrices_delta select f.* from stockDailyPrices f where f.price_date >= '"  + price_date_min.strftime('%Y-%m-%d') + "' and f.price_date <= '" + price_date_max.strftime('%Y-%m-%d') + "'"
spark.sql(insert_sql)
However, I see the following error:

org.apache.spark.sql.catalyst.parser.ParseException:
mismatched input 'Time' expecting {'(', 'SELECT', 'FROM', 'DESC', 'VALUES', 'TABLE', 'INSERT', 'DESCRIBE', 'MAP', 'MERGE', 'UPDATE', 'REDUCE'}(line 1, pos 16)

== SQL ==
insert into df (Time, cpu_temp, dsp_temp) values (%s, %s, %s)
----------------^^^

How can I fix this code?

I was able to use code similar to this:

spark.sql("insert into Result_delta select {} as Time, {} as cpu_temp, {} as dsp_temp".format(Time, cpu_temp, dsp_temp))
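One caveat with this string-formatting approach: if `Time` is a string or timestamp literal, it has to be wrapped in single quotes inside the generated SQL, otherwise the parser fails again. A minimal sketch, using assumed/hypothetical sensor values:

```python
# Hypothetical sensor readings produced by the Python script (assumed values).
Time, cpu_temp, dsp_temp = "2019-11-08 14:30:00", 51.2, 48.7

# Quote the string literal; numeric literals can be interpolated directly.
insert_sql = (
    "insert into Result_delta "
    "select '{}' as Time, {} as cpu_temp, {} as dsp_temp"
    .format(Time, cpu_temp, dsp_temp)
)
# spark.sql(insert_sql)  # would append one row to the Delta-backed table
```

An alternative that avoids SQL string building altogether is to wrap the values in a one-row dataframe with `spark.createDataFrame(...)` and append it with `df.write.format("delta").mode("append").save(deltaResultPath)`.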