MySQL error on INSERT INTO table with ON DUPLICATE KEY when looping over an array with a for loop

Tags: mysql, python-2.7, pyspark, pyspark-sql, aws-glue

I am updating a MySQL database using the pyspark framework, running on the AWS Glue service.

I have a dataframe that looks like this:

df2= sqlContext.createDataFrame([("xxx1","81A01","TERR NAME 55","NY"),("xxx2","81A01","TERR NAME 55","NY"),("x103","81A01","TERR NAME 01","NJ")], ["zip_code","territory_code","territory_name","state"])

# Print out information about this data
df2.show()
+--------+--------------+--------------+-----+
|zip_code|territory_code|territory_name|state|
+--------+--------------+--------------+-----+
|    xxx1|         81A01|  TERR NAME 55|   NY|
|    xxx2|         81A01|  TERR NAME 55|   NY|
|    x103|         81A01|  TERR NAME 01|   NJ|
+--------+--------------+--------------+-----+
I have a primary key on zip_code, and I need to make sure there are no duplicate-key or primary-key exceptions, hence I am using INSERT INTO .... ON DUPLICATE KEY.
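
The target table itself is not shown in the question. For reference, here is a minimal sketch of what ZIP_TERR might look like, assuming zip_code is the primary key; the column types, connection details, and the use of pymysql are all assumptions, not part of the original post:

import pymysql

# Hypothetical DDL -- the real table was not shown in the question, so the
# column types are guesses; only the primary key on zip_code is implied.
db = pymysql.connect(host="xxxx.rds.amazonaws.com", user="username",
                     password="password", db="databasename")
cursor = db.cursor()
cursor.execute("""
    CREATE TABLE IF NOT EXISTS ZIP_TERR (
        zip_code       VARCHAR(10)  NOT NULL,
        territory_code VARCHAR(10),
        territory_name VARCHAR(100),
        state          VARCHAR(5),
        PRIMARY KEY (zip_code)
    )
""")
db.commit()
db.close()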

Since there is more than one row to insert/update, I am using an array in Python to loop over the records and perform the inserts into the database. The code is as follows:

sarry = df2.collect()
for r in sarry:
     db = MySQLdb.connect("xxxx.rds.amazonaws.com", "username", "password", 
      "databasename")
     cursor = db.cursor()
     insertQry="INSERT INTO ZIP_TERR(zip_code, territory_code, territory_name, state) VALUES(r.zip_code, r.territory_code, r.territory_name, r.state) ON DUPLICATE KEY UPDATE territory_name = VALUES(territory_name), state = VALUES(state);"
     n=cursor.execute(insertQry)
     db.commit()
     db.close()
print('zip_code_new: ', r.zip_code, r.territory_code, r.territory_name, r.state)

zip_code_new:  xxx1 81A01 TERR NAME 55 NY
When running the above insert query, I get the following error message and I cannot get any clue about the error. Please help.

Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2291407229037300959.py", line 367, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-2291407229037300959.py", line 360, in <module>
    exec(code, _zcUserQueryNameSpace)
  File "<stdin>", line 8, in <module>
  File "/usr/local/lib/python2.7/site-packages/pymysql/cursors.py", line 170, in execute
    result = self._query(query)
  File "/usr/local/lib/python2.7/site-packages/pymysql/cursors.py", line 328, in _query
    conn.query(q)
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 893, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 1103, in _read_query_result
    result.read()
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 1396, in read
    first_packet = self.connection._read_packet()
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 1059, in _read_packet
    packet.check_error()
  File "/usr/local/lib/python2.7/site-packages/pymysql/connections.py", line 384, in check_error
    err.raise_mysql_exception(self._data)
  File "/usr/local/lib/python2.7/site-packages/pymysql/err.py", line 109, in raise_mysql_exception
    raise errorclass(errno, errval)
InternalError: (1054, u"Unknown column 'r.zip_code' in 'field list'")

Thanks. I am using AWS Glue / PySpark, so I need to use native Python libraries.

The following insert query works with the for loop:

insertQry="INSERT INTO ZIP_TERR(zip_code, territory_code, territory_name, state) VALUES(%s, %s, %s, %s) ON DUPLICATE KEY UPDATE territory_name = %s, state = %s;

n=cursor.execute(insertQry, (r.zip_code, r.territory_code, r.territory_name, r.state, r.territory_name, r.state))
print (" CURSOR status :", n)
Resulting output:

CURSOR status : 2
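
As a follow-up to the working answer: the loop above still opens a connection and runs one statement per row. A leaner variant, sketched below under the assumption that pymysql is available and df2 is the dataframe from the question (host and credentials are placeholders), connects once and batches the rows with executemany:

import pymysql

# Collect the dataframe rows once; one tuple per row, matching the four %s placeholders.
rows = [(r.zip_code, r.territory_code, r.territory_name, r.state) for r in df2.collect()]

db = pymysql.connect(host="xxxx.rds.amazonaws.com", user="username",
                     password="password", db="databasename")
try:
    cursor = db.cursor()
    insertQry = ("INSERT INTO ZIP_TERR(zip_code, territory_code, territory_name, state) "
                 "VALUES(%s, %s, %s, %s) "
                 "ON DUPLICATE KEY UPDATE territory_name = VALUES(territory_name), "
                 "state = VALUES(state)")
    # executemany sends the whole batch over a single connection instead of
    # reconnecting and committing once per row.
    n = cursor.executemany(insertQry, rows)
    db.commit()
    print("rows affected: ", n)
finally:
    db.close()

Note that MySQL reports 1 affected row for a fresh insert and 2 for a row changed by ON DUPLICATE KEY UPDATE, which is consistent with the "CURSOR status : 2" output above.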

Thanks. Hope this serves as a reference for others.

hi yuva, the for loop comes across as a way of updating each row to MySQL sequentially. If so, it could become a problem when handling large volumes of real-time data. How do you deal with that?

I have since changed the logic: I load the data from an S3 prefix into a staging table using the LOAD DATA feature that RDS Aurora provides, and then INSERT ... ON DUPLICATE from the staging table into the target table. Hope this helps.

Hi yuva, did that help answer your last question?

It certainly will help! :) Please see the SO post where I provided sample code: . Let me know if you need help. You may need to upload the pymysql connector, as a zip file, as part of your PySpark Glue code.
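
To illustrate the staging-table approach described in the comments, here is a rough sketch. It assumes Aurora MySQL with the LOAD DATA FROM S3 feature enabled and the required IAM role attached to the cluster; the bucket, prefix, and ZIP_TERR_STAGING table are made-up names:

import pymysql

db = pymysql.connect(host="xxxx.rds.amazonaws.com", user="username",
                     password="password", db="databasename")
try:
    cursor = db.cursor()

    # Bulk-load the files exported to S3 into the staging table
    # (Aurora MySQL feature; bucket and prefix are placeholders).
    cursor.execute("""
        LOAD DATA FROM S3 PREFIX 's3://my-bucket/zip_terr/'
        INTO TABLE ZIP_TERR_STAGING
        FIELDS TERMINATED BY ','
        LINES TERMINATED BY '\\n'
        (zip_code, territory_code, territory_name, state)
    """)

    # Upsert from the staging table into the target table in one statement.
    cursor.execute("""
        INSERT INTO ZIP_TERR (zip_code, territory_code, territory_name, state)
        SELECT zip_code, territory_code, territory_name, state
        FROM ZIP_TERR_STAGING
        ON DUPLICATE KEY UPDATE
            territory_name = VALUES(territory_name),
            state = VALUES(state)
    """)
    db.commit()
finally:
    db.close()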