
How to store a Python dictionary in a MySQL database using Python


I am trying to store the dictionary below in a MySQL DB by converting it to a string and then attempting the insert, but I get the error that follows. How can I fix this, or is there another way to store a dictionary in a MySQL database?

dic = {'office': {'component_office': ['Word2010SP0', 'PowerPoint2010SP0']}}
d = str(dic)

# SQL query (built by string interpolation)
sql = "INSERT INTO ep_soft(ip_address, soft_data) VALUES ('%s', '%s')" % ("192.xxx.xx.xx", d)

soft_data is a VARCHAR(500)
Error: an exception is raised on execute: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'office': {'component_office': ['Word2010SP0', 'PowerPoint2010SP0'" at line 1)
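(Printing the interpolated statement makes the failure visible: the single quotes inside str(dic) close the SQL string literal early, which is why MySQL complains near 'office'. A quick sketch:)

>>> print(sql)
INSERT INTO ep_soft(ip_address, soft_data) VALUES ('192.xxx.xx.xx', '{'office': {'component_office': ['Word2010SP0', 'PowerPoint2010SP0']}}')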

Any suggestions or help?

Try this:

dic = { 'office': {'component_office': ['Word2010SP0', 'PowerPoint2010SP0'] } }

"INSERT INTO `db`.`table`(`ip_address`, `soft_data`) VALUES (`{}`, `{}`)".format("192.xxx.xx.xx", str(dic))

Change `db` and `table` to the values you need.
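(As a comment below points out, the backticks around the {} placeholders in the snippet above are themselves a bug; with single quotes the statement becomes:)

"INSERT INTO `db`.`table`(`ip_address`, `soft_data`) VALUES ('{}', '{}')".format("192.xxx.xx.xx", str(dic))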

Change your code as follows:

dic = {'office': {'component_office': ['Word2010SP0', 'PowerPoint2010SP0']}}
d = str(dic)

# SQL query using %r, so repr() wraps each value in quotes
sql = """INSERT INTO ep_soft(ip_address, soft_data) VALUES (%r, %r)""" % ("192.xxx.xx.xx", d)

First of all, do not construct raw SQL queries like that. Ever. That is what parameterized queries are for; building statements by string interpolation is an open invitation to SQL injection.

If you want to store arbitrary data, such as a Python dictionary, you should serialize it first. JSON is a good choice for the format.

Overall, your code should look like this (note there are no quotes around the %s placeholders: the driver quotes and escapes each value itself):

import MySQLdb
import json

db = MySQLdb.connect(...)
cursor = db.cursor()

dic = {'office': {'component_office': ['Word2010SP0', 'PowerPoint2010SP0']}}
sql = "INSERT INTO ep_soft(ip_address, soft_data) VALUES (%s, %s)"

cursor.execute(sql, ("192.xxx.xx.xx", json.dumps(dic)))
db.commit()  # commit() lives on the connection, not the cursor

Sanitizing your input is a good idea, and .format is useful when you need the same variable more than once in a query (you don't need that in this example).


If you don't use cur.escape(variable), you will need to enclose the {} placeholder in quotes.
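(A minimal sketch of that point, assuming the MySQLdb connection db, cursor, and dic from the answer above; note that escape_string may return bytes on recent MySQLdb builds:)

value = db.escape_string(json.dumps(dic)).decode()  # escape embedded quotes ourselves
# the value is escaped but not quoted, so the '{}' placeholder needs its own quotes
sql = "INSERT INTO ep_soft(ip_address, soft_data) VALUES ('{}', '{}')".format("192.xxx.xx.xx", value)
cursor.execute(sql)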

This answer uses some pseudocode for the connection object, and the flavor of MySQL is MemSQL, but other than that it should be straightforward to follow.

import json
#... do something
a_big_dict = getAHugeDict()  #build a huge python dict
conn = getMeAConnection(...)
serialized_dict = json.dumps(a_big_dict) #serialize dict to string
#Something like this to hold the serialization...
qry_create = """
CREATE TABLE TABLE_OF_BIG_DICTS (
ROWID BIGINT NOT NULL AUTO_INCREMENT,
SERIALIZED_DICT BLOB NOT NULL,
UPLOAD_DT TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP,
KEY (`ROWID`) USING CLUSTERED COLUMNSTORE
);
"""
conn.execute(qry_create)
#Something like this to hold em'
qry_insert = """
INSERT INTO TABLE_OF_BIG_DICTS (SERIALIZED_DICT)
SELECT '{SERIALIZED_DICT}' as SERIALIZED_DICT;
"""
#Send it to db
conn.execute(qry_insert.format(SERIALIZED_DICT=serialized_dict))
#grab the latest
qry_read = """
SELECT a.SERIALIZED_DICT
from TABLE_OF_BIG_DICTS a
JOIN 
(
    SELECT MAX(UPLOAD_DT) AS MAX_UPLOAD_DT
    FROM TABLE_OF_BIG_DICTS
)                           b
ON  a.UPLOAD_DT = b.MAX_UPLOAD_DT
LIMIT 1
"""

#something like this to read the latest dict...
df_dict = conn.sql_to_dataframe(qry_read)
dict_str = df_dict.iloc[df_dict.index.min()][0]

#dicts never die they just get rebuilt
dict_better = json.loads(dict_str)
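(Following the parameterized-query advice above, the .format insert could instead pass the value as a parameter, assuming the hypothetical conn.execute accepts one; this keeps quotes inside the serialized JSON from breaking the statement:)

conn.execute("INSERT INTO TABLE_OF_BIG_DICTS (SERIALIZED_DICT) VALUES (%s)", (serialized_dict,))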

By doing this I get the following exception: (1054, "Unknown column '192.xx.xx.xxx' in 'field list'"). Change d in the INSERT to dic.

I tried the script and it works fine. Could you print your code and the traceback? It is hard to tell where the error is. One more option: try the SQL statement on its own in MySQL Workbench.

Thanks for the input, MySQLdb.escape_string() solved the issue.

For anyone reading this after 2013: this error is caused by the backticks around the braces. They should be single quotes.

It still throws the same error: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '192.xxx.x.xxx\', \"{'component_office\': [\'Word2010SP0\']}\"' at line 1")

OK, change d = str(dic) to d = json.dumps(dic).

How do you deserialize the data on the other end, at the server? @VignanBandi: dic = json.loads(str_from_db)
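(A minimal round-trip sketch of that last exchange, reusing the cursor and table from the accepted answer; the column comes back as a string, which json.loads turns back into a dict:)

cursor.execute("SELECT soft_data FROM ep_soft WHERE ip_address = %s", ("192.xxx.xx.xx",))
row = cursor.fetchone()
dic = json.loads(row[0])  # str_from_db -> Python dict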