Python cx_Oracle executemany with CLOBs
I'm trying to parse multiple CSVs and insert their data into a table using cx_Oracle. I have no problem inserting into the table with execute, but when I try the same process with executemany I get an error. The code that works with execute is:
with open(key, 'r') as file:
    for line in file:
        data = line.split(",")
        query = "INSERT INTO " + tables[key] + " VALUES ("
        for col in range(len(data)):
            query += ":" + str(col) + ","
        query = query[:-1] + ")"
        cursor.execute(query, data)
But when I replace it with:
with open(key, 'r') as file:
    rows = []
    for line in file:
        data = line.split(",")
        rows.append(data)
    if len(rows) > 0:
        query = "INSERT INTO " + tables[key] + " VALUES ("
        for col in range(len(data)):
            query += ":" + str(col) + ","
        query = query[:-1] + ")"
        cursor.prepare(query)
        cursor.executemany(None, rows)
I get "ValueError: string data too large" when trying to insert into a table that contains a CLOB column and the data exceeds 4000 bytes. executemany works great when the table has no CLOB columns. Is there any way to make cx_Oracle treat the appropriate columns as CLOBs when it executes the command?

Try setting the input size for the large columns to cx_Oracle.CLOB. This may not work if you have binary data, but it should work for any text in a CSV. The 2K threshold is probably lower than it needs to be. Note that executemany seems to be a lot slower when CLOB columns are involved, but it is still much better than repeated executes:
import cx_Oracle

def _executemany(cursor, sql, data):
    '''
    Run the parameterized sql with the given dataset using cursor.executemany.
    If any column contains string values longer than 2k, use CLOBs to avoid
    "string too large" errors.

    @param sql parameterized sql, with parameters named according to the field
           names in data
    @param data array of dicts, one per row to execute; each dict must have
           fields corresponding to the parameter names in sql
    '''
    input_sizes = {}
    for row in data:
        for k, v in row.items():
            # basestring is Python 2; use str on Python 3
            if isinstance(v, basestring) and len(v) > 2000:
                input_sizes[k] = cx_Oracle.CLOB
    cursor.setinputsizes(**input_sizes)
    cursor.executemany(sql, data)
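Since the helper above expects a list of dicts keyed by parameter names (rather than the positional lists built in the question), the CSV lines need to be converted into that shape first. A minimal sketch, assuming hypothetical column names `id` and `body` and using the csv module (which, unlike a naive line.split(","), handles quoted fields containing commas):

```python
import csv
import io

def rows_as_dicts(csv_text, columns):
    """Parse CSV text into the list-of-dicts shape that _executemany expects.

    Each row becomes a dict mapping the given column names (which must match
    the :name bind parameters in the SQL) to that row's field values.
    """
    reader = csv.reader(io.StringIO(csv_text))
    return [dict(zip(columns, row)) for row in reader]

# Hypothetical table and columns, for illustration only.
columns = ["id", "body"]
sql = "INSERT INTO my_table (id, body) VALUES (:id, :body)"
data = rows_as_dicts('1,"short text"\n2,"text, with a comma"', columns)
# data == [{'id': '1', 'body': 'short text'},
#          {'id': '2', 'body': 'text, with a comma'}]
```

The resulting `data` list can then be passed straight to `_executemany(cursor, sql, data)`; any oversized text values would trigger the CLOB input-size handling shown above.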