
Python: how to fetch data from a database and write it to CSV faster?

Tags: python, postgresql

I need to fetch data from the database and write it into the corresponding column of a CSV file. The code below runs one query per row and is very slow.

import csv

import asyncpg


async def fetch_and_write():
    conn = await asyncpg.connect('...')
    with open('/Users/mac/Desktop/input.csv', 'r') as csvinput, \
         open('/Users/mac/Desktop/output.csv', 'w') as csvoutput:
        reader = csv.reader(csvinput)
        writer = csv.writer(csvoutput, lineterminator='\n')

        rows = []
        header = next(reader)
        header.append('new_column_name')
        rows.append(header)

        # One round trip to the database per CSV row -- this is what
        # makes it slow.
        for row in reader:
            query = "SELECT .. FROM .. WHERE id = $1;"
            try:
                result = await conn.fetch(query, row[14])
            except asyncpg.PostgresError:
                print("Oops! That was no valid number.")
                continue

            row.append(result[0][0])
            rows.append(row)

        writer.writerows(rows)
How can I read the ids from the CSV in chunks and use an IN clause to improve performance?

You can use Postgres's COPY command for this task, e.g. your query should be:

Copy (Select * From foo) To '/tmp/test.csv' With CSV DELIMITER ','
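If you are connecting with asyncpg anyway, the same COPY can be issued from Python through Connection.copy_from_query. A minimal sketch, reusing the foo table and /tmp/test.csv path from the answer above (the export_csv wrapper and the '...' DSN are placeholders):

import asyncio

import asyncpg


async def export_csv():
    conn = await asyncpg.connect('...')
    try:
        # COPY streams the whole result set to the file in a single
        # round trip instead of fetching row by row.
        await conn.copy_from_query(
            'SELECT * FROM foo',
            output='/tmp/test.csv',
            format='csv',
            delimiter=',',
            header=True,
        )
    finally:
        await conn.close()


asyncio.run(export_csv())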

As I suggested in the comments, you can fetch n records in a single query. Below is a modified version of the code you provided.

Untested.


Comments:

- You can read the ids from the csv in chunks and use them in SQL's IN clause.
- Yes, can you provide an example?
- You can also do this iteratively; maybe you should use while instead of if?
- Yes. But now you get 500 results per query, so you do 500 times fewer iterations.
import csv

import asyncpg


async def fetch_and_write():
    n = 500  # number of ids to fetch per query
    conn = await asyncpg.connect('...')
    with open('/Users/mac/Desktop/input.csv', 'r') as csvinput, \
         open('/Users/mac/Desktop/output.csv', 'w') as csvoutput:
        reader = csv.reader(csvinput)
        writer = csv.writer(csvoutput, lineterminator='\n')

        header = next(reader)
        header.append('new_column_name')
        writer.writerow(header)

        async def flush(batch):
            # One query per batch. Select the id too, so every fetched
            # value can be matched back to its CSV row; id = ANY($1) is
            # asyncpg's parameterized form of IN (...).
            query = "SELECT id, .. FROM .. WHERE id = ANY($1);"
            try:
                result = await conn.fetch(query, [row[14] for row in batch])
            except asyncpg.PostgresError:
                print("Oops! That was no valid number.")
                return
            values = {r[0]: r[1] for r in result}
            for row in batch:
                row.append(values.get(row[14]))
            writer.writerows(batch)

        batch = []
        for row in reader:
            batch.append(row)
            if len(batch) >= n:
                await flush(batch)
                batch = []

        if batch:  # flush the final partial batch
            await flush(batch)
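Note that the IN (...) list from the sketch above is expressed as id = ANY($1), which lets asyncpg pass the whole id list as one parameterized array instead of interpolating it into the SQL string. The coroutine still needs an event loop to run, e.g.:

import asyncio

asyncio.run(fetch_and_write())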