
SQLAlchemy bulk insert takes too much time with a PostgreSQL connection string

Tags: postgresql, sqlalchemy

When using the bulk insert code given in the SQLAlchemy performance examples, SQLite works fine and takes roughly the time described in the documentation. Running the same code against a PostgreSQL connection string, however, multiplies the total time many times over.

Is there a way to make PostgreSQL faster? What am I doing wrong here?

In particular bulk_insert_mappings and bulk_save_objects, which are my only options for inserting 370,000 rows.

PostgreSQL connection string:

connection_string = 'postgresql://' + conf.DB_USER + ':' + conf.DB_PASSWORD + '@' + \
                    conf.DB_HOST + ':' + conf.DB_PORT + '/' + conf.DB_NAME
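As a side note, if the credentials ever contain characters such as @ or /, plain string concatenation produces an invalid URL. A small sketch, assuming SQLAlchemy 1.4+ and the same conf module as above, that builds the URL safely:

from sqlalchemy.engine import URL

# assumes SQLAlchemy 1.4+; conf is the same configuration module as above
connection_url = URL.create(
    "postgresql",
    username=conf.DB_USER,
    password=conf.DB_PASSWORD,  # special characters are escaped automatically
    host=conf.DB_HOST,
    port=int(conf.DB_PORT),
    database=conf.DB_NAME,
)
# create_engine() accepts the URL object directly:
# engine = create_engine(connection_url)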
Code used to check performance:

import time
import sqlite3

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String,  create_engine
from sqlalchemy.orm import scoped_session, sessionmaker


Base = declarative_base()
DBSession = scoped_session(sessionmaker())
engine = None


class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    name = Column(String(255))


def init_sqlalchemy(dbname='sqlite:///sqlalchemy.db'):
    global engine
    # NOTE: the dbname argument is ignored; the PostgreSQL connection
    # string is hardcoded here for this test run
    connection_string = 'postgresql://' + 'scott' + ':' + 'tiger' + '@' + \
                        'localhost' + ':' + '5432' + '/' + 'test_db'
    engine = create_engine(connection_string, echo=False)
    DBSession.remove()
    DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)


def test_sqlalchemy_orm(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in xrange(n):
        customer = Customer()
        customer.name = 'NAME ' + str(i)
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print(
        "SQLAlchemy ORM: Total time for " + str(n) +
        " records " + str(time.time() - t0) + " secs")


def test_sqlalchemy_orm_pk_given(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    for i in xrange(n):
        customer = Customer(id=i+1, name="NAME " + str(i))
        DBSession.add(customer)
        if i % 1000 == 0:
            DBSession.flush()
    DBSession.commit()
    print(
        "SQLAlchemy ORM pk given: Total time for " + str(n) +
        " records " + str(time.time() - t0) + " secs")


def test_sqlalchemy_orm_bulk_save_objects(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    n1 = n
    # insert in batches of at most 10,000 objects, decrementing the counter
    # after each batch so that all n rows are written
    while n1 > 0:
        DBSession.bulk_save_objects(
            [
                Customer(name="NAME " + str(i))
                for i in xrange(min(10000, n1))
            ]
        )
        n1 = n1 - 10000
    DBSession.commit()
    print(
        "SQLAlchemy ORM bulk_save_objects(): Total time for " + str(n) +
        " records " + str(time.time() - t0) + " secs")

def test_sqlalchemy_orm_bulk_insert(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    n1 = n
    # insert in batches of at most 10,000 mappings, decrementing the counter
    # after each batch so that all n rows are written
    while n1 > 0:
        DBSession.bulk_insert_mappings(
            Customer,
            [
                dict(name="NAME " + str(i))
                for i in xrange(min(10000, n1))
            ]
        )
        n1 = n1 - 10000
    DBSession.commit()
    print(
        "SQLAlchemy ORM bulk_insert_mappings(): Total time for " + str(n) +
        " records " + str(time.time() - t0) + " secs")

def test_sqlalchemy_core(n=100000):
    init_sqlalchemy()
    t0 = time.time()
    engine.execute(
        Customer.__table__.insert(),
        [{"name": 'NAME ' + str(i)} for i in xrange(n)]
    )
    print(
        "SQLAlchemy Core: Total time for " + str(n) +
        " records " + str(time.time() - t0) + " secs")


def init_sqlite3(dbname):
    conn = sqlite3.connect(dbname)
    c = conn.cursor()
    c.execute("DROP TABLE IF EXISTS customer")
    c.execute(
        "CREATE TABLE customer (id INTEGER NOT NULL, "
        "name VARCHAR(255), PRIMARY KEY(id))")
    conn.commit()
    return conn


def test_sqlite3(n=100000, dbname='sqlite3.db'):
    conn = init_sqlite3(dbname)
    c = conn.cursor()
    t0 = time.time()
    for i in xrange(n):
        row = ('NAME ' + str(i),)
        c.execute("INSERT INTO customer (name) VALUES (?)", row)
    conn.commit()
    print(
        "sqlite3: Total time for " + str(n) +
        " records " + str(time.time() - t0) + " sec")

if __name__ == '__main__':
    test_sqlalchemy_orm(100000)
    test_sqlalchemy_orm_pk_given(100000)
    test_sqlalchemy_orm_bulk_save_objects(100000)
    test_sqlalchemy_orm_bulk_insert(100000)
    test_sqlalchemy_core(100000)
    test_sqlite3(100000)
Output (PostgreSQL connection string):

SQLAlchemy ORM: Total time for 100000 records 40.6781959534 secs
SQLAlchemy ORM pk given: Total time for 100000 records 21.0855250359 secs
SQLAlchemy ORM bulk_save_objects(): Total time for 100000 records 14.068707943 secs
SQLAlchemy ORM bulk_insert_mappings(): Total time for 100000 records 11.6551070213 secs
SQLAlchemy Core: Total time for 100000 records 12.5298728943 secs
sqlite3: Total time for 100000 records 0.477468013763 sec
Using the original connection string (i.e. SQLite):

Output:

SQLAlchemy ORM: Total time for 100000 records 16.9145789146 secs
SQLAlchemy ORM pk given: Total time for 100000 records 10.2713520527 secs
SQLAlchemy ORM bulk_save_objects(): Total time for 100000 records 3.69206118584 secs
SQLAlchemy ORM bulk_insert_mappings(): Total time for 100000 records 1.00701212883 secs
SQLAlchemy Core: Total time for 100000 records 0.467703104019 secs
sqlite3: Total time for 100000 records 0.566409826279 sec

The fastest way is to use COPY ... FROM (see the PostgreSQL COPY documentation), but if you do not have write access, e.g. when deploying to Heroku, you can leverage psycopg2's fast execution helpers (a sketch of the COPY approach follows the timings below).

For example, for the bulk inserts or the Core insert, do the following:

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@host/dbname",
    executemany_mode='values',
    executemany_values_page_size=10000)
which brought the timings down to:

SQLAlchemy ORM bulk_save_objects(): Total time for 100000 records 2.796818971633911 secs
SQLAlchemy ORM bulk_insert_mappings(): Total time for 100000 records 1.3805248737335205 secs
SQLAlchemy Core: Total time for 100000 records 1.1153180599212646 secs
instead of:

SQLAlchemy ORM bulk_save_objects(): Total time for 100000 records 9.02771282196045 secs
SQLAlchemy ORM bulk_insert_mappings(): Total time for 100000 records 7.643821716308594 secs
SQLAlchemy Core: Total time for 100000 records 7.460561275482178 secs
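
For reference, here is a minimal sketch (not part of the original answer) of the COPY ... FROM STDIN approach mentioned above, assuming psycopg2 as the DB-API driver and reusing init_sqlalchemy() and engine from the benchmark script; the test_psycopg2_copy name is made up, and Python 3 syntax is used:

import io

def test_psycopg2_copy(n=100000):
    # hypothetical helper, not part of the original benchmark:
    # stream rows through COPY ... FROM STDIN via psycopg2's copy_expert()
    init_sqlalchemy()
    t0 = time.time()
    buf = io.StringIO()
    for i in range(n):
        buf.write("NAME " + str(i) + "\n")  # one value per line; default COPY text format
    buf.seek(0)
    raw_conn = engine.raw_connection()  # unwrap the DB-API connection from the engine
    try:
        cur = raw_conn.cursor()
        cur.copy_expert("COPY customer (name) FROM STDIN", buf)
        raw_conn.commit()
    finally:
        raw_conn.close()
    print(
        "psycopg2 COPY FROM STDIN: Total time for " + str(n) +
        " records " + str(time.time() - t0) + " secs")

With COPY the server parses the rows itself, so there is no per-statement round trip at all.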

Comments:

Please don't include code (or output) as images, because they cannot be copied and are not searchable; code is text. You should also include a minimal but verifiable example of what you have tried. Just saying "I followed ..." and mentioning some functions usually does not solve the problem. Now, bulk_insert_mappings and bulk_save_objects are no silver bullets either, and actual performance can depend on many factors. For example, the bulk operations mentioned above collect simple inserts into a single executemany, but since you are testing against PostgreSQL you are probably using psycopg2 as the DB-API driver, whose executemany is currently no faster than looping over execute(). On the other hand, you can use other psycopg2 features to speed up large bulk inserts:

@IljaEverilä: the images have been replaced with code. The psycopg2 speed-up solution worked, thanks :) Cheers. It now takes about 1 second.
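
The psycopg2 features referred to in that comment are its fast execution helpers. A minimal sketch using psycopg2.extras.execute_values directly against the benchmark's customer table (the helper function name is made up, and Python 3 syntax is used):

from psycopg2.extras import execute_values

def test_psycopg2_execute_values(n=100000):
    # hypothetical helper, not part of the original benchmark:
    # execute_values() rewrites the INSERT into multi-row VALUES lists
    # instead of sending one statement per row
    init_sqlalchemy()
    t0 = time.time()
    raw_conn = engine.raw_connection()
    try:
        cur = raw_conn.cursor()
        execute_values(
            cur,
            "INSERT INTO customer (name) VALUES %s",
            [("NAME " + str(i),) for i in range(n)],
            page_size=10000,
        )
        raw_conn.commit()
    finally:
        raw_conn.close()
    print(
        "psycopg2 execute_values(): Total time for " + str(n) +
        " records " + str(time.time() - t0) + " secs")

The executemany_mode='values' engine option shown in the answer is SQLAlchemy's way of enabling this same batching without bypassing the engine.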