
Python pg8000 copy function raises ProgrammingError

Tags: python, postgresql, csv

I have a function that copies data from a CSV file into a PostgreSQL database.

Here is the function:

def get(self):
    filename = export_user_usermetadata_to_gcs()
    command = """
    INSERT INTO import_temp_table_user_passport_misc
    (profileid, terms_accepted, lastname, firstname, picture_serving_url,
    is_active, is_passport_active, language, created, modified,
    passport_completion_level, email, about_me, uni_code, meta_data)
    VALUES(%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s), (%s)
    """
    location = '/prod-data-migration-csv-exports/{}'.format(filename)
    with gcs.open(location) as data_stream_source:
        reader = csv.reader(data_stream_source)
        slice = itertools.islice(reader, 5000)
        while slice:
            db.executemany(command, slice)
            slice = itertools.islice(reader, 5000)
The table import_temp_table_user_passport_misc has an extra column named autoid, which is an auto-incrementing integer.

The error thrown is:
ProgrammingError: (u'ERROR', u'ERROR', u'42601', u'INSERT has more target columns than expressions', u'79', u'analyze.c', u'884', u'transformInsertRow')
Do I need to specify the column, or a value for it, given that it auto-increments?

Here is the table definition:

CREATE TABLE public.import_temp_table_user_passport_misc
(
    autoid integer NOT NULL DEFAULT nextval('import_temp_table_user_passport_misc_autoid_seq'::regclass),
    profileid text COLLATE pg_catalog."default",
    terms_accepted text COLLATE pg_catalog."default",
    lastname text COLLATE pg_catalog."default",
    firstname text COLLATE pg_catalog."default",
    picture_serving_url text COLLATE pg_catalog."default",
    is_active text COLLATE pg_catalog."default",
    is_passport_active text COLLATE pg_catalog."default",
    language text COLLATE pg_catalog."default",
    created text COLLATE pg_catalog."default",
    modified text COLLATE pg_catalog."default",
    passport_completion_level text COLLATE pg_catalog."default",
    email text COLLATE pg_catalog."default",
    about_me text COLLATE pg_catalog."default",
    uni_code text COLLATE pg_catalog."default",
    meta_data text COLLATE pg_catalog."default",
    CONSTRAINT import_temp_table_user_passport_misc_pkey PRIMARY KEY (autoid)
)
Your statement's VALUES list is a series of single-column rows, but from the looks of it you meant a single fifteen-column row, like

VALUES (%s, %s, ..., %s)
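
As for the side question about autoid: no, you do not need to supply a value for it. Since the column is declared with DEFAULT nextval('import_temp_table_user_passport_misc_autoid_seq'::regclass), leaving it out of the INSERT column list, as your statement already does, lets PostgreSQL assign the next sequence value to each row.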


That's correct, and it solved my problem, but the query takes ages to run (over 10 minutes). Is there any way you see fit to optimize the CSV import?

If it's PostgreSQL, use COPY.

Yes, but the CSV lives at a URL (Google Cloud Storage).

As long as you can turn it into a file-like object and pass it as a parameter, it shouldn't make any difference. Disclaimer: never actually used pg8000. So something like:
db.execute("COPY import_temp_table_user_passport_misc (profileid, terms_accepted, lastname, firstname, picture_serving_url, is_active, is_passport_active, language, created, modified, passport_completion_level, email, about_me, uni_code, meta_data) FROM STDIN WITH CSV", stream=data_stream_source)
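
Putting that together, here is a minimal sketch of the whole handler rewritten around COPY, under the same assumptions as the question (the gcs client, the db cursor, and export_user_usermetadata_to_gcs() exist as before; pg8000's stream keyword for COPY is taken from the comment above, which its author flags as untested):

def get(self):
    filename = export_user_usermetadata_to_gcs()
    location = '/prod-data-migration-csv-exports/{}'.format(filename)
    copy_command = """
    COPY import_temp_table_user_passport_misc
    (profileid, terms_accepted, lastname, firstname, picture_serving_url,
    is_active, is_passport_active, language, created, modified,
    passport_completion_level, email, about_me, uni_code, meta_data)
    FROM STDIN WITH CSV
    """
    with gcs.open(location) as data_stream_source:
        # COPY streams the file to the server, which parses the CSV
        # itself; there are no per-row round trips, which is why it is
        # typically far faster than executemany for bulk loads.
        db.execute(copy_command, stream=data_stream_source)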
For reference, here is the corrected command with a single row of fifteen placeholders:

command = """
INSERT INTO import_temp_table_user_passport_misc
(profileid, terms_accepted, lastname, firstname, picture_serving_url,
is_active, is_passport_active, language, created, modified,
passport_completion_level, email, about_me, uni_code, meta_data)
VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s, %s)
"""