
Postgres table update takes 40 minutes, eventually fails

Tags: postgresql, database-performance

I'm trying to update the value of a column for every row in a table that contains roughly 1 million records.

The first few times I ran the query it hung, and I cancelled it after 20 minutes. I then ran it again with EXPLAIN ANALYZE, and after 40 minutes it output:

=# explain analyze update documents set state = 'archived';
NOTICE:  word is too long to be indexed
DETAIL:  Words longer than 2047 characters are ignored.
ERROR:  deadlock detected
DETAIL:  Process 17080 waits for ShareLock on transaction 14275765; blocked by process 1530.
Process 1530 waits for ShareLock on transaction 14273749; blocked by process 17080.
HINT:  See server log for query details.
Time: 2324900.382 ms
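
The HINT points at the server log, but the two pids can also be inspected live from a second session while the statements are blocked. A diagnostic sketch (not from the original post; assumes PostgreSQL 9.3, where pg_stat_activity exposes a boolean waiting column):

-- What are the two deadlocking backends executing?
SELECT pid, state, waiting, query
FROM pg_stat_activity
WHERE pid IN (1530, 17080);

-- Which locks each backend holds (granted) or is waiting for (not granted):
SELECT pid, locktype, transactionid, mode, granted
FROM pg_locks
WHERE pid IN (1530, 17080)
ORDER BY pid, granted DESC;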
Here is the EXPLAIN output:

=# explain update documents set workflow_state = 'archived';
                                 QUERY PLAN
----------------------------------------------------------------------------
 Update on documents  (cost=0.00..220673.50 rows=900750 width=1586)
   ->  Seq Scan on documents  (cost=0.00..220673.50 rows=900750 width=1586)
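
One side effect of the earlier attempts is worth checking: under MVCC, every UPDATE writes a new version of all ~900k rows, so each cancelled run leaves roughly that many dead tuples behind, which makes retries slower still. A quick check (not in the original post) before retrying:

SELECT n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'documents';

SELECT pg_size_pretty(pg_total_relation_size('documents'));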
Any idea what's going on?

Details:

PG version 9.3.7

Indexes:
    "documents_pkey" PRIMARY KEY, btree (id)
    "document_search_ix" gin (contents_search)
    "document_user_id_recvd_ix" btree (user_id, bill_date DESC)
Foreign-key constraints:
    "documents_biller_id_fkey" FOREIGN KEY (biller_id) REFERENCES billers(id) ON DELETE SET DEFAULT
    "documents_billercred_id_fkey" FOREIGN KEY (billercred_id) REFERENCES billercreds(id) ON DELETE SET NULL
    "documents_folder_id_fkey" FOREIGN KEY (folder_id) REFERENCES folders(id) ON DELETE CASCADE
    "documents_user_id_fkey" FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE
    "documents_vendor_id_fkey" FOREIGN KEY (vendor_id) REFERENCES vendors(id) ON DELETE SET NULL
Referenced by:
    TABLE "document_billcom_actions" CONSTRAINT "document_billcom_actions_document_id_fkey" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
    TABLE "document_box_actions" CONSTRAINT "document_box_actions_document_id_fkey" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
    TABLE "document_email_forwarding_actions" CONSTRAINT "document_email_forwarding_actions_document_id_fkey" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
    TABLE "document_qbo_actions" CONSTRAINT "document_qbo_actions_document_id_fkey" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
    TABLE "document_xero_actions" CONSTRAINT "document_xero_actions_document_id_fkey" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
    TABLE "document_xerofiles_actions" CONSTRAINT "document_xerofiles_actions_document_id_fkey" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
    TABLE "documenttagmap" CONSTRAINT "documenttagmap_document_id_fkey" FOREIGN KEY (document_id) REFERENCES documents(id) ON DELETE CASCADE
    TABLE "synced_docs" CONSTRAINT "synced_docs_doc_id_fkey" FOREIGN KEY (doc_id) REFERENCES documents(id) ON DELETE CASCADE
Triggers:
    document_search_update BEFORE INSERT OR UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE tsvector_update_trigger('contents_search', 'pg_catalog.english', 'contents', 'filename', 'account_name', 'account_number')
    document_updated_at_t BEFORE UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE update_updated_at_column()
    documents_count BEFORE INSERT OR DELETE ON documents FOR EACH ROW EXECUTE PROCEDURE count_trig()
    folder_document_count_trig BEFORE INSERT OR DELETE OR UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE update_folder_count()
    tags_in_trash_document_count_trig BEFORE DELETE OR UPDATE ON documents FOR EACH ROW EXECUTE PROCEDURE update_tag_trash_count()
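
The NOTICE about words longer than 2047 characters shows that document_search_update fires for every updated row, recomputing contents_search (and touching the GIN index) even though only one column changes; folder_document_count_trig likewise runs per row and presumably writes to shared counter rows, a classic deadlock source between concurrent transactions. One hedged way to measure how much of the runtime the search trigger accounts for, in a test environment only (ALTER TABLE takes an ACCESS EXCLUSIVE lock):

BEGIN;
ALTER TABLE documents DISABLE TRIGGER document_search_update;
EXPLAIN ANALYZE UPDATE documents SET state = 'archived';
ROLLBACK;  -- discards both the update and the trigger change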

Comments:

It clearly indicates a deadlock, probably some circular update dependency. 1. PostgreSQL version, table details, foreign keys, any triggers etc., please, and 2. which of 1530 and 17080 is this update, and what is the other transaction doing? 3. EXPLAIN is usually useful, but not in this case: you're updating every row, so a sequential scan is pretty much all it can do.

@RichardHuxton Updated with the details, thanks. Re #2: 17080 is running the update; I don't know what 1530 is doing.

I can't say this is an answer, but the best explanation is probably given here: - I suspect an interaction between one of your triggers and the other transaction (whatever it is).

Thanks, I'll look into that. If the deadlock weren't the issue, roughly how long (what order of magnitude) should I expect a query like this to take on a table with 1 million rows? The machine it runs on has 12 cores and 16 GB of RAM, and load average stays close to 1.
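
On the order-of-magnitude question: a single-statement update of ~1M rows with several per-row triggers can plausibly run for tens of minutes, and it holds every row lock until commit, maximizing the window for a deadlock. A common mitigation, sketched here with the table and column names from the post (the batch size of 10000 is an arbitrary choice), is to update in small chunks so each transaction commits quickly:

-- Run repeatedly (e.g. from a shell loop) until it reports UPDATE 0.
-- Under psql's autocommit, each statement is its own short transaction.
UPDATE documents
SET    state = 'archived'
WHERE  id IN (
         SELECT id
         FROM   documents
         WHERE  state IS DISTINCT FROM 'archived'
         LIMIT  10000
       );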