PostgreSQL query runs faster with index scan, but engine chooses hash join

Tags: postgresql, indexing, query-optimization, postgresql-performance

The query:

SELECT "replays_game".*
FROM "replays_game"
INNER JOIN
 "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027
With set enable_seqscan=off, it does the fast thing, which is:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..27349.80 rows=3395 width=72) (actual time=28.726..65.056 rows=3398 loops=1)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.019..2.412 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..5.41 rows=1 width=72) (actual time=0.017..0.017 rows=1 loops=3398)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 65.437 ms
But without the dreaded enable_seqscan hack, it chooses to do something slower:

QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=7330.18..18145.24 rows=3395 width=72) (actual time=92.380..535.422 rows=3398 loops=1)
   Hash Cond: (replays_playeringame.game_id = replays_game.id)
   ->  Index Scan using replays_playeringame_player_id on replays_playeringame  (cost=0.00..8934.43 rows=3395 width=4) (actual time=0.020..2.899 rows=3398 loops=1)
         Index Cond: (player_id = 50027)
   ->  Hash  (cost=3668.08..3668.08 rows=151208 width=72) (actual time=90.842..90.842 rows=151208 loops=1)
         Buckets: 1024  Batches: 32 (originally 16)  Memory Usage: 1025kB
         ->  Seq Scan on replays_game  (cost=0.00..3668.08 rows=151208 width=72) (actual time=0.020..29.061 rows=151208 loops=1)
 Total runtime: 535.821 ms
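
To reproduce the comparison safely, the planner toggle can be scoped to a single transaction. A minimal sketch (diagnostic use only; enable_seqscan should not stay off in production):

BEGIN;
SET LOCAL enable_seqscan = off;   -- reverts automatically at transaction end
EXPLAIN ANALYZE
SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame"
    ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 50027;
ROLLBACK;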
Here are the relevant indexes:

Index "public.replays_game_pkey"
 Column |  Type   | Definition
--------+---------+------------
 id     | integer | id
primary key, btree, for table "public.replays_game"

Index "public.replays_playeringame_player_id"
  Column   |  Type   | Definition
-----------+---------+------------
 player_id | integer | player_id
btree, for table "public.replays_playeringame"
So my question is: what am I doing wrong, such that Postgres mis-estimates the relative costs of the two ways of joining? I can see in the cost estimates that it thinks the hash join will be faster, yet its estimate of the cost of the index join was off by a factor of 500.

How can I give Postgres more of a clue? I did run a VACUUM ANALYZE immediately before running all of the above.
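
One further knob worth a try at this point (my suggestion, not something attempted in the question): raising the per-column statistics target, so that ANALYZE samples the skewed player_id distribution in more detail:

ALTER TABLE replays_playeringame
    ALTER COLUMN player_id SET STATISTICS 1000;   -- default target is 100
ANALYZE replays_playeringame;                     -- rebuild the statistics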

Interestingly, if I run this query for a player with a smaller number of games, Postgres chooses the index scan + nested loop. So something about the large number of games tickles this undesired behavior, where the relative estimated cost is out of line with the actual cost.

Finally, should I be using Postgres at all? I don't want to become an expert in database tuning; I'm looking for a database that performs reasonably well with a conscientious developer's level of attention, as opposed to a dedicated DBA's. I'm afraid that if I stick with Postgres I'll hit a steady stream of issues like this that force me to become a Postgres expert, and perhaps another DB would be more forgiving of a more casual approach.


A Postgres expert (RhodiumToad) reviewed my full database settings () and recommended set cpu_tuple_cost = 0.1. That sped things up dramatically.
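
A minimal sketch of applying that recommendation, per session first and then cluster-wide (ALTER SYSTEM assumes PostgreSQL 9.4+ and superuser rights):

SET cpu_tuple_cost = 0.1;                 -- try it in the current session
ALTER SYSTEM SET cpu_tuple_cost = 0.1;    -- persist it once it proves out
SELECT pg_reload_conf();                  -- pick up the change without a restart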

Alternatively, switching to MySQL also solved the problem quite nicely. I have default installations of both MySQL and Postgres on my OS X box, and MySQL is 2x faster, comparing queries that have been "warmed up" by repeatedly executing them. For "cold" queries, i.e. the first time a given query is executed, MySQL is 5 to 150 times faster. Cold-query performance is quite important for my particular application.


As far as I'm concerned, the big question remains open: does Postgres need more fiddling and configuration than MySQL to run well? For example, consider that none of the suggestions offered by the commenters here worked.

My guess is that you are using the default random_page_cost = 4, which is too high, making index scans too costly. I tried to reconstruct the two tables with this script:

CREATE TABLE replays_game (
    id integer NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE replays_playeringame (
    player_id integer NOT NULL,
    game_id integer NOT NULL,
    PRIMARY KEY (player_id, game_id),
    CONSTRAINT replays_playeringame_game_fkey
        FOREIGN KEY (game_id) REFERENCES replays_game (id)
);

CREATE INDEX ix_replays_playeringame_game_id
    ON replays_playeringame (game_id);

-- 150k games
INSERT INTO replays_game
SELECT generate_series(1, 150000);

-- ~150k players, ~2 games each
INSERT INTO replays_playeringame
select trunc(random() * 149999 + 1), generate_series(1, 150000);

INSERT INTO replays_playeringame
SELECT *
FROM
    (
        SELECT
            trunc(random() * 149999 + 1) as player_id,
            generate_series(1, 150000) as game_id
    ) AS t
WHERE
    NOT EXISTS (
        SELECT 1
        FROM replays_playeringame
        WHERE
            t.player_id = replays_playeringame.player_id
            AND t.game_id = replays_playeringame.game_id
    )
;

-- the heavy player with 3000 games
INSERT INTO replays_playeringame
select 999999, generate_series(1, 3000);
With the default random_page_cost of 4:

game=# set random_page_cost = 4;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                     QUERY PLAN                                                                      
-----------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1483.54..4802.54 rows=3000 width=4) (actual time=3.640..110.212 rows=3000 loops=1)
   Hash Cond: (replays_game.id = replays_playeringame.game_id)
   ->  Seq Scan on replays_game  (cost=0.00..2164.00 rows=150000 width=4) (actual time=0.012..34.261 rows=150000 loops=1)
   ->  Hash  (cost=1446.04..1446.04 rows=3000 width=4) (actual time=3.598..3.598 rows=3000 loops=1)
         Buckets: 1024  Batches: 1  Memory Usage: 106kB
         ->  Bitmap Heap Scan on replays_playeringame  (cost=67.54..1446.04 rows=3000 width=4) (actual time=0.586..2.041 rows=3000 loops=1)
               Recheck Cond: (player_id = 999999)
               ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..66.79 rows=3000 width=0) (actual time=0.560..0.560 rows=3000 loops=1)
                     Index Cond: (player_id = 999999)
 Total runtime: 110.621 ms
After lowering it to 2:

game=# set random_page_cost = 2;
SET
game=# explain analyse SELECT "replays_game".*
FROM "replays_game"
INNER JOIN "replays_playeringame" ON "replays_game"."id" = "replays_playeringame"."game_id"
WHERE "replays_playeringame"."player_id" = 999999;
                                                                  QUERY PLAN                                                                   
-----------------------------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=45.52..4444.86 rows=3000 width=4) (actual time=0.418..27.741 rows=3000 loops=1)
   ->  Bitmap Heap Scan on replays_playeringame  (cost=45.52..1424.02 rows=3000 width=4) (actual time=0.406..1.502 rows=3000 loops=1)
         Recheck Cond: (player_id = 999999)
         ->  Bitmap Index Scan on replays_playeringame_pkey  (cost=0.00..44.77 rows=3000 width=0) (actual time=0.388..0.388 rows=3000 loops=1)
               Index Cond: (player_id = 999999)
   ->  Index Scan using replays_game_pkey on replays_game  (cost=0.00..0.99 rows=1 width=4) (actual time=0.006..0.006 rows=1 loops=3000)
         Index Cond: (id = replays_playeringame.game_id)
 Total runtime: 28.542 ms
(8 rows)
If on SSDs, I would lower it further, to 1.1.
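
A sketch of making that change stick beyond the session (the database name game is taken from the prompts above):

ALTER SYSTEM SET random_page_cost = 1.1;         -- cluster-wide default
SELECT pg_reload_conf();
-- or scope it to this one database instead:
ALTER DATABASE game SET random_page_cost = 1.1;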


As for your last question: I really think you should stick with PostgreSQL. I have experience with PostgreSQL and MSSQL, and I needed to put in three times the effort into the latter for it to perform half as well as the former.

You might get a better execution plan using a multicolumn (player_id, game_id) index on the replays_playeringame table. This avoids having to use random page seeks to find the game ids for a player id.
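
A minimal sketch of that suggestion (the index name is my own):

-- With both columns in the index, the game_ids for a player can be read
-- straight out of the index instead of fetching heap pages at random:
CREATE INDEX replays_playeringame_player_game
    ON replays_playeringame (player_id, game_id);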

I ran sayap's testbed code (thanks!), with the following modifications:

  • the code is run four times, with random_page_cost set to 8, 4, 2, 1, in that order (the rpc=8 run only serves to prime the disk buffer cache)
  • the test is repeated with reduced (1/2, 1/4, 1/8) fractions of the hard hitters (respectively: 3K, 1K5, 750 and 375 hard hitters); the rest of the records is kept unchanged
  • these 4*4 tests are repeated with a lower setting (the 64K minimum) for work_mem; a sketch of one such run follows this list
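
A sketch of what one cell of the test matrix looks like (my reconstruction of the harness, not the original script):

SET random_page_cost = 2;        -- one of 8, 4, 2, 1
SET work_mem = '64kB';           -- 64kB is the minimum allowed value
EXPLAIN ANALYZE
SELECT g.*
FROM replays_game g
JOIN replays_playeringame pig ON pig.game_id = g.id
WHERE pig.player_id = 999999;    -- the heavy player from the testbed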
After this run, I did the same run, but scaled up tenfold: with 1M5 records (30K hard hitters).

Currently, I am running the same test scaled up a hundredfold, but the initialization is rather slow...

The entries in the result cells below are the total time in milliseconds, plus a string that denotes the chosen query plan. (Only a handful of plans occur.)

Preliminary conclusions:

  • the "working set" for the original query is too small: all of it fits in core, causing the cost of page fetches to be grossly overestimated. Setting RPC to 2 (or 1) "solves" this problem, but once the query is scaled up, the page costs become dominant, and RPC=4 becomes comparable or even better

  • setting work_mem to a lower value is another way to make the optimiser shift to index scans (instead of hash + bitmap scans). The differences I found are smaller than what sayap reported; maybe my cache sizing is more effective, or he forgot to prime the cache

  • the optimiser is known to have problems with "skewed" distributions (and "skewed" or "peaked" multidimensional distributions, too). The test runs with 1/4 and 1/8 of the initial 3K/150K hard hitters show that this effect disappears once the "peak" is flattened out

  • something happens at the 2% boundary: the 3000/150000 runs generate different (worse) plans than those with a smaller fraction of hard hitters (the skew itself can be inspected; see the sketch after this list)
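
The skew these bullets talk about is visible in the planner's own statistics; a quick way to inspect it via the standard pg_stats view:

-- Shows the most common player_id values and their estimated frequencies,
-- i.e. how "peaked" the distribution looks to the planner:
SELECT most_common_vals, most_common_freqs
FROM pg_stats
WHERE tablename = 'replays_playeringame'
  AND attname = 'player_id';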

This is an old post, but quite helpful, as I just came across a similar problem.

Here is my finding so far. There are 151208 rows in replays_game, so the average cost of hitting one item is about log(151208) ≈ 12. Since there are 3395 records in replays_playeringame after filtering, the average cost is 12 * 3395, which is quite high. Besides, the planner overestimates the page cost: it assumes all the rows are randomly distributed, which is not the case; if that were true, a seq scan would indeed be better. So basically, the query plan is trying to avoid the worst-case scenario.
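
The arithmetic can be checked directly in Postgres (ln() being the natural log that the ~12 refers to):

SELECT ln(151208)        AS cost_per_lookup,   -- ≈ 11.93, the ~12 above
       3395 * ln(151208) AS est_index_cost;    -- ≈ 40500 "units" over all lookups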

@dsjoerg's problem is that there is no index on replays_playeringame(game_id). An index scan would always be used if there were an index on replays_playeringame(game_id): the cost of scanning the index would become 3395 + 12 (or something close to that).
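
That is exactly the ix_replays_playeringame_game_id index the reconstruction script above creates; on the original schema the fix would be:

CREATE INDEX ix_replays_playeringame_game_id
    ON replays_playeringame (game_id);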

@Neil suggested an index on (player_id, game_id), which is close, but...
    Original 3K / 150K  work_mem=16M
    
    rpc     |       3K      |       1K5     |       750     |       375
    --------+---------------+---------------+---------------+------------
    8*      | 50.8  H.BBi.HS| 44.3  H.BBi.HS| 38.5  H.BBi.HS| 41.0  H.BBi.HS
    4       | 43.6  H.BBi.HS| 48.6  H.BBi.HS| 4.34  NBBi    | 1.33  NBBi
    2       | 6.92  NBBi    | 3.51  NBBi    | 4.61  NBBi    | 1.24  NBBi
    1       | 6.43  NII     | 3.49  NII     | 4.19  NII     | 1.18  NII
    
    
    Original 3K / 150K work_mem=64K
    
    rpc     |       3K      |       1K5     |       750     |       375
    --------+---------------+---------------+---------------+------------
    8*      | 74.2  H.BBi.HS| 69.6  NBBi    | 62.4  H.BBi.HS| 66.9  H.BBi.HS
    4       | 6.67  NBBi    | 8.53  NBBi    | 1.91  NBBi    | 2.32  NBBi
    2       | 6.66  NBBi    | 3.6   NBBi    | 1.77  NBBi    | 0.93  NBBi
    1       | 7.81  NII     | 3.26  NII     | 1.67  NII     | 0.86  NII
    
    
    Scaled 10*: 30K / 1M5  work_mem=16M
    
    rpc     |       30K     |       15K     |       7k5     |       3k75
    --------+---------------+---------------+---------------+------------
    8*      | 623   H.BBi.HS| 556   H.BBi.HS| 531   H.BBi.HS| 14.9  NBBi
    4       | 56.4  M.I.sBBi| 54.3  NBBi    | 27.1  NBBi    | 19.1  NBBi
    2       | 71.0  NBBi    | 18.9  NBBi    | 9.7   NBBi    | 9.7   NBBi
    1       | 79.0  NII     | 35.7  NII     | 17.7  NII     | 9.3   NII
    
    
    Scaled 10*: 30K / 1M5  work_mem=64K
    
    rpc     |       30K     |       15K     |       7k5     |       3k75
    --------+---------------+---------------+---------------+------------
    8*      | 729   H.BBi.HS| 722   H.BBi.HS| 723   H.BBi.HS| 19.6  NBBi
    4       | 55.5  M.I.sBBi| 41.5  NBBi    | 19.3  NBBi    | 13.3  NBBi
    2       | 70.5  NBBi    | 41.0  NBBi    | 26.3  NBBi    | 10.7  NBBi
    1       | 69.7  NII     | 38.5  NII     | 20.0  NII     | 9.0   NII
    
    Scaled 100*: 300K / 15M  work_mem=16M
    
    rpc     |       300k    |       150K    |       75k     |       37k5
    --------+---------------+---------------+---------------+---------------
    8*      |7314   H.BBi.HS|9422   H.BBi.HS|6175   H.BBi.HS| 122   N.BBi.I
    4       | 569   M.I.sBBi| 199   M.I.sBBi| 142   M.I.sBBi| 105   N.BBi.I
    2       | 527   M.I.sBBi| 372   N.BBi.I | 198   N.BBi.I | 110   N.BBi.I
    1       | 694   NII     | 362   NII     | 190   NII     | 107   NII
    
    Scaled 100*: 300K / 15M  work_mem=64K
    
    rpc     |       300k    |       150k    |       75k     |       37k5
    --------+---------------+---------------+---------------+------------
    8*      |22800 H.BBi.HS |21920 H.BBi.HS | 20630 N.BBi.I |19669  H.BBi.HS
    4       |22095 H.BBi.HS |  284 M.I.msBBi| 205   B.BBi.I |  116  N.BBi.I
    2       |  528 M.I.msBBi|  399  N.BBi.I | 211   N.BBi.I |  110  N.BBi.I
    1       |  718 NII      |  364  NII     | 200   NII     |  105  NII
    
    [8*] Note: the RandomPageCost=8 runs were only intended as a prerun to prime the disk buffer cache; the results should be ignored.
    
    Legend for node types:
    N := Nested loop
    M := Merge join
    H := Hash (or Hash join)
    B := Bitmap heap scan
    Bi := Bitmap index scan
    S := Seq scan
    s := sort
    m := materialise