
Tricky PostgreSQL query optimization (distinct row aggregation with ordering)


I have an events table whose schema and data distribution are very similar to this artificial table, which can easily be generated locally:

CREATE TABLE events AS
WITH args AS (
    SELECT
        300 AS scale_factor, -- feel free to reduce this to speed up local testing
        1000 AS pa_count,
        1 AS l_count_min,
        29 AS l_count_rand,
        10 AS c_count,
        10 AS pr_count,
        3 AS r_count,
        '10 days'::interval AS time_range -- edit 2017-05-02: the real data set has years worth of data here, but the query time ranges stay small (a couple days)
)

SELECT
    p.c_id,
    'ABC'||lpad(p.pa_id::text, 13, '0') AS pa_id,
    'abcdefgh-'||((random()*(SELECT pr_count-1 FROM args)+1))::int AS pr_id,
    ((random()*(SELECT r_count-1 FROM args)+1))::int AS r,
    '2017-01-01Z00:00:00'::timestamp without time zone + random()*(SELECT time_range FROM args) AS t
FROM (
    SELECT
        pa_id,
        ((random()*(SELECT c_count-1 FROM args)+1))::int AS c_id,
        (random()*(SELECT l_count_rand FROM args)+(SELECT l_count_min FROM args))::int AS l_count
    FROM generate_series(1, (SELECT pa_count*scale_factor FROM args)) pa_id
) p
JOIN LATERAL (
    SELECT generate_series(1, p.l_count)
) l(id) ON (true);
An excerpt from SELECT * FROM events:

What I need is a query that selects all rows for a given c_id in a given time range of t, then filters them down to include only the latest row (by t) for each unique (pr_id, pa_id) combination, and then counts the (pr_id, r) combinations of those rows.

That is quite a mouthful, so here are 3 SQL queries I came up with that produce the desired results:

WITH query_a AS (
    SELECT
        pr_id,
        r,
        count(1) AS quantity
    FROM (
        SELECT DISTINCT ON (pr_id, pa_id)
          pr_id,
          pa_id,
          r
        FROM events
        WHERE
          c_id = 5 AND
          t >= '2017-01-03Z00:00:00' AND
          t < '2017-01-06Z00:00:00'
        ORDER BY pr_id, pa_id, t DESC
    ) latest
    GROUP BY
        1,
        2
    ORDER BY 3, 2, 1 DESC
),


query_b AS (
    SELECT
        pr_id,
        r,
        count(1) AS quantity
    FROM (
        SELECT
          pr_id,
          pa_id,
          first_not_null(r ORDER BY t DESC) AS r
        FROM events
        WHERE
          c_id = 5 AND
          t >= '2017-01-03Z00:00:00' AND
          t < '2017-01-06Z00:00:00'
        GROUP BY
          1,
          2
    ) latest
    GROUP BY
        1,
        2
    ORDER BY 3, 2, 1 DESC
),

query_c AS (
    SELECT
        pr_id,
        r,
        count(1) AS quantity
    FROM (
        SELECT
          pr_id,
          pa_id,
          first_not_null(r) AS r
        FROM events
        WHERE
          c_id = 5 AND
          t >= '2017-01-03Z00:00:00' AND
          t < '2017-01-06Z00:00:00'
        GROUP BY
          1,
          2
    ) latest
    GROUP BY
        1,
        2
    ORDER BY 3, 2, 1 DESC
)
My dilemma is that query_c performs more than 6x better than query_a and query_b, but technically it is not guaranteed to produce the same results as the other queries; note the ORDER BY that is missing from its first_not_null aggregate. In practice, however, it seems to pick a query plan that I believe to be both correct and optimal.
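For reference, first_not_null (used in query_b and query_c) is a custom aggregate; its definition is not shown here, but a minimal version could look like this (my assumption, not the original definition):

-- Minimal sketch of a first_not_null aggregate (assumed; the post does not
-- include its definition). The state function keeps the first non-null
-- value it sees, so an ORDER BY in the aggregate call determines the result.
CREATE FUNCTION first_not_null_sfunc(anyelement, anyelement)
RETURNS anyelement
LANGUAGE sql IMMUTABLE AS
$$ SELECT COALESCE($1, $2) $$;

CREATE AGGREGATE first_not_null(anyelement) (
    SFUNC = first_not_null_sfunc,
    STYPE = anyelement
);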

Here is the EXPLAIN ANALYZE VERBOSE output for all 3 queries on my local machine:

query_a:

query_b:

I would consider query_a to be, arguably, the canonical query here.


I would greatly appreciate any input on this. I have in fact found a workaround that achieves acceptable performance in my application, but this problem still keeps me up at night (in fact, I am on vacation right now and it is on my mind).

Only two different methods come to mind, YMMV:
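As a sketch, the two usual shapes for finding the latest row per (pr_id, pa_id) look like this (my reconstruction; the answerer's exact SQL may differ):

-- Sketch 1: latest row per group via a window function.
SELECT pr_id, r, count(1) AS quantity
FROM (
    SELECT
        pr_id,
        r,
        row_number() OVER (PARTITION BY pr_id, pa_id ORDER BY t DESC) AS rn
    FROM events
    WHERE
        c_id = 5 AND
        t >= '2017-01-03Z00:00:00' AND
        t < '2017-01-06Z00:00:00'
) latest
WHERE rn = 1
GROUP BY 1, 2
ORDER BY 3, 2, 1 DESC;

-- Sketch 2: latest row per group via a LATERAL join against the distinct
-- (pr_id, pa_id) pairs.
SELECT pairs.pr_id, l.r, count(1) AS quantity
FROM (
    SELECT DISTINCT pr_id, pa_id
    FROM events
    WHERE
        c_id = 5 AND
        t >= '2017-01-03Z00:00:00' AND
        t < '2017-01-06Z00:00:00'
) pairs
JOIN LATERAL (
    SELECT e.r
    FROM events e
    WHERE
        e.c_id = 5 AND
        e.pr_id = pairs.pr_id AND
        e.pa_id = pairs.pa_id AND
        e.t >= '2017-01-03Z00:00:00' AND
        e.t < '2017-01-06Z00:00:00'
    ORDER BY e.t DESC
    LIMIT 1
) l ON (true)
GROUP BY 1, 2
ORDER BY 3, 2, 1 DESC;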

Note: the DISTINCT ON can be omitted from the second query; the results are already unique.


I found a way to make the query about half a second faster.

The inner query from query_a needs to match

ORDER BY pr_id, pa_id, t DESC
especially with the pr_id and pa_id columns coming first. c_id = 5 is constant, but the index events (c_id, t DESC, pr_id, pa_id, r) cannot be used, because its columns are not organized as pr_id, pa_id, t DESC, which is what the ORDER BY clause requires. If you have an index on at least (pr_id, pa_id, t DESC), no sort is necessary, because the ORDER BY condition aligns with that index.

So this is what I did:

CREATE INDEX events_idx2 ON events (c_id, pr_id, pa_id, t DESC, r);
This index could be used by your inner query, at least in theory. Unfortunately, the query planner thinks it is better to reduce the number of rows first, by using the index events_idx with c_id and t.
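One standard way to check whether the planner would use events_idx2 if events_idx were unavailable (my suggestion, not part of the original answer) is to hide the competing index inside a transaction:

-- Test-only sketch: assumes the competing index is named events_idx, as
-- referenced elsewhere in this post. DROP INDEX takes an exclusive lock,
-- so run this only against a test copy of the data.
BEGIN;
DROP INDEX events_idx;
EXPLAIN ANALYZE
SELECT DISTINCT ON (pr_id, pa_id)
    pr_id,
    pa_id,
    r
FROM events
WHERE
    c_id = 5 AND
    t >= '2017-01-03Z00:00:00' AND
    t < '2017-01-06Z00:00:00'
ORDER BY pr_id, pa_id, t DESC;
ROLLBACK; -- restores the dropped index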

I would try using the standard row_number() function with a matching index, rather than the Postgres-specific DISTINCT ON, to find the latest rows.

The index and the query, together with their EXPLAIN ANALYZE output, are listed near the end of this post (CREATE INDEX ix_events and the CTE_RN query).

You can make one more optimization, based on external knowledge of your data.

If you can guarantee that every (pa_id, pr_id) pair has values on, say, every single day, then you can safely reduce the user-supplied range of t to just one day.

If users usually specify a range of t wider than one day, this reduces the number of rows the engine has to read and sort.
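For illustration (my sketch; it is correct only if the guarantee above actually holds), a requested range of Jan 3 to Jan 6 can be shrunk to its final day, because that is where the latest row per pair must fall:

-- query_a with the scanned range narrowed to the last day of the
-- user-supplied range. Valid only under the per-day guarantee above.
SELECT pr_id, r, count(1) AS quantity
FROM (
    SELECT DISTINCT ON (pr_id, pa_id)
        pr_id,
        pa_id,
        r
    FROM events
    WHERE
        c_id = 5 AND
        t >= '2017-01-05Z00:00:00' AND -- was 2017-01-03
        t < '2017-01-06Z00:00:00'
    ORDER BY pr_id, pa_id, t DESC
) latest
GROUP BY 1, 2
ORDER BY 3, 2, 1 DESC;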

If you cannot provide such a guarantee for all values in the data, but you do know that all the (pa_id, pr_id) pairs are usually close to each other in t, and users usually supply a wide range for t, then you can run a preliminary query to narrow down the range of t for the main query.

Something like this:

SELECT
    MIN(MaxT) AS StartT
    ,MAX(MaxT) AS EndT
FROM
    (
        SELECT
            pa_id
            ,pr_id
            ,MAX(t) AS MaxT
        FROM events
        WHERE
            c_id = 5
            AND t >= '2017-01-03Z00:00:00'
            AND t < '2017-01-06Z00:00:00'
        GROUP BY
            pa_id
            ,pr_id
    ) AS T
Then use the StartT and EndT found this way in the main query, in the hope that the new range is much narrower than the one the user originally supplied.

The query above does not have to sort rows, so it should be fast. The main query still has to sort, but it will have fewer rows to sort, so the overall run time may be better.
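Wired together into a single statement, that could look like this (my sketch; the original answer leaves the wiring to the reader):

-- The preliminary MIN/MAX query feeds a narrowed [StartT, EndT] range into
-- query_a. Note the upper bound becomes inclusive, since EndT is an actual
-- event timestamp.
WITH narrowed AS (
    SELECT
        MIN(MaxT) AS StartT
        ,MAX(MaxT) AS EndT
    FROM (
        SELECT MAX(t) AS MaxT
        FROM events
        WHERE
            c_id = 5
            AND t >= '2017-01-03Z00:00:00'
            AND t < '2017-01-06Z00:00:00'
        GROUP BY pa_id, pr_id
    ) AS T
)
SELECT pr_id, r, count(1) AS quantity
FROM (
    SELECT DISTINCT ON (pr_id, pa_id)
        pr_id,
        pa_id,
        r
    FROM events, narrowed
    WHERE
        c_id = 5
        AND t >= narrowed.StartT
        AND t <= narrowed.EndT
    ORDER BY pr_id, pa_id, t DESC
) latest
GROUP BY 1, 2
ORDER BY 3, 2, 1 DESC;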

[EDITED] OK, since it depends on your data distribution, here is another approach.

First, add the following index:

CREATE INDEX events_idx2 ON events (c_id, t DESC, pr_id, pa_id, r);

This extracts MAX(t) as fast as possible, provided that the subset joined back against the parent table is much smaller. It may be slower, though, if the data set is not that small.

SELECT
    e.pr_id,
    e.r,
    count(1) AS quantity
FROM events e
JOIN (
    SELECT
        pr_id,
        pa_id,
        MAX(t) last_t
    FROM events e
    WHERE
        c_id = 5 
        AND t >= '2017-01-03Z00:00:00' 
        AND t < '2017-01-06Z00:00:00'
    GROUP BY 
        pr_id, 
        pa_id
) latest 
    ON (
        c_id = 5 
        AND latest.pr_id = e.pr_id
        AND latest.pa_id = e.pa_id
        AND latest.last_t = e.t
    )
GROUP BY
    e.pr_id,
    e.r
ORDER BY 3, 2, 1 DESC

So, the tack I took was to move the grouping and distinct data out into their own tables, so that we can take advantage of multiple table indexes. Note that this solution only applies if you have control over the way data is inserted into the database (i.e. you can change the data-source application). If not, alas, it is moot.

In practice, instead of inserting into the events table right away, you would first check whether the related date and prpa relations exist in their respective tables. If not, create them. Then fetch their ids and use those in the INSERT statement for the events table, as sketched below.
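A sketch of that insert path (my assumption of what it could look like; the original answer does not spell it out). It requires UNIQUE constraints on dates(year_part, month_part, day_part) and prpa(pr_id, pa_id), which the migration below does not create:

-- Upsert the dimension rows, then insert the event using their ids.
WITH d_new AS (
    INSERT INTO dates (year_part, month_part, day_part)
    VALUES (2017, 1, 3)
    ON CONFLICT (year_part, month_part, day_part) DO NOTHING
    RETURNING id
),
d AS ( -- the freshly inserted id, or the pre-existing one
    SELECT id FROM d_new
    UNION ALL
    SELECT id FROM dates
    WHERE (year_part, month_part, day_part) = (2017, 1, 3)
    LIMIT 1
),
p_new AS (
    INSERT INTO prpa (pr_id, pa_id)
    VALUES ('abcdefgh-1', 'ABC0000000000001')
    ON CONFLICT (pr_id, pa_id) DO NOTHING
    RETURNING id
),
p AS (
    SELECT id FROM p_new
    UNION ALL
    SELECT id FROM prpa
    WHERE (pr_id, pa_id) = ('abcdefgh-1', 'ABC0000000000001')
    LIMIT 1
)
INSERT INTO events (c_id, r, t, date_id, prpa_id)
SELECT 5, 2, '2017-01-03 12:34:56', d.id, p.id
FROM d, p;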

Before I started, I was seeing query_c outperform query_a by a factor of 10, and my end result for the rewritten query_a is roughly a 4x improvement. If that is not good enough for you, feel free to tune out now.

Given the initial data-seeding query you supplied, I measured the following baseline:

query_a: 5228.518 ms
query_b: 5708.962 ms
query_c: 538.329 ms
So, give or take, roughly a 10x difference. Next I am going to make a change to the generated events data, and this change takes quite a while. You would not need to do this in practice, though, because it would already be covered by your inserts into the table.

For my optimization, the first step was to create a table holding dates, and then to populate it from the events table and link back to it, like so:

CREATE TABLE dates (
    id SERIAL,
    year_part INTEGER NOT NULL,
    month_part INTEGER NOT NULL,
    day_part INTEGER NOT NULL
);
-- Total runtime: 8.281 ms

INSERT INTO dates(year_part, month_part, day_part) SELECT DISTINCT
    EXTRACT(YEAR FROM t), EXTRACT(MONTH FROM t), EXTRACT(DAY FROM t)
FROM events;
-- Total runtime: 12802.900 ms

CREATE INDEX dates_ymd ON dates USING btree(year_part, month_part, day_part);
-- Total runtime: 13.750 ms

ALTER TABLE events ADD COLUMN date_id INTEGER;
-- Total runtime: 2.468ms

UPDATE events SET date_id = dates.id
FROM dates
WHERE EXTRACT(YEAR FROM t) = dates.year_part
AND EXTRACT(MONTH FROM t) = dates.month_part
AND EXTRACT(DAY FROM T) = dates.day_part
;
-- Total runtime: 388024.520 ms
Next, we do the same thing, but with the key pair (pr_id, pa_id). This does not reduce the cardinality by much, but when we are talking about large sets it helps with memory usage and swapping:

CREATE TABLE prpa (
    id SERIAL,
    pr_id TEXT NOT NULL,
    pa_id TEXT NOT NULL
);
-- Total runtime: 5.451 ms

CREATE INDEX events_prpa ON events USING btree(pr_id, pa_id);
-- Total runtime: 218,908.894 ms

INSERT INTO prpa(pr_id, pa_id) SELECT DISTINCT pr_id, pa_id FROM events;
-- Total runtime: 5566.760 ms

CREATE INDEX prpa_idx ON prpa USING btree(pr_id, pa_id);
-- Total runtime: 84185.057 ms

ALTER TABLE events ADD COLUMN prpa_id INTEGER;
-- Total runtime: 2.067 ms

UPDATE events SET prpa_id = prpa.id
FROM prpa
WHERE events.pr_id = prpa.pr_id
AND events.pa_id = prpa.pa_id;
-- Total runtime: 757915.192

DROP INDEX events_prpa;
-- Total runtime: 1041.556 ms
Finally, let's get rid of the old index and the now-defunct columns, and then vacuum the new tables:

DROP INDEX events_idx;
-- Total runtime: 1139.508 ms

ALTER TABLE events
    DROP COLUMN pr_id,
    DROP COLUMN pa_id
;
-- Total runtime: 5.376 ms

VACUUM ANALYSE prpa;
-- Total runtime: 1030.142

VACUUM ANALYSE dates;
-- Total runtime: 6652.151
So we now have the following tables:

events (c_id, r, t, prpa_id, date_id)
dates (id, year_part, month_part, day_part)
prpa (id, pr_id, pa_id)
Now for one last index, pushing t DESC to the end of the index, where it belongs. We can do this now because we filter the results on dates before sorting, which reduces the need for t DESC to be so prominent in the index:

CREATE INDEX events_idx_new ON events USING btree (c_id, date_id, prpa_id, t DESC);
-- Total runtime: 27697.795
VACUUM ANALYSE events;
Now we rewrite the query, using a table to store intermediate results, which I find works well with large data sets, and awaaaaay we go:

DROP TABLE IF EXISTS temp_results;

SELECT DISTINCT ON (prpa_id)
    prpa_id,
    r
INTO TEMPORARY temp_results
FROM events
INNER JOIN dates
    ON dates.id = events.date_id
WHERE c_id = 5
AND dates.year_part BETWEEN 2017 AND 2017
AND dates.month_part BETWEEN 1 AND 1
AND dates.day_part BETWEEN 3 AND 5
ORDER BY prpa_id, t DESC;

SELECT
    prpa.pr_id,
    r,
    count(1) AS quantity
FROM temp_results
INNER JOIN prpa ON prpa.id = temp_results.prpa_id
GROUP BY
    1,
    2
ORDER BY 3, 2, 1 DESC;
-- Total runtime: 1233.281 ms
So, not the 10x improvement, but 4x, which is still OK.

This solution is a combination of two techniques I have found to work well with large data sets and date ranges. Even if it is not good enough for your purposes, there may be some gems in here that you can repurpose over your career.

EDIT:

EXPLAIN ANALYZE of the SELECT INTO query and of the final aggregation (both plans are reproduced at the end of this post):


Comments:

- What is the intended natural key of the events table? And what is it supposed to mean, is it some kind of transition-probability table?
- The natural key is pa_id. Each pa_id refers to a thing that has roughly 1 to 30 event records. c_id partitions the things into distinct groups. pr_id and r are essentially the data being recorded. It is not a transition-probability table, but unfortunately I cannot describe the actual business domain in more detail.
- No screenshots; please update this to use text.
- @EvanCarroll do you mean the screenshot of the table I posted? I wanted to provide it as an HTML table but could not figure out how with the SO editor. I also do not think it is a problem, b/c the data in the table can be generated with the first query I provided, so it is not as if anybody is forced to manually type things from a screenshot in order to work on this problem...
- Sorry for the late response. I tried your queries. Your first query, using a window function, seems to give performance similar to my query_a and query_b. Your second query is about 2x slower than my query_a and query_b. Unfortunately, all of them are still much slower than my query_c.
- Thanks for the thoughtful answer! My concern is that this query will perform worse and worse over time if the t column is moved later in the index, because more and more events will exist. About the bounty: happy to add another one, what do you think is appropriate? I have never used the bounty system like this before.
- My real table has an id uuid primary key, but that is not a natural key. Multiple rows that are equal (except for the id column) are currently allowed to exist, although they are unlikely to, given how the data is collected. I do not need events_idx for anything else, so I would be happy to replace it entirely. You could have an additional index on (c_id, t) if you want.
- events_idx2 works because the index is organized with t DESC, so the GROUP BY needs no sort, which you can see in the shorter query plan. You have a well-built test environment: you could simply push in 10x the rows and test the query times; the index should prevent bad query times by construction. Also, PostgreSQL has partial indexes, so you could index WHERE c_id = 5, and so on (see the sketch after these comments). Please add whatever bounty you like; I am fine with what I got. I learned a lot figuring this out, so I have already been rewarded :)
- Ah, I think I now see what you mean. query_a would be restricted only on c_id = 5 and would then traverse all of the index leaf nodes. So if the c_ids stay stable over time, there will be more and more index pages to read, because there are more events. Maybe a full index over all timestamps, plus partial indexes with time ranges? The full index covers the cases where a partial index does not apply because t is out of its range. I have not measured a difference with an INDEX ... WHERE t > '2017-01-03', though. Please read the part starting at edit A; in my humble opinion, preventing the inner sort for the first GROUP BY is really the killer here.
- Thanks for the additional ideas! You are on the right track: the c_ids stay fairly stable over time, but more and more events keep being added. Partial indexes are a cool idea, but the query range is chosen by the user, so things would always fall apart when a query tries to span two partial indexes :(. Question: do you agree that query_c demonstrates a correct/fast plan for the index I devised? If so, do you think it is possible to get this plan from an unambiguous query like query_a? I also agree that preventing the inner sort matters here. I have also started a bounty... :)
- Thanks! This performs well on the artificial data set I posted, but unfortunately the real data set covers a much bigger time range t, so any index that does not include the t column early on will degrade as the events table grows. I made an edit to the question text earlier today to clarify this, and I will also try to adjust the dummy-data generation query to reflect it when I get a chance.
- Hmm, top-n per group can be done in two ways: row_number or a lateral join. For a given c_id, and without limiting t, how many distinct combinations of pa_id and pr_id do you have? If that number is not too big, a lateral join may be faster. What is the result of this query on your real data: WITH cte AS (SELECT DISTINCT pa_id, pr_id FROM events WHERE c_id = 5) SELECT count(*) FROM cte;
- That number is quite large. Depending on the c_id, it can easily reach several million.
- @FelixGeisendörfer I am afraid there is not much you can do. If, as you say, the table holds many dates and you select only a small range, then your index is better. But when you select the rows within a date range, they come out sorted by t, and you need them re-sorted by (pa_id, pr_id) to get the latest row for each combination. I do not see how you could avoid that. That your query_c returns the correct result is pure chance, and you should not rely on it.
- Yes, I tried that and just double-checked. At this point I am almost convinced the query planner would need a patch to get this right...
- Thanks for your answer! Could you show the query plan for the final query? If I understand your optimization correctly, you are essentially just speeding up the sort by reducing the width of each row, but the query plan basically stays the same?
- Just added the query plans; sorry for the delay, busy week.
- Ah, you were right with the caveat about the data distribution. I had experimented in this direction before without good results, and unfortunately your query seems to be no exception: it is 2x slower than my already-too-slow query_a.
- @FelixGeisendörfer oh shoot, no!! I forgot the most important line, AND c_id = 5. Please check again; it gives me a weird cost (really small, about 300, without VACUUM ANALYZE), but a better runtime.
- I get ~3400 ms for query 1 and ~970 ms for query 2. For query 2 I get the same plan, so I wonder what kind of hellishly fast machine you have :). Anyway, your answer seems to be the best one I have gotten so far, and the bounty was about to expire, so I awarded it to you. I would still like to keep looking for a few more days, b/c I would love to find a query as fast as query_c... although with the current query planner that may well be impossible.
- That is weird. I get similar results for both queries on my machine, but still only about 500 ms for the query in my answer. You did run VACUUM ANALYZE events after creating the other index on (c_id, pr_id, pa_id, t DESC, r), did you? So what is the runtime of my query on your machine?
- @flutter the answer you provided was great as well. The reason I preferred Blag's answer is that it does not rely on the early part of the index and does not require dynamic index management or invasive schema changes. I still think no correct answer can produce a plan as good as query_c's, but I did not want the bounty to expire, which would mean the karma points get lost.
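As a sketch of the partial-index idea from the comments (hypothetical index name; it assumes queries always pin a single c_id, such as c_id = 5):

-- Entries exist only for rows with c_id = 5, so the (pr_id, pa_id, t DESC)
-- ordering is available without carrying c_id in the key.
CREATE INDEX events_c5_idx ON events (pr_id, pa_id, t DESC, r)
WHERE c_id = 5;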
Sort  (cost=65431.46..65467.93 rows=14591 width=23) (actual time=472.809..472.824 rows=30 loops=1)
   Sort Key: (count(1)), events.r, events.pr_id DESC
   Sort Method: quicksort  Memory: 27kB
   ->  HashAggregate  (cost=64276.38..64422.29 rows=14591 width=23) (actual time=472.732..472.776 rows=30 loops=1)
         Group Key: events.pr_id, events.r
         ->  Unique  (cost=0.56..61722.99 rows=145908 width=40) (actual time=0.024..374.392 rows=118571 loops=1)
               ->  Index Only Scan using events_c_id_pr_id_pa_id_t_r_idx on events  (cost=0.56..60936.08 rows=157380 width=40) (actual time=0.021..222.987 rows=155800 loops=1)
                     Index Cond: ((c_id = 5) AND (t >= '2017-01-03 00:00:00'::timestamp without time zone) AND (t < '2017-01-06 00:00:00'::timestamp without time zone))
                     Heap Fetches: 0
 Planning time: 0.171 ms
 Execution time: 472.925 ms
(11 Zeilen)
Sort  (cost=51324.27..51361.47 rows=14880 width=23) (actual time=550.579..550.592 rows=30 loops=1)
   Sort Key: (count(1)), events.r, events.pr_id DESC
   Sort Method: quicksort  Memory: 27kB
   ->  HashAggregate  (cost=50144.21..50293.01 rows=14880 width=23) (actual time=550.481..550.528 rows=30 loops=1)
         Group Key: events.pr_id, events.r
         ->  Unique  (cost=0.42..47540.21 rows=148800 width=40) (actual time=0.050..443.393 rows=118571 loops=1)
               ->  Index Only Scan using events_cid on events  (cost=0.42..46736.42 rows=160758 width=40) (actual time=0.047..269.676 rows=155800 loops=1)
                     Index Cond: ((t >= '2017-01-03 00:00:00'::timestamp without time zone) AND (t < '2017-01-06 00:00:00'::timestamp without time zone))
                     Heap Fetches: 0
 Planning time: 0.366 ms
 Execution time: 550.706 ms
(11 Zeilen)
CREATE INDEX ix_events ON events USING btree (c_id, pa_id, pr_id, t DESC, r);
WITH
CTE_RN
AS
(
    SELECT
        pa_id
        ,pr_id
        ,r
        ,ROW_NUMBER() OVER (PARTITION BY c_id, pa_id, pr_id ORDER BY t DESC) AS rn
    FROM events
    WHERE
        c_id = 5
        AND t >= '2017-01-03Z00:00:00'
        AND t < '2017-01-06Z00:00:00'
)
SELECT
    pr_id
    ,r
    ,COUNT(*) AS quantity
FROM CTE_RN
WHERE rn = 1
GROUP BY 
    pr_id
    ,r
ORDER BY quantity, r, pr_id DESC
;
Sort  (cost=158.07..158.08 rows=1 width=44) (actual time=81.445..81.448 rows=30 loops=1)
  Output: cte_rn.pr_id, cte_rn.r, (count(*))
  Sort Key: (count(*)), cte_rn.r, cte_rn.pr_id DESC
  Sort Method: quicksort  Memory: 27kB
  CTE cte_rn
    ->  WindowAgg  (cost=0.42..157.78 rows=12 width=88) (actual time=0.204..56.215 rows=15130 loops=1)
          Output: events.pa_id, events.pr_id, events.r, row_number() OVER (?), events.t, events.c_id
          ->  Index Only Scan using ix_events3 on public.events  (cost=0.42..157.51 rows=12 width=80) (actual time=0.184..28.688 rows=15130 loops=1)
                Output: events.c_id, events.pa_id, events.pr_id, events.t, events.r
                Index Cond: ((events.c_id = 5) AND (events.t >= '2017-01-03 00:00:00'::timestamp without time zone) AND (events.t < '2017-01-06 00:00:00'::timestamp without time zone))
                Heap Fetches: 15130
  ->  HashAggregate  (cost=0.28..0.29 rows=1 width=44) (actual time=81.363..81.402 rows=30 loops=1)
        Output: cte_rn.pr_id, cte_rn.r, count(*)
        Group Key: cte_rn.pr_id, cte_rn.r
        ->  CTE Scan on cte_rn  (cost=0.00..0.27 rows=1 width=36) (actual time=0.214..72.841 rows=11491 loops=1)
              Output: cte_rn.pa_id, cte_rn.pr_id, cte_rn.r, cte_rn.rn
              Filter: (cte_rn.rn = 1)
              Rows Removed by Filter: 3639
Planning time: 0.452 ms
Execution time: 83.234 ms
--PostgreSQL 9.6
--'\\' is a delimiter

-- CREATE TABLE events AS...

VACUUM  ANALYZE events;
CREATE INDEX idx_events_idx ON events (c_id, t DESC, pr_id, pa_id, r);
  -- query A
explain analyze SELECT
        pr_id,
        r,
        count(1) AS quantity
    FROM (
        SELECT DISTINCT ON (pr_id, pa_id)
          pr_id,
          pa_id,
          r
        FROM events
        WHERE
          c_id = 5 AND
          t >= '2017-01-03Z00:00:00' AND
          t < '2017-01-06Z00:00:00'
        ORDER BY pr_id, pa_id, t DESC
    ) latest
    GROUP BY
        1,
        2
    ORDER BY 3, 2, 1 DESC
QUERY PLAN
Sort  (cost=2170.24..2170.74 rows=200 width=15) (actual time=358.239..358.245 rows=30 loops=1)
  Sort Key: (count(1)), events.r, events.pr_id
  Sort Method: quicksort  Memory: 27kB
  ->  HashAggregate  (cost=2160.60..2162.60 rows=200 width=15) (actual time=358.181..358.189 rows=30 loops=1)
        ->  Unique  (cost=2012.69..2132.61 rows=1599 width=40) (actual time=327.345..353.750 rows=12098 loops=1)
              ->  Sort  (cost=2012.69..2052.66 rows=15990 width=40) (actual time=327.344..348.686 rows=15966 loops=1)
                    Sort Key: events.pr_id, events.pa_id, events.t
                    Sort Method: external merge  Disk: 792kB
                    ->  Index Only Scan using idx_events_idx on events  (cost=0.42..896.20 rows=15990 width=40) (actual time=0.059..5.475 rows=15966 loops=1)
                          Index Cond: ((c_id = 5) AND (t >= '2017-01-03 00:00:00'::timestamp without time zone) AND (t < '2017-01-06 00:00:00'::timestamp without time zone))
                          Heap Fetches: 0
Total runtime: 358.610 ms
  -- query max/JOIN
explain analyze     SELECT
        e.pr_id,
        e.r,
        count(1) AS quantity
    FROM events e
    JOIN (
        SELECT
            pr_id,
            pa_id,
            MAX(t) last_t
        FROM events e
        WHERE
            c_id = 5 
            AND t >= '2017-01-03Z00:00:00' 
            AND t < '2017-01-06Z00:00:00'
        GROUP BY 
            pr_id, 
            pa_id
    ) latest 
        ON (
            c_id = 5 
            AND latest.pr_id = e.pr_id
            AND latest.pa_id = e.pa_id
            AND latest.last_t = e.t
        )
    GROUP BY
        e.pr_id,
        e.r
    ORDER BY 3, 2, 1 DESC 
QUERY PLAN
Sort  (cost=4153.31..4153.32 rows=1 width=15) (actual time=68.398..68.402 rows=30 loops=1)
  Sort Key: (count(1)), e.r, e.pr_id
  Sort Method: quicksort  Memory: 27kB
  ->  HashAggregate  (cost=4153.29..4153.30 rows=1 width=15) (actual time=68.363..68.371 rows=30 loops=1)
        ->  Merge Join  (cost=1133.62..4153.29 rows=1 width=15) (actual time=35.083..64.154 rows=12098 loops=1)
              Merge Cond: ((e.t = (max(e_1.t))) AND (e.pr_id = e_1.pr_id))
              Join Filter: (e.pa_id = e_1.pa_id)
              ->  Index Only Scan Backward using idx_events_idx on events e  (cost=0.42..2739.72 rows=53674 width=40) (actual time=0.010..8.073 rows=26661 loops=1)
                    Index Cond: (c_id = 5)
                    Heap Fetches: 0
              ->  Sort  (cost=1133.19..1137.19 rows=1599 width=36) (actual time=29.778..32.885 rows=12098 loops=1)
                    Sort Key: (max(e_1.t)), e_1.pr_id
                    Sort Method: external sort  Disk: 640kB
                    ->  HashAggregate  (cost=1016.12..1032.11 rows=1599 width=36) (actual time=12.731..16.738 rows=12098 loops=1)
                          ->  Index Only Scan using idx_events_idx on events e_1  (cost=0.42..896.20 rows=15990 width=36) (actual time=0.029..5.084 rows=15966 loops=1)
                                Index Cond: ((c_id = 5) AND (t >= '2017-01-03 00:00:00'::timestamp without time zone) AND (t < '2017-01-06 00:00:00'::timestamp without time zone))
                                Heap Fetches: 0
Total runtime: 68.736 ms
DROP INDEX idx_events_idx;
CREATE INDEX idx_events_flutter ON events (c_id, pr_id, pa_id, t DESC, r);
  -- query A + index by flutter
explain analyze SELECT
        pr_id,
        r,
        count(1) AS quantity
    FROM (
        SELECT DISTINCT ON (pr_id, pa_id)
          pr_id,
          pa_id,
          r
        FROM events
        WHERE
          c_id = 5 AND
          t >= '2017-01-03Z00:00:00' AND
          t < '2017-01-06Z00:00:00'
        ORDER BY pr_id, pa_id, t DESC
    ) latest
    GROUP BY
        1,
        2
    ORDER BY 3, 2, 1 DESC
QUERY PLAN
Sort  (cost=2744.82..2745.32 rows=200 width=15) (actual time=20.915..20.916 rows=30 loops=1)
  Sort Key: (count(1)), events.r, events.pr_id
  Sort Method: quicksort  Memory: 27kB
  ->  HashAggregate  (cost=2735.18..2737.18 rows=200 width=15) (actual time=20.883..20.892 rows=30 loops=1)
        ->  Unique  (cost=0.42..2707.20 rows=1599 width=40) (actual time=0.037..16.488 rows=12098 loops=1)
              ->  Index Only Scan using idx_events_flutter on events  (cost=0.42..2627.25 rows=15990 width=40) (actual time=0.036..10.893 rows=15966 loops=1)
                    Index Cond: ((c_id = 5) AND (t >= '2017-01-03 00:00:00'::timestamp without time zone) AND (t < '2017-01-06 00:00:00'::timestamp without time zone))
                    Heap Fetches: 0
Total runtime: 20.964 ms
Unique  (cost=171839.95..172360.53 rows=51332 width=16) (actual time=819.385..857.777 rows=117471 loops=1)
  ->  Sort  (cost=171839.95..172100.24 rows=104117 width=16) (actual time=819.382..836.924 rows=155202 loops=1)
        Sort Key: events.prpa_id, events.t
        Sort Method: external sort  Disk: 3944kB
        ->  Hash Join  (cost=14340.24..163162.92 rows=104117 width=16) (actual time=126.929..673.293 rows=155202 loops=1)
              Hash Cond: (events.date_id = dates.id)
              ->  Bitmap Heap Scan on events  (cost=14338.97..160168.28 rows=520585 width=20) (actual time=126.572..575.852 rows=516503 loops=1)
                    Recheck Cond: (c_id = 5)
                    Heap Blocks: exact=29610
                    ->  Bitmap Index Scan on events_idx2  (cost=0.00..14208.82 rows=520585 width=0) (actual time=118.769..118.769 rows=516503 loops=1)
                          Index Cond: (c_id = 5)
              ->  Hash  (cost=1.25..1.25 rows=2 width=4) (actual time=0.326..0.326 rows=3 loops=1)
                    Buckets: 1024  Batches: 1  Memory Usage: 1kB
                    ->  Seq Scan on dates  (cost=0.00..1.25 rows=2 width=4) (actual time=0.320..0.323 rows=3 loops=1)
                          Filter: ((year_part >= 2017) AND (year_part <= 2017) AND (month_part >= 1) AND (month_part <= 1) AND (day_part >= 3) AND (day_part <= 5))
                          Rows Removed by Filter: 7
Planning time: 3.091 ms
Execution time: 913.543 ms
Sort  (cost=89590.66..89595.66 rows=2000 width=15) (actual time=1248.535..1248.537 rows=30 loops=1)
  Sort Key: (count(1)), temp_results.r, prpa.pr_id
  Sort Method: quicksort  Memory: 27kB
  ->  HashAggregate  (cost=89461.00..89481.00 rows=2000 width=15) (actual time=1248.460..1248.468 rows=30 loops=1)
        Group Key: prpa.pr_id, temp_results.r
        ->  Hash Join  (cost=73821.20..88626.40 rows=111280 width=15) (actual time=798.861..1213.494 rows=117471 loops=1)
              Hash Cond: (temp_results.prpa_id = prpa.id)
              ->  Seq Scan on temp_results  (cost=0.00..1632.80 rows=111280 width=8) (actual time=0.024..17.401 rows=117471 loops=1)
              ->  Hash  (cost=36958.31..36958.31 rows=2120631 width=15) (actual time=798.484..798.484 rows=2120631 loops=1)
                    Buckets: 16384  Batches: 32  Memory Usage: 3129kB
                    ->  Seq Scan on prpa  (cost=0.00..36958.31 rows=2120631 width=15) (actual time=0.126..350.664 rows=2120631 loops=1)
Planning time: 1.073 ms
Execution time: 1248.660 ms