
How do I optimize this group-wise max SQL query?

Tags: sql, postgresql, postgresql-9.3

Here is what I'm after. It basically takes all records from the daily_statistics table and groups them by user_id. While doing so, it:

- takes a user's values from the latest record, and
- represents the attachment IDs as an array, so that I can tell how many attachments a user has.

The result looks like this:

 user_id | country_id |       time_at       | assumed_gender |    attachment_ids
---------+------------+---------------------+----------------+----------------------
   21581 |        172 | 2015-04-18 17:55:00 |                | [5942]
   21610 |        140 | 2015-04-18 19:55:00 | male           | [5940]
   22044 |        174 | 2015-04-18 21:55:00 | female         | [12312313, 12312313]
   21353 |        174 | 2015-04-18 20:59:00 | male           | [5938]
   21573 |        246 | 2015-04-18 21:57:00 | male           | [5936]
(5 rows)
The query below executes slowly, taking around 17 seconds:

  SELECT
    ds.user_id,
    -- for the row whose id matches the per-user maximum, pick out its columns
    max(case when id=maxid then country_id end) AS country_id,
    max(case when id=maxid then time_at end) AS time_at,
    max(case when id=maxid then properties->'assumed_gender' end) AS assumed_gender,
    -- collect every attachment_id in the group, not just the latest row's
    json_agg(to_json(attachment_id)) AS attachment_ids
  FROM daily_statistics ds JOIN (
      -- per user: the highest daily_statistics.id matching the filters
      SELECT u.id as user_id, (
        SELECT ds2.id FROM daily_statistics ds2 WHERE ds2.user_id=u.id AND ds2.metric = 'participation' AND ds2.status = 'active' AND ds2.campaign_id = 39
        ORDER BY ds2.id DESC LIMIT 1
      ) AS maxid FROM users u
      WHERE u.properties -> 'provider' IN ('twitter')
  ) mu ON (ds.user_id=mu.user_id)
  WHERE ds.campaign_id = 39 AND ds.metric = 'participation' AND ds.status = 'active'
  GROUP BY ds.user_id;
The problem is the group-wise max. Is there any way to optimize this query and get the same output? I was thinking of using some kind of lateral join, but then I wouldn't be able to get the number of attachment IDs per user.
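One way around that concern, sketched here against the question's own schema and filters: the per-user attachment aggregation is independent of the latest-row lookup, so it can live in its own GROUP BY subquery and be joined back to whatever produces the latest row. The attachment_count column is an illustrative extra, not part of the required output:

  -- Per-user attachment aggregation, independent of the group-wise max;
  -- count(attachment_id) yields the number of attachments per user.
  SELECT user_id,
         json_agg(to_json(attachment_id)) AS attachment_ids,
         count(attachment_id)             AS attachment_count
    FROM daily_statistics
   WHERE campaign_id = 39
     AND metric = 'participation'
     AND status = 'active'
   GROUP BY user_id;

The answer below joins exactly this kind of aggregate to a lateral lookup.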

Edit: against a table of 2M rows, with 9k+ matching records, this query takes about 25 seconds to execute:

foobar_production=> EXPLAIN ANALYZE SELECT
foobar_production->     ds.user_id,
foobar_production->     max(case when id=maxid then country_id end) AS country_id,
foobar_production->     max(case when id=maxid then time_at end) AS time_at,
foobar_production->     max(case when id=maxid then properties->'assumed_gender' end) AS assumed_gender,
foobar_production->     json_agg(to_json(attachment_id)) AS attachment_ids
foobar_production->   FROM daily_statistics ds JOIN (
foobar_production(>       SELECT u.id as user_id, (
foobar_production(>         SELECT ds2.id FROM daily_statistics ds2 WHERE ds2.user_id=u.id AND ds2.metric = 'participation' AND ds2.status = 'active' AND ds2.campaign_id = 4742
foobar_production(>         ORDER BY ds2.id DESC LIMIT 1
foobar_production(>       ) AS maxid FROM users u
foobar_production(>       WHERE u.properties -> 'provider' IN ('twitter')
foobar_production(>   ) mu ON (ds.user_id=mu.user_id)
foobar_production->   WHERE ds.campaign_id = 4742 AND ds.metric = 'participation' AND ds.status = 'active'
foobar_production->   GROUP BY ds.user_id;
                                                                                        QUERY PLAN                                             
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=2063.07..2063.08 rows=1 width=103) (actual time=25155.963..25156.859 rows=775 loops=1)
   ->  Nested Loop  (cost=0.98..1883.99 rows=2 width=103) (actual time=0.744..382.699 rows=2787 loops=1)
         ->  Index Scan using index_daily_statistics_on_campaign_id_and_type on daily_statistics ds  (cost=0.56..1621.73 rows=31 width=99) (actual time=0.107..33.513 rows=9751 loops=1)
               Index Cond: (campaign_id = 4742)
               Filter: (((metric)::text = 'participation'::text) AND ((status)::text = 'active'::text))
         ->  Index Scan using index_users_on_id_and_type on users u  (cost=0.42..8.45 rows=1 width=4) (actual time=0.024..0.024 rows=0 loops=9751)
               Index Cond: (id = ds.user_id)
               Filter: ((properties -> 'provider'::text) = 'twitter'::text)
               Rows Removed by Filter: 1
   SubPlan 1
     ->  Limit  (cost=29.83..29.84 rows=1 width=4) (actual time=2.953..2.954 rows=1 loops=2787)
           ->  Sort  (cost=29.83..29.84 rows=1 width=4) (actual time=2.951..2.951 rows=1 loops=2787)
                 Sort Key: ds2.id
                 Sort Method: top-N heapsort  Memory: 25kB
                 ->  Bitmap Heap Scan on daily_statistics ds2  (cost=25.80..29.82 rows=1 width=4) (actual time=2.381..2.702 rows=105 loops=2787)
                       Recheck Cond: ((user_id = u.id) AND (campaign_id = 4742))
                       Filter: (((metric)::text = 'participation'::text) AND ((status)::text = 'active'::text))
                       ->  BitmapAnd  (cost=25.80..25.80 rows=1 width=0) (actual time=2.365..2.365 rows=0 loops=2787)
                             ->  Bitmap Index Scan on index_daily_statistics_on_user_id  (cost=0.00..5.60 rows=156 width=0) (actual time=0.072..0.072 rows=292 loops=2787)
                                   Index Cond: (user_id = u.id)
                             ->  Bitmap Index Scan on index_daily_statistics_on_campaign_id_and_type  (cost=0.00..19.95 rows=453 width=0) (actual time=2.241..2.241 rows=9751 loops=2787)
                                   Index Cond: (campaign_id = 4742)
   SubPlan 2
     ->  Limit  (cost=29.83..29.84 rows=1 width=4) (actual time=2.879..2.880 rows=1 loops=2787)
           ->  Sort  (cost=29.83..29.84 rows=1 width=4) (actual time=2.876..2.876 rows=1 loops=2787)
                 Sort Key: ds2_1.id
                 Sort Method: top-N heapsort  Memory: 25kB
                 ->  Bitmap Heap Scan on daily_statistics ds2_1  (cost=25.80..29.82 rows=1 width=4) (actual time=2.241..2.585 rows=105 loops=2787)
                       Recheck Cond: ((user_id = u.id) AND (campaign_id = 4742))
                       Filter: (((metric)::text = 'participation'::text) AND ((status)::text = 'active'::text))
                       ->  BitmapAnd  (cost=25.80..25.80 rows=1 width=0) (actual time=2.222..2.222 rows=0 loops=2787)
                             ->  Bitmap Index Scan on index_daily_statistics_on_user_id  (cost=0.00..5.60 rows=156 width=0) (actual time=0.062..0.062 rows=292 loops=2787)
                                   Index Cond: (user_id = u.id)
                             ->  Bitmap Index Scan on index_daily_statistics_on_campaign_id_and_type  (cost=0.00..19.95 rows=453 width=0) (actual time=2.124..2.124 rows=9751 loops=2787)
                                   Index Cond: (campaign_id = 4742)
   SubPlan 3
     ->  Limit  (cost=29.83..29.84 rows=1 width=4) (actual time=3.030..3.030 rows=1 loops=2787)
           ->  Sort  (cost=29.83..29.84 rows=1 width=4) (actual time=3.018..3.018 rows=1 loops=2787)
                 Sort Key: ds2_2.id
                 Sort Method: top-N heapsort  Memory: 25kB
                 ->  Bitmap Heap Scan on daily_statistics ds2_2  (cost=25.80..29.82 rows=1 width=4) (actual time=2.407..2.755 rows=105 loops=2787)
                       Recheck Cond: ((user_id = u.id) AND (campaign_id = 4742))
                       Filter: (((metric)::text = 'participation'::text) AND ((status)::text = 'active'::text))
                       ->  BitmapAnd  (cost=25.80..25.80 rows=1 width=0) (actual time=2.390..2.390 rows=0 loops=2787)
                             ->  Bitmap Index Scan on index_daily_statistics_on_user_id  (cost=0.00..5.60 rows=156 width=0) (actual time=0.121..0.121 rows=292 loops=2787)
                                   Index Cond: (user_id = u.id)
                             ->  Bitmap Index Scan on index_daily_statistics_on_campaign_id_and_type  (cost=0.00..19.95 rows=453 width=0) (actual time=2.233..2.233 rows=9751 loops=2787)
                                   Index Cond: (campaign_id = 4742)
 Total runtime: 25158.063 ms
(49 rows)


foobar_production=> \d daily_statistics;
                                       Table "public.daily_statistics"
    Column     |            Type             |                           Modifiers
---------------+-----------------------------+---------------------------------------------------------------
 id            | integer                     | not null default nextval('daily_statistics_id_seq'::regclass)
 type          | character varying(255)      |
 metric        | character varying(255)      |
 campaign_id   | integer                     |
 user_id       | integer                     |
 country_id    | integer                     |
 attachment_id | integer                     |
 time_at       | timestamp without time zone |
 properties    | hstore                      |
 status        | character varying(255)      | default 'active'::character varying
Indexes:
    "daily_statistics_pkey" PRIMARY KEY, btree (id)
    "index_daily_statistics_on_attachment_id" btree (attachment_id)
    "index_daily_statistics_on_campaign_id_and_type" btree (campaign_id, type)
    "index_daily_statistics_on_country_id" btree (country_id)
    "index_daily_statistics_on_id" btree (id)
    "index_daily_statistics_on_metric" btree (metric)
    "index_daily_statistics_on_properties" gin (properties)
    "index_daily_statistics_on_status" btree (status)
    "index_daily_statistics_on_time_at" btree (time_at)
    "index_daily_statistics_on_user_id" btree (user_id)

Your ideas would be greatly appreciated.

You seem to have two parts here:

- the first is getting the latest statistics entry for a user, and
- the other is accumulating all attachment IDs for a user.

Both apply to a particular kind of statistics. As you're interested in users, I'd start from them.

Use this query to search for the latest entries:

  SELECT u.id,
         ds.country_id,
         ds.time_at,
         ds.properties->'assumed_gender' AS assumed_gender
    FROM users u
    JOIN LATERAL (
      SELECT *
        FROM daily_statistics
       WHERE user_id = u.id
         AND campaign_id = 39
         AND metric = 'participation'
         AND status = 'active'
       ORDER BY id DESC
       LIMIT 1
    ) ds ON true
   WHERE u.properties->'provider' IN ('twitter');

I'm using LATERAL here, which works well for this kind of query.

Aggregates won't benefit from LATERAL, though, so a separate subquery is needed for them.

In the end, I came up with the following query:

  SELECT u.id,
         ds.country_id,
         ds.time_at,
         ds.properties->'assumed_gender' AS assumed_gender,
         g.attachment_ids
    FROM users u
    JOIN LATERAL (
      SELECT *
        FROM daily_statistics
       WHERE user_id = u.id
         AND campaign_id = 39
         AND metric = 'participation'
         AND status = 'active'
       ORDER BY id DESC
       LIMIT 1
    ) ds ON true
    JOIN (
      SELECT user_id, json_agg(to_json(attachment_id)) AS attachment_ids
        FROM daily_statistics
       WHERE campaign_id = 39
         AND metric = 'participation'
         AND status = 'active'
       GROUP BY user_id
    ) g ON g.user_id = u.id
   WHERE u.properties->'provider' IN ('twitter');

I assume that this index:

CREATE INDEX i_ds_campaign4status
 ON daily_statistics(campaign_id, user_id, id)
 WHERE status='active';
would help. That depends on your data, though; if all of your statistics are in 'active' status, you can drop the WHERE clause, as sketched below.
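For completeness, a sketch of that unfiltered variant (the index name here is made up for illustration):

  -- Same key columns without the partial predicate; appropriate when
  -- nearly all rows are 'active', so the filter would exclude little.
  CREATE INDEX i_ds_campaign_user_id
    ON daily_statistics(campaign_id, user_id, id);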

EDIT: Judging from the plans provided, the second query benefits from joining against the pre-aggregated set, as that reduces the number of iterations in the LATERAL part.
I would stick with this approach.
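As a side note that goes beyond the original answer: PostgreSQL's DISTINCT ON is another idiomatic way to express the group-wise max itself; a minimal sketch against the same schema:

  -- DISTINCT ON keeps the first row per user_id under this ORDER BY,
  -- i.e. the row with the highest id, the latest one per user.
  SELECT DISTINCT ON (ds.user_id)
         ds.user_id, ds.country_id, ds.time_at,
         ds.properties->'assumed_gender' AS assumed_gender
    FROM daily_statistics ds
   WHERE ds.campaign_id = 39
     AND ds.metric = 'participation'
     AND ds.status = 'active'
   ORDER BY ds.user_id, ds.id DESC;

The attachment aggregate and the users filter would still have to be joined in, as in the queries above.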

Comments:

Please provide more info as @vyegorov asked. See the edit above.

Please paste your code and results directly into the question rather than posting them to an external service like Pastebin. External code can disappear at any time, and it isn't versioned the way SO questions are, so future readers may not be able to see it, which would make this question useless or close to meaningless; they certainly won't find it via a keyword search. StackOverflow isn't only about you getting an answer today; it's also about future reference for others.

@vyegorov - Yipes, the first query takes even longer. Almost 184 seconds. The final query is much faster. Less than a second :D

@ChristianFazzini, that may be due to caching effects. Can you provide EXPLAIN (ANALYZE, BUFFERS) output? You can paste your plans and post the links here as comments.

Sure, I've changed the campaign id to reflect a campaign that has thousands of records. First: Second:

@ChristianFazzini, reading those obfuscated plans isn't very convenient... Are you sure your data is up to date? What indexes do you have now? Haven't changed any indexes.
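For reference, the BUFFERS option requested in the comments is just an extra EXPLAIN flag; a sketch with a placeholder query (any statement works):

  -- ANALYZE executes the statement; BUFFERS adds shared/local/temp
  -- buffer hit and read counts to every node of the plan.
  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*)
    FROM daily_statistics
   WHERE campaign_id = 39;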