MySQL: how can I retrieve 500k DB records faster?


I have two tables: T1 with 1,000 records and T2 with 500,000 records. I have a query that joins them and performs some aggregation to fetch the data. My page load seems slow. Is there any way to speed up the query?

I have already created indexes on the columns I am aggregating on. I know this is a rather general statement.

      $query = Mymodel::selectRaw("supplier_data.name as distributor,supplier_data.name as name, supplier_data.group_id as group_id, supplier_data.pay,supplier_data.group_id as submitted_group_plan,supplier_data.group_id as group_id_string,
            (SELECT sum(t.net_claim) AS trans_number 
            FROM transactions_data_new as t 
            JOIN  `supplier_data` AS d ON  `t`.`member_id` =  `d`.`group_id`
            WHERE
            (
                (
                t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to' 
                AND t.`member_id` = supplier_data.group_id
                )
                OR
                (
                    (t.claim_status  IS NULL)
                    AND
                    (t.submit_date is NULL)
                )
            )
            AND d.id = supplier_data.id
        ) as trans_number,


        (SELECT sum(t.claim) AS trans_number 
            FROM transactions_data_new as t 
            JOIN  `supplier_data` AS d ON  `t`.`member_id` =  `d`.`group_id`
            WHERE
            (
                (
                t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to' 
                AND t.`member_id` = supplier_data.group_id
                )
                OR
                (
                    (t.claim_status  IS NULL)
                    AND
                    (t.submit_date is NULL)
                )
            )
            AND d.id = supplier_data.id
        ) as claim,

        (SELECT sum(t.reversed) AS trans_number 
            FROM transactions_data_new as t 
            JOIN  `supplier_data` AS d ON  `t`.`member_id` =  `d`.`group_id`
            WHERE
            (
                (
                t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to' 
                AND t.`member_id` = supplier_data.group_id
                )
                OR
                (
                    (t.claim_status  IS NULL)
                    AND
                    (t.submit_date is NULL)
                )
            )
            AND d.id = supplier_data.id
        ) as reversed,

        (SELECT sum(t.reversal) AS trans_number 
            FROM transactions_data_new as t 
            JOIN  `supplier_data` AS d ON  `t`.`member_id` =  `d`.`group_id`
            WHERE
            (
                (
                t.`submit_date`>= '$date_from' and t.`submit_date`<= '$date_to'
                AND t.`member_id` = supplier_data.group_id
                )
                OR
                (
                    (t.claim_status  IS NULL)
                    AND
                    (t.submit_date is NULL)
                )
            )
            AND d.id = supplier_data.id
        ) as reversal
            "); 

I don't see the need to make this so complex and repetitive with multiple sub-selects against the same table using the same clauses; it can be done with a single LEFT JOIN:

SELECT 
  s.name AS distributor,
  s.name AS name,
  s.group_id AS group_id,
  s.pay,
  s.group_id AS submitted_group_plan,
  s.group_id AS group_id_string,
  SUM(t.net_claim) AS trans_number,
  SUM(t.claim) AS claim,
  SUM(t.reversed) AS reversed,
  SUM(t.reversal) AS reversal 
FROM
  supplier_data s 
  LEFT JOIN transactions_data_new t 
    ON `t`.`member_id` = s.`group_id` 
    AND (
      (
        t.`submit_date` >= '$date_from' 
        AND t.`submit_date` <= '$date_to'
      ) 
      OR (
        t.claim_status IS NULL 
        AND t.submit_date IS NULL
      )
    ) 
GROUP BY s.name,
  s.group_id,
  s.pay 
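If this is run from Laravel, the dates can be passed as bindings instead of being interpolated into the SQL string. Here is a minimal sketch of the same LEFT JOIN query using DB::select(); the date values shown are only placeholders.

use Illuminate\Support\Facades\DB;

// Example date range; in practice these come from the request/filters.
$date_from = '2018-01-01';
$date_to   = '2018-12-31';

// Same single LEFT JOIN query, with ? placeholders bound to the dates.
$rows = DB::select("
    SELECT
      s.name AS distributor,
      s.name AS name,
      s.group_id AS group_id,
      s.pay,
      s.group_id AS submitted_group_plan,
      s.group_id AS group_id_string,
      SUM(t.net_claim) AS trans_number,
      SUM(t.claim)     AS claim,
      SUM(t.reversed)  AS reversed,
      SUM(t.reversal)  AS reversal
    FROM supplier_data s
    LEFT JOIN transactions_data_new t
      ON t.member_id = s.group_id
      AND (
        (t.submit_date >= ? AND t.submit_date <= ?)
        OR (t.claim_status IS NULL AND t.submit_date IS NULL)
      )
    GROUP BY s.name, s.group_id, s.pay
", [$date_from, $date_to]);

DB::select() returns an array of plain stdClass rows, which Laravel will still serialise to JSON if you return it from a controller.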

As far as I know, the chunk method is intended for cases where you need to work with a large dataset and perform an action on that data chunk by chunk.

From your question, it sounds like you are executing a query and then returning the data as JSON, so to me it doesn't sound like you are performing an action on the dataset that would require chunking.

If you want to break up the JSON data you return, you should look at pagination instead.

You can apply pagination to your query like so:

$data = Inspector::latest('id')
    ->select('id', 'firstname', 'status', 'state', 'phone')
    ->where('firstname', 'LIKE', '%' . $searchtext . '%')
    ->paginate();
You can specify the size of each page by passing a number to the paginate method:

$data = Inspector::latest('id')
    ->select('id', 'firstname', 'status', 'state', 'phone')
    ->where('firstname', 'LIKE', '%' . $searchtext . '%')
    ->paginate(25);
If I have misunderstood and you really do want to chunk the results, I believe you can do the following:

$data = Inspector::latest('id')
    ->select('id', 'firstname', 'status', 'state', 'phone')
    ->where('firstname', 'LIKE', '%' . $searchtext . '%')
    ->chunk(50, function($inspectors) {
        foreach ($inspectors as $inspector) {
            // apply some action to the chunked results here
        }
    });

Also, if you return an Eloquent object, it is automatically converted to JSON, so as far as I know you don't need to call json_encode().
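For example, returning the paginated result straight from a controller action is enough. A minimal sketch, assuming an App\Inspector model; the controller class and the 'q' request parameter are illustrative:

namespace App\Http\Controllers;

use App\Inspector;
use Illuminate\Http\Request;

class InspectorController extends Controller
{
    public function index(Request $request)
    {
        $searchtext = $request->input('q', '');

        // The returned paginator is converted to JSON automatically,
        // including the rows plus pagination metadata.
        return Inspector::latest('id')
            ->select('id', 'firstname', 'status', 'state', 'phone')
            ->where('firstname', 'LIKE', '%' . $searchtext . '%')
            ->paginate(25);
    }
}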

Comments:

A unit name for this quantity, lakh, is used in official and other contexts in Bangladesh, Bhutan, India, Myanmar, Nepal, Pakistan and Sri Lanka, but it is not used anywhere else; 500,000 means 5 lakh, i.e. half a million.

Please don't link code from an external source; just include it in your question.

@Used_By_Already I have corrected the question as you suggested. Do you have any solution or suggestions for this?

@kunal Pagination is already implemented, but the output is still not satisfactory.

Optimised the query, @Khalid. I just want to know whether there are any best practices that make the data run faster. Take Google, FB, LinkedIn and many other popular applications as examples: despite holding such huge data, how do they run so fast? I do use AWS for data hosting. Do we need any additional plugins or packages to make huge volumes of data run more smoothly?

@user7325973 It has more to do with the server architecture, and I am not that expert in it, but it probably involves multiple dedicated servers with caching techniques, load balancers, multiple data centres and so on. You will need to research it; here is an example to help you get started, since the best practice is to follow the official documentation and do your own research.