
Inserting 500 simple rows in MySQL takes over 40 seconds


We have a simple table with neither foreign keys nor primary keys (for this test). All columns are int or tinyint, except for p, which is of type decimal(5,4).

Below is part of the query we are running. It takes over 40 seconds to complete.

We are currently running this from phpMyAdmin, but the problem first appeared while testing a Laravel application, where this simple task was taking far too long.

Our table is configured as: InnoDB, row format Dynamic, collation latin1_swedish_ci.

MySQL version: 5.7.31-0ubuntu0.18.04.1, running on an Ubuntu server.

DELETE FROM test WHERE species_id = 290;

INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 16);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 15);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 14);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 11);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 17);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 13);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 99999);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 5);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 29);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 21);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 38);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 7);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 30);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 6);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 37);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 40);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 36);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100003);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100008);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100015);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 8);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 2);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 39);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 1);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 42);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 4);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100016);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 28);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 12);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 26);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 24);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 23);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100000);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 25);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 9);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 22);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 18);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 10);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100005);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 20);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100002);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 27);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 19);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 3);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 35);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100004);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100007);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100006);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 34);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 33);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100010);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100011);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100009);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100012);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100014);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 100013);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 32);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 1, 0, 31);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 16);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 15);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 14);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 11);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 17);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 13);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 99999);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 5);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 29);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 21);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 38);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 7);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 30);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 6);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 37);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 40);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 36);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 100003);
INSERT INTO test (species_id, month, p, beta_set_id) VALUES (290, 2, 0, 100008);

MySQL is not particularly fast at single-row inserts; every statement carries a lot of per-request overhead. Batch them instead, like this:

INSERT INTO test (species_id, month, p, beta_set_id) VALUES
(290, 1, 0, 16),
(290, 1, 0, 15),
(290, 1, 0, 14),
(290, 1, 0, 11),
(290, 1, 0, 17),
(290, 1, 0, 13),
(290, 1, 0, 99999),
(290, 1, 0, 5),
(290, 1, 0, 29),
(290, 1, 0, 21),
(290, 1, 0, 38),
(290, 1, 0, 7),
(290, 1, 0, 30),
(290, 1, 0, 6),
(290, 1, 0, 37),
(290, 1, 0, 40),
(290, 1, 0, 36),
(290, 1, 0, 100003),
(290, 1, 0, 100008),
(290, 1, 0, 100015),
(290, 1, 0, 8),
(290, 1, 0, 2),
(290, 1, 0, 39),
(290, 1, 0, 1),
(290, 1, 0, 42),
(290, 1, 0, 4),
(290, 1, 0, 100016),
(290, 1, 0, 28),
(290, 1, 0, 12),
(290, 1, 0, 26),
(290, 1, 0, 24),
(290, 1, 0, 23),
(290, 1, 0, 100000),
(290, 1, 0, 25),
(290, 1, 0, 9),
(290, 1, 0, 22),
(290, 1, 0, 18),
(290, 1, 0, 10),
(290, 1, 0, 100005),
(290, 1, 0, 20),
(290, 1, 0, 100002),
(290, 1, 0, 27),
(290, 1, 0, 19),
(290, 1, 0, 3),
(290, 1, 0, 35),
(290, 1, 0, 100004),
(290, 1, 0, 100007),
(290, 1, 0, 100006),
(290, 1, 0, 34),
(290, 1, 0, 33),
(290, 1, 0, 100010),
(290, 1, 0, 100011),
(290, 1, 0, 100009),
(290, 1, 0, 100012),
(290, 1, 0, 100014),
(290, 1, 0, 100013),
(290, 1, 0, 32),
(290, 1, 0, 31),
(290, 2, 0, 16),
(290, 2, 0, 15),
(290, 2, 0, 14),
(290, 2, 0, 11),
(290, 2, 0, 17),
(290, 2, 0, 13),
(290, 2, 0, 99999),
(290, 2, 0, 5),
(290, 2, 0, 29),
(290, 2, 0, 21),
(290, 2, 0, 38),
(290, 2, 0, 7),
(290, 2, 0, 30),
(290, 2, 0, 6),
(290, 2, 0, 37),
(290, 2, 0, 40),
(290, 2, 0, 36),
(290, 2, 0, 100003),
(290, 2, 0, 100008);

You can insert as many rows as you like in one statement, as long as the complete INSERT statement stays under @@max_allowed_packet (usually at least a million bytes). If needed, you can split the rows programmatically into batches of 1,000.
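Splitting the rows into batches programmatically can be sketched as follows. This is a minimal Python sketch, not part of the original post; the table and column names come from the question, while the helper name `batched_inserts` is ours:

```python
# Sketch: build multi-row INSERT statements in batches so each
# statement stays well under max_allowed_packet.
def batched_inserts(rows, batch_size=1000):
    """Yield one multi-row INSERT statement per batch.

    `rows` is a sequence of (species_id, month, p, beta_set_id) tuples.
    """
    statements = []
    for start in range(0, len(rows), batch_size):
        chunk = rows[start:start + batch_size]
        values = ",\n".join("(%d, %d, %s, %d)" % r for r in chunk)
        statements.append(
            "INSERT INTO test (species_id, month, p, beta_set_id) VALUES\n"
            + values + ";"
        )
    return statements

# 80 example rows in the shape used by the post (months 1-2, 40 sets each)
rows = [(290, m, 0, b) for m in (1, 2) for b in range(1, 41)]
stmts = batched_inserts(rows, batch_size=50)  # yields 2 statements: 50 + 30 rows
```

In a real application you would pass the values as bound parameters through your driver rather than interpolating them into the SQL string; the chunking logic stays the same.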

The primary key (species_id, month, beta_set_id) is not sequential, and that costs you on every insert; a sequential primary key would save a lot of insert time.

First, we do need it, or at least we need it as a unique key. But in any case, we just dropped the PK and it still takes the same amount of time.

Are you saying it still takes 40 seconds?

The question is: what happens if I really do need to run 500 inserts with some SELECTs between each one?

@StephenH.Anderson Quite possibly. Try it. Can't you run the SELECTs in advance? Is all of this inside a transaction?

The post here is simplified a bit, but our real application runs a SELECT between each insert, so I'm not sure I can apply this, since I think some values depend on the previous inserts... If you have any suggestions, please let me know. Thanks.
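The question about transactions matters because, with autocommit on, each single-row INSERT forces its own durable commit (a log flush under InnoDB's default settings), and that per-statement cost can dominate the 40 seconds. Wrapping the whole sequence in one transaction pays that cost once. A minimal sketch, using Python's sqlite3 module purely as a stand-in database (the pattern is identical with a MySQL driver, which uses %s placeholders instead of ?):

```python
import sqlite3

# Group many single-row INSERTs into one transaction so the commit
# overhead is paid once instead of per statement.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE test (species_id INT, month INT, p REAL, beta_set_id INT)"
)

# 80 example rows shaped like the ones in the post
rows = [(290, m, 0.0, b) for m in (1, 2) for b in range(1, 41)]

with conn:  # opens a transaction, commits once at the end
    conn.executemany(
        "INSERT INTO test (species_id, month, p, beta_set_id)"
        " VALUES (?, ?, ?, ?)",
        rows,
    )

count = conn.execute(
    "SELECT COUNT(*) FROM test WHERE species_id = 290"
).fetchone()[0]
```

This also fits the interleaved SELECT-then-INSERT workload described in the comments: the SELECTs can run inside the same transaction, so batching the statements is not required to get most of the speedup.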