Creating a large MySQL index (1B rows) fails with "Lost connection to MySQL server during query"


I am trying to create a composite index on a fairly large MySQL table (over 1 billion rows, 144 GB).

I have let it run overnight several times, but it keeps failing with the message below (there is nothing else in the error log). I can't say exactly how long the query runs before failing, but it is probably around eight hours.

ERROR 2013 (HY000) at line 3: Lost connection to MySQL server during query
I tried it with SET expand_fast_index_creation=ON, but that only seemed to make it fail faster (after maybe an hour).
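For reference, the failing statement presumably looks something like the following (the actual table and column names are not given in the question; the ones below are placeholders). Raising the session timeouts before the ALTER can help rule out client-side disconnects as the cause:

```sql
-- Hypothetical reconstruction of the failing operation; big_table, col_a
-- and col_b are placeholder names not taken from the question.
SET SESSION wait_timeout = 86400;        -- allow a 24h idle window
SET SESSION net_read_timeout = 3600;     -- widen network timeouts as well
SET SESSION net_write_timeout = 3600;

ALTER TABLE big_table
    ADD INDEX idx_composite (col_a, col_b);
```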

The server runs on a dedicated Ubuntu box from Hetzner with 32 GB of RAM, 4 GB of swap and 8 cores. There is plenty of free disk space (a 1 TB disk).

Here is the my.cnf, which is mostly the result of trial and error:

[mysqld]
# General
binlog_cache_size = 8M
binlog_format = row
character-set-server = utf8
connect_timeout = 10
datadir = /var/lib/mysql/data
delay_key_write = OFF
expire_logs_days = 10
join_buffer_size = 8M
log-bin=/var/lib/mysql/logs/mysql-bin
log_warnings = 2
max_allowed_packet = 100M
max_binlog_size = 1024M
max_connect_errors = 20
max_connections = 512
max_heap_table_size = 64M
net_read_timeout = 600
net_write_timeout = 600
query_cache_limit = 8M
query_cache_size = 128M
server-id = 1
skip_name_resolve
slave_net_timeout = 60
thread_cache_size = 8
thread_concurrency = 24
tmpdir = /var/tmp
tmp_table_size = 64M
transaction_isolation = READ-COMMITTED
wait_timeout = 57600
net_buffer_length = 1M

# MyISAM
bulk_insert_buffer_size = 64M
key_buffer_size = 384M
myisam_recover_options = BACKUP,FORCE
myisam_sort_buffer_size = 128M

# InnoDB
innodb_additional_mem_pool_size = 16M
innodb_buffer_pool_size = 25G
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT
#innodb_lazy_drop_table = 1
innodb_log_buffer_size = 16M
innodb_log_files_in_group = 3
innodb_log_file_size = 1024M
innodb_max_dirty_pages_pct = 90
innodb_locks_unsafe_for_binlog = 1

[client]
default-character-set = utf8

[mysqldump]
max_allowed_packet = 16M

Any clues would be greatly appreciated.

As a workaround, I would suggest creating a new table similar to the old one, adding the index to it, inserting the data from the old table (in reasonably sized chunks), and then switching over to the new table. In your case it also sounds like a good idea to check which storage engine you want for this data: if it is raw data that you intend to process anyway, ARCHIVE might be an option. Alternatively, if the data carries any kind of structural/relational information, try normalizing your data model to shrink the problematic table.
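The workaround described above can be sketched as follows (table, index and column names are placeholders, since the question does not give the real schema):

```sql
-- 1. Create an empty copy of the old table's structure.
CREATE TABLE new_table LIKE old_table;

-- 2. Add the composite index while the table is still empty,
--    so the index is built incrementally as rows arrive.
ALTER TABLE new_table ADD INDEX idx_composite (col_a, col_b);

-- 3. Copy the data over in chunks (see the comments below for sizing),
--    then swap the tables in a single atomic operation:
RENAME TABLE old_table TO old_table_backup, new_table TO old_table;
```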

1 billion rows? How about reducing your row count... Haha, 1 billion rows in MySQL?

It may need restructuring at some point, but MySQL has handled it well so far. The query fails consistently, which leads me to believe a configuration change might be able to solve the problem at hand. I will look into partitioning.

Do you think it might be failing after 16 hours? Your wait_timeout is set to 16 hours (57600 seconds).

@BrentBaisley Actually I added that value after discovering that the default is 8 hours. Since then I haven't tried again without expand_fast_index_creation (with it, the query fails after about an hour). I am currently migrating the data into a partitioned table with the new index, and it seems to be going well. Hopefully that solves my problem. I haven't had much luck with INSERT .. SELECT yet; I am trying a higher value for innodb_lock_wait_timeout. The data is relational, but the important queries only operate on a single table. Do you have any suggestions on how to do the chunking?

Chunking did the trick. The ceiling for my configuration seems to be around 100 million rows at a time, and each query takes about 2 hours: INSERT INTO newtable SELECT * FROM oldtable LIMIT 0,9999999;
Server version: 5.6.13-rc61.0-log Percona Server (GPL), Release 61.0
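One caveat about the LIMIT-based chunking in the comments: each successive chunk forces MySQL to scan past and discard all preceding rows, so later chunks get progressively slower. If the table has an auto-increment primary key, chunking on key ranges keeps every chunk equally cheap. A sketch, assuming a primary key column named id (not stated in the question):

```sql
-- Assumes an auto-increment primary key `id`; adjust the range width
-- to whatever chunk size the server handles comfortably.
-- Ranges never overlap, and late chunks cost the same as early ones.
INSERT INTO newtable SELECT * FROM oldtable WHERE id >= 0         AND id < 100000000;
INSERT INTO newtable SELECT * FROM oldtable WHERE id >= 100000000 AND id < 200000000;
-- ... continue until MAX(id) is covered.
```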