What is a safe way to compact / optimize a MongoDB collection?

I want to compact a few collections, but unexpectedly this turned out to be less simple than I thought. It fails after a few seconds, and now I can no longer connect to the database:

Collection: MyDB.logs
---
Indexes         : 7
Extents         : 9
Documents       : 59278

Size            : 8824KB
Storage size    : 12376KB
Index size      : 2888KB
Compacting MyDB.logs...
Fri Sep 26 13:14:40.919 DBClientCursor::init call() failed
Fri Sep 26 13:14:41.086 Error: error doing query: failed at src/mongo/shell/query.js:78
As you can see in the log below, this happens because the server runs out of memory:

Assertion: 13524:out of memory

This also takes down the MongoDB daemon:

$ mongo MyDB
MongoDB shell version: 2.4.6
connecting to: MyDB
Fri Sep 26 13:38:00.730 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145
exception: connect failed
That is bad: taking the daemon down means extra downtime. One would think that a server-side operation like this would be smart enough to check whether it can succeed, and gracefully cancel the operation or roll back the problem, instead of leaving the daemon down until it is restarted.

What is a safe way to compact / optimize collections?


While this question is about the compact command itself, here is the full script I used:

#!/usr/bin/env mongo

// Stats and fixes
// Redsandro 2014-09-26

var dbName = 'MyDB';
print('Connecting to: ' + dbName);
print('---');

// Switch to the target database from whatever db the shell connected to.
var myDb = db.getSiblingDB(dbName);

print();

myDb.getCollectionNames().forEach(function (collectionName) {
    // Collection stats, scaled.
    // NOTE: a scale of 10240 reports sizes in units of 10 KiB, not KB; this is
    // the factor-10 error mentioned in the comments below. stats(1024) would
    // report actual kibibytes.
    var stats = myDb[collectionName].stats(10240);
    print('Collection: ' + stats.ns);
    print('---');

    print('Indexes         : ' + stats.nindexes);
    print('Extents         : ' + stats.numExtents);
    print('Documents       : ' + stats.count);
    print();
    print('Size            : ' + stats.size + 'KB');
    print('Storage size    : ' + stats.storageSize + 'KB');
    print('Index size      : ' + stats.totalIndexSize + 'KB');

    // Defragment the collection and rebuild its indexes.
    print('Compacting ' + stats.ns + '...');
    myDb.runCommand({ compact: collectionName }); // BREAKS!!!!111

    print('Done.');
    print();
});
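
For reference, a minimal corrected sketch of the size lines, assuming the intent was kibibytes: stats(scale) divides every byte count by the given scale, so 1024 matches the 'KB' labels (variable names reused from the script above):

// Corrected scale: 1024 makes stats() report kibibytes, matching the 'KB' labels.
var stats = myDb[collectionName].stats(1024);
print('Size            : ' + stats.size + 'KB');
print('Storage size    : ' + stats.storageSize + 'KB');
print('Index size      : ' + stats.totalIndexSize + 'KB');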

And here is the mongodb.log:

Fri Sep 26 13:14:30.696 [initandlisten] connection accepted from 127.0.0.1:44775 #26 (6 connections now open)
Fri Sep 26 13:14:30.714 [conn26] compact MyDB.logs begin
Fri Sep 26 13:14:30.714 [conn26] paddingFactor:1 paddingBytes:0
Fri Sep 26 13:14:30.714 [conn26] compact 9 extents
Fri Sep 26 13:14:30.715 [conn26] compact orphan deleted lists
Fri Sep 26 13:14:30.715 [conn26] compact dropping indexes
Fri Sep 26 13:14:30.732 [conn26] compact begin extent #0 for namespace MyDB.logs
Fri Sep 26 13:14:30.732 [conn26] compact paging in len=0.028672MB
Fri Sep 26 13:14:30.732 [conn26] compact copying records
Fri Sep 26 13:14:30.750 [conn26] compact finished extent #0 containing 35 documents (0.027904MB) oldPadding: 1.03077 1
Fri Sep 26 13:14:30.750 [conn26] compact begin extent #1 for namespace MyDB.logs
Fri Sep 26 13:14:30.750 [conn26] compact paging in len=0.192512MB
Fri Sep 26 13:14:30.750 [conn26] compact copying records
Fri Sep 26 13:14:30.760 [conn26] compact finished extent #1 containing 341 documents (0.187024MB) oldPadding: 1.03333 1
Fri Sep 26 13:14:30.760 [conn26] compact begin extent #2 for namespace MyDB.logs
Fri Sep 26 13:14:30.760 [conn26] compact paging in len=0.770048MB
Fri Sep 26 13:14:30.760 [conn26] compact copying records
Fri Sep 26 13:14:30.782 [conn26] compact finished extent #2 containing 616 documents (0.76MB) oldPadding: 1.053 1
Fri Sep 26 13:14:30.782 [conn26] compact begin extent #3 for namespace MyDB.logs
Fri Sep 26 13:14:30.782 [conn26] compact paging in len=3.08019MB
Fri Sep 26 13:14:30.785 [conn26] compact copying records
Fri Sep 26 13:14:30.924 [conn26] compact finished extent #3 containing 2062 documents (3.04699MB) oldPadding: 1.05165 1
Fri Sep 26 13:14:30.924 [conn26] compact begin extent #4 for namespace MyDB.logs
Fri Sep 26 13:14:30.924 [conn26] compact paging in len=12.3208MB
Fri Sep 26 13:14:31.280 [conn26] compact copying records
Fri Sep 26 13:14:31.694 [conn26] compact finished extent #4 containing 8447 documents (12.1854MB) oldPadding: 1.04567 1
Fri Sep 26 13:14:31.694 [conn26] compact begin extent #5 for namespace MyDB.logs
Fri Sep 26 13:14:31.694 [conn26] compact paging in len=16.6339MB
Fri Sep 26 13:14:32.458 [conn26] compact copying records
Fri Sep 26 13:14:33.120 [conn26] compact finished extent #5 containing 11406 documents (16.4509MB) oldPadding: 1.0412 1
Fri Sep 26 13:14:33.120 [conn26] compact begin extent #6 for namespace MyDB.logs
Fri Sep 26 13:14:33.120 [conn26] compact paging in len=22.4584MB
Fri Sep 26 13:14:33.874 [conn26] compact copying records
Fri Sep 26 13:14:34.637 [conn26] compact finished extent #6 containing 19310 documents (22.1493MB) oldPadding: 1.04563 1
Fri Sep 26 13:14:34.637 [conn26] compact begin extent #7 for namespace MyDB.logs
Fri Sep 26 13:14:34.637 [conn26] compact paging in len=30.3186MB
Fri Sep 26 13:14:36.526 [conn26] compact end paging in 1888ms 0.0160586MB/sec
Fri Sep 26 13:14:36.526 [conn26] compact copying records
Fri Sep 26 13:14:39.306 [conn26] compact finished extent #7 containing 15090 documents (30.0768MB) oldPadding: 1.04621 1
Fri Sep 26 13:14:39.306 [conn26] compact begin extent #8 for namespace MyDB.logs
Fri Sep 26 13:14:39.306 [conn26] compact paging in len=40.9313MB
Fri Sep 26 13:14:39.333 [conn26] compact copying records
Fri Sep 26 13:14:40.148 [conn26] compact finished extent #8 containing 1971 documents (5.47893MB) oldPadding: 1.04616 1
Fri Sep 26 13:14:40.148 [conn26] compact create index { _id: 1 }
Fri Sep 26 13:14:40.149 [conn26] build index MyDB.logs { _id: 1 }
Fri Sep 26 13:14:40.564 [conn26] Assertion: 13524:out of memory AlignedBuilder
0xac9f86 0xa8e47a 0xa8d1f8 0xa8d25b 0x8291ed 0x8294c4 0x820011 0x8203ea 0x820d7c 0x792146 0x792e0e 0x88ee1f 0x88c0b0 0x88c6c5 0x903429 0x9053e3 0x88ab23 0x8838a5 0x904cfc 0x6eae7d 
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0xac9f86]
 /usr/bin/mongod(_ZN5mongo11msgassertedEiPKc+0x9a) [0xa8e47a]
 /usr/bin/mongod() [0xa8d1f8]
 /usr/bin/mongod(_ZN5mongo14AlignedBuilder8_reallocEjj+0x1b) [0xa8d25b]
 /usr/bin/mongod() [0x8291ed]
 /usr/bin/mongod(_ZN5mongo3dur13PREPLOGBUFFERERNS0_11JSectHeaderERNS_14AlignedBuilderE+0x1c4) [0x8294c4]
 /usr/bin/mongod() [0x820011]
 /usr/bin/mongod(_ZN5mongo3dur11DurableImpl9commitNowEv+0x1a) [0x8203ea]
 /usr/bin/mongod(_ZN5mongo3dur11DurableImpl16_aCommitIsNeededEv+0xac) [0x820d7c]
 /usr/bin/mongod(_ZN5mongo12BtreeBuilderINS_12BtreeData_V1EE24mayCommitProgressDurablyEv+0x16) [0x792146]
 /usr/bin/mongod(_ZN5mongo12BtreeBuilderINS_12BtreeData_V1EE6addKeyERNS_7BSONObjENS_7DiskLocE+0x23e) [0x792e0e]
 /usr/bin/mongod(_ZN5mongo24buildBottomUpPhases2And3INS_12BtreeData_V1EEEvbRNS_12IndexDetailsERNS_21BSONObjExternalSorterEbRSt3setINS_7DiskLocESt4lessIS7_ESaIS7_EEPNS_5CurOpEPNS_12SortPhaseOneERNS_19ProgressMeterHolderERNS_5TimerEb+0x20f) [0x88ee1f]
 /usr/bin/mongod(_ZN5mongo14fastBuildIndexEPKcPNS_16NamespaceDetailsERNS_12IndexDetailsEb+0x3f0) [0x88c0b0]
 /usr/bin/mongod(_ZN5mongo12buildAnIndexERKSsPNS_16NamespaceDetailsERNS_12IndexDetailsEbb+0x135) [0x88c6c5]
 /usr/bin/mongod(_ZN5mongo16insert_makeIndexEPNS_16NamespaceDetailsERKSsRKNS_7DiskLocEb+0x2c9) [0x903429]
 /usr/bin/mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibbbPb+0x8d3) [0x9053e3]
 /usr/bin/mongod(_ZN5mongo17ensureHaveIdIndexEPKcb+0x463) [0x88ab23]
 /usr/bin/mongod(_ZN5mongo19prepareToBuildIndexERKNS_7BSONObjEbbRSsRPNS_16NamespaceDetailsERS0_+0x1155) [0x8838a5]
 /usr/bin/mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibbbPb+0x1ec) [0x904cfc]
 /usr/bin/mongod(_ZN5mongo8_compactEPKcPNS_16NamespaceDetailsERSsbRNS_14BSONObjBuilderEdi+0x119d) [0x6eae7d]
Fri Sep 26 13:14:40.696 [conn26] dbexception in groupCommit causing immediate shutdown: 13524 out of memory AlignedBuilder
Fri Sep 26 13:14:40.696 gc1
Fri Sep 26 13:14:40.774 Got signal: 6 (Aborted).

Fri Sep 26 13:14:40.789 Backtrace:
0xac9f86 0x6cfcd4 0x7feff2f5dff0 0x7feff2f5df77 0x7feff2f615e8 0x8944ef 0x820334 0x8203ea 0x820d7c 0x792146 0x792e0e 0x88ee1f 0x88c0b0 0x88c6c5 0x903429 0x9053e3 0x88ab23 0x8838a5 0x904cfc 0x6eae7d 
 /usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x26) [0xac9f86]
 /usr/bin/mongod(_ZN5mongo10abruptQuitEi+0x254) [0x6cfcd4]
 /lib/x86_64-linux-gnu/libc.so.6(+0x36ff0) [0x7feff2f5dff0]
 /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37) [0x7feff2f5df77]
 /lib/x86_64-linux-gnu/libc.so.6(abort+0x148) [0x7feff2f615e8]
 /usr/bin/mongod(_ZN5mongo10mongoAbortEPKc+0x4f) [0x8944ef]
 /usr/bin/mongod() [0x820334]
 /usr/bin/mongod(_ZN5mongo3dur11DurableImpl9commitNowEv+0x1a) [0x8203ea]
 /usr/bin/mongod(_ZN5mongo3dur11DurableImpl16_aCommitIsNeededEv+0xac) [0x820d7c]
 /usr/bin/mongod(_ZN5mongo12BtreeBuilderINS_12BtreeData_V1EE24mayCommitProgressDurablyEv+0x16) [0x792146]
 /usr/bin/mongod(_ZN5mongo12BtreeBuilderINS_12BtreeData_V1EE6addKeyERNS_7BSONObjENS_7DiskLocE+0x23e) [0x792e0e]
 /usr/bin/mongod(_ZN5mongo24buildBottomUpPhases2And3INS_12BtreeData_V1EEEvbRNS_12IndexDetailsERNS_21BSONObjExternalSorterEbRSt3setINS_7DiskLocESt4lessIS7_ESaIS7_EEPNS_5CurOpEPNS_12SortPhaseOneERNS_19ProgressMeterHolderERNS_5TimerEb+0x20f) [0x88ee1f]
 /usr/bin/mongod(_ZN5mongo14fastBuildIndexEPKcPNS_16NamespaceDetailsERNS_12IndexDetailsEb+0x3f0) [0x88c0b0]
 /usr/bin/mongod(_ZN5mongo12buildAnIndexERKSsPNS_16NamespaceDetailsERNS_12IndexDetailsEbb+0x135) [0x88c6c5]
 /usr/bin/mongod(_ZN5mongo16insert_makeIndexEPNS_16NamespaceDetailsERKSsRKNS_7DiskLocEb+0x2c9) [0x903429]
 /usr/bin/mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibbbPb+0x8d3) [0x9053e3]
 /usr/bin/mongod(_ZN5mongo17ensureHaveIdIndexEPKcb+0x463) [0x88ab23]
 /usr/bin/mongod(_ZN5mongo19prepareToBuildIndexERKNS_7BSONObjEbbRSsRPNS_16NamespaceDetailsERS0_+0x1155) [0x8838a5]
 /usr/bin/mongod(_ZN5mongo11DataFileMgr6insertEPKcPKvibbbPb+0x1ec) [0x904cfc]
 /usr/bin/mongod(_ZN5mongo8_compactEPKcPNS_16NamespaceDetailsERSsbRNS_14BSONObjBuilderEdi+0x119d) [0x6eae7d]

What does the mongod log file say while the compact command is running? And what error does the server throw when you start it?

The error is: Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js. exception: connect failed. I have added the log.

mongod crashed; you need to bring it back up before you can reconnect. A small machine running out of memory would cause the crash. How much memory do you have available, and how large are the indexes?

About 45 MB in total; there is a factor-10 error in my stats script. The instance has about 512 MB of RAM, running a minimal Ubuntu (and plenty of swap). I cannot imagine that would make it break.

Take a look at the line "dbexception in groupCommit causing immediate shutdown"; it leaves very little room for imagination.

Apparently Berkeley was right. Also, I am pretty sure compact is not a performance booster, because MongoDB uses memory-mapped files. The only genuinely useful thing I see in compact is that it lets you change the padding factor, which you are not doing. Yes, in some cases compacting might prevent an extra data file from being allocated, but at today's price per GB that is hardly worth the time it takes to even start a compaction.
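
Regarding the padding remark above: on the MMAPv1 storage engine (the one in use here, MongoDB 2.4) the compact command accepts padding options. A minimal sketch, with the values chosen purely for illustration:

// Ask compact to leave 10% padding per document (paddingFactor, MMAPv1 only).
db.runCommand({ compact: 'logs', paddingFactor: 1.1 });

// Or pad each document by a fixed number of bytes instead.
db.runCommand({ compact: 'logs', paddingBytes: 100 });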