
Unable to insert documents into a MongoDB collection using a Python script

Tags: python, mongodb, pymongo, database

I have written a script in Python that populates a MongoDB database using pymongo. After running the script, I get the following log:

cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:48 reslen:40 175ms
From researching online, I found that this message has to do with the maintenance of the two kinds of logs MongoDB generates, and nothing to do with document insertion.

Then, to check, I modified the following code in my script:

connection.dbName.collectionName.insert(document)
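
The question does not include the full loop; a minimal sketch of what it might look like with pymongo 2.x (the host/port, the documents list, and the dbName/collectionName identifiers are placeholders, not taken from the original script):

from pymongo import MongoClient

connection = MongoClient("localhost", 27017)  # placeholder connection details
documents = [{"seq": n} for n in range(200)]  # stand-in for the real data

for i, document in enumerate(documents, start=1):
    # Legacy pymongo 2.x insert(); returns the generated _id.
    # In pymongo 3.x+ this would be insert_one(document).inserted_id.
    inserted_id = connection.dbName.collectionName.insert(document)
    print("%d  record entered" % i)
    print(inserted_id)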

The output I get, i.e., the _id of each inserted document:

...
149  record entered
54636c409c2e912fbf433622
150  record entered
54636c409c2e912fbf433623
151  record entered
54636c409c2e912fbf433624
152  record entered
54636c409c2e912fbf433625
153  record entered
54636c409c2e912fbf433626
154  record entered
...
The _id values are being generated, but when I use db.collectionName.findOne() in the mongo shell, the output is null. I should also mention that the MongoDB server is running on a NUMA machine; by looking online, I was able to resolve that issue. Please help me fix this problem.
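
One way to narrow this down (a sketch, not from the original post) is to check, over the same pymongo connection that performed the inserts, which database and collection actually received the data, and to read one document back by its returned _id:

# Assumes `connection` and `inserted_id` from the insert sketch above (pymongo 2.x API).
print(connection.database_names())               # databases on the server the script wrote to
print(connection.dbName.collection_names())      # collections in that database
print(connection.dbName.collectionName.count())  # how many documents actually landed
print(connection.dbName.collectionName.find_one({"_id": inserted_id}))

If count() is nonzero here while the mongo shell still returns null, the shell is most likely pointed at a different server, database, or collection.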

Update

Here is the log from the mongo server:

2014-11-12T21:43:36.742+0530 [conn24] allocating new ns file /data/db/db-name.ns, filling with zeroes...
2014-11-12T21:43:36.916+0530 [FileAllocator] allocating new datafile /data/db/db-name.0, filling with zeroes...
2014-11-12T21:43:36.924+0530 [FileAllocator] done allocating datafile /data/db/db-name.0, size: 64MB, took 0.007 secs
2014-11-12T21:43:36.925+0530 [conn24] build index on: db-name.collection-name properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "db-name.collection-name" }
2014-11-12T21:43:36.925+0530 [conn24]    added index to empty collection
2014-11-12T21:43:36.925+0530 [conn24] insert db-name.collection-name query: { _id: ObjectId('546387309c2e9132e0a2f6f8'), //rest of the document } ninserted:1 keyUpdates:0 numYields:0 locks(micros) w:183005 183ms
2014-11-12T21:43:36.925+0530 [conn24] command AVAYA-IVR.$cmd command: insert { $msg: "query not recording (too large)" } keyUpdates:0 numYields:0 locks(micros) w:271 reslen:40 183ms
2014-11-12T21:43:38.100+0530 [conn24] end connection 127.0.0.1:56906 (1 connection now open)
2014-11-12T21:43:59.811+0530 [clientcursormon] mem (MB) res:42 virt:665
2014-11-12T21:43:59.811+0530 [clientcursormon]  mapped (incl journal view):480
2014-11-12T21:43:59.811+0530 [clientcursormon]  connections:1
2014-11-12T21:48:59.823+0530 [clientcursormon] mem (MB) res:43 virt:665
2014-11-12T21:48:59.823+0530 [clientcursormon]  mapped (incl journal view):480
2014-11-12T21:48:59.823+0530 [clientcursormon]  connections:1
2014-11-12T21:53:59.835+0530 [clientcursormon] mem (MB) res:43 virt:665
2014-11-12T21:53:59.835+0530 [clientcursormon]  mapped (incl journal view):480
2014-11-12T21:53:59.835+0530 [clientcursormon]  connections:1
2014-11-12T21:57:01.028+0530 [initandlisten] connection accepted from 127.0.0.1:56909 #25 (2 connections now open)
2014-11-12T21:58:59.847+0530 [clientcursormon] mem (MB) res:42 virt:665
2014-11-12T21:58:59.847+0530 [clientcursormon]  mapped (incl journal view):480
2014-11-12T21:58:59.847+0530 [clientcursormon]  connections:2

That is all the log says. What I don't understand is that the log shows only the first document that was inserted. Is this the source of my trouble?
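
A likely reason only the first insert appears: mongod logs only operations slower than the slowms threshold (100 ms by default), and the first insert took 183 ms because it paid the data-file allocation cost, while later inserts ran under the threshold. A sketch (assuming the pymongo 2.x set_profiling_level API) of lowering the threshold so every operation is recorded:

import pymongo

# Assumption: the default 100 ms slowms threshold hides the fast inserts from the log.
# SLOW_ONLY with slow_ms=0 makes every operation exceed the threshold and get logged.
connection.dbName.set_profiling_level(pymongo.SLOW_ONLY, slow_ms=0)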

It sounds like you are issuing .findOne with the string value of the ObjectId rather than an actual ObjectId. That kind of abstraction is usually performed only by libraries at a higher level than the base driver. Either it is the wrong collection, or the wrong database or server, or something else.

I found a way around this problem. I pickled the data held in the hashmap, then unpickled it and inserted it into the collection. That worked. Still, I cannot understand why this did not work before and I could not populate the collections. The data is not even that large, around 500,000 documents.
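
To illustrate the .findOne point above: _id is stored as an ObjectId, so querying with its hex string matches nothing. A small sketch, using one of the ids printed earlier and the placeholder names from the sketches above:

from bson.objectid import ObjectId

collection = connection.dbName.collectionName  # placeholder names, as above

# The string form of the id matches no document:
print(collection.find_one({"_id": "54636c409c2e912fbf433622"}))            # -> None
# Wrapping it in ObjectId matches the stored _id:
print(collection.find_one({"_id": ObjectId("54636c409c2e912fbf433622")}))  # -> the document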