
Is there a way in MongoDB to stop aggregation $gte from returning the false matches?


As you can see from the MongoDB documentation example below, $gte also returns the documents for which the comparison is false.

Sample JSON data:

{ "_id" : 1, "item" : "abc1", description: "product 1", qty: 300 }
{ "_id" : 2, "item" : "abc2", description: "product 2", qty: 200 }
{ "_id" : 3, "item" : "xyz1", description: "product 3", qty: 250 }
{ "_id" : 4, "item" : "VWZ1", description: "product 4", qty: 300 }
{ "_id" : 5, "item" : "VWZ2", description: "product 5", qty: 180 }
Query to fetch the documents whose quantity is greater than 250:

db.inventory.aggregate(
   [
     {
       $project:
          {
            item: 1,
            qty: 1,
            qtyGte250: { $gte: [ "$qty", 250 ] },
            _id: 0
          }
     }
   ]
)
Output:

{ "item" : "abc1", "qty" : 300, "qtyGte250" : true }
{ "item" : "abc2", "qty" : 200, "qtyGte250" : false }
{ "item" : "xyz1", "qty" : 250, "qtyGte250" : true }
{ "item" : "VWZ1", "qty" : 300, "qtyGte250" : true }
{ "item" : "VWZ2", "qty" : 180, "qtyGte250" : false }
Problem: I only want the documents whose quantity is greater than 250, but MongoDB returns all of them, so when there are many records the site becomes very slow.

I'm using Ruby on Rails with Mongoid, and some of my queries need a GROUP BY clause, so I have to use aggregation, but it returns all the data. My original query:

data = SomeModel.collection.aggregate([
      {"$project" => {
        "dayOfMonth" => {"$dayOfMonth" => "$created_time"},
        "month" => {"$month" => "$created_time"},
        "year" => {"$year" => "$created_time"},
        "date_check_gte" => {"$gte" => ["$created_time",start_time]},
        "date_check_lte" => {"$lte" => ["$created_time",end_time]},
      }},
      {"$group" => {
        "_id" => { "dayOfMonth" => "$dayOfMonth", "month" => "$month", "year" => "$year"},
        "Total" => {"$sum" => 1},
        "check_one" => {"$first" => "$date_check_gte"},
        "check_two" => {"$first" => "$date_check_lte"}
      }},
      {"$sort" => {
        "Total" => 1
      }}
    ])
It groups correctly, but it returns all the data despite the gte and lte.
Is there any way to avoid getting the false matches?

Have you tried using $match in the pipeline to filter for the documents with qty > 250?

Example:

The query to fetch the documents with quantity greater than 250 uses the $match pipeline operator, which filters the documents so that only those matching the specified condition pass to the next pipeline stage, unlike the pipeline you are currently running:

db.inventory.aggregate([
    { "$match": { "qty": { "$gte": 250 } } }
])
Or add $match to your existing pipeline (though this isn't necessary, since the single $match stage above is enough by itself):

db.inventory.aggregate([
    {
        "$project": {
            "item": 1,
            "qty": 1,
            "qtyGte250": { "$gte": [ "$qty", 250 ] },
            "_id": 0
        }
    },
    { "$match": { "qtyGte250": true } }   
])
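Applied to the Ruby/Mongoid query in the question, the same idea means filtering on created_time with $match before grouping, instead of projecting the boolean date_check_gte/date_check_lte flags. A minimal sketch, assuming the same SomeModel, start_time, and end_time as in the question (the concrete dates here are hypothetical placeholders); the pipeline is built as plain Ruby data so its shape is easy to inspect:

```ruby
require "time"

# Hypothetical stand-ins for the question's start_time / end_time.
start_time = Time.parse("2015-01-01")
end_time   = Time.parse("2015-01-31")

pipeline = [
  # $match first: only documents inside the date range reach $group,
  # so no false matches are returned and less data flows through the pipeline.
  { "$match" => {
    "created_time" => { "$gte" => start_time, "$lte" => end_time }
  } },
  # $group accepts expressions directly in _id, so the $project stage
  # that computed dayOfMonth/month/year can be dropped entirely.
  { "$group" => {
    "_id" => {
      "dayOfMonth" => { "$dayOfMonth" => "$created_time" },
      "month"      => { "$month"      => "$created_time" },
      "year"       => { "$year"       => "$created_time" }
    },
    "Total" => { "$sum" => 1 }
  } },
  { "$sort" => { "Total" => 1 } }
]

# data = SomeModel.collection.aggregate(pipeline)
```

Putting $match first also lets MongoDB use an index on created_time, which should address the slowness you're seeing with many records.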