
Searching a large MongoDB data collection from Java


I am trying to search a large collection of objects (1,000,000 elements). A sample element looks like this:

Document{{_id=588e6f317367651f34a06c2c, busId=34, time=1262305558050, createdDate=Sun Jan 29 23:39:42 CET 2017}}
There are busId values ranging from 0 to 300, and the time on each record increases by about 30 ms, starting from

    SimpleDateFormat sdf = new SimpleDateFormat("yyyy.MM.dd HH:mm:ss");
    long startDate = sdf.parse("2010.01.01 00:00:00").getTime();
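As an aside, the pattern string is easy to get wrong: the original post used `HH:MM:ss`, but in `SimpleDateFormat` the pattern letter `MM` means month, while minutes are `mm`, so the wrong letter silently parses a wrong instant. A minimal, self-contained sketch of the corrected parsing (the dates here are just illustrative):

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;

public class EpochMillisDemo {
    public static void main(String[] args) throws ParseException {
        // "mm" is minutes; "MM" in the minutes slot would be parsed as month.
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy.MM.dd HH:mm:ss");
        long startDate = sdf.parse("2010.01.01 00:00:00").getTime();
        long endDate   = sdf.parse("2010.01.01 01:00:00").getTime();
        // Epoch milliseconds, directly comparable to the "time" field above.
        System.out.println(endDate - startDate); // one hour = 3600000 ms
    }
}
```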

Right now I am looking up all of that data with this query:

    BasicDBObject gtQuery = new BasicDBObject();
    List<BasicDBObject> obj = new ArrayList<BasicDBObject>();
    obj.add(new BasicDBObject("busId", vehicleId));
    obj.add(new BasicDBObject("time", new BasicDBObject("$gt", startDate).append("$lt", endDate)));
    gtQuery.put("$and", obj);
    System.out.println(gtQuery.toString());
    FindIterable<Document> curs = collection.find(gtQuery);
gtQuery output:

{ "$and" : [ { "busId" : "34" } , { "time" : { "$gt" : 1262304705000 , "$lt" : 1262308305000 } } ] }

The query works, but this way it iterates over the entire 1,000,000 elements in the collection.
Is there any way to do this faster?

Try creating a compound index on busId and time, as @ares suggested:

    db.collection.createIndex({ busId: 1, time: 1 })
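To see why the index helps, here is a pure-JDK sketch (no MongoDB driver involved; the data sizes and window are made up for illustration) contrasting a full scan with an index-backed range scan. A `TreeMap` keyed on time plays the role of the index: the bounded `subMap` jumps straight to the matching records instead of testing every document, which is essentially what MongoDB does once the query can use the index on `time`.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RangeScanDemo {
    public static void main(String[] args) {
        // Fake "collection": timestamps 30 ms apart, like the records above.
        TreeMap<Long, String> index = new TreeMap<>(); // stands in for the index on "time"
        long base = 1262304705000L;
        for (int i = 0; i < 100_000; i++) {
            index.put(base + i * 30L, "record-" + i);
        }

        long start = base + 1_000_000L; // arbitrary window inside the data
        long end   = base + 1_003_000L;

        // Without an index: visit every entry and test it (a full collection scan).
        List<String> scanned = new ArrayList<>();
        for (Map.Entry<Long, String> e : index.entrySet()) {
            if (e.getKey() > start && e.getKey() < end) scanned.add(e.getValue());
        }

        // With an index: seek directly to the range; only matching keys are visited.
        List<String> ranged =
                new ArrayList<>(index.subMap(start, false, end, false).values());

        System.out.println(scanned.equals(ranged)); // same results either way
        System.out.println(ranged.size());          // only the records in the window
    }
}
```

The compound index `{busId: 1, time: 1}` goes one step further: it first narrows to the one bus, then range-scans on time within it, which matches this query's shape exactly.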