
Java MongoDB 3.6-4.0 bulk insert limits


According to the official documentation, since version 3.6 the limit on bulk inserts is 100,000 operations (compared to 1,000 operations in older versions).
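
The server advertises this limit (maxWriteBatchSize) in its isMaster reply, and drivers split larger bulks into multiple batches behind the scenes. A minimal sketch for checking the advertised limit, assuming a local unauthenticated mongod (the class name, host and port are placeholders):

    import com.mongodb.MongoClient;
    import org.bson.Document;

    public class BatchSizeCheck {
        public static void main(String[] args) {
            MongoClient client = new MongoClient("localhost", 27017);
            try {
                // isMaster reports the server's limits, including maxWriteBatchSize:
                // 100000 on MongoDB 3.6+, 1000 on older versions.
                Document reply = client.getDatabase("admin").runCommand(new Document("isMaster", 1));
                System.out.println("maxWriteBatchSize: " + reply.get("maxWriteBatchSize"));
            } finally {
                client.close();
            }
        }
    }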

As a benchmark, using the Java client, I inserted 100K documents into a collection with bulk sizes varying from 1 to 100K. Here are the results for some of the configurations:

I saw that version 3.2 gives the same results as 3.6 for bulks larger than 500-1000 ops.

Some information about the setup of the last benchmark:

  • 6 virtual CPUs
  • Windows Server 2012
  • 128 GB RAM
  • MongoDB version 3.6, running mongod
  • wiredTigerCacheSizeGB=20
  • The inserted documents have 5 fields: fname, lname, text1, text2, text3. The last 3 fields contain an identical "Lorem ipsum" paragraph
  • Java MongoDB client: org.mongodb mongo-java-driver, version 3.8.0
Here is the benchmark code:

    // Imports and the enclosing class/constants were not shown in the original
    // post; they are reconstructed here so the snippet compiles. The constant
    // values follow the description above (100K documents, bulk sizes from 1 to
    // 100K, each test repeated a few times and averaged).
    import com.mongodb.MongoClient;
    import com.mongodb.MongoCredential;
    import com.mongodb.ServerAddress;
    import com.mongodb.client.MongoCollection;
    import com.mongodb.client.MongoDatabase;
    import com.mongodb.client.model.BulkWriteOptions;
    import com.mongodb.client.model.InsertOneModel;
    import org.bson.Document;

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Iterator;
    import java.util.List;

    public class BulkInsertBenchmark {

        private static final int DOC_COUNT = 100_000;
        private static final int AVG_FACTOR = 5; // assumed repeat count per bulk size
        private static final int[] TEST_SIZES = {1, 10, 100, 500, 1000, 5000, 10000, 50000, 100000};

        public static void main(String[] args) {
            MongoCredential credential = MongoCredential.createCredential("user", "admin", "password".toCharArray());
            MongoClient mongoClient = new MongoClient(new ServerAddress("localhost", 27017), Arrays.asList(credential));
            MongoDatabase db = mongoClient.getDatabase("bulktest");

            MongoCollection<Document> col = db.getCollection("test1");

            List<InsertOneModel<Document>> content;

            // iterating on different bulk sizes
            for (int size : TEST_SIZES) {
                long total = 0;

                // doing each test a few times to get a more stable average
                for (int a = 0; a < AVG_FACTOR; a++) {

                    // creating the 100K documents for insert BEFORE the benchmark, so that
                    // document creation is not measured as part of the test
                    content = new ArrayList<>();
                    for (int d = 0; d < DOC_COUNT; d++) {
                        content.add(new InsertOneModel<>(createSampleDocument()));
                    }
                    Iterator<InsertOneModel<Document>> iterator = content.iterator();

                    // each 'testBulkWrite' returns the total insert time of 100K documents
                    // at the given bulk size; these times are summed across runs
                    total += testBulkWrite(col, size, iterator);
                }

                System.out.println(String.format("%d", total / AVG_FACTOR));
            }
        }

        private static long testBulkWrite(MongoCollection<Document> col, int bulkSize, Iterator<InsertOneModel<Document>> iterator) {
            long start = System.currentTimeMillis();

            // hand the pre-built models to the server in batches of 'bulkSize'
            for (int i = 0; i < DOC_COUNT; ) {
                ArrayList<InsertOneModel<Document>> bulk = new ArrayList<>();

                for (int j = 0; j < bulkSize && i < DOC_COUNT; j++, i++) {
                    bulk.add(iterator.next());
                }

                // unordered, so the server may apply the batch without preserving order
                col.bulkWrite(bulk, new BulkWriteOptions().ordered(false));
            }
            return System.currentTimeMillis() - start;
        }

        private static Document createSampleDocument() {
            // 5 fields: fname, lname and three fields holding the same "Lorem ipsum" paragraph
            Document doc = new Document();
            doc.append("fname", "My");
            doc.append("lname", "Name");
            for (int i = 0; i < 3; i++) {
                doc.append("text" + i, "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.");
            }
            return doc;
        }
    }
Are there any other inherent bottlenecks? What am I missing?


Thanks

Benchmarking is hard. There are many factors at play, and it's easy to accidentally benchmark your hardware or your code instead of the software you are trying to benchmark. For databases, the bottleneck is most of the time disk performance. Benchmarking on AWS is even trickier, since AWS applies cap & burst cycles to some instance types. How you benchmark is as important as the result. Please post more details about the code you used, the hardware, the compressibility of the documents, whether you run the Java code on the same hardware, the output of iostat, the output of mongostat, etc.

Disk performance could be an issue, but I would expect the very large WiredTiger cache to be fine. I've added some information about my setup to the question body.

A large WiredTiger cache can actually be harmful if the disk is slow, because the cache fills up faster than the disk can write it out. Since you are doing a 100% insert workload, disk speed is usually the bottleneck. Are you using SSDs or spinning disks?

I'm using a 1TB SSD. CrystalDiskMark shows a write speed of 1786.8 MB/s for Seq Q32T1.
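
One way to check whether the cache really is filling faster than the disk can drain it is to poll the WiredTiger cache counters from serverStatus while the benchmark runs. A rough sketch; the statistic names follow WiredTiger's serverStatus output, and the class name, polling interval and connection details are placeholder assumptions:

    import com.mongodb.MongoClient;
    import org.bson.Document;

    public class CacheMonitor {
        public static void main(String[] args) throws InterruptedException {
            MongoClient client = new MongoClient("localhost", 27017);
            try {
                for (int i = 0; i < 60; i++) {
                    // serverStatus exposes the WiredTiger cache counters under "wiredTiger.cache";
                    // a steadily growing dirty-bytes figure suggests the disk can't keep up
                    Document status = client.getDatabase("admin").runCommand(new Document("serverStatus", 1));
                    Document cache = (Document) ((Document) status.get("wiredTiger")).get("cache");
                    System.out.printf("in cache: %s bytes, dirty: %s bytes, max: %s bytes%n",
                            cache.get("bytes currently in the cache"),
                            cache.get("tracked dirty bytes in the cache"),
                            cache.get("maximum bytes configured"));
                    Thread.sleep(1000); // poll once per second while the benchmark runs
                }
            } finally {
                client.close();
            }
        }
    }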