Java: Is there any way to speed up MapDB?

I tested MapDB with integer keys and String values, inserting 10,000,000 elements into it. Here is what I saw:

Processed 1.0E-5  percent of the data  / time so far = 0  seconds 
Processed 1.00001  percent of the data  / time so far = 7  seconds 
Processed 2.00001  percent of the data  / time so far = 14  seconds 
Processed 3.00001  percent of the data  / time so far = 20  seconds 
Processed 4.00001  percent of the data  / time so far = 26  seconds 
Processed 5.00001  percent of the data  / time so far = 33  seconds 
Processed 6.00001  percent of the data  / time so far = 39  seconds 
Processed 7.00001  percent of the data  / time so far = 45  seconds 
Processed 8.00001  percent of the data  / time so far = 53  seconds 
Processed 9.00001  percent of the data  / time so far = 60  seconds 
Processed 10.00001  percent of the data  / time so far = 66  seconds 
Processed 11.00001  percent of the data  / time so far = 73  seconds 
Processed 12.00001  percent of the data  / time so far = 80  seconds 
Processed 13.00001  percent of the data  / time so far = 88  seconds 
Processed 14.00001  percent of the data  / time so far = 96  seconds 
Processed 15.00001  percent of the data  / time so far = 102  seconds 
Processed 16.00001  percent of the data  / time so far = 110  seconds 
Processed 17.00001  percent of the data  / time so far = 119  seconds 
Processed 18.00001  percent of the data  / time so far = 127  seconds 
Processed 19.00001  percent of the data  / time so far = 134  seconds 
Processed 20.00001  percent of the data  / time so far = 141  seconds 
Processed 21.00001  percent of the data  / time so far = 149  seconds 
Processed 22.00001  percent of the data  / time so far = 157  seconds 
Processed 23.00001  percent of the data  / time so far = 164  seconds 
Processed 24.00001  percent of the data  / time so far = 171  seconds 
Processed 25.00001  percent of the data  / time so far = 178  seconds 
.... 
About 2.5 million entries went into the map in 178 seconds; for the full 10 million, that works out to roughly 12 minutes.
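(That is 178 s ÷ 2,500,000 ≈ 71 µs per put, and 10,000,000 × 71 µs ≈ 712 s, i.e. just under 12 minutes.)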

Then I switched to more complex values and the speed dropped drastically (adding the full 10,000,000 entries to the map would take 3-4 days). Does anyone have advice on speeding up MapDB inserts? Has anyone had speed-related experience or issues with MapDB before?
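
One lever that often matters once values get complex is supplying explicit key/value serializers instead of falling back on generic Java serialization. A minimal sketch, assuming the MapDB 1.x builder API (createHashMap with keySerializer/valueSerializer); the file name "serializer-test" and map name "people" are illustrative:

import java.io.File;
import java.util.Map;

import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.Serializer;

public class SerializerSketch {
  public static void main(String[] args) {
    DB db = DBMaker.newFileDB(new File("serializer-test")) // illustrative file name
        .closeOnJvmShutdown()
        .make();

    // Explicit serializers avoid reflective Java serialization per entry,
    // which is usually the dominant cost once values are non-trivial.
    Map<Integer, String> map = db.createHashMap("people")
        .keySerializer(Serializer.INTEGER)
        .valueSerializer(Serializer.STRING)
        .make();

    map.put(1, "value");
    db.commit();
    db.close();
  }
}

For genuinely complex value classes the same idea applies with a hand-written Serializer implementation for that class.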

Here is another evaluation:

Update: I used a generic procedure for creating the map. Here is pseudocode of it:

DB db = DBMaker.newFileDB()....;
... map = db.getHashMap(...);
loop (...) {
  map.put(...);
}
db.commit();
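
For reference, a filled-in version of that pseudocode, assuming the same MapDB 1.x API the test code below uses (DBMaker.newFileDB, getHashMap); the file and collection names are placeholders:

import java.io.File;
import java.util.Map;

import org.mapdb.DB;
import org.mapdb.DBMaker;

public class InsertLoop {
  public static void main(String[] args) {
    // Open (or create) a file-backed store; "insert-test" is a placeholder name.
    DB db = DBMaker.newFileDB(new File("insert-test")).closeOnJvmShutdown().make();
    Map<Integer, String> map = db.getHashMap("collectionName");

    // One commit after the whole loop, exactly as in the pseudocode above.
    for (int i = 0; i < 10_000_000; i++) {
      map.put(i, Integer.toString(i));
    }
    db.commit();
    db.close();
  }
}
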
I had seen the following on the official site:

Concurrency - MapDB has record-level locking and a state-of-the-art concurrent engine. Its performance scales nearly linearly with the number of cores. Data can be written by multiple parallel threads.

I figured that was the answer and wrote a simple test:

package com.stackoverflow.test;

import java.io.File;
import java.util.Date;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.mapdb.*;

public class Test {
  private static final int AMOUNT = 100000;

  private static final class MapAddingThread implements Runnable {
    private Integer fromElement;
    private Integer toElement;
    private Map<Integer, String> map;
    private CountDownLatch countDownLatch;

    public MapAddingThread(CountDownLatch countDownLatch, Map<Integer, String> map, Integer fromElement, Integer toElement) {
      this.countDownLatch = countDownLatch;
      this.map = map;
      this.fromElement = fromElement;
      this.toElement = toElement;
    }

    public void run() {
      for (Integer i = this.fromElement; i < this.toElement; i++) {
        map.put(i, i.toString());
      }
      this.countDownLatch.countDown();
    }

  }

  public static void main(String[] args) throws InterruptedException, ExecutionException {
   // int cores = 1;
    int cores = Runtime.getRuntime().availableProcessors();
    CountDownLatch countDownLatch = new CountDownLatch(cores);
    ExecutorService executorService = Executors.newFixedThreadPool(cores);
    int part = AMOUNT / cores;
    long startTime = new Date().getTime();
    System.out.println("Starting test in " + cores + " threads");
    DB db = DBMaker.newFileDB(new File("testdb5")).cacheDisable().closeOnJvmShutdown().make();
    Map<Integer, String> map = db.getHashMap("collectionName5");
    for (Integer i = 0; i < cores; i++) {
      executorService.execute(new MapAddingThread(countDownLatch, map, i * part, (i + 1) * part));
    }
    countDownLatch.await();
    long endTime = new Date().getTime();
    System.out.println("Filling elements takes : " + (endTime - startTime));
    db.commit();
    System.out.println("Commit takes : " + (new Date().getTime() - endTime));
    executorService.shutdown(); // shut the pool down so its threads exit and the JVM can terminate
    db.close();

  }
}
And got the following results:

Starting test in 4 threads

Filling elements takes : 4424

Commit takes : 901

Then I ran the same code in a single thread:

The code was identical to the multi-threaded test above, except that the thread count was pinned to one:

    int cores = 1;
    // int cores = Runtime.getRuntime().availableProcessors();
And got the following results:

Starting test in 1 thread

Filling elements takes : 3639

Commit takes : 924

So, if I am doing everything correctly, MapDB does not seem to scale with the number of cores.

The only things you can play with:

  • API methods (e.g. the encryption toggle, the cache, TreeMap/HashMap); a sketch of these knobs follows below
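
A sketch of the kind of DBMaker switches that bullet points at, assuming the MapDB 1.x builder; transactionDisable, mmapFileEnableIfSupported and asyncWriteEnable exist in 1.x, but the exact set varies by version, so treat this as a starting point rather than a recipe:

import java.io.File;
import java.util.Map;

import org.mapdb.DB;
import org.mapdb.DBMaker;

public class TunedOpen {
  public static void main(String[] args) {
    // Trade durability for insert speed: no write-ahead log, memory-mapped
    // files where supported, and writes flushed from a background thread.
    DB db = DBMaker.newFileDB(new File("testdb-tuned")) // placeholder file name
        .transactionDisable()        // skip the WAL: faster, but no rollback after a crash
        .mmapFileEnableIfSupported() // memory-mapped files on platforms that allow them
        .asyncWriteEnable()          // hand writes off to a background thread
        .closeOnJvmShutdown()
        .make();

    Map<Integer, String> map = db.getHashMap("tunedMap");
    map.put(1, "value");
    db.close(); // with transactions disabled there is no meaningful commit() to call
  }
}

With transactions disabled, commit() is effectively a no-op and rollback() is unavailable, so durability then depends entirely on a clean close().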