Best way to implement an LRU cache in Java


I want to create an efficient LRU cache implementation. I found that the most convenient way is to use LinkedHashMap, but unfortunately it is quite slow when many threads are using the cache. My implementation is:

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Class provides API for FixedSizeCache.
 * Its inheritors represent classes         
 * with concrete strategies     
 * for choosing elements to delete
 * in case of cache overflow. All inheritors
 * must implement {@link #getSize(K, V)}. 
 */
public abstract class FixedSizeCache <K, V> implements ICache <K, V> {
    /**
     * Current cache size.
     */
    private int currentSize;


    /**
     *  Maximum allowable cache size.
     */
    private int maxSize;


    /**
     * Number of {@link #get(K)} queries for which appropriate {@code value} was found.
     */
    private int keysFound;


    /**
     * Number of {@link #get(K)} queries for which appropriate {@code value} was not found.
     */
    private int keysMissed;


    /** 
     * Number of {@code key-value} associations that were deleted from the cache
     * because of cache overflow.
     */
    private int erasedCount; 


    /**
     * The backing data structure. LinkedHashMap provides a
     * convenient way to implement both types of cache,
     * LRU and FIFO: depending on its constructor parameters
     * it keeps entries in either access order (LRU) or insertion order (FIFO).
     */
    private LinkedHashMap <K, V> entries;


    /** 
     * If the {@code type} parameter is {@code true},
     * the LinkedHashMap is access-ordered and the cache behaves as LRU;
     * otherwise it is insertion-ordered and the cache behaves as FIFO.
     */ 
    public FixedSizeCache(int maxSize, boolean type) {

        if (maxSize <= 0) {
            throw new IllegalArgumentException("int maxSize parameter must be greater than 0");
        }

        this.maxSize = maxSize;
        this.entries = new LinkedHashMap<K, V> (0, 0.75f, type);
    }


    /** 
     * Deletes {@code key-value} associations
     * until the current cache size {@link #currentSize} becomes
     * less than or equal to the maximum allowable
     * cache size {@link #maxSize}.
     */
    private void relaxSize()  {

        while (currentSize > maxSize) {

             // The strategy for choosing the entry with the lowest precedence
             // depends on the {@code type} flag that was used to create {@link #entries}.
             // If {@link #entries} was created with the constructor
             // LinkedHashMap(int initialCapacity, float loadFactor, boolean accessOrder)
             // with {@code type} passed as {@code true}, then it is an
             // LRU LinkedHashMap and the iterator of its entrySet returns elements in order
             // from least recently used to most recently used.
             // Otherwise, if {@code type} was {@code false}, {@link #entries} is a
             // FIFO LinkedHashMap and the iterator returns its entrySet elements in FIFO order -
             // from the oldest entry in the cache to the most recently added.

            Iterator<Map.Entry<K, V>> iterator = entries.entrySet().iterator();

            // If the map is already empty while currentSize is still positive,
            // the implementation of getSize(K, V) must be inconsistent.
            if (!iterator.hasNext()) {
                throw new IllegalStateException(" Implemented method int getSize(K key, V value) " +
                        " returns different results for the same arguments.");  
            }

            Map.Entry<K, V> entryToDelete = iterator.next();

            entries.remove(entryToDelete.getKey());
            currentSize -= getAssociationSize(entryToDelete.getKey(), entryToDelete.getValue());
            erasedCount++;
        }

        if (currentSize < 0) {
            throw new IllegalStateException(" Implemented method int getSize(K key, V value) " +
                    " returns different results for the same arguments.");
        }
    }


    /** 
     * All inheritors must implement this method,
     * which evaluates the weight of a key-value association.
     * The sum of the weights of all key-value associations in the cache
     * equals {@link #currentSize}.
     * The developer must ensure that the
     * implementation satisfies two conditions:
     * <br>1) the method always returns non-negative integers;
     * <br>2) for every two pairs {@code key-value} and {@code key_1-value_1}
     * if {@code key.equals(key_1)} and {@code value.equals(value_1)} then 
     * {@code getSize(key, value)==getSize(key_1, value_1)};
     * <br> Otherwise cache can work incorrectly.
     */
    protected abstract int getSize(K key, V value);


    /** 
     * Helps to detect whether the implementation of the {@link #getSize(K, V)} method
     * returns negative values.
     */
    private int getAssociationSize(K key, V value)  {

        int entrySize = getSize(key, value);

        if (entrySize < 0 ) {
            throw new IllegalStateException("int getSize(K key, V value) method implementation is invalid. It returned negative value.");
        }

        return entrySize;
    }


    /**
     * Returns the {@code value} corresponding to {@code key} or
     * {@code null} if {@code key} is not present in the cache.
     * Increments {@link #keysFound} if a corresponding {@code value} is found,
     * or {@link #keysMissed} otherwise.
     */
    public synchronized final V get(K key)  {

        if (key == null) {
            throw new NullPointerException("K key is null");
        }

        V value = entries.get(key);
        if (value != null) {
            keysFound++;
            return value;
        }

        keysMissed++;
        return value;
    }


    /** 
     * Removes the {@code key-value} association, if any, with the
     * given {@code key}; returns the {@code value} with which it
     * was associated, or {@code null}.
     */
    public synchronized final V remove(K key)  {

        if (key == null) {
            throw new NullPointerException("K key is null");
        }

        V value = entries.remove(key);

        // if a value was present in the cache, then decrease the
        // current cache size

        if (value != null) {
            currentSize -= getAssociationSize(key, value);
        }

        return value;
    }


    /**
     * Adds or replaces a {@code key-value} association.
     * Returns the old {@code value} if the
     * {@code key} was present; otherwise returns {@code null}.
     * If, after inserting the {@code key-value} association,
     * the cache size exceeds the maximum allowable cache size,
     * {@link #relaxSize()} is called to free the needed space.
     */
    public synchronized final V put(K key, V value)  {

        if (key == null || value == null) {
            throw new NullPointerException("K key is null or V value is null");
        }

        currentSize += getAssociationSize(key, value);      
        value = entries.put(key, value);

        // if the key was already present, subtract the size of the old association

        if (value != null) {
            currentSize -= getAssociationSize(key, value);
        }

        // if cache size with new entry is greater
        // than maximum allowable cache size
        // then get some free space

        if (currentSize > maxSize) {
            relaxSize();
        }

        return value;
    }


    /**
     * Returns current size of cache. 
     */
    public synchronized int currentSize() {
        return currentSize;
    }


    /** 
     * Returns maximum allowable cache size. 
     */ 
    public synchronized int maxSize() {
        return maxSize;
    }


    /** 
     * Returns number of {@code key-value} associations that were deleted
     * because of cache overflow.   
     */
    public synchronized int erasedCount() {
        return erasedCount;
    }


    /** 
     * Returns the number of {@link #get(K)} queries for which an appropriate {@code value} was found.
     */
    public synchronized int keysFoundCount() {
        return keysFound;
    }


    /** 
     * Returns the number of {@link #get(K)} queries for which an appropriate {@code value} was not found.
     */
    public synchronized int keysMissedCount() {
        return keysMissed;
    }


    /**
     * Removes all {@code key-value} associations
     * from the cache and resets {@link #currentSize},
     * {@link #keysFound}, {@link #keysMissed} and {@link #erasedCount} to zero.
     */
    public synchronized void clear() {
        entries.clear();
        currentSize = 0;
        keysMissed = 0;
        keysFound = 0;
        erasedCount = 0;
    }


    /**
     * Returns a copy of {@link #entries}
     * that has the same content.
     */
    public synchronized LinkedHashMap<K, V> getCopy() {
        return new LinkedHashMap<K, V> (entries);
    }
}
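
For reference, a concrete LRU cache built on this class only needs to supply the weight function. Below is a minimal sketch, assuming every association weighs 1 so that maxSize is simply the maximum number of entries (the ICache interface from the question is not shown here and is assumed to declare get, put and remove):

/**
 * Minimal sketch: every association weighs 1, so {@code maxSize}
 * is the maximum number of entries kept in the cache.
 */
public class LruCache<K, V> extends FixedSizeCache<K, V> {

    public LruCache(int maxEntries) {
        // true -> access-ordered LinkedHashMap, i.e. LRU eviction
        super(maxEntries, true);
    }

    @Override
    protected int getSize(K key, V value) {
        // constant weight: both contract conditions are trivially satisfied
        return 1;
    }
}

LruCache<String, String> cache = new LruCache<>(2);
cache.put("a", "1");
cache.put("b", "2");
cache.get("a");          // "a" becomes the most recently used entry
cache.put("c", "3");     // evicts "b", the least recently used entry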
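One suggested alternative keeps LinkedHashMap but wraps an access-ordered instance, whose removeEldestEntry evicts entries once the map grows beyond a fixed size, in Collections.synchronizedMap, and measures the throughput of that combination:
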
// access-ordered LinkedHashMap that evicts its eldest entry once it holds more
// than 1000 elements, wrapped in a synchronized view for thread safety
Map<Object, Object> map = Collections.synchronizedMap(new LinkedHashMap<Object, Object>(16, 0.7f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
        return size() > 1000;
    }
});
Integer[] values = new Integer[10000];
for (int i = 0; i < values.length; i++)
    values[i] = i;

// time 1000 passes over the working set: four gets and one put per key
long start = System.nanoTime();
for (int i = 0; i < 1000; i++) {
    for (int j = 0; j < values.length; j++) {
        map.get(values[j]);
        map.get(values[j / 2]);
        map.get(values[j / 3]);
        map.get(values[j / 4]);
        map.put(values[j], values[j]);
    }
}
long time = System.nanoTime() - start;
long rate = (5 * values.length * 1000) * 1000000000L / time;
System.out.printf("Performed get/put operations at a rate of %,d per second%n", rate);
Performed get/put operations at a rate of 27,170,035 per second
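
For an apples-to-apples comparison, the same loop can be pointed at the cache from the question. A sketch, assuming the LruCache subclass defined above (and the ICache interface the question references but does not show):

// Hypothetical comparison run against the question's FixedSizeCache; the loop
// mirrors the benchmark above so the two throughput figures are comparable.
LruCache<Integer, Integer> cache = new LruCache<>(1000);
Integer[] values = new Integer[10000];
for (int i = 0; i < values.length; i++)
    values[i] = i;

long start = System.nanoTime();
for (int i = 0; i < 1000; i++) {
    for (int j = 0; j < values.length; j++) {
        cache.get(values[j]);
        cache.get(values[j / 2]);
        cache.get(values[j / 3]);
        cache.get(values[j / 4]);
        cache.put(values[j], values[j]);
    }
}
long time = System.nanoTime() - start;
long rate = (5L * values.length * 1000) * 1000000000L / time;
System.out.printf("FixedSizeCache: %,d get/put operations per second%n", rate);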