
Three-Level Image Cache Strategy in Android (Memory, File, Network), Part 2: The Memory Cache

Preface

I remember writing a post about a banner component quite a while ago. Many readers asked me for the code, and in particular for the image management factory used inside the banner, ImageManager. To follow the story below, it helps to read two earlier posts of mine: "Three-Level Image Cache Strategy in Android (Memory, File, Network), Part 1" and "Implementing Horizontal Screen Sliding in Android (an Ad Banner Component)". I did not publish the code at the time for a few reasons: there is a lot of it; I drew on three-level cache strategies already circulating online (they are all broadly similar) and adopted the open-source LruCache page-replacement algorithm (least recently used) from the Android project; and it came from a real project, so it was not convenient to release directly. I have now decided to release it with minor modifications.

A few words about the banner itself. Frankly, it involves a lot of code; using something like ViewPager would cut that down considerably. But what I value most is the banner's design: its encapsulation and its event delivery. As an exercise in packaging and architecting a custom view, I still consider the banner extremely successful, especially once combined with ImageManager: the whole feature becomes seamless and highly cohesive, and it is very easy to use, taking as few as two lines of code. You do not need to import any XML or handle the JSON fetching yourself, because the relevant business logic is encapsulated inside the banner, which exposes only a handful of interfaces; implement them and you can interact with the banner's internals. Below I introduce part two of the three-level cache strategy: the memory cache.

The Memory Cache Strategy

When an image needs to be downloaded, we do not go straight to the network. In this day and age users' mobile data is precious, and a data-hungry app will not win their favor. So what do we do instead? We first look for the image in the memory cache; if it is not there, we look in the file cache; and only if that also misses do we download it from the network. This post focuses on the memory cache. Its lookup strategy is: search the strong-reference cache first; on a miss, search the soft-reference cache, and on a hit there, promote the image back into the strong-reference cache. When the strong-reference cache is full, the LRU algorithm demotes some images into the soft-reference cache; when the soft-reference cache is also full, its oldest entries are dropped. A few concepts need explaining here: strong references, soft references, weak references, and LRU.

Strong reference: a direct reference to an object; ordinary object references are strong references.

Soft reference: a reference that allows the object to be reclaimed by the GC when memory runs low, provided nothing else still references it.

Weak reference: a reference that allows the object to be reclaimed on any GC pass, provided nothing else still references it (note the difference from a soft reference: no memory pressure is required).
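The difference between the two reference types can be seen with java.lang.ref directly. This is a minimal sketch; GC timing is VM-dependent, so the post-gc() behavior is described in comments rather than asserted:

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class RefDemo {
    public static void main(String[] args) {
        byte[] data = new byte[1024];

        // Both reference types hand back the referent via get() while it is
        // still strongly reachable.
        SoftReference<byte[]> soft = new SoftReference<byte[]>(data);
        WeakReference<byte[]> weak = new WeakReference<byte[]>(data);
        System.out.println(soft.get() == data); // true
        System.out.println(weak.get() == data); // true

        // Once the last strong reference is dropped, a weakly referenced
        // object is eligible for collection on the very next GC cycle,
        // while a softly referenced one is kept until the VM is under
        // memory pressure. That is exactly why a soft reference suits a
        // second cache tier: it survives until memory is actually needed.
        data = null;
        System.gc();
        // weak.get() is now typically null; soft.get() usually still
        // returns the array because memory is not yet scarce.
    }
}
```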

LRU (Least Recently Used) is a page-replacement algorithm: with a fixed cache capacity, the pages used least recently are the ones evicted. In our memory cache the strong-reference cache is fixed at 4 MB; when the cached images exceed 4 MB, some images are evicted from the strong-reference cache, namely those that were used least recently.
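LRU eviction can be demonstrated in a few lines with an access-ordered LinkedHashMap, the same mechanism LruCache is built on (newLru is a hypothetical helper for this sketch; the real LruCache below sizes entries in bytes rather than counting them):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // An access-ordered LinkedHashMap that evicts its eldest entry
    // once it holds more than `max` entries.
    static LinkedHashMap<String, String> newLru(final int max) {
        return new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > max;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = newLru(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touching "a" makes "b" the least recently used
        cache.put("c", "3"); // capacity exceeded: "b" is evicted, not "a"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```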

Code

public class ImageMemoryCache {
    /**
     * Reading from memory is the fastest, so to make the most of it this class
     * uses two cache tiers: a strong-reference cache that is not easily
     * reclaimed and holds frequently used data, and a soft-reference cache
     * that less frequently used data is demoted into.
     */
    private static final String TAG = "ImageMemoryCache";

    private static LruCache<String, Bitmap> mLruCache; // strong-reference cache

    private static LinkedHashMap<String, SoftReference<Bitmap>> mSoftCache; // soft-reference cache

    private static final int LRU_CACHE_SIZE = 4 * 1024 * 1024; // strong-reference cache capacity: 4 MB

    private static final int SOFT_CACHE_NUM = 20; // maximum number of entries in the soft-reference cache

    // Initialize the strong-reference cache and the soft-reference cache
    public ImageMemoryCache() {
        mLruCache = new LruCache<String, Bitmap>(LRU_CACHE_SIZE) {
            @Override
            // sizeOf returns the size of a single cached value in bytes
            protected int sizeOf(String key, Bitmap value) {
                if (value != null)
                    return value.getRowBytes() * value.getHeight();
                else
                    return 0;
            }

            @Override
            protected void entryRemoved(boolean evicted, String key,
                    Bitmap oldValue, Bitmap newValue) {
                if (oldValue != null) {
                    // When the strong-reference cache is full, the LRU policy
                    // demotes the least recently used image into this
                    // soft-reference cache
                    Logger.d(TAG, "LruCache is full, move to SoftReferenceCache");
                    mSoftCache.put(key, new SoftReference<Bitmap>(oldValue));
                }
            }
        };

        mSoftCache = new LinkedHashMap<String, SoftReference<Bitmap>>(
                SOFT_CACHE_NUM, 0.75f, true) {
            private static final long serialVersionUID = 1L;

            /**
             * When the number of soft references exceeds SOFT_CACHE_NUM, the
             * eldest entry is removed from the linked hash map.
             */
            @Override
            protected boolean removeEldestEntry(
                    Entry<String, SoftReference<Bitmap>> eldest) {
                if (size() > SOFT_CACHE_NUM) {
                    Logger.d(TAG, "should remove the eldest from SoftReference");
                    return true;
                }
                return false;
            }
        };
    }

    /**
     * Retrieve a bitmap from the memory cache.
     */
    public Bitmap getBitmapFromMemory(String url) {
        Bitmap bitmap;

        // Try the strong-reference cache first
        synchronized (mLruCache) {
            bitmap = mLruCache.get(url);
            if (bitmap != null) {
                // Hit. LruCache's access-ordered map already refreshes this
                // entry's recency on get(), so no remove/re-put is needed
                // (a re-put would fire entryRemoved and leave a stale copy
                // in the soft cache).
                Logger.d(TAG, "get bmp from LruCache,url=" + url);
                return bitmap;
            }
        }

        // On a strong-cache miss, try the soft cache; on a hit there,
        // promote the bitmap back into the strong cache
        synchronized (mSoftCache) {
            SoftReference<Bitmap> bitmapReference = mSoftCache.get(url);
            if (bitmapReference != null) {
                bitmap = bitmapReference.get();
                if (bitmap != null) {
                    // Promote the bitmap back into the LruCache
                    mLruCache.put(url, bitmap);
                    mSoftCache.remove(url);
                    Logger.d(TAG, "get bmp from SoftReferenceCache, url=" + url);
                    return bitmap;
                } else {
                    mSoftCache.remove(url);
                }
            }
        }
        return null;
    }

    /**
     * Add a bitmap to the cache.
     */
    public void addBitmapToMemory(String url, Bitmap bitmap) {
        if (bitmap != null) {
            synchronized (mLruCache) {
                mLruCache.put(url, bitmap);
            }
        }
    }

    public void clearCache() {
        synchronized (mLruCache) {
            // evictAll() fires entryRemoved(), which demotes entries into the
            // soft cache, so clear the soft cache afterwards
            mLruCache.evictAll();
        }
        synchronized (mSoftCache) {
            mSoftCache.clear();
        }
    }
}
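Since Bitmap and Logger are Android-only, the two-tier lookup above can be exercised off-device with a plain-Java analogue. TwoTierCache is a hypothetical name, String stands in for Bitmap, and the strong tier is capped by entry count rather than bytes to keep the sketch short:

```java
import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;
import java.util.Map;

public class TwoTierCache {
    // Soft tier: entries demoted from the strong tier land here.
    private final LinkedHashMap<String, SoftReference<String>> soft =
            new LinkedHashMap<String, SoftReference<String>>(16, 0.75f, true);

    private final int lruMax;
    // Strong tier: an access-ordered map capped at lruMax entries.
    private final LinkedHashMap<String, String> lru;

    public TwoTierCache(final int lruMax) {
        this.lruMax = lruMax;
        this.lru = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                if (size() > lruMax) {
                    // Demote the evicted entry instead of dropping it,
                    // mirroring entryRemoved() in ImageMemoryCache.
                    soft.put(eldest.getKey(), new SoftReference<String>(eldest.getValue()));
                    return true;
                }
                return false;
            }
        };
    }

    public void put(String key, String value) {
        lru.put(key, value);
    }

    public String get(String key) {
        String v = lru.get(key);            // 1. strong tier (refreshes recency)
        if (v != null) return v;
        SoftReference<String> ref = soft.remove(key);
        if (ref != null) {
            v = ref.get();                  // may be null if the GC reclaimed it
            if (v != null) lru.put(key, v); // 2. promote back to the strong tier
        }
        return v;                           // 3. null -> caller falls through to disk/network
    }
}
```

Filling the cache past its strong-tier capacity demotes the eldest entry, and a later get() on that key pulls it back from the soft tier, which is exactly the flow getBitmapFromMemory implements.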

In addition, here is the LruCache class for reference:

/*
 * Copyright (C) 2011 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/**
 * A cache that holds strong references to a limited number of values. Each time
 * a value is accessed, it is moved to the head of a queue. When a value is
 * added to a full cache, the value at the end of the queue is evicted.
 *
 * If your cached values hold resources that need to be explicitly released,
 * override {@link #entryRemoved}.
 *
 * If a cache miss should be computed on demand for the corresponding key,
 * override {@link #create}. This simplifies the calling code, allowing it to
 * assume a value will always be returned.
 *
 * By default, the cache size is measured in the number of entries. Override
 * {@link #sizeOf} to size the cache in different units, e.g. the byte count of
 * bitmaps:
 *  
 * <pre>   {@code
 *   int cacheSize = 4 * 1024 * 1024; // 4MiB
 *   LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
 *       protected int sizeOf(String key, Bitmap value) {
 *           return value.getByteCount();
 *       }
 *   }}</pre>
 *
 * <p>This class is thread-safe. Perform multiple cache operations atomically by
 * synchronizing on the cache: <pre>   {@code
 *   synchronized (cache) {
 *     if (cache.get(key) == null) {
 *         cache.put(key, value);
 *     }
 *   }}</pre>
 *
 * Keys and values must not be null. A null return from {@link #get},
 * {@link #put}, or {@link #remove} is unambiguous: the key was not in the cache.
 */
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;          // current size, in the units defined by sizeOf()
    private int maxSize;       // maximum allowed size

    private int putCount;      // number of put() calls
    private int createCount;   // number of create() calls
    private int evictionCount; // number of evictions
    private int hitCount;      // number of cache hits
    private int missCount;     // number of cache misses

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code create(key)}. A returned value is moved to the head of
     * the queue. Returns null if a value is not cached and cannot be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */

        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    /**
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }
            
                /*
                 * Map.Entry<K, V> toEvict = map.eldest();
                 */
                // modified by echy: map.eldest() is not part of the public
                // LinkedHashMap API, so fetch the eldest entry via an iterator
                Map.Entry<K, V> toEvict = null;
                Iterator<Entry<K, V>> iter = map.entrySet().iterator();
                if (iter.hasNext()) {
                    toEvict = iter.next();
                }

                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units.  The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}