1. What is LruCache?
Background on the related Map implementations:
- HashMap: backed by a hash table (hashCode/equals); not thread-safe, but fast for key lookups
- LinkedHashMap: a hash table plus a linked list (preserves ordering)
- TreeMap: backed by a red-black tree (ordered, either by natural ordering or by a comparator)
Before working out what LruCache is, you first need to understand Android's caching strategy. The idea is simple: the first time the user loads an image, it comes over the network; on subsequent loads, the image is fetched from memory or disk instead of the network.
A caching strategy covers adding, retrieving, and deleting entries. Why delete? Because every device has limited capacity; once the cache is full, something has to be removed.
So what is LruCache? LRU stands for Least Recently Used: the core idea of the algorithm is to evict first the cache entries that were used least recently.
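The eviction order can be sketched with a plain JDK LinkedHashMap configured for access ordering (a minimal stand-in for LruCache; the capacity of 3 and the keys here are purely illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Build a tiny LRU cache from a plain LinkedHashMap: accessOrder=true makes
    // iteration order run from least- to most-recently used, and
    // removeEldestEntry() evicts the head entry once capacity is exceeded.
    static Map<String, String> newCache(final int capacity) {
        return new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > capacity;
            }
        };
    }

    public static void main(String[] args) {
        Map<String, String> cache = newCache(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3");
        cache.get("a");      // touch "a": it becomes the most recently used
        cache.put("d", "4"); // over capacity: evicts "b", the least recently used
        System.out.println(cache.keySet()); // [c, a, d]
    }
}
```

Note that "b" is evicted rather than "a", even though "a" was inserted first: the read of "a" refreshed its position, which is exactly the LRU behavior described above.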
How LruCache Works
LruCache is a generic class. Internally it uses a LinkedHashMap that holds strong references to the cached objects, and it exposes get and put methods for reading and adding cache entries. When the cache is full, LruCache evicts the eldest entries before adding new ones. Readers who are not yet clear on Java's four reference types can look them up elsewhere; they are not covered here.
Before walking through the source, a few features of LinkedHashMap:
LinkedHashMap differs from HashMap in that it maintains a doubly-linked list running through all of its entries. This linked list defines the iteration order, which can be either insertion order or access order.
LinkedHashMap extends HashMap and stores its elements in a hash table plus a doubly-linked list. Its basic operations mirror those of its parent HashMap; it overrides the relevant parent methods to implement its linked-list behavior.
- The Entry element:
LinkedHashMap uses the same hash algorithm as HashMap, but it redefines the Entry stored in the table: besides the reference to the current object, each Entry also holds references to the previous element (before) and the next element (after), building a doubly-linked list on top of the hash table.
/**
 * Head of the doubly-linked list.
 */
private transient Entry<K,V> header;

/**
 * LinkedHashMap's Entry element. It extends HashMap's Entry and additionally
 * keeps references to the previous element (before) and the next element (after).
 */
private static class Entry<K,V> extends HashMap.Entry<K,V> {
    Entry<K,V> before, after;
    ...
}
- Reads:
LinkedHashMap overrides HashMap's get method. After calling the parent's getEntry() to look up the element, it checks whether access ordering is enabled (accessOrder == true); if so, it records the access by relinking the just-accessed element to the tail of the doubly-linked list and removing it from its old position (this is the property that makes LRU, least-recently-used, possible). Since linked-list insertion and deletion are constant-time operations, this adds no meaningful overhead.
@Override public V get(Object key) {
    /*
     * This method is overridden to eliminate the need for a polymorphic
     * invocation in superclass at the expense of code duplication.
     */
    if (key == null) {
        HashMapEntry<K, V> e = entryForNullKey;
        if (e == null)
            return null;
        if (accessOrder)
            makeTail((LinkedEntry<K, V>) e);
        return e.value;
    }

    int hash = Collections.secondaryHash(key);
    HashMapEntry<K, V>[] tab = table;
    for (HashMapEntry<K, V> e = tab[hash & (tab.length - 1)];
            e != null; e = e.next) {
        K eKey = e.key;
        if (eKey == key || (e.hash == hash && key.equals(eKey))) {
            if (accessOrder)
                makeTail((LinkedEntry<K, V>) e);
            return e.value;
        }
    }
    return null;
}
/**
 * Relinks the given entry to the tail of the list. Under access ordering,
 * this method is invoked whenever the value of a pre-existing entry is
 * read by Map.get or modified by Map.put.
 */
private void makeTail(LinkedEntry<K, V> e) {
    // Unlink e
    e.prv.nxt = e.nxt;
    e.nxt.prv = e.prv;

    // Relink e as tail
    LinkedEntry<K, V> header = this.header;
    LinkedEntry<K, V> oldTail = header.prv;
    e.nxt = header;
    e.prv = oldTail;
    oldTail.nxt = header.prv = e;
    modCount++;
}
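The same tail-relinking behavior can be observed on the stock JDK LinkedHashMap when it is constructed with accessOrder = true (the keys here are illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // Third constructor argument accessOrder=true: iteration runs from the
        // least recently used entry (head) to the most recently used (tail).
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        System.out.println(map.keySet()); // [a, b, c]: nothing accessed yet
        map.get("a"); // reading "a" relinks it to the tail, like makeTail() above
        System.out.println(map.keySet()); // [b, c, a]
    }
}
```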
Summary
LRU (Least Recently Used) is the least-recently-used eviction algorithm, and LruCache is simply a cache built on it. In short: you set a cache size; when the cache runs short of room, the entries that were least recently used (i.e., unused for the longest time) are evicted; then the new entries are cached.
1. LruCache (backed by LinkedHashMap) is a caching mechanism based on the LRU algorithm:
   - LruCache uses a LinkedHashMap to hold strong references to the cached objects.
   - The total cache size is typically 1/8 of the app's available memory; once the total is exceeded, the least recently used element, i.e. the head element of the internal LinkedHashMap, is removed.
   - After an element is accessed via get(), it is moved to the tail of the LinkedHashMap.
2. The LRU algorithm removes the least recently used data, provided of course that the amount of cached data exceeds the configured maximum.
3. LruCache does not actually free memory; it only removes entries from its map. Actually releasing the memory is still up to the caller.
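The 1/8-of-available-memory sizing rule mentioned above is conventionally computed from Runtime.maxMemory(). A sketch of that computation (the KB unit and the 1/8 fraction are the common convention, not a requirement):

```java
public class CacheSizing {
    static int cacheSizeKb() {
        // Maximum amount of memory the VM will attempt to use, in kilobytes.
        int maxMemoryKb = (int) (Runtime.getRuntime().maxMemory() / 1024);
        // Use 1/8 of it as the cache budget.
        return maxMemoryKb / 8;
    }

    public static void main(String[] args) {
        System.out.println("cache budget: " + cacheSizeKb() + " KB");
        // On Android, this value would typically be passed to the LruCache
        // constructor, paired with a sizeOf() override that measures each
        // entry in the same KB units.
    }
}
```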
Key points
- LruCache is implemented on top of LinkedHashMap, so what it stores are key-value pairs.
- LruCache itself holds strong references to its cache entries.
- Reads and writes on LruCache are thread-safe: put(K key, V value) and get(K key) are internally synchronized.
- Neither key nor value may be null, so a null result from get unambiguously means a cache miss.
- Override the sizeOf(K key, V value) method to define how entry sizes are measured.
- Override entryRemoved(boolean evicted, K key, V oldValue, V newValue) and create(K key) as needed.
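The effect of overriding sizeOf, namely sizing the cache in user-defined units rather than entry counts, can be sketched in plain Java. This is a hypothetical simplified mini-cache, not the Android class; here each String value costs its length:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A hypothetical, simplified cache sized in user-defined units, mimicking
// LruCache's sizeOf/trimToSize contract (not the real Android class).
public class MiniSizedCache {
    private final LinkedHashMap<String, String> map =
            new LinkedHashMap<>(0, 0.75f, true); // accessOrder=true, as in LruCache
    private final int maxSize;
    private int size;

    public MiniSizedCache(int maxSize) {
        this.maxSize = maxSize;
    }

    // Counterpart of LruCache.sizeOf(): an entry's size is the value's length.
    private int sizeOf(String value) {
        return value.length();
    }

    public synchronized void put(String key, String value) {
        size += sizeOf(value);
        String previous = map.put(key, value);
        if (previous != null) {
            size -= sizeOf(previous);
        }
        trimToSize();
    }

    public synchronized String get(String key) {
        return map.get(key);
    }

    // Counterpart of LruCache.trimToSize(): evict eldest entries until the
    // total size is back under budget.
    private void trimToSize() {
        while (size > maxSize && !map.isEmpty()) {
            Map.Entry<String, String> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            size -= sizeOf(eldest.getValue());
        }
    }

    public synchronized java.util.Set<String> keys() {
        return map.keySet();
    }
}
```

With a budget of 10 units, caching "12345" under "a" and "b" (5 units each) fills the cache; adding "123" under "c" pushes the total to 13, so the eldest entry "a" is evicted, not merely the newest one rejected.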
Source Code Analysis
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;          // current cache size
    private int maxSize;       // maximum cache size
    private int putCount;      // number of put calls
    private int createCount;   // number of created values
    private int evictionCount; // number of evictions
    private int hitCount;      // number of cache hits
    private int missCount;     // number of cache misses

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
    /**
     * Sets the size of the cache.
     *
     * @param maxSize The new maximum size.
     */
    public void resize(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        synchronized (this) {
            this.maxSize = maxSize;
        }
        trimToSize(maxSize);
    }
    /**
     * Returns the value mapped to {@code key}, creating one via {@link #create}
     * if necessary. If a value is returned, it is moved to the most recently
     * used end of the queue. Returns null if no value is cached and one cannot
     * be created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * On a miss, attempt to create a value. This may take a long time, and
         * during creation the map may change: if a conflicting value was added
         * to the map while create() was working, that value is kept and the
         * created one is released.
         */
        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);
            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            // check whether the cache is now over its size limit
            trimToSize(maxSize);
            return createdValue;
        }
    }
    /**
     * Caches {@code value} for {@code key}. The value is moved to the most
     * recently used end of the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }
    /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }

                Map.Entry<K, V> toEvict = map.eldest();
                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }
    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units. The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}
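One detail worth noting in trimToSize(): map.eldest() is an Android-internal addition to LinkedHashMap. The stock JDK LinkedHashMap has no such method; the equivalent there is taking the first entry from the iterator, since under access ordering the head of the list is the least recently used entry. A sketch of that equivalence:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class EldestDemo {
    // JDK equivalent of Android's LinkedHashMap.eldest(): the entry at the head
    // of the doubly-linked list, i.e. the least recently used one under
    // accessOrder, or null if the map is empty.
    static <K, V> Map.Entry<K, V> eldest(LinkedHashMap<K, V> map) {
        return map.isEmpty() ? null : map.entrySet().iterator().next();
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(0, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.get("a"); // "a" becomes most recently used, so "b" is now eldest
        System.out.println(eldest(map).getKey()); // b
    }
}
```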