Java Source Code Analysis: HashMap
Preface
I have been doing development work for quite a while, focusing only on business code while neglecting some of Java's own fundamentals, so I am now starting to read the JDK source, beginning with the collections. This first article looks at the HashMap source.
Java Collections Framework
This section quotes from the article at http://blog.csdn.net/ns_code/article/details/35564663. The Java collections package lives in java.util and contains many commonly used data structures, such as arrays, linked lists, stacks, queues, sets, and hash tables. Learning the Java collections framework can be divided into roughly five parts: List, Set, Map, iterators (Iterator, Enumeration), and utility classes (Arrays, Collections).
The overall framework of the Java collection classes is as follows:
As the figure above shows, the collection classes fall into two broad families: Collection and Map.
Collection is the interface abstracted from collections such as List and Set. It contains their basic operations and is divided into two main parts: List and Set.
The List interface usually represents a list (array, queue, linked list, stack, and so on) whose elements may repeat. Its common implementations are ArrayList and LinkedList, plus the rarely used Vector. In addition, LinkedList implements the Queue interface, so it can also be used as a queue.
The Set interface usually represents a set whose elements may not repeat (a guarantee enforced through hashCode and equals). Its common implementations are HashSet and TreeSet; HashSet is implemented on top of HashMap, and TreeSet on top of TreeMap. TreeSet also implements the SortedSet interface, so it is an ordered set (its elements must implement the Comparable interface and override compareTo, or the set must be constructed with a Comparator).
Notice that the abstract classes AbstractCollection, AbstractList, and AbstractSet implement the Collection, List, and Set interfaces respectively. This is the adapter-style design used throughout the Java collections framework: the abstract classes implement some or all of the interface methods, so the concrete classes below them only need to extend the abstract class and implement the methods they actually need, rather than every abstract method of the interface.
Map is a mapping interface in which every element is a key-value pair. Likewise, the abstract class AbstractMap implements most of the Map interface, and implementations such as TreeMap, HashMap, and WeakHashMap extend AbstractMap. The rarely used Hashtable implements Map directly; it and Vector are collection classes introduced back in JDK 1.0.
Iterator is the iterator for traversing a Collection (it cannot traverse a Map directly; it is only used for Collection). Every Collection implementation provides an iterator() method that returns an Iterator for traversing the collection, and ListIterator is specialized for traversing a List. Enumeration, introduced in JDK 1.0, serves the same purpose as Iterator but offers less functionality; it is only used in Hashtable, Vector, and Stack.
Arrays and Collections are two utility classes for operating on arrays and collections. For example, ArrayList and Vector call Arrays.copyOf() extensively, and Collections offers many static methods that return synchronized, i.e. thread-safe, wrappers of the collection classes. That said, if you need thread-safe collections, the corresponding classes in the java.util.concurrent package are the better first choice.
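As a small illustration of the two utility classes just mentioned, the sketch below exercises Arrays.copyOf (the growth primitive used inside ArrayList and Vector) and a synchronized wrapper from Collections; it assumes nothing beyond the standard java.util API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class UtilClassesDemo {
    public static void main(String[] args) {
        // Arrays.copyOf is what ArrayList uses internally when it grows:
        // the new slots beyond the original length are filled with null
        Integer[] grown = Arrays.copyOf(new Integer[] {1, 2, 3}, 5);
        System.out.println(grown.length);      // 5
        System.out.println(grown[4]);          // null

        // Collections can wrap any List in a synchronized (thread-safe) view
        List<Integer> syncList = Collections.synchronizedList(new ArrayList<Integer>());
        syncList.add(42);
        System.out.println(syncList.get(0));   // 42
    }
}
```

The synchronized wrapper only locks individual method calls; compound operations (check-then-act, iteration) still need external synchronization, which is one reason the java.util.concurrent classes are usually preferred.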
HashMap Overview
HashMap is implemented on top of a hash table. Its underlying storage is an array of Entry chains whose length is always a power of two. On put, the key is hashed and the hash value h is ANDed with length - 1 (h & (length - 1)); the result is the key's index in the Entry array. This only works on the precondition that the array length is a power of two. When a hash collision occurs (that is, two keys hash to the same index), HashMap checks whether the chain at that index already contains the current key: if it does, the corresponding value is replaced; if not, a new Entry object is placed at that index with its next pointer referencing the Entry that was previously there. HashMap implements all the methods of the Map interface and permits both null keys and null values. It is broadly similar to Hashtable, which also implements all the Map methods, but HashMap is not thread-safe, while Hashtable is synchronized and, unlike HashMap, permits neither null keys nor null values. Finally, HashMap guarantees neither the iteration order of its elements nor that the order will stay the same over time.
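The put/get behavior described above (value replacement for a duplicate key, and support for a null key) can be checked against the standard java.util.HashMap; this is a minimal sketch, separate from the source being analyzed:

```java
import java.util.HashMap;
import java.util.Map;

public class HashMapBasics {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1));   // null: no previous mapping for "a"
        System.out.println(map.put("a", 2));   // 1: the old value is returned on replace
        map.put(null, 3);                      // a null key is allowed (stored in bucket 0)
        System.out.println(map.get(null));     // 3
        System.out.println(map.size());        // 2: keys "a" and null
    }
}
```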
Capacity (capacity) and load factor (loadFactor)
capacity is the number of buckets in the hash table, and loadFactor is the fraction of that capacity which may be filled before the table grows. When the number of entries stored in the table reaches capacity * loadFactor, the capacity is doubled: the new bucket count is the smallest power of two that is twice the current count. Note that capacity is a power of two both before and after resizing; this is required because the index formula h & (length - 1) only distributes keys correctly when length is a power of two.
Source Code Analysis
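Before diving into the source, the two helpers this section relies on, rounding a requested size up to a power of two and masking a hash into a bucket index, can be sketched in isolation. The method names mirror the JDK's roundUpToPowerOf2 and indexFor, but this is a simplified re-implementation (it omits the MAXIMUM_CAPACITY clamp):

```java
public class CapacityDemo {
    // Smallest power of two >= number (assumes 1 <= number < 2^30)
    static int roundUpToPowerOf2(int number) {
        int highest = Integer.highestOneBit(number);
        if (highest == 0) return 1;                        // number == 0
        // exact power of two: keep it; otherwise go one power higher
        return Integer.bitCount(number) > 1 ? highest << 1 : highest;
    }

    // Works only because length is a power of two, so (length - 1)
    // is an all-ones bit mask selecting the low bits of h
    static int indexFor(int h, int length) {
        return h & (length - 1);
    }

    public static void main(String[] args) {
        System.out.println(roundUpToPowerOf2(17)); // 32
        System.out.println(roundUpToPowerOf2(16)); // 16
        System.out.println(indexFor(0x7E, 16));    // 14 (0x7E & 0x0F)
    }
}
```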
import java.io.*;
import java.util.*;
public class HashMap<K,V>
extends AbstractMap<K,V>
implements Map<K,V>, Cloneable, Serializable
{
/**
* The default initial capacity - MUST be a power of two.
The default initial capacity is 16; the capacity must always be a power of two.
*/
static final int DEFAULT_INITIAL_CAPACITY = 1 << 4; // aka 16
/**
* The maximum capacity, used if a higher value is implicitly specified
* by either of the constructors with arguments.
* MUST be a power of two <= 1<<30.
When a HashMap is constructed with arguments, the maximum capacity is 2^30; a larger requested capacity is clamped to this value.
*/
static final int MAXIMUM_CAPACITY = 1 << 30;
/**
* The load factor used when none specified in constructor.
The default load factor is 0.75f.
*/
static final float DEFAULT_LOAD_FACTOR = 0.75f;
/**
* An empty table instance to share when the table is not inflated.
Before the hash table is inflated, it is initialized to this shared empty table.
*/
static final Entry<?,?>[] EMPTY_TABLE = {};
/**
* The table, resized as necessary. Length MUST Always be a power of two.
The hash table, resized when necessary; its length must always be a power of two.
*/
transient Entry<K,V>[] table = (Entry<K,V>[]) EMPTY_TABLE;
/**
* The number of key-value mappings contained in this map.
The number of key-value pairs stored in the map.
*/
transient int size;
/**
* The next size value at which to resize (capacity * load factor).
* The resize threshold, used to decide when to grow the table (threshold = capacity * loadFactor).
*/
//When the HashMap is constructed, the table is still empty and threshold temporarily holds the initial capacity;
//once inflateTable initializes the table, threshold is set to capacity * loadFactor
int threshold;
//Load factor
final float loadFactor;
//Number of structural modifications to the HashMap, used for fail-fast iteration
transient int modCount;
/**
* The default threshold of map capacity above which alternative hashing is
* used for String keys. Alternative hashing reduces the incidence of
* collisions due to weak hash code calculation for String keys.
* <p/>
* This value may be overridden by defining the system property
* {@code jdk.map.althashing.threshold}. A property value of {@code 1}
* forces alternative hashing to be used at all times whereas
* {@code -1} value ensures that alternative hashing is never used.
*/
static final int ALTERNATIVE_HASHING_THRESHOLD_DEFAULT = Integer.MAX_VALUE;
/**
* holds values which can't be initialized until after VM is booted.
*/
private static class Holder {
/**
* Table capacity above which to switch to use alternative hashing.
*/
static final int ALTERNATIVE_HASHING_THRESHOLD;
static {
String altThreshold = java.security.AccessController.doPrivileged(
new sun.security.action.GetPropertyAction(
"jdk.map.althashing.threshold"));
int threshold;
try {
threshold = (null != altThreshold)
? Integer.parseInt(altThreshold)
: ALTERNATIVE_HASHING_THRESHOLD_DEFAULT;
// disable alternative hashing if -1
if (threshold == -1) {
threshold = Integer.MAX_VALUE;
}
if (threshold < 0) {
throw new IllegalArgumentException("value must be positive integer.");
}
} catch(IllegalArgumentException failed) {
throw new Error("Illegal value for 'jdk.map.althashing.threshold'", failed);
}
ALTERNATIVE_HASHING_THRESHOLD = threshold;
}
}
/**
* A randomizing value associated with this instance that is applied to
* hash code of keys to make hash collisions harder to find. If 0 then
* alternative hashing is disabled.
*/
transient int hashSeed = 0;
/**
* Constructs an empty <tt>HashMap</tt> with the specified initial
* capacity and load factor.
Creates an empty HashMap with the given capacity and load factor; the table itself is inflated lazily, when put finds it still empty.
*/
public HashMap(int initialCapacity, float loadFactor) {
if (initialCapacity < 0)
throw new IllegalArgumentException("Illegal initial capacity: " +
initialCapacity);
if (initialCapacity > MAXIMUM_CAPACITY)
initialCapacity = MAXIMUM_CAPACITY;
if (loadFactor <= 0 || Float.isNaN(loadFactor))
throw new IllegalArgumentException("Illegal load factor: " +
loadFactor);
this.loadFactor = loadFactor;
threshold = initialCapacity;
init();
}
/**
* Constructs an empty <tt>HashMap</tt> with the specified initial
* capacity and the default load factor (0.75).
Creates an empty HashMap with the given capacity and the default load factor of 0.75.
*/
public HashMap(int initialCapacity) {
this(initialCapacity, DEFAULT_LOAD_FACTOR);
}
/**
* Constructs an empty <tt>HashMap</tt> with the default initial capacity
* (16) and the default load factor (0.75).
*/
public HashMap() {
this(DEFAULT_INITIAL_CAPACITY, DEFAULT_LOAD_FACTOR);
}
/**
* Constructs a new <tt>HashMap</tt> with the same mappings as the
* specified <tt>Map</tt>. The <tt>HashMap</tt> is created with
* default load factor (0.75) and an initial capacity sufficient to
* hold the mappings in the specified <tt>Map</tt>.
*
* @param m the map whose mappings are to be placed in this map
* @throws NullPointerException if the specified map is null
*/
public HashMap(Map<? extends K, ? extends V> m) {
this(Math.max((int) (m.size() / DEFAULT_LOAD_FACTOR) + 1,
DEFAULT_INITIAL_CAPACITY), DEFAULT_LOAD_FACTOR);
inflateTable(threshold);
putAllForCreate(m);
}
private static int roundUpToPowerOf2(int number) {
// assert number >= 0 : "number must be non-negative";
int rounded = number >= MAXIMUM_CAPACITY
? MAXIMUM_CAPACITY
: (rounded = Integer.highestOneBit(number)) != 0
? (Integer.bitCount(number) > 1) ? rounded << 1 : rounded
: 1;
return rounded;
}
/**
* Inflates the table.
*/
private void inflateTable(int toSize) {
// Find a power of 2 >= toSize
//i.e. the smallest power of two that is >= toSize
int capacity = roundUpToPowerOf2(toSize);
threshold = (int) Math.min(capacity * loadFactor, MAXIMUM_CAPACITY + 1);
table = new Entry[capacity];
initHashSeedAsNeeded(capacity);
}
void init() {
}
final boolean initHashSeedAsNeeded(int capacity) {
boolean currentAltHashing = hashSeed != 0;
boolean useAltHashing = sun.misc.VM.isBooted() &&
(capacity >= Holder.ALTERNATIVE_HASHING_THRESHOLD);
boolean switching = currentAltHashing ^ useAltHashing;
if (switching) {
hashSeed = useAltHashing
? sun.misc.Hashing.randomHashSeed(this)
: 0;
}
return switching;
}
/**
* Retrieve object hash code and applies a supplemental hash function to the
* result hash, which defends against poor quality hash functions. This is
* critical because HashMap uses power-of-two length hash tables, that
* otherwise encounter collisions for hashCodes that do not differ
* in lower bits. Note: Null keys always map to hash 0, thus index 0.
*/
final int hash(Object k) {
int h = hashSeed;
if (0 != h && k instanceof String) {
return sun.misc.Hashing.stringHash32((String) k);
}
h ^= k.hashCode();
// This function ensures that hashCodes that differ only by
// constant multiples at each bit position have a bounded
// number of collisions (approximately 8 at default load factor).
h ^= (h >>> 20) ^ (h >>> 12);
return h ^ (h >>> 7) ^ (h >>> 4);
}
/**
* Returns the index in the hash table for the given hash value and table length.
*/
static int indexFor(int h, int length) {
// assert Integer.bitCount(length) == 1 : "length must be a non-zero power of 2";
return h & (length-1);
}
/**
* Returns the number of key-value pairs in the map.
*/
public int size() {
return size;
}
public boolean isEmpty() {
return size == 0;
}
/**
* Returns the value to which the specified key is mapped, or null
* if this map contains no mapping for the key.
*/
public V get(Object key) {
if (key == null)
return getForNullKey();
Entry<K,V> entry = getEntry(key);
return null == entry ? null : entry.getValue();
}
/**
* Offloaded version of get() to look up null keys. Null keys map
* to index 0. This null case is split out into separate methods
* for the sake of performance in the two most commonly used
* operations (get and put), but incorporated with conditionals in
* others.
*/
private V getForNullKey() {
if (size == 0) {
return null;
}
for (Entry<K,V> e = table[0]; e != null; e = e.next) {
if (e.key == null)
return e.value;
}
return null;
}
/**
* Returns <tt>true</tt> if this map contains a mapping for the
* specified key.
*
* @param key The key whose presence in this map is to be tested
* @return <tt>true</tt> if this map contains a mapping for the specified
* key.
*/
public boolean containsKey(Object key) {
return getEntry(key) != null;
}
/**
Returns the Entry for the given key, or null if there is none.
Overall it first computes the bucket index from the key's hash, then walks the chain at that index.
*/
final Entry<K,V> getEntry(Object key) {
if (size == 0) {
return null;
}
int hash = (key == null) ? 0 : hash(key);
for (Entry<K,V> e = table[indexFor(hash, table.length)];
e != null;
e = e.next) {
Object k;
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k))))
return e;
}
return null;
}
//Stores a key-value pair
public V put(K key, V value) {
//Inflate the table lazily if it has not been initialized yet
if (table == EMPTY_TABLE) {
inflateTable(threshold);
}
if (key == null)
return putForNullKey(value);
int hash = hash(key);
//Compute the bucket index for this hash
int i = indexFor(hash, table.length);
//Walk the Entry chain at that index to check whether an entry with the inserted key already exists
for (Entry<K,V> e = table[i]; e != null; e = e.next) {
Object k;
//If the key is already present in the chain, replace the old value and return it
if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
V oldValue = e.value;
e.value = value;
e.recordAccess(this);
return oldValue;
}
}
//The key is not present in the chain: create a new Entry
modCount++;
addEntry(hash, key, value, i);
return null;
}
/**
* Inserts a value for the null key.
*/
private V putForNullKey(V value) {
//Null-key mappings always live in the chain at table index 0
for (Entry<K,V> e = table[0]; e != null; e = e.next) {
if (e.key == null) {
V oldValue = e.value;
e.value = value;
e.recordAccess(this);
return oldValue;
}
}
modCount++;
addEntry(0, null, value, 0);
return null;
}
/**
* This method is used instead of put by constructors and
* pseudoconstructors (clone, readObject). It does not resize the table,
* check for comodification, etc. It calls createEntry rather than
* addEntry.
*/
private void putForCreate(K key, V value) {
int hash = null == key ? 0 : hash(key);
int i = indexFor(hash, table.length);
/**
* Look for preexisting entry for key. This will never happen for
* clone or deserialize. It will only happen for construction if the
* input Map is a sorted map whose ordering is inconsistent w/ equals.
*/
for (Entry<K,V> e = table[i]; e != null; e = e.next) {
Object k;
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
e.value = value;
return;
}
}
createEntry(hash, key, value, i);
}
private void putAllForCreate(Map<? extends K, ? extends V> m) {
for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
putForCreate(e.getKey(), e.getValue());
}
/**
* Rehashes the contents of this map into a new array with a
* larger capacity. This method is called automatically when the
* number of keys in this map reaches its threshold.
*
* If current capacity is MAXIMUM_CAPACITY, this method does not
* resize the map, but sets threshold to Integer.MAX_VALUE.
* This has the effect of preventing future calls.
*
* @param newCapacity the new capacity, MUST be a power of two;
* must be greater than current capacity unless current
* capacity is MAXIMUM_CAPACITY (in which case value
* is irrelevant).
*/
void resize(int newCapacity) {
Entry[] oldTable = table;
int oldCapacity = oldTable.length;
if (oldCapacity == MAXIMUM_CAPACITY) {
threshold = Integer.MAX_VALUE;
return;
}
Entry[] newTable = new Entry[newCapacity];
transfer(newTable, initHashSeedAsNeeded(newCapacity));
table = newTable;
threshold = (int)Math.min(newCapacity * loadFactor, MAXIMUM_CAPACITY + 1);
}
/**
* Transfers all entries from current table to newTable.
*/
void transfer(Entry[] newTable, boolean rehash) {
int newCapacity = newTable.length;
for (Entry<K,V> e : table) {
while(null != e) {
Entry<K,V> next = e.next;
if (rehash) {
e.hash = null == e.key ? 0 : hash(e.key);
}
int i = indexFor(e.hash, newCapacity);
e.next = newTable[i];
newTable[i] = e;
e = next;
}
}
}
/**
* Copies all of the mappings from the specified map to this map.
* These mappings will replace any mappings that this map had for
* any of the keys currently in the specified map.
*
* @param m mappings to be stored in this map
* @throws NullPointerException if the specified map is null
*/
public void putAll(Map<? extends K, ? extends V> m) {
int numKeysToBeAdded = m.size();
if (numKeysToBeAdded == 0)
return;
if (table == EMPTY_TABLE) {
inflateTable((int) Math.max(numKeysToBeAdded * loadFactor, threshold));
}
/*
* Expand the map if the map if the number of mappings to be added
* is greater than or equal to threshold. This is conservative; the
* obvious condition is (m.size() + size) >= threshold, but this
* condition could result in a map with twice the appropriate capacity,
* if the keys to be added overlap with the keys already in this map.
* By using the conservative calculation, we subject ourself
* to at most one extra resize.
*/
if (numKeysToBeAdded > threshold) {
int targetCapacity = (int)(numKeysToBeAdded / loadFactor + 1);
if (targetCapacity > MAXIMUM_CAPACITY)
targetCapacity = MAXIMUM_CAPACITY;
int newCapacity = table.length;
while (newCapacity < targetCapacity)
newCapacity <<= 1;
if (newCapacity > table.length)
resize(newCapacity);
}
for (Map.Entry<? extends K, ? extends V> e : m.entrySet())
put(e.getKey(), e.getValue());
}
/**
* Removes the mapping for the specified key from this map if present.
*
* @param key key whose mapping is to be removed from the map
* @return the previous value associated with <tt>key</tt>, or
* <tt>null</tt> if there was no mapping for <tt>key</tt>.
* (A <tt>null</tt> return can also indicate that the map
* previously associated <tt>null</tt> with <tt>key</tt>.)
*/
public V remove(Object key) {
Entry<K,V> e = removeEntryForKey(key);
return (e == null ? null : e.value);
}
/**
* Removes and returns the entry associated with the specified key
* in the HashMap. Returns null if the HashMap contains no mapping
* for this key.
*/
final Entry<K,V> removeEntryForKey(Object key) {
if (size == 0) {
return null;
}
int hash = (key == null) ? 0 : hash(key);
int i = indexFor(hash, table.length);
Entry<K,V> prev = table[i];
Entry<K,V> e = prev;
while (e != null) {
Entry<K,V> next = e.next;
Object k;
if (e.hash == hash &&
((k = e.key) == key || (key != null && key.equals(k)))) {
modCount++;
size--;
if (prev == e)
table[i] = next;
else
prev.next = next;
e.recordRemoval(this);
return e;
}
prev = e;
e = next;
}
return e;
}
/**
* Special version of remove for EntrySet using {@code Map.Entry.equals()}
* for matching.
*/
final Entry<K,V> removeMapping(Object o) {
if (size == 0 || !(o instanceof Map.Entry))
return null;
Map.Entry<K,V> entry = (Map.Entry<K,V>) o;
Object key = entry.getKey();
int hash = (key == null) ? 0 : hash(key);
int i = indexFor(hash, table.length);
Entry<K,V> prev = table[i];
Entry<K,V> e = prev;
while (e != null) {
Entry<K,V> next = e.next;
if (e.hash == hash && e.equals(entry)) {
modCount++;
size--;
if (prev == e)
table[i] = next;
else
prev.next = next;
e.recordRemoval(this);
return e;
}
prev = e;
e = next;
}
return e;
}
/**
* Removes all of the mappings from this map.
* The map will be empty after this call returns.
*/
public void clear() {
modCount++;
Arrays.fill(table, null);
size = 0;
}
/**
* Returns <tt>true</tt> if this map maps one or more keys to the
* specified value.
*
* @param value value whose presence in this map is to be tested
* @return <tt>true</tt> if this map maps one or more keys to the
* specified value
*/
public boolean containsValue(Object value) {
if (value == null)
return containsNullValue();
Entry[] tab = table;
for (int i = 0; i < tab.length ; i++)
for (Entry e = tab[i] ; e != null ; e = e.next)
if (value.equals(e.value))
return true;
return false;
}
/**
* Special-case code for containsValue with null argument
*/
private boolean containsNullValue() {
Entry[] tab = table;
for (int i = 0; i < tab.length ; i++)
for (Entry e = tab[i] ; e != null ; e = e.next)
if (e.value == null)
return true;
return false;
}
/**
* Returns a shallow copy of this <tt>HashMap</tt> instance: the keys and
* values themselves are not cloned.
*
* @return a shallow copy of this map
*/
public Object clone() {
HashMap<K,V> result = null;
try {
result = (HashMap<K,V>)super.clone();
} catch (CloneNotSupportedException e) {
// assert false;
}
if (result.table != EMPTY_TABLE) {
result.inflateTable(Math.min(
(int) Math.min(
size * Math.min(1 / loadFactor, 4.0f),
// we have limits...
HashMap.MAXIMUM_CAPACITY),
table.length));
}
result.entrySet = null;
result.modCount = 0;
result.size = 0;
result.init();
result.putAllForCreate(this);
return result;
}
static class Entry<K,V> implements Map.Entry<K,V> {
final K key;
V value;
Entry<K,V> next;
int hash;
/**
* Creates new entry.
*/
Entry(int h, K k, V v, Entry<K,V> n) {
value = v;
next = n;
key = k;
hash = h;
}
public final K getKey() {
return key;
}
public final V getValue() {
return value;
}
public final V setValue(V newValue) {
V oldValue = value;
value = newValue;
return oldValue;
}
public final boolean equals(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry e = (Map.Entry)o;
Object k1 = getKey();
Object k2 = e.getKey();
if (k1 == k2 || (k1 != null && k1.equals(k2))) {
Object v1 = getValue();
Object v2 = e.getValue();
if (v1 == v2 || (v1 != null && v1.equals(v2)))
return true;
}
return false;
}
public final int hashCode() {
return Objects.hashCode(getKey()) ^ Objects.hashCode(getValue());
}
public final String toString() {
return getKey() + "=" + getValue();
}
/**
* This method is invoked whenever the value in an entry is
* overwritten by an invocation of put(k,v) for a key k that's already
* in the HashMap.
*/
void recordAccess(HashMap<K,V> m) {
}
/**
* This method is invoked whenever the entry is
* removed from the table.
*/
void recordRemoval(HashMap<K,V> m) {
}
}
/**
* Adds a new entry with the specified key, value and hash code to
* the specified bucket. It is the responsibility of this
* method to resize the table if appropriate.
*
* Subclass overrides this to alter the behavior of put method.
*/
void addEntry(int hash, K key, V value, int bucketIndex) {
//If size has reached the resize threshold and the target bucket is already occupied, double the table length
if ((size >= threshold) && (null != table[bucketIndex])) {
resize(2 * table.length);
hash = (null != key) ? hash(key) : 0;
bucketIndex = indexFor(hash, table.length);
}
createEntry(hash, key, value, bucketIndex);
}
/**
Creates a new Entry object and links it at the head of its bucket's chain.
*/
void createEntry(int hash, K key, V value, int bucketIndex) {
//First fetch the Entry currently at the head of the chain at bucketIndex
Entry<K,V> e = table[bucketIndex];
//Then create a new Entry for the incoming key-value pair whose next pointer references the old head,
//and install the new Entry at bucketIndex as the new head of the chain.
table[bucketIndex] = new Entry<>(hash, key, value, e);
size++;
}
private abstract class HashIterator<E> implements Iterator<E> {
Entry<K,V> next; // next entry to return
int expectedModCount; // For fast-fail
int index; // current slot
Entry<K,V> current; // current entry
HashIterator() {
expectedModCount = modCount;
if (size > 0) { // advance to first entry
Entry[] t = table;
while (index < t.length && (next = t[index++]) == null)
;
}
}
public final boolean hasNext() {
return next != null;
}
final Entry<K,V> nextEntry() {
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
Entry<K,V> e = next;
if (e == null)
throw new NoSuchElementException();
if ((next = e.next) == null) {
Entry[] t = table;
while (index < t.length && (next = t[index++]) == null)
;
}
current = e;
return e;
}
public void remove() {
if (current == null)
throw new IllegalStateException();
if (modCount != expectedModCount)
throw new ConcurrentModificationException();
Object k = current.key;
current = null;
HashMap.this.removeEntryForKey(k);
expectedModCount = modCount;
}
}
private final class ValueIterator extends HashIterator<V> {
public V next() {
return nextEntry().value;
}
}
private final class KeyIterator extends HashIterator<K> {
public K next() {
return nextEntry().getKey();
}
}
private final class EntryIterator extends HashIterator<Map.Entry<K,V>> {
public Map.Entry<K,V> next() {
return nextEntry();
}
}
// Subclass overrides these to alter behavior of views' iterator() method
Iterator<K> newKeyIterator() {
return new KeyIterator();
}
Iterator<V> newValueIterator() {
return new ValueIterator();
}
Iterator<Map.Entry<K,V>> newEntryIterator() {
return new EntryIterator();
}
// Views
private transient Set<Map.Entry<K,V>> entrySet = null;
/**
* Returns a {@link Set} view of the keys contained in this map.
* The set is backed by the map, so changes to the map are
* reflected in the set, and vice-versa. If the map is modified
* while an iteration over the set is in progress (except through
* the iterator's own <tt>remove</tt> operation), the results of
* the iteration are undefined. The set supports element removal,
* which removes the corresponding mapping from the map, via the
* <tt>Iterator.remove</tt>, <tt>Set.remove</tt>,
* <tt>removeAll</tt>, <tt>retainAll</tt>, and <tt>clear</tt>
* operations. It does not support the <tt>add</tt> or <tt>addAll</tt>
* operations.
*/
public Set<K> keySet() {
Set<K> ks = keySet;
return (ks != null ? ks : (keySet = new KeySet()));
}
private final class KeySet extends AbstractSet<K> {
public Iterator<K> iterator() {
return newKeyIterator();
}
public int size() {
return size;
}
public boolean contains(Object o) {
return containsKey(o);
}
public boolean remove(Object o) {
return HashMap.this.removeEntryForKey(o) != null;
}
public void clear() {
HashMap.this.clear();
}
}
/**
* Returns a {@link Collection} view of the values contained in this map.
* The collection is backed by the map, so changes to the map are
* reflected in the collection, and vice-versa. If the map is
* modified while an iteration over the collection is in progress
* (except through the iterator's own <tt>remove</tt> operation),
* the results of the iteration are undefined. The collection
* supports element removal, which removes the corresponding
* mapping from the map, via the <tt>Iterator.remove</tt>,
* <tt>Collection.remove</tt>, <tt>removeAll</tt>,
* <tt>retainAll</tt> and <tt>clear</tt> operations. It does not
* support the <tt>add</tt> or <tt>addAll</tt> operations.
*/
public Collection<V> values() {
Collection<V> vs = values;
return (vs != null ? vs : (values = new Values()));
}
private final class Values extends AbstractCollection<V> {
public Iterator<V> iterator() {
return newValueIterator();
}
public int size() {
return size;
}
public boolean contains(Object o) {
return containsValue(o);
}
public void clear() {
HashMap.this.clear();
}
}
/**
* Returns a {@link Set} view of the mappings contained in this map.
* The set is backed by the map, so changes to the map are
* reflected in the set, and vice-versa. If the map is modified
* while an iteration over the set is in progress (except through
* the iterator's own <tt>remove</tt> operation, or through the
* <tt>setValue</tt> operation on a map entry returned by the
* iterator) the results of the iteration are undefined. The set
* supports element removal, which removes the corresponding
* mapping from the map, via the <tt>Iterator.remove</tt>,
* <tt>Set.remove</tt>, <tt>removeAll</tt>, <tt>retainAll</tt> and
* <tt>clear</tt> operations. It does not support the
* <tt>add</tt> or <tt>addAll</tt> operations.
*
* @return a set view of the mappings contained in this map
*/
public Set<Map.Entry<K,V>> entrySet() {
return entrySet0();
}
private Set<Map.Entry<K,V>> entrySet0() {
Set<Map.Entry<K,V>> es = entrySet;
return es != null ? es : (entrySet = new EntrySet());
}
private final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
public Iterator<Map.Entry<K,V>> iterator() {
return newEntryIterator();
}
public boolean contains(Object o) {
if (!(o instanceof Map.Entry))
return false;
Map.Entry<K,V> e = (Map.Entry<K,V>) o;
Entry<K,V> candidate = getEntry(e.getKey());
return candidate != null && candidate.equals(e);
}
public boolean remove(Object o) {
return removeMapping(o) != null;
}
public int size() {
return size;
}
public void clear() {
HashMap.this.clear();
}
}
/**
* Save the state of the <tt>HashMap</tt> instance to a stream (i.e.,
* serialize it).
*
* @serialData The <i>capacity</i> of the HashMap (the length of the
* bucket array) is emitted (int), followed by the
* <i>size</i> (an int, the number of key-value
* mappings), followed by the key (Object) and value (Object)
* for each key-value mapping. The key-value mappings are
* emitted in no particular order.
*/
private void writeObject(java.io.ObjectOutputStream s)
throws IOException
{
// Write out the threshold, loadfactor, and any hidden stuff
s.defaultWriteObject();
// Write out number of buckets
if (table==EMPTY_TABLE) {
s.writeInt(roundUpToPowerOf2(threshold));
} else {
s.writeInt(table.length);
}
// Write out size (number of Mappings)
s.writeInt(size);
// Write out keys and values (alternating)
if (size > 0) {
for(Map.Entry<K,V> e : entrySet0()) {
s.writeObject(e.getKey());
s.writeObject(e.getValue());
}
}
}
private static final long serialVersionUID = 362498820763181265L;
/**
* Reconstitute the {@code HashMap} instance from a stream (i.e.,
* deserialize it).
*/
private void readObject(java.io.ObjectInputStream s)
throws IOException, ClassNotFoundException
{
// Read in the threshold (ignored), loadfactor, and any hidden stuff
s.defaultReadObject();
if (loadFactor <= 0 || Float.isNaN(loadFactor)) {
throw new InvalidObjectException("Illegal load factor: " +
loadFactor);
}
// set other fields that need values
table = (Entry<K,V>[]) EMPTY_TABLE;
// Read in number of buckets
s.readInt(); // ignored.
// Read number of mappings
int mappings = s.readInt();
if (mappings < 0)
throw new InvalidObjectException("Illegal mappings count: " +
mappings);
// capacity chosen by number of mappings and desired load (if >= 0.25)
int capacity = (int) Math.min(
mappings * Math.min(1 / loadFactor, 4.0f),
// we have limits...
HashMap.MAXIMUM_CAPACITY);
// allocate the bucket array;
if (mappings > 0) {
inflateTable(capacity);
} else {
threshold = capacity;
}
init(); // Give subclass a chance to do its thing.
// Read the keys and values, and put the mappings in the HashMap
for (int i = 0; i < mappings; i++) {
K key = (K) s.readObject();
V value = (V) s.readObject();
putForCreate(key, value);
}
}
// These methods are used when serializing HashSets
int capacity() { return table.length; }
float loadFactor() { return loadFactor; }
}
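The HashIterator above is fail-fast: nextEntry() throws ConcurrentModificationException whenever modCount has diverged from expectedModCount, which happens as soon as the map is structurally modified behind the iterator's back. A minimal sketch of that behavior from the caller's side, using the standard java.util.HashMap:

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        Map<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);

        Iterator<String> it = map.keySet().iterator();
        it.next();
        map.put("c", 3); // structural modification: modCount is incremented
        try {
            it.next();   // expectedModCount no longer matches modCount
            System.out.println("no exception");
        } catch (ConcurrentModificationException e) {
            System.out.println("ConcurrentModificationException"); // this branch runs
        }
    }
}
```

Note that the iterator's own remove() resynchronizes expectedModCount with modCount, which is why removing through the iterator is the safe way to delete entries mid-iteration.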
The code above is long and a bit scattered, so here are a few points to summarize.
The following is adapted from http://blog.csdn.net/ns_code/article/details/36034955. 1. First, be clear about HashMap's storage structure, as shown in the figure below:
In the figure, the purple part represents the hash table, the array named table (also called the bucket array). Each array element is the head node of a singly linked list, and the lists are what resolve collisions: when different keys map to the same array position, their entries are placed in the same singly linked list.
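The collision handling just described can be observed directly. The strings "Aa" and "BB" are a well-known pair with identical hash codes (both 2112), so they always land in the same bucket and are kept apart only by equals() inside the chain. A minimal sketch against the standard java.util.HashMap:

```java
import java.util.HashMap;
import java.util.Map;

public class CollisionDemo {
    public static void main(String[] args) {
        // "Aa" and "BB" hash identically, so they always share a bucket
        System.out.println("Aa".hashCode() == "BB".hashCode()); // true

        Map<String, Integer> map = new HashMap<>();
        map.put("Aa", 1);
        map.put("BB", 2);
        // equals() distinguishes the two keys inside the chain, so both mappings survive
        System.out.println(map.get("Aa") + " " + map.get("BB")); // 1 2
    }
}
```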
2. Next, look at the data structure of a list node, the Entry class already shown in the source above: it holds the key, the value, the cached hash, and a next reference that links the entries of one bucket into a singly linked chain, and it implements the Map.Entry interface, i.e. getKey(), getValue(), setValue(V value), equals(Object o), and hashCode().