In deep learning, the way a neural network's weights are initialized (weight initialization) has a crucial impact on the model's convergence speed and final performance. Put simply, training a neural network is just repeatedly updating the weight parameters w in the hope of reaching good performance. In deep networks, as the number of layers grows, gradient descent very easily runs into vanishing or exploding gradients, so initializing w well becomes critically important. A good weight initialization cannot completely solve vanishing and exploding gradients, but it helps a great deal with both problems and clearly benefits convergence speed and model performance. In this post we discuss four weight initialization methods:
Initialize w to 0
Random initialization of w
Xavier initialization
He initialization
1. Initialize w to 0
我們?cè)诰€性回歸,logistics回歸的時(shí)候,基本上都是把參數(shù)初始化為0,我們的模型也能夠很好的工作。然后在神經(jīng)網(wǎng)絡(luò)中,把w初始化為0是不可以的。這是因?yàn)槿绻褀初始化0,那么每一層的神經(jīng)元學(xué)到的東西都是一樣的(輸出是一樣的),而且在bp的時(shí)候,每一層內(nèi)的神經(jīng)元也是相同的,因?yàn)樗麄兊膅radient相同。下面用一段代碼來演示,當(dāng)把w初始化為0:
import numpy as np

def initialize_parameters_zeros(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    parameters = {}
    np.random.seed(3)
    L = len(layers_dims)  # number of layers in the network
    for l in range(1, L):
        parameters['W' + str(l)] = np.zeros((layers_dims[l], layers_dims[l - 1]))
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters
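To make the symmetry problem concrete, here is a minimal check (my own snippet with made-up layer sizes, not part of the original post): with all-zero weights and biases, every hidden unit receives the same pre-activation, so all units in a layer produce identical outputs and will keep receiving identical gradients.

import numpy as np

parameters = initialize_parameters_zeros([3, 4, 1])
x = np.random.randn(3, 5)                            # 5 made-up training examples
z1 = np.dot(parameters['W1'], x) + parameters['b1']  # every row of z1 is identical
a1 = np.tanh(z1)
print(np.allclose(a1, a1[0]))                        # True: all 4 hidden units output the same thing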
Let's look at how the cost function changes:
[Figure: cost function vs. number of iterations with w initialized to 0]
You can see that once the cost drops to 0.64 (after about 1000 iterations), further iterations no longer help.
2. Random initialization of w
The usual approach today is random initialization, i.e., initializing W with random values. The code is as follows:
def initialize_parameters_random(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)  # this seed makes sure your "random" numbers will be the same as ours
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers
    for l in range(1, L):
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * 0.01
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters
乘0.01是因?yàn)橐裌隨機(jī)初始化到一個(gè)相對(duì)較小的值,因?yàn)槿绻鸛很大的話,W又相對(duì)較大,會(huì)導(dǎo)致Z非常大,這樣如果激活函數(shù)是sigmoid,就會(huì)導(dǎo)致sigmoid的輸出值1或者0,然后會(huì)導(dǎo)致一系列問題(比如cost function計(jì)算的時(shí)候,log里是0,這樣會(huì)有點(diǎn)麻煩)。隨機(jī)初始化后,cost function隨著迭代次數(shù)的變化示意圖為:
After random initialization, the cost function changes with the number of iterations as shown below:
[Figure: cost function vs. number of iterations with random initialization]
You can see that the cost function now behaves normally. Random initialization has a drawback, though: np.random.randn() samples from a Gaussian distribution with mean 0 and variance 1, and as the network gets deeper, the outputs of the activation function (tanh here) in the later layers end up almost all close to 0, as shown below:
[Figure: per-layer tanh output distributions under plain random initialization]
Here is the code that draws these distribution plots:
import numpy as np
import matplotlib.pyplot as plt

def initialize_parameters(layer_dims):
    """
    :param layer_dims: list, the number of units in each layer
    :return: dictionary holding the parameters W1, W2, ..., WL, b1, ..., bL
    """
    np.random.seed(3)
    L = len(layer_dims)  # the number of layers in the network
    parameters = {}
    for l in range(1, L):
        parameters["W" + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
        parameters["b" + str(l)] = np.zeros((layer_dims[l], 1))
    return parameters

def forward_propagation():
    data = np.random.randn(1000, 100000)
    # layer_sizes = [100 - 10 * i for i in range(0, 5)]
    layer_sizes = [1000, 800, 500, 300, 200, 100, 10]
    num_layers = len(layer_sizes)
    parameters = initialize_parameters(layer_sizes)
    A = data
    for l in range(1, num_layers):
        A_pre = A
        W = parameters["W" + str(l)]
        b = parameters["b" + str(l)]
        z = np.dot(W, A_pre) + b  # compute z = wx + b
        A = np.tanh(z)
        # plot the distribution of this layer's activations
        plt.subplot(2, 3, l)
        plt.hist(A.flatten(), facecolor='g')
        plt.xlim([-1, 1])
        plt.yticks([])
    plt.show()

forward_propagation()
Remember the derivation of the backpropagation derivatives in the earlier post on building a neural network step by step? The gradient of a layer's weights is computed from the previous layer's activations, so activation outputs close to 0 lead to gradients close to 0, i.e., vanishing gradients.
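To see this concretely, here is a small sketch (my own illustration with made-up shapes, not from the original post). In backpropagation the weight gradient of a layer is dW = dZ · A_prev.T / m, so if the previous layer's activations A_prev have collapsed toward 0, dW is tiny no matter what the upstream gradient dZ looks like:

import numpy as np

np.random.seed(0)
m = 1000                                 # number of examples (made up)
A_prev = np.random.randn(200, m) * 1e-4  # activations that have collapsed toward 0
dZ = np.random.randn(100, m)             # some upstream gradient
dW = np.dot(dZ, A_prev.T) / m            # gradient of this layer's weights
print(np.abs(dW).max())                  # on the order of 1e-5: the weight update vanishes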
3. Xavier initialization
Xavier initialization was proposed by Glorot et al. to fix the problem with plain random initialization. The idea is simple: make each layer's outputs follow, as closely as possible, the same distribution as its inputs, so that the activations of the later layers do not collapse toward 0. Concretely, the weights are scaled by sqrt(1/n), where n is the number of inputs to the layer. Their initialization method is:
def initialize_parameters_xavier(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers
    for l in range(1, L):
        # scale by sqrt(1 / n), where n is the number of inputs to the layer
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(1 / layers_dims[l - 1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters
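As a quick numerical check of the idea (a sketch of my own, with made-up layer sizes, not from the original post): with weights scaled by sqrt(1/n), the variance of z = Wx stays close to the variance of the input x, which is exactly what Xavier initialization is after:

import numpy as np

np.random.seed(0)
n_in, n_out = 500, 500
x = np.random.randn(n_in, 10000)                        # unit-variance inputs
W = np.random.randn(n_out, n_in) * np.sqrt(1.0 / n_in)  # Xavier scaling
z = np.dot(W, x)
print(np.var(x), np.var(z))                             # both close to 1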
Let's look at the distribution of each layer's activation outputs after Xavier initialization:
[Figure: per-layer tanh output distributions under Xavier initialization]
You can see that even in the deep layers the activation outputs still follow a nice, roughly Gaussian distribution. But while Xavier initialization works well with the tanh activation, it is powerless against ReLU, the activation most commonly used in today's networks, as the following figure shows:
[Figure: per-layer ReLU output distributions under Xavier initialization]
By layer 5 or 6 the outputs are already collapsing toward 0 again, and in even deeper layers they would clearly keep shrinking toward 0.
4. He initialization
To solve this problem, Kaiming He (if you are curious, the anecdotes about him are fun to look up) proposed an initialization method designed for ReLU, commonly called He initialization. Compared with Xavier, the weight variance is doubled to 2/n, because ReLU zeroes out roughly half of the pre-activations. The initialization is:
def initialize_parameters_he(layers_dims):
    """
    Arguments:
    layers_dims -- python array (list) containing the size of each layer.

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                  W1 -- weight matrix of shape (layers_dims[1], layers_dims[0])
                  b1 -- bias vector of shape (layers_dims[1], 1)
                  ...
                  WL -- weight matrix of shape (layers_dims[L], layers_dims[L-1])
                  bL -- bias vector of shape (layers_dims[L], 1)
    """
    np.random.seed(3)
    parameters = {}
    L = len(layers_dims)  # integer representing the number of layers
    for l in range(1, L):
        # scale by sqrt(2 / n): the extra factor of 2 compensates for ReLU zeroing half the units
        parameters['W' + str(l)] = np.random.randn(layers_dims[l], layers_dims[l - 1]) * np.sqrt(2 / layers_dims[l - 1])
        parameters['b' + str(l)] = np.zeros((layers_dims[l], 1))
    return parameters
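Before looking at the histograms, here is a rough numerical check (my own sketch with made-up layer sizes, not from the original post): pushing data through several ReLU layers whose weights use the sqrt(2/n) scaling, the mean square of the activations stays roughly constant instead of shrinking toward 0 layer by layer:

import numpy as np

np.random.seed(0)
a = np.random.randn(500, 10000)                          # unit-variance input data
for l in range(6):
    W = np.random.randn(500, 500) * np.sqrt(2.0 / 500)   # He scaling
    a = np.maximum(0, np.dot(W, a))                      # ReLU
    print(l + 1, np.mean(a ** 2))                        # stays around 1 at every layer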
Let's look at the distribution of the activation outputs when the hidden layers use ReLU and the weights use He initialization:
[Figure: per-layer ReLU output distributions under He initialization]
The result is much better than with Xavier initialization. That wraps up this comparison of initialization methods for neural networks: since today's hidden layers usually use ReLU, He initialization is the usual choice for the weights.
All the code above is on GitHub if you want to take a look: compare_initialization.
References
- Xavier Glorot et al., Understanding the Difficulty of Training Deep Feedforward Neural Networks
- Kaiming He et al., Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
- Andrew Ng, Deep Learning courses on Coursera
- 夏飛, 聊一聊深度學(xué)習(xí)的weight initialization
Author: 天澤28
Source: CSDN
Original post: https://blog.csdn.net/u012328159/article/details/80025785