1. Background:
Inspired by the networks of neurons in the human brain, many different versions of neural networks have appeared throughout history.
The best-known algorithm is backpropagation, popularized in the 1980s.
2. Multilayer Feed-Forward Neural Network
Backpropagation is applied to multilayer feed-forward neural networks.
2.1 A multilayer feed-forward neural network consists of the following parts:
an input layer, one or more hidden layers, and an output layer

- Each layer is made up of units.
- The input layer is fed the feature vector of each training instance.
- Values are passed to the next layer through weighted connections; the output of one layer becomes the input of the next.
- The number of hidden layers is arbitrary; there is exactly one input layer and one output layer.
- Each unit may also be called a neuron (node), after its biological origin.
- A network of this kind with a single hidden layer and an output layer is called a 2-layer neural network (the input layer is not counted).
- Within a layer, each unit computes a weighted sum of its inputs, then transforms it with a non-linear function to produce its output.
- In theory, a multilayer feed-forward network with enough hidden layers and a large enough training set can approximate any function.
3. Designing the network structure
3.1 Before training a neural network on data, the number of layers and the number of units in each layer must be decided.
3.2 Feature vectors are usually normalized to values between 0 and 1 before being passed to the input layer (this speeds up learning), for example:
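A minimal min-max scaling sketch, mirroring what the digits example in section 8 does (the toy feature matrix here is an assumption for illustration):

import numpy as np

X = np.array([[0.0, 8.0], [4.0, 16.0], [8.0, 4.0]])  # assumed toy feature matrix
X -= X.min()   # shift so the smallest value becomes 0
X /= X.max()   # scale so the largest value becomes 1
print(X)       # every value now lies in [0, 1]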
3.3 A discrete-valued feature can be encoded with one input unit per possible value of the feature.
For example, if feature A can take three values (a0, a1, a2), three input units can be used to represent A:
if A = a0, the unit representing a0 is set to 1 and the others to 0;
if A = a1, the unit representing a1 is set to 1 and the others to 0, and so on. In code:
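A minimal one-hot encoding sketch in numpy (the value list and the helper name one_hot are assumptions for illustration):

import numpy as np

values = ['a0', 'a1', 'a2']           # the possible values of feature A

def one_hot(v):
    code = np.zeros(len(values))      # one input unit per possible value
    code[values.index(v)] = 1         # the unit for the observed value is set to 1
    return code

print(one_hot('a1'))                  # [0. 1. 0.]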
3.4 Neural networks can be used both for classification problems and for regression problems.
For classification with 2 classes, a single output unit suffices (0 and 1 represent the two classes). With more than 2 classes, each class gets its own output unit, so the number of output-layer units usually equals the number of classes.
There is no clear rule for the best number of hidden layers or hidden units; they are chosen by trial and error, experimenting and improving based on accuracy. For example:
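A sketch of how these choices map onto a layer specification for the 8x8 digits task of section 8 (the hidden size 100 is an experimental choice, not a rule):

from NeuralNetwork import NeuralNetwork
# 64 input units (one per pixel), 100 hidden units (trial-and-error choice),
# 10 output units (one per digit class)
nn = NeuralNetwork([64, 100, 10], 'logistic')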
4. Cross-validation
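In k-fold cross-validation the training data is split into k parts; each part serves once as the validation set while the remaining k-1 parts are used for training, and the k accuracy scores are averaged. A minimal sketch with scikit-learn (the classifier and dataset here are placeholders, not from the original notes):

from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
# 10-fold cross-validation: average accuracy over 10 train/validation splits
scores = cross_val_score(KNeighborsClassifier(), digits.data, digits.target, cv=10)
print(scores.mean())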

5. The backpropagation algorithm
5.1 Processes the instances in the training set iteratively.
5.2 Compares the value predicted at the output layer (predicted value) with the true value (target value).
5.3 Works backward (output layer => hidden layers => input layer), updating each connection weight so as to minimize the error.
5.4 The algorithm in detail
Input: D: the data set; l: the learning rate; a multilayer feed-forward neural network
Output: a trained neural network
Initialize the weights and biases: random values between -1 and 1, or between -0.5 and 0.5; each unit gets one bias.
For each training instance X, perform the following steps:
(1) Propagate the inputs forward
For each unit j, the net input I_j is the weighted sum of the outputs O_i of the units in the previous layer, plus the bias θ_j of unit j:
I_j = Σ_i w_ij O_i + θ_j
The output O_j of unit j is obtained by applying the non-linear activation function (here the sigmoid of section 7):
O_j = 1 / (1 + e^(-I_j))
(2) Propagate the error backward
For a unit j in the output layer, with target value T_j:
Err_j = O_j (1 - O_j)(T_j - O_j)
For a unit j in a hidden layer, summing over the units k of the next layer:
Err_j = O_j (1 - O_j) Σ_k Err_k w_jk
(the factor O_j (1 - O_j) is the derivative of the sigmoid)
Each weight and bias is then updated with the learning rate l:
Δw_ij = l · Err_j · O_i,  w_ij := w_ij + Δw_ij
Δθ_j = l · Err_j,  θ_j := θ_j + Δθ_j
(3) Termination conditions; training stops when:
- the weight updates fall below some threshold, or
- the prediction error rate falls below some threshold, or
- a preset number of iterations (epochs) has been reached
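A minimal sketch of a training loop combining the three stopping criteria (the two helper functions are assumptions for illustration, not part of the original notes):

# assumed helpers: train_one_epoch() returns the total weight change of one
# pass over the data; eval_error() returns the current prediction error rate
def train_with_stopping(train_one_epoch, eval_error,
                        w_tol=1e-6, err_tol=0.01, max_epochs=10000):
    for epoch in range(max_epochs):      # condition 3: preset number of epochs
        weight_change = train_one_epoch()
        if weight_change < w_tol:        # condition 1: weight updates below threshold
            break
        if eval_error() < err_tol:       # condition 2: error rate below threshold
            break
    return epoch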
6. A worked backpropagation example
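A minimal numeric sketch of one training step; the network shape and all the values below are illustrative assumptions (not the original course figure), applying the formulas of section 5 to a tiny 2-2-1 sigmoid network:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([1.0, 0.0])                  # inputs O_i (assumed values)
w_h = np.array([[0.2, -0.3],              # weights w_ij, input -> hidden
                [0.4, 0.1]])
b_h = np.array([-0.4, 0.2])               # hidden biases theta_j
w_o = np.array([-0.3, -0.2])              # weights, hidden -> output
b_o = 0.1                                 # output bias
t, l = 1.0, 0.9                           # target value T_j and learning rate l

# (1) forward pass: I_j = sum_i w_ij * O_i + theta_j, then O_j = sigmoid(I_j)
o_h = sigmoid(x.dot(w_h) + b_h)
o_out = sigmoid(o_h.dot(w_o) + b_o)

# (2) backward pass: Err_j for the output layer, then for the hidden layer
err_out = o_out * (1 - o_out) * (t - o_out)
err_h = o_h * (1 - o_h) * (err_out * w_o)

# (3) weight and bias updates with learning rate l
w_o = w_o + l * err_out * o_h
b_o = b_o + l * err_out
w_h = w_h + l * np.outer(x, err_h)
b_h = b_h + l * err_h

print("output:", o_out, "output error:", err_out, "hidden errors:", err_h)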

7. Non-linear transformation functions
7.1 The sigmoid function
The sigmoid function (an S-shaped curve) is used as the activation function:
The sigmoid function is an S-shaped function common in biology, also known as the S-shaped growth curve. In information science, because it is monotonically increasing and has a monotonically increasing inverse, the sigmoid function is often used as the threshold function of a neural network, mapping variables into the interval (0, 1):
S(x) = 1 / (1 + e^(-x))
Its derivative can be written in terms of the function itself, S'(x) = S(x)(1 - S(x)), which is exactly what logistic_derivative computes in the code of section 8.
7.2 The hyperbolic tangent (tanh)
Definition:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x)) = sinh(x) / cosh(x)
Its derivative is tanh'(x) = 1 - tanh(x)^2, as computed by tanh_deriv in section 8.
Graph: an S-shaped curve through the origin with range (-1, 1); see the Wikipedia link below.
Wikipedia link: https://zh.wikipedia.org/wiki/%E5%8F%8C%E6%9B%B2%E5%87%BD%E6%95%B0
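As a quick check of how the two activation functions used in section 8 relate, tanh is a rescaled logistic function, tanh(x) = 2·logistic(2x) - 1; a minimal numeric verification:

import numpy as np

def logistic(x):
    return 1 / (1 + np.exp(-x))

x = np.linspace(-3, 3, 7)
# tanh(x) equals 2*logistic(2x) - 1 everywhere (up to floating-point error)
print(np.allclose(np.tanh(x), 2 * logistic(2 * x) - 1))   # True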
7.3 The logistic function
The logistic function can be written as:
f(x) = L / (1 + e^(-k(x - x0)))
where L is the curve's maximum value, k its steepness, and x0 the x-value of its midpoint; with L = 1, k = 1, x0 = 0 it reduces to the sigmoid function of 7.1.
Graph: with L = 1, k = 1, x0 = 0 this is the same S-shaped curve as in 7.1; see the Wikipedia links below.
Wikipedia links:
https://en.wikipedia.org/wiki/Logistic_function
https://zh.wikipedia.org/wiki/%E9%82%8F%E8%BC%AF%E5%87%BD%E6%95%B8
8. Implementing the neural network algorithm in Python
8.1 A class NeuralNetwork implementing the algorithm
import numpy as np

# hyperbolic tangent activation function
def tanh(x):
    return np.tanh(x)

# derivative of tanh: 1 - tanh(x)^2
def tanh_deriv(x):
    return 1.0 - np.tanh(x) * np.tanh(x)

# logistic activation function
def logistic(x):
    return 1 / (1 + np.exp(-x))

# derivative of the logistic function: f(x) * (1 - f(x))
def logistic_derivative(x):
    return logistic(x) * (1 - logistic(x))

class NeuralNetwork:
    # tanh is the default activation function
    def __init__(self, layers, activation='tanh'):
        """
        :param layers: A list containing the number of units in each layer.
                       Should contain at least two values.
        :param activation: The activation function to be used. Can be
                           "logistic" or "tanh".
        """
        if activation == 'logistic':
            self.activation = logistic
            self.activation_deriv = logistic_derivative
        elif activation == 'tanh':
            self.activation = tanh
            self.activation_deriv = tanh_deriv
        # randomly initialize the weights in [-0.25, 0.25]; hidden layers get
        # one extra column for the bias unit, the output layer does not
        self.weights = []
        for i in range(1, len(layers) - 1):
            self.weights.append((2 * np.random.random((layers[i - 1] + 1, layers[i] + 1)) - 1) * 0.25)
        self.weights.append((2 * np.random.random((layers[-2] + 1, layers[-1])) - 1) * 0.25)

    # X: feature matrix of the training set; y: class labels
    def fit(self, X, y, learning_rate=0.2, epochs=10000):
        X = np.atleast_2d(X)
        temp = np.ones([X.shape[0], X.shape[1] + 1])
        temp[:, 0:-1] = X  # add the bias unit to the input layer
        X = temp
        y = np.array(y)
        for k in range(epochs):
            # pick one training instance at random (stochastic updates)
            i = np.random.randint(X.shape[0])
            a = [X[i]]
            # forward pass: compute the output O_j of every layer
            for l in range(len(self.weights)):
                a.append(self.activation(np.dot(a[l], self.weights[l])))
            error = y[i] - a[-1]  # error at the output layer: T_j - O_j
            deltas = [error * self.activation_deriv(a[-1])]  # Err_j for the output layer
            # backpropagation: compute Err_j for each layer, going from the
            # second-to-last layer back to the first hidden layer
            for l in range(len(a) - 2, 0, -1):
                deltas.append(deltas[-1].dot(self.weights[l].T) * self.activation_deriv(a[l]))
            deltas.reverse()
            # update every weight: w_ij += learning_rate * Err_j * O_i
            for i in range(len(self.weights)):
                layer = np.atleast_2d(a[i])
                delta = np.atleast_2d(deltas[i])
                self.weights[i] += learning_rate * layer.T.dot(delta)

    def predict(self, x):
        x = np.array(x)
        temp = np.ones(x.shape[0] + 1)
        temp[0:-1] = x  # add the bias unit
        a = temp
        for l in range(0, len(self.weights)):
            a = self.activation(np.dot(a, self.weights[l]))
        return a
8.2 Testing on a simple non-linear dataset (XOR):

Code:
from NeuralNetwork import NeuralNetwork
import numpy as np
nn = NeuralNetwork([2, 2, 1], 'tanh')
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
nn.fit(X, y)
for i in [[0, 0], [0, 1], [1, 0], [1, 1]]:
    print(i, nn.predict(i))
Output:
[0, 0] [ 0.00158086]
[0, 1] [ 0.99841709]
[1, 0] [ 0.99839162]
[1, 1] [ 0.01167852]
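Thresholding the outputs at 0.5 recovers the XOR truth table: [0, 1] and [1, 0] map to 1, the other two inputs to 0. The exact numbers will vary from run to run because the weights are initialized randomly.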
8.3 Handwritten digit recognition:
Each image is 8x8 pixels.
Digits to recognize: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Code:
#!/usr/bin/python
# -*- coding:utf-8 -*-
# each image is 8x8; digits to recognize: 0,1,2,3,4,5,6,7,8,9
import numpy as np
from sklearn.datasets import load_digits
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.preprocessing import LabelBinarizer
from NeuralNetwork import NeuralNetwork
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older scikit-learn releases
digits = load_digits()
X = digits.data
y = digits.target
X -= X.min() # normalize the values to bring them into the range 0-1
X /= X.max()
nn = NeuralNetwork([64, 100, 10], 'logistic')
X_train, X_test, y_train, y_test = train_test_split(X, y)
labels_train = LabelBinarizer().fit_transform(y_train)
labels_test = LabelBinarizer().fit_transform(y_test)
print("start fitting")
nn.fit(X_train, labels_train, epochs=3000)
predictions = []
for i in range(X_test.shape[0]):
    o = nn.predict(X_test[i])
    predictions.append(np.argmax(o))  # the predicted class is the output unit with the largest value
print(confusion_matrix(y_test, predictions))
print(classification_report(y_test, predictions))
Output:
[[48 0 0 0 0 0 0 0 0 0]
[ 0 30 0 0 0 0 1 0 4 5]
[ 0 0 45 0 0 0 0 1 0 0]
[ 0 0 1 34 0 0 0 0 2 4]
[ 0 0 0 0 47 0 0 0 0 0]
[ 0 1 0 0 0 36 0 0 0 4]
[ 1 0 0 0 0 0 53 0 0 0]
[ 0 0 0 0 1 0 0 40 0 0]
[ 0 3 0 0 0 0 0 0 42 1]
[ 0 0 0 0 0 0 0 0 1 45]]
             precision    recall  f1-score   support

          0       0.98      1.00      0.99        48
          1       0.88      0.75      0.81        40
          2       0.98      0.98      0.98        46
          3       1.00      0.83      0.91        41
          4       0.98      1.00      0.99        47
          5       1.00      0.88      0.94        41
          6       0.98      0.98      0.98        54
          7       0.98      0.98      0.98        41
          8       0.86      0.91      0.88        46
          9       0.76      0.98      0.86        46

avg / total       0.94      0.93      0.93       450
Note: this article is a set of study notes from the 麥子學(xué)院 machine learning course.