The previous chapter finished a KNN classifier; this chapter moves on to the familiar-yet-unfamiliar SVM... I feel like I have used SVMs before without ever really understanding them, so this good course is a chance to finally pin it down.
1. Preprocessing
Unlike the previous chapter, we first visualize the mean image:

Then we subtract this mean image from every image. This preprocessing step puts the data on a common scale. For images, pixel values always lie in 0-255, so simply subtracting the mean is enough; for other kinds of data you would usually standardize or normalize instead (a short sketch of that appears right after the code below). The preprocessing code:
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
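For data that is not images, a minimal sketch of the usual standardization (zero mean, unit variance per feature) might look like the following; note that D_train and D_val are hypothetical arrays, not the CIFAR-10 splits above, and the statistics must come from the training split only:
# sketch: per-feature standardization for generic (non-image) data
# (D_train / D_val are hypothetical arrays, not the CIFAR-10 splits above)
feat_mean = np.mean(D_train, axis=0)
feat_std = np.std(D_train, axis=0) + 1e-8   # avoid division by zero
D_train = (D_train - feat_mean) / feat_std
D_val = (D_val - feat_mean) / feat_std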
Append the bias dimension (the bias trick):
# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
print X_train.shape, X_val.shape, X_test.shape, X_dev.shape
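To see why the bias trick lets the SVM optimize a single weight matrix W, here is a tiny self-contained check (all numbers are made up): appending a column of ones to X and stacking the bias vector as the last row of W gives exactly the same scores as computing X.dot(W) + b.
# tiny bias-trick sanity check with made-up numbers
X_small = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
W_small = np.array([[0.1, 0.2],
                    [0.3, 0.4]])
b = np.array([1.0, -1.0])
scores_explicit = X_small.dot(W_small) + b
X_aug = np.hstack([X_small, np.ones((2, 1))])
W_aug = np.vstack([W_small, b])            # the bias becomes the last row of W
scores_trick = X_aug.dot(W_aug)
print(np.allclose(scores_explicit, scores_trick))  # True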
2. Implement a fully-vectorized loss function for the SVM
Note the SVM loss function; here delta is set to 1 (the margin the SVM demands):

$L_i = \sum_{j \neq y_i} \max(0,\; s_j - s_{y_i} + \Delta), \qquad \Delta = 1$
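A quick worked example of the formula (made-up scores): suppose one image gets scores [3.2, 5.1, -1.7] and its correct class is the first one.
scores = np.array([3.2, 5.1, -1.7])      # made-up scores; correct class index is 0
margins = np.maximum(0, scores - scores[0] + 1.0)
margins[0] = 0                            # the j == y_i term is skipped
print(margins.sum())                      # max(0, 2.9) + max(0, -3.9) = 2.9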
naive implementation
First, the naive for-loop solution: just compute dL/dW as usual, nothing difficult here. One thing worth spelling out is that the dW in the code really stands for dL/dW. For every class j (j != y[i]) whose margin is positive, the derivative expressions are:
dW[:,j] += X[i].transpose()
dW[:,y[i]] -= X[i]
def svm_loss_naive(W, X, y, reg):
"""
Structured SVM loss function, naive implementation (with loops).
Inputs have dimension D, there are C classes, and we operate on minibatches
of N examples.
Inputs:
- W: A numpy array of shape (D, C) containing weights.
- X: A numpy array of shape (N, D) containing a minibatch of data.
- y: A numpy array of shape (N,) containing training labels; y[i] = c means
that X[i] has label c, where 0 <= c < C.
- reg: (float) regularization strength
Returns a tuple of:
- loss as single float
- gradient with respect to weights W; an array of same shape as W
"""
dW = np.zeros(W.shape) # initialize the gradient as zero
# compute the loss and the gradient
num_classes = W.shape[1]
num_train = X.shape[0]
loss = 0.0
for i in xrange(num_train):
scores = X[i].dot(W)
correct_class_score = scores[y[i]]
for j in xrange(num_classes):
if j == y[i]:
continue
margin = scores[j] - correct_class_score + 1 # note delta = 1
if margin > 0:
loss += margin
dW[:,j] += X[i].transpose()
dW[:,y[i]] -= X[i]
# Right now the loss is a sum over all training examples, but we want it
# to be an average instead so we divide by num_train.
loss /= num_train
dW /= num_train
# Add regularization to the loss.
loss += 0.5 * reg * np.sum(W * W)
dW += reg * W
return loss, dW
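A quick way to convince yourself the analytic dW (including the reg * W term fixed above) is correct is a finite-difference check on a few random entries of W. This is only a minimal sketch of the idea, not the grad_check_sparse helper the assignment ships with; it assumes W, X_dev and y_dev exist as in the notebook.
def simple_grad_check(f, W, analytic_grad, num_checks=5, h=1e-5):
    # compare a centered numeric difference against the analytic gradient
    for _ in range(num_checks):
        ix = tuple(np.random.randint(m) for m in W.shape)
        old = W[ix]
        W[ix] = old + h
        fxph = f(W)
        W[ix] = old - h
        fxmh = f(W)
        W[ix] = old
        numeric = (fxph - fxmh) / (2 * h)
        print((ix, numeric, analytic_grad[ix]))

# usage sketch:
# loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
# simple_grad_check(lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0], W, grad)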
vectorized implementation
Now the vectorized version:
def svm_loss_vectorized(W, X, y, reg):
"""
Structured SVM loss function, vectorized implementation.
Inputs and outputs are the same as svm_loss_naive.
"""
loss = 0.0
dW = np.zeros(W.shape) # initialize the gradient as zero
num_train = X.shape[0]
num_classes = W.shape[1]
#############################################################################
# TODO: #
# Implement a vectorized version of the structured SVM loss, storing the #
# result in loss. #
#############################################################################
scores = X.dot(W)
# true labels
s_yi = scores[np.arange(num_train), y]
mat = scores - np.tile(s_yi, (num_classes,1)).transpose() + 1
loss_mat = np.maximum(np.zeros((num_train, num_classes)), mat)
# loss_mat[loss_mat<0] = 0 # this worked out as well
loss_mat[np.arange(num_train), y] = 0
loss = np.sum(loss_mat)/num_train
loss += 0.5 * reg * np.sum(W * W)
#############################################################################
# TODO: #
# Implement a vectorized version of the gradient for the structured SVM #
# loss, storing the result in dW. #
# #
# Hint: Instead of computing the gradient from scratch, it may be easier #
# to reuse some of the intermediate values that you used to compute the #
# loss. #
#############################################################################
# I don't know what's wrong with the following commented-out code
#############################################################################
# loss_pos = np.array(np.nonzero(loss_mat))
# print loss_pos, loss_pos.shape
# dW[ :, y[loss_pos[0,:]] ] -= X[ loss_pos[0,:],: ].transpose()
# dW[ :, loss_pos[1,:] ] += X[ loss_pos[0,:],: ].transpose()
# dW /= num_train
# dW += reg * W
# Binarize into integers
binary = loss_mat
binary[loss_mat > 0] = 1
# Perform the two operations simultaneously
# (1) for all j:    dW[:, j]    += sum of X[i] over examples i where class j has a positive margin
# (2) for all i:    dW[:, y[i]] -= (number of classes with a positive margin for i) * X[i]
col_sum = np.sum(binary, axis=1)
binary[range(num_train), y] = -col_sum[range(num_train)]
dW = np.dot(X.T, binary)
# Divide
dW /= num_train
# Regularize
dW += reg*W
return loss, dW
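As a sanity check (assuming W, X_dev and y_dev exist as in the notebook), the two implementations should produce the same loss and gradient up to floating-point error:
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 5e-5)
loss_vec, grad_vec = svm_loss_vectorized(W, X_dev, y_dev, 5e-5)
print(loss_naive - loss_vec)                              # should be ~0
print(np.linalg.norm(grad_naive - grad_vec, ord='fro'))   # should be ~0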
The code I originally wrote is the commented-out part where I say I don't know what's wrong... Even now I still feel it should be correct and haven't found where it breaks; if anyone happens to read this and is willing to point it out, I would be very grateful. The second approach is one I saw on GitHub and is quite clever; after porting it over, the results came out right, but I still don't understand why my own version is wrong QAQ...
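One likely explanation for the commented-out attempt above (this is a guess, not something the assignment states): in-place updates through NumPy fancy indexing, such as dW[:, idx] += ..., do not accumulate when idx contains repeated values, so several examples that hit the same class column are only counted once; np.add.at does accumulate. A tiny demonstration of the pitfall:
a = np.zeros(3)
idx = np.array([0, 0, 1])     # index 0 appears twice
a[idx] += 1
print(a)                      # [ 1.  1.  0.]  -- the repeated index is counted only once
b = np.zeros(3)
np.add.at(b, idx, 1)          # unbuffered, accumulating version
print(b)                      # [ 2.  1.  0.]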
3. Stochastic Gradient Descent (SGD)
In each training iteration we randomly sample batch_size examples. From the given sample set M, a random subset N is drawn and used in place of the full set M to update the model. Because only part of the data is used, there is a somewhat higher chance of ending up at a local optimum, but the obvious upside is that, as long as the sample is reasonably sized, you still get a good solution and you get it much faster. This understanding is adapted from: http://www.cnblogs.com/gongxijun/p/5890548.html
Along the way I also found a bug in this course: the earlier part, where the SVM optimization is implemented on its own, and the later SGD part expect the input data dimensions to be transposed relative to each other... so while doing this part I had to go back and change the earlier code... anyway...
class LinearClassifier(object):
def __init__(self):
self.W = None
def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
batch_size=200, verbose=False):
"""
Train this linear classifier using stochastic gradient descent.
Inputs:
- X: A numpy array of shape (N, D) containing training data; there are N
training samples each of dimension D.
- y: A numpy array of shape (N,) containing training labels; y[i] = c
means that X[i] has label 0 <= c < C for C classes.
- learning_rate: (float) learning rate for optimization.
- reg: (float) regularization strength.
- num_iters: (integer) number of steps to take when optimizing
- batch_size: (integer) number of training examples to use at each step.
- verbose: (boolean) If true, print progress during optimization.
Outputs:
A list containing the value of the loss function at each training iteration.
"""
num_train, dim = X.shape
num_classes = np.max(y) + 1 # assume y takes values 0...K-1 where K is number of classes
if self.W is None:
# lazily initialize W
self.W = 0.001 * np.random.randn(dim, num_classes)
# Run stochastic gradient descent to optimize W
loss_history = []
for it in xrange(num_iters):
X_batch = None
y_batch = None
#########################################################################
# TODO: #
# Sample batch_size elements from the training data and their #
# corresponding labels to use in this round of gradient descent. #
# Store the data in X_batch and their corresponding labels in #
# y_batch; after sampling X_batch should have shape (dim, batch_size) #
# and y_batch should have shape (batch_size,) #
# #
# Hint: Use np.random.choice to generate indices. Sampling with #
# replacement is faster than sampling without replacement. #
#########################################################################
num_random = np.random.choice(num_train, batch_size, replace=True)
X_batch = X[num_random, :].transpose()
# print X_batch.shape
y_batch = y[num_random]
#########################################################################
# END OF YOUR CODE #
#########################################################################
# evaluate loss and gradient
loss, grad = self.loss(X_batch, y_batch, reg)
loss_history.append(loss)
# perform parameter update
#########################################################################
# TODO: #
# Update the weights using the gradient and the learning rate. #
#########################################################################
self.W += -grad * learning_rate
#########################################################################
# END OF YOUR CODE #
#########################################################################
if verbose and it % 100 == 0:
print 'iteration %d / %d: loss %f' % (it, num_iters, loss)
return loss_history
Note that W must be adjusted in the direction opposite to grad! Think about it: grad tells you how much the objective changes when you nudge w in the positive direction. If grad is positive, that weight acts positively on the objective, i.e. increasing it increases the objective, so to move toward a minimum you should adjust the weight in the opposite direction.
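A one-dimensional sketch of the same idea: for f(w) = w^2 the gradient is 2w, and repeatedly stepping against it drives w (and therefore f) toward the minimum at 0.
w, lr = 5.0, 0.1
for _ in range(50):
    grad = 2 * w        # df/dw for f(w) = w^2
    w -= lr * grad      # step against the gradient
print(w)                # very close to 0, the minimizer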

4. Play with hyperparameters
As for how to pick learning_rate and the regularization parameter, that is what the validation set is for: train with different parameter combinations on the training data, evaluate each on X_val, y_val, and keep the combination with the highest accuracy (a rough sketch of that loop appears a bit further below). I won't go into the details; it is fairly dirty work... The final accuracy only lands around 0.35-0.4, which tells you how effective this single-layer SVM classifier really is. What I do want to show is the final visualization step:

They really are quite ugly to look at!!!
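For reference, here is a rough sketch of the validation loop described above. The grids are only illustrative, and it assumes the LinearSVM subclass of the LinearClassifier shown earlier (with train() and predict() filled in), as in the assignment:
learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]
results = {}
best_val, best_svm = -1, None
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(svm.predict(X_train) == y_train)
        val_acc = np.mean(svm.predict(X_val) == y_val)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val, best_svm = val_acc, svm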
5. Hinge Loss
The hinge loss is defined as E(z) = max(0, 1-z). It is convex, but its derivative is discontinuous (at z = 1), which is why there are variants such as the squared hinge loss (L2-SVM). In the figure below, the hinge loss is the green line, the black line is the 0-1 loss, and the red line is the log loss (the negative log-likelihood).

Generally speaking, the hinge loss goes with the soft-margin SVM; the log loss goes with logistic regression (LR); the squared loss, i.e. least squares, goes with linear regression; and the exponential loss goes with boosting.
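A small sketch of the losses mentioned here, written as functions of the margin z = y * f(x), in case you want to reproduce the comparison plot:
z = np.linspace(-3, 3, 200)
zero_one = (z <= 0).astype(float)       # 0-1 loss (black line)
hinge = np.maximum(0, 1 - z)            # hinge loss (green line, soft-margin SVM)
sq_hinge = np.maximum(0, 1 - z) ** 2    # squared hinge loss (the L2-SVM variant)
log_loss = np.log(1 + np.exp(-z))       # log loss (red line, logistic regression)
exp_loss = np.exp(-z)                   # exponential loss (boosting)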