LDA: Dimensionality Reduction for Labeled Data

Earlier we used PCA to reduce the dimensionality of unlabeled data. For labeled data like that in the figure below, PCA would project onto the v axis, since the variance of that projection is largest, but the two classes then become inseparable. LDA takes the existing class labels into account and instead projects the data onto the u axis, where the classes stay apart.


Suppose the data falls into n classes; let the matrix $D_i$ hold the class-$i$ samples and $m_i$ be its mean vector. Project the data onto a vector $w$ with $\|w\| = 1$; the projected class means are

$$M_i = w^T m_i$$

The projected scatter of class $D_i$ is

$$s_i^2 = \sum_{x \in D_i} (w^T x - M_i)^2 = w^T S_i w, \qquad S_i = \sum_{x \in D_i} (x - m_i)(x - m_i)^T$$

For two classes we want the separation $(M_1 - M_2)^2$ to be as large as possible relative to the within-class scatter, so that the projection preserves the class structure. The problem becomes maximizing

$$J(w) = \frac{(M_1 - M_2)^2}{s_1^2 + s_2^2}$$

Defining the between-class scatter $S_A = (m_1 - m_2)(m_1 - m_2)^T$ and the within-class scatter $S_B = S_1 + S_2$, this is

$$J(w) = \frac{w^T S_A w}{w^T S_B w}$$

Setting the derivative of $J$ with respect to $w$ to zero gives

$$S_A w = J\, S_B w \quad\Longleftrightarrow\quad S_B^{-1} S_A\, w = J\, w$$

so the maximum of $J$ is the largest eigenvalue of the matrix $S_B^{-1} S_A$, attained at the corresponding eigenvector.
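A quick numerical check of this result (a self-contained sketch; the data and names here are synthetic stand-ins, not the example below): the top eigenvector of $S_B^{-1} S_A$ should attain a value of $J$ equal to the largest eigenvalue, and no random direction should exceed it.

```python
import numpy as np

rng = np.random.default_rng(0)
# two synthetic classes, stored as columns (2 x 100 each)
D1 = rng.normal([0, 0], 1.0, size=(100, 2)).T
D2 = rng.normal([3, 1], 1.0, size=(100, 2)).T

m1 = D1.mean(axis=1, keepdims=True)
m2 = D2.mean(axis=1, keepdims=True)
SA = (m1 - m2) @ (m1 - m2).T                             # between-class scatter
SB = (D1 - m1) @ (D1 - m1).T + (D2 - m2) @ (D2 - m2).T   # within-class scatter

def J(w):
    return (w.T @ SA @ w).item() / (w.T @ SB @ w).item()

evalue, evec = np.linalg.eig(np.linalg.inv(SB) @ SA)
k = np.argmax(evalue.real)
w_star = evec[:, k].real[:, None]

# J at the top eigenvector equals the top eigenvalue...
assert np.isclose(J(w_star), evalue.real[k])
# ...and no random direction does better.
for _ in range(1000):
    w = rng.normal(size=(2, 1))
    assert J(w) <= evalue.real[k] + 1e-9
print("max J =", evalue.real[k])
```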

Example

from numpy.random import random_sample
import numpy as np
import matplotlib.pyplot as plt

N = 600
scale = 60
# ellipse center
cx = 5
cy = 6
# semi-axes: a narrow, b tall
a = 1/8.0
b = 4
# sample N points inside the ellipse
X = 2*a*random_sample((N,)) + cx - a
Y = [2*b*np.sqrt(1.0-((xi-cx)/a)**2)*random_sample() + cy - b*np.sqrt(1.0-((xi-cx)/a)**2) for xi in X]
fig, ax = plt.subplots()
fig.set_size_inches(4, 6)
ax.scatter(X, Y, c="none", s=scale, alpha=1, edgecolors="green")
# second class: the same ellipse shifted 0.3 to the right
X1 = 2*a*random_sample((N,)) + cx - a
Y1 = [2*b*np.sqrt(1.0-((xi-cx)/a)**2)*random_sample() + cy - b*np.sqrt(1.0-((xi-cx)/a)**2) for xi in X1]
ax.scatter(X1 + 0.3, Y1, c="none", s=scale, alpha=1, edgecolors="red")
plt.savefig('lda.png')
plt.show()

Implementing it ourselves

D1 = np.array([X, Y])
D2 = np.array([X1 + 0.3, Y1])
m1 = np.mean(D1, axis=1)[None,]
print(m1)
m2 = np.mean(D2, axis=1)[None,]
print(m2)
SA = np.dot((m1 - m2).T, (m1 - m2))       # between-class scatter
S1 = np.dot(D1 - m1.T, (D1 - m1.T).T)     # within-class scatter of class 1
print(S1)
S2 = np.dot(D2 - m2.T, (D2 - m2.T).T)
SB = S1 + S2
S = np.dot(np.linalg.inv(SB), SA)
evalue, evec = np.linalg.eig(S)
k = np.argmax(evalue)                     # project onto the top eigenvector
w = evec[:, k]
data1 = np.dot(w, D1)
plt.scatter(data1, [0]*data1.size, c='g', s=scale, alpha=1, edgecolors='none')
data2 = np.dot(w, D2)
plt.scatter(data2, [0]*data2.size, c='r', s=scale, alpha=1, edgecolors='none')
plt.show()

Using sklearn

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=1)
X3 = np.column_stack((D1, D2))      # 2 x 2N: class 1 columns first, then class 2
print(X3.shape)
Y = np.ones(X3.shape[1])
Y[0:N] = 0                          # first N columns belong to class 1
print(Y.shape)
X_train_lda = lda.fit_transform(X3.T, Y)
print(X_train_lda.shape)
xy = X_train_lda.size
plt.scatter(X_train_lda, [0]*xy, c=(['g']*(xy//2) + ['r']*(xy//2)), s=scale, alpha=1, edgecolors='none')
plt.show()

The two classes project cleanly onto two separate line segments.
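As a cross-check (a sketch on synthetic stand-in data, assuming that `LinearDiscriminantAnalysis.scalings_` holds sklearn's projection direction, which is the case for the default `svd` solver), the hand-computed Fisher direction and sklearn's should be parallel, i.e. equal up to scale and sign:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
D1 = rng.normal([0, 0], 1.0, size=(200, 2))   # class 0, rows are samples
D2 = rng.normal([3, 1], 1.0, size=(200, 2))   # class 1
X = np.vstack((D1, D2))
y = np.array([0]*200 + [1]*200)

# hand-rolled Fisher direction: top eigenvector of SB^-1 SA
m1, m2 = D1.mean(0, keepdims=True), D2.mean(0, keepdims=True)
SA = (m1 - m2).T @ (m1 - m2)
SB = (D1 - m1).T @ (D1 - m1) + (D2 - m2).T @ (D2 - m2)
evalue, evec = np.linalg.eig(np.linalg.inv(SB) @ SA)
w = evec[:, np.argmax(evalue.real)].real

# sklearn's direction for the same data
lda = LinearDiscriminantAnalysis(n_components=1).fit(X, y)
s = lda.scalings_[:, 0]

# parallel vectors have |cosine| == 1
cos = abs(w @ s) / (np.linalg.norm(w) * np.linalg.norm(s))
print(round(cos, 6))
```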

The multi-class case

The figure below shows three groups of data in 3-D space projected down to two dimensions.



The projection is no longer a single vector $w$ but a matrix $W$, so the numerator and denominator of $J$ must be redefined. The spread of multi-dimensional data is captured by scatter (covariance-type) matrices: the between-class scatter is built from the class means $m_i$ around the overall mean $m$, and the within-class scatter sums each class's own scatter:

$$S_A = \sum_i (m_i - m)(m_i - m)^T, \qquad S_B = \sum_i \sum_{x \in D_i} (x - m_i)(x - m_i)^T$$

The criterion $J$ is now a ratio of two matrices, which has no meaning as it stands. From PCA we know that after a change of basis the eigenvalues measure the spread of the projections along the eigenvectors, and the product of the eigenvalues equals the determinant, so we use

$$J(W) = \frac{\left| W^T S_A W \right|}{\left| W^T S_B W \right|}$$

which is again maximized by taking the columns of $W$ to be the top eigenvectors of $S_B^{-1} S_A$.

Example

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from numpy.random import random_sample
import numpy as np

ax = plt.subplot(111, projection='3d')   # create a 3-D plot
N = 200
scale = 60
# ellipsoid center
cx = 2
cy = 2
cz = 2
# semi-axes
a = 1.0
b = 1.5
c = 4.0
def plot(cx, cy, cz, a, b, c, N, color):
    # sample N points inside an ellipsoid centered at (cx, cy, cz)
    X = 2*a*random_sample((N,)) + cx - a
    Y = [b*np.sqrt(1.0-((xi-cx)/a)**2)*(2*random_sample()-1) + cy for xi in X]
    Z = [c*np.sqrt(1-((xi-cx)/a)**2-((yi-cy)/b)**2)*(2*random_sample()-1) + cz for xi, yi in zip(X, Y)]
    ax.scatter(X, Y, Z, c=color, s=scale, alpha=1, edgecolors='none')
    return np.array((X, Y, Z))
data1 = plot(cx, cy, cz, a, b, c, N, 'b')
data2 = plot(cx+3, cy, cz, a, b, c, N, 'r')
data3 = plot(cx, cy+4, cz, a, b, c, N, 'g')
data = np.hstack((data1, data2, data3))  # 3 x 3N, columns are points
print(data.shape)
pca = PCA(n_components=2)
train = pca.fit_transform(data.T).T      # samples are columns, so fit on data.T
print(train.shape)
ax.set_xlim([0, 5])
ax.set_ylim([0, 5])
ax.set_zlim([0, 5])
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_zlabel("Z")

plt.show()

This generates three ellipsoids, with the data points in blue, red, and green groups.



The data after PCA

plt.scatter(train[0,:], train[1,:], c=(['b']*N + ['r']*N + ['g']*N), s=scale, alpha=1, edgecolors='none')
plt.show()

The data after LDA

m1 = np.mean(data1, axis=1)[None,].T
m2 = np.mean(data2, axis=1)[None,].T
m3 = np.mean(data3, axis=1)[None,].T
print(m1.shape)
m = np.hstack((m1, m2, m3))
mTotal = np.mean(data, axis=1)[None,].T

SA = np.dot(m - mTotal, (m - mTotal).T)   # between-class scatter
SB = np.dot(data1 - m1, (data1 - m1).T) + np.dot(data2 - m2, (data2 - m2).T) + np.dot(data3 - m3, (data3 - m3).T)  # within-class scatter

S = np.dot(np.linalg.inv(SB), SA)
evalue, evec = np.linalg.eig(S)
order = np.argsort(evalue.real)[::-1]     # keep the two largest eigenvalues
W = evec[:, order[:2]].real
myTrain = np.dot(W.T, data)               # 2 x 3N
plt.scatter(myTrain[0,:], myTrain[1,:], c=(['b']*N + ['r']*N + ['g']*N), s=scale, alpha=1, edgecolors='none')
plt.show()
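As an aside, `inv(SB) @ SA` is not symmetric, so `np.linalg.eig` can return complex output with unordered eigenvalues. An equivalent and numerically safer route (a sketch with synthetic stand-in data, not the author's code) is the generalized symmetric eigenproblem $S_A w = \lambda S_B w$, which `scipy.linalg.eigh` solves directly with real, ascending eigenvalues:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
N = 200
# three Gaussian blobs in 3-D, stored as columns (3 x N each)
data1 = rng.normal([2, 2, 2], 1.0, size=(N, 3)).T
data2 = rng.normal([5, 2, 2], 1.0, size=(N, 3)).T
data3 = rng.normal([2, 6, 2], 1.0, size=(N, 3)).T
data = np.hstack((data1, data2, data3))

means = [d.mean(axis=1, keepdims=True) for d in (data1, data2, data3)]
mTotal = data.mean(axis=1, keepdims=True)
m = np.hstack(means)
SA = (m - mTotal) @ (m - mTotal).T        # between-class scatter
SB = sum((d - mi) @ (d - mi).T for d, mi in zip((data1, data2, data3), means))

# eigh solves SA w = lambda * SB w; eigenvalues ascend, so the last
# two columns are the top two discriminant directions
evalue, evec = eigh(SA, SB)
W = evec[:, -2:]
proj = W.T @ data                          # 2 x 3N projection
print(proj.shape)
```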

Using sklearn

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
lda = LDA(n_components=2)
y_train = np.array([0]*N + [1]*N + [2]*N)
X_train_lda = lda.fit_transform(data.T, y_train)
print(X_train_lda.shape)
plt.scatter(X_train_lda[:, 0], X_train_lda[:, 1], c=(['b']*N + ['r']*N + ['g']*N), s=scale, alpha=1, edgecolors='none')
plt.show()

Note: the within-class scatter matrix SB is not necessarily invertible (for example, when there are fewer samples than features). In that case, reduce the dimensionality with PCA first and then apply LDA.
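The PCA-then-LDA combination can be wired up with scikit-learn's `Pipeline` (a minimal sketch; the data is a synthetic stand-in with more features than samples, which makes the within-class scatter singular without the PCA step):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
# 30 samples with 50 features: fewer samples than features,
# so SB would be singular if LDA were applied directly
X = np.vstack([rng.normal(0, 1, (15, 50)), rng.normal(2, 1, (15, 50))])
y = np.array([0]*15 + [1]*15)

pipe = Pipeline([
    ("pca", PCA(n_components=10)),   # drop to a full-rank subspace first
    ("lda", LinearDiscriminantAnalysis(n_components=1)),
])
X_lda = pipe.fit_transform(X, y)
print(X_lda.shape)
```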
