This post shows how to predict handwritten digits with LR, Random Forest, KNN, SVM, and a neural network, and compares the methods.
KNN
sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1, **kwargs)
Parameters to choose:
- n_neighbors (i.e., K)
- weights ('distance' down-weights far neighbors, which helps when K is large)
- algorithm (affects only prediction time, not accuracy)
- p (p=2 is Euclidean distance, p=1 is Manhattan distance; a toy check follows this list)
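To make the p parameter concrete, here is a small check of how the Minkowski distance changes with p (the two points are illustrative, not the competition data):

import numpy as np

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])
for p in (1, 2, 3):
    d = np.sum(np.abs(a - b) ** p) ** (1.0 / p)
    print("p=%d -> %.3f" % (p, d))  # 7.000 (Manhattan), 5.000 (Euclidean), ~4.498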
Open question:
- Why not binarize the images first? (a hypothetical sketch follows)
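For reference, a hypothetical preprocessing step answering this question: threshold the 0-255 grayscale pixels to {0, 1} before training. The threshold of 128 is an assumption, not something tried in this post.

import numpy as np

def binarize(X, threshold=128):
    """Map grayscale pixel intensities to {0, 1} with a fixed threshold."""
    return (X >= threshold).astype(np.uint8)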
Full code:
KNN_choosePara.py
# -*- coding: UTF-8 -*-
import time

import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score  # sklearn.cross_validation was removed
from sklearn.neighbors import KNeighborsClassifier

# read data
print("reading data")
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
# keep a small subset for fast evaluation
X_train_small = X_train[:10000, :]
y_train_small = y_train[:10000]
X_test = pd.read_csv("../test.csv").values

# knn
# ---------------- small-scale test to choose the best parameters ----------------
start = time.time()  # time.clock() was removed in Python 3.8
print("selecting best parameter range")
knn_clf = KNeighborsClassifier(n_neighbors=5, algorithm='kd_tree', weights='distance', p=3)
score = cross_val_score(knn_clf, X_train_small, y_train_small, cv=3)
print(score.mean())
elapsed = time.time() - start
print("Time used:", int(elapsed), "s")
# k=3
# 0.942300738697
# 0.946100822903  weights='distance'
# 0.950799888775  p=3
# k=5
# 0.939899237556
# 0.94259888029
# k=7
# 0.935395994386
# 0.938997377902
# k=9
# 0.933897851978
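The comments above record a manual sweep over k; a short sketch (reusing X_train_small / y_train_small from KNN_choosePara.py) that automates the same loop:

from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

for k in (3, 5, 7, 9):
    clf = KNeighborsClassifier(n_neighbors=k, algorithm='kd_tree', weights='distance', p=3)
    score = cross_val_score(clf, X_train_small, y_train_small, cv=3).mean()
    print("k=%d -> %.4f" % (k, score))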
KNN_predict.py:
# -*- coding: UTF-8 -*-
# the * import re-runs KNN_choosePara.py and brings in X_train, y_train, X_test, pd, np, time
from KNN.KNN_choosePara import *

clf = KNeighborsClassifier(n_neighbors=5, algorithm='kd_tree', weights='distance', p=3)
start = time.time()
clf.fit(X_train, y_train)  # train the classifier on the full training set
elapsed = time.time() - start
print("Training Time used:", int(elapsed / 60), "min")
print("predicting")
result = clf.predict(X_test)
result = np.c_[range(1, len(result) + 1), result.astype(int)]  # cast to int and prepend a 1-based ImageID column
df_result = pd.DataFrame(result, columns=['ImageID', 'Label'])
df_result.to_csv('./results.knn.csv', index=False)
elapsed = time.time() - start
print("Test Time used:", int(elapsed / 60), "min")
# choosing parameters
# 0.947298365455 score
# ('Time used:', 983, 's')
# reading data
# ('Training Time used:', 0, 'min')
# predicting
# ('Test Time used:', 244, 'min')
# 0.97214 final score
LR
sklearn.model_selection.GridSearchCV(estimator, param_grid, scoring=None, fit_params=None, n_jobs=1, iid=True, refit=True, cv=None, verbose=0, pre_dispatch='2*n_jobs', error_score='raise', return_train_score=True)
Parameters to choose:
- estimator: lr_clf
- param_grid: the parameter grid, as a dict
- n_jobs: number of jobs to run in parallel
Full code:
LR_choosePara.py:
# -*- coding: UTF-8 -*-
# reuse X_train_small / y_train_small (and time) from the KNN script
from KNN.KNN_choosePara import *
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed

start = time.time()
lr_clf = LogisticRegression(solver='newton-cg', multi_class='ovr', max_iter=100, C=1)
# search the parameter space with GridSearchCV
parameters = {'penalty': ['l2'], 'C': [2e-2, 4e-2, 8e-2, 12e-2, 2e-1]}  # dict of candidate values
gs_clf = GridSearchCV(lr_clf, parameters, n_jobs=1, verbose=True)
gs_clf.fit(X_train_small.astype('float') / 256, y_train_small)
# print the grid-search results
print()
results = gs_clf.cv_results_  # grid_scores_ was removed in newer scikit-learn
for mean, std, params in zip(results['mean_test_score'], results['std_test_score'], results['params']):
    print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
print()
elapsed = time.time() - start
print("time used:", elapsed)
SVM
Difference between SVC and NuSVC:
NuSVC, like SVC, is a kernel support vector classifier; the difference is that it controls the number of support vectors through a dedicated parameter (nu).
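A minimal sketch of the two interfaces (the parameter values are illustrative only): SVC is regularized through the penalty C, while NuSVC uses nu, an upper bound on the fraction of margin errors and a lower bound on the fraction of support vectors.

from sklearn.svm import SVC, NuSVC

c_clf = SVC(C=1.0, kernel='rbf', gamma=0.02)       # tuned through the penalty C
nu_clf = NuSVC(nu=0.05, kernel='rbf', gamma=0.02)  # tuned through nu instead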
SVM_choosePara.py
# -*- coding: UTF-8 -*-
# reuse X_train_small / y_train_small (and time) from the KNN script
from KNN.KNN_choosePara import *
from sklearn.svm import NuSVC
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed

start = time.time()
# classifier (nu must be a float)
svm_clf = NuSVC(nu=0.1, kernel='rbf', gamma=0.1, verbose=True)
# choose the parameters
parameters = [{'nu': [0.05, 0.02], 'gamma': [3e-2, 2e-2, 1e-2]}]
gs_clf = GridSearchCV(svm_clf, parameters, n_jobs=1, verbose=True)
gs_clf.fit(X_train_small.astype('float') / 256, y_train_small)
print()
results = gs_clf.cv_results_  # grid_scores_ was removed in newer scikit-learn
for mean, std, params in zip(results['mean_test_score'], results['std_test_score'], results['params']):
    print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
print()
elapsed = time.time() - start
print("Time used:", elapsed)
SVM_predict.py
# -*- coding: UTF-8 -*-
import time

import numpy as np
import pandas as pd
from sklearn.svm import NuSVC

svm_clf = NuSVC(nu=0.2, kernel='rbf', gamma=0.2, verbose=True)
start = time.time()
# read data
print("reading data")
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
svm_clf.fit(X_train, y_train)  # train the classifier on the full training set
elapsed = time.time() - start
print("Training Time used:", int(elapsed / 60), "min")
print("predicting")
X_test = pd.read_csv("../test.csv").values
result = svm_clf.predict(X_test)
result = np.c_[range(1, len(result) + 1), result.astype(int)]  # cast to int and prepend a 1-based ImageID column
df_result = pd.DataFrame(result, columns=['ImageID', 'Label'])
df_result.to_csv('./results.svm.csv', index=False)
elapsed = time.time() - start
print("Test Time used:", int(elapsed / 60), "min")
# optimization finished, #iter = 16982
# C = 49.988524
# obj = 4126.039937, rho = 0.015150
# nSV = 8251, nBSV = 0
#
# Total nSV = 42000
# ('Training Time used:', 175, 'min')
# predicting
# ('Test Time used:', 201, 'min')
#0.11614
I don't know why yet: the cross-validation score during parameter selection was quite high, but the submission scored only about 0.1; this needs another look. One plausible cause visible in the code: the parameter search was run on pixels scaled by /256, while the final model here was trained on raw 0-255 values, so the tuned rbf gamma no longer matches the data scale (the submitted nu=0.2, gamma=0.2 also differ from the searched ranges).
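If that diagnosis is right, a sketch of the fix for SVM_predict.py would be to scale exactly as in SVM_choosePara.py before fitting and predicting:

X_train_scaled = X_train.astype('float') / 256  # same scaling as during parameter selection
X_test_scaled = X_test.astype('float') / 256
svm_clf.fit(X_train_scaled, y_train)
result = svm_clf.predict(X_test_scaled)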
Random forest
The previous algorithms each took more than an hour to run, while random forest finished parameter selection plus prediction in under ten minutes. Half in doubt, I uploaded the result to Kaggle, and the score turned out to be quite high as well. Code first, analysis later.
RF_choosePara.py
# -*- coding: UTF-8 -*-
import time

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV  # sklearn.grid_search was removed

start = time.time()
# read data
print("reading data")
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
# keep a small subset for fast evaluation
X_train_small = X_train[:10000, :]
y_train_small = y_train[:10000]
# parameter grid ('auto' was removed in scikit-learn >= 1.1; use 'sqrt' there)
parameters = {'criterion': ['gini', 'entropy'], 'max_features': ['auto', 12, 100]}
rf_clf = RandomForestClassifier(n_estimators=400, n_jobs=4, verbose=1)
gs_clf = GridSearchCV(rf_clf, parameters, n_jobs=1, verbose=True)
gs_clf.fit(X_train_small.astype('int'), y_train_small)
print()
results = gs_clf.cv_results_  # grid_scores_ was removed in newer scikit-learn
for mean, std, params in zip(results['mean_test_score'], results['std_test_score'], results['params']):
    print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params))
print()
elapsed = time.time() - start
print("Time used:", elapsed)  # seconds
# 0.946 (+/-0.003) for {'max_features': 'auto', 'criterion': 'gini'}
# 0.947 (+/-0.001) for {'max_features': 12, 'criterion': 'gini'}
# 0.944 (+/-0.004) for {'max_features': 100, 'criterion': 'gini'}
# 0.945 (+/-0.004) for {'max_features': 'auto', 'criterion': 'entropy'}
# 0.946 (+/-0.003) for {'max_features': 12, 'criterion': 'entropy'}
# 0.941 (+/-0.004) for {'max_features': 100, 'criterion': 'entropy'}
# ()
# ('Time used:', 814.006147)
RF_predict.py
# -*- coding: UTF-8 -*-
import time

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# note: only 12 trees here, far fewer than the 400 used during parameter selection
clf = RandomForestClassifier(n_estimators=12)
start = time.time()
# read data
print("reading data")
dataset = pd.read_csv("../train.csv")
X_train = dataset.values[0:, 1:]
y_train = dataset.values[0:, 0]
print("fitting the model")
clf.fit(X_train, y_train)  # train the classifier on the full training set
elapsed = time.time() - start
print("Training Time used:", int(elapsed / 60), "min")
# predict
print("predicting")
X_test = pd.read_csv("../test.csv").values
result = clf.predict(X_test)
result = np.c_[range(1, len(result) + 1), result.astype(int)]  # cast to int and prepend a 1-based ImageID column
df_result = pd.DataFrame(result, columns=['ImageID', 'Label'])
df_result.to_csv('./results.rf.csv', index=False)
elapsed = time.time() - start
print("Test Time used:", int(elapsed / 60), "min")
# reading data
# fitting the model
# ('Training Time used:', 0, 'min')
# predicting
# ('Test Time used:', 0, 'min')
#0.94629
Random forest is an ensemble learning method.
Random forests were proposed by Leo Breiman (2001). Using bootstrap resampling, k new training sets are drawn with replacement from the original training set of N samples; a classification tree is grown on each bootstrap sample, and together the k trees form the forest. New data are classified by majority vote of the trees. In essence it is an improvement on the decision tree algorithm: multiple trees are combined, each built from an independently drawn sample, and every tree in the forest has the same distribution; the classification error depends on the strength of each individual tree and on the correlation between trees. Each node is split on a randomly chosen subset of features, and the errors produced under the different choices are compared; the internally estimated error, the strength, and the correlation determine how many features to select. A single tree may have little classifying power, but after a large number of trees is generated at random, the most likely class for a test sample is chosen by tallying the classification result of every tree.
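A toy sketch of the bootstrap step described above (plain numpy, reusing X_train / y_train from the scripts): each tree gets its own sample of n rows drawn with replacement.

import numpy as np

rng = np.random.RandomState(0)
n = len(X_train)
idx = rng.randint(0, n, size=n)              # n indices drawn with replacement
X_boot, y_boot = X_train[idx], y_train[idx]  # training set for one tree
# on average ~63.2% of distinct rows appear; the rest are "out-of-bag" (OOB)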
Advantages of random forests:
a. Performs well on many datasets; the two sources of randomness (bootstrap sampling and random feature selection) make it hard to overfit
b. Has a clear edge over other algorithms on many current datasets; the same two sources of randomness give it good robustness to noise
c. Handles very high-dimensional data (many features) without feature selection, and adapts well to different datasets: it copes with both discrete and continuous features, and the data need no normalization
d. Can produce a proximity matrix Proximities = (p_ij) that measures similarity between samples: p_ij = a_ij / N, where a_ij is the number of times samples i and j land in the same leaf node and N is the number of trees in the forest
e. Uses an unbiased estimate of the generalization error while the forest is built
f. Trains quickly, and yields a ranking of variable importance (two kinds: the increase in OOB misclassification rate, and the decrease in Gini impurity at splits); see the sketch after this list
g. Can detect interactions between features during training
h. Easy to parallelize
i. Relatively simple to implement
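A minimal sketch of reading the diagnostics from points d, e, and f out of scikit-learn (reusing X_train / y_train; n_estimators and the subsample size are illustrative):

from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, oob_score=True, n_jobs=-1)
rf.fit(X_train, y_train)
print(rf.oob_score_)            # point e: out-of-bag estimate of accuracy
print(rf.feature_importances_)  # point f: mean decrease in Gini impurity per pixel

# point d: proximities from shared leaves (O(n^2) memory, so subsample first)
leaves = rf.apply(X_train[:1000])  # leaf index for each (sample, tree) pair
prox = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)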
The linear LR model is clearly the weakest. Neural networks are indeed currently the strongest for this kind of image problem. The support vectors in the SVM play an obvious role here, accurately picking out the most discriminative "template images". RF is something of a cure-all for nonlinear problems, and the default parameters already do well here; it is only slightly worse than KNN, since it uses only local pixel information. Of course, this comparison applies only to the digit-recognition problem; other problems may turn out differently, so analyze each problem on its own terms and pick a model that matches its characteristics.