An Essential Tuning Tool: Grid Search


What is Grid Search?

Grid Search is a parameter-tuning technique based on exhaustive search: loop over every candidate combination of parameter values, train a model for each, and keep the combination that performs best. In principle it is no harder than finding the maximum value in an array. (Why is it called "grid" search? Take a model with two parameters: if parameter a has 3 candidate values and parameter b has 4, listing every combination gives a 3×4 table. Each cell of that table is one grid point, and the loop walks through and evaluates every cell, hence the name grid search.)

(Figure: the parameter grid as a table)
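Enumerating such a grid is just taking the Cartesian product of the candidate lists. A minimal illustration (the candidate values here are made up for the example):

```python
from itertools import product

# Hypothetical candidates: parameter a has 3 values, parameter b has 4,
# so the grid has 3 * 4 = 12 cells.
a_values = [0.1, 1, 10]
b_values = ["linear", "poly", "rbf", "sigmoid"]

# The search space is the Cartesian product: every (a, b) pair is one cell.
grid = list(product(a_values, b_values))
print(len(grid))  # 12 cells to try
```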

Simple Grid Search

Tuning two parameters, as an example:

from sklearn.datasets import load_iris
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

iris = load_iris()
X_train,X_test,y_train,y_test = train_test_split(iris.data,iris.target,random_state=0)
print("Size of training set:{} size of testing set:{}".format(X_train.shape[0],X_test.shape[0]))

####   grid search start
best_score = 0
for gamma in [0.001,0.01,0.1,1,10,100]:
    for C in [0.001,0.01,0.1,1,10,100]:
        svm = SVC(gamma=gamma,C=C) # train one model for every possible parameter combination
        svm.fit(X_train,y_train)
        score = svm.score(X_test,y_test)
        if score > best_score: # keep the best-performing parameters
            best_score = score
            best_parameters = {'gamma':gamma,'C':C}
####   grid search end

print("Best score:{:.2f}".format(best_score))
print("Best parameters:{}".format(best_parameters))

Output:

Size of training set:112 size of testing set:38
Best score:0.97
Best parameters:{'gamma': 0.001, 'C': 100}

The problem:

After the original data is split into a training set and a test set, the test set is used both for tuning the parameters and for measuring how good the model is. This makes the reported score optimistically biased: the test set has leaked into the model-selection loop, while our real goal is to estimate how the trained model performs on unseen data.

The fix:

Split the training set once more, into a training set and a validation set. The original data then ends up in three parts: the training set is used to fit the model, the validation set to tune the parameters, and the test set to measure final model performance.


(Figure: the three-way split into training, validation, and test sets)

X_trainval,X_test,y_trainval,y_test = train_test_split(iris.data,iris.target,random_state=0)
X_train,X_val,y_train,y_val = train_test_split(X_trainval,y_trainval,random_state=1)
print("Size of training set:{} size of validation set:{} size of testing set:{}".format(X_train.shape[0],X_val.shape[0],X_test.shape[0]))

best_score = 0.0
for gamma in [0.001,0.01,0.1,1,10,100]:
    for C in [0.001,0.01,0.1,1,10,100]:
        svm = SVC(gamma=gamma,C=C)
        svm.fit(X_train,y_train)
        score = svm.score(X_val,y_val)
        if score > best_score:
            best_score = score
            best_parameters = {'gamma':gamma,'C':C}
svm = SVC(**best_parameters) # rebuild a model with the best parameters
svm.fit(X_trainval,y_trainval) # retrain on training + validation data; more data generally helps performance
test_score = svm.score(X_test,y_test) # final evaluation on the test set
print("Best score on validation set:{:.2f}".format(best_score))
print("Best parameters:{}".format(best_parameters))
print("Best score on test set:{:.2f}".format(test_score))

Output:

Size of training set:84 size of validation set:28 size of testing set:38
Best score on validation set:0.96
Best parameters:{'gamma': 0.001, 'C': 10}
Best score on test set:0.92
However, the final result of this simple grid search depends heavily on how the data happens to be split. To reduce this randomness, we use cross-validation.

Grid Search with Cross Validation

from sklearn.model_selection import cross_val_score

best_score = 0.0
for gamma in [0.001,0.01,0.1,1,10,100]:
    for C in [0.001,0.01,0.1,1,10,100]:
        svm = SVC(gamma=gamma,C=C)
        scores = cross_val_score(svm,X_trainval,y_trainval,cv=5) # 5-fold cross-validation
        score = scores.mean() # average the five fold scores
        if score > best_score:
            best_score = score
            best_parameters = {"gamma":gamma,"C":C}
svm = SVC(**best_parameters)
svm.fit(X_trainval,y_trainval)
test_score = svm.score(X_test,y_test)
print("Best score on validation set:{:.2f}".format(best_score))
print("Best parameters:{}".format(best_parameters))
print("Score on testing set:{:.2f}".format(test_score))

Output:

Best score on validation set:0.97
Best parameters:{'gamma': 0.01, 'C': 100}
Score on testing set:0.97

Cross-validation is so often combined with grid search as a way to evaluate parameter settings that the combination has its own name: grid search with cross validation. sklearn provides the GridSearchCV class for exactly this. The class implements fit, predict, score and so on, so it behaves like any other estimator. Calling fit (1) searches for the best parameters and (2) refits an estimator with those best parameters.

from sklearn.model_selection import GridSearchCV

# list the parameters to tune and their candidate values
param_grid = {"gamma":[0.001,0.01,0.1,1,10,100],
             "C":[0.001,0.01,0.1,1,10,100]}
print("Parameters:{}".format(param_grid))

grid_search = GridSearchCV(SVC(),param_grid,cv=5) # instantiate a GridSearchCV object
X_train,X_test,y_train,y_test = train_test_split(iris.data,iris.target,random_state=10)
grid_search.fit(X_train,y_train) # fit: finds the best parameters and refits a new SVC estimator with them
print("Test set score:{:.2f}".format(grid_search.score(X_test,y_test)))
print("Best parameters:{}".format(grid_search.best_params_))
print("Best cross-validation score:{:.2f}".format(grid_search.best_score_)) # best_score_ is the mean cross-validated score, not a training-set score

Output:

Parameters:{'gamma': [0.001, 0.01, 0.1, 1, 10, 100], 'C': [0.001, 0.01, 0.1, 1, 10, 100]}
Test set score:0.97
Best parameters:{'C': 10, 'gamma': 0.1}
Best cross-validation score:0.98
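Beyond the single best result, GridSearchCV records the mean cross-validated score of every grid cell in its cv_results_ attribute, which is handy for seeing how sensitive the model is to each parameter. A minimal sketch (the reduced 3×3 grid here is an assumption, just to keep it quick):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=10)

# A reduced 3x3 grid, assumed for illustration.
param_grid = {"gamma": [0.01, 0.1, 1], "C": [1, 10, 100]}
grid_search = GridSearchCV(SVC(), param_grid, cv=5)
grid_search.fit(X_train, y_train)

# cv_results_ holds one entry per grid cell: the parameter
# combination and its mean score across the 5 folds.
for params, mean_score in zip(grid_search.cv_results_["params"],
                              grid_search.cv_results_["mean_test_score"]):
    print(params, round(float(mean_score), 3))
```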
The common drawback of all grid search tuning is cost: the more parameters and the more candidate values, the longer it takes! In practice, first search a coarse, wide range, then refine around the best region.
(Figure: the GridSearchCV tuning workflow)
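The coarse-to-fine idea can be sketched in two stages; the specific ranges below are assumptions chosen for illustration, not a recipe:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0)

# Stage 1: a coarse grid spanning several orders of magnitude.
coarse = {"C": [0.01, 1, 100], "gamma": [0.01, 1, 100]}
search = GridSearchCV(SVC(), coarse, cv=5)
search.fit(X_train, y_train)
best_C = search.best_params_["C"]
best_gamma = search.best_params_["gamma"]

# Stage 2: a finer grid centered on the stage-1 winner.
fine = {"C": [best_C / 2, best_C, best_C * 2],
        "gamma": [best_gamma / 2, best_gamma, best_gamma * 2]}
search = GridSearchCV(SVC(), fine, cv=5)
search.fit(X_train, y_train)
print(search.best_params_)
```

Two small grids of 9 cells each are much cheaper than one fine grid covering the whole range.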

To sum it all up:

  • Grid Search is a tuning method that exhaustively searches lists of candidate parameter values, training a model for every combination to find the best one. As we have seen, its main drawback is that it is time-consuming!