Common methods: (1) a simple train/test split, (2) K-fold cross-validation.
1. Simplest split into training and test sets: train_test_split
train_test_split is a commonly used function in cross-validation workflows: it randomly splits the samples into a training set and a test set according to a given proportion.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
# load the data
filename = "d:/my_project/input/pima-indians-diabetes.data.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
data = pd.read_csv(filename, names=names)
# split the data into input features and target
X = data.iloc[:, 0:8].values
y = data.iloc[:, 8].values
# split into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=4)
# train the model
model = LogisticRegression()
model.fit(X_train, y_train)
result = model.score(X_test, y_test)
print("算法評(píng)估結(jié)果:%.3f%%" % (result))
2. K-fold cross-validation
K-fold cross-validation is the gold standard for evaluating machine learning algorithms; K=10 is the usual recommendation.
from sklearn.model_selection import KFold, cross_val_score
kfold = KFold(n_splits=10, shuffle=False)
model = LogisticRegression()
result = cross_val_score(model, X, y, cv=kfold)
print("Evaluation result: %.3f (%.3f)" % (result.mean(), result.std()))

Stratified k-fold is similar to k-fold: the dataset is split into k folds. The difference is that within each of the k folds, the proportion of each class matches the class proportions of the original dataset.
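The stratification behaviour is easy to verify with a quick sketch. The synthetic imbalanced dataset below is illustrative, not the Pima diabetes data from above:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# synthetic imbalanced data: 80 samples of class 0, 20 of class 1
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 80 + [1] * 20)

skf = StratifiedKFold(n_splits=10, shuffle=False)
for train_idx, test_idx in skf.split(X, y):
    # each test fold of 10 samples keeps the original 80/20 class ratio
    print(np.bincount(y[test_idx]))  # [8 2] in every fold
```

With a plain KFold on this ordered y, the first eight test folds would contain only class 0, which is why StratifiedKFold is the safer default for imbalanced classification.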