Whenever we apply preprocessing steps to the training set (feature standardization, principal component analysis, and so on),
we need to reuse the fitted parameters on the test set.
Pipeline wraps all of these steps into one streamlined, managed unit, making it easy to reapply the same fitted parameters to new data.
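For contrast, this is the manual pattern a Pipeline automates (a sketch assuming X_train and X_test are already split, as they are below):
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # learn mean/std from the training set only
X_test_std = scaler.transform(X_test)        # reuse the training-set parameters on the test set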
A Pipeline comes in handy in several places:
- Modular feature transforms: only a little code is needed to add a new feature to the training set.
- Automated grid search: declare the candidate models and parameters up front, and the best model is searched for and recorded automatically (see the sketch after this list).
- Automated ensemble generation: every so often, take the current best K models and ensemble them.
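For the grid-search case, a minimal sketch (the parameter grid here is hypothetical; step parameters are addressed as '<step name>__<parameter name>'):
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
pipe = Pipeline([('sc', StandardScaler()), ('clf', LogisticRegression())])
param_grid = {'clf__C': [0.1, 1.0, 10.0]}  # candidate values for the final step's C
gs = GridSearchCV(pipe, param_grid, cv=5)  # each candidate is evaluated with 5-fold CV
gs.fit(X_train, y_train)                   # searches, then refits the best model
print(gs.best_params_, gs.best_score_)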
An example:
The task is to classify the Breast Cancer Wisconsin dataset.
It contains 569 samples; the first column is an ID, the second column is the class label (M = malignant, B = benign),
and columns 3-32 are real-valued features.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/'
'breast-cancer-wisconsin/wdbc.data', header=None)
# Breast Cancer Wisconsin dataset
X, y = df.values[:, 2:], df.values[:, 1]
encoder = LabelEncoder()
y = encoder.fit_transform(y)
# The class labels are encoded as integers: M (malignant) -> 1, B (benign) -> 0
# e.g. encoder.transform(['M', 'B']) returns array([1, 0])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2, random_state=0)
We will use a Pipeline to apply the following steps to the training and test sets:
- First, StandardScaler standardizes each column of the dataset (a transformer).
- Next, PCA compresses the original 30 features down to 2 dimensions (a transformer).
- Finally, LogisticRegression fits the model (an estimator).
Pipeline is called with a list of tuples: the first element of each tuple is a name for the step, the second is a scikit-learn transformer or estimator.
Note that every intermediate step must be a transformer, i.e. it must implement fit and transform methods (or fit_transform).
The final step is an estimator: it must have a fit method, but a transform method is not required.
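Any object honoring that contract works as an intermediate step. A minimal custom-transformer sketch (this IdentityTransformer is hypothetical; TransformerMixin derives fit_transform from fit and transform):
from sklearn.base import BaseEstimator, TransformerMixin

class IdentityTransformer(BaseEstimator, TransformerMixin):
    def fit(self, X, y=None):
        return self   # nothing to learn
    def transform(self, X):
        return X      # pass the data through unchanged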
Then Pipeline.fit trains on the training set: pipe_lr.fit(X_train, y_train).
Pipeline.score then predicts on the test set and scores it directly: pipe_lr.score(X_test, y_test).
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
pipe_lr = Pipeline([('sc', StandardScaler()),
('pca', PCA(n_components=2)),
('clf', LogisticRegression(random_state=1))
])
pipe_lr.fit(X_train, y_train)
print('Test accuracy: %.3f' % pipe_lr.score(X_test, y_test))
# Test accuracy: 0.947
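The fitted steps remain accessible on the pipeline through named_steps, e.g. to check how much variance the two retained principal components explain:
print(pipe_lr.named_steps['pca'].explained_variance_ratio_)  # variance kept by each component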
A Pipeline can also be used for feature selection,
for example selecting features with SelectKBest
and classifying with an SVM:
anova_filter = SelectKBest(f_regression, k=5)
clf = svm.SVC(kernel='linear')
anova_svm = Pipeline([('anova', anova_filter), ('svc', clf)])
The full example:
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
from sklearn.pipeline import Pipeline
# generate some data to play with
X, y = make_classification(
    n_informative=5, n_redundant=0, random_state=42)
# ANOVA SVM-C
anova_filter = SelectKBest(f_regression, k=5)
clf = svm.SVC(kernel='linear')
anova_svm = Pipeline([('anova', anova_filter), ('svc', clf)])
anova_svm.set_params(anova__k=10, svc__C=.1).fit(X, y)
prediction = anova_svm.predict(X)
anova_svm.score(X, y)
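The fitted selector can be inspected the same way, e.g. to see which input features SelectKBest kept:
print(anova_svm.named_steps['anova'].get_support())  # boolean mask over the input features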
K-fold cross-validation can of course be applied as well; because the scaler lives inside the Pipeline, it is refit on each training fold rather than leaking test-fold statistics:
model = Pipeline(estimators)
seed = 7
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(model, X, Y, cv=kfold)
print(results.mean())
The full example:
# Create a pipeline that standardizes the data then creates a model
from pandas import read_csv
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
# load data
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
# create pipeline
estimators = []
estimators.append(('standardize', StandardScaler()))
estimators.append(('lda', LinearDiscriminantAnalysis()))
model = Pipeline(estimators)
# evaluate pipeline
seed = 7
kfold = KFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(model, X, Y, cv=kfold)
print(results.mean())
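As a side note, make_pipeline builds the same pipeline without explicit names; step names are derived from the lower-cased class names:
from sklearn.pipeline import make_pipeline
# equivalent to the Pipeline above, with steps auto-named
# 'standardscaler' and 'lineardiscriminantanalysis'
model = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())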
How a Pipeline works:
When the Pipeline's fit method runs,
StandardScaler first executes fit and transform;
the transformed data is then fed to PCA,
which likewise executes fit and transform;
the result is finally fed to LogisticRegression for training.
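Roughly, pipe_lr.fit(X_train, y_train) from the first example is therefore equivalent to this manual sequence (a sketch of the behavior, not the actual implementation); at predict/score time the intermediate steps call only transform:
X_sc = StandardScaler().fit_transform(X_train)           # standardize each column
X_pca = PCA(n_components=2).fit_transform(X_sc)          # compress to 2 dimensions
LogisticRegression(random_state=1).fit(X_pca, y_train)   # train the final estimator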
References:
http://blog.csdn.net/lanchunhui/article/details/50521648
https://dnc1994.com/2016/04/rank-10-percent-in-first-kaggle-competition/