Classification Algorithms: Linear Classifiers (LogisticRegression and SGDClassifier)

Preface

This program is based on a benign/malignant tumor prediction experiment.

The prediction task is implemented twice, once with the LogisticRegression model and once with the SGDClassifier model.

The program runs as-is under Python 3.6; the changes needed for Python 2.x are noted in the code comments.

requirements: pandas, numpy, scikit-learn

For implementations of other classic algorithms, see my other collections.

Analysis of the Results

LogisticRegression achieves slightly higher accuracy than SGDClassifier on the test set. This is because scikit-learn fits LogisticRegression by solving the full optimization problem with an exact batch solver, while SGDClassifier estimates its parameters by stochastic gradient descent.

By comparison, the former takes longer to train but performs slightly better; the latter estimates the model parameters stochastically, one sample at a time, so it trains quickly but produces a slightly weaker model. As a rule of thumb, for training sets on the order of 100,000 samples or more, the stochastic gradient approach is the better choice given the time cost.
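The trade-off above can be sketched on synthetic data (a sketch, not the tumor data: `make_classification` and its sizes are my own choices, and exact times and accuracies will vary by machine and scikit-learn version):

```python
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression, SGDClassifier

# Synthetic binary classification data standing in for the tumor features
X, y = make_classification(n_samples=20000, n_features=20, random_state=33)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=33)

ss = StandardScaler()
X_train = ss.fit_transform(X_train)
X_test = ss.transform(X_test)

scores = {}
for model in (LogisticRegression(), SGDClassifier(max_iter=5, tol=None)):
    start = time.time()
    model.fit(X_train, y_train)  # batch solver vs. stochastic gradient descent
    name = type(model).__name__
    scores[name] = model.score(X_test, y_test)
    print(name, 'time: %.3fs' % (time.time() - start), 'accuracy:', scores[name])
```

On a run of this sketch the batch solver takes visibly longer per sample than five SGD epochs, which is the pattern described above.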

Source Code

import pandas as pd
import numpy as np

# Feature column names
column_names = ['Sample code number', 'Clump Thickness', 'Uniformity of Cell Size',
                'Uniformity of Cell Shape', 'Marginal Adhesion', 'Single Epithelial Cell Size',
                'Bare Nuclei', 'Bland Chromatin', 'Normal Nucleoli', 'Mitoses', 'Class']

# Read the data from the csv file
data = pd.read_csv('./breast-cancer-wisconsin.data', names=column_names)

# Data preprocessing
# Replace every '?' with the standard missing value
data = data.replace(to_replace='?', value=np.nan)

# Drop every row that has any missing feature
data = data.dropna(how='any')

# data.to_csv('./text.csv')  # save the cleaned data to a csv file
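The two preprocessing steps can be checked on a toy frame with made-up values (a sketch, independent of the real data file):

```python
import numpy as np
import pandas as pd

# Made-up rows imitating the dataset, with one missing value marked '?'
toy = pd.DataFrame({'Bare Nuclei': ['1', '?', '10'], 'Class': [2, 2, 4]})
toy = toy.replace(to_replace='?', value=np.nan)  # '?' becomes NaN
toy = toy.dropna(how='any')                      # the NaN row is dropped
print(len(toy))  # 2 rows remain
```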

# Note: with the older scikit-learn used under Python 2.7, import from
# cross_validation instead of model_selection
# from sklearn.cross_validation import train_test_split  # DeprecationWarning
from sklearn.model_selection import train_test_split

# Take 25 percent of the data at random for testing, the rest for training
X_train, X_test, y_train, y_test = train_test_split(
    data[column_names[1:10]], data[column_names[10]], test_size=0.25, random_state=33)

# Check the number and class distribution of the samples
# print(y_train.value_counts())
# print(y_test.value_counts())
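The old/new import split noted in the comments can also be handled in one version-tolerant block (a sketch following scikit-learn's own deprecation history):

```python
try:
    # scikit-learn >= 0.18
    from sklearn.model_selection import train_test_split
except ImportError:
    # older releases, e.g. those commonly paired with Python 2.7
    from sklearn.cross_validation import train_test_split
```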

# Import the relevant classes
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier

# Standardize the training and test sets
ss = StandardScaler()
X_train = ss.fit_transform(X_train)
X_test = ss.transform(X_test)

# Initialize the LogisticRegression and SGDClassifier models
lr = LogisticRegression()
# Note: older scikit-learn releases (as under Python 2.7) already default to
# max_iter=5, tol=None, so there the parameters need not be specified
# sgdc = SGDClassifier()  # DeprecationWarning
sgdc = SGDClassifier(max_iter=5, tol=None)

# Call fit to train the model parameters
lr.fit(X_train, y_train)
# Store the test-set predictions in a variable
lr_y_predict = lr.predict(X_test)

sgdc.fit(X_train, y_train)
sgdc_y_predict = sgdc.predict(X_test)

# Performance analysis
from sklearn.metrics import classification_report

# Get the accuracy from the score function of the LR model
print('Accuracy of LR Classifier:', lr.score(X_test, y_test))
# Get precision, recall, and f1-score from classification_report
print(classification_report(y_test, lr_y_predict, target_names=['Benign', 'Malignant']))

# Get the accuracy from the score function of the SGD classifier
print('Accuracy of SGD Classifier:', sgdc.score(X_test, y_test))
# Get precision, recall, and f1-score from classification_report
print(classification_report(y_test, sgdc_y_predict, target_names=['Benign', 'Malignant']))
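The scaler-plus-classifier sequence could also be written as a scikit-learn pipeline, which keeps the fit_transform/transform bookkeeping out of user code (a sketch on synthetic data, not part of the original program):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=33)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=33)

# fit() standardizes with fit_transform; predict()/score() reuse transform
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_tr, y_tr)
acc = pipe.score(X_te, y_te)
print('pipeline accuracy:', acc)
```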

Program output under Ubuntu 16.04, Python 3.6:

Accuracy of LR Classifier: 0.9883040935672515
             precision    recall  f1-score   support

     Benign       0.99      0.99      0.99       100
  Malignant       0.99      0.99      0.99        71

avg / total       0.99      0.99      0.99       171

Accuracy of SGD Classifier: 0.9824561403508771
             precision    recall  f1-score   support

     Benign       1.00      0.97      0.98       100
  Malignant       0.96      1.00      0.98        71

avg / total       0.98      0.98      0.98       171
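One detail worth knowing about the reports above: passing target_names=['Benign', 'Malignant'] works because classification_report lists classes in sorted label order, and in this dataset class 2 (benign) sorts before class 4 (malignant). A quick check with made-up labels:

```python
from sklearn.metrics import classification_report

y_true = [2, 2, 4, 2, 4]
y_pred = [2, 4, 4, 2, 4]
# Rows appear in sorted label order: 2 -> 'Benign', 4 -> 'Malignant'
report = classification_report(y_true, y_pred, target_names=['Benign', 'Malignant'])
print(report)
```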

Data download address

Corrections are welcome, for both the English and the code. Questions are also welcome; let's keep improving together.

This program is entirely the result of my own typing, character by character; please credit the source when reposting.
