The Iris Dataset and Data Cleaning

1. Background

The Iris dataset is a classic dataset originating in the 1930s, and it is the progenitor of classification by statistical methods. As early as 1936, Fisher, a pioneer of pattern recognition, used it in the paper The use of multiple measurements in taxonomic problems (which is still frequently cited today).
The dataset contains 3 iris species with 50 samples per species. One species is linearly separable from the other two, while the other two are not linearly separable from each other.
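As a quick illustration of that separability claim, the following snippet (a minimal sketch using scikit-learn's bundled copy of the dataset; the 2.5 cm threshold is chosen by inspection, not taken from the original paper) checks that a single petal-length cutoff already isolates Iris Setosa perfectly:

import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()
petal_length = iris.data[:, 2]   # third feature column: petal length in cm
is_setosa = iris.target == 0     # class 0 is Iris Setosa

# Setosa petal lengths all fall below 2.5 cm and the other two species all
# lie above it, so this one threshold separates Setosa with 100% accuracy.
print(np.array_equal(petal_length < 2.5, is_setosa))   # prints True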

2. Data Description

The dataset has 150 rows, one sample per row. Each sample has 5 fields:

  1. Sepal length (cm)
  2. Sepal width (cm)
  3. Petal length (cm)
  4. Petal width (cm)
  5. Class (one of 3): Iris Setosa / Iris Versicolour / Iris Virginica

For example:

5.1,3.5,1.4,0.2,Iris-setosa
4.9,3.0,1.4,0.2,Iris-setosa
4.7,3.2,1.3,0.2,Iris-setosa
4.6,3.1,1.5,0.2,Iris-setosa
5.0,3.6,1.4,0.2,Iris-setosa
5.4,3.9,1.7,0.4,Iris-setosa
4.6,3.4,1.4,0.3,Iris-setosa
5.0,3.4,1.5,0.2,Iris-setosa
4.4,2.9,1.4,0.2,Iris-setosa
4.9,3.1,1.5,0.1,Iris-setosa
......
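A quick sanity check of that structure with pandas (a minimal sketch; it assumes the iris.data file has been downloaded into the current working directory):

import pandas as pd

cols = ['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'type']
df = pd.read_csv('iris.data', header=None, names=cols)

print(df.shape)                   # (150, 5): 150 rows, 5 fields
print(df['type'].value_counts())  # 50 samples in each of the 3 classes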

3. Data-Cleaning Tools

  1. Language: Python
  2. Libraries: pandas, numpy, sklearn, matplotlib
# -*- coding:utf-8 -*-

import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, SelectPercentile, chi2
from sklearn.linear_model import LogisticRegressionCV
from sklearn import metrics
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.manifold import TSNE
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches


def extend(a, b):
    # Widen the interval [a, b] by 5% on each side, for plot margins.
    return 1.05*a-0.05*b, 1.05*b-0.05*a


if __name__ == '__main__':
    stype = 'pca'   # 'pca': PCA projection; any other value: chi2 feature selection
    pd.set_option('display.width', 200)
    data = pd.read_csv('/Users/admin/PycharmProjects/TF_tutorial/six/iris.data', header=None)
    columns = np.array(['sepal_length', 'sepal_width', 'petal_length', 'petal_width', 'type'])
    data.rename(columns=dict(zip(np.arange(5), columns)), inplace=True)
    data['type'] = pd.Categorical(data['type']).codes   # encode the class labels as integers 0/1/2
    print(data.head(5))
    x = data[columns[:-1]]
    y = data[columns[-1]]

    if stype == 'pca':
        pca = PCA(n_components=2, whiten=True, random_state=0)
        x = pca.fit_transform(x)
        print('Variance of each component:', pca.explained_variance_)
        print('Explained variance ratio:', pca.explained_variance_ratio_)
        x1_label, x2_label = 'Component 1', 'Component 2'
        title = 'PCA Projection of the Iris Data'
    else:
        fs = SelectKBest(chi2, k=2)   # chi2 requires non-negative features, which holds here
        # fs = SelectPercentile(chi2, percentile=60)
        fs.fit(x, y)
        idx = fs.get_support(indices=True)
        print('fs.get_support() = ', idx)
        x = x.iloc[:, idx]  # select the chosen columns by position (integer indices)
        x = x.values        # convert the DataFrame to an ndarray for convenience below
        x1_label, x2_label = columns[idx]
        title = 'Feature Selection on the Iris Data'
    print(x[:5])
    cm_light = mpl.colors.ListedColormap(['#77E0A0', '#FF8080', '#A0A0FF'])
    cm_dark = mpl.colors.ListedColormap(['g', 'r', 'b'])
    plt.figure(facecolor='w')
    plt.scatter(x[:, 0], x[:, 1], s=30, c=y, marker='o', cmap=cm_dark)
    plt.grid(True, ls=':', color='k')   # the 'b=' keyword was removed in newer matplotlib
    plt.xlabel(x1_label, fontsize=12)
    plt.ylabel(x2_label, fontsize=12)
    plt.title(title, fontsize=15)
    # plt.savefig('1.png')
    plt.show()

    x, x_test, y, y_test = train_test_split(x, y, train_size=0.7)   # 70/30 random split
    model = Pipeline([
        # include_bias=True adds the constant column, so the intercept is disabled below
        ('poly', PolynomialFeatures(degree=2, include_bias=True)),
        ('lr', LogisticRegressionCV(Cs=np.logspace(-3, 4, 8), cv=5, fit_intercept=False))
    ])
    model.fit(x, y)
    print('Best C (one per class):', model.named_steps['lr'].C_)
    y_hat = model.predict(x)
    print('Training-set accuracy:', metrics.accuracy_score(y, y_hat))
    y_test_hat = model.predict(x_test)
    print('Test-set accuracy:', metrics.accuracy_score(y_test, y_test_hat))

    N, M = 500, 500     # number of grid samples along each axis
    x1_min, x1_max = extend(x[:, 0].min(), x[:, 0].max())   # range of column 0
    x2_min, x2_max = extend(x[:, 1].min(), x[:, 1].max())   # range of column 1
    t1 = np.linspace(x1_min, x1_max, N)
    t2 = np.linspace(x2_min, x2_max, M)
    x1, x2 = np.meshgrid(t1, t2)                    # build the grid of sample points
    x_show = np.stack((x1.flat, x2.flat), axis=1)   # grid points as (N*M, 2) inputs
    y_hat = model.predict(x_show)    # predicted class for every grid point
    y_hat = y_hat.reshape(x1.shape)  # reshape to match the grid
    plt.figure(facecolor='w')
    plt.pcolormesh(x1, x2, y_hat, cmap=cm_light)  # shade the predicted regions
    plt.scatter(x[:, 0], x[:, 1], s=30, c=y, edgecolors='k', cmap=cm_dark)  # overlay the training samples
    plt.xlabel(x1_label, fontsize=12)
    plt.ylabel(x2_label, fontsize=12)
    plt.xlim(x1_min, x1_max)
    plt.ylim(x2_min, x2_max)
    plt.grid(True, ls=':', color='k')
    # example of drawing extra shapes (disabled)
    # a = mpl.patches.Wedge(((x1_min+x1_max)/2, (x2_min+x2_max)/2), 1.5, 0, 360, width=0.5, alpha=0.5, color='r')
    # plt.gca().add_patch(a)
    patches = [mpatches.Patch(color='#77E0A0', label='Iris-setosa'),
               mpatches.Patch(color='#FF8080', label='Iris-versicolor'),
               mpatches.Patch(color='#A0A0FF', label='Iris-virginica')]
    plt.legend(handles=patches, fancybox=True, framealpha=0.8, loc='lower right')
    plt.title('Logistic Regression on the Iris Data', fontsize=15)
    plt.show()
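Incidentally, the script imports TSNE but never uses it; it was presumably intended as a third visualization option. A minimal standalone sketch of what that could look like (using scikit-learn's bundled copy of the dataset; note that TSNE only provides fit_transform, so unlike PCA it cannot project the fresh grid points needed for the decision-region plot):

import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE

iris = load_iris()
# Embed the 4-D measurements in 2-D; t-SNE preserves local neighborhoods.
x2d = TSNE(n_components=2, random_state=0).fit_transform(iris.data)

plt.scatter(x2d[:, 0], x2d[:, 1], s=30, c=iris.target, cmap='brg')
plt.title('t-SNE Embedding of the Iris Data')
plt.show()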

4. Results

Running the script prints the diagnostics for the chosen branch (the explained variance for PCA, or the selected feature indices for chi2), the best regularization strength C found by cross-validation, and the training- and test-set accuracies. It also produces two figures: the 2-D scatter plot of the projected or selected features, and the decision regions of the logistic-regression classifier with the samples overlaid.
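To reproduce the feature-selection variant of the first figure, change the switch at the top of the script; the branch only tests equality with 'pca', so any other value works, e.g.:

stype = 'chi2'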
最后編輯于
?著作權歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者
【社區(qū)內(nèi)容提示】社區(qū)部分內(nèi)容疑似由AI輔助生成,瀏覽時請結合常識與多方信息審慎甄別。
平臺聲明:文章內(nèi)容(如有圖片或視頻亦包括在內(nèi))由作者上傳并發(fā)布,文章內(nèi)容僅代表作者本人觀點,簡書系信息發(fā)布平臺,僅提供信息存儲服務。

相關閱讀更多精彩內(nèi)容

友情鏈接更多精彩內(nèi)容