TensorFlow 2 Study Notes 10: Keras Practice (iris/mnist/fashion)

Iris classification

import tensorflow as tf
from sklearn import datasets
import numpy as np
# Load the data
x_train = datasets.load_iris().data
y_train = datasets.load_iris().target

# Shuffle features and labels with the same seed so pairs stay aligned
np.random.seed(1024)
np.random.shuffle(x_train)
np.random.seed(1024)
np.random.shuffle(y_train)
# Build the network
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(3,activation='softmax',kernel_regularizer=tf.keras.regularizers.l2())
])
# Configure training
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['sparse_categorical_accuracy']
)
# Train
model.fit(x_train,y_train,batch_size=32,epochs=500,validation_split=0.2,validation_freq=20)
# Show network structure and parameter counts
model.summary()
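A side note on the shuffling above: reseeding `np.random` before each `shuffle` call is what keeps `x_train` and `y_train` aligned. An equivalent and arguably clearer pattern is a single permutation index; a minimal sketch with toy arrays (names are illustrative):

```python
import numpy as np

# Toy stand-ins for features and labels: row k of x pairs with label y[k]
x = np.arange(12).reshape(6, 2)   # row k is [2k, 2k+1]
y = np.arange(6)

# One permutation index shuffles both arrays identically,
# so no reseeding between shuffles is needed
rng = np.random.default_rng(1024)
idx = rng.permutation(len(x))
x_shuf, y_shuf = x[idx], y[idx]

# Rows are still paired: x_shuf[i] is [2*y_shuf[i], 2*y_shuf[i] + 1]
print(all(x_shuf[i, 0] == 2 * y_shuf[i] for i in range(6)))  # True
```

The same `x[idx], y[idx]` fancy-indexing trick works directly on the iris arrays, since `load_iris()` returns NumPy arrays.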

Run results

…curacy: 0.8667 - val_loss: 0.2987 - val_sparse_categorical_accuracy: 1.0000
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense (Dense)                multiple                  15        
=================================================================
Total params: 15
Trainable params: 15
Non-trainable params: 0
_________________________________________________________________
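The 15 parameters in the summary check out: a Dense layer holds inputs × units weights plus one bias per unit. A quick sanity check (the helper name is mine):

```python
# Dense layer parameters = input_dim * units (weights) + units (biases)
def dense_params(input_dim, units):
    return input_dim * units + units

# Iris: 4 features feeding a 3-unit softmax layer
print(dense_params(4, 3))  # 15
```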

Handwritten digit recognition (MNIST)

import tensorflow as tf
# Load the dataset
mnist = tf.keras.datasets.mnist
(x_train ,y_train),(x_test,y_test) = mnist.load_data()
x_train = tf.cast(x_train,dtype=tf.float32)
y_train = tf.cast(y_train,dtype=tf.float32)
x_test = tf.cast(x_test,dtype=tf.float32)
y_test = tf.cast(y_test,dtype=tf.float32)
# Build the network
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128,activation='relu',kernel_regularizer=tf.keras.regularizers.l2()),
    tf.keras.layers.Dense(10,activation='softmax',kernel_regularizer=tf.keras.regularizers.l2())
])
# Configure training
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['sparse_categorical_accuracy']
)
# Train, validating on the test set each epoch
model.fit(x_train,y_train,batch_size=32,epochs=50,validation_data=(x_test,y_test),validation_freq=1)
# Print network structure and parameter info
model.summary()
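One caveat about the preprocessing above: the pixels stay in the raw 0–255 range. Scaling them to [0, 1] before training usually speeds up convergence and may lift the final accuracy. A minimal sketch of the idea on a fake batch (array names are illustrative):

```python
import numpy as np

# Fake batch of two 28x28 "images" with raw 0-255 pixel values
batch = np.random.randint(0, 256, size=(2, 28, 28)).astype("float32")

# Scale into [0, 1] before feeding the network
scaled = batch / 255.0
print(scaled.min() >= 0.0 and scaled.max() <= 1.0)  # True
```

With real data this is just `x_train / 255.0` after the cast to float32; the labels can stay as integers for the sparse loss.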

Run results

…tegorical_accuracy: 0.9393
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            multiple                  0         
_________________________________________________________________
dense (Dense)                multiple                  100480    
_________________________________________________________________
dense_1 (Dense)              multiple                  1290      
=================================================================
Total params: 101,770
Trainable params: 101,770
Non-trainable params: 0
________________________________________________________________
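Since the last layer already applies softmax, `from_logits=False` is the right setting: SparseCategoricalCrossentropy then just takes the negative log of the probability assigned to the true integer label. A plain-NumPy illustration of that per-sample loss:

```python
import numpy as np

# Model output (already softmax probabilities) for one sample
probs = np.array([0.1, 0.7, 0.2])
true_label = 1  # integer label, hence the "sparse" variant

# Sparse categorical crossentropy for this sample: -log(p[true])
loss = -np.log(probs[true_label])
print(round(loss, 4))  # 0.3567
```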

Fashion-MNIST clothing classification

import tensorflow as tf
# Load the dataset
fashion = tf.keras.datasets.fashion_mnist
(x_train,y_train),(x_test,y_test) = fashion.load_data()
x_train = tf.cast(x_train,dtype=tf.float32)
y_train = tf.cast(y_train,dtype=tf.float32)
x_test = tf.cast(x_test,dtype=tf.float32)
y_test = tf.cast(y_test,dtype=tf.float32)
# Build the network
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(300,activation='relu',kernel_regularizer=tf.keras.regularizers.l2()),
    tf.keras.layers.Dense(100,activation='relu',kernel_regularizer=tf.keras.regularizers.l2()),
    tf.keras.layers.Dense(20,activation='relu',kernel_regularizer=tf.keras.regularizers.l2()),
    tf.keras.layers.Dense(10,activation='softmax',kernel_regularizer=tf.keras.regularizers.l2())
])
# Configure training
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
    metrics=['sparse_categorical_accuracy']
)
# Train, validating on the test set each epoch
model.fit(x_train,y_train,batch_size=32,epochs=50,validation_data=(x_test,y_test),validation_freq=1)
# Print network structure and parameter info
model.summary()

Run results

…categorical_accuracy: 0.8264
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            multiple                  0         
_________________________________________________________________
dense (Dense)                multiple                  235500    
_________________________________________________________________
dense_1 (Dense)              multiple                  30100     
_________________________________________________________________
dense_2 (Dense)              multiple                  2020      
_________________________________________________________________
dense_3 (Dense)              multiple                  210       
=================================================================
Total params: 267,830
Trainable params: 267,830
Non-trainable params: 0
_________________________________________________________________
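The summary totals can be reproduced layer by layer with the same weights-plus-biases rule, chaining the layer widths of this network:

```python
# Layer widths: flattened 28x28 input, then the four Dense layers above
sizes = [28 * 28, 300, 100, 20, 10]

# Per-layer parameters: in_dim * units (weights) + units (biases)
layer_params = [a * b + b for a, b in zip(sizes, sizes[1:])]
print(layer_params)        # [235500, 30100, 2020, 210]
print(sum(layer_params))   # 267830
```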

The accuracy here is underwhelming: stacking more fully connected layers brings only limited gains, so dense layers alone are not really up to this task.
