Getting Started with tf.keras (1): Basic Classification (the Fashion MNIST Dataset)

Basic Classification (Fashion MNIST)

This guide uses tf.keras, a high-level API for building and training models in TensorFlow.

It works with the Fashion MNIST dataset, which contains 70,000 grayscale images across 10 categories. The image below shows individual garments at low resolution (28x28 pixels):

[Figure: Fashion MNIST sprite]
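For reference, keras.datasets ships a built-in loader for this dataset. A minimal sketch of using it (the complete code at the end of this post loads from local copies instead):

from tensorflow import keras

# Downloads the data on first use and returns NumPy arrays
# already split into training and test sets.
(train_images, train_labels), (test_images, test_labels) = \
    keras.datasets.fashion_mnist.load_data()

print(train_images.shape)  # (60000, 28, 28)
print(test_images.shape)   # (10000, 28, 28)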

Overview of the main APIs

keras.datasets: built-in datasets that ship with Keras

tf.nn.relu: the ReLU activation function

tf.nn.softmax: the softmax activation function

tf.train.AdamOptimizer(): the Adam optimization algorithm (TF 1.x API; in TF 2 this is keras.optimizers.Adam)

keras.layers.Flatten: flattens the input, e.g. a 28x28 image into a 784-element vector

keras.layers.Dense: a fully connected layer

model = keras.Sequential([])

model.compile(optimizer = tf.train.AdamOptimizer(),
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])

  • The difference between sparse_categorical_crossentropy and categorical_crossentropy here:

    If your labels are one-hot encoded, use categorical_crossentropy.

    If your labels are integers, use sparse_categorical_crossentropy (see the sketch below).
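A minimal sketch of the two label formats, assuming the 10 Fashion MNIST classes; keras.utils.to_categorical converts integer labels to one-hot vectors:

import numpy as np
from tensorflow import keras

labels = np.array([9, 0, 3])   # integer labels -> sparse_categorical_crossentropy

# One-hot encoding of the same labels -> categorical_crossentropy
one_hot = keras.utils.to_categorical(labels, num_classes=10)
print(one_hot[0])              # [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]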

model.fit(train_images, train_labels, epochs = 5)

model.evaluate(test_images, test_labels)

img = np.expand_dims(img, 0)

  • Usage of np.expand_dims:
>>> x = np.array([1,2])
>>> x.shape
(2,)


>>> y = np.expand_dims(x, axis=0)
>>> y
array([[1, 2]])
>>> y.shape
(1, 2)


>>> y = np.expand_dims(x, axis=1)  # Equivalent to x[:,newaxis]
>>> y
array([[1],
       [2]])
>>> y.shape
(2, 1)

predictions_single = model.predict(img)

The model can make predictions on a batch, or collection, of samples at once. So even though we are using a single image, we still need to add it to a batch first.
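predictions_single then has shape (1, 10), one row of class probabilities per image in the batch. A small follow-up sketch (reusing class_names from the full code below) to read off the predicted class:

import numpy as np

# Take the only row of the batch and pick the most probable class.
predicted_class = np.argmax(predictions_single[0])
print(class_names[predicted_class])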

Cross entropy

The loss function most commonly used with the softmax function is cross entropy.

How should we understand this function?

First, let's look at the concept of information entropy, which measures the degree of "surprise" in an outcome. It is computed as:

$$H(x) = -\sum_{i=1}^{n} p(x_i)\log_2 p(x_i), \quad i = 1, 2, \cdots, n$$

To make this concrete: suppose China and the US have played 64 table tennis matches, of which China won 63. The information content of a Chinese win this time is $-\log_2\frac{63}{64} \approx 0.023$, while that of a US win is $-\log_2\frac{1}{64} = 6$. The entropy of the match outcome is therefore:

$$0.023 \times \frac{63}{64} + 6 \times \frac{1}{64} \approx 0.1164$$

This entropy is quite small, meaning the outcome is fairly certain: the probability of a surprising result is low!
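A quick numeric check of this example (a sketch; the 0.1164 above comes from rounding the first term to 0.023):

from math import log2

p_china, p_us = 63 / 64, 1 / 64
info_china = -log2(p_china)   # information content of a Chinese win, ≈ 0.023 bits
info_us = -log2(p_us)         # information content of a US win, = 6 bits

# Entropy: the probability-weighted average information content.
entropy = p_china * info_china + p_us * info_us
print(entropy)                # ≈ 0.116 bits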

… (the remaining derivation is to be added later)

Complete code

load_data.py

from tensorflow.python.keras.utils import get_file
import gzip
import numpy as np

def load_datas():
    # Local mirror of the four Fashion MNIST archives (gzipped IDX files).
    base = r"file:///C:/Users/your_username/Desktop/test_tensor/data/fashion-mnist/"
    files = [
        'train-labels-idx1-ubyte.gz', 'train-images-idx3-ubyte.gz',
        't10k-labels-idx1-ubyte.gz', 't10k-images-idx3-ubyte.gz'
    ]
    paths = []
    for fname in files:
        # get_file caches each archive locally and returns its path.
        paths.append(get_file(fname, origin=base + fname))

    # Labels: skip the 8-byte IDX header.
    with gzip.open(paths[0], 'rb') as lbpath:
        y_train = np.frombuffer(lbpath.read(), np.uint8, offset=8)

    # Images: skip the 16-byte IDX header, then reshape to (N, 28, 28).
    with gzip.open(paths[1], 'rb') as imgpath:
        x_train = np.frombuffer(
            imgpath.read(), np.uint8, offset=16).reshape(len(y_train), 28, 28)

    with gzip.open(paths[2], 'rb') as lbpath:
        y_test = np.frombuffer(lbpath.read(), np.uint8, offset=8)

    with gzip.open(paths[3], 'rb') as imgpath:
        x_test = np.frombuffer(
            imgpath.read(), np.uint8, offset=16).reshape(len(y_test), 28, 28)

    return (x_train, y_train), (x_test, y_test)

基本分類.py

import tensorflow as tf 
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import load_data

# keras.datasets.fashion_mnist.load_data() could be used instead;
# here we load from the local copies via load_data.py.
(train_images, train_labels), (test_images, test_labels) = load_data.load_datas()

# Scale pixel values from [0, 255] down to [0, 1].
train_images = train_images / 255.0
test_images = test_images / 255.0
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# print(train_images.shape)

# plt.figure()
# plt.imshow(train_images[0])
# plt.colorbar()
# plt.grid(False)
# plt.figure(figsize=(10,10))
# for i in range(25):
#     plt.subplot(5,5,i+1)
#     plt.xticks([])
#     plt.yticks([])
#     plt.grid(False)
#     plt.imshow(train_images[i], cmap=plt.cm.binary)
#     plt.xlabel(class_names[train_labels[i]])
# plt.show()


model = keras.Sequential([
    keras.layers.Flatten(input_shape = (28, 28)),        # 28x28 image -> 784-vector
    keras.layers.Dense(100, activation = tf.nn.relu),    # hidden layer with ReLU
    keras.layers.Dense(10, activation = tf.nn.softmax)   # one probability per class
])

model.compile(optimizer = tf.train.AdamOptimizer(),
              loss = 'sparse_categorical_crossentropy',
              metrics = ['accuracy'])

model.fit(train_images, train_labels, epochs = 5)

test_loss, test_acc = model.evaluate(test_images, test_labels)

print('\nTest accuracy:', test_acc)

# Predict on the whole test set; each row is a 10-way probability vector.
predictions = model.predict(test_images)
print(predictions[0], np.argmax(predictions[0]), test_labels[0])

img = test_images[0]
print(img.shape)    # (28, 28)

# Add a batch dimension: the model expects a batch of images.
img = np.expand_dims(img, 0)
print(img.shape)    # (1, 28, 28)

predictions_single = model.predict(img)
print(predictions_single)