
Keras learning example (2): MNIST handwritten digit classification

Continuing from the previous note, this post works through the MNIST handwritten digit classification example. First, a problem you are likely to run into.

As the screenshot above shows, the source program follows Keras's mnist example, and the data is downloaded from url = https://s3.amazonaws.com/img-datasets/mnist.npz. That URL is blocked (behind the Great Firewall), so all MNIST-related examples get stuck at the data-download step.

So I downloaded the MNIST dataset ahead of time and modified the program to load the local file directly. Another blogger provides a link where the mnist.npz dataset can be downloaded:
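As a side note, if you would rather not change the code at all, a minimal alternative (assuming Keras 2.x and its default cache directory) is to copy the downloaded mnist.npz into ~/.keras/datasets/, which is where mnist.load_data() looks before attempting the download; the /path/to/mnist.npz below is a placeholder:

# Copy the pre-downloaded file into Keras's cache directory first, e.g.:
#   mkdir -p ~/.keras/datasets
#   cp /path/to/mnist.npz ~/.keras/datasets/mnist.npz

from keras.datasets import mnist

# 'path' is resolved relative to ~/.keras/datasets/, so the cached file is
# used and no network access is needed
(X_train, y_train), (X_test, y_test) = mnist.load_data(path='mnist.npz')
print(X_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)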

The program is as follows:

#-*- coding: UTF-8 -*-
"""
To know more or get code samples, please visit my website:
https://morvanzhou.github.io/tutorials/
Or search: 莫煩Python
Thank you for supporting!
"""

# please note, all tutorial code are running under python3.5.
# If you use the version like python2.7, please modify the code accordingly

# 5 - Classifier example

import numpy as np
np.random.seed(1337)  # for reproducibility
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.optimizers import RMSprop

# download the mnist to the path '~/.keras/datasets/' if it is the first time to be called
# X shape (60,000 28x28), y shape (10,000, )
#(X_train, y_train), (X_test, y_test) = mnist.load_data()


### Load the local MNIST dataset instead of downloading it
path = '/home/ren_dong/文件/mnist.npz'
f = np.load(path)
X_train, y_train = f['x_train'], f['y_train']
X_test, y_test = f['x_test'], f['y_test']
f.close()

# data pre-processing
X_train = X_train.reshape(X_train.shape[0], -1) / 255.   # normalize
X_test = X_test.reshape(X_test.shape[0], -1) / 255.      # normalize
y_train = np_utils.to_categorical(y_train, num_classes=10)
y_test = np_utils.to_categorical(y_test, num_classes=10)
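# Notes on the pre-processing above:
#   reshape(X.shape[0], -1) flattens each 28x28 image into a 784-vector,
#   and /255. scales the pixel values into [0, 1];
#   to_categorical one-hot encodes the integer labels, e.g. the digit 5
#   becomes [0, 0, 0, 0, 0, 1, 0, 0, 0, 0].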

# Another way to build your neural net

model = Sequential([
    Dense(32, input_dim=784),
    Activation('relu'),
    Dense(10),
    Activation('softmax'),
])
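# This is a 784 -> 32 -> 10 fully connected network:
#   Dense(32) holds 784*32 + 32 = 25,120 parameters,
#   Dense(10) holds 32*10 + 10 = 330 parameters;
# model.summary() prints the same layer-by-layer breakdown.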

# Another way to define your optimizer
rmsprop = RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)

# We add metrics to get more results you want to see
model.compile(optimizer=rmsprop,
              loss='categorical_crossentropy',
              metrics=['accuracy'])

print('Training ------------')
# Another way to train the model
model.fit(X_train, y_train, epochs=2, batch_size=32)

print('\nTesting ------------')
# Evaluate the model with the metrics we defined earlier
loss, accuracy = model.evaluate(X_test, y_test)

print('test loss: ', loss)
print('test accuracy: ', accuracy)

Output:


After just two epochs the training loss is 0.19 with accuracy 0.94, and on the test set the loss is 0.17 with accuracy 0.95, so even this short run learns a useful amount.
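To sanity-check the model beyond these aggregate metrics, here is a minimal sketch (assuming the variables from the program above are still in scope) that runs model.predict on a few test images and compares the predictions with the one-hot labels:

# Predict class probabilities for the first 5 test images;
# X_test is the flattened, normalized array from the program above
probs = model.predict(X_test[:5])
predicted = np.argmax(probs, axis=1)
true_labels = np.argmax(y_test[:5], axis=1)   # y_test is one-hot encoded
print('predicted:', predicted)
print('true labels:', true_labels)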

Reference blog: