
Basic Usage of Keras (3): Building a Neural Network

Author: Tyan
Blog: noahsnail.com  |  CSDN  |  簡書

This article introduces some basic usage of Keras, using a simple convolutional network trained on MNIST as the example.

  • Demo
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Activation, Conv2D, MaxPooling2D, Flatten
from keras.optimizers import Adam

# Load the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Preprocess the dataset: add a channel dimension (channels-first layout)
X_train = X_train.reshape(-1, 1, 28, 28)
X_test = X_test.reshape(-1, 1, 28, 28)

# Convert the labels to one-hot vectors
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)

# Build the neural network
model = Sequential()

# Convolutional layer 1
# data_format = 'channels_first' keeps the (channels, rows, cols) layout
# consistent with the reshape above, regardless of the keras.json setting
model.add(Conv2D(32, kernel_size = (5, 5), strides = (1, 1), padding = 'same',
                 activation = 'relu', input_shape = (1, 28, 28),
                 data_format = 'channels_first'))

# Pooling layer 1
model.add(MaxPooling2D(pool_size = (2, 2), strides = (1, 1), padding = 'same',
                       data_format = 'channels_first'))

# Convolutional layer 2
model.add(Conv2D(64, kernel_size = (5, 5), strides = (1, 1), padding = 'same',
                 activation = 'relu', data_format = 'channels_first'))

# Pooling layer 2
model.add(MaxPooling2D(pool_size = (2, 2), strides = (1, 1), padding = 'same',
                       data_format = 'channels_first'))

# Fully connected layer 1
model.add(Flatten())
model.add(Dense(1024))
model.add(Activation('relu'))

# Fully connected layer 2
model.add(Dense(10))
model.add(Activation('softmax'))

# Choose and configure the optimizer
adam = Adam(lr = 1e-4)

# Choose the loss function, optimizer and metric
model.compile(optimizer = adam, loss = 'categorical_crossentropy', metrics = ['accuracy'])

# Train the model
model.fit(X_train, y_train, epochs = 2, batch_size = 32)

# Evaluate the model
loss, accuracy = model.evaluate(X_test, y_test)
print('')
print('loss: ', loss)
print('accuracy: ', accuracy)
  • Results
Using TensorFlow backend.
Epoch 1/2
60000/60000 [==============================] - 55s - loss: 0.4141 - acc: 0.9234
Epoch 2/2
60000/60000 [==============================] - 56s - loss: 0.0743 - acc: 0.9770
 9920/10000 [============================>.] - ETA: 0s
loss:  0.103529265788
accuracy:  0.9711

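As a possible follow-up (not shown in the original post), the trained model can be saved to disk and reused for prediction. The snippet below is a minimal sketch that assumes the `model`, `X_test` and `y_test` objects from the demo above; the file name `mnist_cnn.h5` is a placeholder.

import numpy as np
from keras.models import load_model

# Save the architecture, weights and optimizer state to an HDF5 file
# ('mnist_cnn.h5' is a placeholder name)
model.save('mnist_cnn.h5')

# The model can later be restored with load_model
restored = load_model('mnist_cnn.h5')

# Predict class probabilities for the first five test images and take
# the argmax to recover the predicted digits
probs = restored.predict(X_test[:5])
print('predicted digits: ', np.argmax(probs, axis = 1))
print('true digits:      ', np.argmax(y_test[:5], axis = 1))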