TensorFlow Learning Column (6): Classifying the MNIST Dataset with a Convolutional Neural Network (CNN)
By 阿新 · Published: 2019-01-01
A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to units within a limited surrounding region (the receptive field). CNNs perform exceptionally well on large-scale image processing.
The structure of a CNN generally contains the following layers:
- Input layer: receives the input data
- Convolutional layer: extracts and maps features using convolution kernels
- Activation layer: convolution is still a linear operation, so a non-linear mapping must be added
- Pooling layer: downsamples the feature maps, sparsifying them and reducing the amount of computation
- Fully connected layer: usually placed at the tail of the CNN to re-fit the features and reduce the loss of feature information
- Output layer: produces the final result
The structure of a convolutional neural network is shown below: (figure: CNN architecture diagram)
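As a sanity check on the layer stack described above, here is a small plain-Python sketch (assuming the 28x28x1 MNIST input, 'SAME' padding, stride-1 convolutions, and 2x2 max pooling used in the code below) that traces the feature-map shapes through the network:

```python
import math

def conv_same(h, w, stride=1):
    # 'SAME' padding: output spatial size = ceil(input / stride)
    return math.ceil(h / stride), math.ceil(w / stride)

def pool(h, w, k=2):
    # k x k max pooling with stride k and 'SAME' padding
    return math.ceil(h / k), math.ceil(w / k)

h, w, c = 28, 28, 1
h, w = conv_same(h, w); c = 16   # conv1: 5x5 kernel, 16 filters -> 28x28x16
h, w = pool(h, w)                # pool1 -> 14x14x16
h, w = conv_same(h, w); c = 32   # conv2: 5x5 kernel, 32 filters -> 14x14x32
h, w = pool(h, w)                # pool2 -> 7x7x32
flat = h * w * c
print(h, w, c, flat)             # 7 7 32 1568
```

This is where the `7*7*32` in the `tf.reshape` flattening step comes from.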
We implement the convolutional neural network in two ways.

Method 1:
```python
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

tf.set_random_seed(1)
np.random.seed(1)

LR = 0.001
batch_size = 50

mnist = input_data.read_data_sets('./mnist', one_hot=True)
test_x = mnist.test.images[:2000]
test_y = mnist.test.labels[:2000]

x = tf.placeholder(tf.float32, [None, 784])
img = tf.reshape(x, [-1, 28, 28, 1])
y = tf.placeholder(tf.int32, [None, 10])

def add_cnn_layer(inputs, filters, strides, activation_function=None):
    # tf.nn.conv2d takes the filter as a 4-D tensor and explicit per-dimension strides
    x = tf.nn.conv2d(inputs, filters, [1, strides, strides, 1], padding='SAME')
    if activation_function is None:
        out = x
    else:
        out = activation_function(x)
    return out

def add_maxpooling_layer(inputs, k):
    out = tf.nn.max_pool(inputs, ksize=[1, k, k, 1],
                         strides=[1, k, k, 1], padding='SAME')
    return out

# build the CNN
w1 = tf.Variable(tf.random_normal([5, 5, 1, 16]))
w2 = tf.Variable(tf.random_normal([5, 5, 16, 32]))
conv1 = add_cnn_layer(img, w1, 1, tf.nn.relu)      # -> 28x28x16
pool1 = add_maxpooling_layer(conv1, 2)             # -> 14x14x16
conv2 = add_cnn_layer(pool1, w2, 1, tf.nn.relu)    # -> 14x14x32
pool2 = add_maxpooling_layer(conv2, 2)             # -> 7x7x32
flat = tf.reshape(pool2, [-1, 7 * 7 * 32])
output = tf.layers.dense(flat, 10)

loss = tf.losses.softmax_cross_entropy(onehot_labels=y, logits=output)
train = tf.train.AdamOptimizer(LR).minimize(loss)
accuracy = tf.metrics.accuracy(
    labels=tf.argmax(y, axis=1), predictions=tf.argmax(output, axis=1))[1]

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())

for step in range(5000):
    b_x, b_y = mnist.train.next_batch(batch_size)
    _, loss_ = sess.run([train, loss], feed_dict={x: b_x, y: b_y})
    if step % 50 == 0:
        accuracy_ = sess.run(accuracy, feed_dict={x: test_x, y: test_y})
        print('train loss: %.4f' % loss_, '| test accuracy: %.4f' % accuracy_)
```
Method 2:
```python
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data

tf.set_random_seed(1)
np.random.seed(1)

LR = 0.001
batch_size = 50

mnist = input_data.read_data_sets('./mnist', one_hot=True)
test_x = mnist.test.images[:2000]
test_y = mnist.test.labels[:2000]

x = tf.placeholder(tf.float32, [None, 784])
img = tf.reshape(x, [-1, 28, 28, 1])
y = tf.placeholder(tf.int32, [None, 10])

def add_cnn_layer(inputs, filters, k, stride, activation_function=None):
    # tf.layers.conv2d takes the number of filters as an integer
    # and creates the kernel variable internally
    return tf.layers.conv2d(
        inputs=inputs,
        filters=filters,
        kernel_size=k,
        strides=stride,
        padding='same',
        activation=activation_function)

def add_maxpooling_layer(inputs, k):
    out = tf.nn.max_pool(inputs, ksize=[1, k, k, 1],
                         strides=[1, k, k, 1], padding='SAME')
    return out

# build the CNN
conv1 = add_cnn_layer(img, 16, 5, 1, tf.nn.relu)   # -> 28x28x16
pool1 = add_maxpooling_layer(conv1, 2)             # -> 14x14x16
conv2 = add_cnn_layer(pool1, 32, 5, 1, tf.nn.relu) # -> 14x14x32
pool2 = add_maxpooling_layer(conv2, 2)             # -> 7x7x32
flat = tf.reshape(pool2, [-1, 7 * 7 * 32])
output = tf.layers.dense(flat, 10)

loss = tf.losses.softmax_cross_entropy(onehot_labels=y, logits=output)
train = tf.train.AdamOptimizer(LR).minimize(loss)
accuracy = tf.metrics.accuracy(
    labels=tf.argmax(y, axis=1), predictions=tf.argmax(output, axis=1))[1]

sess = tf.Session()
sess.run(tf.global_variables_initializer())
sess.run(tf.local_variables_initializer())

for step in range(5000):
    b_x, b_y = mnist.train.next_batch(batch_size)
    _, loss_ = sess.run([train, loss], feed_dict={x: b_x, y: b_y})
    if step % 50 == 0:
        accuracy_ = sess.run(accuracy, feed_dict={x: test_x, y: test_y})
        print('train loss: %.4f' % loss_, '| test accuracy: %.4f' % accuracy_)
```
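Both methods downsample with `tf.nn.max_pool`. To make that step concrete, here is a minimal NumPy sketch of 2x2, stride-2 max pooling over a single [H, W, C] feature map (an illustration only, not TensorFlow's implementation; odd sizes are handled by padding with -inf, mimicking 'SAME' padding):

```python
import numpy as np

def max_pool_2x2(x):
    # x: [H, W, C]; 2x2 window, stride 2, 'SAME'-style padding
    H, W, C = x.shape
    ph, pw = (-H) % 2, (-W) % 2          # pad odd dimensions up to even
    x = np.pad(x, ((0, ph), (0, pw), (0, 0)), constant_values=-np.inf)
    H2, W2 = x.shape[0] // 2, x.shape[1] // 2
    # group each 2x2 block into its own axes, then take the max over them
    return x.reshape(H2, 2, W2, 2, C).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4, 1)
print(max_pool_2x2(x)[:, :, 0])
# [[ 5.  7.]
#  [13. 15.]]
```

Each output value is the maximum of a 2x2 block, which is why the spatial size halves (28 -> 14 -> 7) while the channel count is preserved.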
Comparing the two methods, the difference lies in the function used to define the convolutional layer. Method 1 uses:
```python
tf.nn.conv2d(input, filter, strides, padding,
             use_cudnn_on_gpu=None, data_format=None, name=None)
```
to define the convolution, while Method 2 uses:
```python
tf.layers.conv2d(inputs, filters, kernel_size, strides=(1, 1),
                 padding='valid', data_format='channels_last',
                 dilation_rate=(1, 1), activation=None, use_bias=True,
                 kernel_initializer=None,
                 bias_initializer=init_ops.zeros_initializer(),
                 kernel_regularizer=None, bias_regularizer=None,
                 activity_regularizer=None, trainable=True,
                 name=None, reuse=None)
```
to define the convolutional layer.
As far as the CNN is concerned, the two implementations compute exactly the same thing: tf.layers.conv2d is a higher-level wrapper that uses tf.nn.conv2d as its backend.
Note that the `filter` argument of tf.nn.conv2d is a four-dimensional tensor whose shape must be:

[filter_height, filter_width, in_channels, out_channels]

whereas the `filters` argument of tf.layers.conv2d is an integer: the dimensionality of the output space, i.e. the number of output channels.
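To illustrate the [filter_height, filter_width, in_channels, out_channels] layout, here is a naive NumPy sketch of a stride-1 'SAME' convolution over a single image (a reference illustration only; the real tf.nn.conv2d is batched and heavily optimized):

```python
import numpy as np

def conv2d_same(img, filt):
    # img: [H, W, in_channels]; filt: [fh, fw, in_channels, out_channels]
    fh, fw, cin, cout = filt.shape
    H, W, _ = img.shape
    ph, pw = fh // 2, fw // 2                       # 'SAME' padding for odd kernels
    padded = np.pad(img, ((ph, ph), (pw, pw), (0, 0)))
    out = np.zeros((H, W, cout))
    for i in range(H):
        for j in range(W):
            patch = padded[i:i + fh, j:j + fw, :]   # [fh, fw, cin]
            # contract the patch against all out_channels at once
            out[i, j] = np.tensordot(patch, filt, axes=([0, 1, 2], [0, 1, 2]))
    return out

rng = np.random.default_rng(0)
img = rng.standard_normal((28, 28, 1))
filt = rng.standard_normal((5, 5, 1, 16))   # [filter_height, filter_width, in, out]
out = conv2d_same(img, filt)
print(out.shape)                            # (28, 28, 16)
```

The last filter dimension becomes the output channel count, which is exactly the integer you pass to tf.layers.conv2d as `filters`.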
When to use which:
- tf.layers.conv2d has a rich set of parameters (initializers, regularizers, etc.) and is generally used when training a model from scratch.
- tf.nn.conv2d takes the filter tensor explicitly, so it is generally used when loading a downloaded, pre-trained model whose weights you supply yourself.
The experimental results are shown below: (figure: training log)
Because the machine ran without GPU acceleration, training was slow, so I stopped once the accuracy reached 94.47%; with further training there is still room for the accuracy to improve.