
TensorLayer Study Notes: Multi-Layer Neural Networks

Part 1: Quick and Dirty, the Code First

import tensorflow as tf
import tensorlayer as tl

sess = tf.InteractiveSession()
# load the data
X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1,784))

# define the placeholders
x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')

# define the model
network = tl.layers.InputLayer(inputs=x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.5, name='dropout1')
network = tl.layers.DenseLayer(network, n_units=1024, act=tf.nn.relu, name='relu1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='dropout2')
network = tl.layers.DenseLayer(network, n_units=512, act=tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, name='dropout3')
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output_layer')

# define the loss function
y = network.outputs
cost = tl.cost.cross_entropy(y, y_, name='cost')
correct_prediction = tf.equal(tf.argmax(y, 1), y_)
acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
y_op = tf.argmax(tf.nn.softmax(y), 1)

# define the optimizer
train_param = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate=0.0001, use_locking=False).minimize(cost, var_list=train_param)

# tensorboard
acc_summ = tf.summary.scalar('acc', acc)
cost_summ = tf.summary.scalar('cost', cost)
summary = tf.summary.merge_all()
writer = tf.summary.FileWriter('./logs')
writer.add_graph(sess.graph)

# initialize the parameters
tl.layers.initialize_global_variables(sess)

# print model information
network.print_layers()
network.print_params()

# train the model
tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
             acc=acc, batch_size=512, n_epoch=100, print_freq=10,
             X_val=X_val, y_val=y_val, eval_train=False, tensorboard=True)

# evaluate on the test set
tl.utils.test(sess, network, acc, X_test, y_test, x, y_, batch_size=None, cost=cost)

# save the model
tl.files.save_npz(network.all_params, name='model.npz')
sess.close()

Part 2: Chew It Slowly, Step by Step

In terms of simplicity, TensorLayer sits somewhere between Keras and TensorFlow: it streamlines many of TensorFlow's tedious steps.

Step 1: As always, create a session

sess = tf.InteractiveSession()

Step 2: Load the data; MNIST is used as the example here

X_train, y_train, X_val, y_val, X_test, y_test = tl.files.load_mnist_dataset(shape=(-1,784))
The dataset is downloaded on the first run; subsequent runs load it from the local copy.
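
A minimal sanity-check sketch, assuming TensorLayer's usual 50000/10000/10000 train/validation/test split for MNIST (the split sizes are the library's defaults, not something this tutorial controls):

print(X_train.shape, y_train.shape)  # expected: (50000, 784) (50000,)
print(X_val.shape, y_val.shape)      # expected: (10000, 784) (10000,)
print(X_test.shape, y_test.shape)    # expected: (10000, 784) (10000,)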

Step 3: Define the placeholders, one for the network input and one for the target output

x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
y_ = tf.placeholder(tf.int64, shape=[None, ], name='y_')

Step 4: Build the model

network = tl.layers.InputLayer(inputs=x, name='input_layer')
network = tl.layers.DropoutLayer(network, keep=0.5, name='dropout1')
network = tl.layers.DenseLayer(network, n_units=1024, act=tf.nn.relu, name='relu1')
network = tl.layers.DropoutLayer(network, keep=0.5, name='dropout2')
network = tl.layers.DenseLayer(network, n_units=512, act=tf.nn.relu, name='relu2')
network = tl.layers.DropoutLayer(network, keep=0.5, name='dropout3')
network = tl.layers.DenseLayer(network, n_units=10, act=tf.identity, name='output_layer')
As you can see, the style is very similar to Keras: you just stack layers like building blocks. The network here consists of an input layer, two hidden layers with 1024 and 512 units, and a 10-unit output layer, where the 10 units correspond to the one-hot encoding of the digit classes. A Dropout layer with a keep probability of 0.5 is inserted between consecutive layers, and the hidden layers use the ReLU activation. Note that the output layer's activation is tf.identity, which is also DenseLayer's default activation; it can be understood as a linear activation, i.e. the output equals the input.
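
One detail worth knowing about DropoutLayer: in TensorLayer 1.x the keep probabilities are placeholders, collected in the dict network.all_drop, and they must be fed on every sess.run. The tl.utils.fit and tl.utils.test helpers used below handle this automatically; a minimal sketch of doing it by hand:

# training: enable dropout by feeding the configured keep probabilities
feed_train = {x: X_train[:64], y_: y_train[:64]}
feed_train.update(network.all_drop)                  # {keep_prob_placeholder: 0.5, ...}

# evaluation: disable dropout by forcing every keep probability to 1.0
feed_eval = {x: X_val, y_: y_val}
feed_eval.update(tl.utils.dict_to_one(network.all_drop))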

Step 5: Define the loss function

↓ get the network's output

y = network.outputs
↓ compute the cross-entropy loss; this API calls the TensorFlow function that applies softmax internally, so y should be raw logits
cost = tl.cost.cross_entropy(y, y_, name='cost')
↓ compare the predictions with the true labels, which yields a tensor of booleans
correct_prediction = tf.equal(tf.argmax(y, 1), y_)

↓ compute the accuracy

acc = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

↓ y is a vector of 10 logits per sample (not a hard one-hot vector); y_op gives the index of the predicted class

y_op = tf.argmax(tf.nn.softmax(y), 1)
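
Once the network is trained, y_op is the tensor you would run to classify new samples. A small inference sketch (dropout has to be switched off by feeding keep probabilities of 1.0, which tl.utils.dict_to_one does):

feed = {x: X_test[:5]}
feed.update(tl.utils.dict_to_one(network.all_drop))  # dropout off for inference
predictions = sess.run(y_op, feed_dict=feed)         # array of five predicted class indices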

Step 6: Define the optimizer

train_param = network.all_params
train_op = tf.train.AdamOptimizer(learning_rate=0.0001, use_locking=False).minimize(cost, var_list=train_param)
All of the network's parameters are optimized to minimize the cost, using the Adam optimizer.
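
Passing var_list is actually optional here: by default, minimize() optimizes every trainable variable in the graph, which in this script is exactly network.all_params. The explicit form matters when you want to train only a subset of the layers, e.g. (a sketch; the slice indices are illustrative, check network.all_params for the real order):

# fine-tune only the output layer, freezing the two hidden layers
head_params = network.all_params[-2:]   # output_layer/W and output_layer/b
finetune_op = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost, var_list=head_params)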

Step 7: Set up TensorBoard

acc_summ = tf.summary.scalar('acc', acc)
cost_summ = tf.summary.scalar('cost', cost)
summary = tf.summary.merge_all()
writer = tf.summary.FileWriter('./logs')
writer.add_graph(sess.graph)
Here the accuracy acc and the loss cost are monitored.
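
When tensorboard=True is passed to tl.utils.fit below, the library manages the summary writing itself. If you were training manually instead, the merged summary would be evaluated and written roughly like this (a sketch; the step counter and the validation feed are illustrative):

feed = {x: X_val, y_: y_val}
feed.update(tl.utils.dict_to_one(network.all_drop))  # evaluate with dropout off
s = sess.run(summary, feed_dict=feed)
writer.add_summary(s, global_step=step)              # step: your own training-step counter
writer.flush()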

Step 8: Initialize all parameters

tl.layers.initialize_global_variables(sess)
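
This helper is a thin wrapper; in plain TensorFlow it corresponds to:

sess.run(tf.global_variables_initializer())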

Step 9: Print the model's information

network.print_layers()
network.print_params()
This covers both the network-structure information and the parameter information.

Step 10: Train the model

tl.utils.fit(sess, network, train_op, cost, X_train, y_train, x, y_,
             acc=acc, batch_size=512, n_epoch=100, print_freq=10,
             X_val=X_val, y_val=y_val, eval_train=False, tensorboard=True)
Pass in the session sess, the network to train, the optimizer train_op, the loss cost, the training set, the placeholders, and the validation set. The mini-batch size is 512, training runs for 100 epochs, and training information is printed every 10 epochs. eval_train=False means the training set itself is not evaluated; to enable TensorBoard logging, set tensorboard=True.
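
Under the hood, fit walks the training set in shuffled mini-batches, feeds the dropout keep probabilities, and runs train_op; validation is done with keep probabilities of 1.0. A stripped-down sketch of the same loop, using tl.iterate.minibatches from TensorLayer 1.x:

for epoch in range(100):
    for X_batch, y_batch in tl.iterate.minibatches(X_train, y_train,
                                                   batch_size=512, shuffle=True):
        feed = {x: X_batch, y_: y_batch}
        feed.update(network.all_drop)    # dropout on during training
        sess.run(train_op, feed_dict=feed)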

Step 11: Evaluate on the test set

tl.utils.test(sess, network, acc, X_test, y_test, x, y_, batch_size=None, cost=cost)
With batch_size=None, the entire test set is evaluated in a single pass; dropout is disabled automatically during evaluation.

Step 12: Save the model and close the session

tl.files.save_npz(network.all_params, name='model.npz')
sess.close()
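
To use the model later, rebuild the same architecture in a fresh session and load the saved parameters back in. A sketch using the TensorLayer 1.x loading helpers:

params = tl.files.load_npz(name='model.npz')   # list of parameter arrays
tl.files.assign_params(sess, params, network)  # copy them into the rebuilt network
# or, equivalently, in one call:
# tl.files.load_and_assign_npz(sess=sess, name='model.npz', network=network)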

Part 3: Let It Fly

Run the script; the output is as follows:

Load or Download MNIST > data/mnist/
data/mnist/train-images-idx3-ubyte.gz
2017-12-11 21:30:58.096836: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\35\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
data/mnist/t10k-images-idx3-ubyte.gz
  [TL] InputLayer  input_layer: (?, 784)
  [TL] DropoutLayer dropout1: keep:0.500000 is_fix:False
  [TL] DenseLayer  relu1: 1024 relu
  [TL] DropoutLayer dropout2: keep:0.500000 is_fix:False
  [TL] DenseLayer  relu2: 512 relu
  [TL] DropoutLayer dropout3: keep:0.500000 is_fix:False
  [TL] DenseLayer  output_layer: 10 identity
  layer   0: dropout1/mul:0       (?, 784)           float32
  layer   1: relu1/Relu:0         (?, 1024)          float32
  layer   2: dropout2/mul:0       (?, 1024)          float32
  layer   3: relu2/Relu:0         (?, 512)           float32
  layer   4: dropout3/mul:0       (?, 512)           float32
  layer   5: output_layer/Identity:0 (?, 10)            float32
  param   0: relu1/W:0            (784, 1024)        float32_ref (mean: 2.0586548998835497e-05, median: -7.415843720082194e-05, std: 0.08802532404661179)   
  param   1: relu1/b:0            (1024,)            float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
  param   2: relu2/W:0            (1024, 512)        float32_ref (mean: -7.740945875411853e-05, median: -7.019848271738738e-05, std: 0.08793578296899796)   
  param   3: relu2/b:0            (512,)             float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
  param   4: output_layer/W:0     (512, 10)          float32_ref (mean: -0.0003667508135549724, median: 0.0003241676895413548, std: 0.08744712173938751)   
  param   5: output_layer/b:0     (10,)              float32_ref (mean: 0.0               , median: 0.0               , std: 0.0               )   
  num of params: 1333770
Setting up tensorboard ...
[!] logs/ exists ...
Param name  relu1/W:0
Param name  relu1/b:0
Param name  relu2/W:0
Param name  relu2/b:0
Param name  output_layer/W:0
Param name  output_layer/b:0
Finished! use $tensorboard --logdir=logs/ to start server
Start training the network ...
Epoch 1 of 100 took 28.844650s
   val loss: 0.782603
   val acc: 0.752262
Epoch 10 of 100 took 27.462571s
   val loss: 0.433058
   val acc: 0.898335

A logs folder is generated in the project directory, containing TensorBoard's events file. Open a cmd window in the project directory, run tensorboard --logdir=logs/, and open the URL it prints in a browser to monitor acc and cost in real time, as shown in the figure below (if the page does not open, try Chrome):