
TensorFlow: Implementing Softmax Regression (Saving and Loading the Model)


# -*- coding: utf-8 -*-
"""
Created on Thu Oct 18 18:02:26 2018

@author: zhen
"""

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# mn.SOURCE_URL = "http://yann.lecun.com/exdb/mnist/"
my_mnist = input_data.read_data_sets("C:/Users/zhen/MNIST_data_bak/", one_hot=True)

# The MNIST data is split into three parts:
# 55,000 data points of training data (mnist.train)
# 10,000 points of test data (mnist.test), and
# 5,000 points of validation data (mnist.validation).

# Each image is 28 pixels by 28 pixels

# The input is a batch of images: None means any number of rows, and 784 means
# each image is flattened into a one-dimensional vector of 784 pixel values,
# so the input is a None-by-784 two-dimensional matrix
x = tf.placeholder(dtype=tf.float32, shape=(None, 784))
# Initialize everything to zero: W is a 784-by-10 matrix of weights
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

y = tf.nn.softmax(tf.matmul(x, W) + b)

# Training
# labels: each image corresponds to a one-hot vector of 10 values
y_ = tf.placeholder(dtype=tf.float32, shape=(None, 10))
# Define the loss function: cross-entropy, the usual choice for
# multi-class classification problems.
# reduction_indices is equivalent to axis: it specifies whether to sum
# along rows or along columns
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y),
                                              reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

# Evaluation

# tf.argmax() finds the index of the largest value in a tensor; here it picks,
# for each prediction, the digit with the highest probability

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))

# Use tf.cast to convert the bool values produced by correct_prediction
# to float32, then take the mean
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Initialize variables
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# Create a Saver node, used to save the trained model
saver = tf.train.Saver()
for i in range(100):
    batch_xs, batch_ys = my_mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    # Save an intermediate checkpoint at regular intervals
    if i % 10 == 0:
        save_path = saver.save(sess, "C:/Users/zhen/MNIST_data_bak/saver/softmax_middle_model.ckpt")

# print("TrainSet batch acc : %s " % accuracy.eval({x: batch_xs, y_: batch_ys}))
# print("ValidSet acc : %s" % accuracy.eval({x: my_mnist.validation.images, y_: my_mnist.validation.labels}))

# Test
print("TestSet acc : %s" % accuracy.eval({x: my_mnist.test.images, y_: my_mnist.test.labels}))
# Save the final model
save_path = saver.save(sess, "C:/Users/zhen/MNIST_data_bak/saver/softmax_final_model.ckpt")

# Use the trained model directly for prediction
with tf.Session() as sess_back:
    saver.restore(sess_back, "C:/Users/zhen/MNIST_data_bak/saver/softmax_final_model.ckpt")
    # Evaluation (rebuild the same evaluation ops as above)
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    # Test
    print(accuracy.eval({x: my_mnist.test.images, y_: my_mnist.test.labels}))

# Summary
# 1. Define the model formula, i.e. the network's forward computation
# 2. Define the loss, choose an optimizer, and tell it to minimize the loss
# 3. Train on the data iteratively
# 4. Evaluate the accuracy on the test or validation set
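The script above restores the checkpoint into a session that still holds the original Python graph objects. A checkpoint can also be loaded in a completely separate process, where the graph must first be rebuilt; below is a minimal sketch for TensorFlow 1.x using tf.train.import_meta_graph, which reconstructs the graph from the .meta file that saver.save() writes alongside the checkpoint. The tensor names "Placeholder:0" and "Softmax:0" are assumptions based on TensorFlow's auto-generated defaults for the unnamed ops above; in practice you would pass explicit name= arguments when building the graph.

# Hypothetical standalone script: load the saved softmax model in a new process
import tensorflow as tf

ckpt_path = "C:/Users/zhen/MNIST_data_bak/saver/softmax_final_model.ckpt"

with tf.Session() as sess:
    # Rebuild the graph structure from the .meta file; this returns a matching Saver
    loader = tf.train.import_meta_graph(ckpt_path + ".meta")
    # Restore the variable values (W and b) into the rebuilt graph
    loader.restore(sess, ckpt_path)

    graph = tf.get_default_graph()
    # Assumed default names for the unnamed placeholder and softmax op above
    x = graph.get_tensor_by_name("Placeholder:0")
    y = graph.get_tensor_by_name("Softmax:0")

    # some_images: any [batch, 784] float32 array of flattened 28x28 images
    # probs = sess.run(y, feed_dict={x: some_images})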

Result:

  [screenshot: console output showing the test-set accuracy]

Analysis:

  Persisting a trained model to disk makes it easy to reuse and share, and it also means that when training fails unexpectedly, the model can be recovered rather than retrained from scratch! One way to do that recovery is sketched below.
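saver.save() also maintains a small "checkpoint" index file in the target directory, and tf.train.latest_checkpoint() reads it to return the prefix of the newest checkpoint. A minimal sketch, assuming the graph-building code above (x, y_, train_step, saver, my_mnist) has already been run again in the new process:

# Resume an interrupted run from the newest checkpoint, or start fresh
ckpt = tf.train.latest_checkpoint("C:/Users/zhen/MNIST_data_bak/saver/")
with tf.Session() as sess:
    if ckpt is not None:
        saver.restore(sess, ckpt)  # reload W and b from the last save
    else:
        sess.run(tf.global_variables_initializer())  # no checkpoint yet
    # Continue the same training loop as before
    for i in range(100):
        batch_xs, batch_ys = my_mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
        if i % 10 == 0:
            saver.save(sess, "C:/Users/zhen/MNIST_data_bak/saver/softmax_middle_model.ckpt")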
