
[T-TensorFlow Framework Study] A Simple Logistic Regression Implementation in TensorFlow

Introduction to Softmax Regression

Every MNIST image shows a digit from 0 to 9, and we want the model to output, for a given image, the probability that it represents each digit. For example, the model might judge that an image containing a 9 represents the digit 9 with probability 80% but the digit 8 with probability 5% (since 8 and 9 share the small loop in the upper half), and assign even smaller probabilities to the remaining digits.
Recognizing handwritten digits with a linear-layer softmax model is a classic use of softmax regression. A softmax model assigns probabilities to the different classes; even later, when we train more elaborate models, the final step will still be a softmax that distributes probability over the classes.
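
Concretely, for one image with score vector z = Wx + b (ten scores, one per digit), softmax turns the scores into a probability distribution:

    softmax(z)_i = exp(z_i) / sum_j exp(z_j),   i = 0, ..., 9

Each output is positive and the ten outputs sum to 1, so values like the 80% and 5% above can be read directly as probabilities.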

Review:
TensorFlow functions used (a short standalone sketch follows the list):

  • tf.nn.softmax() builds the softmax model
  • tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
    solves by gradient descent
  • argmax() returns the index of the maximum value
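
A minimal sketch of tf.nn.softmax() and tf.argmax() on a toy tensor (the numbers are made up for illustration):

import tensorflow as tf

# toy scores: 2 samples, 3 classes
logits = tf.constant([[2.0, 1.0, 0.1],
                      [0.5, 3.0, 0.2]])
probs = tf.nn.softmax(logits)   # each row becomes a probability distribution
pred = tf.argmax(probs, 1)      # index of the row-wise maximum -> predicted class

with tf.Session() as sess:
    print(sess.run(probs))      # each row sums to 1
    print(sess.run(pred))       # [0 1]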

# Logistic regression in TensorFlow: a linear-layer softmax regression model
# for handwritten digit recognition

import tensorflow as tf
import numpy as np
import matplotlib.pylab as plt
from tensorflow.examples.tutorials.mnist import input_data

# MNIST data input
''' In the MNIST training set, mnist.train.images is a tensor of shape
[55000, 784]: the first dimension indexes the images and the second indexes
the pixels within each image (read_data_sets keeps 5,000 of the 60,000
training images aside as a validation set). Each element is the intensity
of one pixel in one image, a value between 0 and 1. '''
mnist = input_data.read_data_sets('data/', one_hot=True)

# placeholder is a graph input; None means the first dimension can have any length
x = tf.placeholder("float", [None, 784])
y = tf.placeholder("float", [None, 10])

# W has shape [784, 10], initialized to 0: 784 pixels need 784 weights per class
# b has shape [10], initialized to 0
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# Logistic regression model
# plain LR is a binary classifier; softmax generalizes it to multi-class
actv = tf.nn.softmax(tf.matmul(x, W) + b)  # predictions: 10 values per sample

# Cost function: -log P, where P is the predicted probability of the true class
# (the cross-entropy between the prediction actv and the label y)
cost = tf.reduce_mean(-tf.reduce_sum(y * tf.log(actv), reduction_indices=1))

# optimizer learning rate
learning_rate = 0.01
# gradient descent optimizer; training is the minimization of the cost function
optm = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Prediction: argmax() returns the index of the maximum value
# tf.argmax(actv, 1) is the index of the row-wise maximum of the predictions,
# tf.argmax(y, 1) the index of the true label
# (axis 0 would instead take the maximum down each column)
# pred is True where the predicted index equals the label index, else False
pred = tf.equal(tf.argmax(actv, 1), tf.argmax(y, 1))
# accuracy: tf.cast maps True/False to 1.0/0.0; the mean over samples is the accuracy
accr = tf.reduce_mean(tf.cast(pred, "float"))

# initializer
init = tf.global_variables_initializer()

# number of passes over the full training set
train_epochs = 50
# number of samples per mini-batch
batch_size = 100
display_step = 5

# Session
sess = tf.Session()
sess.run(init)

# mini-batch learning: 50 passes over all samples
for epoch in range(train_epochs):
    avg_cost = 0  # running average of the loss over this epoch
    num_batch = int(mnist.train.num_examples/batch_size)  # number of mini-batches
    # one epoch: num_batch mini-batches of batch_size samples each
    for i in range(num_batch):
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        # feed the batch to the model and take one gradient step
        sess.run(optm, feed_dict={x: batch_xs, y: batch_ys})
        feeds = {x: batch_xs, y: batch_ys}
        # accumulate each mini-batch's loss into the epoch average
        avg_cost += sess.run(cost, feed_dict=feeds)/num_batch
    # display: print every display_step epochs
    if epoch % display_step == 0:
        feeds_train = {x: batch_xs, y: batch_ys}
        feeds_test = {x: mnist.test.images, y: mnist.test.labels}
        train_acc = sess.run(accr, feed_dict=feeds_train)
        test_acc = sess.run(accr, feed_dict=feeds_test)
        print("Epoch: %03d/%03d cost: %.9f train_acc: %.3f test_acc: %.3f"
              % (epoch, train_epochs, avg_cost, train_acc, test_acc))
print('Done')
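
A note on numerical stability: computing tf.nn.softmax and then tf.log by hand can hit log(0) if a predicted probability underflows to exactly zero. TensorFlow 1.x ships a fused op for exactly this case; a sketch of the drop-in replacement, keeping the same x, y, W, b as above:

# raw scores (logits) before softmax
logits = tf.matmul(x, W) + b
actv = tf.nn.softmax(logits)  # still needed for the accuracy computation
# fused, numerically stable softmax + cross-entropy
cost = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))

The optimizer, accuracy, and training loop stay unchanged.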

Output:

F:\Anaconda\python.exe D:/PycharmProjects/tensorflow邏輯迴歸實現.py
WARNING:tensorflow:From D:/PycharmProjects/tensorflow邏輯迴歸實現.py:15: read_data_sets (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
WARNING:tensorflow:From F:\Anaconda\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:260: maybe_download (from tensorflow.contrib.learn.python.learn.datasets.base) is deprecated and will be removed in a future version.
Instructions for updating:
Please write your own downloading logic.
Extracting data/train-images-idx3-ubyte.gz
WARNING:tensorflow:From F:\Anaconda\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:262: extract_images (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
Extracting data/train-labels-idx1-ubyte.gz
WARNING:tensorflow:From F:\Anaconda\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:267: extract_labels (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.data to implement this functionality.
WARNING:tensorflow:From F:\Anaconda\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:110: dense_to_one_hot (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use tf.one_hot on tensors.
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
WARNING:tensorflow:From F:\Anaconda\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py:290: DataSet.__init__ (from tensorflow.contrib.learn.python.learn.datasets.mnist) is deprecated and will be removed in a future version.
Instructions for updating:
Please use alternatives such as official/mnist/dataset.py from tensorflow/models.
2018-08-30 15:04:53.025707: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Epoch: 000/050 cost: 1.177109451 train_acc: 0.870 test_acc: 0.852
Epoch: 005/050 cost: 0.440929337 train_acc: 0.850 test_acc: 0.895
Epoch: 010/050 cost: 0.383323831 train_acc: 0.880 test_acc: 0.905
Epoch: 015/050 cost: 0.357286127 train_acc: 0.860 test_acc: 0.909
Epoch: 020/050 cost: 0.341520180 train_acc: 0.930 test_acc: 0.913
Epoch: 025/050 cost: 0.330535250 train_acc: 0.900 test_acc: 0.914
Epoch: 030/050 cost: 0.322328042 train_acc: 0.880 test_acc: 0.915
Epoch: 035/050 cost: 0.315957263 train_acc: 0.880 test_acc: 0.917
Epoch: 040/050 cost: 0.310728670 train_acc: 0.870 test_acc: 0.918
Epoch: 045/050 cost: 0.306382290 train_acc: 0.970 test_acc: 0.919
Done

Process finished with exit code 0
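
As a closing usage example: while the session is still open after training, the same graph can classify new images. A minimal sketch:

# predicted digit indices for the first five test images, using the trained weights
preds = sess.run(tf.argmax(actv, 1), feed_dict={x: mnist.test.images[:5]})
print(preds)
print(np.argmax(mnist.test.labels[:5], 1))  # true labels, for comparison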