
Deep Learning with TensorFlow (Part 1)


I. Introduction to TensorFlow

1. Definition of TensorFlow

Tensor: a tensor, i.e. an N-dimensional array.

Flow: computation based on dataflow graphs.

TensorFlow: the process in which tensors flow from one end of the computation graph to the other, carrying complex data structures into an artificial neural network for analysis and processing.


2. How It Works:

A graph represents a computation task. The nodes of the graph are called ops (operations). An op takes zero or more tensors as input, and executing the graph through a session object produces zero or more tensors.

The workflow has two steps (sketched below):

(1) define the computation graph

(2) run the graph (with data) in a session
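
A minimal sketch of this pattern (not from the original post; it assumes TensorFlow 1.x, the same API used in the code below). Building the graph computes nothing; the addition runs only inside sess.run:

import tensorflow as tf

# step 1: define the computation graph (no computation happens here)
a = tf.constant(3.0)
b = tf.constant(4.0)
c = a + b  # c is a graph node, not the number 7.0

# step 2: run the graph in a session
with tf.Session() as sess:
    print(sess.run(c))  # prints 7.0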


3. Characteristics:

(1) Asynchronous: writing happens in one place, reading in another, and training in another.

(2) Global: operations are added to the global graph, monitoring ops are added to the global summary, and parameters/losses are added to the global collection.

(3) Symbolic: nothing concrete exists at graph-construction time; actual values are passed in only at run time (see the sketch below).
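
A small sketch of the symbolic style (again assuming TensorFlow 1.x): the placeholder holds no value while the graph is built, and concrete numbers are supplied only through feed_dict at run time:

import tensorflow as tf

p = tf.placeholder(tf.float32, shape=(None,), name="p")  # no concrete value yet
doubled = p * 2.0  # a symbolic expression, not a result

with tf.Session() as sess:
    # the actual data arrives only now, at run time
    print(sess.run(doubled, feed_dict={p: [1.0, 2.0, 3.0]}))  # [2. 4. 6.]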


II. Code

1. Define the network's parameters and variables

# -*- coding: utf-8 -*-
# version: python 3.5
import tensorflow as tf
from numpy.random import RandomState

batch_size = 8
# placeholders: shape (None, ...) leaves the batch dimension open
x = tf.placeholder(tf.float32, shape=(None, 2), name="x-input")
y_ = tf.placeholder(tf.float32, shape=(None, 1), name="y-input")
# a single linear layer: (None, 2) x (2, 1) -> (None, 1)
w1 = tf.Variable(tf.random_normal([2, 1], stddev=1, seed=1))
y = tf.matmul(x, w1)

2. Define a custom loss function

# Make under-prediction more costly than over-prediction,
# so the model learns to err on the high side.
loss_less = 10
loss_more = 1
loss = tf.reduce_sum(tf.where(tf.greater(y, y_),
                              (y - y_) * loss_more,
                              (y_ - y) * loss_less))
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)
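
To make the asymmetry concrete, here is the same rule computed by hand in plain NumPy on two made-up predictions (hypothetical numbers, not outputs of the trained model): a 0.2 over-prediction costs 0.2, while a 0.1 under-prediction costs 1.0:

import numpy as np

loss_less, loss_more = 10, 1
y_pred = np.array([1.2, 0.9])  # hypothetical predictions
y_true = np.array([1.0, 1.0])  # hypothetical targets

# same branching as tf.where(tf.greater(y, y_), ...)
per_example = np.where(y_pred > y_true,
                       (y_pred - y_true) * loss_more,
                       (y_true - y_pred) * loss_less)
print(per_example.sum())  # 0.2 * 1 + 0.1 * 10 = 1.2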

3. Generate a simulated dataset

rdm = RandomState(1)
X = rdm.rand(128, 2)
# label = x1 + x2 plus uniform noise in [-0.05, 0.05]
Y = [[x1 + x2 + rdm.rand() / 10.0 - 0.05] for (x1, x2) in X]

4. Train the model

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 5000
    for i in range(STEPS):
        # slide a batch_size window over the 128 samples, wrapping around
        start = (i * batch_size) % 128
        end = start + batch_size
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 1000 == 0:
            print("After %d training step(s), w1 is: " % (i))
            print(sess.run(w1), "\n")
    print("Final w1 is: \n", sess.run(w1))
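
As a quick sanity check of the batch-slicing arithmetic above (same batch_size = 8 and 128 samples): because 128 is a multiple of 8, the windows tile the dataset exactly and wrap back to sample 0 every 16 steps:

batch_size = 8
for i in range(18):
    start = (i * batch_size) % 128
    end = start + batch_size
    print(i, start, end)  # steps 0-15 cover [0:8] ... [120:128]; step 16 wraps to [0:8]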

Results: since under-prediction is penalized ten times more heavily, the learned weights settle slightly above the true coefficients [1, 1], so the model tends to predict high.

After 0 training step(s), w1 is: 
[[-0.81031823]
 [ 1.4855988 ]] 

After 1000 training step(s), w1 is: 
[[ 0.01247112]
 [ 2.1385448 ]] 

After 2000 training step(s), w1 is: 
[[ 0.45567414]
 [ 2.17060661]] 

After 3000 training step(s), w1 is: 
[[ 0.69968724]
 [ 1.8465308 ]] 

After 4000 training step(s), w1 is: 
[[ 0.89886665]
 [ 1.29736018]] 

Final w1 is: 
[[ 1.01934695]
 [ 1.04280889]]

5. Redefine the loss function so that over-prediction costs more, biasing the model toward predicting low

# Now over-prediction is the expensive mistake.
loss_less = 1
loss_more = 10
loss = tf.reduce_sum(tf.where(tf.greater(y, y_),
                              (y - y_) * loss_more,
                              (y_ - y) * loss_less))
train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    STEPS = 5000
    for i in range(STEPS):
        start = (i * batch_size) % 128
        end = start + batch_size
        sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
        if i % 1000 == 0:
            print("After %d training step(s), w1 is: " % (i))
            print(sess.run(w1), "\n")
    print("Final w1 is: \n", sess.run(w1))

Results: this time the weights settle slightly below [1, 1], so the model tends to predict low.

After 0 training step(s), w1 is: 
[[-0.81231821]
 [ 1.48359871]] 

After 1000 training step(s), w1 is: 
[[ 0.18643527]
 [ 1.07393336]] 

After 2000 training step(s), w1 is: 
[[ 0.95444274]
 [ 0.98088616]] 

After 3000 training step(s), w1 is: 
[[ 0.95574027]
 [ 0.9806633 ]] 

After 4000 training step(s), w1 is: 
[[ 0.95466018]
 [ 0.98135227]] 

Final w1 is: 
[[ 0.95525807]
 [ 0.9813394 ]]
