
Hands-On Learning of the TensorFlow Framework (1)


```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Generate training data: y ≈ 2x plus Gaussian noise
train_X = np.linspace(-1, 1, 1000)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3

plt.plot(train_X, train_Y, 'ro', label='Original data')
plt.legend()
#plt.show()

X = tf.placeholder("float")
Y = tf.placeholder("float")

W = tf.Variable(tf.random_normal([1]), name="weight")
b = tf.Variable(tf.zeros([1]), name="bias")

z = tf.multiply(X, W) + b

# Evaluate the result; gradient descent feeds the error back
cost = tf.reduce_mean(tf.square(Y - z))
learning_rate = 0.01
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

init = tf.global_variables_initializer()

training_epochs = 30
display_step = 2

# Session
with tf.Session() as sess:
    sess.run(init)
    plotdata = {"batchsize": [], "loss": []}
    for epoch in range(training_epochs):
        for (x, y) in zip(train_X, train_Y):
            sess.run(optimizer, feed_dict={X: x, Y: y})

        if epoch % display_step == 0:
            loss = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
            print("Epoch:", epoch + 1, "cost=", loss, "W=", sess.run(W), "b=", sess.run(b))
            if not (loss == "NA"):
                plotdata["batchsize"].append(epoch)
                plotdata["loss"].append(loss)

    print(" Finished!")
    print("cost =", sess.run(cost, feed_dict={X: train_X, Y: train_Y}),
          "W =", sess.run(W), "b =", sess.run(b))
```

The code learns the given relation y ≈ 2x, fitting a suitable weight w and bias b.
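As a quick sanity check (plain NumPy, independent of the TensorFlow script above), the values the training should converge toward can be recovered in closed form with np.polyfit:

```python
import numpy as np

np.random.seed(0)
train_X = np.linspace(-1, 1, 1000)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3

# Closed-form least-squares line fit: returns [slope, intercept]
w_fit, b_fit = np.polyfit(train_X, train_Y, deg=1)
print(w_fit, b_fit)  # slope near 2, intercept near 0
```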


Below I record what each function does, as a memory aid for the Python libraries and functions involved:

1. First, the linspace function: numpy.linspace(start, stop, num=50, endpoint=True, retstep=False, dtype=None)

  Returns num evenly spaced samples over the interval [start, stop]. With endpoint=True the samples include stop; with endpoint=False they exclude it. With retstep=True the spacing between samples is also returned, i.e. the result is (samples, step).
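The three keyword arguments above can be seen side by side in a small example:

```python
import numpy as np

# endpoint=True (the default): stop is included in the samples
a = np.linspace(0, 1, 5)                    # [0., 0.25, 0.5, 0.75, 1.]

# endpoint=False: stop is excluded, so the step shrinks
b = np.linspace(0, 1, 5, endpoint=False)    # [0., 0.2, 0.4, 0.6, 0.8]

# retstep=True: also returns the step size as (samples, step)
c, step = np.linspace(0, 1, 5, retstep=True)
```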

2. np.random.randn : numpy.random.randn(d0, d1, ..., dn)

  Generates an array of shape (d0, d1, ..., dn) with values drawn from the standard normal distribution.

  By contrast, np.random.rand(d0, d1, ..., dn) draws values from the uniform distribution on [0, 1).
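The difference between the two is easy to check directly:

```python
import numpy as np

np.random.seed(42)

# randn: standard normal -- unbounded, centered at 0, can be negative
gauss = np.random.randn(3, 4)

# rand: uniform on [0, 1) -- never negative, never reaches 1
unif = np.random.rand(3, 4)

print(gauss.shape, unif.min() >= 0, unif.max() < 1)
```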

3. tf.placeholder : tf.placeholder(dtype, shape=None, name=None)

  A placeholder can be thought of as a variable that is only assigned a value when needed (via feed_dict at run time).

  dtype is the data type, e.g. tf.float32 or tf.float64. shape is the array shape: shape=None leaves the shape unconstrained, shape=[3,4] fixes it, and shape=[None,4] leaves the number of rows undetermined. Returns a Tensor.

4. tf.Variable.__init__()

  tf.Variable.__init__(initial_value, trainable=True, collections=None, validate_shape=True, name=None)

  The parameter table, quoting the relevant documentation:

  initial_value: any type convertible to a Tensor. The variable's initial value.
  trainable: bool. If True, the variable is added to GraphKeys.TRAINABLE_VARIABLES, which is required for an Optimizer to update it.
  collections: list. Specifies the graph collections the variable belongs to; default [GraphKeys.GLOBAL_VARIABLES].
  validate_shape: bool. If False, type and shape checking is skipped.
  name: string. The variable's name; if not specified, a unique one is assigned automatically.

5. tf.random_normal() : tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)

  The code only uses shape=[1], a one-dimensional tensor with a single element.

6. tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None) computes the mean.

  input_tensor: the tensor to reduce.

  reduction_indices: the dimension(s) along which to compute the mean (the older name for axis).
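The axis semantics mirror NumPy's; a NumPy sketch (not TensorFlow code) of the same reductions:

```python
import numpy as np

x = np.array([[1., 2.],
              [3., 4.]])

m_all = np.mean(x)           # 2.5: mean over every element
m_col = np.mean(x, axis=0)   # [2., 3.]: mean down each column
m_row = np.mean(x, axis=1)   # [1.5, 3.5]: mean across each row
```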

7. tf.train.GradientDescentOptimizer(learning_rate)

  Creates an optimizer with the specified learning rate: optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01)

  Compute the gradients and apply them directly to the variables: optimizer.minimize(cost , var_list = <list of variables>)

  The returned operation is then executed inside a session, e.g. with sess.run().

  Alternatively, compute the gradients explicitly: gradients = optimizer.compute_gradients(loss , <list of variables>)

  and then process the gradients however you need before applying them.
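What minimize(cost) does on each step can be sketched by hand in plain NumPy (an illustrative re-derivation, not TensorFlow code): for cost = mean((y - (w*x + b))^2), the gradients are d(cost)/dw = -2*mean(x*(y - z)) and d(cost)/db = -2*mean(y - z), and each step moves the parameters against the gradient:

```python
import numpy as np

np.random.seed(1)
train_X = np.linspace(-1, 1, 1000)
train_Y = 2 * train_X + np.random.randn(*train_X.shape) * 0.3

w, b = 0.0, 0.0
learning_rate = 0.1
for _ in range(200):
    z = w * train_X + b
    err = train_Y - z
    grad_w = -2 * np.mean(train_X * err)   # d(cost)/dw
    grad_b = -2 * np.mean(err)             # d(cost)/db
    w -= learning_rate * grad_w            # step against the gradient
    b -= learning_rate * grad_b

print(w, b)  # w approaches 2, b approaches 0
```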

8. tf.global_variables_initializer() initializes the state of all variables:

  init = tf.global_variables_initializer()

  with tf.Session() as sess:

    sess.run(init)

  
