
TensorFlow Advanced Series, Part 5 (Backpropagation 2)


  The previous example was a simple regression algorithm; what follows is a simple binary classification algorithm. Generate 100 numbers from two normal distributions, N(-1, 1) and N(3, 1). All values drawn from N(-1, 1) are labeled class 0, and all values drawn from N(3, 1) are labeled class 1. The model converts these inputs into class predictions through a sigmoid function; in other words, the model is sigmoid(x + A), where A is the variable to be fitted, and theoretically A = -1. If the two normal distributions have means m1 and m2, the optimal value of A is -(m1 + m2)/2, the shift that makes the two means equidistant from 0.
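  As a quick sanity check of that claim (a minimal sketch; m1, m2, and the helper sigmoid are illustrative, not part of the post's code), the optimal shift and the resulting behavior at each mean can be verified directly:

import numpy as np

m1, m2 = -1.0, 3.0
A = -(m1 + m2) / 2.0              # optimal shift: -(m1 + m2) / 2 = -1.0
print(A)                          # -1.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A point at each distribution mean is mapped symmetrically around 0.5:
print(sigmoid(m1 + A))            # ~0.12 -> rounds to class 0
print(sigmoid(m2 + A))            # ~0.88 -> rounds to class 1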

  The implementation is as follows:

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops

ops.reset_default_graph()

# Create the graph session
sess = tf.Session()

# Generate the data: 100 values x_vals -- 50 samples from N(-1, 1)
# and 50 samples from N(3, 1) -- and 100 targets y_vals: 50 zeros and 50 ones
x_vals = np.concatenate((np.random.normal(-1, 1, 50), np.random.normal(3, 1, 50)))
y_vals = np.concatenate((np.repeat(0., 50), np.repeat(1., 50)))

# Declare the x_data and target placeholders
x_data = tf.placeholder(shape=[1], dtype=tf.float32)
y_target = tf.placeholder(shape=[1], dtype=tf.float32)

# Declare the variable A (initialized near 10, far from the theoretical value -1)
A = tf.Variable(tf.random_normal(mean=10, shape=[1]))

# Implement sigmoid(x_data + A); the sigmoid itself is not applied here
# because the loss function below applies it internally
my_output = tf.add(x_data, A)

# Add a batch dimension to my_output and y_target
my_output_expanded = tf.expand_dims(my_output, 0)
y_target_expanded = tf.expand_dims(y_target, 0)

# Initialize all variables
init = tf.global_variables_initializer()
sess.run(init)

# Add the loss function: sigmoid cross-entropy,
# L = -actual * log(sigmoid(pred)) - (1 - actual) * log(1 - sigmoid(pred))
# which TensorFlow computes in the numerically stable form
# L = max(pred, 0) - pred * actual + log(1 + exp(-abs(pred)))
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(logits=my_output_expanded, labels=y_target_expanded)

# Declare the optimizer
my_opt = tf.train.GradientDescentOptimizer(0.05)
train_step = my_opt.minimize(xentropy)

# Train, collecting the loss values in loss_batch
loss_batch = []
for i in range(1400):
    rand_index = np.random.choice(100)
    rand_x = [x_vals[rand_index]]
    rand_y = [y_vals[rand_index]]
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    loss = sess.run(xentropy, feed_dict={x_data: rand_x, y_target: rand_y})
    print('Step #' + str(i + 1) + ' A = ' + str(sess.run(A)))
    print('Loss = ' + str(loss))
    loss_batch.append(float(loss))

plt.plot(loss_batch, 'r--', label='Back Propagation')
plt.legend(loc='upper right', prop={'size': 11})
plt.show()

# Evaluate the predictions
predictions = []
for i in range(len(x_vals)):
    x_val = [x_vals[i]]
    prediction = sess.run(tf.round(tf.sigmoid(my_output)), feed_dict={x_data: x_val})
    predictions.append(prediction[0])

accuracy = sum(x == y for x, y in zip(predictions, y_vals)) / 100.
print('Ending Accuracy = ' + str(np.round(accuracy, 2)))
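The two loss expressions in the comments above are algebraically equivalent; the second is the numerically stable form that sigmoid_cross_entropy_with_logits actually computes. A minimal NumPy sketch (the pred and actual values are illustrative, not from the post) confirms they agree:

import numpy as np

def naive_xent(pred, actual):
    # L = -actual * log(sigmoid(pred)) - (1 - actual) * log(1 - sigmoid(pred))
    s = 1.0 / (1.0 + np.exp(-pred))
    return -actual * np.log(s) - (1 - actual) * np.log(1 - s)

def stable_xent(pred, actual):
    # L = max(pred, 0) - pred * actual + log(1 + exp(-abs(pred)))
    return np.maximum(pred, 0) - pred * actual + np.log1p(np.exp(-np.abs(pred)))

pred, actual = 2.0, 1.0           # e.g. x = 3 shifted by A = -1, true class 1
print(naive_xent(pred, actual))   # ~0.1269
print(stable_xent(pred, actual))  # ~0.1269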

(Figure: the loss values collected in loss_batch, plotted over the 1400 training iterations.)

Output:

Step #1 A = [ 9.38409519]
Loss = [[ 8.86125183]]
Step #2 A = [ 9.38409519]
Loss = [[ 2.55701457e-06]]

.........

Step #1398 A = [-1.09940839]
Loss = [[ 1.60983551]]
Step #1399 A = [-1.09107065]
Loss = [[ 0.18104036]]
Step #1400 A = [-1.10927427]
Loss = [[ 0.44608194]]

Ending Accuracy = 0.96
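With the final A ≈ -1.11, the prediction rule round(sigmoid(x + A)) outputs class 1 exactly when x + A ≥ 0, i.e. when x ≥ -A ≈ 1.11, close to the midpoint of the two means. A minimal sketch of this decision boundary (reusing the final A from the run above; the sample x values are illustrative):

import numpy as np

A = -1.10927427                   # final fitted value from the run above

def predict(x):
    # round(sigmoid(x + A)) == 1  <=>  x + A >= 0  <=>  x >= -A
    return (np.asarray(x) + A >= 0).astype(np.float64)

xs = np.array([-1.0, 0.5, 1.5, 3.0])
print(predict(xs))                # [0. 0. 1. 1.]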

For more on the loss functions used here, see: tensorflow進階篇-4(損失函數1), tensorflow進階篇-4(損失函數2), tensorflow進階篇-4(損失函數3)
