
A Small TensorFlow Example: Training a Weight and a Bias

import tensorflow as tf
import numpy as np
"""
本例子是用來演示利用TensorFlow訓練出假設的權重和偏置
"""

# set data
x_data = np.random.rand(100).astype(np.float32)
y_data = x_data*0.1+0.3

# create tensorflow structure
Weights = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))

y_pred = Weights * x_data + biases
loss = tf.reduce_mean(tf.square(y_pred - y_data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

init = tf.initialize_all_variables()  # in newer TF 1.x: tf.global_variables_initializer()

# define the session
sess = tf.Session()
sess.run(init)  # activate the network

# start training
for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weights), sess.run(biases))

The output is:

0 [-0.43459344] [0.7752902]
20 [-0.05772059] [0.38117662]
40 [0.05961926] [0.3207834]
60 [0.08966146] [0.30532113]
80 [0.09735306] [0.30136237]
100 [0.09932232] [0.30034882]
120 [0.09982649] [0.3000893]
140 [0.0999556] [0.30002287]
160 [0.09998864] [0.30000585]
180 [0.09999712] [0.3000015]
200 [0.09999926] [0.3000004]

After 201 iterations, the trained weight and bias converge to the values used in the preset target function (0.1 and 0.3).
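The code above uses the TensorFlow 1.x graph/session API. As a rough sketch only (assuming TensorFlow 2.x is installed, where tf.Session no longer exists), the same fit can be written with eager execution and tf.GradientTape:

import numpy as np
import tensorflow as tf

x_data = np.random.rand(100).astype(np.float32)
y_data = x_data * 0.1 + 0.3

weights = tf.Variable(tf.random.uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.5)

for step in range(201):
    with tf.GradientTape() as tape:
        y_pred = weights * x_data + biases
        loss = tf.reduce_mean(tf.square(y_pred - y_data))
    # compute gradients of the loss w.r.t. the trainable variables and apply them
    grads = tape.gradient(loss, [weights, biases])
    optimizer.apply_gradients(zip(grads, [weights, biases]))
    if step % 20 == 0:
        print(step, weights.numpy(), biases.numpy())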
Note: do not mix TensorFlow and NumPy constructors. Inside tf.Variable, use TensorFlow ops (e.g. tf.zeros) rather than NumPy ones; otherwise the default dtypes will not match (NumPy defaults to float64, while the data above is float32).
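As a quick illustration of that dtype clash (a hypothetical snippet, not part of the original example; exact error messages vary by TensorFlow version):

import numpy as np
import tensorflow as tf

x_data = np.random.rand(100).astype(np.float32)

# NumPy constructor: np.zeros defaults to float64
bad_bias = tf.Variable(np.zeros([1]))    # variable dtype is float64
# bad_bias + x_data  -> raises a dtype mismatch error (float64 vs float32)

# TensorFlow constructor: tf.zeros defaults to float32
good_bias = tf.Variable(tf.zeros([1]))   # variable dtype is float32
# good_bias + x_data -> works as expected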