Training a linear regression model with TensorFlow
阿新 • Published: 2018-11-18
Complete code
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np

# Sample data
x_train = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.1, x_train.shape)
y_train = x_train * 3 + noise + 0.8

# Linear model
W = tf.Variable([0.1], dtype=tf.float32)
b = tf.Variable([0.1], dtype=tf.float32)
x = tf.placeholder(tf.float32)
line_model = W * x + b

# Loss model
y = tf.placeholder(tf.float32)
loss = tf.reduce_sum(tf.square(line_model - y))

# Create the optimizer
optimizer = tf.train.GradientDescentOptimizer(0.001)
train = optimizer.minimize(loss)

# Initialize the variables
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)

# Plot the sample data
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(x_train, y_train)
plt.ion()
plt.show()
plt.pause(3)

# Train for 100 steps
for i in range(100):
    # Print progress every 10 steps
    if i % 10 == 0:
        print(i)
        print('W:%s b:%s' % (sess.run(W), sess.run(b)))
        print('loss:%s' % (sess.run(loss, {x: x_train, y: y_train})))
    sess.run(train, {x: x_train, y: y_train})

print('---')
print('W:%s b:%s' % (sess.run(W), sess.run(b)))
print('loss:%s' % (sess.run(loss, {x: x_train, y: y_train})))
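The training loop above is ordinary gradient descent on the summed squared error. As a sketch of what the optimizer does under the hood (not part of the original post), the same updates can be written out by hand in NumPy; the gradients below are just the derivatives of sum((W*x + b - y)^2) with respect to W and b:

```python
import numpy as np

# Synthetic data in the same style as the TensorFlow example: y = 3x + 0.8 plus noise.
rng = np.random.default_rng(0)
x_train = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = rng.normal(0, 0.1, x_train.shape)
y_train = x_train * 3 + noise + 0.8

# Manual gradient descent matching tf.reduce_sum(tf.square(...))
# with the same initial values (0.1) and learning rate (0.001).
W, b = 0.1, 0.1
lr = 0.001
for _ in range(100):
    err = W * x_train + b - y_train
    # Gradients of sum((W*x + b - y)^2) w.r.t. W and b.
    grad_W = np.sum(2 * err * x_train)
    grad_b = np.sum(2 * err)
    W -= lr * grad_W
    b -= lr * grad_b

print(W, b)  # both end up near the true values 3 and 0.8
```

After 100 steps W and b land close to 3 and 0.8, mirroring what the TensorFlow session prints.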
The distribution of the training samples is shown below.
The output is as follows.
Conclusion
From the printed results we can see that W has come very close to 3 (the slope used to generate the data) and b very close to 0.8 (the intercept); some residual error from the noise is unavoidable.
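As an independent cross-check (not in the original post), the same data can be fitted in closed form with ordinary least squares via `np.polyfit`; up to the noise, it recovers the same slope and intercept that gradient descent converges to:

```python
import numpy as np

# Same kind of synthetic data: y = 3x + 0.8 plus Gaussian noise.
x_train = np.linspace(-1, 1, 300)
noise = np.random.default_rng(1).normal(0, 0.1, x_train.shape)
y_train = 3 * x_train + noise + 0.8

# Degree-1 polynomial fit = closed-form least-squares line.
slope, intercept = np.polyfit(x_train, y_train, deg=1)
print(slope, intercept)  # close to 3 and 0.8
```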