
TensorFlow Class Notes (2)

# coding: utf-8
"""
反向傳播-》訓練模型引數,在所有引數上用梯度下降,使NN模型在訓練資料上的損失函式最小
損失函式(loss):預測值(y)與已知答案(y_)的差距
均方誤差MSE : MSE(Y_,Y)=(Y-Y_)^2求算術平均值
loss = tf.reduce_mean(tf.square(y_-y))
反向傳播訓練方法:以減小loss值為優化目標
train_step = tf.train.GradientDescentOptimizer(learing_rate).minimize(loss)
train_step = tf.train.MomentumOptimizer(learnin_rate,momentum).minimize(loss)
train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
learning_rate表示每次更新的幅度

"""
import tensorflow as tf
import numpy as np
BATCH_SIZE = 8
seed = 23455

# Generate random numbers based on the seed.
rng = np.random.RandomState(seed)
# The random matrix has 32 rows and 2 columns, representing 32 samples of
# (volume, weight), used as the input data set.
X = rng.rand(32, 2)
# For each row of X, assign the label 1 if x0 + x1 < 1, otherwise 0.
# These labels are the "correct answers" for the input data set:
# Y = 1 means qualified, Y = 0 means not qualified.
Y = [[int(x0 + x1 < 1)] for [x0, x1] in X]
print("X:\n",X)
print("Y:\n",Y)

# 1. Define the inputs, parameters and outputs of the neural network, i.e. the forward propagation process.
x = tf.placeholder(tf.float32, shape=[None,2])
y_ = tf.placeholder(tf.float32, shape=[None,1])

w1 = tf.Variable(tf.random_normal([2,3], stddev=1, seed=1))   # 2x3 weight matrix
w2 = tf.Variable(tf.random_normal([3,1], stddev=1, seed=1))   # 3x1 weight matrix

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)
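# Shape check (illustrative, not part of the original notes): x [None, 2] times
# w1 [2, 3] gives a [None, 3]; a [None, 3] times w2 [3, 1] gives y [None, 1].
print("a shape:", a.get_shape(), "y shape:", y.get_shape())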

# 2. Define the loss function and the backpropagation method.
loss = tf.reduce_mean(tf.square(y-y_))
train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
#train_step = tf.train.MomentumOptimizer(0.001, 0.9).minimize(loss)
#train_step = tf.train.AdamOptimizer(0.001).minimize(loss)
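# Illustrative only (not part of the original notes): GradientDescentOptimizer
# applies the update w <- w - learning_rate * d(loss)/dw; the same gradients can
# be inspected directly with tf.gradients.
grads = tf.gradients(loss, [w1, w2])   # gradient tensors for w1 and w2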

# 3. Create a session and train for STEPS rounds.
with tf.Session() as sess:
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    # Print the current (untrained) parameter values.
    print("W1:\n", sess.run(w1))
    print("W2:\n", sess.run(w2))
    print("\n")

    # Train the model.
    STEPS = 10000
    for i in range(STEPS):
        # Cycle through the 32 samples in batches of BATCH_SIZE:
        # start takes the values 0, 8, 16, 24, 0, 8, ...
        start = (i * BATCH_SIZE) % 32
        end = start + BATCH_SIZE
        sess.run(train_step, feed_dict={x:X[start:end], y_:Y[start:end]})
        if i % 500 == 0:
            total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
            print("After %d training step(s), loss on all data is %g" % (i, total_loss))

    # Print the parameter values after training.
    print("\n")
    print("w1:\n", sess.run(w1))
    print("w2:\n", sess.run(w2))

    """
    Output:
        X:
 [[0.83494319 0.11482951]
 [0.66899751 0.46594987]
 [0.60181666 0.58838408]
 [0.31836656 0.20502072]
 [0.87043944 0.02679395]
 [0.41539811 0.43938369]
 [0.68635684 0.24833404]
 [0.97315228 0.68541849]
 [0.03081617 0.89479913]
 [0.24665715 0.28584862]
 [0.31375667 0.47718349]
 [0.56689254 0.77079148]
 [0.7321604  0.35828963]
 [0.15724842 0.94294584]
 [0.34933722 0.84634483]
 [0.50304053 0.81299619]
 [0.23869886 0.9895604 ]
 [0.4636501  0.32531094]
 [0.36510487 0.97365522]
 [0.73350238 0.83833013]
 [0.61810158 0.12580353]
 [0.59274817 0.18779828]
 [0.87150299 0.34679501]
 [0.25883219 0.50002932]
 [0.75690948 0.83429824]
 [0.29316649 0.05646578]
 [0.10409134 0.88235166]
 [0.06727785 0.57784761]
 [0.38492705 0.48384792]
 [0.69234428 0.19687348]
 [0.42783492 0.73416985]
 [0.09696069 0.04883936]]
Y:
 [[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
2018-10-30 12:48:25.423305: I C:\tf_jenkins\home\workspace\rel-win\M\windows\PY\36\tensorflow\core\platform\cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
W1:
 [[-0.8113182   1.4845988   0.06532937]
 [-2.4427042   0.0992484   0.5912243 ]]
W2:
 [[-0.8113182 ]
 [ 1.4845988 ]
 [ 0.06532937]]


After 0 training step(s), loss on all data is 5.13118
After 500 training step(s), loss on all data is 0.413201
After 1000 training step(s), loss on all data is 0.394286
After 1500 training step(s), loss on all data is 0.390501
After 2000 training step(s), loss on all data is 0.389507
After 2500 training step(s), loss on all data is 0.390754
After 3000 training step(s), loss on all data is 0.391654
After 3500 training step(s), loss on all data is 0.393698
After 4000 training step(s), loss on all data is 0.394749
After 4500 training step(s), loss on all data is 0.396768
After 5000 training step(s), loss on all data is 0.397619
After 5500 training step(s), loss on all data is 0.399442
After 6000 training step(s), loss on all data is 0.400045
After 6500 training step(s), loss on all data is 0.401661
After 7000 training step(s), loss on all data is 0.402039
After 7500 training step(s), loss on all data is 0.403473
After 8000 training step(s), loss on all data is 0.403664
After 8500 training step(s), loss on all data is 0.404948
After 9000 training step(s), loss on all data is 0.404988
After 9500 training step(s), loss on all data is 0.406149


w1:
 [[-0.6102584   0.61458135  0.09446487]
 [-2.371945   -0.00763825  0.5834913 ]]
w2:
 [[-0.19101174]
 [ 0.6060787 ]
 [-0.00777807]]
    """
    """
    搭建神經網路的八股:準備,前傳,反轉,迭代
    0 準備 import
           常量定義

           生成資料集
    1.前向傳播: 定義輸入,引數和輸出
            x=
            y_=

            w1=
            w2=

            a=
            y=
    2.反向傳播:定義損失函式,反向傳播方法
    loss=
    train_step=
    3.生成會話,訓練STEPS輪
    with tf.Session() as sess
        init_op=tf.global_variables_initializer()
        sess_run(init_op)

        STEPS=3000
        for i in range(STEPS):      #定義訓練次數
            start=                  #每次迴圈喂入BASH的個數
            end=
            sess.run(train_step,feed_dict=)
    """