
Installing TensorFlow on Ubuntu 16.04 and Running TensorBoard, CNN, and RNN on the CPU: An Illustrated Tutorial

Installing Ubuntu 16.04

For a detailed tutorial on dual-booting Win7 and Ubuntu 16.04 from a USB drive, see: http://blog.csdn.net/coderjyf/article/details/51241919
For installing Ubuntu 16.04 in a VMware 12 virtual machine, see this Baidu Jingyan guide: http://jingyan.baidu.com/article/c275f6ba07e269e33d756714.html
Lazy option: download a pre-built backup of a system with TensorFlow already installed: http://pan.baidu.com/s/1eRPWlKu  password: j3i2
How to restore the pre-built system:
       ① Install VMware Workstation, extract the downloaded Tensorflow_Ubuntu16.04.rar, and open it with VMware Workstation, as shown in the figure.
       ② Login password: 111111

A dual-boot Ubuntu install is recommended, because the virtual machine can be a bit laggy. If you are using the pre-built backup system, you can skip straight to the Testing TensorFlow step.
Installing Python 3.5

1. Ubuntu 16.04 ships with Python 2.7; do not uninstall it. Different Python versions can coexist on one system, and removing the system Python would break the desktop environment. Open a terminal with Ctrl+Alt+T and run: python

This shows the current version (type exit() to leave the Python prompt).

2. Run the following commands one by one, pressing Enter after each (type your password if prompted):

$ sudo add-apt-repository ppa:fkrull/deadsnakes

$ sudo apt-get update

$ sudo apt-get install python3.5
$ sudo cp /usr/bin/python /usr/bin/python_bak    # back up the existing link first
$ sudo rm /usr/bin/python                        # remove it
$ sudo ln -s /usr/bin/python3.5 /usr/bin/python  # recreate the symlink so that typing python in the terminal defaults to 3.5
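
To confirm that the python command now points to 3.5, you can run a quick check. This is only a minimal sketch (the filename check_python.py is arbitrary); it simply inspects whichever interpreter the python command launches:

# check_python.py -- verify that `python` now launches Python 3.5
import sys

print(sys.version)                       # full version string of the running interpreter
assert sys.version_info[:2] == (3, 5), \
    "Expected Python 3.5; the /usr/bin/python symlink may not have been updated"

Run it with: $ python check_python.py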

Installing TensorFlow

1. Install python3-pip

$ sudo apt-get install python3-pip 

A stream of output will appear and you will be asked whether to continue; type y and press Enter.

2. Install TensorFlow

$ sudo pip3 install tensorflow

(1) If the install fails with red error text, either retry this step or download the wheel and install it manually:

Manual download: https://pypi.python.org/pypi/tensorflow

Netdisk download: http://pan.baidu.com/s/1eSOQ5zG  password: aqkr

(2) Put the downloaded file in your home directory (you can confirm it is there with ls).

① Install TensorFlow from the wheel; if red error text appears, retry this step.

$ sudo pip3 install tensorflow-1.2.0rc0-cp35-cp35m-manylinux1_x86_64.whl

② On success the output looks like this:
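
Before moving on, you may want to confirm that TensorFlow imports and runs on the CPU. Below is a minimal sketch (the filename hello_tf.py is arbitrary) using the TensorFlow 1.x API that this tutorial targets:

# hello_tf.py -- minimal check that the TensorFlow 1.x install works
import tensorflow as tf

print(tf.__version__)                  # e.g. 1.2.0rc0 if the wheel above was used

hello = tf.constant('Hello, TensorFlow!')
sess = tf.Session()                    # a plain CPU session; no GPU is required
print(sess.run(hello))
sess.close()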


Installing the Komodo editor (similar to Notepad++ on Windows)

1. Download
Netdisk download: http://pan.baidu.com/s/1i4HIAhf  password: e7js
Official site: https://www.activestate.com/komodo-edit
2. Put the downloaded file in your home directory and extract it (right-click the file and choose "Extract Here"); you can confirm it is there with ls.

① Enter the Komodo-Edit-10.2.1-17670-linux-x86_64 folder:

$ cd Komodo-Edit-10.2.1-17670-linux-x86_64/

② Run  sudo ./install.sh  to install Komodo Edit. Once it finishes, search for "Komodo" in the launcher to open the editor.


Testing TensorFlow

The code below (including the MNIST dataset) can be downloaded from the netdisk: http://pan.baidu.com/s/1chFs8u  password: ikh7

1.tensorboard.py


"""
Please note, this code is only for python 3+. If you are using python 2+, please modify the code accordingly.
"""
from __future__ import print_function
import tensorflow as tf
import numpy as np
 
 
def add_layer(inputs, in_size, out_size, n_layer, activation_function=None):
    # add one more layer and return the output of this layer
    layer_name = 'layer%s' % n_layer
    with tf.name_scope(layer_name):
        with tf.name_scope('weights'):
            Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')
            tf.summary.histogram(layer_name + '/weights', Weights)
        with tf.name_scope('biases'):
            biases = tf.Variable(tf.zeros([1, out_size]) + 0.1, name='b')
            tf.summary.histogram(layer_name + '/biases', biases)
        with tf.name_scope('Wx_plus_b'):
            Wx_plus_b = tf.add(tf.matmul(inputs, Weights), biases)
        if activation_function is None:
            outputs = Wx_plus_b
        else:
            outputs = activation_function(Wx_plus_b, )
        tf.summary.histogram(layer_name + '/outputs', outputs)
    return outputs
 
 
# Make up some real data
x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
noise = np.random.normal(0, 0.05, x_data.shape)
y_data = np.square(x_data) - 0.5 + noise
 
# define placeholder for inputs to network
with tf.name_scope('inputs'):
    xs = tf.placeholder(tf.float32, [None, 1], name='x_input')
    ys = tf.placeholder(tf.float32, [None, 1], name='y_input')
 
# add hidden layer
l1 = add_layer(xs, 1, 10, n_layer=1, activation_function=tf.nn.relu)
# add output layer
prediction = add_layer(l1, 10, 1, n_layer=2, activation_function=None)
 
# the error between prediction and real data
with tf.name_scope('loss'):
    loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction),
                                        reduction_indices=[1]))
    tf.summary.scalar('loss', loss)
 
with tf.name_scope('train'):
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)
 
sess = tf.Session()
merged = tf.summary.merge_all()
 
writer = tf.summary.FileWriter("logs/", sess.graph)
 
init = tf.global_variables_initializer()
sess.run(init)
 
for i in range(1000):
    sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
    if i % 50 == 0:
        result = sess.run(merged,
                          feed_dict={xs: x_data, ys: y_data})
        writer.add_summary(result, i)
 
# direct to the local dir and run this in terminal:
# $ tensorboard --logdir logs

① Create a tensorflow folder in your home directory, put tensorboard.py into it, and run:

$ cd tensorflow/
$ python tensorboard.py 

② A logs folder will now appear inside the tensorflow folder. Then run the following command to get the URL for viewing TensorBoard:

$ tensorboard --logdir logs

The output looks like this:

③ Open http://ubuntu:6006 (the address printed in the terminal) to view the results, for example the loss curve:


④ The computation graph of the network:

⑤ How the weights and biases of each layer change over training:


2.CNN.py


 """
Please note, this code is only for python 3+. If you are using python 2+, please modify the code accordingly.
"""
from __future__ import print_function
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
# number 1 to 10 data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
 
def compute_accuracy(v_xs, v_ys):
    global prediction
    y_pre = sess.run(prediction, feed_dict={xs: v_xs, keep_prob: 1})
    correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(v_ys,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    result = sess.run(accuracy, feed_dict={xs: v_xs, ys: v_ys, keep_prob: 1})
    return result
 
def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)
 
def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
 
def conv2d(x, W):
    # stride [1, x_movement, y_movement, 1]
    # Must have strides[0] = strides[3] = 1
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
 
def max_pool_2x2(x):
    # stride [1, x_movement, y_movement, 1]
    return tf.nn.max_pool(x, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
 
# define placeholder for inputs to network
xs = tf.placeholder(tf.float32, [None, 784])/255.   # 28x28
ys = tf.placeholder(tf.float32, [None, 10])
keep_prob = tf.placeholder(tf.float32)
x_image = tf.reshape(xs, [-1, 28, 28, 1])
# print(x_image.shape)  # [n_samples, 28,28,1]
 
## conv1 layer ##
W_conv1 = weight_variable([5,5, 1,32]) # patch 5x5, in size 1, out size 32
b_conv1 = bias_variable([32])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1) # output size 28x28x32
h_pool1 = max_pool_2x2(h_conv1)                                         # output size 14x14x32
 
## conv2 layer ##
W_conv2 = weight_variable([5,5, 32, 64]) # patch 5x5, in size 32, out size 64
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2) # output size 14x14x64
h_pool2 = max_pool_2x2(h_conv2)                                         # output size 7x7x64
 
## fc1 layer ##
W_fc1 = weight_variable([7*7*64, 1024])
b_fc1 = bias_variable([1024])
# [n_samples, 7, 7, 64] ->> [n_samples, 7*7*64]
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
 
## fc2 layer ##
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
prediction = tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)
 
 
# the error between prediction and real data
cross_entropy = tf.reduce_mean(-tf.reduce_sum(ys * tf.log(prediction),
                                              reduction_indices=[1]))       # loss
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
 
sess = tf.Session()
# important step
# tf.initialize_all_variables() is no longer valid from
# 2017-03-02 if using tensorflow >= 0.12
if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
    init = tf.initialize_all_variables()
else:
    init = tf.global_variables_initializer()
sess.run(init)
 
for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys, keep_prob: 0.5})
    if i % 50 == 0:
        print(compute_accuracy(
            mnist.test.images, mnist.test.labels))
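
The output-size comments in CNN.py follow from how padding='SAME' and the pooling stride behave: a stride-1 convolution with 'SAME' padding keeps the spatial size, and each max_pool_2x2 halves it. A small sketch of that arithmetic (separate from the network above), using the fact that with 'SAME' padding the output size is ceil(input size / stride):

# Sketch: how 28x28 becomes 14x14 and then 7x7 in CNN.py.
import math

def same_padding_out(size, stride):
    # With padding='SAME', output size = ceil(input size / stride).
    return math.ceil(size / stride)

size = 28
size = same_padding_out(size, 1)   # conv1, stride 1        -> 28
size = same_padding_out(size, 2)   # max_pool_2x2, stride 2 -> 14
size = same_padding_out(size, 1)   # conv2, stride 1        -> 14
size = same_padding_out(size, 2)   # max_pool_2x2, stride 2 -> 7
print(size)                        # 7, which is why fc1 expects 7*7*64 inputs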

3.RNN.py


"""
This code is a modified version of the code from this link:
https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
His code is a very good one for RNN beginners. Feel free to check it out.
"""
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
 
# set random seed for comparing the two result calculations
tf.set_random_seed(1)
 
# this is data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
 
# hyperparameters
lr = 0.001      #learning rate
training_iters = 100000
batch_size = 128
 
n_inputs = 28   # MNIST data input (img shape: 28*28)
n_steps = 28    # time steps
n_hidden_units = 128   # neurons in hidden layer
n_classes = 10      # MNIST classes (0-9 digits)
 
# tf Graph input
x = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_classes])
 
# Define weights
weights = {
    # (28, 128)
    'in': tf.Variable(tf.random_normal([n_inputs, n_hidden_units])),
    # (128, 10)
    'out': tf.Variable(tf.random_normal([n_hidden_units, n_classes]))
}
biases = {
    # (128, )
    'in': tf.Variable(tf.constant(0.1, shape=[n_hidden_units, ])),
    # (10, )
    'out': tf.Variable(tf.constant(0.1, shape=[n_classes, ]))
}
 
 
def RNN(X, weights, biases):
    # hidden layer for input to cell
    ########################################
 
    # transpose the inputs shape from
    # X ==> (128 batch * 28 steps, 28 inputs)
    X = tf.reshape(X, [-1, n_inputs])
 
    # into hidden
    # X_in = (128 batch * 28 steps, 128 hidden)
    X_in = tf.matmul(X, weights['in']) + biases['in']
    # X_in ==> (128 batch, 28 steps, 128 hidden)
    X_in = tf.reshape(X_in, [-1, n_steps, n_hidden_units])
 
    # cell
    ##########################################
 
    # basic LSTM Cell.
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden_units, forget_bias=1.0, state_is_tuple=True)
    else:
        cell = tf.contrib.rnn.BasicLSTMCell(n_hidden_units)
    # lstm cell is divided into two parts (c_state, h_state)
    init_state = cell.zero_state(batch_size, dtype=tf.float32)
 
    # You have 2 options for the following step.
    # 1: tf.nn.rnn(cell, inputs);
    # 2: tf.nn.dynamic_rnn(cell, inputs).
    # If you use option 1, you have to modify the shape of X_in; go and check out this:
    # https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py
    # Here, we go for option 2.
    # dynamic_rnn receives a Tensor shaped (batch, steps, inputs) or (steps, batch, inputs) as X_in.
    # Make sure time_major is set accordingly.
    outputs, final_state = tf.nn.dynamic_rnn(cell, X_in, initial_state=init_state, time_major=False)
 
    # hidden layer for output as the final results
    #############################################
    # results = tf.matmul(final_state[1], weights['out']) + biases['out']
 
    # # or
    # unpack to list [(batch, outputs)..] * steps
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        outputs = tf.unpack(tf.transpose(outputs, [1, 0, 2]))    # states is the last outputs
    else:
        outputs = tf.unstack(tf.transpose(outputs, [1,0,2]))
    results = tf.matmul(outputs[-1], weights['out']) + biases['out']    # shape = (128, 10)
 
    return results
 
 
pred = RNN(x, weights, biases)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
train_op = tf.train.AdamOptimizer(lr).minimize(cost)
 
correct_pred = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
 
with tf.Session() as sess:
    # tf.initialize_all_variables() is no longer valid from
    # 2017-03-02 if using tensorflow >= 0.12
    if int((tf.__version__).split('.')[1]) < 12 and int((tf.__version__).split('.')[0]) < 1:
        init = tf.initialize_all_variables()
    else:
        init = tf.global_variables_initializer()
    sess.run(init)
    step = 0
    while step * batch_size < training_iters:
        batch_xs, batch_ys = mnist.train.next_batch(batch_size)
        batch_xs = batch_xs.reshape([batch_size, n_steps, n_inputs])
        sess.run([train_op], feed_dict={
            x: batch_xs,
            y: batch_ys,
        })
        if step % 20 == 0:
            print(sess.run(accuracy, feed_dict={
            x: batch_xs,
            y: batch_ys,
            }))
        step += 1
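
Each 784-pixel MNIST image is fed to the LSTM as 28 time steps of 28 pixels per step; the batch_xs.reshape([batch_size, n_steps, n_inputs]) call in the training loop does exactly that. A standalone sketch of the shapes involved, using random data as a stand-in for mnist.train.next_batch():

# Sketch: turning a flat MNIST batch into (batch, steps, inputs) for dynamic_rnn.
import numpy as np

batch_size, n_steps, n_inputs = 128, 28, 28
flat_batch = np.random.rand(batch_size, 784)              # stand-in for a real MNIST batch
sequences = flat_batch.reshape(batch_size, n_steps, n_inputs)
print(sequences.shape)   # (128, 28, 28): each image is read row by row, one row per time step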

PS: For a detailed walkthrough of the code above, see: https://morvanzhou.github.io/tutorials/machine-learning/tensorflow/
or download the accompanying videos from the netdisk: http://pan.baidu.com/s/1i57P1hN  password: rxgh. All of the code above was written while following those video tutorials.
---------------------
Author: Wizen123
Source: CSDN
Original article: https://blog.csdn.net/wizen641372472/article/details/72675549
Copyright notice: this is the blogger's original article; please include a link to the original when reposting.