
Deep Learning Notes: The TensorFlow Deep Learning Framework (3)

I. Learning websites:

II. Tutorial:

Deep MNIST for Experts:
TensorFlow is a powerful library for doing large-scale numerical computation. One of the tasks at which it excels is implementing and training deep neural networks. In this tutorial we will learn the basic building blocks of a TensorFlow model while constructing a deep convolutional MNIST classifier.

This introduction assumes familiarity with neural networks and the MNIST dataset. If you don't have a background with them, check out the introduction for beginners. Be sure to install TensorFlow before starting.

About this tutorial:
The first part of this tutorial explains what is happening in the mnist_softmax.py code, which is a basic implementation of a TensorFlow model. The second part shows some ways to improve the accuracy.

You can copy and paste each code snippet from this tutorial into a Python environment, or you can choose to just read through the code.

What we will accomplish in this tutorial:

a) Create a softmax regression function that is a model for recognizing MNIST digits, based on looking at every pixel in the image

b) Use TensorFlow to train the model to recognize digits by having it "look" at thousands of examples (and run our first TensorFlow session to do so)

c) Check the model's accuracy with our test data

d) Build, train, and test a multilayer convolutional neural network to improve the results

Setup:
Before we create our model, we will first load the MNIST dataset, and start a TensorFlow session.

Load MNIST Data:
If you are copying and pasting in the code from this tutorial, start here with these two lines, which will download and read in the data automatically:

from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

Here mnist is a lightweight class which stores the training, validation, and testing sets as NumPy arrays. It also provides a function for iterating through data minibatches, which we will use below.
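For example, one call to next_batch returns a tuple of images and one-hot labels; the shapes shown in the comments assume the batch size of 100 used later in this tutorial:

batch = mnist.train.next_batch(100)
print(batch[0].shape)   # (100, 784) -- flattened 28x28 images
print(batch[1].shape)   # (100, 10)  -- one-hot labels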

Start TensorFlow InteractiveSession:
TensorFlow relies on a highly efficient C++ backend to do its computation. The connection to this backend is called a session. The common usage for TensorFlow programs is to first create a graph and then launch it in a session.

Here we instead use the convenient InteractiveSession class, which makes TensorFlow more flexible about how you structure your code. It allows you to interleave operations which build a computation graph with ones that run the graph. This is particularly convenient when working in interactive contexts like IPython. If you are not using an InteractiveSession, then you should build the entire computation graph before starting a session and launching the graph.

import tensorflow as tf
sess = tf.InteractiveSession()

Computation Graph:
To do efficient numerical computing in Python, we typically use libraries like NumPy that do expensive operations such as matrix multiplication outside Python, using highly efficient code implemented in another language. Unfortunately, there can still be a lot of overhead from switching back to Python every operation. This overhead is especially bad if you want to run computations on GPUs or in a distributed manner, where there can be a high cost to transferring data.

TensorFlow also does its heavy lifting outside Python, but it takes things a step further to avoid this overhead. Instead of running a single expensive operation independently from Python, TensorFlow lets us describe a graph of interacting operations that run entirely outside Python. This approach is similar to that used in Theano or Torch.

The role of the Python code is therefore to build this external computation graph, and to dictate which parts of the computation graph should be run. See the Computation Graph section (https://www.tensorflow.org/versions/r0.12/get_started/basic_usage.html#the-computation-graph) of Basic Usage (https://www.tensorflow.org/versions/r0.12/get_started/basic_usage.html) for more detail.
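To make the split between building and running the graph concrete, here is a minimal sketch (it assumes the InteractiveSession sess created above):

node1 = tf.constant(2.0)
node2 = tf.constant(3.0)
prod = node1 * node2      # only adds a multiply node to the graph; nothing is computed yet
print(sess.run(prod))     # the C++ backend executes the graph and prints 6.0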

Build a Softmax Regression Model:
In this section we will build a softmax regression model with a single linear layer. In the next section, we will extend this to the case of softmax regression with a multilayer convolutional network.

Placeholders:
We start building the computation graph by creating nodes for the input images and target output classes.

x = tf.placeholder(tf.float32,shape=[None,784])
y_ = tf.placeholder(tf.float32,shape=[None,10])

Here x and y_ aren't specific values. Rather, they are each a placeholder, a value that we'll input when we ask TensorFlow to run a computation.

The input images x will consist of a 2d tensor of floating point numbers. Here we assign it a shape of [None, 784], where 784 is the dimensionality of a single flattened 28 by 28 pixel MNIST image, and None indicates that the first dimension, corresponding to the batch size, can be of any size. The target output classes y_ will also consist of a 2d tensor, where each row is a one-hot 10-dimensional vector indicating which digit class (zero through nine) the corresponding MNIST image belongs to.

The shape argument to placeholder is optional, but it allows TensorFlow to automatically catch bugs stemming from inconsistent tensor shapes.
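For instance (a hypothetical illustration), feeding x an array whose second dimension is not 784 fails immediately with a shape error instead of silently producing wrong results:

import numpy as np

bad_batch = np.zeros((5, 100), dtype=np.float32)   # wrong width: 100 instead of 784
try:
    sess.run(x, feed_dict={x: bad_batch})
except ValueError as e:
    print(e)   # something like: Cannot feed value of shape (5, 100) for Tensor ... (?, 784)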

Variables:
We now define the weights W and biases b for our model. We could imagine treating these like additional inputs, but TensorFlow has an even better way to handle them: Variable. A Variable is a value that lives in TensorFlow's computation graph. It can be used and even modified by the computation. In machine learning applications, one generally has the model parameters be Variables.

W = tf.Variable(tf.zeros([784,10]))
b = tf.Variable(tf.zeros([10]))

We pass the initial value for each parameter in the call to tf.Variable. In this case, we initialize both W and b as tensors full of zeros. W is a 784x10 matrix (because we have 784 input features and 10 outputs) and b is a 10-dimensional vector (because we have 10 classes).

Before Variables can be used within a session, they must be initialized using that session. This step takes the initial values (in this case tensors full of zeros) that have already been specified, and assigns them to each Variable. This can be done for all Variables at once:

sess.run(tf.global_variables_initializer())

Predicted Class and Loss Function:
We can now implement our regression model. It only takes one line! We multiply the vectorized input images x by the weight matrix W, add the bias b, and compute the softmax probabilities for each class.

y = tf.nn.softmax(tf.matmul(x,W)+b)

We can specify a loss function just as easily. Loss indicates how bad the model's prediction was on a single example; we try to minimize that while training across all the examples. Here, our loss function is the cross-entropy between the target and the model's softmax prediction:

cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

Note that tf.reduce_sum adds up the per-class terms -y_ * log(y) for each image, and tf.reduce_mean then averages these per-image cross-entropies over the minibatch, so the loss is a per-example average. This manual formulation can be numerically unstable; in practice tf.nn.softmax_cross_entropy_with_logits is preferred, since it internally applies the softmax to the model's unnormalized predictions (logits) and sums across all classes, with tf.reduce_mean taking the average over these sums.
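For reference, a minimal sketch of that more stable formulation (a drop-in replacement for the softmax and loss definitions above; the keyword-argument form shown here assumes TensorFlow 1.x):

logits = tf.matmul(x, W) + b   # unnormalized predictions; no softmax applied here
y = tf.nn.softmax(logits)      # probabilities, still available for evaluation below

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
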
Train the Model:
Now that we have defined our model and training loss function, it is straightforward to train using TensorFlow. Because TensorFlow knows the entire computation graph, it can use automatic differentiation to find the gradients of the loss with respect to each of the variables. TensorFlow has a variety of built-in optimization algorithms. For this example, we will use steepest gradient descent, with a step length of 0.5, to descend the cross entropy.

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

What TensorFlow actually did in that single line was to add new operations to the computation graph. These operations included ones to compute gradients, compute parameter update steps, and apply update steps to the parameters.
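Equivalently, and purely as an illustration of what gets added to the graph, the one-liner can be split into its two graph-building steps:

optimizer = tf.train.GradientDescentOptimizer(0.5)
grads_and_vars = optimizer.compute_gradients(cross_entropy)  # ops that compute gradients for W and b
train_step = optimizer.apply_gradients(grads_and_vars)       # ops that apply the update steps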

The returned operation train_step, when run, will apply the gradient descent updates to the parameters. Training the model can therefore be accomplished by repeatedly running train_step.

for i in range(1000):
    batch = mnist.train.next_batch(100)
    train_step.run(feed_dict={x:batch[0],y_:batch[1]})

We load 100 training examples in each training iteration. We then run the train_step operation, using feed_dict to replace the placeholder tensors x and y_ with the training examples. Note that you can replace any tensor in your computation graph using feed_dict; it is not restricted to just placeholders.
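As a purely illustrative example of feeding a non-placeholder tensor, the computed prediction y can itself be overridden, bypassing the matmul and softmax entirely (the values here are made up):

import numpy as np

fake_probs = np.zeros((1, 10), dtype=np.float32)
fake_probs[0, 3] = 1.0                                        # pretend the model predicted class 3
print(sess.run(tf.argmax(y, 1), feed_dict={y: fake_probs}))  # prints [3]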

Evaluate the Model:
How well did our model do?

First we'll figure out where we predicted the correct label. tf.argmax is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. For example, tf.argmax(y,1) is the label our model thinks is most likely for each input, while tf.argmax(y_,1) is the true label. We can use tf.equal to check if our prediction matches the truth. (Since the label vectors are one-hot, made up of 0s and a single 1, the index of that 1 is exactly the class label, so matching argmax indices mean a correct prediction.)

correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))

That gives us a list of booleans. To determine what fraction are correct, we cast to floating point numbers and then take the mean. For example, [True, False, True, True] would become [1,0,1,1], which would become 0.75.

accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

Finally, we can evaluate our accuracy on the test data. This should be about 92% correct.

print(accuracy.eval(feed_dict={x:mnist.test.images,y_:mnist.test.labels}))

Build a Multilayer Convolutional Network:
Getting 92% accuracy on MNIST is bad. It's almost embarrassingly bad. In this section, we'll fix that, jumping from a very simple model to something moderately sophisticated: a small convolutional neural network. This will get us to around 99.2% accuracy, which is not state of the art, but respectable.

Weight Initialization

To create this model, we're going to need to create a lot of weights and biases. One should generally initialize weights with a small amount of noise for symmetry breaking, and to prevent 0 gradients. Since we're using ReLU neurons, it is also good practice to initialize them with a slightly positive initial bias to avoid "dead neurons" (neurons whose output is stuck at 0). Instead of doing this repeatedly while we build the model, let's create two handy functions to do it for us.

def weight_variable(shape):
    initial = tf.truncated_normal(shape,stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1,shape=shape)
    return tf.Variable(initial)

Convolution and Pooling

TensorFlow also gives us a lot of flexibility in convolution and pooling operations. How do we handle the boundaries? What is our stride size? In this example, we're always going to choose the vanilla version. Our convolutions use a stride of one and are zero padded so that the output is the same size as the input. Our pooling is plain old max pooling over 2x2 blocks. To keep our code cleaner, let's also abstract those operations into functions.

def conv2d(x,W):
    return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1],padding='SAME')

First Convolutional Layer

We can now implement our first layer. It will consist of convolution, followed by max pooling. The convolution will compute 32 features for each 5x5 patch. Its weight tensor will have a shape of [5, 5, 1, 32]. The first two dimensions are the patch size, the next is the number of input channels, and the last is the number of output channels. We will also have a bias vector with a component for each output channel.
Note: just as an RGB image has 3 channels, you can think of the 32 outputs as 32 feature maps stacked along the channel (depth) axis. A [5, 5, 1, 32] kernel convolves a 1-channel image into a 32-channel one; with stride 1 and the 'SAME' padding defined in conv2d above, a 28x28x1 image becomes 28x28x32 (with 'VALID' padding it would shrink to 24x24x32). A 1x1 kernel such as [1, 1, 1024, 32] turns a 28x28x1024 volume into 28x28x32; that is, 1x1 convolutions reduce the channel dimension by combining the values at the same spatial position across all input channels and re-projecting them into 32 new channels.
Note the tensor layouts expected by tf.nn.conv2d:
a) input shape: [batch, in_height, in_width, in_channels]
b) filter shape: [filter_height, filter_width, in_channels, out_channels]

W_conv1 = weight_variable([5,5,1,32])
b_conv1 = bias_variable([32])

To apply the layer, we first reshape x to a 4d tensor, with the second and third dimensions corresponding to image width and height, and the final dimension corresponding to the number of color channels (1 here because MNIST images are grayscale; an RGB image would have 3).

x_image = tf.reshape(x,[-1,28,28,1])

# tf.reshape(tensor, shape, name=None) reshapes a tensor into the given shape.
# shape is a list that may contain a single -1: that dimension is then inferred
# from the total number of elements (only one -1 is allowed, otherwise the shape
# would be ambiguous). Here -1 becomes the batch size.
#
# Examples from the tf.reshape documentation:
# tensor 't' is [1, 2, 3, 4, 5, 6, 7, 8, 9]      # 't' has shape [9]
# reshape(t, [3, 3]) ==> [[1, 2, 3],
#                         [4, 5, 6],
#                         [7, 8, 9]]
#
# tensor 't' is [[[1, 1, 1],
#                 [2, 2, 2]],
#                [[3, 3, 3],
#                 [4, 4, 4]],
#                [[5, 5, 5],
#                 [6, 6, 6]]]                    # 't' has shape [3, 2, 3]
# (the leading 3 counts the outermost groups of brackets)
#
# tensor 't' is [[[1, 1], [2, 2]],
#                [[3, 3], [4, 4]]]               # 't' has shape [2, 2, 2]
# reshape(t, [2, 4]) ==> [[1, 1, 2, 2],
#                         [3, 3, 4, 4]]
We then convolve x_image with the weight tensor, add the bias, apply the ReLU function, and finally max pool. The max_pool_2x2 method will reduce the image size to 14x14.

h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1)+b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

Second Convolutional Layer:
In order to build a deep network, we stack several layers of this type. The second layer will have 64 features for each 5x5 patch.

W_conv2 = weight_variable([5,5,32,64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1,W_conv2)+b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

Densely Connected Layer

Now that the image size has been reduced to 7x7, we add a fully-connected layer with 1024 neurons to allow processing on the entire image. We reshape the tensor from the pooling layer into a batch of vectors, multiply by a weight matrix, add a bias, and apply a ReLU.
Note on 'SAME' vs 'VALID' padding: with padding='SAME' (used in conv2d above) the output of a convolution has the same spatial size as its input, while 'VALID' would shrink a 5x5 convolution of a 28x28 input to 24x24. Here the convolutions preserve the size, so only the two 2x2 max-pooling layers shrink the image: 28 -> 14 -> 7.

W_fc1 = weight_variable([7*7*64,1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2,[-1,7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1)+b_fc1)

Dropout

To reduce overfitting, we will apply dropout before the readout layer. We create a placeholder for the probability that a neuron's output is kept during dropout. This allows us to turn dropout on during training, and turn it off during testing. TensorFlow's tf.nn.dropout op automatically handles scaling neuron outputs in addition to masking them, so dropout just works without any additional scaling.
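The code for this step is not shown above; it looks like the following (keep_prob is the keep-probability placeholder described in the text):

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)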

Readout Layer

Finally, we add a layer, just like for the one layer softmax regression above.
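A sketch of that readout layer, mapping the 1024 dropout-regularized features (h_fc1_drop from the dropout sketch above) to the 10 digit classes; the loss and training step for this deeper model would then be defined on y_conv, analogous to the softmax model above:

W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2   # unnormalized class scores (logits)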