[Reading 1][2017] MATLAB and Deep Learning - Delta Rule Example (1)

As the neural network is single-layered and the training data is simple, the code is not complicated.

Once you step through the code, you will clearly see the difference between the SGD code and the batch code.

As previously addressed, SGD trains on each data point immediately and does not require accumulation or averaging of the weight updates.

Therefore, the code for SGD is simpler than that of the batch method.
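The contrast can be sketched in Python with NumPy (a hypothetical illustration, not the book's code; the function names `sgd_epoch` and `batch_epoch` are mine, and both assume the single-layer sigmoid network used in this chapter):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def sgd_epoch(W, X, D, alpha=0.9):
    # SGD: apply each weight update immediately, point by point
    for x, d in zip(X, D):
        y = sigmoid(W @ x)
        delta = y * (1 - y) * (d - y)
        W = W + alpha * delta * x          # update right away
    return W

def batch_epoch(W, X, D, alpha=0.9):
    # Batch: accumulate the updates, average, apply once per epoch
    dW = np.zeros_like(W)
    for x, d in zip(X, D):
        y = sigmoid(W @ x)
        delta = y * (1 - y) * (d - y)
        dW += alpha * delta * x            # accumulate only
    return W + dW / len(X)                 # one averaged update
```

Starting from the same weights, the two functions generally produce different results after one epoch, because SGD's later updates already see the effect of its earlier ones.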

Implementation of the SGD Method

The function DeltaSGD implements the SGD method of the delta rule given by Equation 2.7.

It takes the weights and training data of the neural network and returns the newly trained weights.

W = DeltaSGD(W, X, D)

where W is the argument that carries the weights, and X and D carry the inputs and correct outputs of the training data, respectively.

The training data is divided into two variables for convenience.

The following listing shows the DeltaSGD.m file, which implements the DeltaSGD function.

function W = DeltaSGD(W, X, D)
  alpha = 0.9;
  N = 4;
  for k = 1:N
    x = X(k, :)';
    d = D(k);
    v = W*x;
    y = Sigmoid(v);
    e = d - y;
    delta = y*(1-y)*e;
    dW = alpha*delta*x;  % delta rule
    W(1) = W(1) + dW(1);
    W(2) = W(2) + dW(2);
    W(3) = W(3) + dW(3);
  end
end

The code proceeds as follows: take one of the data points and calculate the output, y.

Calculate the difference between this output and the correct output, d.

Calculate the weight update, dW, according to the delta rule.

Using this weight update, adjust the weights of the neural network.

Repeat the process N times, once for each of the N training data points.
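The steps above can be expressed as a minimal Python/NumPy sketch (a hypothetical one-to-one translation of the MATLAB listing, not the book's code; note that Python indexes rows from 0 rather than 1):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def delta_sgd(W, X, D, alpha=0.9):
    """One epoch of SGD training with the delta rule."""
    for k in range(X.shape[0]):
        x = X[k, :]                 # take one data point
        d = D[k]                    # its correct output
        y = sigmoid(W @ x)          # calculate the output, y
        e = d - y                   # error against the correct output
        delta = y * (1 - y) * e
        W = W + alpha * delta * x   # delta rule: apply dW immediately
    return W
```

Each pass through the loop updates the weights before the next data point is seen, which is exactly the immediate-update behavior that distinguishes SGD from the batch method.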

In this way, the function DeltaSGD trains the neural network once per epoch.

The function Sigmoid that DeltaSGD calls is listed next.

It is the straightforward definition of the sigmoid function, implemented in the Sigmoid.m file.

As the code is very simple, we skip further discussion of it.

function y = Sigmoid(x)
  y = 1 / (1 + exp(-x));
end

The following listing shows the TestDeltaSGD.m file, which tests the DeltaSGD function.

This program calls the function DeltaSGD, trains it 10,000 times, and displays the output of the trained neural network given all the training data as input.
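A hypothetical Python/NumPy sketch of such a test driver is shown below (the training data here is an illustrative placeholder, not necessarily the book's exact listing; the structure follows the description above):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def delta_sgd(W, X, D, alpha=0.9):
    # One epoch of SGD training with the delta rule
    for k in range(X.shape[0]):
        x, d = X[k, :], D[k]
        y = sigmoid(W @ x)
        delta = y * (1 - y) * (d - y)
        W = W + alpha * delta * x
    return W

# Placeholder training data: four points, last column is a bias input
X = np.array([[0, 0, 1],
              [0, 1, 1],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
D = np.array([0, 0, 1, 1], dtype=float)

W = 2 * np.random.rand(3) - 1      # random initial weights in [-1, 1]
for _ in range(10000):             # train for 10,000 epochs
    W = delta_sgd(W, X, D)

for k in range(X.shape[0]):
    print(sigmoid(W @ X[k, :]))    # outputs should approach D
```

After training, the printed outputs should be close to the correct outputs in D, confirming that the delta-rule SGD has converged.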

-- This article is translated from "MATLAB Deep Learning" by Phil Kim.
