[Reading 1] [2017] MATLAB and Deep Learning — Implementing the Batch Method (2)

This indicates that the batch method requires more time to train the neural network to reach a level of accuracy similar to that of the SGD method.

In other words, the batch method learns slowly.

Comparison of the SGD and the Batch

In this section, we investigate the learning speeds of the SGD and the batch methods in practice.

The errors of the two methods are compared at the end of the training process over the entire training data.

The following program listing shows the SGDvsBatch.m file, which compares the mean errors of the two methods.

To ensure a fair comparison, the weights of both methods are initialized with the same values.

clear all

X = [ 0 0 1;
      0 1 1;
      1 0 1;
      1 1 1;
    ];

D = [ 0 0 1 1 ];

E1 = zeros(1000, 1);
E2 = zeros(1000, 1);

W1 = 2*rand(1, 3) - 1;
W2 = W1;

for epoch = 1:1000             % train
  W1 = DeltaSGD(W1, X, D);
  W2 = DeltaBatch(W2, X, D);

  es1 = 0;
  es2 = 0;
  N   = 4;
  for k = 1:N
    x = X(k, :)';
    d = D(k);

    v1  = W1*x;
    y1  = Sigmoid(v1);
    es1 = es1 + (d - y1)^2;

    v2  = W2*x;
    y2  = Sigmoid(v2);
    es2 = es2 + (d - y2)^2;
  end
  E1(epoch) = es1 / N;
  E2(epoch) = es2 / N;
end

plot(E1, 'r')
hold on
plot(E2, 'b:')
xlabel('Epoch')
ylabel('Average of Training error')
legend('SGD', 'Batch')
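The script above calls Sigmoid, DeltaSGD, and DeltaBatch, which were defined in the earlier sections of this series. For convenience, sketches consistent with those earlier listings are repeated here (the learning rate alpha = 0.9 follows the book's choice; the update rules are the delta rule applied per sample for SGD and averaged over all samples for batch):

```matlab
function y = Sigmoid(x)
  % Logistic sigmoid activation
  y = 1 / (1 + exp(-x));
end

function W = DeltaSGD(W, X, D)
  % Delta rule, SGD: update the weights after every training sample
  alpha = 0.9;
  N = 4;
  for k = 1:N
    x = X(k, :)';
    d = D(k);

    v = W*x;
    y = Sigmoid(v);

    e     = d - y;
    delta = y*(1-y)*e;

    dW = alpha*delta*x;      % immediate update
    W  = W + dW';
  end
end

function W = DeltaBatch(W, X, D)
  % Delta rule, batch: average the updates over all samples,
  % then apply them once per epoch
  alpha = 0.9;
  dWsum = zeros(3, 1);
  N = 4;
  for k = 1:N
    x = X(k, :)';
    d = D(k);

    v = W*x;
    y = Sigmoid(v);

    e     = d - y;
    delta = y*(1-y)*e;

    dWsum = dWsum + alpha*delta*x;
  end
  dWavg = dWsum / N;         % one averaged update per epoch
  W = W + dWavg';
end
```

Each function should be saved in its own .m file (Sigmoid.m, DeltaSGD.m, DeltaBatch.m) so that SGDvsBatch.m can call them.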

This program trains the neural network 1,000 times using each function, DeltaSGD and DeltaBatch.

At each epoch, it feeds the training data into the neural network and calculates the mean square errors (E1, E2) of the output.

Once the program completes the 1,000 training epochs, it generates a graph showing the mean error at each epoch.

As Figure 2-20 shows, the SGD yields a faster reduction of the learning error than the batch method; that is, the SGD learns faster.


Figure 2-20. The SGD method learns faster than the batch method

— Translated from *MATLAB Deep Learning* by Phil Kim
