
[Reading Notes 1] [2017] MATLAB and Deep Learning: Dropout (1)

Dropout

This section presents the code that implements dropout.

We use the sigmoid activation function for the hidden nodes.
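Sigmoid itself is not listed in this section, and neither is the Softmax applied at the output layer of the code below. Minimal sketches consistent with how the listing calls them (each saved in its own .m file) might look like this:

function y = Sigmoid(x)
    % Element-wise logistic sigmoid
    y = 1 ./ (1 + exp(-x));
end

function y = Softmax(x)
    % Softmax of a column vector; subtracting max(x) improves
    % numerical stability without changing the result
    ex = exp(x - max(x));
    y = ex / sum(ex);
end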

The main purpose of this code is to show how dropout is programmed; the training data may be too simple for us to perceive any substantial improvement in overfitting.

The function DeepDropout trains the example neural network using the back-propagation algorithm.

It takes the neural network's weights and the training data and returns the trained weights.

[W1, W2, W3, W4] = DeepDropout(W1, W2, W3, W4, X, D)

where the variables are exactly the same as those of the function DeepReLU in the previous section.
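DeepDropout performs a single pass over the training data, so it is typically called repeatedly from a driver script. The sketch below is hypothetical: the layer sizes, weight initialization, epoch count, and placeholder data are assumptions for illustration, not taken from this section:

% Placeholder data: substitute the real 5x5 training images and
% one-hot target matrix used in the book's examples
X = rand(5, 5, 5);              % 5x5xN array of input images
D = eye(5);                     % one one-hot target row per image

W1 = 2*rand(20, 25) - 1;        % assumed layer sizes: 25-20-20-20-5
W2 = 2*rand(20, 20) - 1;
W3 = 2*rand(20, 20) - 1;
W4 = 2*rand( 5, 20) - 1;

for epoch = 1:10000             % each call is one pass over the data
    [W1, W2, W3, W4] = DeepDropout(W1, W2, W3, W4, X, D);
end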

The following listing shows the DeepDropout.m file, which implements the DeepDropout function.

function [W1, W2, W3, W4] = DeepDropout(W1, W2, W3, W4, X, D)
    alpha = 0.01;                        % learning rate
    N = 5;                               % number of training samples

    for k = 1:N
        x = reshape(X(:, :, k), 25, 1);  % k-th input image as a 25x1 column vector

        v1 = W1*x;                       % first hidden layer
        y1 = Sigmoid(v1);
        y1 = y1 .* Dropout(y1, 0.2);     % drop out 20% of the first hidden layer

        v2 = W2*y1;                      % second hidden layer
        y2 = Sigmoid(v2);
        y2 = y2 .* Dropout(y2, 0.2);

        v3 = W3*y2;                      % third hidden layer
        y3 = Sigmoid(v3);
        y3 = y3 .* Dropout(y3, 0.2);

        v = W4*y3;                       % output layer
        y = Softmax(v);

        d = D(k, :)';                    % correct output for the k-th sample
        e = d - y;
        delta = e;                       % output-layer delta (softmax with cross-entropy)

        e3 = W4'*delta;                  % back-propagate the error layer by layer
        delta3 = y3.*(1-y3).*e3;

        e2 = W3'*delta3;
        delta2 = y2.*(1-y2).*e2;

        e1 = W2'*delta2;
        delta1 = y1.*(1-y1).*e1;

        dW4 = alpha*delta*y3';           % delta-rule weight updates
        W4 = W4 + dW4;

        dW3 = alpha*delta3*y2';
        W3 = W3 + dW3;

        dW2 = alpha*delta2*y1';
        W2 = W2 + dW2;

        dW1 = alpha*delta1*x';
        W1 = W1 + dW1;
    end
end

This code imports the training data, calculates the weight updates (dW1, dW2, dW3, and dW4) using the delta rule, and adjusts the weights of the neural network.
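Abstracting the loop, every layer repeats the same three-step pattern; in MATLAB-style pseudocode, with y_0 taken to be the input x:

    e_i     = W_(i+1)' * delta_(i+1);      % back-propagate the error
    delta_i = y_i .* (1 - y_i) .* e_i;     % multiply by the sigmoid derivative
    W_i     = W_i + alpha * delta_i * y_(i-1)';   % delta-rule update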

The process above is identical to that of the previous training codes.

It differs from the previous ones in that, once a hidden node's output has been calculated by the Sigmoid activation function, the Dropout function modifies the final output of that node.

For example, the output of the first hidden layer is calculated as:

y1 = Sigmoid(v1);

y1 = y1 .* Dropout(y1, 0.2);

Executing these lines switches the outputs of 20% of the first hidden layer's nodes to 0; in other words, it drops out 20% of the first hidden layer's nodes.
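The Dropout function itself is not part of this listing. Given how it is used here (element-wise multiplication with the layer output), it plausibly returns a mask that is 0 for the dropped nodes and 1/(1-ratio) for the survivors, so that the layer's expected output is preserved. A minimal sketch under that assumption:

function ym = Dropout(y, ratio)
    % Build a mask the same size as y: a fraction `ratio` of its elements
    % are 0 (dropped nodes), the rest are 1/(1-ratio) (survivors, rescaled
    % so the expected output of the layer is unchanged)
    [m, n] = size(y);
    ym = zeros(m, n);
    num = round(m*n*(1-ratio));     % number of surviving nodes
    idx = randperm(m*n, num);       % choose the survivors at random
    ym(idx) = 1 / (1-ratio);
end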

This article is translated from MATLAB Deep Learning by Phil Kim.
