【讀書1】【2017】MATLAB與深度學習——代價函式比較(2)

如果你覺得很難趕上學習進度,不要氣餒。

If you had a hard time catching on, don't be discouraged.

事實上,在研究深度學習時,理解反向傳播演算法並不是一個至關重要的因素。

Actually, understanding the backpropagation algorithm is not a vital factor when studying and developing Deep Learning.

由於大多數深度學習庫已經包含了相應的演算法,我們只需要使用它們。

As most of the Deep Learning libraries already include these algorithms, we can just use them.

振作起來!

Cheer up!

深度學習只是其中一個章節而已。

Deep Learning is just one chapter away.

小結(Summary)

本章涵蓋以下概念:

This chapter covered the following concepts:

多層神經網路不能使用增量規則進行訓練,它應該使用反向傳播演算法進行訓練,它也被用作深度學習的學習規則。

The multi-layer neural network cannot be trained using the delta rule; it should be trained using the back-propagation algorithm, which is also employed as the learning rule of Deep Learning.

反向傳播演算法定義了隱藏層誤差,因為它從輸出層向後傳播輸出誤差。

The back-propagation algorithm defines the hidden layer error as it propagates the output error backward from the output layer.

一旦獲得隱藏層誤差,每個層的權重使用增量規則進行調整。

Once the hidden layer error is obtained, the weights of every layer are adjusted using the delta rule.

反向傳播演算法的重要性在於它提供了一種系統的方法來定義隱藏節點的誤差。

The importance of the backpropagation algorithm is that it provides a systematic method to define the error of the hidden node.
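The three points above — propagate the output error backward to define the hidden-layer error, then adjust every layer's weights with the delta rule — can be sketched in code. The book works in MATLAB; what follows is a hedged Python/NumPy equivalent trained on the XOR data, where the network size, learning rate, and epoch count are illustrative assumptions rather than the book's listing:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
D = np.array([0, 1, 1, 0], dtype=float)          # XOR targets

W1 = rng.uniform(-1, 1, size=(4, 2))             # input -> hidden weights
W2 = rng.uniform(-1, 1, size=(1, 4))             # hidden -> output weights
alpha = 0.9                                      # learning rate (assumed)

for epoch in range(10000):
    for x, d in zip(X, D):
        # forward pass
        y1 = sigmoid(W1 @ x)                     # hidden layer output
        y = sigmoid(W2 @ y1)                     # network output

        # output delta: sigmoid derivative times the output error
        e = d - y
        delta = y * (1 - y) * e

        # hidden-layer error: the output delta propagated backward
        e1 = W2.T @ delta
        delta1 = y1 * (1 - y1) * e1

        # delta-rule weight adjustment for every layer
        W2 += alpha * np.outer(delta, y1)
        W1 += alpha * np.outer(delta1, x)

print(np.round(sigmoid(W2 @ sigmoid(W1 @ X.T)).flatten(), 3))
```

After training, the four outputs should sit close to the XOR targets 0, 1, 1, 0. The key structural point is the `W2.T @ delta` line: the hidden layer has no target of its own, so its error is manufactured from the layer above.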

單層神經網路僅適用於線性可分問題,而大多數實際問題是線性不可分的。

The single-layer neural network is applicable only to linearly separable problems, and most practical problems are linearly inseparable.

多層神經網路能夠模擬線性不可分問題。

The multi-layer neural network is capable of modeling linearly inseparable problems.

在反向傳播演算法中有多種型別的權重調整方法。

Many types of weight adjustments are available in the backpropagation algorithm.

研究多種權重調整方法是為了追求更穩定和更快速的學習網路。

The various weight adjustment approaches were developed in pursuit of more stable and faster learning of the network.

這些特點對深層次的深度學習尤其有益。

These characteristics are particularly beneficial for hard-to-learn Deep Learning.

代價函式與神經網路的輸出誤差有關,並與該誤差成正比。

The cost function addresses the output error of the neural network and is proportional to the error.

最近,交叉熵得到了廣泛的應用。

Cross entropy has been widely used in recent applications.

在大多數情況下,交叉熵驅動的學習規則能夠產生更好的效能。

In most cases, the cross entropy-driven learning rules are known to yield better performance.

神經網路的學習規則根據代價函式和啟用函式變化。

The learning rule of the neural network varies depending on the cost function and the activation function.

具體來說,輸出節點的增量計算方法發生了變化。

Specifically, the delta calculation of the output node is changed.
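The change in the output-node delta can be made concrete. For a sigmoid output node, the sum-of-squares cost keeps the sigmoid derivative in the delta, while the cross-entropy cost cancels it, leaving the error itself. The following Python sketch uses made-up numbers for the weighted sum and target, purely for illustration:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

v = 0.5                      # illustrative weighted sum of the output node
d = 1.0                      # illustrative target value
y = sigmoid(v)               # sigmoid output
e = d - y                    # output error

# Sum-of-squares cost: the sigmoid derivative y*(1-y) scales the delta
delta_sse = y * (1 - y) * e

# Cross-entropy cost: the derivative cancels, leaving the error itself
delta_ce = e
```

Because `y * (1 - y)` is at most 0.25, the sum-of-squares delta shrinks whenever the output saturates; the cross-entropy delta does not, which is one reason cross-entropy-driven rules tend to learn faster.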

作為克服過擬合的方法之一,正則化的具體實現方式是在代價函式中新增權重項之和。

Regularization, which is one of the approaches used to overcome overfitting, is implemented by adding the sum of the weights to the cost function.
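The regularized cost described above can be sketched as follows; the example values for the targets, outputs, weights, and the regularization strength `lam` are illustrative assumptions, and the penalty shown is the common sum-of-squared-weights (L2) form:

```python
import numpy as np

def cross_entropy(d, y):
    # Cross-entropy cost over the output nodes
    return -np.sum(d * np.log(y) + (1 - d) * np.log(1 - y))

d = np.array([1.0, 0.0])                     # illustrative targets
y = np.array([0.8, 0.3])                     # illustrative network outputs
weights = np.array([0.5, -1.2, 0.7])         # illustrative weights
lam = 0.01                                   # regularization strength

J = cross_entropy(d, y)                      # original cost
J_reg = J + (lam / 2.0) * np.sum(weights**2) # cost with the weight penalty
```

Since the penalty grows with the magnitudes of the weights, minimizing `J_reg` pushes the network toward smaller weights, which is what discourages overfitting.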

第4章 神經網路與分類(Neural Network and Classification)

如第一章所述,需要監督學習的主要機器學習應用是分類和迴歸。

As addressed in Chapter 1, the primary Machine Learning applications that require supervised learning are classification and regression.

分類用於確定資料所歸屬的類別。

Classification is used to determine the group to which the data belongs.

分類的典型應用是垃圾郵件過濾和字元識別。

Some typical applications of classification are spam mail filtering and character recognition.

而回歸是根據已知的資料進行推斷或估計某個未知量。

In contrast, regression infers or estimates an unknown value from the given data.

——本文譯自Phil Kim所著的《Matlab Deep Learning》

— Translated from MATLAB Deep Learning by Phil Kim.