【Reading 1】【2017】MATLAB and Deep Learning: Deep Learning (1)

It may not be easy to understand why it took so long to add just one additional layer.

It was because the proper learning rule for the multi-layer neural network had not been found.

Since training is the only way for a neural network to store information, an untrainable neural network is useless.

The problem of training the multi-layer neural network was finally solved in 1986, when the back-propagation algorithm was introduced.
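The essence of back-propagation is that the output error is propagated backwards through the layers to yield the weight gradients. A minimal sketch follows (illustrative Python/NumPy, not code from the book, which works in MATLAB; the network shape and input values are arbitrary choices for the demonstration), with the analytic gradients verified against numerical finite differences:

```python
import numpy as np

# Sketch of back-propagation for a two-layer sigmoid network.
# The output error is propagated backwards (delta2 -> delta1) to
# produce the weight gradients, then checked numerically.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(W1, W2, x):
    h = sigmoid(W1 @ x)          # hidden-layer activations
    y = sigmoid(W2 @ h)          # output activations
    return h, y

def loss(W1, W2, x, t):
    _, y = forward(W1, W2, x)
    return 0.5 * float(np.sum((t - y) ** 2))

def backprop(W1, W2, x, t):
    h, y = forward(W1, W2, x)
    delta2 = (y - t) * y * (1 - y)            # output-layer delta
    delta1 = (W2.T @ delta2) * h * (1 - h)    # error propagated backwards
    return delta1 @ x.T, delta2 @ h.T         # dL/dW1, dL/dW2

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))                  # 2 inputs -> 3 hidden units
W2 = rng.normal(size=(1, 3))                  # 3 hidden units -> 1 output
x = np.array([[0.5], [-0.2]])
t = np.array([[1.0]])

g1, _ = backprop(W1, W2, x, t)

# Numerical check: perturb each weight of W1 and difference the loss.
eps = 1e-6
num = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp, Wm = W1.copy(), W1.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num[i, j] = (loss(Wp, W2, x, t) - loss(Wm, W2, x, t)) / (2 * eps)

print(np.max(np.abs(g1 - num)))   # tiny: analytic and numerical gradients agree
```

Training then consists of repeatedly subtracting these gradients (scaled by a learning rate) from the weights.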

The neural network was on stage again.

However, it soon met with another problem.

Its performance on practical problems did not meet expectations.

Of course, there were various attempts to overcome the limitations, including adding hidden layers and adding nodes within the hidden layers.

However, none of them worked.

Many of them yielded even poorer performance.

As the neural network has a very simple architecture and concept, there was not much that could be done to improve it.

Finally, the neural network was written off as having no possibility of improvement, and it was forgotten.

It remained forgotten for about 20 years, until the mid-2000s, when Deep Learning was introduced and opened a new door of research.

It took a while for the deep hidden layers to yield sufficient performance because of the difficulties in training the deep neural network.

Anyway, current Deep Learning technologies yield dazzling levels of performance, outperforming other Machine Learning techniques as well as other neural networks, and they prevail in Artificial Intelligence research.

In summary, the reason the multi-layer neural network took 30 years to solve the problems of the single-layer neural network was the lack of a learning rule, a gap eventually closed by the back-propagation algorithm.

In contrast, the reason another 20 years passed until the introduction of deep neural network-based Deep Learning was poor performance.

Back-propagation training with additional hidden layers often resulted in poorer performance.

Deep Learning provided a solution to this problem.

Improvement of the Deep Neural Network

Despite its outstanding achievements, Deep Learning actually does not have any single critical technology to present.

The innovation of Deep Learning is the result of many small technical improvements.

This section briefly introduces why the deep neural network yielded poor performance and how Deep Learning overcame this problem.

The reason the neural network with deeper layers yielded poorer performance was that the network was not properly trained.
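One standard illustration of this training failure is how quickly the back-propagated error signal shrinks as it crosses sigmoid layers: the sigmoid derivative a(1-a) never exceeds 0.25, so each additional layer scales the delta down by at least a factor of 4. A tiny numeric sketch (illustrative Python/NumPy, not code from the book; the depth of 10 and the random activations are arbitrary choices for the demonstration):

```python
import numpy as np

# Follow an error signal of size 1.0 backwards through 10 sigmoid
# layers. Each layer multiplies it by a*(1-a) <= 0.25, so the
# signal collapses geometrically with depth.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
delta = 1.0
trace = []
for layer in range(10):
    a = sigmoid(rng.normal())     # activation of one unit in this layer
    delta *= a * (1.0 - a)        # chain-rule factor of a sigmoid layer
    trace.append(delta)

print(trace[0], trace[-1])        # the scale collapses with depth
```

With the signal this small, the weights of the early layers barely move during training, which is consistent with the observation above that deeper networks were simply not trained properly.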

Translated from "MATLAB Deep Learning" by Phil Kim.
