
[Reading Notes 1] [2017] MATLAB and Deep Learning: Dropout (4)

Summary

This chapter covered the following topics:

Deep Learning can be simply defined as a Machine Learning technique that employs the deep neural network.

The previous neural networks had a problem where the deeper (more) hidden layers were harder to train and degraded the performance.

Deep Learning solved this problem.

The outstanding achievements of Deep Learning were not the result of a single critical technique but rather of many minor improvements.

The poor performance of the deep neural network is due to the failure of proper training.

There are three major showstoppers: the vanishing gradient, overfitting, and the computational load.

The vanishing gradient problem is greatly improved by employing the ReLU activation function and the cross-entropy-driven learning rule.

Use of the advanced gradient descent method is also beneficial.
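As a rough illustration of why ReLU helps against the vanishing gradient (a minimal MATLAB sketch, not code from the book), compare the derivatives of the sigmoid and ReLU functions:

    % Minimal sketch (not from the book): sigmoid vs. ReLU derivatives.
    % The sigmoid derivative is at most 0.25 and shrinks toward zero for
    % large |x|, so backpropagated gradients fade through many layers;
    % the ReLU derivative stays at 1 for all positive inputs.
    x     = -5:0.1:5;
    sig   = 1 ./ (1 + exp(-x));
    dSig  = sig .* (1 - sig);      % sigmoid derivative
    dRelu = double(x > 0);         % ReLU derivative
    plot(x, dSig, x, dRelu);
    legend('sigmoid derivative', 'ReLU derivative');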

The deep neural network is more vulnerable to overfitting.

Deep Learning solves this problem using dropout or regularization.

Significant training time is required due to the heavy calculations.

This is relieved to a large extent by GPUs and various improved algorithms.

CHAPTER 6 Convolutional Neural Network

Chapter 5 showed that incomplete training is the cause of the poor performance of the deep neural network and introduced how Deep Learning solved the problem.

The importance of the deep neural network lies in the fact that it opened the door to complicated non-linear models and a systematic approach to the hierarchical processing of knowledge.

This chapter introduces the convolutional neural network (ConvNet), which is a deep neural network specialized for image recognition.

This technique exemplifies how significant the improvement of the deep layers is for information (image) processing.

Actually, ConvNet is an old technique, developed in the 1980s and 1990s. (A historical reminder that older literature and techniques are not necessarily obsolete; gold will eventually shine, but it takes a skilled hand to dig it up!)

However, it was forgotten for a while, as it was impractical for real-world applications with complicated images on the limited computers of that era. (Ever-growing computing power is a key cornerstone of the rapid development of artificial intelligence!)

A 2012 paper dramatically revived ConvNet; since then it has conquered most computer vision fields and is growing at a rapid pace.

Architecture of ConvNet

ConvNet is not just a deep neural network that has many hidden layers.

It is a deep network that imitates how the visual cortex of the brain processes and recognizes images.

(This article is translated from Matlab Deep Learning by Phil Kim.)
