
NetEase Cloud Deep Learning: Course 1, Week 1 Programming Assignment

1.1 Python Basics with Numpy (optional assignment)

Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you’ve used Python before, this will help familiarize you with functions we’ll need.

Instructions:
1. You will be using Python 3.
2. Avoid using for-loops and while-loops unless you are explicitly told to do so.
3. Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
4. After coding your function, run the cell right below it to check if your result is correct.

After this assignment you will:
Be able to use iPython Notebooks
Be able to use numpy functions and numpy matrix/vector operations
Understand the concept of "broadcasting"
Be able to vectorize code

About iPython Notebooks: an iPython notebook is an interactive coding environment embedded in a webpage. You will be using iPython notebooks throughout this course. You only need to write your code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by pressing Shift+Enter or by clicking "Run Cell".

The "(≈ X lines of code)" comments tell you roughly how many lines of code you are expected to write. This is only a rough estimate, so don't worry about the exact length of your code.

練習: Set test to “Hello World” in the cell below to print “Hello World” and run the two cells below.

### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
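The grading cell that follows simply prints the variable; a sketch of it (the exact cell ships with the notebook):

print("test: " + test)

Expected Output:
test: Hello World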


1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

1.1 - sigmoid function, np.exp()
Before using np.exp(), you will first use math.exp() to implement the sigmoid function. By comparing the two, you will see why np.exp() is preferable to math.exp().

Note: sigmoid(x) = 1 / (1 + e^(-x)) is sometimes known as the logistic function. It is a non-linear function.

To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().

# GRADED FUNCTION: basic_sigmoid

import math

def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + math.exp(-x))
    ### END CODE HERE ###

    return s

Expected Output:
basic_sigmoid(3)=0.9525741268224334

In practice, however, we rarely use the math library in deep learning, because the inputs to deep learning functions are mostly matrices and vectors, while math.exp() only accepts a real number. This is why numpy is more suitable.


numpy, in contrast, applies the same operation to every element of x at once.
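A short sketch of the contrast (the example values are illustrative): calling math.exp() on a list fails, while np.exp() works elementwise.

import math
import numpy as np

x = [1, 2, 3]
# math.exp(x)  # TypeError: math.exp() expects a real number, not a list

x = np.array([1, 2, 3])
print(np.exp(x))  # [ 2.71828183  7.3890561  20.08553692]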

Exercise: Implement the sigmoid function using numpy.
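A minimal sketch of what this exercise asks for (the test array below is illustrative, not the notebook's grading case):

import numpy as np

def sigmoid(x):
    """
    Compute the sigmoid of x.

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x), applied elementwise
    """
    s = 1 / (1 + np.exp(-x))
    return s

x = np.array([1, 2, 3])
print(sigmoid(x))  # [0.73105858 0.88079708 0.95257413]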

1.2 - Sigmoid gradient

You will need to compute gradients in order to optimize your loss function using backpropagation.

Exercise: Implement the function sigmoid_derivative() to compute the gradient of the sigmoid function with respect to its input x.
The formula is: sigmoid_derivative(x) = σ′(x) = σ(x)(1 − σ(x))
You can complete this function in two steps:
1. Code a function that computes s = σ(x).
2. Compute σ′(x) = σ(x)(1 − σ(x)).

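A sketch of the two-step implementation described above (the test array is illustrative):

import numpy as np

def sigmoid_derivative(x):
    """
    Compute the gradient (derivative) of the sigmoid function
    with respect to its input x.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- the computed gradient
    """
    # Step 1: compute the sigmoid of x
    s = 1 / (1 + np.exp(-x))
    # Step 2: apply the formula sigma'(x) = sigma(x) * (1 - sigma(x))
    ds = s * (1 - s)
    return ds

x = np.array([1, 2, 3])
print(sigmoid_derivative(x))  # [0.19661193 0.10499359 0.04517666]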

1.3 - Reshaping arrays

Numpy has two functions that are used constantly in deep learning: np.shape() and np.reshape().
X.shape is used to get the shape (dimension) of a matrix/vector X.
X.reshape(…) is used to reshape X into some other dimension.
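For example (a small illustrative demo):

import numpy as np

v = np.arange(6)       # a vector with 6 elements
print(v.shape)         # (6,)

m = v.reshape(2, 3)    # reshape the same data into a 2x3 matrix
print(m.shape)         # (2, 3)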

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1).
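A sketch of one way to write it, reading the dimensions from the shape attribute so the function works for any image size (the test shape (3, 3, 2) is illustrative):

import numpy as np

def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """
    # Read the three dimensions from image.shape instead of hard-coding them
    v = image.reshape(image.shape[0] * image.shape[1] * image.shape[2], 1)
    return v

image = np.random.rand(3, 3, 2)
print(image2vector(image).shape)  # (18, 1)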

1.4 - Normalizing rows

A common technique in deep learning and machine learning is to normalize our data. After normalization, gradient descent converges better and faster.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).

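A sketch of one way to implement it with np.linalg.norm (the test matrix is illustrative):

import numpy as np

def normalizeRows(x):
    """
    Normalize each row of the matrix x to have unit (L2) length.

    Argument:
    x -- a numpy matrix of shape (n, m)

    Returns:
    x -- the row-normalized numpy matrix
    """
    # Compute the L2 norm of each row; keepdims=True keeps the shape (n, 1)
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)
    # Divide x by its norm; broadcasting stretches x_norm across the columns
    x = x / x_norm
    return x

x = np.array([[0, 3, 4],
              [1, 6, 4]])
print(normalizeRows(x))
# [[0.         0.6        0.8       ]
#  [0.13736056 0.82416338 0.54944226]]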

Note: In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You’ll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we’ll talk about it now!
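A tiny illustration of the rule (values are illustrative): when shapes differ, numpy stretches the size-1 dimension to match.

import numpy as np

a = np.array([[1, 2, 3],
              [4, 5, 6]])      # shape (2, 3)
b = np.array([[100],
              [200]])          # shape (2, 1)

# b is broadcast across the 3 columns of a, exactly like x_norm above
print(a + b)
# [[101 102 103]
#  [204 205 206]]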

1.5 - Broadcasting and the softmax function

A very important concept to understand in numpy is “broadcasting”.
It is very useful for performing mathematical operations between arrays of different shapes.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
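A sketch of a row-wise softmax built from np.exp, np.sum, and broadcasting (the test matrix is illustrative):

import numpy as np

def softmax(x):
    """
    Compute the softmax of each row of the matrix x.

    Argument:
    x -- a numpy matrix of shape (n, m)

    Returns:
    s -- a numpy matrix of shape (n, m) whose rows each sum to 1
    """
    # Exponentiate every element of x
    x_exp = np.exp(x)
    # Sum over each row; keepdims=True keeps the shape (n, 1)
    x_sum = np.sum(x_exp, axis=1, keepdims=True)
    # Divide each element by its row sum (broadcasting again)
    s = x_exp / x_sum
    return s

x = np.array([[9, 2, 5, 0, 0],
              [7, 5, 0, 0, 0]])
print(softmax(x).sum(axis=1))  # [1. 1.] -- each row is a probability distribution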

What you need to remember:
1. np.exp(x) works for any np.array x and applies the exponential function to every coordinate
2. the sigmoid function and its gradient
3. image2vector is commonly used in deep learning
4. np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
5. numpy has efficient built-in functions
6. broadcasting is extremely useful

2 - Vectorization

In deep learning, the datasets you work with are very large. A non-computationally-optimal function can therefore become a bottleneck in your algorithm and make your model run very slowly. To make your code computationally efficient, you should vectorize it. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
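A sketch of the classic comparison (array sizes are illustrative; timings will vary by machine):

import time
import numpy as np

x1 = np.random.rand(1000000)
x2 = np.random.rand(1000000)

# Non-vectorized dot product: an explicit Python for-loop
tic = time.time()
dot = 0
for i in range(len(x1)):
    dot += x1[i] * x2[i]
toc = time.time()
print("loop dot = %f, took %f ms" % (dot, 1000 * (toc - tic)))

# Vectorized dot product: a single call to np.dot
tic = time.time()
dot = np.dot(x1, x2)
toc = time.time()
print("np.dot   = %f, took %f ms" % (dot, 1000 * (toc - tic)))

# Elementwise product: np.multiply (or the * operator)
elementwise = np.multiply(x1, x2)      # shape (1000000,)

# Outer product: np.outer (small slices, since the full outer product is huge)
outer = np.outer(x1[:100], x2[:100])   # shape (100, 100)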

2.1 Implement the L1 and L2 loss functions

Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions (ŷ) are from the true values (y). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.

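A sketch of a vectorized implementation of the formula L1(ŷ, y) = Σ |y_i − ŷ_i| (the test vectors are illustrative):

import numpy as np

def L1(yhat, y):
    """
    Arguments:
    yhat -- vector of predictions
    y -- vector of true labels

    Returns:
    loss -- the value of the L1 loss defined above
    """
    # Sum of absolute differences, computed without any loop
    loss = np.sum(np.abs(y - yhat))
    return loss

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat, y)))  # L1 = 1.1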

Exercise: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss, but you may find the function np.dot() useful.
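A sketch of a vectorized implementation of the formula L2(ŷ, y) = Σ (y_i − ŷ_i)², using np.dot so the sum of squares becomes a single dot product (same illustrative test vectors as above):

import numpy as np

def L2(yhat, y):
    """
    Arguments:
    yhat -- vector of predictions
    y -- vector of true labels

    Returns:
    loss -- the value of the L2 loss defined above
    """
    # dot(v, v) is the sum of the squared entries of v
    loss = np.dot(y - yhat, y - yhat)
    return loss

yhat = np.array([0.9, 0.2, 0.1, 0.4, 0.9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat, y)))  # L2 = 0.43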

What to remember:
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc…