
Coursera Deep Learning (deep learning.ai, Andrew Ng), Neural Networks and Deep Learning, Course 1 Week 2 Programming Assignment: Python Basics with Numpy

Python Basics with Numpy (optional assignment)

Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you’ve used Python before, this will help familiarize you with functions we’ll need.

Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.

After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of “broadcasting”
- Be able to vectorize code

Let’s get started!

About iPython Notebooks

iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing “SHIFT”+”ENTER” or by clicking on “Run Cell” (denoted by a play symbol) in the upper bar of the notebook.

We will often specify “(≈ X lines of code)” in the comments to tell you about how much code you need to write. It is just a rough estimate, so don’t feel bad if your code is longer or shorter.

Exercise: Set test to "Hello World" in the cell below to print “Hello World” and run the two cells below.

### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
test: Hello World

Expected output:
test: Hello World


What you need to remember:
- Run your cells using SHIFT+ENTER (or “Run cell”)
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas

1 - Building basic functions with numpy

Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.

1.1 - sigmoid function, np.exp()

Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().

Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.

Reminder:
$\text{sigmoid}(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.

To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().

# GRADED FUNCTION: basic_sigmoid

import math

def basic_sigmoid(x):
    """
    Compute sigmoid of x.

    Arguments:
    x -- A scalar

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1/(1+ math.exp(-x))
    ### END CODE HERE ###

    return s
basic_sigmoid(3)
0.9525741268224334

Expected Output:

**basic_sigmoid(3)**: 0.9525741268224334

Actually, we rarely use the “math” library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.

### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
---------------------------------------------------------------------------

TypeError                                 Traceback (most recent call last)

<ipython-input-9-2e11097d6860> in <module>()
      1 ### One reason why we use "numpy" instead of "math" in Deep Learning ###
      2 x = [1, 2, 3]
----> 3 basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.


<ipython-input-7-c343aa0d58d2> in basic_sigmoid(x)
     15 
     16     ### START CODE HERE ### (≈ 1 line of code)
---> 17     s = 1/(1+ math.exp(-x))
     18     ### END CODE HERE ###
     19 


TypeError: bad operand type for unary -: 'list'

In fact, if $x = (x_1, x_2, ..., x_n)$ is a row vector then np.exp(x) will apply the exponential function to every element of x. The output will thus be: np.exp(x) $= (e^{x_1}, e^{x_2}, ..., e^{x_n})$

import numpy as np

# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
[  2.71828183   7.3890561   20.08553692]

Furthermore, if x is a vector, then a Python operation such as s = x + 3 or s = 1/x will output s as a vector of the same size as x.

# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
[4 5 6]
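
The x + 3 example above already relies on broadcasting: numpy stretches the scalar 3 to the shape of x. Broadcasting also works between arrays whose shapes differ; below is a minimal sketch (the arrays are just illustrative, not part of the assignment).

# example of broadcasting between arrays of different shapes
col = np.array([[1], [2], [3]])       # shape (3, 1)
row = np.array([[10, 20, 30, 40]])    # shape (1, 4)
print((col + row).shape)              # (3, 4): each size-1 axis is stretched to match the other array
print(col + row)                      # first row is [11 21 31 41], second is [12 22 32 42], ...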

Any time you need more info on a numpy function, we encourage you to look at the official documentation.

You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
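
If you are running the code outside a notebook, `np.exp?` (IPython-only syntax) is not available, but the built-in help() prints the same docstring; a small sketch:

import numpy as np

# In a notebook cell, `np.exp?` opens the documentation pane (IPython syntax).
# In a plain Python script, the built-in help() prints the same docstring:
help(np.exp)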

Exercise: Implement the sigmoid function using numpy.

Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices…) are called numpy arrays. You don’t need to know more for now.

For $x \in \mathbb{R}^n$, $\text{sigmoid}(x) = \text{sigmoid}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ \vdots \\ \frac{1}{1+e^{-x_n}} \end{pmatrix} \tag{1}$

# GRADED FUNCTION: sigmoid

import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()

def sigmoid(x):
    """
    Compute the sigmoid of x

    Arguments:
    x -- A scalar or numpy array of any size

    Return:
    s -- sigmoid(x)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1/(1+np.exp(-x))
    ### END CODE HERE ###

    return s
x = np.array([1, 2, 3])
sigmoid(x)
array([ 0.73105858,  0.88079708,  0.95257413])

Expected Output:

**sigmoid([1,2,3])** array([ 0.73105858, 0.88079708, 0.95257413])
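
Since np.exp() works element-wise, the same sigmoid() defined above also handles a matrix with no changes; a quick check (not part of the graded cells, the test matrix X is just an example):

# sigmoid applied element-wise to a 2 x 2 matrix
X = np.array([[0.0, 1.0],
              [2.0, 3.0]])
print(sigmoid(X))   # each entry is 1 / (1 + exp(-entry)), e.g. sigmoid(0) = 0.5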

1.2 - Sigmoid gradient

As you’ve seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let’s code your first gradient function.

Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is:

$\text{sigmoid\_derivative}(x) = \sigma'(x) = \sigma(x)(1 - \sigma(x)) \tag{2}$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
# GRADED FUNCTION: sigmoid_derivative

def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
    You can store the output of the sigmoid function into variables and then use it to calculate the gradient.

    Arguments:
    x -- A scalar or numpy array

    Return:
    ds -- Your computed gradient.
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    s = sigmoid(x)        # reuse the sigmoid() defined above
    ds = s * (1-s)
    ### END CODE HERE ###

    return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
sigmoid_derivative(x) = [ 0.19661193  0.10499359  0.04517666]

Expected Output:

**sigmoid_derivative([1,2,3])** [ 0.19661193 0.10499359 0.04517666]
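
As an optional sanity check (not part of the assignment), you can compare the analytic formula $\sigma'(x) = \sigma(x)(1-\sigma(x))$ with a centered finite-difference approximation; the step size eps below is just an illustrative choice:

# numerical check: compare sigmoid_derivative against a centered finite difference
eps = 1e-5
x = np.array([1.0, 2.0, 3.0])
numeric = (sigmoid(x + eps) - sigmoid(x - eps)) / (2 * eps)
analytic = sigmoid_derivative(x)
print(np.max(np.abs(numeric - analytic)))   # should be very small (around 1e-10 or below)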

1.3 - Reshaping arrays

Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(…) is used to reshape X into some other dimension.

For example, in computer science, an image is represented by a 3D array of shape (length, height, depth = 3). However, when you read an image as the input of an algorithm you convert it to a vector of shape (length*height*3, 1). In other words, you “unroll”, or reshape, the 3D array into a 1D vector.

Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:

v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don’t hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
# GRADED FUNCTION: image2vector
def image2vector(image):
    """
    Argument:
    image -- a numpy array of shape (length, height, depth)

    Returns:
    v -- a vector of shape (length*height*depth, 1)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    v = image.reshape(-1, 1)   # -1 lets numpy infer length*height*depth from image.shape
    ### END CODE HERE ###

    return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139,  0.29380381],
        [ 0.90714982,  0.52835647],
        [ 0.4215251 ,  0.45017551]],

       [[ 0.92814219,  0.96677647],
        [ 0.85304703,  0.52351845],
        [ 0.19981397,  0.27417313]],

       [[ 0.60659855,  0.00533165],
        [ 0.10820313,  0.49978937],
        [ 0.34144279,  0.94630077]]])

print ("image2vector(image) = " + str(image2vector(image)))
image2vector(image) = [[ 0.67826139]
 [ 0.29380381]
 [ 0.90714982]
 [ 0.52835647]
 [ 0.4215251 ]
 [ 0.45017551]
 [ 0.92814219]
 [ 0.96677647]
 [ 0.85304703]
 [ 0.52351845]
 [ 0.19981397]
 [ 0.27417313]
 [ 0.60659855]
 [ 0.00533165]
 [ 0.10820313]
 [ 0.49978937]
 [ 0.34144279]
 [ 0.94630077]]

Expected Output:

**image2vector(image)** [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]]
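
An equivalent way to write the reshape, following the hint about image.shape instead of letting numpy infer the length with -1, would be something like the sketch below; both produce the same (length*height*depth, 1) vector:

# equivalent reshape that reads the dimensions from image.shape instead of using -1
length, height, depth = image.shape
v = image.reshape(length * height * depth, 1)
print(v.shape)   # (18, 1) for the 3 x 3 x 2 example above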

1.4 - Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $\frac{x}{\|x\|}$ (dividing each row vector of x by its norm).
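
The post is cut off at this point, but the rest of the section asks you to divide each row of x by its norm. A minimal sketch of such a row normalization, assuming the usual np.linalg.norm call with axis=1 and keepdims=True (the function name and test values below are just for illustration):

# a sketch of row normalization: divide each row of x by its L2 norm
def normalize_rows(x):
    x_norm = np.linalg.norm(x, axis=1, keepdims=True)   # shape (n, 1): one norm per row
    return x / x_norm                                    # broadcasting divides every row by its own norm

x = np.array([[0.0, 3.0, 4.0],
              [1.0, 6.0, 4.0]])
print(normalize_rows(x))   # first row becomes [0.  0.6  0.8]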

提示 作業裡面會有需要用到的 Python 模組以及資料集。所以我下面再附上所需要的檔案下載連結,不把所有檔案連同作業放一起打包好的目的是讓第一次接觸 Python 的人更多的瞭解 Python , 萬事開頭難,希望大傢伙明白。 檔案連結 宣告 這一