
DeepLearning.ai Homework: (4-1) -- Convolutional Neural Networks (Foundations of CNN)


title: 'DeepLearning.ai Homework: (4-1) -- Convolutional Neural Networks (Foundations of CNN)'
id: dl-ai-4-1h
tags:
  - dl.ai
  - homework
categories:
  - AI
  - Deep Learning
date: 2018-09-30 16:07:23


First published on my personal blog: fangzh.top, welcome to visit.
This week's homework has two parts:

  • Building the convolutional neural network model step by step
  • Training a convolutional neural network with TensorFlow

Part 1: Convolutional Neural Networks: Step by Step

Main contents:

  • Convolution functions:
    • Zero Padding
    • Convolve window
    • Convolution forward
    • Convolution backward (optional)
  • Pooling functions:
    • Pooling forward
    • Create mask
    • Distribute value
    • Pooling backward (optional)

Convolutional Neural Networks

The main functions for building a CNN

1. Zero Padding

First, build a padding function that takes the image batch X and returns the padded images. This uses the np.pad() function:

a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
This means a has 5 dimensions; the call pads both sides of axis 1 with 1 and both sides of axis 3 with 3, and constant_values specifies the value used for the padding.
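For example, a minimal check of what that call does to the shape (the array sizes here are arbitrary):

```python
import numpy as np

a = np.random.randn(2, 4, 4, 4, 3)   # a 5-D array
a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)),
           'constant', constant_values=(0, 0))
print(a.shape)   # (2, 6, 4, 10, 3): axis 1 grew by 2, axis 3 by 6
```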

def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image, 
    as illustrated in Figure 1.
    
    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions
    
    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """
    ### START CODE HERE ### (≈ 1 line)
    X_pad = np.pad(X, ((0,0), (pad,pad), (pad,pad), (0,0)), 'constant', constant_values=(0,0))
    ### END CODE HERE ###
    
    return X_pad
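A quick check of the input and output shapes (random data, only for illustration):

```python
import numpy as np

np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)   # 4 images of size 3x3 with 2 channels
x_pad = zero_pad(x, 2)            # pad height and width by 2 on each side
print(x.shape)      # (4, 3, 3, 2)
print(x_pad.shape)  # (4, 7, 7, 2)
```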

2. Single step of convolution

Build a single convolution step: take one slice of the input, the same size as the filter, multiply it element-wise with the filter weights, sum everything, and finally add a bias term.

# GRADED FUNCTION: conv_single_step

def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation 
    of the previous layer.
    
    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
    
    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice and W. Do not add the bias yet.
    s = a_slice_prev * W
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z
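A quick sanity check (the shapes follow the docstring; the random values are only for illustration):

```python
import numpy as np

np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)   # one (f, f, n_C_prev) slice of the input
W = np.random.randn(4, 4, 3)              # one filter
b = np.random.randn(1, 1, 1)              # its bias

Z = conv_single_step(a_slice_prev, W, b)
print(Z)                                   # a single scalar
```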

3. Convolutional Neural Networks - Forward pass

Build the full convolution pass: loop over the output volume, applying the single-step convolution above. When slicing, pay attention to the boundaries vert_start, vert_end, horiz_start and horiz_end.

For this step, first sort out the dimensions of A_prev, A, W and b; the hyperparameters are stride and pad.

$n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor + 1$
$n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor + 1$
$n_C = \text{number of filters used in the convolution}$
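As a quick numeric check (the numbers are only an illustration): with $n_{H_{prev}} = 5$, $f = 3$, $pad = 1$ and $stride = 2$, the formula gives $n_H = \lfloor (5 - 3 + 2 \times 1) / 2 \rfloor + 1 = 3$.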

# GRADED FUNCTION: conv_forward

def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function
    
    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"
        
    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """
    
    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)  
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    
    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape
    
    # Retrieve information from "hparameters" (≈2 lines)
    stride = hparameters['stride']
    pad = hparameters['pad']
    
    # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = int((n_H_prev + 2 * pad - f) / stride + 1)
    n_W = int((n_W_prev + 2 * pad - f) / stride + 1)

    # Initialize the output volume Z with zeros. (≈1 line)
    Z = np.zeros((m, n_H, n_W, n_C))
    
    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)
    
    for i in range(m):                               # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]                               # Select ith training example's padded activation
        for h in range(n_H):                           # loop over vertical axis of the output volume
            for w in range(n_W):                       # loop over horizontal axis of the output volume
                for c in range(n_C):                   # loop over channels (= #filters) of the output volume
                    
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = h * stride + f
                    horiz_start = w * stride
                    horiz_end = w * stride + f
                    
                    # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
                    a_slice_prev = a_prev_pad[vert_start : vert_end, horiz_start : horiz_end, :]
                    
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = conv_single_step(a_slice_prev,W[:,:,:,c],b[:,:,:,c])
                                        
    ### END CODE HERE ###
    
    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))
    
    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)
    
    return Z, cache
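A quick shape check (random data; the hyperparameter values are only an illustration):

```python
import numpy as np

np.random.seed(1)
A_prev = np.random.randn(10, 4, 4, 3)   # batch of 10 inputs, 4x4 with 3 channels
W = np.random.randn(2, 2, 3, 8)         # 8 filters of size 2x2
b = np.random.randn(1, 1, 1, 8)
hparameters = {"pad": 2, "stride": 2}

Z, cache = conv_forward(A_prev, W, b, hparameters)
print(Z.shape)   # (10, 4, 4, 8), since floor((4 + 2*2 - 2)/2) + 1 = 4
```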

Pooling layer

Build the pooling layer. Note that the output dimensions must be rounded down; use int() to truncate the float.

$n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor + 1$
$n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor + 1$
$n_C = n_{C_{prev}}$

As before, slice the input first; then handle the two modes, max and average, with np.max and np.mean respectively.

def pool_forward(A_prev, hparameters, mode = "max"):
    """
    Implements the forward pass of the pooling layer
    
    Arguments:
    A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    hparameters -- python dictionary containing "f" and "stride"
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
    
    Returns:
    A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters 
    """
    
    # Retrieve dimensions from the input shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    
    # Retrieve hyperparameters from "hparameters"
    f = hparameters["f"]
    stride = hparameters["stride"]
    
    # Define the dimensions of the output
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    n_C = n_C_prev
    
    # Initialize output matrix A
    A = np.zeros((m, n_H, n_W, n_C))              
    
    ### START CODE HERE ###
    for i in range(m):                         # loop over the training examples
        for h in range(n_H):                     # loop on the vertical axis of the output volume
            for w in range(n_W):                 # loop on the horizontal axis of the output volume
                for c in range (n_C):            # loop over the channels of the output volume
                    
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    
                    # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
                    a_prev_slice = A_prev[i, vert_start : vert_end, horiz_start : horiz_end, c]
                    
                    # Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.
                    if mode == "max":
                        A[i, h, w, c] = np.max(a_prev_slice)
                    elif mode == "average":
                        A[i, h, w, c] = np.mean(a_prev_slice)
    
    ### END CODE HERE ###
    
    # Store the input and hparameters in "cache" for pool_backward()
    cache = (A_prev, hparameters)
    
    # Making sure your output shape is correct
    assert(A.shape == (m, n_H, n_W, n_C))
    
    return A, cache
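And a quick check of both modes (random data again):

```python
import numpy as np

np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"f": 3, "stride": 2}

A, cache = pool_forward(A_prev, hparameters, mode="max")
print(A.shape)   # (2, 1, 1, 3), since floor((4 - 3)/2) + 1 = 1
A, cache = pool_forward(A_prev, hparameters, mode="average")
```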

Backpropagation in convolutional neural networks

Backpropagation through a convolutional network is relatively hard to grasp. Here we derive the gradients for the convolutional layer and the pooling layer.

1. Convolutional layer backward pass

Suppose the output of the convolutional layer is $Z = W \times A + b$.

Then backpropagation needs to compute $dA$, $dW$ and $db$, where $dA$ has the same shape as the original input and covers every pixel of the original image.

At this point we assume that $dZ$, passed back from the following layer, is already known.

1. Computing dA

From the formula we can see that $dA = W \times dZ$. More concretely, each slice of $dA$ accumulates the filter $W_c$ multiplied by $dZ$, summed over every pixel of the output image:

$dA \mathrel{+}= \sum_{h=0}^{n_H-1} \sum_{w=0}^{n_W-1} W_c \times dZ_{hw}$

In matrix terms, each time the same filter $W_c$ is multiplied by a different scalar $dZ_{hw}$, and the product is added back onto the corresponding slice of $dA$.
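Although the backward pass is optional in the assignment, a minimal sketch of how this accumulation fits into conv_backward may help. It assumes the cache layout saved by conv_forward above and pad > 0, and mirrors the forward pass's loop structure; it is a sketch, not the official solution:

```python
def conv_backward(dZ, cache):
    # Sketch of the backward pass for conv_forward, assuming
    # cache = (A_prev, W, b, hparameters) as saved above and pad > 0.
    (A_prev, W, b, hparameters) = cache
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    (f, f, n_C_prev, n_C) = W.shape
    stride, pad = hparameters["stride"], hparameters["pad"]
    (m, n_H, n_W, n_C) = dZ.shape

    dA_prev = np.zeros(A_prev.shape)
    dW = np.zeros(W.shape)
    db = np.zeros(b.shape)

    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)

    for i in range(m):
        a_prev_pad = A_prev_pad[i]
        da_prev_pad = dA_prev_pad[i]
        for h in range(n_H):
            for w in range(n_W):
                for c in range(n_C):
                    vert_start, vert_end = h * stride, h * stride + f
                    horiz_start, horiz_end = w * stride, w * stride + f
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # dA: the same filter W_c, scaled by a different scalar dZ[i,h,w,c]
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += \
                        W[:, :, :, c] * dZ[i, h, w, c]
                    # dW: the input slice, scaled by the same scalar
                    dW[:, :, :, c] += a_slice * dZ[i, h, w, c]
                    # db: the plain sum of dZ over all positions
                    db[:, :, :, c] += dZ[i, h, w, c]
        # strip the padding to recover the gradient w.r.t. the unpadded input
        dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]

    assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
    return dA_prev, dW, db
```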