Andrew Ng Deep Learning Course 2, Week 1 Assignment 3: Gradient Checking
阿新 · Published: 2018-11-01
1. deeplearning-assignment
Backpropagation in a neural network is complicated, and at some point we need to verify that our implementation really is correct. For that we introduce "gradient checking".
Backpropagation computes the gradient ∂J/∂θ, where θ denotes the model's parameters. J is computed using forward propagation and the loss function. Because forward propagation is relatively simple to implement, we can be confident that J itself is computed correctly. Now let's go back to the definition of the derivative (or gradient):

∂J/∂θ = lim(ε→0) [J(θ + ε) − J(θ − ε)] / (2ε)
Consider the one-dimensional linear function J(θ) = θx. The model contains only a single real-valued parameter θ and takes x as its input.
You will write code to compute J(·) and its derivative, and then use gradient checking to make sure the derivative you compute for J is correct.
The principle of gradient checking: for each parameter, compute the numerical approximation

gradapprox = [J(θ + ε) − J(θ − ε)] / (2ε),

compare it with the gradient grad returned by backpropagation, and measure the relative difference

difference = ‖grad − gradapprox‖₂ / (‖grad‖₂ + ‖gradapprox‖₂).

If the difference is very small (on the order of 1e-7 or below), backpropagation is almost certainly correct; if it is large, there is probably a bug in the backward pass.
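To make this concrete, here is a minimal sketch (not part of the assignment code) that applies the two formulas above to the toy function J(θ) = θx, using the same test values x = 2, θ = 4 that appear in the commented-out checks in the listing below:

```python
import numpy as np

x, theta, epsilon = 2., 4., 1e-7

# Analytic derivative: for J(theta) = theta * x, dJ/dtheta = x
grad = x

# Two-sided (centered) difference approximation of the derivative
J_plus = (theta + epsilon) * x
J_minus = (theta - epsilon) * x
gradapprox = (J_plus - J_minus) / (2 * epsilon)

# Relative difference between the analytic and the numerical gradient
difference = np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))

print(gradapprox)   # approximately 2.0, matching the analytic derivative x
print(difference)   # very small (orders of magnitude below 1e-7), so the gradient is judged correct
```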
2. Algorithm code
```python
import numpy as np
from week1.testCases import gradient_check_n_test_case
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector


def forward_propagation(x, theta):
    """
    :param x: a real-valued input
    :param theta: our parameter, a real number as well
    :return: J -- the value of function J, computed using the formula J(theta) = theta * x
    """
    J = theta * x
    return J


# x, theta = 2, 4
# J = forward_propagation(x, theta)
# print("J = " + str(J))


def backward_propagation(x, theta):
    """
    :param x: a real-valued input
    :param theta: our parameter, a real number as well
    :return: dtheta -- the gradient of the cost with respect to theta
    """
    dtheta = x
    return dtheta


# x, theta = 2, 4
# dtheta = backward_propagation(x, theta)
# print("dtheta = " + str(dtheta))


def gradient_check(x, theta, epsilon=1e-7):
    """
    :param x: a real-valued input
    :param theta: our parameter, a real number as well
    :param epsilon: tiny shift to the input to compute approximated gradient with formula (1)
    :return: difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    thetaplus = theta + epsilon
    thetaminus = theta - epsilon
    J_plus = thetaplus * x
    J_minus = thetaminus * x
    gradapprox = (J_plus - J_minus) / (2 * epsilon)
    grad = backward_propagation(x, theta)
    numerator = np.linalg.norm(grad - gradapprox)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator
    if difference < 1e-7:
        print("The gradient is correct!")
    else:
        print("The gradient is wrong!")
    return difference


# x, theta = 2, 4
# difference = gradient_check(x, theta)
# print("difference = " + str(difference))


def forward_propagation_n(X, Y, parameters):
    """
    :param X: training set for m examples
    :param Y: labels for m examples
    :param parameters: W1 -- weight matrix of shape (5, 4)
                       b1 -- bias vector of shape (5, 1)
                       W2 -- weight matrix of shape (3, 5)
                       b2 -- bias vector of shape (3, 1)
                       W3 -- weight matrix of shape (1, 3)
                       b3 -- bias vector of shape (1, 1)
    :return: cost -- the cost function (logistic cost for one example)
    """
    m = X.shape[1]
    W1 = parameters["W1"]
    b1 = parameters["b1"]
    W2 = parameters["W2"]
    b2 = parameters["b2"]
    W3 = parameters["W3"]
    b3 = parameters["b3"]

    # LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
    Z1 = np.dot(W1, X) + b1
    A1 = relu(Z1)
    Z2 = np.dot(W2, A1) + b2
    A2 = relu(Z2)
    Z3 = np.dot(W3, A2) + b3
    A3 = sigmoid(Z3)

    # Cost
    logprobs = np.multiply(-np.log(A3), Y) + np.multiply(-np.log(1 - A3), 1 - Y)
    cost = 1. / m * np.sum(logprobs)

    cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
    return cost, cache


def backward_propagation_n(X, Y, cache):
    """
    :param X: input datapoint, of shape (input size, 1)
    :param Y: true "label"
    :param cache: cache output from forward_propagation_n()
    :return: gradients -- A dictionary with the gradients of the cost with respect to each parameter,
             activation and pre-activation variables.
    """
    m = X.shape[1]
    (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache

    dZ3 = A3 - Y
    dW3 = 1. / m * np.dot(dZ3, A2.T)
    db3 = 1. / m * np.sum(dZ3, axis=1, keepdims=True)

    dA2 = np.dot(W3.T, dZ3)
    dZ2 = np.multiply(dA2, np.int64(A2 > 0))
    # dZ2 = np.multiply(dA2, Z2)
    dW2 = 1. / m * np.dot(dZ2, A1.T)
    db2 = 1. / m * np.sum(dZ2, axis=1, keepdims=True)

    dA1 = np.dot(W2.T, dZ2)
    dZ1 = np.multiply(dA1, np.int64(A1 > 0))
    # dZ1 = np.multiply(dA1, Z1)
    dW1 = 1. / m * np.dot(dZ1, X.T)
    db1 = 1. / m * np.sum(dZ1, axis=1, keepdims=True)

    gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
                 "dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
                 "dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
    return gradients


def gradient_check_n(parameters, gradients, X, Y, epsilon=1e-7):
    """
    :param parameters: python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
    :param gradients: output of backward_propagation_n, contains gradients of the cost with respect to the parameters
    :param X: input datapoint, of shape (input size, 1)
    :param Y: true "label"
    :param epsilon: tiny shift to the input to compute approximated gradient with formula (1)
    :return: difference -- difference (2) between the approximated gradient and the backward propagation gradient
    """
    parameters_values, _ = dictionary_to_vector(parameters)
    grad = gradients_to_vector(gradients)
    num_parameters = parameters_values.shape[0]
    J_plus = np.zeros((num_parameters, 1))
    J_minus = np.zeros((num_parameters, 1))
    gradapprox = np.zeros((num_parameters, 1))

    for i in range(num_parameters):
        thetaplus = np.copy(parameters_values)
        thetaplus[i][0] = thetaplus[i][0] + epsilon
        J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus))

        thetaminus = np.copy(parameters_values)
        thetaminus[i][0] = thetaminus[i][0] - epsilon
        J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus))

        gradapprox[i] = (J_plus[i] - J_minus[i]) / (2 * epsilon)

    numerator = np.linalg.norm(gradapprox - grad)
    denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox)
    difference = numerator / denominator

    if difference > 1e-6:
        print("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
    else:
        print("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
    return difference


X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
```
Running this, the reported difference was large. Debugging the code showed that the lines computing dW2 and db1 were wrong; after correcting them and rerunning, the check reported the correct result (the difference fell below the 1e-6 threshold).
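A large difference is exactly what the check is meant to expose. As a toy illustration (using the one-dimensional example rather than the assignment's network), a deliberately wrong derivative immediately pushes the difference far above the threshold:

```python
import numpy as np

def buggy_backward_propagation(x, theta):
    # Deliberately wrong: the correct derivative of J(theta) = theta * x is x, not 2 * x
    return 2 * x

x, theta, epsilon = 2., 4., 1e-7
grad = buggy_backward_propagation(x, theta)
gradapprox = ((theta + epsilon) * x - (theta - epsilon) * x) / (2 * epsilon)

difference = np.linalg.norm(grad - gradapprox) / (np.linalg.norm(grad) + np.linalg.norm(gradapprox))
print(difference)  # about 0.33, far above 1e-7, so the check flags the gradient as wrong
```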
3. Summary
- Gradient checking is very slow, so we do not run it at every training iteration; we only use it occasionally to confirm that the gradients from backpropagation are correct (see the sketch after this list).
- Do not run gradient checking while dropout regularization is turned on. First turn dropout off and use gradient checking to confirm that backpropagation is correct; only then turn gradient checking off and train with dropout.
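As a rough sketch of how both points fit into a training loop (the update step below is a placeholder, not a function from the assignment):

```python
def train_with_gradient_check(X, Y, parameters, num_iterations=1000, learning_rate=0.01):
    """Hypothetical training loop: the (slow) gradient check runs once, not at every
    iteration, and only while dropout is disabled (forward_propagation_n above
    contains no dropout)."""
    for i in range(num_iterations):
        cost, cache = forward_propagation_n(X, Y, parameters)
        gradients = backward_propagation_n(X, Y, cache)

        if i == 0:
            # One-off sanity check before spending time on training
            difference = gradient_check_n(parameters, gradients, X, Y)
            assert difference < 1e-6, "Fix backward propagation before training further."

        # parameters = update_parameters(parameters, gradients, learning_rate)
        # ^ placeholder update step; only after the check passes would you switch
        #   to a forward/backward pass that uses dropout.
    return parameters
```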