
Andrew Ng Deep Learning Course (deeplearning.ai) Assignment: Class 2 Week 3 TensorFlow Tutorial

My own answers to the Andrew Ng deeplearning.ai course assignments.
Additional notes:
1. People keep asking in the comments why these notebooks fail when copied directly. Please do not just copy and paste; it will not run that way. This is only the part of the notebook that we have to write ourselves. Running it correctly also requires other .py files, so please download the complete assignment from GitHub. What is posted here is for reference only; I recommend writing the code step by step following the hints, and looking at the answers only if you are truly stuck. Personally I think that is the right way to learn, and the assignments are not particularly hard anyway.
2. Some commenters have accused me of plagiarism, saying my comments are less detailed than other people's and that the code fails when copied. My reply: before demanding ready-made answers, please first understand what this assignment is. Everyone downloads the same original assignment from GitHub and writes code according to the hints before each block (which usually specify the function and the formula), and an expected output is provided afterwards for comparison; if the program is correct, the results are generally identical. So please do not mindlessly claim my answers are copied from someone else's. In the end, all we do is read the text and, following the hints, add a small amount of our own code.
3. Because I really dislike mindless trolls, I have disabled the comment section below; my apologies. If you have a question, please message me privately and I will try to help where I can.

TensorFlow Tutorial

Welcome to this week’s programming assignment. Until now, you’ve always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:

  • Initialize variables
  • Start your own session
  • Train algorithms
  • Implement a Neural Network

Programming frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.

1 - Exploring the Tensorflow Library

To start, you will import the library:

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)

Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.

$$\text{loss} = \mathcal{L}(\hat{y}, y) = (\hat{y}^{(i)} - y^{(i)})^2 \tag{1}$$

y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39

loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss

init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss
9

Writing and running programs in TensorFlow has the following steps:

  1. Create Tensors (variables) that are not yet executed/evaluated.
  2. Write operations between those Tensors.
  3. Initialize your Tensors.
  4. Create a Session.
  5. Run the Session. This will run the operations you’d written above.

Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we first created init = tf.global_variables_initializer() and then ran it inside a session. That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print it.
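To make these five steps concrete, here is a small sketch of my own (not part of the original notebook), using the TensorFlow 1.x API that this assignment assumes:

a = tf.Variable(3, name='a')               # Step 1: create a tensor (variable), not yet evaluated
b = tf.constant(4, name='b')               # a constant needs no initialization, only variables do
c = tf.add(a, b)                           # Step 2: write an operation between those tensors
init = tf.global_variables_initializer()  # Step 3: prepare the initialization of all variables
with tf.Session() as sess:                 # Step 4: create a session
    sess.run(init)                         # initialize the variables
    print(sess.run(c))                     # Step 5: run the operation -> prints 7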

Now let us look at an easy example. Run the cell below:

a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
Tensor("Mul:0", shape=(), dtype=int32)

As expected, you will not see 20! You got back a tensor object: the result is a tensor of type "int32" with shape () (a scalar), but it has no value yet. All you did was add the operation to the 'computation graph'; you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.

sess = tf.Session()
print(sess.run(c))
20

Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.

Next, you’ll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a “feed dictionary” (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.

# Change the value of x in the feed_dict

x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
6

When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.

Here’s what’s happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
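Here is a small sketch of my own (an illustration added to the original notebook) showing that a single graph containing a placeholder can be run many times with different fed values, including numpy arrays:

x = tf.placeholder(tf.int64, name='x')
double = 2 * x                                                # one graph, defined once
with tf.Session() as sess:
    print(sess.run(double, feed_dict={x: 3}))                 # 6
    print(sess.run(double, feed_dict={x: 10}))                # 20
    print(sess.run(double, feed_dict={x: np.array([1, 2])}))  # [2 4]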

1.1 - Linear function

Let's start this programming exercise by computing the following equation: Y = WX + b, where W and X are random matrices and b is a random vector.

Exercise: Compute WX+b where W,X, and b are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):

X = tf.constant(np.random.randn(3,1), name = "X")

You might find the following functions helpful:
- tf.matmul(…, …) to do a matrix multiplication
- tf.add(…, …) to do an addition
- np.random.randn(…) to initialize randomly

# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function: 
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns: 
    result -- runs the session for Y = WX + b 
    """

    np.random.seed(1)

    ### START CODE HERE ### (4 lines of code)
    X = tf.constant(np.random.randn(3, 1), name="X")
    W = tf.constant(np.random.randn(4, 3), name="W")
    b = tf.constant(np.random.randn(4, 1), name="b")
    Y = tf.add(tf.matmul(W, X), b)
    ### END CODE HERE ### 

    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate

    ### START CODE HERE ###
    sess = tf.Session()
    result = sess.run(Y)
    ### END CODE HERE ### 

    # close the session 
    sess.close()

    return result
print( "result = " + str(linear_function()))
result = [[-2.15657382]
 [ 2.95891446]
 [-1.08926781]
 [-0.84538042]]

Expected Output:

result [[-2.15657382] [ 2.95891446] [-1.08926781] [-0.84538042]]
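As a sanity check (my own addition, not part of the graded exercise), you can reproduce these numbers in plain numpy, since the constants above are built from the same seeded random draws in the same order:

np.random.seed(1)
X = np.random.randn(3, 1)    # same draw order as in linear_function
W = np.random.randn(4, 3)
b = np.random.randn(4, 1)
print(np.dot(W, X) + b)      # should match the expected output above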

1.2 - Computing the sigmoid

Great! You just implemented a linear function. TensorFlow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise, let's compute the sigmoid function of an input.

You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session.

Exercise: Implement the sigmoid function below. You should use the following:

  • tf.placeholder(tf.float32, name = "...")
  • tf.sigmoid(...)
  • sess.run(..., feed_dict = {x: z})

Note that there are two typical ways to create and use sessions in tensorflow:

Method 1:

sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session

Method 2:

with tf.Session() as sess: 
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)

# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns: 
    results -- the sigmoid of z
    """

    ### START CODE HERE ### ( approx. 4 lines of code)
    # Create a placeholder for x. Name it 'x'.
    x = tf.placeholder(tf.float32, name="x")

    # compute sigmoid(x)
    sigmoid = tf.sigmoid(x)

    # Create a session, and run it. Please use the method 2 explained above. 
    # You should use a feed_dict to pass z's value to x. 
    with tf.Session() as sess:
        # Run session and call the output "result"
        result = sess.run(sigmoid, feed_dict={x:z})

    ### END CODE HERE ###

    return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
sigmoid(0) = 0.5
sigmoid(12) = 0.999994

Expected Output:

sigmoid(0) 0.5
sigmoid(12) 0.999994


To summarize, you now know how to:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables’ values.
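
One small usage note (my own observation, not from the original notebook): because the placeholder x in sigmoid was created without a fixed shape, the function also accepts numpy arrays, not just scalars:

print(sigmoid(np.array([-1.0, 0.0, 1.0])))   # roughly [ 0.26894143  0.5  0.7310586 ]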

1.3 - Computing the Cost

You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for $i = 1, \ldots, m$:

$$J = -\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log a^{[2](i)} + (1 - y^{(i)}) \log\left(1 - a^{[2](i)}\right) \right) \tag{2}$$

you can do it in one line of code in tensorflow!

Exercise: Implement the cross entropy loss. The function you will use is:

  • tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)

Your code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost J. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes

$$-\frac{1}{m} \sum_{i=1}^{m} \left( y^{(i)} \log \sigma(z^{[2](i)}) + (1 - y^{(i)}) \log\left(1 - \sigma(z^{[2](i)})\right) \right) \tag{2}$$
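A side note of my own, based on the TensorFlow documentation: tf.nn.sigmoid_cross_entropy_with_logits does not literally compute the sigmoid and then the log; it uses an equivalent, numerically stable form. A rough numpy sketch of the per-example loss:

def stable_sigmoid_cross_entropy(z, y):
    # equivalent stable form: max(z, 0) - z*y + log(1 + exp(-|z|))
    # avoids overflow in exp() for large |z|
    return np.maximum(z, 0) - z * y + np.log(1 + np.exp(-np.abs(z)))

This is also why the function takes the raw logits z rather than the activations a.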
# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0) 

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" 
    in the TensorFlow documentation. So logits will feed into z, and labels into y. 

    Returns:
    cost -- runs the session of the cost (formula (2))
    """

    ### START CODE HERE ### 

    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.placeholder(tf.float32, name="z")
    y = tf.placeholder(tf.float32, name="y")

    # Use the loss function (approx. 1 line)
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits=z, labels=y)

    # Create a session (approx. 1 line). See method 1 above.
    sess = tf.Session()

    # Run the session (approx. 1 line).
    cost = sess.run(cost, feed_dict={z: logits, y: labels})