
TensorFlow in Action: Implementing the AlexNet Network (Part 6)

I. AlexNet Architecture and Key Features

1. AlexNet Architecture

AlexNet has 8 layers that require training (not counting the pooling and LRN layers): the first 5 are convolutional layers and the last 3 are fully connected layers. The final layer is a 1000-way softmax used for classification. LRN layers follow the 1st and 2nd convolutional layers, max-pooling layers follow the 1st, 2nd, and 5th convolutional layers, and the ReLU activation function is applied after each of the 8 trainable layers.
(Figure: AlexNet network architecture)

2. Key Techniques in AlexNet

  1. Successfully used ReLU as the CNN activation function and verified, for the first time, that in deeper networks it outperforms sigmoid, largely avoiding the vanishing-gradient problem that sigmoid suffers from in deep networks.
  2. Used Dropout during training to randomly ignore a fraction of the neurons and thus avoid overfitting; Dropout is applied mainly in the last few fully connected layers.
  3. Used overlapping max pooling in the CNN. Max pooling avoids the blurring effect of average pooling, and letting the pooling windows overlap enriches the extracted features.
  4. Proposed the LRN layer, which creates a competition mechanism among local neurons and improves the model's generalization ability.
  5. Used CUDA to accelerate the training of the deep neural network.
  6. Data augmentation. During training, 224×224 regions are randomly cropped from the 256×256 source images and mirrored horizontally, which multiplies the amount of data by a factor of about 2048, helping to avoid overfitting and improving generalization. At prediction time, the four corners and the center of the image are cropped and each crop is also flipped horizontally, giving 10 images whose predictions are averaged. In addition, PCA is applied to the RGB channels of the images and Gaussian noise with standard deviation 0.1 is added along the principal components, further enriching the data. (A minimal sketch of the training-time crop-and-flip step follows this list.)
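The snippet below is only an illustrative sketch of the random crop plus horizontal flip described in item 6; it is not part of the benchmark code in this article, and the helper name augment_image is made up for the example. It uses standard TensorFlow 1.x image ops.

def augment_image(image_256):
    # image_256: a [256, 256, 3] float32 tensor holding one training image.
    # Randomly crop a 224x224 region, then randomly mirror it left-right.
    crop = tf.random_crop(image_256, [224, 224, 3])
    return tf.image.random_flip_left_right(crop)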

II. TensorFlow Implementation of AlexNet

1. Importing Libraries

from datetime import datetime
import math
import time
import tensorflow as tf

batch_size = 32    # batch size
num_batches = 100  # number of batches to time

2. Network Structure

  • The function print_activations() prints the size of the output tensor of each convolutional and pooling layer: t.op.name is the layer's name, and t.get_shape().as_list() gives its shape as a list. A minimal sketch of this helper is given below.
  • TensorFlow's name_scope is used: with tf.name_scope('conv1') as scope automatically names every variable created inside the scope as conv1/xxx.
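A minimal version of print_activations consistent with that description (the exact print formatting in the book may differ slightly):

def print_activations(t):
    # Print the op name and the output shape of a layer, e.g. "conv1 [32, 56, 56, 64]".
    print(t.op.name, ' ', t.get_shape().as_list())

The inference() function below then builds the five convolutional layers (with LRN and max pooling where appropriate) and returns pool5 together with the list of trainable parameters.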
def inference(images):
    parameters = []
    # conv1
    with tf.name_scope('conv1') as scope:
        kernel = tf.Variable(tf.truncated_normal([11, 11, 3, 64], dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(images, kernel, [1, 4, 4, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[64], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv1 = tf.nn.relu(bias, name=scope)
        print_activations(conv1)
        parameters += [kernel, biases]


    # pool1
    lrn1 = tf.nn.lrn(conv1, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='lrn1')
    pool1 = tf.nn.max_pool(lrn1,
                           ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1],
                           padding='VALID',
                           name='pool1')
    print_activations(pool1)

    # conv2
    with tf.name_scope('conv2') as scope:
        kernel = tf.Variable(tf.truncated_normal([5, 5, 64, 192], dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool1, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[192], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv2 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
    print_activations(conv2)

    # pool2
    lrn2 = tf.nn.lrn(conv2, 4, bias=1.0, alpha=0.001 / 9.0, beta=0.75, name='lrn2')
    pool2 = tf.nn.max_pool(lrn2,
                           ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1],
                           padding='VALID',
                           name='pool2')
    print_activations(pool2)

    # conv3
    with tf.name_scope('conv3') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 192, 384],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(pool2, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[384], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv3 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv3)

    # conv4
    with tf.name_scope('conv4') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 384, 256],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv3, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv4 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv4)

    # conv5
    with tf.name_scope('conv5') as scope:
        kernel = tf.Variable(tf.truncated_normal([3, 3, 256, 256],
                                                 dtype=tf.float32,
                                                 stddev=1e-1), name='weights')
        conv = tf.nn.conv2d(conv4, kernel, [1, 1, 1, 1], padding='SAME')
        biases = tf.Variable(tf.constant(0.0, shape=[256], dtype=tf.float32),
                             trainable=True, name='biases')
        bias = tf.nn.bias_add(conv, biases)
        conv5 = tf.nn.relu(bias, name=scope)
        parameters += [kernel, biases]
        print_activations(conv5)

    # pool5
    pool5 = tf.nn.max_pool(conv5,
                           ksize=[1, 3, 3, 1],
                           strides=[1, 2, 2, 1],
                           padding='VALID',
                           name='pool5')
    print_activations(pool5)

    return pool5, parameters
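As a quick check, with batch_size = 32 and 224×224 inputs (as used in the benchmark below), the shapes reported by print_activations can be derived from the kernel sizes, strides, and SAME/VALID paddings above (derived here, not copied from the book); they work out to:

conv1: [32, 56, 56, 64]
pool1: [32, 27, 27, 64]
conv2: [32, 27, 27, 192]
pool2: [32, 13, 13, 192]
conv3: [32, 13, 13, 384]
conv4: [32, 13, 13, 256]
conv5: [32, 13, 13, 256]
pool5: [32, 6, 6, 256]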

3. Benchmarking AlexNet's Computation Time

  1. The input argument target is the op or tensor to be benchmarked, and info_string is a name for the test.
  2. The first num_steps_burn_in warm-up iterations are discarded to exclude start-up effects (memory allocation, caching, and other hardware warm-up) from the measurement. After the burn-in, the duration of every 10th step is printed, while total_duration accumulates the total time and total_duration_squared the sum of squared durations.
  3. After the loop, the mean time per batch mn and the standard deviation sd are computed, as spelled out right after this list.
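Written out, over the N = num_batches timed iterations with per-iteration durations d_i, the code computes

    mn = (1/N) * sum(d_i)
    sd = sqrt( (1/N) * sum(d_i^2) - mn^2 )

i.e. the mean and the (population) standard deviation of the per-batch times.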
def time_tensorflow_run(session, target, info_string):
#  """Run the computation to obtain the target tensor and print timing stats.
#
#  Args:
#    session: the TensorFlow session to run the computation under.
#    target: the target Tensor that is passed to the session's run() function.
#    info_string: a string summarizing this run, to be printed with the stats.
#
#  Returns:
#    None
#  """
    num_steps_burn_in = 10
    total_duration = 0.0
    total_duration_squared = 0.0
    for i in range(num_batches + num_steps_burn_in):
        start_time = time.time()
        _ = session.run(target)
        duration = time.time() - start_time
        if i >= num_steps_burn_in:
            if not i % 10:
                print ('%s: step %d, duration = %.3f' %
                       (datetime.now(), i - num_steps_burn_in, duration))
            total_duration += duration
            total_duration_squared += duration * duration
    mn = total_duration / num_batches
    vr = total_duration_squared / num_batches - mn * mn
    sd = math.sqrt(vr)
    print ('%s: %s across %d steps, %.3f +/- %.3f sec / batch' %
           (datetime.now(), info_string, num_batches, mn, sd))

4. Main Function

  1. with tf.Graph().as_default(): defines the default graph for the benchmark.
  2. Benchmark the forward pass: call time_tensorflow_run directly with pool5, i.e. the output of the last pooling layer, as the target.
  3. Benchmark the backward pass: first define a loss (here the L2 loss of pool5) and then compute the gradients of that loss with respect to all of the model parameters.
def run_benchmark():
    """Run the benchmark on AlexNet."""
    with tf.Graph().as_default():
        # Generate some dummy images.
        image_size = 224
        # Note that our padding definition is slightly different from cuda-convnet.
        # In order to force the model to start with the same activation sizes,
        # we add 3 to the image_size and employ VALID padding above.
        images = tf.Variable(tf.random_normal([batch_size,
                                               image_size,
                                               image_size, 3],
                                              dtype=tf.float32,
                                              stddev=1e-1))

        # Build a Graph that computes the logits predictions from the
        # inference model.
        pool5, parameters = inference(images)

        # Build an initialization operation.
        init = tf.global_variables_initializer()

        # Start running operations on the Graph.
        config = tf.ConfigProto()
        config.gpu_options.allocator_type = 'BFC'
        sess = tf.Session(config=config)
        sess.run(init)

        # Run the forward benchmark.
        time_tensorflow_run(sess, pool5, "Forward")

        # Add a simple objective so we can calculate the backward pass.
        objective = tf.nn.l2_loss(pool5)
        # Compute the gradient with respect to all the parameters.
        grad = tf.gradients(objective, parameters)
        # Run the backward benchmark.
        time_tensorflow_run(sess, grad, "Forward-backward")


run_benchmark()