
Caffe source code deep dive 5: a very detailed walkthrough of the Caffe convolution layer code

   In this post I walk through the source code of the Caffe convolution layer. A note up front: the convolution layer implementation is fairly involved and has quite a few parameters, so if you find any omissions or mistakes in this post, please do not hesitate to point them out; I will be sincerely grateful.

   Before getting into the code, I want to stress the definition of a convolution kernel. Many readers (my earlier self included) misunderstand what a kernel in a CNN is, so I spell the misconception out here as a warning:

   An example: a convolution layer in a CNN takes 64 channels of input feature maps and produces 256 channels of output feature maps. How many kernels does this layer contain in total? When I first came to CNNs I believed the layer contained 256 kernels, each of them two-dimensional; each kernel was convolved over all input channels, and the combined result was that kernel's output feature map.

That understanding is wrong!

   In the example above, the correct view is that the layer contains 64*256 two-dimensional kernels: each output feature map corresponds to 64 different kernels, these 64 kernels each convolve their corresponding input channel, and the per-channel results are summed to produce one output feature map.
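   To make this concrete, here is a minimal sketch using the hypothetical layer above (64 input channels, 256 output channels), with group = 1 and a 3*3 kernel assumed only for illustration; it simply spells out the weight layout that LayerSetUp builds later in this post:

// Caffe stores all filters of a layer in a single weight blob of shape
//   num_output x (channels / group) x kernel_h x kernel_w.
const int channels = 64, num_output = 256, group = 1;  // hypothetical layer from the example above
const int kernel_h = 3, kernel_w = 3;                  // kernel size assumed for illustration
const int num_2d_kernels = num_output * (channels / group);       // 256 * 64 = 16384 two-dimensional kernel slices
const int num_weights    = num_2d_kernels * kernel_h * kernel_w;  // 16384 * 9 = 147456 learnable weights (bias excluded)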

   I have taken care to avoid this misconception in the code analysis that follows; please keep it in mind as you read. With that settled, here is the code of conv_layer.hpp:

#ifndef CAFFE_CONV_LAYER_HPP_
#define CAFFE_CONV_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

#include "caffe/layers/base_conv_layer.hpp"

namespace caffe {

/**
 * @brief Convolves the input image with a bank of learned filters,
 *        and (optionally) adds biases.
 *
 *   Caffe convolves by reduction to matrix multiplication. This achieves
 *   high-throughput and generality of input and filter dimensions but comes at
 *   the cost of memory for matrices. This makes use of efficiency in BLAS.
 *
 *   The input is "im2col" transformed to a channel K' x H x W data matrix
 *   for multiplication with the N x K' x H x W filter matrix to yield a
 *   N' x H x W output matrix that is then "col2im" restored. K' is the
 *   input channel * kernel height * kernel width dimension of the unrolled
 *   inputs so that the im2col matrix has a column for each input region to
 *   be filtered. col2im restores the output spatial structure by rolling up
 *   the output channel N' columns of the output matrix.
 */
template <typename Dtype>
class ConvolutionLayer : public BaseConvolutionLayer<Dtype> {
 public:
  /**
   * @param param provides ConvolutionParameter convolution_param,
   *    with ConvolutionLayer options:
   *  - num_output. The number of filters.
   *  - kernel_size / kernel_h / kernel_w. The filter dimensions, given by
   *  kernel_size for square filters or kernel_h and kernel_w for rectangular
   *  filters.
   *  - stride / stride_h / stride_w (\b optional, default 1). The filter
   *  stride, given by stride_size for equal dimensions or stride_h and stride_w
   *  for different strides. By default the convolution is dense with stride 1.
   *  - pad / pad_h / pad_w (\b optional, default 0). The zero-padding for
   *  convolution, given by pad for equal dimensions or pad_h and pad_w for
   *  different padding. Input padding is computed implicitly instead of
   *  actually padding.
   *  - dilation (\b optional, default 1). The filter
   *  dilation, given by dilation_size for equal dimensions for different
   *  dilation. By default the convolution has dilation 1.
   *  - group (\b optional, default 1). The number of filter groups. Group
   *  convolution is a method for reducing parameterization by selectively
   *  connecting input and output channels. The input and output channel dimensions must be divisible
   *  by the number of groups. For group @f$ \geq 1 @f$, the
   *  convolutional filters' input and output channels are separated s.t. each
   *  group takes 1 / group of the input channels and makes 1 / group of the
   *  output channels. Concretely 4 input channels, 8 output channels, and
   *  2 groups separate input channels 1-2 and output channels 1-4 into the
   *  first group and input channels 3-4 and output channels 5-8 into the second
   *  group.
   *  - bias_term (\b optional, default true). Whether to have a bias.
   *  - engine: convolution has CAFFE (matrix multiplication) and CUDNN (library
   *    kernels + stream parallelism) engines.
   */
  explicit ConvolutionLayer(const LayerParameter& param)
      : BaseConvolutionLayer<Dtype>(param) {}  // the constructor body is empty

  virtual inline const char* type() const { return "Convolution"; }

 protected:
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);  // CPU forward pass of the convolution layer
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);  // GPU forward pass of the convolution layer
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);  // CPU backward pass of the convolution layer
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);  // GPU backward pass of the convolution layer
  virtual inline bool reverse_dimensions() { return false; }  // true only for deconvolution; a plain convolution simply returns false
  virtual void compute_output_shape();  // computes the spatial shape of the convolution output
};

}  // namespace caffe

#endif  // CAFFE_CONV_LAYER_HPP_
   Besides declaring the forward and backward passes, conv_layer.hpp defines reverse_dimensions, which indicates whether the spatial dimensions should be reversed (a plain convolution simply returns false), and declares compute_output_shape, which computes the output shape of the layer. Let's now move to conv_layer.cpp and look at how these declared functions are implemented. Here is the code of conv_layer.cpp:
#include <vector>

#include "caffe/layers/conv_layer.hpp"

namespace caffe {

template <typename Dtype>
void ConvolutionLayer<Dtype>::compute_output_shape() {  // compute the output shape of the convolution layer
  const int* kernel_shape_data = this->kernel_shape_.cpu_data();  // kernel size
  const int* stride_data = this->stride_.cpu_data();  // stride
  const int* pad_data = this->pad_.cpu_data();  // padding
  const int* dilation_data = this->dilation_.cpu_data();  // kernel dilation
  this->output_shape_.clear();
  for (int i = 0; i < this->num_spatial_axes_; ++i) {
    // i + 1 to skip channel axis
    const int input_dim = this->input_shape(i + 1);  // height/width of the input blob
    const int kernel_extent = dilation_data[i] * (kernel_shape_data[i] - 1) + 1;  // effective kernel extent after dilation
    const int output_dim = (input_dim + 2 * pad_data[i] - kernel_extent)
        / stride_data[i] + 1;  // height/width of the output blob
    this->output_shape_.push_back(output_dim);
  }
}

template <typename Dtype>
void ConvolutionLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  const Dtype* weight = this->blobs_[0]->cpu_data();  // load the layer's weights; blobs_[0] holds the weights and blobs_[1] the biases
  for (int i = 0; i < bottom.size(); ++i) {
    const Dtype* bottom_data = bottom[i]->cpu_data();  // data of the bottom blob
    Dtype* top_data = top[i]->mutable_cpu_data();
    for (int n = 0; n < this->num_; ++n) {  // num_ is the batch size, i.e. images are processed one at a time
      this->forward_cpu_gemm(bottom_data + n * this->bottom_dim_, weight,
          top_data + n * this->top_dim_);
      if (this->bias_term_) {  // if the bias term is enabled
        const Dtype* bias = this->blobs_[1]->cpu_data();
        this->forward_cpu_bias(top_data + n * this->top_dim_, bias);  // add the bias
      }
    }
  }
}

template <typename Dtype>
void ConvolutionLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  const Dtype* weight = this->blobs_[0]->cpu_data();  // load the weights
  Dtype* weight_diff = this->blobs_[0]->mutable_cpu_diff();  // gradient buffer of the weights
  for (int i = 0; i < top.size(); ++i) {
    const Dtype* top_diff = top[i]->cpu_diff();  // gradient of each top blob
    const Dtype* bottom_data = bottom[i]->cpu_data();  // data of each bottom blob
    Dtype* bottom_diff = bottom[i]->mutable_cpu_diff();  // gradient buffer of each bottom blob
    // Bias gradient, if necessary.
    if (this->bias_term_ && this->param_propagate_down_[1]) {  // if the bias term is enabled and its gradient is needed
      Dtype* bias_diff = this->blobs_[1]->mutable_cpu_diff();  // gradient buffer of the bias
      for (int n = 0; n < this->num_; ++n) {
        this->backward_cpu_bias(bias_diff, top_diff + n * this->top_dim_);  // accumulate the bias gradient for each image in the batch
      }
    }
    if (this->param_propagate_down_[0] || propagate_down[i]) {
      for (int n = 0; n < this->num_; ++n) {
        // gradient w.r.t. weight. Note that we will accumulate diffs.
        if (this->param_propagate_down_[0]) {  // if the weight gradient is needed, accumulate it
          this->weight_cpu_gemm(bottom_data + n * this->bottom_dim_,
              top_diff + n * this->top_dim_, weight_diff);
        }
        // gradient w.r.t. bottom data, if necessary.
        if (propagate_down[i]) {  // if the gradient w.r.t. the bottom data is needed, compute it
          this->backward_cpu_gemm(top_diff + n * this->top_dim_, weight,
              bottom_diff + n * this->bottom_dim_);
        }
      }
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(ConvolutionLayer);
#endif

INSTANTIATE_CLASS(ConvolutionLayer);

}  // namespace caffe
   In compute_output_shape we compute the size of the feature maps output by the convolution layer. Does the formula look familiar? output size = (input size + 2*pad - effective kernel size) / stride + 1, where the effective kernel size accounts for dilation: kernel_extent = dilation*(kernel_size-1)+1.
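   As a quick sanity check with the same numbers used in the im2col illustration later in this post: for a 5*5 input, a 3*3 kernel, pad 0, stride 2 and dilation 1, the effective kernel extent is 1*(3-1)+1 = 3 and the output size is (5+2*0-3)/2+1 = 2, i.e. a 2*2 output feature map.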

The remaining functions define the forward- and backward-pass interfaces of the convolution layer.

   At this point you may feel that the convolution layer code is surprisingly simple. That is because most of the work is encapsulated in base_conv_layer.hpp, so let's open base_conv_layer.hpp and see what it defines. As usual, the source of base_conv_layer.hpp first:

#ifndef CAFFE_BASE_CONVOLUTION_LAYER_HPP_
#define CAFFE_BASE_CONVOLUTION_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/im2col.hpp"

namespace caffe {

/**
 * @brief Abstract base class that factors out the BLAS code common to
 *        ConvolutionLayer and DeconvolutionLayer.
 */
template <typename Dtype>
class BaseConvolutionLayer : public Layer<Dtype> {
 public:
  explicit BaseConvolutionLayer(const LayerParameter& param)
      : Layer<Dtype>(param) {}  // the constructor body is empty
  virtual void LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);  // layer setup; see the .cpp analysis below
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);  // output-shape setup (Reshape); see the .cpp analysis below

  virtual inline int MinBottomBlobs() const { return 1; }
  virtual inline int MinTopBlobs() const { return 1; }
  virtual inline bool EqualNumBottomTopBlobs() const { return true; }  // a convolution layer does not change the number of blobs, though it usually changes their channel count

 protected:
  // Helper functions that abstract away the column buffer and gemm arguments.
  // The last argument in forward_cpu_gemm is so that we can skip the im2col if
  // we just called weight_cpu_gemm with the same input.
  void forward_cpu_gemm(const Dtype* input, const Dtype* weights,
      Dtype* output, bool skip_im2col = false);  // CPU forward pass of the data
  void forward_cpu_bias(Dtype* output, const Dtype* bias);  // CPU forward pass of the bias
  void backward_cpu_gemm(const Dtype* input, const Dtype* weights,
      Dtype* output);  // CPU backward pass for the data gradient
  void weight_cpu_gemm(const Dtype* input, const Dtype* output, Dtype*
      weights);  // CPU computation of the weight gradient
  void backward_cpu_bias(Dtype* bias, const Dtype* input);  // CPU backward pass for the bias gradient

#ifndef CPU_ONLY
  void forward_gpu_gemm(const Dtype* col_input, const Dtype* weights,
      Dtype* output, bool skip_im2col = false);  // GPU forward pass of the data
  void forward_gpu_bias(Dtype* output, const Dtype* bias);  // GPU forward pass of the bias
  void backward_gpu_gemm(const Dtype* input, const Dtype* weights,
      Dtype* col_output);  // GPU backward pass for the data gradient
  void weight_gpu_gemm(const Dtype* col_input, const Dtype* output, Dtype*
      weights);  // GPU computation of the weight gradient
  void backward_gpu_bias(Dtype* bias, const Dtype* input);  // GPU backward pass for the bias gradient
#endif

  /// @brief The spatial dimensions of the input.
  // returns a spatial dimension (e.g. height or width) of the input blob; callers pass i starting from 1, i.e. the first axis after the channel axis
  inline int input_shape(int i) {
    return (*bottom_shape_)[channel_axis_ + i];
  }
  // reverse_dimensions should return true iff we are implementing deconv, so
  // that conv helpers know which dimensions are which.
  virtual bool reverse_dimensions() = 0;  // returns true iff this is a deconvolution (ConvolutionLayer simply returns false)
  // Compute height_out_ and width_out_ from other parameters.
  virtual void compute_output_shape() = 0;  // computes the output shape of the layer

  /// @brief The spatial dimensions of a filter kernel.
  Blob<int> kernel_shape_;  // kernel shape (height x width)
  /// @brief The spatial dimensions of the stride.
  Blob<int> stride_;  // stride
  /// @brief The spatial dimensions of the padding.
  Blob<int> pad_;  // zero-padding applied at the borders during convolution
  /// @brief The spatial dimensions of the dilation.
  Blob<int> dilation_;  // kernel dilation parameters
  /// @brief The spatial dimensions of the convolution input.
  Blob<int> conv_input_shape_;  // shape of the convolution input
  /// @brief The spatial dimensions of the col_buffer.
  vector<int> col_buffer_shape_;  // shape of the column buffer produced by im2col
  /// @brief The spatial dimensions of the output.
  vector<int> output_shape_;  // spatial shape of the output
  const vector<int>* bottom_shape_;  // pointer to the shape of the input blob

  int num_spatial_axes_;  // number of spatial axes the convolution operates on, usually 2 (2-D convolution)
  int bottom_dim_;  // data volume of one input image: channels x height x width
  int top_dim_;  // data volume of one output image: output channels x output height x output width

  int channel_axis_;  // index of the channel axis, usually 1 for (N, C, H, W) blobs
  int num_;  // number of images in the batch
  int channels_;  // number of channels of a single input blob
  int group_;  // number of convolution groups
  int out_spatial_dim_;  // spatial size (height x width) of the output
  int weight_offset_;  // offset between the weights of consecutive groups; relevant when group_ > 1
  int num_output_;  // number of output channels of the layer
  bool bias_term_;  // whether the bias term is enabled
  bool is_1x1_;  // whether this is a 1x1 convolution (1x1 kernel, stride 1, pad 0)
  bool force_nd_im2col_;  // whether to force the n-dimensional im2col path

 private:
  // wrap im2col/col2im so we don't have to remember the (long) argument lists
  inline void conv_im2col_cpu(const Dtype* data, Dtype* col_buff) {  // CPU wrapper: unrolls the kernel-sized windows of a feature map into parallel column vectors
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      im2col_cpu(data, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], col_buff);
    } else {
      im2col_nd_cpu(data, num_spatial_axes_, conv_input_shape_.cpu_data(),
          col_buffer_shape_.data(), kernel_shape_.cpu_data(),
          pad_.cpu_data(), stride_.cpu_data(), dilation_.cpu_data(), col_buff);
    }
  }
  inline void conv_col2im_cpu(const Dtype* col_buff, Dtype* data) {  // CPU wrapper: folds the column vectors back into a feature map
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      col2im_cpu(col_buff, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], data);
    } else {
      col2im_nd_cpu(col_buff, num_spatial_axes_, conv_input_shape_.cpu_data(),
          col_buffer_shape_.data(), kernel_shape_.cpu_data(),
          pad_.cpu_data(), stride_.cpu_data(), dilation_.cpu_data(), data);
    }
  }
#ifndef CPU_ONLY
  inline void conv_im2col_gpu(const Dtype* data, Dtype* col_buff) {  // GPU wrapper: unrolls the kernel-sized windows of a feature map into parallel column vectors
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      im2col_gpu(data, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], col_buff);
    } else {
      im2col_nd_gpu(data, num_spatial_axes_, num_kernels_im2col_,
          conv_input_shape_.gpu_data(), col_buffer_.gpu_shape(),
          kernel_shape_.gpu_data(), pad_.gpu_data(),
          stride_.gpu_data(), dilation_.gpu_data(), col_buff);
    }
  }
  inline void conv_col2im_gpu(const Dtype* col_buff, Dtype* data) {  // GPU wrapper: folds the column vectors back into a feature map
    if (!force_nd_im2col_ && num_spatial_axes_ == 2) {
      col2im_gpu(col_buff, conv_in_channels_,
          conv_input_shape_.cpu_data()[1], conv_input_shape_.cpu_data()[2],
          kernel_shape_.cpu_data()[0], kernel_shape_.cpu_data()[1],
          pad_.cpu_data()[0], pad_.cpu_data()[1],
          stride_.cpu_data()[0], stride_.cpu_data()[1],
          dilation_.cpu_data()[0], dilation_.cpu_data()[1], data);
    } else {
      col2im_nd_gpu(col_buff, num_spatial_axes_, num_kernels_col2im_,
          conv_input_shape_.gpu_data(), col_buffer_.gpu_shape(),
          kernel_shape_.gpu_data(), pad_.gpu_data(), stride_.gpu_data(),
          dilation_.gpu_data(), data);
    }
  }
#endif

  int num_kernels_im2col_;  // work-item count for the n-d im2col GPU kernel (input channels x output spatial size)
  int num_kernels_col2im_;  // work-item count for the n-d col2im GPU kernel (the data volume of one input image)
  int conv_out_channels_;  // number of output channels of the convolution
  int conv_in_channels_;  // number of input channels of the convolution
  int conv_out_spatial_dim_;  // spatial size of a single output channel
  int kernel_dim_;  // number of weights in one filter: (input channels / group) x kernel height x kernel width
  int col_offset_;  // size of one group's slice of the column buffer: kernel_dim_ x conv_out_spatial_dim_
  int output_offset_;  // size of one group's slice of the output: (conv_out_channels_ / group_) x conv_out_spatial_dim_

  Blob<Dtype> col_buffer_;  // column buffer holding the im2col representation of one image
  Blob<Dtype> bias_multiplier_;  // bias multiplier: a vector of ones used to broadcast the bias over spatial positions
};

}  // namespace caffe

#endif  // CAFFE_BASE_CONVOLUTION_LAYER_HPP_
   Quite a few members are declared in the code above. First come the LayerSetUp and Reshape functions, both analyzed in detail in the corresponding .cpp code; next, a set of CPU- and GPU-side forward and backward helper functions; then input_shape, which compute_output_shape in conv_layer.cpp calls to obtain the spatial size of the layer's input feature map; after that, a long list of members describing the layer's input, output, weights and operating parameters, together with the wrapper functions that convert between a feature map and its parallel column vectors (im2col/col2im), which the figure further below illustrates. One tip: in this header, every member whose name contains "offset" is related to convolution groups, and these members do the heavy lifting when group convolution is used (a small worked sketch follows).
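   As a concrete illustration, take some hypothetical numbers (4 input channels, 8 output channels, group_ = 2, a 3*3 kernel and a 10*10 output) and apply the formulas used in LayerSetUp and Reshape further below; this is only a sketch of what the "offset" members mean:

const int conv_in_channels_ = 4, conv_out_channels_ = 8, group_ = 2;  // hypothetical sizes
const int kernel_h = 3, kernel_w = 3;
const int conv_out_spatial_dim_ = 10 * 10;  // output height x output width
const int kernel_dim_ = (conv_in_channels_ / group_) * kernel_h * kernel_w;      // 2*3*3 = 18 weights per filter
const int weight_offset_ = conv_out_channels_ * kernel_dim_ / group_;            // 8*18/2 = 72: the weights of one group
const int col_offset_ = kernel_dim_ * conv_out_spatial_dim_;                     // 18*100 = 1800: one group's slice of col_buffer_
const int output_offset_ = conv_out_channels_ * conv_out_spatial_dim_ / group_;  // 8*100/2 = 400: one group's slice of the output
// In the gemm loops shown later, group g reads weights + weight_offset_ * g and
// col_buff + col_offset_ * g, and writes output + output_offset_ * g.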


   The figure above shows, for a convolution with a 5*5 input image, a 3*3 kernel, pad 0 in both directions and stride 2 in both directions, the original image and the corresponding column vectors. It illustrates that im2col converts each kernel-sized window of the original image into a column vector and stores these columns side by side, while col2im performs the inverse operation.
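   Staying with this example and assuming a single input channel, the buffer sizes involved follow directly from the col_buffer_shape_ computed in Reshape below; this small sketch is only an illustration:

// im2col turns every kernel-sized window of the input into one column.
const int channels = 1, height = 5, width = 5;              // single-channel input assumed for the figure
const int kernel_h = 3, kernel_w = 3, pad = 0, stride = 2;
const int output_h = (height + 2 * pad - kernel_h) / stride + 1;  // = 2
const int output_w = (width + 2 * pad - kernel_w) / stride + 1;   // = 2
// The column buffer has (channels * kernel_h * kernel_w) rows, one row per kernel weight,
// and (output_h * output_w) columns, one column per output position:
const int col_rows = channels * kernel_h * kernel_w;  // 9
const int col_cols = output_h * output_w;             // 4
// col2im performs the inverse mapping, accumulating overlapping windows back into the image.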

   Now let's look at the code of base_conv_layer.cpp:

#include <algorithm>
#include <vector>

#include "caffe/filler.hpp"
#include "caffe/layers/base_conv_layer.hpp"
#include "caffe/util/im2col.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::LayerSetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  // Configure the kernel size, padding, stride, and inputs.
  ConvolutionParameter conv_param = this->layer_param_.convolution_param();  // read the convolution parameters
  force_nd_im2col_ = conv_param.force_nd_im2col();  // flag that forces the n-dimensional im2col path
  /* channel_axis_ reads the axis parameter from the layer definition, 1 by default, i.e. the channel axis.
  For an (N, C, H, W) input blob, the kernels belonging to one output channel each convolve one input
  channel in 2-D, and the per-channel results are summed to form a single output feature map. */
  channel_axis_ = bottom[0]->CanonicalAxisIndex(conv_param.axis());
  const int first_spatial_axis = channel_axis_ + 1;  // index of the first spatial axis of the input, usually H (height)
  const int num_axes = bottom[0]->num_axes();  // number of axes of the bottom blob
  num_spatial_axes_ = num_axes - first_spatial_axis;  // number of spatial axes the convolution operates on
  CHECK_GE(num_spatial_axes_, 0);  // must be non-negative
  vector<int> bottom_dim_blob_shape(1, num_spatial_axes_ + 1);  // shape vector for the convolution input, usually 3 entries (C, H, W)
  vector<int> spatial_dim_blob_shape(1, std::max(num_spatial_axes_, 1));  // shape vector for the kernel shape
  // Setup filter kernel dimensions (kernel_shape_).
  kernel_shape_.Reshape(spatial_dim_blob_shape);  // allocate the kernel shape (height x width)
  int* kernel_shape_data = kernel_shape_.mutable_cpu_data();  // pointer to the kernel shape data
  /* If kernel_h/kernel_w are specified for a 2-D convolution, use them; in that case the kernel_size
  parameter must not be given as well, otherwise it is an error. If kernel_h/kernel_w are not given,
  fill the kernel shape from the kernel_size parameter; kernels are usually square. */
  if (conv_param.has_kernel_h() || conv_param.has_kernel_w()) {
    CHECK_EQ(num_spatial_axes_, 2)
        << "kernel_h & kernel_w can only be used for 2D convolution.";
    CHECK_EQ(0, conv_param.kernel_size_size())
        << "Either kernel_size or kernel_h/w should be specified; not both.";
    kernel_shape_data[0] = conv_param.kernel_h();
    kernel_shape_data[1] = conv_param.kernel_w();
  } else {
    const int num_kernel_dims = conv_param.kernel_size_size();
    CHECK(num_kernel_dims == 1 || num_kernel_dims == num_spatial_axes_)
        << "kernel_size must be specified once, or once per spatial dimension "
        << "(kernel_size specified " << num_kernel_dims << " times; "
        << num_spatial_axes_ << " spatial dims).";
      for (int i = 0; i < num_spatial_axes_; ++i) {
        kernel_shape_data[i] =
            conv_param.kernel_size((num_kernel_dims == 1) ? 0 : i);
      }
  }
  // check that the kernel dimensions (height and width) are valid
  for (int i = 0; i < num_spatial_axes_; ++i) {
    CHECK_GT(kernel_shape_data[i], 0) << "Filter dimensions must be nonzero.";
  }
  // Setup stride dimensions (stride_).
  stride_.Reshape(spatial_dim_blob_shape);  // allocate the stride; for a 2-D convolution the stride has two entries as well
  int* stride_data = stride_.mutable_cpu_data();  // pointer to the stride data
  /* If stride_h/stride_w are given for a 2-D convolution, use them. Otherwise take the stride parameter
  from the layer definition in the network prototxt; if that is missing too, the stride defaults to
  kDefaultStride, i.e. 1. Usually only one value is given, meaning the same stride in height and width. */
  if (conv_param.has_stride_h() || conv_param.has_stride_w()) {
    CHECK_EQ(num_spatial_axes_, 2)
        << "stride_h & stride_w can only be used for 2D convolution.";
    CHECK_EQ(0, conv_param.stride_size())
        << "Either stride or stride_h/w should be specified; not both.";
    stride_data[0] = conv_param.stride_h();
    stride_data[1] = conv_param.stride_w();
  } else {
    const int num_stride_dims = conv_param.stride_size();
    CHECK(num_stride_dims == 0 || num_stride_dims == 1 ||
          num_stride_dims == num_spatial_axes_)
        << "stride must be specified once, or once per spatial dimension "
        << "(stride specified " << num_stride_dims << " times; "
        << num_spatial_axes_ << " spatial dims).";
    const int kDefaultStride = 1;
    for (int i = 0; i < num_spatial_axes_; ++i) {
      stride_data[i] = (num_stride_dims == 0) ? kDefaultStride :
          conv_param.stride((num_stride_dims == 1) ? 0 : i);
      CHECK_GT(stride_data[i], 0) << "Stride dimensions must be nonzero.";
    }
  }
  // Setup pad dimensions (pad_).
  /* If pad_h/pad_w are given, use them. Otherwise take the pad parameter from the layer definition;
  if that is missing too, the padding defaults to kDefaultPad, i.e. 0. Usually only one value is
  given, meaning the same padding in the height and width directions. */
  pad_.Reshape(spatial_dim_blob_shape);
  int* pad_data = pad_.mutable_cpu_data();
  if (conv_param.has_pad_h() || conv_param.has_pad_w()) {
    CHECK_EQ(num_spatial_axes_, 2)
        << "pad_h & pad_w can only be used for 2D convolution.";
    CHECK_EQ(0, conv_param.pad_size())
        << "Either pad or pad_h/w should be specified; not both.";
    pad_data[0] = conv_param.pad_h();
    pad_data[1] = conv_param.pad_w();
  } else {
    const int num_pad_dims = conv_param.pad_size();
    CHECK(num_pad_dims == 0 || num_pad_dims == 1 ||
          num_pad_dims == num_spatial_axes_)
        << "pad must be specified once, or once per spatial dimension "
        << "(pad specified " << num_pad_dims << " times; "
        << num_spatial_axes_ << " spatial dims).";
    const int kDefaultPad = 0;
    for (int i = 0; i < num_spatial_axes_; ++i) {
      pad_data[i] = (num_pad_dims == 0) ? kDefaultPad :
          conv_param.pad((num_pad_dims == 1) ? 0 : i);
    }
  }
  /* If dilation values are given, use them. Otherwise take the dilation parameter from the layer
  definition; if that is missing too, it defaults to kDefaultDilation, i.e. 1, meaning the kernel
  is not dilated. */
  // Setup dilation dimensions (dilation_).
  dilation_.Reshape(spatial_dim_blob_shape);
  int* dilation_data = dilation_.mutable_cpu_data();
  const int num_dilation_dims = conv_param.dilation_size();
  CHECK(num_dilation_dims == 0 || num_dilation_dims == 1 ||
        num_dilation_dims == num_spatial_axes_)
      << "dilation must be specified once, or once per spatial dimension "
      << "(dilation specified " << num_dilation_dims << " times; "
      << num_spatial_axes_ << " spatial dims).";
  const int kDefaultDilation = 1;
  for (int i = 0; i < num_spatial_axes_; ++i) {
    dilation_data[i] = (num_dilation_dims == 0) ? kDefaultDilation :
                       conv_param.dilation((num_dilation_dims == 1) ? 0 : i);
  }
  // Special case: im2col is the identity for 1x1 convolution with stride 1
  // and no padding, so flag for skipping the buffer and transformation.
  // check whether this is a 1x1 convolution
  is_1x1_ = true;
  for (int i = 0; i < num_spatial_axes_; ++i) {
    is_1x1_ &=
        kernel_shape_data[i] == 1 && stride_data[i] == 1 && pad_data[i] == 0;
    if (!is_1x1_) { break; }
  }
  // Configure output channels and groups.
  channels_ = bottom[0]->shape(channel_axis_);  // number of channels of the input blob
  num_output_ = this->layer_param_.convolution_param().num_output();  // number of output channels of the layer
  CHECK_GT(num_output_, 0);  // the number of outputs must be positive
  group_ = this->layer_param_.convolution_param().group();  // number of convolution groups
  CHECK_EQ(channels_ % group_, 0);  // the input channel count must be divisible by the group count
  CHECK_EQ(num_output_ % group_, 0)  // the output channel count must be divisible by the group count
      << "Number of output should be multiples of group.";
  if (reverse_dimensions()) {  // for deconvolution the roles of the input and output channels are swapped; otherwise they are kept as-is
    conv_out_channels_ = channels_;
    conv_in_channels_ = num_output_;
  } else {
    conv_out_channels_ = num_output_;
    conv_in_channels_ = channels_;
  }
  // Handle the parameters: weights and biases.
  // - blobs_[0] holds the filter weights
  // - blobs_[1] holds the biases (optional)
  vector<int> weight_shape(2);  // shape of the weight blob
  weight_shape[0] = conv_out_channels_;  // first dimension: the number of output channels (each output channel has its own set of kernels); think of it as num
  weight_shape[1] = conv_in_channels_ / group_;  // second dimension: the input channel count divided by the group count; think of it as channel
  for (int i = 0; i < num_spatial_axes_; ++i) {
    weight_shape.push_back(kernel_shape_data[i]);  // remaining dimensions: the kernel height and width
  }
  bias_term_ = this->layer_param_.convolution_param().bias_term();  // whether the bias term is enabled
  vector<int> bias_shape(bias_term_, num_output_);  // shape of the bias blob; if bias_term_ is true (1), bias_shape is the one-element vector [num_output_]
  if (this->blobs_.size() > 0) {
    CHECK_EQ(1 + bias_term_, this->blobs_.size())  // check that the number of parameter blobs is consistent
        << "Incorrect number of weight blobs.";
    if (weight_shape != this->blobs_[0]->shape()) {  // if weight_shape does not match the shape of blobs_[0], report a fatal error
      Blob<Dtype> weight_shaped_blob(weight_shape);
      LOG(FATAL) << "Incorrect weight shape: expected shape "
          << weight_shaped_blob.shape_string() << "; instead, shape was "
          << this->blobs_[0]->shape_string();
    }
    if (bias_term_ && bias_shape != this->blobs_[1]->shape()) {  // if bias_shape does not match the shape of blobs_[1], report a fatal error
      Blob<Dtype> bias_shaped_blob(bias_shape);
      LOG(FATAL) << "Incorrect bias shape: expected shape "
          << bias_shaped_blob.shape_string() << "; instead, shape was "
          << this->blobs_[1]->shape_string();
    }
    LOG(INFO) << "Skipping parameter initialization";
  } else {  // if blobs_ is empty, size it according to whether the bias term is enabled
    if (bias_term_) {
      this->blobs_.resize(2);
    } else {
      this->blobs_.resize(1);
    }
    // Initialize and fill the weights:
    // output channels x input channels per-group x kernel height x kernel width
    this->blobs_[0].reset(new Blob<Dtype>(weight_shape));  // allocate blobs_[0] with shape weight_shape
    shared_ptr<Filler<Dtype> > weight_filler(GetFiller<Dtype>(
        this->layer_param_.convolution_param().weight_filler()));  // create the weight filler specified in the layer definition (a constant filler with value 0 by default)
    weight_filler->Fill(this->blobs_[0].get());  // fill the weights
    // If necessary, initialize and fill the biases.
    if (bias_term_) {
      this->blobs_[1].reset(new Blob<Dtype>(bias_shape));  // if the bias is enabled, allocate blobs_[1]
      shared_ptr<Filler<Dtype> > bias_filler(GetFiller<Dtype>(
          this->layer_param_.convolution_param().bias_filler()));  // create the bias filler from the layer definition (constant 0 by default)
      bias_filler->Fill(this->blobs_[1].get());  // fill the biases
    }
  }
  kernel_dim_ = this->blobs_[0]->count(1);  // number of weights in one filter: (input channels / group) x kernel height x kernel width
  weight_offset_ = conv_out_channels_ * kernel_dim_ / group_;  // offset between the weights of consecutive groups: (conv_out_channels_ / group_) * kernel_dim_
  // Propagate gradients to the parameters (as directed by backward pass).
  this->param_propagate_down_.resize(this->blobs_.size(), true);  // enable gradient propagation to the weights and (optionally) the bias
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
  const int first_spatial_axis = channel_axis_ + 1;  // index of the first spatial axis, usually height
  /* check that the number of axes of the bottom blob equals the first spatial axis index plus the number of spatial axes */
  CHECK_EQ(bottom[0]->num_axes(), first_spatial_axis + num_spatial_axes_)
      << "bottom num_axes may not change.";
  num_ = bottom[0]->count(0, channel_axis_);  // number of images in the batch
  CHECK_EQ(bottom[0]->shape(channel_axis_), channels_)  // check that the input channel count matches
      << "Input size incompatible with convolution kernel.";
  // TODO: generalize to handle inputs of different shapes.
  for (int bottom_id = 1; bottom_id < bottom.size(); ++bottom_id) {
    CHECK(bottom[0]->shape() == bottom[bottom_id]->shape())  // if there are several bottom blobs, check that they all have the same shape
        << "All inputs must have the same shape.";
  }
  // Shape the tops.
  bottom_shape_ = &bottom[0]->shape();  // remember the shape of the input blob
  compute_output_shape();  // compute the spatial shape of the output blob
  vector<int> top_shape(bottom[0]->shape().begin(),  // top_shape starts with the leading axes of the input blob (e.g. the batch size)
      bottom[0]->shape().begin() + channel_axis_);
  top_shape.push_back(num_output_);  // append the number of output channels
  for (int i = 0; i < num_spatial_axes_; ++i) {
    top_shape.push_back(output_shape_[i]);  // append the output spatial dimensions
  }
  for (int top_id = 0; top_id < top.size(); ++top_id) {
    top[top_id]->Reshape(top_shape);  // reshape every top blob
  }
  if (reverse_dimensions()) {
    /* for deconvolution, conv_out_spatial_dim_ is the per-channel spatial size of bottom[0],
    which plays the role of the convolution output */
    conv_out_spatial_dim_ = bottom[0]->count(first_spatial_axis);
  } else {
    /* otherwise it is the per-channel spatial size of top[0] */
    conv_out_spatial_dim_ = top[0]->count(first_spatial_axis);
  }
  col_offset_ = kernel_dim_ * conv_out_spatial_dim_;  // size of one group's slice of the column buffer
  output_offset_ = conv_out_channels_ * conv_out_spatial_dim_ / group_;  // size of one group's slice of the output
  // Setup input dimensions (conv_input_shape_).
  vector<int> bottom_dim_blob_shape(1, num_spatial_axes_ + 1);  // shape vector for the convolution input, usually 3 entries (C, H, W)
  conv_input_shape_.Reshape(bottom_dim_blob_shape);  // allocate conv_input_shape_, usually of size 3
  int* conv_input_shape_data = conv_input_shape_.mutable_cpu_data();
  for (int i = 0; i < num_spatial_axes_ + 1; ++i) {  // fill the convolution input shape, usually in the order channel -> height -> width
    if (reverse_dimensions()) {
      conv_input_shape_data[i] = top[0]->shape(channel_axis_ + i);
    } else {
      conv_input_shape_data[i] = bottom[0]->shape(channel_axis_ + i);
    }
  }
  // The im2col result buffer will only hold one image at a time to avoid
  // overly large memory usage. In the special case of 1x1 convolution
  // it goes lazily unused to save memory.
  col_buffer_shape_.clear();
  col_buffer_shape_.push_back(kernel_dim_ * group_);  // first dimension of the column buffer: (total input channels) x kernel height x kernel width
  for (int i = 0; i < num_spatial_axes_; ++i) {  // then append the output spatial dimensions
    if (reverse_dimensions()) {
      col_buffer_shape_.push_back(input_shape(i + 1));
    } else {
      col_buffer_shape_.push_back(output_shape_[i]);
    }
  }
  col_buffer_.Reshape(col_buffer_shape_);  // allocate the column buffer
  bottom_dim_ = bottom[0]->count(channel_axis_);  // data volume of one input image: channels x height x width
  top_dim_ = top[0]->count(channel_axis_);  // data volume of one output image: output channels x output height x output width
  num_kernels_im2col_ = conv_in_channels_ * conv_out_spatial_dim_;  // work-item count for the n-d im2col GPU kernel
  num_kernels_col2im_ = reverse_dimensions() ? top_dim_ : bottom_dim_;  // work-item count for the n-d col2im GPU kernel
  // Set up the all ones "bias multiplier" for adding biases by BLAS
  out_spatial_dim_ = top[0]->count(first_spatial_axis);  // spatial size of a single output channel
  if (bias_term_) {  // if the bias is enabled, set up the bias multiplier blob
    // the multiplier has one entry per output spatial position, so the per-channel bias can be
    // broadcast to every position with a single gemm call
    vector<int> bias_multiplier_shape(1, out_spatial_dim_);
    bias_multiplier_.Reshape(bias_multiplier_shape);
    caffe_set(bias_multiplier_.count(), Dtype(1),  // set every multiplier entry to 1
        bias_multiplier_.mutable_cpu_data());
        bias_multiplier_.mutable_cpu_data());
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_cpu_gemm(const Dtype* input,  // CPU forward pass of the data
    const Dtype* weights, Dtype* output, bool skip_im2col) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    if (!skip_im2col) {  // im2col unrolls the kernel-sized windows of the input feature map into parallel column vectors
      conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());
    }
    col_buff = col_buffer_.cpu_data();
  }
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, conv_out_channels_ /
        group_, conv_out_spatial_dim_, kernel_dim_,
        (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)0., output + output_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_cpu_bias(Dtype* output,  // CPU forward pass of the bias
    const Dtype* bias) {
  caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num_output_,
      out_spatial_dim_, 1, (Dtype)1., bias, bias_multiplier_.cpu_data(),
      (Dtype)1., output);
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_cpu_gemm(const Dtype* output,  // CPU backward pass for the data gradient
    const Dtype* weights, Dtype* input) {
  Dtype* col_buff = col_buffer_.mutable_cpu_data();
  if (is_1x1_) {
    col_buff = input;
  }
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasTrans, CblasNoTrans, kernel_dim_,
        conv_out_spatial_dim_, conv_out_channels_ / group_,
        (Dtype)1., weights + weight_offset_ * g, output + output_offset_ * g,
        (Dtype)0., col_buff + col_offset_ * g);
  }
  if (!is_1x1_) {
    conv_col2im_cpu(col_buff, input);  // fold the columns back into image form (the gradient w.r.t. the bottom data)
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::weight_cpu_gemm(const Dtype* input,  // CPU computation (accumulation) of the weight gradient
    const Dtype* output, Dtype* weights) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());
    col_buff = col_buffer_.cpu_data();
  }
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasTrans, conv_out_channels_ / group_,
        kernel_dim_, conv_out_spatial_dim_,
        (Dtype)1., output + output_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)1., weights + weight_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_cpu_bias(Dtype* bias,  // CPU backward pass for the bias gradient
    const Dtype* input) {
  caffe_cpu_gemv<Dtype>(CblasNoTrans, num_output_, out_spatial_dim_, 1.,
      input, bias_multiplier_.cpu_data(), 1., bias);
}

#ifndef CPU_ONLY

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_gpu_gemm(const Dtype* input,  // GPU forward pass of the data
    const Dtype* weights, Dtype* output, bool skip_im2col) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    if (!skip_im2col) {
      conv_im2col_gpu(input, col_buffer_.mutable_gpu_data());
    }
    col_buff = col_buffer_.gpu_data();
  }
  for (int g = 0; g < group_; ++g) {
    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, conv_out_channels_ /
        group_, conv_out_spatial_dim_, kernel_dim_,
        (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)0., output + output_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_gpu_bias(Dtype* output,  // GPU forward pass of the bias
    const Dtype* bias) {
  caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, num_output_,
      out_spatial_dim_, 1, (Dtype)1., bias, bias_multiplier_.gpu_data(),
      (Dtype)1., output);
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_gpu_gemm(const Dtype* output,  // GPU backward pass for the data gradient
    const Dtype* weights, Dtype* input) {
  Dtype* col_buff = col_buffer_.mutable_gpu_data();
  if (is_1x1_) {
    col_buff = input;
  }
  for (int g = 0; g < group_; ++g) {
    caffe_gpu_gemm<Dtype>(CblasTrans, CblasNoTrans, kernel_dim_,
        conv_out_spatial_dim_, conv_out_channels_ / group_,
        (Dtype)1., weights + weight_offset_ * g, output + output_offset_ * g,
        (Dtype)0., col_buff + col_offset_ * g);
  }
  if (!is_1x1_) {
    conv_col2im_gpu(col_buff, input);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::weight_gpu_gemm(const Dtype* input,  // GPU computation (accumulation) of the weight gradient
    const Dtype* output, Dtype* weights) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    conv_im2col_gpu(input, col_buffer_.mutable_gpu_data());
    col_buff = col_buffer_.gpu_data();
  }
  for (int g = 0; g < group_; ++g) {
    caffe_gpu_gemm<Dtype>(CblasNoTrans, CblasTrans, conv_out_channels_ / group_,
        kernel_dim_, conv_out_spatial_dim_,
        (Dtype)1., output + output_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)1., weights + weight_offset_ * g);
  }
}

template <typename Dtype>
void BaseConvolutionLayer<Dtype>::backward_gpu_bias(Dtype* bias,  // GPU backward pass for the bias gradient
    const Dtype* input) {
  caffe_gpu_gemv<Dtype>(CblasNoTrans, num_output_, out_spatial_dim_, 1.,
      input, bias_multiplier_.gpu_data(), 1., bias);
}

#endif  // !CPU_ONLY

INSTANTIATE_CLASS(BaseConvolutionLayer);

}  // namespace caffe

   Detailed comments are given in the code above. In base_conv_layer.cpp, LayerSetUp first initializes the kernel size, stride, padding and dilation, and then uses these parameters to set up the shapes of the layer's learnable parameters. Reshape lays out the output of the layer and also initializes a bias multiplier; this is simply a vector of ones, which lets a single gemm call add each channel's one bias value to every spatial position of that channel's output. After that come the CPU and GPU forward and backward helper functions, which mostly delegate to the routines in caffe/util/math_functions.hpp; here it is enough to understand which quantity (data, data gradient, weight gradient, bias gradient) each function processes and in which direction (forward or backward).
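   To summarize how the forward pass reduces to matrix multiplication, here is a self-contained sketch (naive loops instead of BLAS, a hypothetical function name, one image, no bias) of the product that forward_cpu_gemm delegates to caffe_cpu_gemm for every group g; it illustrates the data layout and is not the actual Caffe code:

#include <vector>

// For each group g: output_g [Cout/group x OutSpatial] =
//   weights_g [Cout/group x kernel_dim] * col_g [kernel_dim x OutSpatial]
void naive_group_forward(const std::vector<float>& weights,   // conv_out_channels x kernel_dim, row-major
                         const std::vector<float>& col_buff,  // (kernel_dim * group) x out_spatial, row-major
                         std::vector<float>& output,          // conv_out_channels x out_spatial, row-major
                         int conv_out_channels, int kernel_dim,
                         int out_spatial, int group) {
  const int weight_offset = conv_out_channels * kernel_dim / group;
  const int col_offset = kernel_dim * out_spatial;
  const int output_offset = conv_out_channels * out_spatial / group;
  for (int g = 0; g < group; ++g) {
    for (int oc = 0; oc < conv_out_channels / group; ++oc) {   // output channel within the group
      for (int s = 0; s < out_spatial; ++s) {                  // output spatial position
        float acc = 0.f;
        for (int k = 0; k < kernel_dim; ++k) {                 // one filter weight
          acc += weights[weight_offset * g + oc * kernel_dim + k]
               * col_buff[col_offset * g + k * out_spatial + s];
        }
        output[output_offset * g + oc * out_spatial + s] = acc;
      }
    }
  }
}

   When the bias is enabled, forward_cpu_bias then adds it as a rank-one update: output (num_output_ x out_spatial_dim_) += bias (num_output_ x 1) * bias_multiplier_ (1 x out_spatial_dim_); since bias_multiplier_ is all ones, every spatial position of a channel receives that channel's single bias value.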

   With that, the walkthrough of the convolution layer is almost complete. In my view the implementation is quite tightly encapsulated: the low-level routines that actually carry out the convolution forward and backward computations are not shown here, but from this glimpse you can see the whole picture, and I believe the structure of the convolution layer already makes its low-level behavior understandable; I plan to analyze the related code in a later post. Reading this source code, I deeply admire the hacker spirit and craftsmanship of the Caffe developers, who manage to write code with such artistic flair.

   Having finished analyzing the convolution layer, I am also convinced that reading source code is an effective way to improve one's overall skills: while dissecting the code you not only grasp the authors' theoretical ideas but also pick up disciplined coding habits and improve your own coding ability. As a reminder, I am still a beginner in deep learning, so there are bound to be mistakes and omissions in this analysis; I sincerely hope readers will point them out, and I will correct them.

   You are welcome to read my subsequent posts analyzing the Caffe source code; your support and encouragement are my greatest motivation!

written by jiong

How can you see life shine if you do not persist to the end?