
Source Code Analysis of Caffe's EuclideanLoss Layer

The EuclideanLoss layer in Caffe computes the L2 loss (i.e., the sum-of-squares loss). Its loss function is:

E = \frac{1}{2N} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n \right| \right|_2^2

Here N is the first dimension of the input blob (i.e., bottom[0]->num()), and \hat{y}_n and y_n may be vectors or scalars; the former are the predictions and the latter the labels (i.e., the target values).
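When \hat{y}_n and y_n are K-dimensional vectors, the squared norm expands element-wise, so the loss is simply half of the per-example average of the summed squared differences:

E = \frac{1}{2N} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n \right| \right|_2^2 = \frac{1}{2N} \sum\limits_{n=1}^N \sum\limits_{k=1}^K \left( \hat{y}_{n,k} - y_{n,k} \right)^2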

Let's first look at what the layer's header file defines:

#ifndef CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_
#define CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_

#include <vector>

#include "caffe/blob.hpp"
#include "caffe/layer.hpp"
#include "caffe/proto/caffe.pb.h"

#include "caffe/layers/loss_layer.hpp"

namespace caffe {

/**
 * @brief Computes the Euclidean (L2) loss @f$
 *          E = \frac{1}{2N} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n
 *        \right| \right|_2^2 @f$ for real-valued regression tasks.
 *
 * @param bottom input Blob vector (length 2)
 *   -# @f$ (N \times C \times H \times W) @f$
 *      the predictions @f$ \hat{y} \in [-\infty, +\infty]@f$
 *   -# @f$ (N \times C \times H \times W) @f$
 *      the targets @f$ y \in [-\infty, +\infty]@f$
 * @param top output Blob vector (length 1)
 *   -# @f$ (1 \times 1 \times 1 \times 1) @f$
 *      the computed Euclidean loss: @f$ E =
 *          \frac{1}{2n} \sum\limits_{n=1}^N \left| \left| \hat{y}_n - y_n
 *        \right| \right|_2^2 @f$
 *
 * This can be used for least-squares regression tasks.  An InnerProductLayer
 * input to a EuclideanLossLayer exactly formulates a linear least squares
 * regression problem. With non-zero weight decay the problem becomes one of
 * ridge regression -- see src/caffe/test/test_sgd_solver.cpp for a concrete
 * example wherein we check that the gradients computed for a Net with exactly
 * this structure match hand-computed gradient formulas for ridge regression.
 *
 * (Note: Caffe, and SGD in general, is certainly \b not the best way to solve
 * linear least squares problems! We use it only as an instructive example.)
 */
template <typename Dtype>
class EuclideanLossLayer : public LossLayer<Dtype> {
 public:
  explicit EuclideanLossLayer(const LayerParameter& param)
      : LossLayer<Dtype>(param), diff_() {}
  virtual void Reshape(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  virtual inline const char* type() const { return "EuclideanLoss"; }
  /**
   * Unlike most loss layers, in the EuclideanLossLayer we can backpropagate
   * to both inputs -- override to return true and always allow force_backward.
   */
  // Unlike most loss layers, both input blobs of the EuclideanLoss layer can be backpropagated to
  virtual inline bool AllowForceBackward(const int bottom_index) const {
    return true;
  }

 protected:
  /// @copydoc EuclideanLossLayer
  // Forward pass
  virtual void Forward_cpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);
  virtual void Forward_gpu(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top);

  /**
   * @brief Computes the Euclidean error gradient w.r.t. the inputs.
   *
   * Unlike other children of LossLayer, EuclideanLossLayer \b can compute
   * gradients with respect to the label inputs bottom[1] (but still only will
   * if propagate_down[1] is set, due to being produced by learnable parameters
   * or if force_backward is set). In fact, this layer is "commutative" -- the
   * result is the same regardless of the order of the two bottoms.
   * @param top output Blob vector (length 1), providing the error gradient with
   *      respect to the outputs
   *   -# @f$ (1 \times 1 \times 1 \times 1) @f$
   *      This Blob's diff will simply contain the loss_weight* @f$ \lambda @f$,
   *      as @f$ \lambda @f$ is the coefficient of this layer's output
   *      @f$\ell_i@f$ in the overall Net loss
   *      @f$ E = \lambda_i \ell_i + \mbox{other loss terms}@f$; hence
   *      @f$ \frac{\partial E}{\partial \ell_i} = \lambda_i @f$.
   *        (*Assuming that this top Blob is not used as a bottom (input) by any
   *        other layer of the Net.)
   * @param propagate_down see Layer::Backward.
   * @param bottom input Blob vector (length 2)
   *   -# @f$ (N \times C \times H \times W) @f$
   *      the predictions @f$\hat{y}@f$; Backward fills their diff with
   *      gradients @f$
   *        \frac{\partial E}{\partial \hat{y}} =
   *            \frac{1}{n} \sum\limits_{n=1}^N (\hat{y}_n - y_n)
   *      @f$ if propagate_down[0]
   *   -# @f$ (N \times C \times H \times W) @f$
   *      the targets @f$y@f$; Backward fills their diff with gradients
   *      @f$ \frac{\partial E}{\partial y} =
   *          \frac{1}{n} \sum\limits_{n=1}^N (y_n - \hat{y}_n)
   *      @f$ if propagate_down[1]
   */
  // Backward pass
  virtual void Backward_cpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);
  virtual void Backward_gpu(const vector<Blob<Dtype>*>& top,
      const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom);

  Blob<Dtype> diff_;  // stores \hat{y} - y, i.e. the difference between predictions and labels
};

}  // namespace caffe

#endif  // CAFFE_EUCLIDEAN_LOSS_LAYER_HPP_

Now let's look at the implementation of the functions declared in the header (only the CPU versions; the GPU implementations are similar, see euclidean_loss_layer.cu for details):

#include <vector>

#include "caffe/layers/euclidean_loss_layer.hpp"
#include "caffe/util/math_functions.hpp"

namespace caffe {

template <typename Dtype>
void EuclideanLossLayer<Dtype>::Reshape(
  const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  LossLayer<Dtype>::Reshape(bottom, top); // call LossLayer's Reshape (checks that bottom[0] and bottom[1] have the same num)
  CHECK_EQ(bottom[0]->count(1), bottom[1]->count(1))
      << "Inputs must have the same dimension."; //即bottom[0]和bottom[1]的channel×height×width需要相同
  diff_.ReshapeLike(*bottom[0]);
}

template <typename Dtype>
void EuclideanLossLayer<Dtype>::Forward_cpu(const vector<Blob<Dtype>*>& bottom,
    const vector<Blob<Dtype>*>& top) {
  int count = bottom[0]->count();
  caffe_sub(
      count,
      bottom[0]->cpu_data(),
      bottom[1]->cpu_data(),
      diff_.mutable_cpu_data()); // caffe_sub: element-wise subtraction
  Dtype dot = caffe_cpu_dot(count, diff_.cpu_data(), diff_.cpu_data()); // caffe_cpu_dot: inner (dot) product
  Dtype loss = dot / bottom[0]->num() / Dtype(2); // compute the L2 loss
  top[0]->mutable_cpu_data()[0] = loss;
}

template <typename Dtype>
void EuclideanLossLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
    const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
  for (int i = 0; i < 2; ++i) {
    if (propagate_down[i]) {
      const Dtype sign = (i == 0) ? 1 : -1;  // the sign lets either input blob receive the gradient (swapping the two bottoms gives the same result)
      const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num(); // top[0]->cpu_diff()[0] actually stores this layer's loss weight (loss_weight)
      // caffe_cpu_axpby computes b = alpha*a + beta*b (a and b are vectors)
      caffe_cpu_axpby(
          bottom[i]->count(),              // count
          alpha,                              // alpha
          diff_.cpu_data(),                   // a
          Dtype(0),                           // beta
          bottom[i]->mutable_cpu_diff());  // b
    }
  }
}

#ifdef CPU_ONLY
STUB_GPU(EuclideanLossLayer);
#endif

INSTANTIATE_CLASS(EuclideanLossLayer);
REGISTER_LAYER_CLASS(EuclideanLoss);

}  // namespace caffe

From the Reshape() function above, we can see that the layer's two input blobs must have the same dimensions.
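More precisely, the check compares only bottom[i]->count(1), i.e. the number of elements per example (channels × height × width), not the exact axis layout. A minimal sketch of what the checks accept (assuming a standalone program linked against Caffe; the shapes here are made up for illustration):

#include <iostream>
#include <caffe/blob.hpp>
using namespace caffe;

int main() {
  Blob<float> pred(2, 3, 1, 1);   // predictions: num = 2, count(1) = 3
  Blob<float> label(2, 1, 3, 1);  // labels: same num, same count(1) = 3
  // Both Reshape checks (equal num, equal count(1)) would pass even though the
  // axis layouts differ; only the flattened per-example size is compared.
  std::cout << pred.count(1) << " " << label.count(1) << std::endl;  // prints "3 3"
  return 0;
}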

The Forward_cpu() function implements the loss function given at the top (since the loss is a scalar, the output blob top[0] has only one element; this is set up in the Reshape function of the LossLayer base class).
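Concretely, the forward pass is equivalent to the following plain-C++ computation (a minimal sketch without the Caffe BLAS wrappers; caffe_sub performs an element-wise subtraction and caffe_cpu_dot an inner product; the helper name and sample values are made up for illustration):

#include <vector>
#include <iostream>

// Plain-loop equivalent of EuclideanLoss Forward_cpu.
// pred/label stand in for bottom[0]/bottom[1]; num is bottom[0]->num().
float euclidean_loss_forward(const std::vector<float>& pred,
                             const std::vector<float>& label,
                             std::vector<float>& diff, int num) {
  float dot = 0;
  for (size_t i = 0; i < pred.size(); ++i) {
    diff[i] = pred[i] - label[i];   // caffe_sub: element-wise difference
    dot += diff[i] * diff[i];       // caffe_cpu_dot: inner product of diff with itself
  }
  return dot / num / 2.0f;          // loss = dot / (2N)
}

int main() {
  std::vector<float> pred = {1.0f, 2.0f, 3.0f, 4.0f};
  std::vector<float> label = {0.0f, 2.0f, 2.0f, 6.0f};
  std::vector<float> diff(pred.size());
  // Treat the 4 values as N = 2 examples of 2 elements each.
  std::cout << euclidean_loss_forward(pred, label, diff, 2) << std::endl;  // prints 1.5
  return 0;
}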

Backward_cpu() computes the gradients given by the following formulas:

If propagate_down[0] = true, then \frac{\partial E}{\partial \hat{y}_n} = \frac{1}{N}(\hat{y}_n - y_n)

If propagate_down[1] = true, then \frac{\partial E}{\partial y_n} = \frac{1}{N}(y_n - \hat{y}_n)

Note: when \hat{y}_n and y_n are vectors, this involves vector/matrix differentiation.
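These formulas follow directly from differentiating the loss; written element-wise for element k of example n (the gradient with respect to the label simply flips the sign):

\frac{\partial E}{\partial \hat{y}_{n,k}} = \frac{\partial}{\partial \hat{y}_{n,k}} \left[ \frac{1}{2N} \sum\limits_{m=1}^N \sum\limits_{j} \left( \hat{y}_{m,j} - y_{m,j} \right)^2 \right] = \frac{1}{N} \left( \hat{y}_{n,k} - y_{n,k} \right)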

Another point to note: saying that both input blobs can be backpropagated to really means the layer allows the predictions and labels to be swapped, i.e. y_n may be the prediction vector and \hat{y}_n the label vector, but then propagate_down[1] = true or force_backward must be set.
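Whichever bottom receives the gradient, the backward pass reduces to a simple scaling of the stored difference (caffe_cpu_axpby with beta = 0 is just b = alpha * a). A minimal plain-C++ sketch, continuing the hypothetical helper style used above:

#include <vector>

// Plain-loop equivalent of EuclideanLoss Backward_cpu for one bottom.
// diff holds the stored (pred - label); top_diff is top[0]->cpu_diff()[0], i.e. the
// loss weight; sign is +1 for bottom[0] and -1 for bottom[1].
void euclidean_loss_backward(const std::vector<float>& diff, float top_diff,
                             float sign, int num, std::vector<float>& bottom_diff) {
  const float alpha = sign * top_diff / num;
  for (size_t i = 0; i < diff.size(); ++i) {
    bottom_diff[i] = alpha * diff[i];  // caffe_cpu_axpby with beta = 0
  }
}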

One more thing worth noting: the output blob (top) of a loss layer's Forward/Backward functions is a scalar holding a single value, yet if you look at LossLayer, the base class all these loss layers inherit from, you will notice something odd (the following code is taken from loss_layer.cpp):

template <typename Dtype>
void LossLayer<Dtype>::Reshape(
    const vector<Blob<Dtype>*>& bottom, const vector<Blob<Dtype>*>& top) {
  CHECK_EQ(bottom[0]->num(), bottom[1]->num())
      << "The data and label should have the same number.";
  vector<int> loss_shape(0);  // Loss layers output a scalar; 0 axes.
  top[0]->Reshape(loss_shape);
}

That is, the code above reshapes the blob top[0] to have 0 axes, yet in the concrete loss layers, for example the statement top[0]->mutable_cpu_data()[0] = loss; in EuclideanLoss's Forward_cpu, element 0 of top[0] is accessed (all loss layers inherit from this class, so top[0] is shaped exactly by the Reshape function above). One might expect such indexing to be out of range, but in Caffe it raises no error; as the Blob::Reshape code shown later makes clear, a 0-axis blob still has a count of 1, so it holds exactly one element.

For example, run the following code:

#include <vector>
#include <iostream>
#include <caffe/blob.hpp>
#include <caffe/util/io.hpp>
using namespace caffe;
using namespace std;
int main()
{
   Blob<float> a;
   cout<<"Size:="<<a.shape_string()<<endl;
   a.Reshape(1,2,3,4);
   cout<<"Size:="<<a.shape_string()<<endl;
   float* p=a.mutable_cpu_data();
   for(int i=0;i<a.count();i++)
       p[i]=i;
   for(int u=0;u<a.num();u++)
      for(int v=0;v<a.channels();v++)
         for(int w=0;w<a.height();w++)
            for(int x=0;x<a.width();x++)
               cout<<"a["<<u<<"]["<<v<<"]["<<w<<"]["<<x<<"]="<<a.data_at(u,v,w,x)<<endl;

   // test the 0-axis shape used by loss_layer's Reshape
   vector<int> shape(0);
   vector<Blob<float>*> top;
   top.push_back(&a);
   top[0]->Reshape(shape);
   top[0]->mutable_cpu_data()[0] = 10;
   cout << "loss:" << top[0]->cpu_data()[0] << endl;
   return 0;
}

The output is shown below: top[0]->cpu_data()[0] prints the value 10 that was just set, and top[0]'s size is 1.

Size:=(0)
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0827 14:35:05.329982  6920 common.cpp:36] System entropy source not available, using fallback algorithm to generate seed instead.
Size:=1 2 3 4 (24)
a[0][0][0][0]=0
a[0][0][0][1]=1
a[0][0][0][2]=2
a[0][0][0][3]=3
a[0][0][1][0]=4
a[0][0][1][1]=5
a[0][0][1][2]=6
a[0][0][1][3]=7
a[0][0][2][0]=8
a[0][0][2][1]=9
a[0][0][2][2]=10
a[0][0][2][3]=11
a[0][1][0][0]=12
a[0][1][0][1]=13
a[0][1][0][2]=14
a[0][1][0][3]=15
a[0][1][1][0]=16
a[0][1][1][1]=17
a[0][1][1][2]=18
a[0][1][1][3]=19
a[0][1][2][0]=20
a[0][1][2][1]=21
a[0][1][2][2]=22
a[0][1][2][3]=23
Size:=(1)
loss:10

If you comment out the line top[0]->mutable_cpu_data()[0] = 10;, the output becomes loss:0. And if you change

cout << "loss:" << top[0]->cpu_data()[0] << endl;

to

cout << "loss:" << top[0]->cpu_data()[1] << endl;

the output becomes loss:1.

Note that the 1 read at index [1] is not some automatic initialization by proto: it is simply the leftover data written earlier in the test (p[i] = i), because Reshape does not reallocate memory when the new count fits within the existing capacity, so reading past the logical end just lands inside the still-allocated buffer and nothing complains. Meanwhile top[0]->cpu_diff()[n] prints 0 for any n, since the diff buffer was never written (try it if you like). In essence top[0]'s size is still 1; out-of-range indexing merely happens not to fail, which is worth keeping in mind. As for why the size is 1 at all, we need to look at the Reshape() function in blob.cpp:

template <typename Dtype>
void Blob<Dtype>::Reshape(const vector<int>& shape) {
  CHECK_LE(shape.size(), kMaxBlobAxes);
  count_ = 1; // this is the key: count_ starts at 1, so the blob's size is at least 1
  shape_.resize(shape.size());
  if (!shape_data_ || shape_data_->size() < shape.size() * sizeof(int)) {
    shape_data_.reset(new SyncedMemory(shape.size() * sizeof(int)));
  }
  int* shape_data = static_cast<int*>(shape_data_->mutable_cpu_data());
  for (int i = 0; i < shape.size(); ++i) {
    CHECK_GE(shape[i], 0);
    if (count_ != 0) {
      CHECK_LE(shape[i], INT_MAX / count_) << "blob size exceeds INT_MAX";
    }
    count_ *= shape[i];
    shape_[i] = shape[i];
    shape_data[i] = shape[i];
  }
  if (count_ > capacity_) {
    capacity_ = count_;
    data_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));
    diff_.reset(new SyncedMemory(capacity_ * sizeof(Dtype)));
  }
}
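A quick way to see why count_ ends up as 1 for the zero-axis shape passed by LossLayer::Reshape: with shape.size() == 0 the loop body never runs, so count_ keeps its initial value of 1 and memory for a single Dtype is allocated. A minimal standalone sketch of that computation (plain C++, no Caffe needed):

#include <vector>
#include <iostream>

int main() {
  std::vector<int> shape;           // 0 axes, as passed by LossLayer::Reshape
  int count = 1;                    // mirrors "count_ = 1" in Blob::Reshape
  for (size_t i = 0; i < shape.size(); ++i) {
    count *= shape[i];              // never executes for a 0-axis shape
  }
  std::cout << count << std::endl;  // prints 1: the scalar blob holds one element
  return 0;
}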

So why, given that top[0]->cpu_diff()[n] reads 0 in the standalone test above no matter what n is, do the loss layers multiply by this value? For example, recall this line from the EuclideanLoss layer's Backward_cpu():

const Dtype alpha = sign * top[0]->cpu_diff()[0] / bottom[i]->num(); // top[0]->cpu_diff()[0] actually stores this layer's loss weight (loss_weight)

alpha is multiplied by top[0]->cpu_diff()[0] because, when a loss layer is initialized, its top[0]->cpu_diff()[0] is assigned the loss weight (loss_weight); of course only loss layers (more precisely, top blobs with a non-zero loss_weight) get this treatment, other layers' top diffs are left untouched. To trace this, first look at the SetUp() function in layer.hpp:

 /**
   * @brief Implements common layer setup functionality.
   *
   * @param bottom the preshaped input blobs
   * @param top
   *     the allocated but unshaped output blobs, to be shaped by Reshape
   *
   * Checks that the number of bottom and top blobs is correct.
   * Calls LayerSetUp to do special layer setup for individual layer types,
   * followed by Reshape to set up sizes of top blobs and internal buffers.
   * Sets up the loss weight multiplier blobs for any non-zero loss weights.
   * This method may not be overridden.
   */
  void SetUp(const vector<Blob<Dtype>*>& bottom,
      const vector<Blob<Dtype>*>& top) {
    InitMutex();
    CheckBlobCounts(bottom, top);
    LayerSetUp(bottom, top);
    Reshape(bottom, top);
    SetLossWeights(top);  // this is where the loss weights are set
  }

Next, the SetLossWeights() function:

 /**
   * Called by SetUp to initialize the weights associated with any top blobs in
   * the loss function. Store non-zero loss weights in the diff blob.
   */
  inline void SetLossWeights(const vector<Blob<Dtype>*>& top) {
    const int num_loss_weights = layer_param_.loss_weight_size();
    if (num_loss_weights) {
      CHECK_EQ(top.size(), num_loss_weights) << "loss_weight must be "
          "unspecified or specified once per top blob.";
      for (int top_id = 0; top_id < top.size(); ++top_id) {
        const Dtype loss_weight = layer_param_.loss_weight(top_id);
        if (loss_weight == Dtype(0)) { continue; }
        this->set_loss(top_id, loss_weight);
        const int count = top[top_id]->count();
        // the key line: caffe_set fills the top blob's diff with the loss weight
        Dtype* loss_multiplier = top[top_id]->mutable_cpu_diff();
        caffe_set(count, loss_weight, loss_multiplier);
      }
    }
  }
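To see this in action, here is a minimal sketch (assuming a program built against Caffe; the blob shapes and the loss_weight value of 2 are made up for illustration) that sets a non-default loss_weight in the LayerParameter, runs SetUp, and then reads the weight back out of the top blob's diff:

#include <iostream>
#include <vector>
#include <caffe/blob.hpp>
#include <caffe/layers/euclidean_loss_layer.hpp>
using namespace caffe;

int main() {
  Blob<float> pred(2, 3, 1, 1), label(2, 3, 1, 1), loss;
  std::vector<Blob<float>*> bottom = {&pred, &label};
  std::vector<Blob<float>*> top = {&loss};

  LayerParameter param;
  param.add_loss_weight(2.0f);       // non-default loss weight
  EuclideanLossLayer<float> layer(param);
  layer.SetUp(bottom, top);          // calls SetLossWeights internally

  // The top blob's diff now stores the loss weight, ready to be read by Backward_cpu.
  std::cout << "top diff: " << loss.cpu_diff()[0] << std::endl;  // prints 2
  return 0;
}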

At this point, everything should be clear.

If you repost this article, please credit this blog!