
(Original) Understanding the TensorFlow code for Faster R-CNN

Please credit the source when reposting:

 

References:

Paper: https://arxiv.org/abs/1506.01497

Third-party TensorFlow Faster R-CNN: https://github.com/endernewton/tf-faster-rcnn

IoU: https://www.cnblogs.com/darkknightzh/p/9043395.html

Faster R-CNN consists of two parts: the RPN network and the RCNN network. The RPN keeps the anchors that lie inside the image and labels each one as a positive sample, a negative sample, or ignored. During training, NMS keeps at most 2000 anchors; at test time 300 are kept. The RPN then supplies 256 sampled anchors (rois) to the RCNN network for classification and bounding-box regression.

Network is the base class and vgg16 the derived class, which overrides Network's _image_to_head and _head_to_tail.

The analysis below covers vgg16 only.

The overall structure of the Faster R-CNN network is shown in the figure below.

1. Training phase:

SolverWrapper builds the network, the train_op, and so on through construct_graph.

construct_graph builds the network via Network's create_architecture.

1.1 create_architecture

create_architecture builds the network model, the losses, and the other related ops through _build_network, producing rois, cls_prob, and bbox_pred. It is defined as follows:

def create_architecture(self, mode, num_classes, tag=None, anchor_scales=(8, 16, 32), anchor_ratios=(0.5, 1, 2)):
    self._image = tf.placeholder(tf.float32, shape=[1, None, None, 3])  # image width/height vary, so dims 2 and 3 are None
    self._im_info = tf.placeholder(tf.float32, shape=[3])  # image info: height, width, and the scale applied to the image (the smaller of: shorter side to 600, longer side to 1000)
    self._gt_boxes = tf.placeholder(tf.float32, shape=[None, 5])  # ground-truth boxes: first four values are coordinates, the last is the class (see roi_data_layer/minibatch.py/get_minibatch)
    self._tag = tag

    self._num_classes = num_classes
    self._mode = mode
    self._anchor_scales = anchor_scales
    self._num_scales = len(anchor_scales)

    self._anchor_ratios = anchor_ratios
    self._num_ratios = len(anchor_ratios)

    self._num_anchors = self._num_scales * self._num_ratios  # self._num_anchors = 9

    training = mode == 'TRAIN'
    testing = mode == 'TEST'

    weights_regularizer = tf.contrib.layers.l2_regularizer(cfg.TRAIN.WEIGHT_DECAY)  # handle most of the regularizers here
    if cfg.TRAIN.BIAS_DECAY:
        biases_regularizer = weights_regularizer
    else:
        biases_regularizer = tf.no_regularizer

    # list as many types of layers as possible, even if they are not used now
    with arg_scope([slim.conv2d, slim.conv2d_in_plane, slim.conv2d_transpose, slim.separable_conv2d, slim.fully_connected],
                   weights_regularizer=weights_regularizer, biases_regularizer=biases_regularizer, biases_initializer=tf.constant_initializer(0.0)):
        # rois: the 256 sampled rois (first column is each roi's class during training, all zeros at test time)
        # cls_prob: per-class probabilities for the 256 rois
        # bbox_pred: predicted box offsets
        rois, cls_prob, bbox_pred = self._build_network(training)

    layers_to_output = {'rois': rois}

    for var in tf.trainable_variables():
        self._train_summaries.append(var)

    if testing:
        stds = np.tile(np.array(cfg.TRAIN.BBOX_NORMALIZE_STDS), (self._num_classes))
        means = np.tile(np.array(cfg.TRAIN.BBOX_NORMALIZE_MEANS), (self._num_classes))
        self._predictions["bbox_pred"] *= stds  # during training _region_proposal normalizes the offsets (subtract mean, divide by std), so testing undoes it
        self._predictions["bbox_pred"] += means
    else:
        self._add_losses()
        layers_to_output.update(self._losses)

    val_summaries = []
    with tf.device("/cpu:0"):
        val_summaries.append(self._add_gt_image_summary())
        for key, var in self._event_summaries.items():
            val_summaries.append(tf.summary.scalar(key, var))
        for key, var in self._score_summaries.items():
            self._add_score_summary(key, var)
        for var in self._act_summaries:
            self._add_act_summary(var)
        for var in self._train_summaries:
            self._add_train_summary(var)

    self._summary_op = tf.summary.merge_all()
    self._summary_op_val = tf.summary.merge(val_summaries)

    layers_to_output.update(self._predictions)

    return layers_to_output
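For context, a rough sketch of how create_architecture is typically invoked (illustrative only; the real call sits inside SolverWrapper's construct_graph, and the argument values below are the Pascal VOC defaults):

# Hypothetical usage sketch: build the training graph for 20 VOC classes + background.
net = vgg16()
layers = net.create_architecture('TRAIN', num_classes=21, tag='default',
                                 anchor_scales=(8, 16, 32), anchor_ratios=(0.5, 1, 2))
rois = layers['rois']  # sampled proposals that feed the RCNN head; the losses are also in `layers`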

1.2 _build_network

_build_network builds the network:

_build_network = _image_to_head + // extract features from the input image
_anchor_component + // compute the coordinates in the original image (possibly beyond its border) of all candidate anchors, and the anchor count
_region_proposal + // process the features, keeping 2000 anchors (training) or 300 anchors (testing)
_crop_pool_layer + // crop out the 256 sampled rois and resize them to a fixed 7*7, giving per-roi features
_head_to_tail + // add fc and dropout layers on the 256 roi features, giving 4096-d features
_region_classification // add fc layers for the RCNN classification and regression

Overall flow: the network extracts features net_conv through vgg conv1-conv5 and feeds them to the RPN to get candidate anchors; anchors crossing the image border are removed and 2000 are kept for training the RPN (300 for testing). From those, 256 rois are further sampled (for the RCNN classification). The features of these 256 rois are then cropped, resized, and pooled according to the rois into equal-sized 7*7 features pool5; pool5 passes through two fc layers to give the 4096-d feature fc7, and fc7 feeds _region_classification (two parallel fc layers), producing the 21-d cls_score and the 21*4-d bbox_pred.

_build_network is defined as follows:
def _build_network(self, is_training=True):
    if cfg.TRAIN.TRUNCATED:  # select initializers
        initializer = tf.truncated_normal_initializer(mean=0.0, stddev=0.01)
        initializer_bbox = tf.truncated_normal_initializer(mean=0.0, stddev=0.001)
    else:
        initializer = tf.random_normal_initializer(mean=0.0, stddev=0.01)
        initializer_bbox = tf.random_normal_initializer(mean=0.0, stddev=0.001)

    net_conv = self._image_to_head(is_training)  # vgg16's conv5_3 features
    with tf.variable_scope(self._scope, self._scope):
        self._anchor_component()  # from the feature-map size and the stride _feat_stride relative to the original image, compute the corner coordinates of all anchors
        rois = self._region_proposal(net_conv, is_training, initializer)  # through the rpn: the 256 rois, class in the first column (per-roi class during training, all zeros at test time), coordinates in the last four
        pool5 = self._crop_pool_layer(net_conv, rois, "pool5")  # crop the roi regions out of the feature map, resize them to a fixed 14*14, then pool down to 7*7

    fc7 = self._head_to_tail(pool5, is_training)  # fc and dropout layers on the fixed-size rois, giving 4096-d features for classification and regression
    with tf.variable_scope(self._scope, self._scope):
        cls_prob, bbox_pred = self._region_classification(fc7, is_training, initializer, initializer_bbox)  # classify the rois (detection) and regress the predicted coordinates

    self._score_summaries.update(self._predictions)

    # rois: the 256 rois (class in the first column: per-roi class during training, all zeros at test time)
    # cls_prob: per-class probabilities for the 256 rois
    # bbox_pred: predicted box offsets
    return rois, cls_prob, bbox_pred

1.3 _image_to_head

_image_to_head extracts the features of the input image.

The function lives in vgg16.py and is defined as follows:

def _image_to_head(self, is_training, reuse=None):
    with tf.variable_scope(self._scope, self._scope, reuse=reuse):
        net = slim.repeat(self._image, 2, slim.conv2d, 64, [3, 3], trainable=False, scope='conv1')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool1')
        net = slim.repeat(net, 2, slim.conv2d, 128, [3, 3], trainable=False, scope='conv2')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool2')
        net = slim.repeat(net, 3, slim.conv2d, 256, [3, 3], trainable=is_training, scope='conv3')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool3')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], trainable=is_training, scope='conv4')
        net = slim.max_pool2d(net, [2, 2], padding='SAME', scope='pool4')
        net = slim.repeat(net, 3, slim.conv2d, 512, [3, 3], trainable=is_training, scope='conv5')

    self._act_summaries.append(net)
    self._layers['head'] = net

    return net
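Only 3*3 convs and four stride-2 'SAME' poolings precede conv5_3, so the feature map is the input size divided by 16, rounded up; this 16 is the _feat_stride used below. A quick sanity check (my own snippet, assuming a 600*800 input):

import math

# Four 2x2 stride-2 poolings before conv5_3 => overall feature stride of 16.
h, w = 600, 800
feat_h = math.ceil(h / 16.0)  # 38
feat_w = math.ceil(w / 16.0)  # 50
print(feat_h, feat_w)         # conv5_3 output is 1 x 38 x 50 x 512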

1.4 _anchor_component

_anchor_component computes the coordinates of all candidate anchors in the original image (they may extend beyond the image border) and the anchor count (feature-map width * feature-map height * 9). It uses self._im_info, a 3-vector: [0] is the image height, [1] the image width, and [2] the scale applied to the image (the smaller of the factor that brings the shorter side to 600 and the one that brings the longer side to 1000, e.g. resizing to 600*900 or 850*1000). It calls generate_anchors_pre_tf, which in turn calls generate_anchors, to get all candidate anchor coordinates in the original image and the anchor count (images differ in size, so the final anchor count differs too).

generate_anchors_pre_tf proceeds as follows:

1. _ratio_enum starts from the (0, 0, 15, 15) reference window and first derives anchors for the ratios [0.5, 1, 2]. The ratio refers to the total pixel count (width*height), not to the width or height alone, yielding the following three anchors (each anchor given by its top-left and bottom-right corners):

2. Then the scales (8, 16, 32) enlarge those anchors: each of the three is scaled by each factor, giving the final 9 anchors (each again a top-left plus bottom-right corner). Since they are simply the three anchors above enlarged, the figure is omitted here; a small worked example of the ratio step follows.
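As a concrete check of the ratio step (a stand-alone snippet of mine, mirroring _ratio_enum below):

import numpy as np

# Worked example of _ratio_enum on the (0, 0, 15, 15) reference window.
w = h = 16.0                              # the reference window is 16*16 pixels
size = w * h                              # 256 pixels in total
for ratio in (0.5, 1, 2):
    ws = np.round(np.sqrt(size / ratio))  # widths:  23, 16, 11
    hs = np.round(ws * ratio)             # heights: 12, 16, 22 -- the area stays close to 256
    print(ws, hs)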

Afterwards, tf.add(anchor_constant, shifts) places the 9 anchors at every position of the downscaled feature map, giving their rectangles in the original image. anchor_constant: 1*9*4. shifts: N*1*4, where N is the number of pixels of the downscaled feature map. The result is reshaped from N*9*4 to (N*9)*4: for every feature-map position, its anchors in the original image.

_anchor_component and its helpers:
def _anchor_component(self):
    with tf.variable_scope('ANCHOR_' + self._tag) as scope:
        height = tf.to_int32(tf.ceil(self._im_info[0] / np.float32(self._feat_stride[0])))  # height/width of the feature map produced by vgg16
        width = tf.to_int32(tf.ceil(self._im_info[1] / np.float32(self._feat_stride[0])))
        if cfg.USE_E2E_TF:
            # from the feature-map size, _feat_stride (the downscaling factor relative to the original image),
            # _anchor_scales and _anchor_ratios, compute all candidate anchors on the original image
            # (coordinates may extend beyond the image border) and the anchor count
            anchors, anchor_length = generate_anchors_pre_tf(height, width, self._feat_stride, self._anchor_scales, self._anchor_ratios)
        else:
            anchors, anchor_length = tf.py_func(generate_anchors_pre,
                [height, width, self._feat_stride, self._anchor_scales, self._anchor_ratios], [tf.float32, tf.int32], name="generate_anchors")
        anchors.set_shape([None, 4])  # top-left and bottom-right coordinates, 4 values in total
        anchor_length.set_shape([])
        self._anchors = anchors
        self._anchor_length = anchor_length

def generate_anchors_pre_tf(height, width, feat_stride=16, anchor_scales=(8, 16, 32), anchor_ratios=(0.5, 1, 2)):
    shift_x = tf.range(width) * feat_stride   # x coordinates of anchor origins in the original image: (0, feat_stride, 2*feat_stride, ...)
    shift_y = tf.range(height) * feat_stride  # y coordinates of anchor origins in the original image: (0, feat_stride, 2*feat_stride, ...)
    shift_x, shift_y = tf.meshgrid(shift_x, shift_y)  # shift_x: height copies of (0, feat_stride, 2*feat_stride, ...); shift_y: width copies likewise
    sx = tf.reshape(shift_x, shape=(-1,))  # 0, feat_stride, 2*feat_stride, ..., 0, feat_stride, 2*feat_stride, ...
    sy = tf.reshape(shift_y, shape=(-1,))  # 0, 0, 0, ..., feat_stride, feat_stride, feat_stride, ..., 2*feat_stride, ...
    shifts = tf.transpose(tf.stack([sx, sy, sx, sy]))  # a (width*height)*4 matrix
    K = tf.multiply(width, height)  # total number of feature-map pixels
    shifts = tf.transpose(tf.reshape(shifts, shape=[1, K, 4]), perm=(1, 0, 2))  # add a dimension: 1*(width*height)*4, then transpose to (width*height)*1*4

    anchors = generate_anchors(ratios=np.array(anchor_ratios), scales=np.array(anchor_scales))  # the four coordinates of the 9 base anchors in the original image (base size defaults to 16)
    A = anchors.shape[0]  # A = 9
    anchor_constant = tf.constant(anchors.reshape((1, A, 4)), dtype=tf.int32)  # expand the anchors to 1*9*4

    length = K * A  # total anchor count (A=9 anchors per position, K=height*width positions)
    # broadcast-add the 1*9*4 base anchors and the (width*height)*1*4 shifts, giving (width*height)*9*4,
    # then reshape to (width*height*9)*4: the four coordinates of every anchor
    anchors_tf = tf.reshape(tf.add(anchor_constant, shifts), shape=(length, 4))

    return tf.cast(anchors_tf, dtype=tf.float32), length

def generate_anchors(base_size=16, ratios=[0.5, 1, 2], scales=2 ** np.arange(3, 6)):
    """Generate anchor (reference) windows by enumerating aspect ratios X scales wrt a reference (0, 0, 15, 15) window."""
    base_anchor = np.array([1, 1, base_size, base_size]) - 1  # the four coordinates of the base anchor
    ratio_anchors = _ratio_enum(base_anchor, ratios)  # the 3 ratio anchors (a 3*4 matrix)
    anchors = np.vstack([_scale_enum(ratio_anchors[i, :], scales) for i in range(ratio_anchors.shape[0])])  # 3*4 becomes 9*4: the coordinates of the 9 anchors
    return anchors


def _whctrs(anchor):
    """ Return width, height, x center, and y center for an anchor (window). """
    w = anchor[2] - anchor[0] + 1
    h = anchor[3] - anchor[1] + 1
    x_ctr = anchor[0] + 0.5 * (w - 1)  # center x
    y_ctr = anchor[1] + 0.5 * (h - 1)  # center y
    return w, h, x_ctr, y_ctr


def _mkanchors(ws, hs, x_ctr, y_ctr):
    """ Given a vector of widths (ws) and heights (hs) around a center (x_ctr, y_ctr), output a set of anchors (windows)."""
    ws = ws[:, np.newaxis]  # 3-vector becomes a 3*1 matrix
    hs = hs[:, np.newaxis]  # 3-vector becomes a 3*1 matrix
    anchors = np.hstack((x_ctr - 0.5 * (ws - 1), y_ctr - 0.5 * (hs - 1), x_ctr + 0.5 * (ws - 1), y_ctr + 0.5 * (hs - 1)))  # a 3*4 matrix
    return anchors


def _ratio_enum(anchor, ratios):  # the ratio applies to the total pixel count, not to the width or height alone
    """ Enumerate a set of anchors for each aspect ratio wrt an anchor. """
    w, h, x_ctr, y_ctr = _whctrs(anchor)  # center position, width and height
    size = w * h  # total pixel count
    size_ratios = size / ratios  # areas after applying the ratios
    ws = np.round(np.sqrt(size_ratios))  # widths, a 3-vector (decreasing)
    hs = np.round(ws * ratios)  # heights, element-wise product of two 3-vectors (increasing)
    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)  # the 3 anchors' coordinates from the center and sizes
    return anchors


def _scale_enum(anchor, scales):
    """ Enumerate a set of anchors for each scale wrt an anchor. """
    w, h, x_ctr, y_ctr = _whctrs(anchor)  # center position, width and height
    ws = w * scales  # scaled widths
    hs = h * scales  # scaled heights
    anchors = _mkanchors(ws, hs, x_ctr, y_ctr)  # the 3 anchors' coordinates from the center and sizes
    return anchors
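For reference, calling generate_anchors() with its defaults yields the 9 base anchors below (a snippet of mine, assuming the function defined above is in scope; the values follow from the arithmetic in _ratio_enum and _scale_enum):

anchors = generate_anchors()  # base_size=16, ratios=[0.5, 1, 2], scales=[8, 16, 32]
print(anchors)
# [[ -84.  -40.   99.   55.]   ratio 0.5, scales 8/16/32
#  [-176.  -88.  191.  103.]
#  [-360. -184.  375.  199.]
#  [ -56.  -56.   71.   71.]   ratio 1
#  [-120. -120.  135.  135.]
#  [-248. -248.  263.  263.]
#  [ -36.  -80.   51.   95.]   ratio 2
#  [ -80. -168.   95.  183.]
#  [-168. -344.  183.  359.]]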

1.5 _region_proposal

_region_proposal slides a 3*3 conv over vgg16's conv5 features to get the rpn features, then runs two parallel branches, cls and reg. The cls branch uses a 1*1 conv to decide whether each anchor is a positive or a negative sample (anchors are plentiful and some are ignored; only positives and negatives are used), giving the two-class rpn_cls_score. The reg branch uses a 1*1 conv to regress the anchor coordinate offsets rpn_bbox_pred. The two branches share the 3*3 conv (rpn). Since each position has k anchors, each position has 2k scores and 4k coordinates.

cls (reduces the 512 input channels to 2k): 3*3 conv + 1*1 conv (2k scores, where k is the number of anchors per position, e.g. 9).

On the first use of _reshape_layer, the input bottom is 1*?*?*2k. It is first transposed to caffe's data order (tf is batchsize*height*width*channels, caffe is batchsize*channels*height*width), to_caffe: 1*2k*?*?, then reshaped to 1*2*(k*?)*? (reshaped), and finally transposed back to tf order, to_tf: 1*?*?*2, giving rpn_cls_score_reshape. A softmax over the last dimension (of size 2) gives the probabilities rpn_cls_prob_reshape (whose argmax is the prediction rpn_cls_pred), and a second _reshape_layer restores the 1*?*?*2k layout as rpn_cls_prob, the probabilities in the original layout. A toy check of this round-trip follows.
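A toy NumPy check of the round-trip (my own snippet; shapes only, with k=9 anchors on a 2*3 feature map):

import numpy as np

k = 9
x = np.zeros((1, 2, 3, 2 * k))            # NHWC rpn_cls_score: 1*2*3*18
to_caffe = x.transpose(0, 3, 1, 2)        # 1*18*2*3 (NCHW)
reshaped = to_caffe.reshape(1, 2, -1, 3)  # 1*2*18*3: channels squeezed down to the 2 classes
to_tf = reshaped.transpose(0, 2, 3, 1)    # 1*18*3*2, i.e. 1*(9*height)*width*2
print(to_tf.shape)                        # (1, 18, 3, 2) -- the softmax runs over the last axis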

reg (reduces the 512 input channels to 4k): 3*3 conv + 1*1 conv (4k coordinates, where k is the number of anchors per position, e.g. 9).

_region_proposal is defined as follows:

def _region_proposal(self, net_conv, is_training, initializer):  # process the input feature map
    rpn = slim.conv2d(net_conv, cfg.RPN_CHANNELS, [3, 3], trainable=is_training, weights_initializer=initializer, scope="rpn_conv/3x3")  # 3*3 conv: the rpn network
    self._act_summaries.append(rpn)
    rpn_cls_score = slim.conv2d(rpn, self._num_anchors * 2, [1, 1], trainable=is_training, weights_initializer=initializer,  # _num_anchors is 9
                                padding='VALID', activation_fn=None, scope='rpn_cls_score')  # 1*1 conv: two-class scores for the 9 anchors at each position, 1*?*?*(9*2); is each anchor positive or negative?
    rpn_cls_score_reshape = self._reshape_layer(rpn_cls_score, 2, 'rpn_cls_score_reshape')  # 1*?*?*18 ==> 1*(?*9)*?*2
    rpn_cls_prob_reshape = self._softmax_layer(rpn_cls_score_reshape, "rpn_cls_prob_reshape")  # softmax over the last dimension: probabilities, 1*(?*9)*?*2
    rpn_cls_pred = tf.argmax(tf.reshape(rpn_cls_score_reshape, [-1, 2]), axis=1, name="rpn_cls_pred")  # predicted class of the 9 anchors at each position, a (1*?*9*?) column vector
    rpn_cls_prob = self._reshape_layer(rpn_cls_prob_reshape, self._num_anchors * 2, "rpn_cls_prob")  # back to the original layout: 1*(?*9)*?*2 ==> 1*?*?*(9*2)
    rpn_bbox_pred = slim.conv2d(rpn, self._num_anchors * 4, [1, 1], trainable=is_training, weights_initializer=initializer,
                                padding='VALID', activation_fn=None, scope='rpn_bbox_pred')  # 1*1 conv: regressed offsets of the 9 anchors at each position, 1*?*?*(9*4)
    if is_training:
        # from the per-position anchor class probabilities and regressed offsets, keep post_nms_topN=2000 anchors:
        # their positions (first column is the all-zero batch_inds) and their foreground probabilities
        rois, roi_scores = self._proposal_layer(rpn_cls_prob, rpn_bbox_pred, "rois")
        rpn_labels = self._anchor_target_layer(rpn_cls_score, "anchor")  # rpn_labels: whether each feature-map position is a positive, a negative, or ignored
        with tf.control_dependencies([rpn_labels]):  # Try to have a deterministic order for the computing graph, for reproducibility
            rois, _ = self._proposal_target_layer(rois, roi_scores, "rpn_rois")  # from the post_nms_topN anchors and their foreground probabilities, sample 256 rois (the all-zero first column becomes each roi's class) and their targets
    else:
        if cfg.TEST.MODE == 'nms':
            # from the per-position anchor class probabilities and regressed offsets, keep post_nms_topN=300 anchors (positions and foreground probabilities)
            rois, _ = self._proposal_layer(rpn_cls_prob, rpn_bbox_pred, "rois")
        elif cfg.TEST.MODE == 'top':
            rois, _ = self._proposal_top_layer(rpn_cls_prob, rpn_bbox_pred, "rois")
        else:
            raise NotImplementedError

    self._predictions["rpn_cls_score"] = rpn_cls_score  # positive/negative scores of the 9 anchors at each position
    self._predictions["rpn_cls_score_reshape"] = rpn_cls_score_reshape  # positive/negative scores per anchor
    self._predictions["rpn_cls_prob"] = rpn_cls_prob  # positive/negative probabilities of the 9 anchors at each position
    self._predictions["rpn_cls_pred"] = rpn_cls_pred  # predicted class of the 9 anchors at each position, a (1*?*9*?) column vector
    self._predictions["rpn_bbox_pred"] = rpn_bbox_pred  # regressed offsets of the 9 anchors at each position
    self._predictions["rois"] = rois  # the 256 rois: class (first column) and coordinates (last four columns)

    return rois  # the 256 rois: class in the first column (per-roi class during training, all zeros at test time), coordinates in the last four

def _reshape_layer(self, bottom, num_dim, name):
    input_shape = tf.shape(bottom)
    with tf.variable_scope(name) as scope:
        to_caffe = tf.transpose(bottom, [0, 3, 1, 2])  # NHWC (tf layout) to NCHW (caffe layout)
        reshaped = tf.reshape(to_caffe, tf.concat(axis=0, values=[[1, num_dim, -1], [input_shape[2]]]))  # 1*(num_dim*9)*?*? ==> 1*num_dim*(9*?)*?, or the reverse
        to_tf = tf.transpose(reshaped, [0, 2, 3, 1])
        return to_tf


def _softmax_layer(self, bottom, name):
    if name.startswith('rpn_cls_prob_reshape'):  # bottom: 1*(?*9)*?*2
        input_shape = tf.shape(bottom)
        bottom_reshaped = tf.reshape(bottom, [-1, input_shape[-1]])  # keep only the last dimension and merge the rest, for the softmax: 1*(?*9)*?*2 ==> (1*?*9*?)*2
        reshaped_score = tf.nn.softmax(bottom_reshaped, name=name)  # probabilities
        return tf.reshape(reshaped_score, input_shape)  # (1*?*9*?)*2 ==> 1*(?*9)*?*2
    return tf.nn.softmax(bottom, name=name)

1.6 _proposal_layer

_proposal_layer calls proposal_layer_tf: from the (N*9)*4 anchors it computes the predicted boxes (bbox_transform_inv_tf), clips them to the image (clip_boxes_tf), and applies non-maximum suppression (tf.image.non_max_suppression, which returns the indices of the surviving boxes), giving the proposals rois and their foreground probabilities rpn_scores. rois is m*5 and rpn_scores is m*1, where m is the number of proposals left after NMS (2000 during training, 300 at test time). The first column of the m*5 rois is the all-zero batch_inds; the last four columns are the coordinates (top-left + bottom-right). A minimal sketch of the NMS step follows.
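For intuition, here is a minimal NumPy sketch of the greedy NMS that tf.image.non_max_suppression performs (my own simplified version, without the +1 pixel convention):

import numpy as np

def nms_indices(boxes, scores, max_out, iou_thresh):
    """Greedy NMS: repeatedly keep the highest-scoring box and drop boxes overlapping it above iou_thresh."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0 and len(keep) < max_out:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_thresh]  # survivors for the next round
    return keep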

_proposal_layer follows:

def _proposal_layer(self, rpn_cls_prob, rpn_bbox_pred, name):  # from per-position anchor class probabilities and offsets, keep post_nms_topN anchors (positions and foreground probabilities)
    with tf.variable_scope(name) as scope:
        if cfg.USE_E2E_TF:  # rois: post_nms_topN*5 (first column the all-zero batch_inds, last four the coordinates); rpn_scores: post_nms_topN*1 foreground probabilities
            rois, rpn_scores = proposal_layer_tf(rpn_cls_prob, rpn_bbox_pred, self._im_info, self._mode, self._feat_stride, self._anchors, self._num_anchors)
        else:
            rois, rpn_scores = tf.py_func(proposal_layer, [rpn_cls_prob, rpn_bbox_pred, self._im_info, self._mode,
                self._feat_stride, self._anchors, self._num_anchors], [tf.float32, tf.float32], name="proposal")

        rois.set_shape([None, 5])
        rpn_scores.set_shape([None, 1])

    return rois, rpn_scores

def proposal_layer_tf(rpn_cls_prob, rpn_bbox_pred, im_info, cfg_key, _feat_stride, anchors, num_anchors):  # per-position anchor class probabilities and regressed offsets
    if type(cfg_key) == bytes:
        cfg_key = cfg_key.decode('utf-8')
    pre_nms_topN = cfg[cfg_key].RPN_PRE_NMS_TOP_N
    post_nms_topN = cfg[cfg_key].RPN_POST_NMS_TOP_N  # 2000 during training, 300 at test time
    nms_thresh = cfg[cfg_key].RPN_NMS_THRESH  # nms threshold, 0.7

    scores = rpn_cls_prob[:, :, :, num_anchors:]  # keep the last 9 of 1*?*?*(9*2): 1*?*?*9; the first 9 channels are the background probabilities of the 9 anchors, the last 9 the foreground probabilities (two classes only: background and foreground)
    scores = tf.reshape(scores, shape=(-1,))  # the foreground probability of every anchor
    rpn_bbox_pred = tf.reshape(rpn_bbox_pred, shape=(-1, 4))  # the four offsets of every anchor

    proposals = bbox_transform_inv_tf(anchors, rpn_bbox_pred)  # predicted boxes from the anchors and the offsets
    proposals = clip_boxes_tf(proposals, im_info[:2])  # clamp the predicted boxes to the original image

    indices = tf.image.non_max_suppression(proposals, scores, max_output_size=post_nms_topN, iou_threshold=nms_thresh)  # nms: indices of the post_nms_topN highest-scoring boxes

    boxes = tf.gather(proposals, indices)  # the post_nms_topN selected boxes
    boxes = tf.to_float(boxes)
    scores = tf.gather(scores, indices)  # their foreground probabilities
    scores = tf.reshape(scores, shape=(-1, 1))

    batch_inds = tf.zeros((tf.shape(indices)[0], 1), dtype=tf.float32)  # Only support single image as input
    blob = tf.concat([batch_inds, boxes], 1)  # concat post_nms_topN*1 batch_inds with post_nms_topN*4 coordinates into a post_nms_topN*5 blob

    return blob, scores

def bbox_transform_inv_tf(boxes, deltas):  # predicted boxes from the anchors and the offsets
    boxes = tf.cast(boxes, deltas.dtype)
    widths = tf.subtract(boxes[:, 2], boxes[:, 0]) + 1.0  # anchor widths
    heights = tf.subtract(boxes[:, 3], boxes[:, 1]) + 1.0  # anchor heights
    ctr_x = tf.add(boxes[:, 0], widths * 0.5)  # center x
    ctr_y = tf.add(boxes[:, 1], heights * 0.5)  # center y

    dx = deltas[:, 0]  # predicted dx
    dy = deltas[:, 1]  # predicted dy
    dw = deltas[:, 2]  # predicted dw
    dh = deltas[:, 3]  # predicted dh

    pred_ctr_x = tf.add(tf.multiply(dx, widths), ctr_x)  # invert formula 2: predicted center x from xa, wa, tx
    pred_ctr_y = tf.add(tf.multiply(dy, heights), ctr_y)  # invert formula 2: predicted center y from ya, ha, ty
    pred_w = tf.multiply(tf.exp(dw), widths)  # invert formula 2: predicted w from wa, tw
    pred_h = tf.multiply(tf.exp(dh), heights)  # invert formula 2: predicted h from ha, th

    pred_boxes0 = tf.subtract(pred_ctr_x, pred_w * 0.5)  # the predicted box's top-left and bottom-right coordinates
    pred_boxes1 = tf.subtract(pred_ctr_y, pred_h * 0.5)
    pred_boxes2 = tf.add(pred_ctr_x, pred_w * 0.5)
    pred_boxes3 = tf.add(pred_ctr_y, pred_h * 0.5)

    return tf.stack([pred_boxes0, pred_boxes1, pred_boxes2, pred_boxes3], axis=1)


def clip_boxes_tf(boxes, im_info):  # clamp the predicted boxes to the original image
    b0 = tf.maximum(tf.minimum(boxes[:, 0], im_info[1] - 1), 0)
    b1 = tf.maximum(tf.minimum(boxes[:, 1], im_info[0] - 1), 0)
    b2 = tf.maximum(tf.minimum(boxes[:, 2], im_info[1] - 1), 0)
    b3 = tf.maximum(tf.minimum(boxes[:, 3], im_info[0] - 1), 0)
    return tf.stack([b0, b1, b2, b3], axis=1)
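A quick numeric check of the inverse transform (my own arithmetic mirroring bbox_transform_inv_tf above):

import numpy as np

# One anchor (0, 0, 15, 15): w = h = 16, center (7.5, 7.5).
w = h = 16.0
ctr_x = ctr_y = 7.5
dx, dy, dw, dh = 0.25, 0.0, np.log(2.0), 0.0  # example deltas

pred_ctr_x = dx * w + ctr_x  # 11.5
pred_ctr_y = dy * h + ctr_y  # 7.5
pred_w = np.exp(dw) * w      # 32.0: dw = log(2) doubles the width
pred_h = np.exp(dh) * h      # 16.0

box = (pred_ctr_x - 0.5 * pred_w, pred_ctr_y - 0.5 * pred_h,
       pred_ctr_x + 0.5 * pred_w, pred_ctr_y + 0.5 * pred_h)
print(box)                   # (-4.5, -0.5, 27.5, 15.5); clip_boxes_tf then clamps it to the image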

1.7 _anchor_target_layer

_anchor_target_layer first removes the anchors that cross the image border. Then bbox_overlaps computes the overlaps matrix (N*M) between the anchors (N*4) and gt_boxes (M*4). From it come, per anchor, the maximal overlap with any ground-truth box, max_overlaps (1*N), and, per ground-truth box, the maximal overlap over all anchors, gt_max_overlaps (1*M), along with the positions gt_argmax_overlaps of the anchors achieving those maxima. _compute_targets then computes the regression targets bbox_targets between the anchors and their best-overlapping gt_boxes (the last four equations of formula 2). Finally, _unmap maps everything back onto the original full anchor set (including the border-crossing anchors): rpn_labels (whether each anchor is a positive, a negative, or ignored), rpn_bbox_targets, rpn_bbox_inside_weights, and rpn_bbox_outside_weights. A toy check of the labeling rule follows.
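A toy check of the thresholding part of the labeling rule (my own snippet; 0.7 and 0.3 are the repo's default RPN_POSITIVE_OVERLAP and RPN_NEGATIVE_OVERLAP):

import numpy as np

max_overlaps = np.array([0.1, 0.45, 0.8])  # best IoU of three anchors with any gt box
labels = np.full(3, -1.0)                  # -1: ignored
labels[max_overlaps < 0.3] = 0             # negatives
labels[max_overlaps >= 0.7] = 1            # positives (the best anchor per gt box is also set to 1)
print(labels)                              # [ 0. -1.  1.]: the 0.45 anchor stays ignored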

_anchor_target_layer definition:

def _anchor_target_layer(self, rpn_cls_score, name):  # rpn_cls_score: per-position two-class anchor scores, 1*?*?*(9*2)
    with tf.variable_scope(name) as scope:
        # rpn_labels: whether each feature-map position is a positive, a negative, or ignored (border-crossing anchors removed)
        # rpn_bbox_targets: per-position offsets to the matched positive sample (mostly 0)
        # rpn_bbox_inside_weights: 1 for positives (0 for negatives and ignored)
        # rpn_bbox_outside_weights: normalized weights for positives and negatives (ignored samples excluded)
        rpn_labels, rpn_bbox_targets, rpn_bbox_inside_weights, rpn_bbox_outside_weights = tf.py_func(
            anchor_target_layer, [rpn_cls_score, self._gt_boxes, self._im_info, self._feat_stride, self._anchors, self._num_anchors],
            [tf.float32, tf.float32, tf.float32, tf.float32], name="anchor_target")

        rpn_labels.set_shape([1, 1, None, None])
        rpn_bbox_targets.set_shape([1, None, None, self._num_anchors * 4])
        rpn_bbox_inside_weights.set_shape([1, None, None, self._num_anchors * 4])
        rpn_bbox_outside_weights.set_shape([1, None, None, self._num_anchors * 4])

        rpn_labels = tf.to_int32(rpn_labels, name="to_int32")
        self._anchor_targets['rpn_labels'] = rpn_labels  # positive, negative, or ignored per position (border-crossing anchors removed)
        self._anchor_targets['rpn_bbox_targets'] = rpn_bbox_targets  # per-position offsets to the matched positive sample (mostly 0)
        self._anchor_targets['rpn_bbox_inside_weights'] = rpn_bbox_inside_weights  # 1 for positives (0 for negatives and ignored)
        self._anchor_targets['rpn_bbox_outside_weights'] = rpn_bbox_outside_weights  # normalized weights for positives and negatives (ignored excluded)

        self._score_summaries.update(self._anchor_targets)

    return rpn_labels

def anchor_target_layer(rpn_cls_score, gt_boxes, im_info, _feat_stride, all_anchors, num_anchors):  # 1*?*?*(9*2); ?*5; 3; [16]; ?*4; [9]
    """Same as the anchor target layer in original Fast/er RCNN """
    A = num_anchors  # [9]
    total_anchors = all_anchors.shape[0]  # total anchor count: 9 * feature-map width * feature-map height
    K = total_anchors / num_anchors

    _allowed_border = 0  # allow boxes to sit over the edge by a small amount
    height, width = rpn_cls_score.shape[1:3]  # height/width of the rpn feature map

    inds_inside = np.where(  # anchors may cross the image border; keep the indices of the anchors fully inside the image
        (all_anchors[:, 0] >= -_allowed_border) & (all_anchors[:, 1] >= -_allowed_border) &
        (all_anchors[:, 2] < im_info[1] + _allowed_border) &  # width
        (all_anchors[:, 3] < im_info[0] + _allowed_border)  # height
        )[0]

    anchors = all_anchors[inds_inside, :]  # coordinates of the anchors inside the image

    labels = np.empty((len(inds_inside),), dtype=np.float32)  # label: 1 positive, 0 negative, -1 ignored
    labels.fill(-1)

    # overlap ratios between every anchor (n*4) and every ground-truth box (m*4): an n*m matrix
    overlaps = bbox_overlaps(np.ascontiguousarray(anchors, dtype=np.float), np.ascontiguousarray(gt_boxes, dtype=np.float))
    argmax_overlaps = overlaps.argmax(axis=1)  # per row (anchor), the index of the best-overlapping gt box: an n-vector
    max_overlaps = overlaps[np.arange(len(inds_inside)), argmax_overlaps]  # per anchor, the best overlap value: an n-vector
    gt_argmax_overlaps = overlaps.argmax(axis=0)  # per column (gt box), the index of the best-overlapping anchor: an m-vector
    gt_max_overlaps = overlaps[gt_argmax_overlaps, np.arange(overlaps.shape[1])]  # per gt box, the best overlap value: an m-vector
    gt_argmax_overlaps = np.where(overlaps == gt_max_overlaps)[0]  # indices (in ascending order) of all anchors achieving those best overlaps

    if not cfg.TRAIN.RPN_CLOBBER_POSITIVES:  # assign bg labels first so that positive labels can clobber them first set the negatives
        labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0  # anchors whose best overlap is below the threshold become negatives

    labels[gt_argmax_overlaps] = 1  # fg label: for each gt, the anchor with highest overlap becomes a positive
    labels[max_overlaps >= cfg.TRAIN.RPN_POSITIVE_OVERLAP] = 1  # fg label: above threshold IOU, anchors become positives

    if cfg.TRAIN.RPN_CLOBBER_POSITIVES:  # assign bg labels last so that negative labels can clobber positives
        labels[max_overlaps < cfg.TRAIN.RPN_NEGATIVE_OVERLAP] = 0

    # if there are too many positives, randomly keep only num_fg = 0.5*256 = 128 of them
    num_fg = int(cfg.TRAIN.RPN_FG_FRACTION * cfg.TRAIN.RPN_BATCHSIZE)  # subsample positive labels if we have too many
    fg_inds = np.where(labels == 1)[0]
    if len(fg_inds) > num_fg:
        disable_inds = npr.choice(fg_inds, size=(len(fg_inds) - num_fg), replace=False)
        labels[disable_inds] = -1  # mark the surplus positives as ignored

    # if there are too many negatives, randomly keep only num_bg = 256 - (number of positives) of them
    num_bg = cfg.TRAIN.RPN_BATCHSIZE - np.sum(labels == 1)  # subsample negative labels if we have too many
    bg_inds = np.where(labels == 0)[0]
    if len(bg_inds) > num_bg:
        disable_inds = npr.choice(bg_inds, size=(len(bg_inds) - num_bg), replace=False)
        labels[disable_inds] = -1  # mark the surplus negatives as ignored

    bbox_targets = np.zeros((len(inds_inside), 4), dtype=np.float32)
    bbox_targets = _compute_targets(anchors, gt_boxes[argmax_overlaps, :])  # coordinate offsets between the anchors and their matched positives

    bbox_inside_weights = np.zeros((len(inds_inside), 4), dtype=np.float32)
    bbox_inside_weights[labels == 1, :] = np.array(cfg.TRAIN.RPN_BBOX_INSIDE_WEIGHTS)  # weight 1 on all four coordinates of the positives

    bbox_outside_weights = np.zeros((len(inds_inside), 4), dtype=np.float32)
    if cfg.TRAIN.RPN_POSITIVE_WEIGHT < 0:  # uniform weighting of examples (given non-uniform sampling)
        num_examples = np.sum(labels >= 0)  # total number of positives and negatives (ignored excluded)
        positive_weights = np.ones((1, 4)) * 1.0 / num_examples  # normalized weights
        negative_weights = np.ones((1, 4)) * 1.0 / num_examples  # normalized weights
    else:
        assert ((cfg.TRAIN.RPN_POSITIVE_WEIGHT > 0) & (cfg.TRAIN.RPN_POSITIVE_WEIGHT < 1))
        positive_weights = (cfg.TRAIN.RPN_POSITIVE_WEIGHT / np.sum(labels == 1))
        negative_weights = ((1.0 - cfg.TRAIN.RPN_POSITIVE_WEIGHT) / np.sum(labels == 0))
    bbox_outside_weights[labels == 1, :] = positive_weights  # normalized weights
    bbox_outside_weights[labels == 0, :] = negative_weights  # normalized weights

    # inds_inside was used above, so map labels, bbox_targets, bbox_inside_weights and bbox_outside_weights
    # back onto the original anchor set (including the anchors crossing the image border), padding with the fill value
    labels = _unmap(labels, total_anchors, inds_inside, fill=-1)
    bbox_targets = _unmap(bbox_targets, total_anchors, inds_inside, fill=0)
    bbox_inside_weights = _unmap(bbox_inside_weights, total_anchors, inds_inside, fill=0)  # weight 1 on the four coordinates of the positives, 0 elsewhere
    bbox_outside_weights = _unmap(bbox_outside_weights, total_anchors, inds_inside, fill=0)

    labels = labels.reshape((1, height, width, A)).transpose(0, 3, 1, 2)  # (1*?*?)*9 ==> 1*?*?*9 ==> 1*9*?*?
    labels = labels.reshape((1, 1, A * height, width))  # 1*9*?*? ==> 1*1*(9*?)*?
    rpn_labels = labels  # positive, negative, or ignored per position (border-crossing anchors removed)

    bbox_targets = bbox_targets.reshape((1, height, width, A * 4))  # ==> 1*?*?*(9*4)

    rpn_bbox_targets = bbox_targets  # per-position offsets to the matched positive sample (mostly 0)
    bbox_inside_weights = bbox_inside_weights.reshape((1, height, width, A * 4))  # ==> 1*?*?*(9*4)
    rpn_bbox_inside_weights = bbox_inside_weights
    bbox_outside_weights = bbox_outside_weights.reshape((1, height, width, A * 4))  # ==> 1*?*?*(9*4)
    rpn_bbox_outside_weights = bbox_outside_weights  # normalized weights
    return rpn_labels, rpn_bbox_targets, rpn_bbox_inside_weights, rpn_bbox_outside_weights


def _unmap(data, count, inds, fill=0):
    """ Unmap a subset of item (data) back to the original set of items (of size count) """
    if len(data.shape) == 1:
        ret = np.empty((count,), dtype=np.float32)  # a 1-d array
        ret.fill(fill)  # fill with the default value
        ret[inds] = data  # write the real data at the valid positions
    else:
        ret = np.empty((count,) + data.shape[1:], dtype=np.float32)  # an array of the matching shape
        ret.fill(fill)  # fill with the default value
        ret[inds, :] = data  # write the real data at the valid positions
    return ret


def _compute_targets(ex_rois, gt_rois):
    """Compute bounding-box regression targets for an image."""
    assert ex_rois.shape[0] == gt_rois.shape[0]
    assert ex_rois.shape[1] == 4
    assert gt_rois.shape[1] == 5

    # apply the last four equations of formula 2 to the anchors and their matched positives to get the offsets
    return bbox_transform(ex_rois, gt_rois[:, :4]).astype(np.float32, copy=False)  # gt_rois has 5 columns; drop the class column (the fifth)

def bbox_transform(ex_rois, gt_rois):
    ex_widths = ex_rois[:, 2] - ex_rois[:, 0] + 1.0  # anchor widths
    ex_heights = ex_rois[:, 3] - ex_rois[:, 1] + 1.0  # anchor heights
    ex_ctr_x = ex_rois[:, 0] + 0.5 * ex_widths  # anchor center x
    ex_ctr_y = ex_rois[:, 1] + 0.5 * ex_heights  # anchor center y

    gt_widths = gt_rois[:, 2] - gt_rois[:, 0] + 1.0  # ground-truth w
    gt_heights = gt_rois[:, 3] - gt_rois[:, 1] + 1.0  # ground-truth h
    gt_ctr_x = gt_rois[:, 0] + 0.5 * gt_widths  # ground-truth center x
    gt_ctr_y = gt_rois[:, 1] + 0.5 * gt_heights  # ground-truth center y

    targets_dx = (gt_ctr_x - ex_ctr_x) / ex_widths  # dx from x*, xa, wa of formula 2
    targets_dy = (gt_ctr_y - ex_ctr_y) / ex_heights  # dy from y*, ya, ha of formula 2
    targets_dw = np.log(gt_widths / ex_widths)  # dw from w*, wa of formula 2
    targets_dh = np.log(gt_heights / ex_heights)  # dh from h*, ha of formula 2

    targets = np.vstack((targets_dx, targets_dy, targets_dw, targets_dh)).transpose()
    return targets
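A worked example of bbox_transform (my own numbers, assuming the function above is in scope):

import numpy as np

ex = np.array([[0., 0., 15., 15.]])  # anchor: w = h = 16, center (7.5, 7.5)
gt = np.array([[5., 5., 20., 20.]])  # ground truth: w = h = 16, center (12.5, 12.5)
print(bbox_transform(ex, gt))        # [[0.3125 0.3125 0. 0.]]: dx = dy = 5/16, dw = dh = log(1) = 0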

1.8 bbox_overlaps

bbox_overlaps computes the overlap between anchors and ground-truth boxes. See the reference https://www.cnblogs.com/darkknightzh/p/9043395.html for details; the code in the project is as follows:

def bbox_overlaps(
        np.ndarray[DTYPE_t, ndim=2] boxes,
        np.ndarray[DTYPE_t, ndim=2] query_boxes):
    """
    Parameters
    ----------
    boxes: (N, 4) ndarray of float
    query_boxes: (K, 4) ndarray of float
    Returns
    -------
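The listing above is cut off, so here is a minimal NumPy sketch of the same overlap computation (my own re-implementation of the idea, not the repo's Cython code; it keeps the +1 pixel convention used throughout):

import numpy as np

def bbox_overlaps_np(boxes, query_boxes):
    """IoU between each of N boxes and each of K query_boxes; returns an N*K matrix."""
    N, K = boxes.shape[0], query_boxes.shape[0]
    overlaps = np.zeros((N, K), dtype=np.float64)
    for k in range(K):
        q = query_boxes[k]
        q_area = (q[2] - q[0] + 1) * (q[3] - q[1] + 1)
        for n in range(N):
            b = boxes[n]
            iw = min(b[2], q[2]) - max(b[0], q[0]) + 1  # intersection width
            ih = min(b[3], q[3]) - max(b[1], q[1]) + 1  # intersection height
            if iw > 0 and ih > 0:
                b_area = (b[2] - b[0] + 1) * (b[3] - b[1] + 1)
                overlaps[n, k] = iw * ih / (b_area + q_area - iw * ih)
    return overlaps

# Identical boxes give IoU 1; disjoint boxes give 0:
print(bbox_overlaps_np(np.array([[0., 0., 9., 9.]]),
                       np.array([[0., 0., 9., 9.], [20., 20., 29., 29.]])))  # [[1. 0.]]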