
Training Faster R-CNN on Your Own Dataset

After installing Caffe and building the Python version of faster-rcnn from GitHub, you can train Faster R-CNN on your own data.
I. File modifications:
1. In the py-faster-rcnn directory, find lib/datasets/pascal_voc.py, open it, and modify the following functions one by one.
If you plan to add Chinese comments, add #encoding:utf-8 at the top of the file, otherwise Python will raise an error.
The modifications in detail:

1) Modify the initialization function __init__, and rename the class:

class hs(imdb):
    def __init__(self, image_set, devkit_path=None):  # modified
        imdb.__init__(self, image_set)
        self._image_set = image_set
        self._devkit_path = devkit_path  # path to the datasets directory
        self._data_path = os.path.join(self._devkit_path, image_set)  # path to the image folder
        self._classes = ('__background__',  # always index 0
                         'jyz', 'fzc', 'qnq')  # 3 object classes
        self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))
        # forms the dict {'__background__': 0, 'jyz': 1, ...}
        self._image_ext = '.jpg'
        self._image_index = self._load_image_set_index('ImageList.txt')
        # Default to roidb handler
        self._roidb_handler = self.selective_search_roidb
        self._salt = str(uuid.uuid4())
        self._comp_id = 'comp4'
        # PASCAL specific config options
        self.config = {'cleanup'     : True,
                       'use_salt'    : True,
                       'use_diff'    : False,
                       'matlab_eval' : False,
                       'rpn_file'    : None,
                       'min_size'    : 16}  # discard boxes smaller than 16 pixels
        assert os.path.exists(self._devkit_path), \
                'VOCdevkit path does not exist: {}'.format(self._devkit_path)
        assert os.path.exists(self._data_path), \
                'Path does not exist: {}'.format(self._data_path)
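The _class_to_ind line above builds a name-to-index lookup from the class tuple. A standalone illustration of what it produces, using the class names from this tutorial (written for Python 3, hence range instead of xrange):

```python
# Build the class-name -> index dict exactly as __init__ does.
# The class names ('jyz', 'fzc', 'qnq') are this tutorial's dataset classes.
classes = ('__background__', 'jyz', 'fzc', 'qnq')
class_to_ind = dict(zip(classes, range(len(classes))))
print(class_to_ind)
# {'__background__': 0, 'jyz': 1, 'fzc': 2, 'qnq': 3}
```

This index is what _load_pascal_annotation later uses to turn the class name in each XML annotation into an integer label.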

2) Modify the image_path_from_index function:

def image_path_from_index(self, index): #modified
    """
    Construct an image path from the image's "index" identifier.
    """
    image_path = os.path.join(self._data_path,index +'.jpg')
    assert os.path.exists(image_path), \
            'Path does not exist: {}'.format(image_path)
    return image_path

3) Modify the _load_image_set_index function:

def _load_image_set_index(self,imagelist): # modified
    """
    Load the indexes listed in this dataset's image set file.
    """
    # Example path to image set file:
    # self._devkit_path + /VOCdevkit2007/VOC2007/ImageSets/Main/val.txt
    image_set_file = os.path.join(self._devkit_path, imagelist)
    assert os.path.exists(image_set_file), \
            'Path does not exist: {}'.format(image_set_file)
    with open(image_set_file) as f:
        image_index = [x.strip() for x in f.readlines()]
    return image_index
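ImageList.txt is expected to contain one image index per line, i.e. the file name without the .jpg extension. A minimal sketch of the format and of how the loop above parses it (the sample file contents are invented):

```python
import io

# A hypothetical ImageList.txt: one image index per line, no extension.
sample = "000001\n000002\n000003\n"

# Same parsing as _load_image_set_index, but reading from memory.
with io.StringIO(sample) as f:
    image_index = [x.strip() for x in f.readlines()]
print(image_index)  # ['000001', '000002', '000003']
```

Each entry, joined with self._data_path and '.jpg' by image_path_from_index, must resolve to an existing image file, or the assert there will fire.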

4) Modify _load_pascal_annotation(self, index):

def _load_pascal_annotation(self, index):    #modified
    """
    Load image and bounding boxes info from XML file in the PASCAL VOC
    format.
    """
    filename = os.path.join(self._devkit_path, 'Annotations', index + '.xml')
    tree = ET.parse(filename)
    objs = tree.findall('object')
    if not self.config['use_diff']:
        # Exclude the samples labeled as difficult
        non_diff_objs = [
            obj for obj in objs if int(obj.find('difficult').text) == 0]
        # if len(non_diff_objs) != len(objs):
        #     print 'Removed {} difficult objects'.format(
        #         len(objs) - len(non_diff_objs))
        objs = non_diff_objs
    num_objs = len(objs)

    boxes = np.zeros((num_objs, 4), dtype=np.uint16)
    gt_classes = np.zeros((num_objs), dtype=np.int32)
    overlaps = np.zeros((num_objs, self.num_classes), dtype=np.float32)
    # "Seg" area for pascal is just the box area
    seg_areas = np.zeros((num_objs), dtype=np.float32)

    # Load object bounding boxes into a data frame.
    for ix, obj in enumerate(objs):
        bbox = obj.find('bndbox')
        # Coordinates are used as-is (the stock code's "- 1" is dropped here)
        x1 = float(bbox.find('xmin').text)
        y1 = float(bbox.find('ymin').text)
        x2 = float(bbox.find('xmax').text)
        y2 = float(bbox.find('ymax').text)
        cls = self._class_to_ind[obj.find('name').text.lower().strip()]
        boxes[ix, :] = [x1, y1, x2, y2]
        gt_classes[ix] = cls
        overlaps[ix, cls] = 1.0
        seg_areas[ix] = (x2 - x1 + 1) * (y2 - y1 + 1)

    overlaps = scipy.sparse.csr_matrix(overlaps)

    return {'boxes' : boxes,
            'gt_classes': gt_classes,
            'gt_overlaps' : overlaps,
            'flipped' : False,
            'seg_areas' : seg_areas}
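The function assumes PASCAL-VOC-style XML files under Annotations/. A self-contained sketch of one such file and of the fields _load_pascal_annotation reads from it (the class name jyz comes from this tutorial; the coordinates are invented):

```python
import xml.etree.ElementTree as ET

# A hypothetical annotation file in PASCAL VOC layout.
xml_text = """
<annotation>
  <object>
    <name>jyz</name>
    <difficult>0</difficult>
    <bndbox>
      <xmin>48</xmin><ymin>240</ymin><xmax>195</xmax><ymax>371</ymax>
    </bndbox>
  </object>
</annotation>
"""

root = ET.fromstring(xml_text)
parsed = []
for obj in root.findall('object'):
    bbox = obj.find('bndbox')
    x1 = float(bbox.find('xmin').text)
    y1 = float(bbox.find('ymin').text)
    x2 = float(bbox.find('xmax').text)
    y2 = float(bbox.find('ymax').text)
    parsed.append((obj.find('name').text, x1, y1, x2, y2))
print(parsed)  # [('jyz', 48.0, 240.0, 195.0, 371.0)]
```

Note that every object needs a difficult tag (0 or 1) and a name matching one of the classes declared in __init__, or the function will raise an exception.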

5) In __main__, modify the paths accordingly:


if __name__ == '__main__':
    from datasets.hs import hs
    d = hs('hs', '/home/panyiming/py-faster-rcnn/lib/datasets')
    res = d.roidb
    from IPython import embed; embed()

2. In the py-faster-rcnn directory, find lib/datasets/factory.py and modify it. The modified file is as follows:

# --------------------------------------------------------
# Fast R-CNN
# Copyright (c) 2015 Microsoft
# Licensed under The MIT License [see LICENSE for details]
# Written by Ross Girshick
# --------------------------------------------------------

"""Factory method for easily getting imdbs by name."""

__sets = {}

from datasets.hs import hs
import numpy as np

# # Set up voc_<year>_<split> using selective search "fast" mode
# for year in ['2007', '2012']:
#     for split in ['train', 'val', 'trainval', 'test']:
#         name = 'voc_{}_{}'.format(year, split)
#         __sets[name] = (lambda split=split, year=year: pascal_voc(split, year))
#
# # Set up coco_2014_<split>
# for year in ['2014']:
#     for split in ['train', 'val', 'minival', 'valminusminival']:
#         name = 'coco_{}_{}'.format(year, split)
#         __sets[name] = (lambda split=split, year=year: coco(split, year))
#
# # Set up coco_2015_<split>
# for year in ['2015']:
#     for split in ['test', 'test-dev']:
#         name = 'coco_{}_{}'.format(year, split)
#         __sets[name] = (lambda split=split, year=year: coco(split, year))

name = 'hs'
devkit = '/home/panyiming/py-faster-rcnn/lib/datasets'
__sets['hs'] = (lambda name = name,devkit = devkit: hs(name,devkit))

def get_imdb(name):
    """Get an imdb (image database) by name."""
    if name not in __sets:
        raise KeyError('Unknown dataset: {}'.format(name))
    return __sets[name]()

def list_imdbs():
    """List all registered imdbs."""
    return __sets.keys()
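factory.py registers each dataset constructor as a zero-argument lambda; the name=name, devkit=devkit default arguments freeze the current values at registration time, so calling the lambda later still constructs the right dataset. A minimal standalone sketch of the same pattern, with a stand-in constructor instead of the real hs class:

```python
# Minimal sketch of the factory pattern used in factory.py.
__sets = {}

def make_dataset(name, devkit):
    # Stand-in for hs(name, devkit); just returns its arguments.
    return (name, devkit)

name = 'hs'
devkit = '/path/to/datasets'  # hypothetical path
# Default args capture the *current* values of name and devkit.
__sets['hs'] = (lambda name=name, devkit=devkit: make_dataset(name, devkit))

def get_imdb(name):
    if name not in __sets:
        raise KeyError('Unknown dataset: {}'.format(name))
    return __sets[name]()

print(get_imdb('hs'))  # ('hs', '/path/to/datasets')
```

Without the default arguments, the lambda would look up name and devkit at call time and would pick up whatever values those variables held last, which matters when several datasets are registered in a loop.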

II. Model selection, training, and testing:
1. Pretrained models
The py-faster-rcnn build instructions on GitHub include the following step:

cd $FRCN_ROOT
./data/scripts/fetch_faster_rcnn_models.sh

After it finishes, the archive faster_rcnn_models.tgz appears under /data/scripts; extracting it yields the faster_rcnn_models folder, which contains three networks the authors trained with Faster R-CNN, corresponding to small, medium, and large models. You can try them out to see the detection quality. All were trained for 80,000 iterations on the PASCAL VOC dataset.

The generic models pretrained on ImageNet can be downloaded with:

cd $FRCN_ROOT
./data/scripts/fetch_imagenet_models.sh

After it finishes, the archive imagenet_models.tgz appears under /data/scripts; extracting it yields the imagenet_models folder, which contains generic models pretrained on ImageNet. These are used here to initialize the network weights.

2. Modify the model configuration
The model files live under models/ in the folder for the corresponding network; here I use the medium network's configuration as the example.
Say the detection targets fall into 3 classes; then there are 4 classes in total, i.e. background plus the 3 object classes.
So open the network's model folder and edit train.prototxt. There are four places to change:

1) In the data layer, change num_classes from 21 (20 classes + background) to 4 (3 object classes + background).
2) In the cls_score layer, change num_output from 21 to 4.
3) Under RoI Proposal there is a layer with name: 'roi-data'; change its num_classes to 4.
4) In the bbox_pred layer, change num_output from 84 to 16, i.e. 4 times the number of classes.

To further adjust the learning rate, step size, gamma, or the name of the output model, edit solver.prototxt in the same directory.
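The layer edits above follow directly from the class count: with K object classes, num_classes = K + 1 (adding background), cls_score outputs one score per class, and bbox_pred outputs 4 box-regression values per class. A quick sanity check of the arithmetic:

```python
# Derive the train.prototxt values from the number of object classes.
def prototxt_dims(num_object_classes):
    num_classes = num_object_classes + 1      # + background
    cls_score_num_output = num_classes        # one score per class
    bbox_pred_num_output = 4 * num_classes    # 4 box coords per class
    return num_classes, cls_score_num_output, bbox_pred_num_output

print(prototxt_dims(3))   # (4, 4, 16)  -- the values used in this tutorial
print(prototxt_dims(20))  # (21, 21, 84) -- the stock PASCAL VOC setting
```

If any of these three numbers disagree, Caffe will fail with a shape-mismatch error when the network is built.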

3. Start Faster R-CNN training

python ./tools/train_net.py --gpu 1 --solver models/hs/faster_rcnn_end2end/solver.prototxt --weights data/imagenet_models/VGG_CNN_M_1024.v2.caffemodel --imdb hs --iters 80000 --cfg experiments/cfgs/faster_rcnn_end2end.yml

Command breakdown:
1) train_net.py is the training entry script; everything after it is a command-line argument.
2) --gpu gives the index of the GPU to use. On NVIDIA Tesla cards you can run nvidia-smi in a terminal to check the current load and pick a suitable card.
3) --solver gives the solver configuration file; the path to train.prototxt is already referenced inside it.
4) --weights gives the weights used for initialization, here the ImageNet-pretrained model; for the medium network we use VGG_CNN_M_1024.v2.caffemodel. This argument can be omitted, in which case the weights are initialized automatically.
5) --imdb gives the name of the training dataset, which must be registered in __sets in factory.py (here __sets['hs']); train_net.py calls factory.py to construct the hs class and read the data.

4. Run Faster R-CNN detection
You can refer to demo.py under tools/ to run detection and write the detected coordinates to corresponding txt files.
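A hedged sketch of that last step, writing detections to a txt file. The boxes and scores here are invented placeholders, and the "class score x1 y1 x2 y2" line format is just one reasonable choice, not something demo.py prescribes:

```python
import os

# Hypothetical detections for one image: (class_name, score, x1, y1, x2, y2).
detections = [
    ('jyz', 0.98, 48, 240, 195, 371),
    ('fzc', 0.87, 10, 22, 120, 140),
]

# One detection per line: class, score, then the four box coordinates.
lines = ['{} {:.2f} {} {} {} {}'.format(*det) for det in detections]

out_path = 'detections_000001.txt'  # hypothetical output file name
with open(out_path, 'w') as f:
    f.write('\n'.join(lines) + '\n')

with open(out_path) as f:
    print(f.read())
os.remove(out_path)  # clean up the demo file
```

In practice you would produce detections from the net's output inside demo.py's detection loop (after score thresholding and NMS) and name the txt file after the image index.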