
Running ImageNet with Caffe on Windows

Using Caffe involves three main steps:
【1】Use convert_imageset.exe to convert the image dataset into .lmdb or .leveldb format.
【2】Use compute_image_mean.exe to compute the image mean as preprocessing, producing a .binaryproto file.
【3】Use caffe.exe to train the CNN.
1) Data preparation
I downloaded a fairly small ImageNet image dataset: 120 classes, each with fewer than 200 images.
ImageNet image dataset
2) Generate the train.txt file
The train.txt format is clearly described online, for example here:
http://blog.csdn.net/u012878523/article/details/41698209
The format is one image per line: the path relative to the data root, a space, and an integer class label, like
class_dir/image_0001.jpg 1
I wrote a small MATLAB script that generates train.txt directly:

clear all
clc
foodDir = 'E:\000Deep Learning000\caffe-windows-3rdparty20151001\data\train_data_v2';
numClasses = 10;
classes = dir(foodDir);
classes = classes([classes.isdir]);
classes = {classes(3:numClasses+2).name};  % skip the '.' and '..' entries
fp = fopen('train.txt', 'a');
for ci = 1:length(classes)
    ims = dir(fullfile(foodDir, classes{ci}, '*.jpg'))';
    for ii = 1:length(ims)
        % Note: labels here are 1-based; Caffe expects 0-based labels,
        % so use ci-1 if your last layer's num_output equals numClasses.
        fprintf(fp, '%s/%s %d\r\n', classes{ci}, ims(ii).name, ci);
    end
end
fclose(fp);
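The same listing can be generated without MATLAB. Here is a small Python sketch under the same assumptions (one subfolder per class under the data root, .jpg images); the function and argument names are my own, not from the original post:

```python
import os

def write_train_txt(data_root, out_path, num_classes=None):
    """Write one 'class_dir/image.jpg label' line per training image.

    Labels are 0-based, as Caffe's data layer expects.
    """
    # One subdirectory per class; sort for a stable class -> label mapping.
    classes = sorted(
        d for d in os.listdir(data_root)
        if os.path.isdir(os.path.join(data_root, d))
    )
    if num_classes is not None:
        classes = classes[:num_classes]
    with open(out_path, "w") as fp:
        for label, cls in enumerate(classes):
            for name in sorted(os.listdir(os.path.join(data_root, cls))):
                if name.lower().endswith(".jpg"):
                    fp.write("%s/%s %d\n" % (cls, name, label))
```

Note that this writes plain "\n" line endings; convert_imageset accepts those even on Windows.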

Now let's start using Caffe:
【1】Use convert_imageset.exe to convert the image dataset into .lmdb or .leveldb format.

Most of what circulates online are Linux shell commands. Following the imagenet shell scripts in Caffe's bundled examples, I wrote a Windows batch command that can be used directly:

.\bin\convert_imageset.exe --resize_height=256 --resize_width=256 --shuffle --backend="leveldb" D:\000\caffe-windows-3rdparty20151001\data\train_data_v2\ D:\000\caffe-windows-3rdparty20151001\data\train.txt  D:\000\caffe-windows-3rdparty20151001\examples\imagenet\ilsvrc12_train_new2_lmdb_lmdb_lmdb_lmdb

Note that the backend used here is leveldb; the default is lmdb.
If you generate leveldb files here, you must also use leveldb later when computing the mean image in preprocessing. I initially generated lmdb files, and compute_image_mean then failed with:
set end of file error

After switching to leveldb, everything worked.

This is lmdb: (screenshot of the lmdb output directory)

This is leveldb: (screenshot of the leveldb output directory)
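The two backends are easy to tell apart by the files they write. As a quick sanity check, here is a hedged Python sketch that guesses the backend from a database directory's contents (it relies on lmdb's data.mdb/lock.mdb files versus leveldb's CURRENT/MANIFEST bookkeeping files; the helper name is my own):

```python
import os

def guess_backend(db_dir):
    """Heuristically distinguish an lmdb directory from a leveldb one.

    lmdb stores everything in data.mdb (plus lock.mdb), while leveldb
    writes CURRENT/MANIFEST/LOG plus .ldb or .sst table files.
    """
    names = set(os.listdir(db_dir))
    if "data.mdb" in names:
        return "lmdb"
    if "CURRENT" in names:
        return "leveldb"
    return "unknown"
```

Running this on the output directory before launching compute_image_mean avoids the "set end of file" surprise above.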

The output looks like this: (screenshot)

【2】Use compute_image_mean.exe to compute the image mean as preprocessing, producing a .binaryproto file:

.\bin\compute_image_mean.exe --backend="leveldb" D:\000\caffe-windows-3rdparty20151001\examples\imagenet\ilsvrc12_train_lmdb D:\000\caffe-windows-3rdparty20151001\examples\imagenet\mean.binaryproto
pause

The output looks like this: (screenshot)

【3】Run the CNN with caffe.exe

First, look at caffe.exe's help:

C:\Users\connor>D:\000\caffe-windows-3rdparty20151001\bin\caffe.exe -help
D:\000\caffe-windows-3rdparty20151001\bin\caffe.exe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

  Flags from ..\..\src\gflags.cc:
    --flagfile (load flags from file)
        type: string default: ""
    --fromenv (set flags from the environment [use 'export FLAGS_flag1=value'])
        type: string default: ""
    --tryfromenv (set flags from the environment if present)
        type: string default: ""
    --undefok (comma-separated list of flag names that it is okay to specify on
      the command line even if the program does not define a flag with that
      name.  IMPORTANT: flags in this list that have arguments MUST use the
      flag=value format)
        type: string default: ""

  Flags from ..\..\src\gflags_completions.cc:
    --tab_completion_columns (Number of columns to use in output for tab
      completion)
        type: int32 default: 80
    --tab_completion_word (If non-empty, HandleCommandLineCompletions() will
      hijack the process and attempt to do bash-style command line flag
      completion on this value.)
        type: string default: ""

  Flags from ..\..\src\gflags_reporting.cc:
    --help (show help on all flags [tip: all flags can have two dashes])
        type: bool default: false currently: true
    --helpfull (show help on all flags -- same as -help)
        type: bool default: false
    --helpmatch (show help on modules whose name contains the specified substr)
        type: string default: ""
    --helpon (show help on the modules named by this flag value)
        type: string default: ""
    --helppackage (show help on all modules in the main package)
        type: bool default: false
    --helpshort (show help on only the main module for this program)
        type: bool default: false
    --helpxml (produce an xml version of help)
        type: bool default: false
    --version (show version and build info and exit)
        type: bool default: false

  Flags from ..\..\tools\caffe.cpp:
    --gpu (Optional; run in GPU mode on given device IDs separated by ','.Use
      '-gpu all' to run on all available GPUs. The effective training batch
      size is multiplied by the number of devices.)
        type: string default: ""
    --iterations (The number of iterations to run.)
        type: int32 default: 50
    --model (The model definition protocol buffer text file..)
        type: string default: ""
    --sighup_effect (Optional; action to take when a SIGHUP signal is received:
      snapshot, stop or none.)
        type: string default: "snapshot"
    --sigint_effect (Optional; action to take when a SIGINT signal is received:
      snapshot, stop or none.)
        type: string default: "stop"
    --snapshot (Optional; the snapshot solver state to resume training.)
        type: string default: ""
    --solver (The solver definition protocol buffer text file.)
        type: string default: ""
    --weights (Optional; the pretrained weights to initialize finetuning,
      separated by ','. Cannot be set simultaneously with snapshot.)
        type: string default: ""

C:\Users\connor>

There are two main flags:
solver

snapshot

solver points to the solver.prototxt configuration file.

snapshot points to a saved solver state (a .solverstate file), used to resume an interrupted training run.

Next, look at what the solver.prototxt file contains. It lives in E:\000Deep Learning000\caffe-windows-3rdparty20151001\models\bvlc_alexnet:

net: "models/bvlc_alexnet/train_val.prototxt"
test_iter: 1000
test_interval: 1000
base_lr: 0.01
lr_policy: "step"
gamma: 0.1
stepsize: 100000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 10000
snapshot_prefix: "models/bvlc_alexnet/caffe_alexnet_train"
solver_mode: GPU

We also need a run protocol, solver.prototxt. Copy it over and change the net path on the first line to ours: net: "examples/myself/train_val.prototxt". From this file we can read off the schedule: we run batches of 256 for 450,000 iterations (90 epochs); every 1,000 iterations we test the network on the validation data; the initial learning rate is 0.01 and is reduced every 100,000 iterations (20 epochs); progress is displayed every 20 iterations; the training weight_decay is 0.0005; and every 10,000 iterations the current state is snapshotted.
The above follows the tutorial, but as written it would take a very long time, so we tweak it a little:
test_iter: 1000 is the number of test batches; we only have 10 test photos, so 10 is enough.
test_interval: 1000 means test every 1,000 iterations; we change it to every 500.
base_lr: 0.01 is the base learning rate; because our dataset is small, 0.01 makes things move too fast, so we change it to 0.001.
lr_policy: "step" is the learning-rate policy.
gamma: 0.1 is the factor by which the learning rate changes.
stepsize: 100000 reduces the learning rate every 100,000 iterations.
display: 20 displays progress every 20 iterations.
max_iter: 450000 is the maximum number of iterations.
momentum: 0.9 is a training parameter; leave it unchanged.
weight_decay: 0.0005 is a training parameter; leave it unchanged.
snapshot: 10000 saves a snapshot of the current state every 10,000 iterations; we change it to every 2,000.
solver_mode: GPU is added as a final line to train on the GPU.
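The epoch figures quoted above can be checked with a little arithmetic. A quick sketch, assuming the standard ILSVRC12 training-set size of 1,281,167 images (that count is an assumption of mine, not from this post):

```python
batch_size = 256          # TRAIN-phase batch size from train_val.prototxt
max_iter = 450000         # from solver.prototxt
stepsize = 100000         # iterations between learning-rate drops
train_images = 1281167    # standard ILSVRC12 training-set size (assumption)

# Total images processed divided by dataset size gives epochs.
epochs = batch_size * max_iter / float(train_images)              # ~90 epochs
epochs_per_lr_drop = batch_size * stepsize / float(train_images)  # ~20 epochs
```

The same arithmetic explains why the tweaks above make sense for a tiny dataset: with far fewer images, 450,000 iterations of batch 256 would mean thousands of epochs.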

Now open models/bvlc_alexnet/train_val.prototxt and take a look.

For now, just the data layers:

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 227
    mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "examples/imagenet/ilsvrc12_train_lmdb"
    batch_size: 256
    backend: LEVELDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 227
    mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "examples/imagenet/ilsvrc12_val_lmdb"
    batch_size: 50
    backend: LEVELDB
  }
}

Here backend: LMDB must be changed to backend: LEVELDB. Note that it must be all uppercase, or it will error out.

Now we can run caffe.exe directly to train the CNN. The cmd command is:

D:\000\caffe-windows-3rdparty20151001\bin\caffe.exe train --solver=models\bvlc_alexnet\solver.prototxt
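While training runs, caffe.exe prints a loss line every display iterations. If you redirect the console output to a file, a small Python sketch like this can pull the loss curve out of it (the sample lines below are illustrative, modeled on Caffe's usual "Iteration N, loss = X" logging; they are not real output from this run):

```python
import re

# Matches e.g. "... solver.cpp:218] Iteration 20, loss = 6.90234"
LOSS_RE = re.compile(r"Iteration (\d+), loss = ([\d.eE+-]+)")

def parse_loss(log_text):
    """Return a list of (iteration, loss) pairs from Caffe console output."""
    return [(int(i), float(l)) for i, l in LOSS_RE.findall(log_text)]

sample = """\
I1001 12:00:00.000000  1234 solver.cpp:218] Iteration 20, loss = 6.90234
I1001 12:00:05.000000  1234 solver.cpp:218] Iteration 40, loss = 6.85112
"""
```

With display: 20 set in the solver, you would get one such pair every 20 iterations, which is handy for plotting whether the loss is actually decreasing.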

(screenshot of the training output)

Many thanks to my senior labmate 孫滿利 for his guidance throughout these experiments!