
Object Detection with SSD + TensorFlow: Converting Data to tfrecord

Doing deep-learning object detection with TensorFlow is a real uphill battle!

1. Code repository: https://github.com/balancap/SSD-Tensorflow. Download the code to your local machine.

2. Unzip ssd_300_vgg.ckpt.zip into the checkpoints folder.

3. Run a quick test. Create demo_test.py in the notebooks folder; it is essentially the code from ssd_notebook.ipynb, turned into a .py file that runs detection on a single image. I am not familiar with Jupyter, so I adapted it myself; this feels more convenient.

    import os
    import math
    import random

    import numpy as np
    import tensorflow as tf
    import cv2

    slim = tf.contrib.slim

    import matplotlib.pyplot as plt
    import matplotlib.image as mpimg

    import sys
    sys.path.append('../')

    from nets import ssd_vgg_300, ssd_common, np_methods
    from preprocessing import ssd_vgg_preprocessing
    from notebooks import visualization

    # TensorFlow session: grow memory when needed. TF, DO NOT USE ALL MY GPU MEMORY!!!
    gpu_options = tf.GPUOptions(allow_growth=True)
    config = tf.ConfigProto(log_device_placement=False, gpu_options=gpu_options)
    isess = tf.InteractiveSession(config=config)

    # Input placeholder.
    net_shape = (300, 300)
    data_format = 'NHWC'
    img_input = tf.placeholder(tf.uint8, shape=(None, None, 3))
    # Evaluation pre-processing: resize to SSD net shape.
    image_pre, labels_pre, bboxes_pre, bbox_img = ssd_vgg_preprocessing.preprocess_for_eval(
        img_input, None, None, net_shape, data_format,
        resize=ssd_vgg_preprocessing.Resize.WARP_RESIZE)
    image_4d = tf.expand_dims(image_pre, 0)

    # Define the SSD model.
    reuse = True if 'ssd_net' in locals() else None
    ssd_net = ssd_vgg_300.SSDNet()
    with slim.arg_scope(ssd_net.arg_scope(data_format=data_format)):
        predictions, localisations, _, _ = ssd_net.net(image_4d, is_training=False, reuse=reuse)

    # Restore SSD model.
    ckpt_filename = '../checkpoints/ssd_300_vgg.ckpt'
    # ckpt_filename = '../checkpoints/VGG_VOC0712_SSD_300x300_ft_iter_120000.ckpt'
    isess.run(tf.global_variables_initializer())
    saver = tf.train.Saver()
    saver.restore(isess, ckpt_filename)

    # SSD default anchor boxes.
    ssd_anchors = ssd_net.anchors(net_shape)

    # Main image processing routine.
    def process_image(img, select_threshold=0.5, nms_threshold=.45, net_shape=(300, 300)):
        # Run SSD network.
        rimg, rpredictions, rlocalisations, rbbox_img = isess.run(
            [image_4d, predictions, localisations, bbox_img],
            feed_dict={img_input: img})
        # Get classes and bboxes from the net outputs.
        rclasses, rscores, rbboxes = np_methods.ssd_bboxes_select(
            rpredictions, rlocalisations, ssd_anchors,
            select_threshold=select_threshold, img_shape=net_shape, num_classes=21, decode=True)
        rbboxes = np_methods.bboxes_clip(rbbox_img, rbboxes)
        rclasses, rscores, rbboxes = np_methods.bboxes_sort(rclasses, rscores, rbboxes, top_k=400)
        rclasses, rscores, rbboxes = np_methods.bboxes_nms(rclasses, rscores, rbboxes,
                                                           nms_threshold=nms_threshold)
        # Resize bboxes to original image shape. Note: useless for Resize.WARP!
        rbboxes = np_methods.bboxes_resize(rbbox_img, rbboxes)
        return rclasses, rscores, rbboxes

    # Test on some demo image and visualize output.
    # Folder containing the test images.
    path = '../demo/'
    image_names = sorted(os.listdir(path))
    # Index of the image in the folder; -1 means the last one.
    img = mpimg.imread(path + image_names[-1])
    rclasses, rscores, rbboxes = process_image(img)

    # visualization.bboxes_draw_on_img(img, rclasses, rscores, rbboxes, visualization.colors_plasma)
    visualization.plt_bboxes(img, rclasses, rscores, rbboxes)
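Run the script from inside the notebooks folder, so that the relative paths ('../checkpoints/ssd_300_vgg.ckpt', '../demo/') resolve correctly:

    cd SSD-Tensorflow-master/notebooks
    python demo_test.py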

4. Prepare your own dataset in VOC2007 format and place it under the project directory.
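For reference, a VOC2007-style dataset uses the layout below; the conversion script in step 6 reads the Annotations/ and JPEGImages/ folders, while ImageSets/Main/ holds the standard split lists.

    VOC2007/
        Annotations/          # one .xml annotation file per image
            000001.xml
            ...
        ImageSets/
            Main/
                train.txt     # image IDs, one per line, without extension
                test.txt
        JPEGImages/           # the images themselves
            000001.jpg
            ...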

5. Modify pascalvoc_common.py in the datasets folder, changing the training classes to your own:

    # Original:
    # VOC_LABELS = {
    #     'none': (0, 'Background'),
    #     'aeroplane': (1, 'Vehicle'),
    #     'bicycle': (2, 'Vehicle'),
    #     'bird': (3, 'Animal'),
    #     'boat': (4, 'Vehicle'),
    #     'bottle': (5, 'Indoor'),
    #     'bus': (6, 'Vehicle'),
    #     'car': (7, 'Vehicle'),
    #     'cat': (8, 'Animal'),
    #     'chair': (9, 'Indoor'),
    #     'cow': (10, 'Animal'),
    #     'diningtable': (11, 'Indoor'),
    #     'dog': (12, 'Animal'),
    #     'horse': (13, 'Animal'),
    #     'motorbike': (14, 'Vehicle'),
    #     'person': (15, 'Person'),
    #     'pottedplant': (16, 'Indoor'),
    #     'sheep': (17, 'Animal'),
    #     'sofa': (18, 'Indoor'),
    #     'train': (19, 'Vehicle'),
    #     'tvmonitor': (20, 'Indoor'),
    # }
    # Modified:
    VOC_LABELS = {
        'none': (0, 'Background'),
        'pantograph': (1, 'Vehicle'),
    }

6. Convert the image data to tfrecords format. Modify pascalvoc_to_tfrecords.py in the datasets folder: change the file read mode on line 83 to 'rb'. If your images are not .jpg, you can also change the image type there.

In addition, line 67 controls how many images are packed into each tfrecords file, and can be changed as well; see the sketch below.
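As a sketch of what those two edits look like (the exact line numbers and surrounding code may differ slightly between versions of the repository):

    # datasets/pascalvoc_to_tfrecords.py

    # Around line 67: how many images are packed into each .tfrecord shard.
    SAMPLES_PER_FILES = 200

    # Around line 83: open the image in binary mode ('rb' instead of 'r');
    # required on Windows and under Python 3, and harmless elsewhere.
    image_data = tf.gfile.FastGFile(filename, 'rb').read()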

7. Run tf_convert_data.py. It needs to be passed some arguments:

Linux

In the SSD-Tensorflow-master folder, create tf_convert_data.sh with the following content:

    DATASET_DIR=./VOC2007/    # where the VOC data is stored (standard VOC directory layout)
    OUTPUT_DIR=./tfrecords_   # folder you created to hold the tfrecords output
    python ./tf_convert_data.py \
        --dataset_name=pascalvoc \
        --dataset_dir=${DATASET_DIR} \
        --output_name=voc_2007_train \
        --output_dir=${OUTPUT_DIR}
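Then run it from the repository root:

    cd SSD-Tensorflow-master
    bash tf_convert_data.sh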

Windows + PyCharm

In PyCharm, open Run -> Edit Configurations and enter the following in Script parameters:

    --dataset_name=pascalvoc --dataset_dir=./VOC2007/ --output_name=voc_2007_train --output_dir=./tfrecords_

Then run tf_convert_data.py.

While the tfrecords files are being generated you will see the conversion progress; once it finishes, you will see the generated files in the output folder.

8. Train the model: changes in train_ssd_network.py.

The network parameters are configured in train_ssd_network.py; if you need different values, change them in this file, for example:
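The relevant flag definitions look roughly like this (an illustrative excerpt; the default values shown are assumptions, check your copy of the file):

    # train_ssd_network.py: excerpt of the training flags (defaults illustrative).
    tf.app.flags.DEFINE_float(
        'learning_rate', 0.01, 'Initial learning rate.')
    tf.app.flags.DEFINE_integer(
        'batch_size', 32, 'The number of samples in each batch.')
    tf.app.flags.DEFINE_float(
        'gpu_memory_fraction', 0.8, 'GPU memory fraction to use.')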

Other places that need modification (for a–c, see the sketch after the listing below):

a. nets/ssd_vgg_300.py (this is the network structure used): change the number of classes on lines 87 and 88.
b. train_ssd_network.py: change the number of classes on line 120, plus the GPU memory usage, learning rate, batch_size, etc.
c. eval_ssd_network.py: change the number of classes, line 66.
d. datasets/pascalvoc_2007.py: rework the whole file according to your own training data:
    # (Images, Objects) statistics on every class.
    # Original:
    # TRAIN_STATISTICS = {
    #     'none': (0, 0),
    #     'aeroplane': (238, 306),
    #     'bicycle': (243, 353),
    #     'bird': (330, 486),
    #     'boat': (181, 290),
    #     'bottle': (244, 505),
    #     'bus': (186, 229),
    #     'car': (713, 1250),
    #     'cat': (337, 376),
    #     'chair': (445, 798),
    #     'cow': (141, 259),
    #     'diningtable': (200, 215),
    #     'dog': (421, 510),
    #     'horse': (287, 362),
    #     'motorbike': (245, 339),
    #     'person': (2008, 4690),
    #     'pottedplant': (245, 514),
    #     'sheep': (96, 257),
    #     'sofa': (229, 248),
    #     'train': (261, 297),
    #     'tvmonitor': (256, 324),
    #     'total': (5011, 12608),
    # }
    # TEST_STATISTICS = {
    #     'none': (0, 0),
    #     'aeroplane': (1, 1),
    #     'bicycle': (1, 1),
    #     'bird': (1, 1),
    #     'boat': (1, 1),
    #     'bottle': (1, 1),
    #     'bus': (1, 1),
    #     'car': (1, 1),
    #     'cat': (1, 1),
    #     'chair': (1, 1),
    #     'cow': (1, 1),
    #     'diningtable': (1, 1),
    #     'dog': (1, 1),
    #     'horse': (1, 1),
    #     'motorbike': (1, 1),
    #     'person': (1, 1),
    #     'pottedplant': (1, 1),
    #     'sheep': (1, 1),
    #     'sofa': (1, 1),
    #     'train': (1, 1),
    #     'tvmonitor': (1, 1),
    #     'total': (20, 20),
    # }
    # SPLITS_TO_SIZES = {
    #     'train': 5011,
    #     'test': 4952,
    # }

    # Modified: (Images, Objects) statistics on every class.
    TRAIN_STATISTICS = {
        'none': (0, 0),
        'pantograph': (1000, 1000),
    }
    TEST_STATISTICS = {
        'none': (0, 0),
        'pantograph': (1000, 1000),
    }
    SPLITS_TO_SIZES = {
        'train': 500,
        'test': 500,
    }
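For items a–c above, every edit boils down to replacing the VOC class count (21 = 20 classes + background) with your own, here 2 (background + pantograph). A sketch, assuming the line numbers given above still match your copy of the repository:

    # a. nets/ssd_vgg_300.py, lines 87-88 (SSDNet default parameters):
    num_classes=2,
    no_annotation_label=2,

    # b. train_ssd_network.py, around line 120:
    tf.app.flags.DEFINE_integer(
        'num_classes', 2, 'Number of classes to use in the dataset.')

    # c. eval_ssd_network.py, around line 66:
    tf.app.flags.DEFINE_integer(
        'num_classes', 2, 'Number of classes to use in the dataset.')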

9. Load the pretrained VGG-16 model and train the network.

Download the pretrained VGG-16 model and unzip it into the checkpoints folder. If you cannot find the vgg_16.ckpt file, you can download it from the link below.

Link: https://pan.baidu.com/s/1diWbdJdjVbB3AWN99406nA  Password: ge3x

As before, if you are a Linux user, create a new .sh file with the following content. (Bash does not allow a comment after a line-continuation backslash, so the meaning of each flag is given in the comment block above the command.)

    DATASET_DIR=./tfrecords_/                    # where the tfrecords data is stored
    TRAIN_DIR=./train_model/                     # where the trained model will be saved
    CHECKPOINT_PATH=./checkpoints/vgg_16.ckpt    # path of the pretrained checkpoint to load

    # --dataset_name:                prefix of the dataset files
    # --model_name:                  name of the model to load
    # --checkpoint_model_scope:      variable scope inside the loaded checkpoint
    # --save_summaries_secs=60:      save the logs every 60 s
    # --save_interval_secs=600:      save the model every 600 s
    # --weight_decay=0.0005:         weight-decay coefficient for regularization
    # --optimizer=adam:              the optimizer to use
    # --learning_rate=0.001:         learning rate
    # --learning_rate_decay_factor:  learning-rate decay factor
    # --gpu_memory_fraction=0.9:     fraction of GPU memory to occupy
    python3 ./train_ssd_network.py \
        --train_dir=${TRAIN_DIR} \
        --dataset_dir=${DATASET_DIR} \
        --dataset_name=pascalvoc_2007 \
        --dataset_split_name=train \
        --model_name=ssd_300_vgg \
        --checkpoint_path=${CHECKPOINT_PATH} \
        --checkpoint_model_scope=vgg_16 \
        --checkpoint_exclude_scopes=ssd_300_vgg/conv6,ssd_300_vgg/conv7,ssd_300_vgg/block8,ssd_300_vgg/block9,ssd_300_vgg/block10,ssd_300_vgg/block11,ssd_300_vgg/block4_box,ssd_300_vgg/block7_box,ssd_300_vgg/block8_box,ssd_300_vgg/block9_box,ssd_300_vgg/block10_box,ssd_300_vgg/block11_box \
        --trainable_scopes=ssd_300_vgg/conv6,ssd_300_vgg/conv7,ssd_300_vgg/block8,ssd_300_vgg/block9,ssd_300_vgg/block10,ssd_300_vgg/block11,ssd_300_vgg/block4_box,ssd_300_vgg/block7_box,ssd_300_vgg/block8_box,ssd_300_vgg/block9_box,ssd_300_vgg/block10_box,ssd_300_vgg/block11_box \
        --save_summaries_secs=60 \
        --save_interval_secs=600 \
        --weight_decay=0.0005 \
        --optimizer=adam \
        --learning_rate=0.001 \
        --learning_rate_decay_factor=0.94 \
        --batch_size=24 \
        --gpu_memory_fraction=0.9

If you are running under Windows + PyCharm, then besides configuring Run -> Edit Configurations as described above, you can also open the Terminal and run the code there by entering:

    python ./train_ssd_network.py --train_dir=./train_model/ --dataset_dir=./tfrecords_/ --dataset_name=pascalvoc_2007 --dataset_split_name=train --model_name=ssd_300_vgg --checkpoint_path=./checkpoints/vgg_16.ckpt --checkpoint_model_scope=vgg_16 --checkpoint_exclude_scopes=ssd_300_vgg/conv6,ssd_300_vgg/conv7,ssd_300_vgg/block8,ssd_300_vgg/block9,ssd_300_vgg/block10,ssd_300_vgg/block11,ssd_300_vgg/block4_box,ssd_300_vgg/block7_box,ssd_300_vgg/block8_box,ssd_300_vgg/block9_box,ssd_300_vgg/block10_box,ssd_300_vgg/block11_box --trainable_scopes=ssd_300_vgg/conv6,ssd_300_vgg/conv7,ssd_300_vgg/block8,ssd_300_vgg/block9,ssd_300_vgg/block10,ssd_300_vgg/block11,ssd_300_vgg/block4_box,ssd_300_vgg/block7_box,ssd_300_vgg/block8_box,ssd_300_vgg/block9_box,ssd_300_vgg/block10_box,ssd_300_vgg/block11_box --save_summaries_secs=60 --save_interval_secs=600 --weight_decay=0.0005 --optimizer=adam --learning_rate=0.001 --learning_rate_decay_factor=0.94 --batch_size=24 --gpu_memory_fraction=0.9