
Plotting test accuracy, training loss, etc. from Caffe logs (visualizing Caffe training results)

This article describes how to turn the accuracy and loss values produced during Caffe training into plots.

The general procedure for visualizing Caffe training results:

1. During training, save the output to a log file.

  train.sh:

#!/usr/bin/env sh

TOOLS=/home/zhuangni/code/Multi-Task/caffe-master/build/tools
GLOG_log_dir='/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/log/' \
$TOOLS/caffe train \
  --solver=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/solver.prototxt \
  --weights=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/face_snapshot_iter_450000.caffemodel \
  --gpu=0

   GLOG_log_dir is the directory where the log file is saved.

  The log file name is generated automatically, e.g.: caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679

  The log directory will then contain two auto-generated entries: caffe.INFO and caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679. The latter is the actual log file.
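The auto-generated name follows glog's convention `<program>.<host>.<user>.log.<SEVERITY>.<yyyymmdd-hhmmss>.<pid>`. As a small illustration (the regex and helper below are my own, not part of Caffe), the pieces can be pulled apart like this:

```python
import re

# glog names its files <program>.<host>.<user>.log.<SEVERITY>.<timestamp>.<pid>
GLOG_NAME = re.compile(
    r"(?P<prog>[^.]+)\.(?P<host>[^.]+)\.(?P<user>[^.]+)"
    r"\.log\.(?P<severity>[A-Z]+)\.(?P<stamp>\d{8}-\d{6})\.(?P<pid>\d+)$"
)

def parse_glog_name(name):
    """Return the components of a glog log filename as a dict, or None."""
    m = GLOG_NAME.match(name)
    return m.groupdict() if m else None

info = parse_glog_name("caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679")
```

This makes it easy to pick out, say, the timestamp when sorting several runs' logs.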

 

2. Create a new directory named acc:

  1) Copy in the log file caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679 and rename it my.log.

   2) Copy in the three files extract_seconds.py, plot_training_log.py.example, and parse_log.sh from the caffe-master/tools/extra directory.

3. Run

./plot_training_log.py.example 0 demo.png my.log
 Run-time arguments: the first argument selects the chart type, the second is the output image, and the remaining arguments are log files. The chart types supported by plot_training_log.py.example are: 0 = Test accuracy vs. Iters, 1 = Test accuracy vs. Seconds, 2 = Test loss vs. Iters, 3 = Test loss vs. Seconds, 4 = Train learning rate vs. Iters, 5 = Train learning rate vs. Seconds, 6 = Train loss vs. Iters, 7 = Train loss vs. Seconds.

==============================================================================================================================

The following records my own process for visualizing Caffe training results:

My network is trained multi-task on five attributes; below I visualize the test accuracy of one of them.

Goal: visualize the test accuracy.

1. Save the output to a log file during training:

   Add GLOG_log_dir in train.sh:

#!/usr/bin/env sh

TOOLS=/home/zhuangni/code/Multi-Task/caffe-master/build/tools
GLOG_log_dir='/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/log/' \
$TOOLS/caffe train \
  --solver=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/solver.prototxt \
  --weights=/home/zhuangni/code/Multi-Task/experiment_single/attr1/vgg/face_snapshot_iter_450000.caffemodel \
  --gpu=0
   After training finishes, the log file caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679 is generated.

2. Create a new directory named acc:

  1) Copy in the log file caffe.zhuangni.zhuangni.log.INFO.20161020-215304.3679 and rename it single_attr1.log.

   2) Copy in the three files extract_seconds.py, plot_training_log.py.example, and parse_log.sh from the caffe-master/tools/extra directory.

      Modify parse_log.sh as follows:

#!/bin/bash
# Usage parse_log.sh caffe.log
# It creates the following two text files, each containing a table:
#     caffe.log.test (columns: '#Iters Seconds TestAccuracy TestLoss')
#     caffe.log.train (columns: '#Iters Seconds TrainingLoss LearningRate')


# get the dirname of the script
DIR="$( cd "$(dirname "$0")" ; pwd -P )"

if [ "$#" -lt 1 ]
then
echo "Usage parse_log.sh /path/to/your.log"
exit
fi
LOG=`basename $1`
sed -n '/Iteration .* Testing net/,/Iteration *. loss/p' $1 > aux.txt
sed -i '/Waiting for data/d' aux.txt
sed -i '/prefetch queue empty/d' aux.txt
sed -i '/Iteration .* loss/d' aux.txt
sed -i '/Iteration .* lr/d' aux.txt
sed -i '/Train net/d' aux.txt
grep 'Iteration ' aux.txt | sed  's/.*Iteration \([[:digit:]]*\).*/\1/g' > aux0.txt
grep 'Test net output #0' aux.txt | awk '{print $11}' > aux1.txt
grep 'Test net output #1' aux.txt | awk '{print $11}' > aux2.txt
grep 'Test net output #2' aux.txt | awk '{print $11}' > aux3.txt
grep 'Test net output #3' aux.txt | awk '{print $11}' > aux4.txt
grep 'Test net output #4' aux.txt | awk '{print $11}' > aux5.txt
grep 'Test net output #5' aux.txt | awk '{print $11}' > aux6.txt
grep 'Test net output #6' aux.txt | awk '{print $11}' > aux7.txt
grep 'Test net output #7' aux.txt | awk '{print $11}' > aux8.txt
grep 'Test net output #8' aux.txt | awk '{print $11}' > aux9.txt
grep 'Test net output #9' aux.txt | awk '{print $11}' > aux10.txt


# Extracting elapsed seconds
# For extraction of time since this line contains the start time
grep '] Solving ' $1 > aux11.txt
grep 'Testing net' $1 >> aux11.txt
$DIR/extract_seconds.py aux11.txt aux12.txt

# Generating
echo '#Iters Seconds Test_accuracy_attr1 Test_accuracy_attr2 Test_accuracy_attr3 Test_accuracy_attr4 Test_accuracy_attr5 Test_loss_attr1 Test_loss_attr2 Test_loss_attr3 Test_loss_attr4 Test_loss_attr5'> $LOG.test
paste aux0.txt aux12.txt aux1.txt aux2.txt aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt aux8.txt aux9.txt aux10.txt | column -t >> $LOG.test
rm aux.txt aux0.txt aux12.txt aux1.txt aux2.txt aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt aux8.txt aux9.txt aux10.txt aux11.txt

# For extraction of time since this line contains the start time
grep '] Solving ' $1 > aux.txt
grep ', loss = ' $1 >> aux.txt
grep 'Train net' $1 >> aux.txt
grep 'Iteration ' aux.txt | sed  's/.*Iteration \([[:digit:]]*\).*/\1/g' > aux0.txt
grep ', lr = ' $1 | awk '{print $9}' > aux1.txt
grep 'Train net output #0' $1 | awk '{print $11}' > aux3.txt
grep 'Train net output #1' $1 | awk '{print $11}' > aux4.txt
grep 'Train net output #2' $1 | awk '{print $11}' > aux5.txt
grep 'Train net output #3' $1 | awk '{print $11}' > aux6.txt
grep 'Train net output #4' $1 | awk '{print $11}' > aux7.txt


# Extracting elapsed seconds
$DIR/extract_seconds.py aux.txt aux2.txt

# Generating
echo '#Iters Seconds Train_loss_attr1 Train_loss_attr2 Train_loss_attr3 Train_loss_attr4 Train_loss_attr5 LearningRate'> $LOG.train
paste aux0.txt aux2.txt aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt aux1.txt | column -t >> $LOG.train
rm aux.txt aux0.txt aux1.txt aux2.txt  aux3.txt aux4.txt aux5.txt aux6.txt aux7.txt
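The grep/awk pipeline above keys on Caffe's `Test net output #k: <name> = <value>` lines. As a rough cross-check, the same test-table extraction can be sketched in Python (the sample log lines below are abbreviated, made-up examples of that format, not real log output):

```python
import re

# One test pass starts with "Iteration N, Testing net" and is followed
# by one "Test net output #k: <name> = <value>" line per output blob.
ITER_RE = re.compile(r"Iteration (\d+), Testing net")
OUT_RE = re.compile(r"Test net output #(\d+): \S+ = ([0-9.eE+-]+)")

def parse_test_outputs(lines):
    """Collect, per test pass, the iteration number and the values of
    each Test net output, mirroring what parse_log.sh extracts."""
    rows, current = [], None
    for line in lines:
        m = ITER_RE.search(line)
        if m:
            if current:
                rows.append(current)
            current = {"iter": int(m.group(1))}
        m = OUT_RE.search(line)
        if m and current is not None:
            current["out%s" % m.group(1)] = float(m.group(2))
    if current:
        rows.append(current)
    return rows

log = [
    "I1020 21:53:04 solver.cpp] Iteration 1000, Testing net (#0)",
    "I1020 21:53:05 solver.cpp] Test net output #0: accuracy_attr1 = 0.85",
    "I1020 21:53:05 solver.cpp] Test net output #5: loss_attr1 = 0.42",
]
rows = parse_test_outputs(log)
```

With ten outputs (five accuracies, five losses) each row corresponds to one line of the $LOG.test table.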

3. Run

./plot_training_log.py.example 0 demo.png single_attr1.log
4. Results

(The resulting plot, demo.png, appeared here as an image.)

Additional note: since I have logs for multiple attributes, I run:

./plot_training_log.py.example 0 demo.png single_attr1.log single_attr2.log single_attr3.log single_attr4.log
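The tables that parse_log.sh writes (e.g. single_attr1.log.test) are plain whitespace-separated text with a `#`-prefixed header, so they can also be plotted without plot_training_log.py.example. A minimal, self-contained parsing sketch (the sample values below are invented for illustration):

```python
import io

def load_table(fh):
    """Read a whitespace-separated table with a '#'-prefixed header line
    (the format parse_log.sh writes) into a dict of column lists."""
    header = fh.readline().lstrip("#").split()
    cols = {name: [] for name in header}
    for line in fh:
        for name, val in zip(header, line.split()):
            cols[name].append(float(val))
    return cols

# Inline sample in the shape of single_attr1.log.test (made-up numbers)
sample = io.StringIO(
    "#Iters Seconds Test_accuracy_attr1\n"
    "1000 120.5 0.80\n"
    "2000 241.0 0.85\n"
)
cols = load_table(sample)
```

The resulting columns can then be handed straight to matplotlib, e.g. `plt.plot(cols["Iters"], cols["Test_accuracy_attr1"])`.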


5. Supplement



Log file contents: