
1705. Person Re-Identification by Deep Joint Learning of Multi-Loss Classification (paper reading notes)

Person Re-Identification by Deep Joint Learning of Multi-Loss Classification
This paper jointly trains with multiple classification losses to learn pedestrian local (stripe-based) features and global features at the same time. Because the locally and globally learned features are complementary, the resulting representation is more discriminative. The overall framework is shown below:
[Figure: JLML framework overview]
The proposed JLML (Joint Learning Multi-Loss) model framework is as follows:
1. Two branches: a local CNN sub-network made of m sub-streams (one per horizontal body stripe) and a global CNN sub-network. The two branches also share a base network, so the low-level features they extract are common to both tasks; sharing is both reasonable and greatly reduces the number of parameters. The network is a reduced and modified ResNet-50 (giving JLML-ResNet39). A minimal sketch of this layout follows.
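Below is a minimal PyTorch sketch of the two-branch layout, only to make the structure concrete. All names here (`TwoBranchReID`, `m`, `feat_dim`, the per-stripe pooling) are illustrative assumptions, and a stock torchvision ResNet-50 stands in for the paper's reduced JLML-ResNet39.

```python
import torch
import torch.nn as nn
import torchvision

class TwoBranchReID(nn.Module):
    """Shared low-level base + global branch + m local (stripe) sub-streams."""
    def __init__(self, num_ids, m=4, feat_dim=512):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        # Shared base network: low-level layers used by both branches.
        self.base = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu,
                                  resnet.maxpool, resnet.layer1)
        # Global branch: high-level blocks + global pooling over the whole image.
        self.global_branch = nn.Sequential(resnet.layer2, resnet.layer3, resnet.layer4,
                                           nn.AdaptiveAvgPool2d(1))
        self.global_fc = nn.Linear(2048, feat_dim)
        self.global_cls = nn.Linear(feat_dim, num_ids)
        # Local branch: m independent sub-streams, one per horizontal stripe.
        self.m = m
        self.local_streams = nn.ModuleList([
            nn.Sequential(nn.Conv2d(256, 256, 3, padding=1), nn.BatchNorm2d(256),
                          nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1))
            for _ in range(m)])
        self.local_fc = nn.Linear(256 * m, feat_dim)
        self.local_cls = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        shared = self.base(x)                      # (B, 256, H, W) shared low-level features
        g = self.global_branch(shared).flatten(1)  # (B, 2048)
        g_feat = self.global_fc(g)
        # Split the shared feature map into m horizontal stripes for the local sub-streams.
        stripes = torch.chunk(shared, self.m, dim=2)
        l = torch.cat([stream(s).flatten(1)
                       for stream, s in zip(self.local_streams, stripes)], dim=1)
        l_feat = self.local_fc(l)
        # Each branch has its own ID classifier (its own loss); features are kept for matching.
        return self.global_cls(g_feat), self.local_cls(l_feat), g_feat, l_feat
```

At test time the classifier heads are discarded and the learned global/local features are matched with a generic L2 distance (category (C) in the comparison tables below).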

2. To optimise the feature representations of each branch's sub-streams simultaneously, and to make the local and global representations complementary (a feature-selection process), each branch is trained under the same identity (ID) supervision but with its own loss function. The training of each branch is independent, yet the information they learn is complementarily discriminative. The authors call this the multi-loss design.

3. (noise and data covariance) To further reduce fitting to noise and improve robustness to diverse data sources, the authors adopt regularisation methods from the papers below for additional feature de-redundancy.
They sparsify the global feature representation with a group LASSO [Wang et al., 2013] and enforce a local feature sparsity constraint with an exclusive group LASSO [Kong et al., 2014]. The two de-redundancy regularisation terms are:
[Equations from the paper: group-LASSO and exclusive group-LASSO regularisation terms]
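For reference, the standard forms of these two regularisers from the cited papers are shown below; the notation and grouping here are mine, and the exact grouping over each branch's embedding weights is defined in the paper:

$$
\Omega_{GL}(\mathbf{W}) = \lambda_{g}\sum_{k=1}^{G}\lVert \mathbf{w}_{k}\rVert_{2},
\qquad
\Omega_{EGL}(\mathbf{W}) = \lambda_{e}\sum_{k=1}^{G}\lVert \mathbf{w}_{k}\rVert_{1}^{2}
$$

where $\mathbf{w}_{k}$ is the weight sub-vector of the k-th feature group: the group LASSO (an $\ell_{2,1}$-type norm) tends to zero out whole groups of the global feature, while the exclusive group LASSO (an $\ell_{1,2}$-type norm) encourages sparsity within each group of the local feature.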
References:
[Wang et al., 2013] Hua Wang, Feiping Nie, and Heng Huang. Multi-view clustering and feature learning via structured sparsity. In ICML, 2013.
[Kong et al., 2014] Deguang Kong, Ryohei Fujimaki, Ji Liu, Feiping Nie, and Chris Ding. Exclusive feature learning on arbitrary structures via l1,2-norm. In NIPS, 2014.
4. The classification loss is the cross-entropy loss; the final global and local branch losses (including the regularisation terms) take the following form:
[Equations from the paper: final global and local branch loss functions]
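As a rough illustration of how the multi-loss design and the two regularisers combine during training, here is a hedged sketch based on the model class above; the grouping of feature dimensions and the weights `lambda_g`, `lambda_e` are assumptions, not the paper's exact implementation.

```python
import torch.nn.functional as F

def make_groups(dim, num_groups):
    # Contiguous, equal-sized column groups over the feature dimensions.
    size = dim // num_groups
    return [slice(i * size, (i + 1) * size) for i in range(num_groups)]

def group_lasso(weight, groups):
    # Group LASSO: sum of L2 norms of each column group (drops whole groups).
    return sum(weight[:, g].norm(p=2) for g in groups)

def exclusive_group_lasso(weight, groups):
    # Exclusive group LASSO: sum of squared L1 norms (sparsity within each group).
    return sum(weight[:, g].abs().sum() ** 2 for g in groups)

def jlml_style_loss(model, global_logits, local_logits, labels,
                    m=4, lambda_g=1e-4, lambda_e=1e-4):
    # Both branches are supervised by the SAME identity labels, each with its own CE loss.
    loss_global = F.cross_entropy(global_logits, labels)
    loss_local = F.cross_entropy(local_logits, labels)
    # Structured sparsity on the embedding layers (assumed grouping: 8 arbitrary
    # groups for the global feature, one group per stripe for the local feature).
    g_groups = make_groups(model.global_fc.weight.shape[1], 8)
    l_groups = make_groups(model.local_fc.weight.shape[1], m)
    reg = (lambda_g * group_lasso(model.global_fc.weight, g_groups)
           + lambda_e * exclusive_group_lasso(model.local_fc.weight, l_groups))
    return loss_global + loss_local + reg
```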

Experiments
The authors compare and evaluate against a number of existing methods on the VIPeR [Gray and Tao, 2008], GRID [Loy et al., 2009], CUHK03 [Li et al., 2014], and Market-1501 [Zheng et al., 2015] datasets:
[Table: compared methods]

Experimental results:
Table legend:
(A) hand-crafted feature with domain-specific distance (metric) learning; (B) deep-learning feature with domain-specific deep verification (metric) learning; (C) deep-learning feature with a generic non-learning L2 distance (metric).

[Tables: comparison results on VIPeR, GRID, CUHK03 and Market-1501]

In addition, the authors provide further experimental analysis and discussion of several details:
1. Complementarity of Global and Local Features

2. Importance of Branch Independence
3. Benefits from Shared Low-Level Features
4. Effects of Selective Feature Learning
5. Comparisons of Model Size and Complexity

The authors' contributions:
1. Propose the idea of concurrently learning both local and global feature selections to maximise their correlated, complementary effects, by learning discriminative feature representations in different contexts subject to multi-loss classification objectives in a unified framework (formulated as the novel Joint Learning Multi-Loss (JLML) CNN model).
2. Optimise multiple classification losses on the same person label information concurrently, while jointly exploiting their complementary advantages to cope with local misalignment and to optimise holistic matching criteria for person re-id.
3. Perform structured feature sparsity regularisation: introduce a structured-sparsity-based selective feature learning mechanism to further improve joint feature learning.