
[Computer Science] [2017.11] [Source Code Included] Deep Learning for Pixelwise Classification of Hyperspectral Images


This is a master's thesis (128 pages) from Delft University of Technology, the Netherlands, by I.A.F. Snuverink.

In hyperspectral (HS) imaging, every pixel captures a spectrum of wavelengths; these spectra represent material properties, i.e. spectral signatures. Classification of HS imagery is therefore based on material properties. This thesis presents a framework for pixelwise classification of HS images of a fixed scene under varying ambient conditions. TNO recorded HS images over the course of one year, every hour between sunrise and sunset, so the data set is subject to a range of lighting, weather and seasonal conditions that degrade the recorded data. Traditionally, atmospheric models are used to correct for these effects and recover the spectral information. In this work, a fully convolutional network (FCN) is trained to perform the segmentation task while also learning to correct for ambient conditions, eliminating the need for an atmospheric model or image normalization.
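To make the end-to-end idea concrete, below is a minimal sketch of a small U-Net-style FCN that maps a hyperspectral patch (height x width x spectral bands) directly to per-pixel class probabilities. The use of tf.keras, the layer widths, the patch size and the number of bands (n_bands) are illustrative assumptions only, not the thesis' actual implementation (the original Python source is in its Appendix B).

```python
# Minimal U-Net-style FCN sketch for per-pixel classification of a
# hyperspectral patch (patch_size x patch_size x n_bands) into 5 classes.
# Layer widths, patch size and n_bands are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_small_unet(patch_size=32, n_bands=100, n_classes=5):
    inp = layers.Input(shape=(patch_size, patch_size, n_bands))

    # Encoder: two convolution stages, one downsampling step.
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    c1 = layers.Conv2D(32, 3, padding="same", activation="relu")(c1)
    p1 = layers.MaxPooling2D(2)(c1)

    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(p1)
    c2 = layers.Conv2D(64, 3, padding="same", activation="relu")(c2)

    # Decoder: upsample and concatenate the skip connection from c1.
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c2)
    u1 = layers.concatenate([u1, c1])
    c3 = layers.Conv2D(32, 3, padding="same", activation="relu")(u1)

    # 1x1 convolution produces per-pixel class probabilities.
    out = layers.Conv2D(n_classes, 1, activation="softmax")(c3)
    return Model(inp, out)

model = build_small_unet()
model.summary()  # output shape: (None, 32, 32, 5), one softmax per pixel
```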

A single FCN (U-Net) is implemented to solve a five-class segmentation problem, distinguishing broad-leaf trees, grass, sand, asphalt and artificial grass. Training the neural network requires a corresponding ground truth for the training data set. A sparsely annotated mask is designed that fits images covering the entire recording period. For single-scene segmentation, annotating with a sparse mask is a quick method that allows for moving object borders; it also avoids the inclusion of mixed pixels. To reduce the computational load, the training data set is formed from many small patches extracted from the original HS images, so the network is trained with local spectral-spatial information. Training the standard U-Net proves to be limited to training data sets recorded under relatively constant ambient conditions. To further enhance generalization across seasons, the network weights are rearranged so that a similar number of weights is maintained. This network is essential for training on a complex data set, as it is able to extract more informative features.
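The sparse annotation and patch-based training described above can be illustrated with the following NumPy sketch: it extracts small spectral-spatial patches from a recorded HS cube and evaluates a cross-entropy loss over annotated pixels only, so unlabeled pixels in the sparse mask do not contribute. The ignore value (255), patch/stride sizes and array layout are assumptions for illustration, not the thesis' exact choices.

```python
# Sketch of patch extraction from an HS cube and of a loss that uses only the
# sparsely annotated pixels. The ignore value 255, patch/stride sizes and
# array shapes are illustrative assumptions.
import numpy as np

IGNORE = 255  # label value for pixels left unannotated in the sparse mask

def extract_patches(cube, mask, patch=32, stride=32):
    """cube: (H, W, C) hyperspectral image; mask: (H, W) sparse label map."""
    X, Y = [], []
    H, W, _ = cube.shape
    for r in range(0, H - patch + 1, stride):
        for c in range(0, W - patch + 1, stride):
            m = mask[r:r + patch, c:c + patch]
            if np.any(m != IGNORE):  # keep only patches that contain labels
                X.append(cube[r:r + patch, c:c + patch, :])
                Y.append(m)
    return np.stack(X), np.stack(Y)

def masked_cross_entropy(y_true, y_pred):
    """Cross-entropy averaged over annotated pixels only.

    y_true: (B, H, W) integer labels, IGNORE for unannotated pixels.
    y_pred: (B, H, W, n_classes) softmax probabilities.
    """
    valid = y_true != IGNORE
    probs = y_pred[valid]                # (N_valid, n_classes)
    labels = y_true[valid].astype(int)   # (N_valid,)
    ce = -np.log(probs[np.arange(labels.size), labels] + 1e-7)
    return float(ce.mean())
```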

The standard U-Net trained with a simple training data set (i.e. relatively constant ambient conditions) achieves an accuracy A > 94% for both sunny and rainy test images, irrespective of the time of day. However, this model is valid only for a limited period of time. The customized U-Net trained with a complex training data set (i.e. highly varying ambient conditions) yields segmentations with an accuracy between 86% and 93%. This model is valid for a longer period, covering multiple seasons. The experiments thus show a trade-off between segmentation accuracy and the duration of model validity, which is controlled by the arrangement of the network weights.
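For completeness, a short sketch of how a pixelwise accuracy figure such as A > 94% can be computed against the sparse ground truth, counting annotated pixels only; the ignore convention is the same assumption made in the sketch above.

```python
# Hedged sketch: pixelwise accuracy evaluated only on annotated pixels of the
# sparse ground-truth mask.
import numpy as np

def annotated_pixel_accuracy(pred_labels, mask, ignore=255):
    """pred_labels, mask: (H, W) integer label maps."""
    valid = mask != ignore
    return float(np.mean(pred_labels[valid] == mask[valid]))
```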

In hyperspectral (HS) imaging, for every pixel a spectrum of wavelengths is captured. These spectra represent material properties, i.e. the spectral signatures. So, classification of HS imagery is based on material properties. This thesis describes a framework to perform pixelwise classification of HS images of a fixed scene subject to varying ambient conditions. TNO has recorded HS images over the course of one year, every hour between sunrise and sunset. Therefore, this data set is subject to a range of lighting, weather and seasonal conditions, degrading the recorded data. Traditionally, atmospheric models are used to correct for these effects, recovering spectral information. In this work a Fully Convolutional Network (FCN) is trained to perform the segmentation task which also learns to correct for ambient conditions, eliminating the need of implementing an atmospheric model or applying image normalization.

A single FCN (U-Net) is implemented to solve a five-class segmentation problem, distinguishing broad leaf trees, grass, sand, asphalt and artificial grass. To start training a neural network, the training data set requires a corresponding ground truth. A sparsely annotated mask is designed which fits images covering the entire recording period. In single-scene segmentation, annotating using a sparse mask is a quick method which allows for moving object borders. Furthermore, it avoids inclusion of mixed pixels. In order to reduce computational load, the training data set is formed by many small patches taken from the original HS images. Consequently, the network is trained with local spectral-spatial information. Training the standard U-Net proves to be limited to training data sets under relatively constant ambient conditions. In order to further enhance generalization over seasons, the network weights are rearranged so that a similar number of weights is maintained. This network is essential for training a complex training data set, as it is able to extract more informative features.

The standard U-Net trained with a simple training data set (i.e. relatively constant ambient conditions) achieves an accuracy A > 94% for both sunny and rainy test images irrespective of time of the day. However, this model is valid for a limited period of time. The customized U-Net trained with a complex training data set (i.e. highly varying ambient conditions) yields segmentations with an accuracy ranging between 86 and 93%. This model is valid for a longer period of time, covering multiple seasons. So the experiments show that there is a trade-off between segmentation accuracy and duration of model validity, which is controlled by network weight arrangement.

1 Introduction
2 Project Background and Related Work
3 Neural Network Design
4 Experimental Design
5 Training Data Set Optimization
6 Experimental Results
7 Conclusions and Recommendations
Appendix A Annotation Mask
Appendix B Python Source Code
Appendix C Experimental Details
Appendix D Experimental Results of Training Data Set Optimization
Appendix E Experimental Results under Time-Varying Conditions
Appendix F Improving Generalization


Download link for the original English thesis:

http://page5.dfpan.com/fs/0lc4j202142931678b4/
