Notes on reading "On Loss Functions for Deep Neural Networks in Classification"

Another family of loss functions for classification problems


In particular, for purely accuracy-focused research, the squared hinge loss seems to be the better choice, as it converges faster and provides better performance. It is also more robust to noise in the training-set labelling and slightly more robust to noise in the input space. However, if one works with a highly noisy dataset (in both the input and output spaces), the expectation losses described in detail in this paper seem to be the best choice, from both a theoretical and an empirical perspective.
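As a concrete illustration of the two losses being compared, here is a minimal NumPy sketch. It assumes the usual margin formulation of the squared hinge loss (one-hot targets mapped to ±1 and applied per class to the raw scores) and takes the expectation loss as an L1 distance between the one-hot target and the softmax output; the helper names are my own and the exact formulations should be checked against the paper's notation.

```python
import numpy as np

def softmax(o):
    # numerically stable softmax over the class axis
    e = np.exp(o - o.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def squared_hinge_loss(o, y_onehot):
    # margin-style squared hinge: targets mapped from {0,1} to {-1,+1},
    # applied per class to the raw (pre-softmax) scores o
    y_pm = 2.0 * y_onehot - 1.0
    return (np.maximum(0.0, 1.0 - y_pm * o) ** 2).sum(axis=-1).mean()

def expectation_loss(o, y_onehot):
    # expectation (L1) loss: L1 distance between the one-hot target
    # and the softmax output, averaged over the batch
    return np.abs(y_onehot - softmax(o)).sum(axis=-1).mean()

# toy example: 2 samples, 3 classes
o = np.array([[2.0, 0.5, -1.0],
              [0.1, 0.2, 0.3]])
y = np.array([[1, 0, 0],
              [0, 0, 1]], dtype=float)

print("squared hinge:", squared_hinge_loss(o, y))
print("expectation (L1):", expectation_loss(o, y))
```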
At the same time, this topic is far from exhausted, with many possible paths to follow and questions still to be answered. In particular, non-classical loss functions such as the Tanimoto loss (for the high-noise setting) and the Cauchy-Schwarz Divergence (for the noise-free setting) are worth further investigation; a rough sketch of both follows.
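For reference, a hedged sketch of these two non-classical losses, assuming the Tanimoto loss is the negative Tanimoto (Jaccard-like) similarity between the softmax output and the one-hot target, and the Cauchy-Schwarz Divergence is the negative log of the cosine-style similarity between them; treat both as illustrative approximations rather than a faithful reimplementation of the paper's definitions.

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def tanimoto_loss(o, y_onehot, eps=1e-12):
    # negative Tanimoto similarity between softmax output p and one-hot target y:
    # <p, y> / (||p||^2 + ||y||^2 - <p, y>)
    p = softmax(o)
    inner = (p * y_onehot).sum(axis=-1)
    denom = (p ** 2).sum(axis=-1) + (y_onehot ** 2).sum(axis=-1) - inner
    return -(inner / (denom + eps)).mean()

def cauchy_schwarz_divergence(o, y_onehot, eps=1e-12):
    # -log of the Cauchy-Schwarz similarity: <p, y> / (||p|| * ||y||)
    p = softmax(o)
    inner = (p * y_onehot).sum(axis=-1)
    norms = np.linalg.norm(p, axis=-1) * np.linalg.norm(y_onehot, axis=-1)
    return -np.log(inner / (norms + eps) + eps).mean()

o = np.array([[2.0, 0.5, -1.0],
              [0.1, 0.2, 0.3]])
y = np.array([[1, 0, 0],
              [0, 0, 1]], dtype=float)

print("Tanimoto loss:", tanimoto_loss(o, y))
print("Cauchy-Schwarz divergence:", cauchy_schwarz_divergence(o, y))
```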