
DL NN: advanced optimization of the NN algorithm (local dataset of 50,000 training images) with three parameter improvements to further raise the accuracy of handwritten-digit recognition

First, change one:

Start with the weight initialization and adopt a better random initialization method. We keep the mean of the normal distribution unchanged and change only its standard deviation (each weight is scaled by 1/sqrt(x), where x is the number of inputs feeding the neuron).

Weight initialization before the change:

    def large_weight_initializer(self):
        self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
        # Weights drawn from a standard normal distribution: mean 0, std 1.
        self.weights = [np.random.randn(y, x)
                        for x, y in zip(self.sizes[:-1], self.sizes[1:])]

Weight initialization after the change:

    def default_weight_initializer(self):
        self.biases = [np.random.randn(y, 1) for y in self.sizes[1:]]
        # Same mean 0, but the std is scaled down to 1/sqrt(x), where x is the
        # number of inputs to the neuron, so the weighted input z stays small.
        self.weights = [np.random.randn(y, x)/np.sqrt(x)
                        for x, y in zip(self.sizes[:-1], self.sizes[1:])]
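
To see the effect of the /np.sqrt(x) factor concretely, here is a minimal NumPy sketch (illustrative only, not part of the network code; the layer sizes are assumed) that samples weights both ways and compares their standard deviations:

import numpy as np

np.random.seed(0)
n_in = 784   # e.g. a 28x28 MNIST input layer feeding a 30-neuron hidden layer

w_old = np.random.randn(30, n_in)                  # old scheme: std about 1
w_new = np.random.randn(30, n_in) / np.sqrt(n_in)  # new scheme: std about 1/sqrt(784)

print(round(w_old.std(), 3))   # roughly 1.0
print(round(w_new.std(), 3))   # roughly 0.036, so the weighted input z stays small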

Change two:

To reduce overfitting and lessen the influence of local noise in the data, replace the original quadratic cost function with the cross-entropy cost:

class CrossEntropyCost(object):

    @staticmethod
    def fn(a, y):
        # np.nan_to_num guards against nan from the 0*log(0) case.
        return np.sum(np.nan_to_num(-y*np.log(a)-(1-y)*np.log(1-a)))

    @staticmethod
    def delta(z, a, y):
        # Output-layer error: no sigmoid_prime(z) factor, so no learning slowdown.
        return (a-y)
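
For contrast, the quadratic cost can be written with the same fn/delta interface. This is an illustrative sketch rather than the article's code; sigmoid and sigmoid_prime are helpers assumed here. The key difference is the extra sigmoid_prime(z) factor in delta, which shrinks the gradient when the output neuron saturates, and that is exactly what the cross-entropy cost avoids.

import numpy as np

def sigmoid(z):
    return 1.0/(1.0 + np.exp(-z))

def sigmoid_prime(z):
    # Derivative of the sigmoid; close to 0 when the neuron saturates.
    return sigmoid(z)*(1 - sigmoid(z))

class QuadraticCost(object):

    @staticmethod
    def fn(a, y):
        return 0.5*np.linalg.norm(a - y)**2

    @staticmethod
    def delta(z, a, y):
        # The sigmoid_prime(z) factor is what slows learning on saturated outputs.
        return (a - y)*sigmoid_prime(z)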

Change three:

Replace the sigmoid ("S") function with the softmax function in the output layer:

import numpy as np
import theano
import theano.tensor as T
from theano.tensor.nnet import softmax

class SoftmaxLayer(object):

    def __init__(self, n_in, n_out, p_dropout=0.0):
        self.n_in = n_in
        self.n_out = n_out
        self.p_dropout = p_dropout
        self.w = theano.shared(
            np.zeros((n_in, n_out), dtype=theano.config.floatX),
            name='w', borrow=True)
        self.b = theano.shared(
            np.zeros((n_out,), dtype=theano.config.floatX),
            name='b', borrow=True)
        self.params = [self.w, self.b]

    def set_inpt(self, inpt, inpt_dropout, mini_batch_size):
        self.inpt = inpt.reshape((mini_batch_size, self.n_in))
        self.output = softmax((1-self.p_dropout)*T.dot(self.inpt, self.w) + self.b)
        self.y_out = T.argmax(self.output, axis=1)
        # dropout_layer is a helper defined elsewhere in the same module.
        self.inpt_dropout = dropout_layer(
            inpt_dropout.reshape((mini_batch_size, self.n_in)), self.p_dropout)
        self.output_dropout = softmax(T.dot(self.inpt_dropout, self.w) + self.b)

    def cost(self, net):
        "Return the log-likelihood cost."
        return -T.mean(T.log(self.output_dropout)[T.arange(net.y.shape[0]), net.y])

    def accuracy(self, y):
        "Return the accuracy for the mini-batch."
        return T.mean(T.eq(y, self.y_out))
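
To see why softmax is a natural fit for the output layer, here is a small NumPy illustration (separate from the Theano code above, with made-up input values): softmax turns the layer's weighted inputs into positive values that sum to 1, so the ten outputs can be read directly as digit probabilities.

import numpy as np

def softmax_np(z):
    # Subtract the max before exponentiating for numerical stability.
    e = np.exp(z - np.max(z))
    return e/np.sum(e)

z = np.array([2.0, 1.0, 0.5, -1.0])   # example weighted inputs for four classes
p = softmax_np(z)
print(p)          # every entry lies in (0, 1)
print(p.sum())    # sums to 1.0: a probability distribution over the classes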