Neural Networks and Deep Learning Study Notes (1)
1. Why do w and x become vectors once we define w·x ≡ ∑_j w_j x_j?
The first change is to write ∑_j w_j x_j as a dot product, w·x ≡ ∑_j w_j x_j, where w and x are vectors whose components are the weights and inputs, respectively.
The dot product of two vectors is a scalar: for two vectors of the same dimension, it is exactly the sum of the products of their corresponding components. The transformation above can therefore be written as w·x ≡ ∑_j w_j x_j.
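As a quick sanity check (a minimal sketch with made-up weights and inputs, not from the book), the componentwise sum and the dot product give the same number:

```python
import numpy as np

w = np.array([0.5, -1.0, 2.0])  # example weights (made up for illustration)
x = np.array([1.0, 2.0, 3.0])   # example inputs

# Sum of products, component by component: w_1*x_1 + w_2*x_2 + w_3*x_3
componentwise = sum(w_j * x_j for w_j, x_j in zip(w, x))

# The same quantity, computed as a dot product
dot = np.dot(w, x)

print(componentwise, dot)  # → 4.5 4.5
```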
2. Implementing NAND, AND, and OR gates with a perceptron.
we have a perceptron with two inputs, each with weight −2, and an overall bias of 3. Then we see that input 00 produces output 1, since (−2)∗0+(−2)∗0+3 = 3 is positive. Here, I’ve introduced the ∗ symbol to make the multiplications explicit. Similar calculations show that the inputs 01 and 10 produce output 1. But the input 11 produces output 0, since (−2)∗1+(−2)∗1+3 = −1 is negative. And so our perceptron implements a NAND gate!
The above is the NAND gate as implemented in the e-book. By the same reasoning, an AND gate can be built by setting the weights to 2 and the bias to −3, and an OR gate by setting the weights to 2 and the bias to −1. For the OR gate: input 00 gives a negative weighted sum (output 0), input 01 gives a positive sum (output 1), and input 11 gives a positive sum (output 1).
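The three gates above can be checked with a tiny sketch of the book's perceptron rule (output 1 when w·x + b > 0, else 0); the weight/bias values are the ones just discussed:

```python
def perceptron(weights, bias, inputs):
    """Perceptron rule from the book: output 1 if w·x + b > 0, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# NAND: weights -2, -2, bias 3 (the book's example)
# AND:  weights  2,  2, bias -3
# OR:   weights  2,  2, bias -1
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    nand = perceptron([-2, -2], 3, [x1, x2])
    and_ = perceptron([2, 2], -3, [x1, x2])
    or_ = perceptron([2, 2], -1, [x1, x2])
    print(x1, x2, "->", "NAND:", nand, "AND:", and_, "OR:", or_)
```

Running this prints the full truth table for each gate, confirming the weight/bias choices.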
3. What is exp?
exp is the exponential function with base e: exp(x) = e^x, where e ≈ 2.71828 is Euler's number.
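A quick check in Python (using only the standard library); the sigmoid here is the function from the book in which exp appears, σ(z) = 1/(1 + exp(−z)):

```python
import math

def sigmoid(z):
    """The book's sigmoid neuron activation: 1 / (1 + e^(-z))."""
    return 1 / (1 + math.exp(-z))

print(math.exp(1))  # e ≈ 2.718281828459045
print(math.exp(0))  # → 1.0
print(sigmoid(0))   # → 0.5
```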
4. How is this partial-derivative expression obtained? (unresolved)
Δoutput ≈ ∑_j (∂output/∂w_j) Δw_j + (∂output/∂b) Δb
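A possible route to this (not from the note itself, just standard multivariable calculus): treat output as a function of all the weights and the bias, output = f(w_1, …, w_n, b). The first-order Taylor expansion of f around the current parameter values then gives, for small changes Δw_j and Δb,

```latex
\Delta\,\text{output}
  \approx \sum_j \frac{\partial\,\text{output}}{\partial w_j}\,\Delta w_j
        + \frac{\partial\,\text{output}}{\partial b}\,\Delta b
```

i.e. each parameter's contribution to the change in output is its partial derivative times its own small change, and the contributions add up to first order.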