
Biological Intelligence and Artificial Intelligence

Original article: https://hai.stanford.edu/news/the_intertwined_quest_for_understanding_biological_intelligence_and_creating_artificial_intelligence/

 

Saved here as a blog post because my other note-taking tools are unavailable.

Reading notes:

1: unlock the mystery of the three pounds of matter that sits between our ears.

2: In essence, humans, as products of evolution, sometimes yearn to play the role of creator.

3: The mutual collaboration of biological intelligence and artificial intelligence:

     The Hopfield network, a model in theoretical neuroscience that provided a unified framework for thinking about distributed, content-addressable memory storage and retrieval, also inspired the Boltzmann machine, which in turn provided a key first step in demonstrating the success of deep neural network models and inspired the idea of distributed satisfaction of many weak constraints as a model of computation in AI.
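The content-addressable memory idea can be illustrated with a minimal Hopfield network sketch (my own toy example, not code from the article): a pattern stored by a Hebbian outer-product rule is recovered from a corrupted probe.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: W is the sum of outer products of stored patterns,
    with self-connections removed."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, probe, steps=10):
    """Iterate x <- sign(W x) until the state stops changing; the network
    settles into the stored pattern nearest the probe."""
    x = probe.astype(float).copy()
    for _ in range(steps):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.0
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Store one +1/-1 pattern and recover it from a probe with one flipped bit.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[0] *= -1
recovered = recall(W, noisy)
```

The corrupted bit is corrected in a single update because the remaining bits collectively "vote" for the stored pattern, which is exactly the distributed, content-addressable retrieval the note describes.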

      Critical ingredients underlying deep convolutional networks currently dominating machine vision were directly inspired by the brain. These ingredients include hierarchical visual processing in the ventral stream, suggesting the importance of depth; the discovery of retinotopy as an organizing principle throughout visual cortex, leading to convolution; the discovery of simple and complex cells, motivating operations like max pooling; and the discovery of neural normalization within cortex, which motivated various normalization stages in artificial networks.
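Two of those ingredients can be shown in a few lines of NumPy (an illustrative sketch of my own; the `edge` kernel is just a toy example): convolution slides one shared kernel over every location, echoing retinotopic weight sharing, and max pooling echoes complex cells pooling over simple-cell responses.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the same kernel is applied at every
    spatial position (weight sharing)."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep the strongest response in each
    local block, gaining a little positional invariance."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h*size, :w*size].reshape(h, size, w, size).max(axis=(1, 3))

img = np.arange(16.0).reshape(4, 4)
edge = np.array([[1.0, -1.0]])   # toy horizontal-difference kernel
fmap = conv2d(img, edge)         # feature map, shape (4, 3)
pooled = max_pool(fmap)          # pooled map, shape (2, 1)
```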

       The human attentional system inspired the incorporation of attentional neural networks that can be trained to dynamically attend to or ignore different aspects of their state and inputs when making future computational decisions.
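One common realization of that idea is scaled dot-product attention; the following NumPy sketch is an illustrative simplification (not the article's formulation), where each query row produces a soft weighting over the inputs, deciding what to attend to and what to ignore.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: query-key similarities are turned
    into a probability distribution over inputs, which then weights the
    values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 queries
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 2))   # 5 values
out, w = attention(Q, K, V)   # out: (3, 2); each row of w sums to 1
```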

4: Biological inspiration for future AI

    theoretical studies have shown that such synaptic complexity may indeed be essential to learning and memory. In fact, network models of memory in which synapses have finite dynamic range require that such synapses be dynamical systems in their own right, with complex temporal filtering properties, to achieve reasonable network memory capacities. Moreover, more intelligent synapses have recently been explored in AI as a way to solve the catastrophic forgetting problem, in which a network trained to learn two tasks in sequence can only learn the second task, because learning the second task changes synaptic weights in such a way as to erase knowledge gained from learning the first task.
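In AI, such "intelligent synapses" appear in methods like elastic weight consolidation and synaptic intelligence, which penalize changes to weights deemed important for earlier tasks. A minimal sketch of that penalty (my own illustration; the importance values `omega` are assumed to be given, e.g. from a Fisher-information estimate):

```python
import numpy as np

def consolidated_loss(task_b_loss, w, w_old, omega, lam=1.0):
    """Task-B loss plus a quadratic penalty anchoring each weight to its
    task-A value w_old, scaled by a per-weight importance omega, so that
    weights crucial for task A resist being overwritten."""
    penalty = (lam / 2.0) * np.sum(omega * (w - w_old) ** 2)
    return task_b_loss + penalty

w_old = np.array([0.0, 2.0])   # weights after task A (hypothetical values)
omega = np.array([2.0, 2.0])   # per-weight importance for task A
w     = np.array([1.0, 2.0])   # current weights while learning task B
loss  = consolidated_loss(0.5, w, w_old, omega)
```

Moving an important weight away from its task-A value inflates the loss, so gradient descent on task B prefers directions that leave task-A knowledge intact.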

   How can forgetting be solved effectively (this is not about the semantic links between contexts in NLP)? That is, does a single method exist that applies to many similar problems? (This would overturn the "no free lunch" maxim in deep learning.)

    The goal of transfer learning is to generalize from prior experience: having learned to ride a bicycle, one quickly learns to ride a motorcycle.

    Reinforcement learning: self-feedback and self-correction.
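That self-correcting feedback loop can be illustrated by the tabular Q-learning update (my own sketch; the state/action sizes and hyperparameters are arbitrary): the temporal-difference error feeds the reward signal back to correct the agent's own value estimate.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step: nudge Q[s, a] toward the observed
    reward plus the discounted best value of the next state."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((2, 2))                     # 2 states x 2 actions
q_update(Q, s=0, a=0, r=1.0, s_next=1)   # Q[0, 0] moves toward the reward
```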

**************** This part still needs more thought and refinement.

5: Taking cues from the brain's system-level modular architecture

     we currently lack any engineering design principles that can explain how a complex sensing, communication, control and memory network like the brain can continuously scale in size and complexity over 500 million years while never losing the ability to adaptively function in dynamic environments.

6: Professor Zhou Zhihua likewise pointed out in a talk that the progress and achievements of deep learning to date have all occurred in closed, static environments where the data distribution, sample classes, sample attributes, and evaluation objectives are fixed. In other words, today's flourishing of deep learning rests on effective deep models, strong supervision signals, and relatively stable learning environments; the future of deep learning is full of challenges.


Appendix:

Many thanks to the pioneering researchers for their exploratory work, and many thanks to Weibo's @愛可可-愛生活 for sharing. Much appreciated!