Frontier Advances in Generative Adversarial Networks (GANs): A Roundup of Papers, Talks, Frameworks, and GitHub Resources
A generative model is one that learns from training samples in order to produce more samples like them. Among all generative models, one of the most promising is the Generative Adversarial Network (GAN). GANs are a form of unsupervised machine learning whose training can be viewed as a zero-sum game in which two neural networks compete against each other.
In 2014, Ian Goodfellow et al. first proposed GANs in the paper "Generative Adversarial Nets," marking the birth of the field.
Paper: https://arxiv.org/pdf/1406.2661v1.pdf
Slides: http://www.iangoodfellow.com/slides/2016-12-04-NIPS.pdf
Code: https://github.com/goodfeli/adversarial
Video: https://www.youtube.com/watch?v=HN9NRhm9waY
This article summarizes a series of recent advances in work on GANs.
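As a minimal illustration of the two-player zero-sum game described above, the sketch below pits a one-parameter generator against a logistic-unit discriminator on 1-D data, using alternating gradient steps on the minimax objective min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))]. This is a toy constructed for illustration, not code from any of the papers listed here; all parameter choices are arbitrary. The generator uses the non-saturating log D(G(z)) objective suggested in the original paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = z + b,
# which shifts standard-normal noise by a learned offset b.
w, c = 0.1, 0.0
b = 0.0
lr, batch = 0.05, 256

for step in range(1500):
    # Discriminator step: gradient ASCENT on E[log D(x)] + E[log(1 - D(G(z)))]
    x = rng.normal(4.0, 1.0, batch)       # real samples ~ N(4, 1)
    g = rng.normal(0.0, 1.0, batch) + b   # fake samples from the generator
    d_real = sigmoid(w * x + c)
    d_fake = sigmoid(w * g + c)
    w += lr * (np.mean((1 - d_real) * x) - np.mean(d_fake * g))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: non-saturating variant -- gradient ASCENT on
    # E[log D(G(z))] instead of descent on E[log(1 - D(G(z)))]
    g = rng.normal(0.0, 1.0, batch) + b
    d_fake = sigmoid(w * g + c)
    b += lr * np.mean((1 - d_fake) * w)   # d/db of log D(G(z)) = (1 - D) * w

print("learned generator offset:", round(b, 2))
```

Because the discriminator here is linear, it can only compare the means of the two distributions, so the sketch just shows the generator's offset drifting toward the real data's mean of 4; real GANs replace both players with deep networks, which is what makes the papers below possible.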
I. Recent Research Papers (sorted by Google Scholar citation count, descending)
- Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (DCGAN), 2015
Paper: https://arxiv.org/pdf/1511.06434v2.pdf
- Explaining and Harnessing Adversarial Examples, 2014
Paper: https://arxiv.org/pdf/1412.6572.pdf
- Semi-Supervised Learning with Deep Generative Models, 2014
Paper: https://arxiv.org/pdf/1406.5298v2.pdf
- Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, 2015
Paper: http://papers.nips.cc/paper/5773-deep-generative-image-models-using-a-laplacian-pyramid-of-adversarial-networks.pdf
- Improved Techniques for Training GANs, 2016
Paper: https://arxiv.org/pdf/1606.03498v1.pdf
- Conditional Generative Adversarial Nets, 2014
Paper: https://arxiv.org/pdf/1411.1784v1.pdf
- Generative Moment Matching Networks, 2015
Paper: http://proceedings.mlr.press/v37/li15.pdf
- Deep Multi-Scale Video Prediction Beyond Mean Square Error, 2015
Paper: https://arxiv.org/pdf/1511.05440.pdf
- Autoencoding Beyond Pixels Using a Learned Similarity Metric, 2015
Paper: https://arxiv.org/pdf/1512.09300.pdf
- Adversarial Autoencoders, 2015
Paper: https://arxiv.org/pdf/1511.05644.pdf
- InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016
Paper: https://arxiv.org/pdf/1606.03657v1.pdf
- Context Encoders: Feature Learning by Inpainting, 2016
Paper: http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Pathak_Context_Encoders_Feature_CVPR_2016_paper.pdf
- Generative Adversarial Text to Image Synthesis, 2016
Paper: http://proceedings.mlr.press/v48/reed16.pdf
- Conditional Image Generation with PixelCNN Decoders, 2016
Paper: https://arxiv.org/pdf/1606.05328.pdf
- Adversarial Feature Learning, 2016
Paper: https://arxiv.org/pdf/1605.09782.pdf
- Improving Variational Inference with Inverse Autoregressive Flow, 2016
Paper: https://papers.nips.cc/paper/6581-improving-variational-autoencoders-with-inverse-autoregressive-flow.pdf
- Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016
Paper: https://arxiv.org/pdf/1602.02697.pdf
- Attend, Infer, Repeat: Fast Scene Understanding with Generative Models, 2016
Paper: https://arxiv.org/pdf/1603.08575.pdf
- f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, 2016
Paper: https://arxiv.org/pdf/1606.00709.pdf
- Generative Visual Manipulation on the Natural Image Manifold, 2016
Paper: https://arxiv.org/pdf/1609.03552.pdf
- Training Generative Neural Networks via Maximum Mean Discrepancy Optimization, 2015
Paper: https://arxiv.org/pdf/1505.03906.pdf
- Adversarially Learned Inference, 2016
Paper: https://arxiv.org/pdf/1606.00704.pdf
- Generating Images with Recurrent Adversarial Networks, 2016
Paper: https://arxiv.org/pdf/1602.05110.pdf
- Generative Adversarial Imitation Learning, 2016
Paper: http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning.pdf
- Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling, 2016
Paper: https://arxiv.org/pdf/1610.07584.pdf
- Learning What and Where to Draw, 2016
Paper: https://arxiv.org/pdf/1610.02454v1.pdf
- Conditional Image Synthesis with Auxiliary Classifier GANs, 2016
Paper: https://arxiv.org/pdf/1610.09585.pdf
- Learning in Implicit Generative Models, 2016
Paper: https://arxiv.org/pdf/1610.03483.pdf
- VIME: Variational Information Maximizing Exploration, 2016
Paper: http://papers.nips.cc/paper/6591-vime-variational-information-maximizing-exploration.pdf
- Unrolled Generative Adversarial Networks, 2016
Paper: https://arxiv.org/pdf/1611.02163.pdf
- Towards Principled Methods for Training Generative Adversarial Networks, 2017
Paper: https://arxiv.org/pdf/1701.04862.pdf
- Neural Photo Editing with Introspective Adversarial Networks, 2016
Paper: https://arxiv.org/pdf/1609.07093.pdf
- On the Quantitative Analysis of Decoder-Based Generative Models, 2016
Paper: https://arxiv.org/pdf/1611.04273.pdf
- Connecting Generative Adversarial Networks and Actor-Critic Methods, 2016
Paper: https://arxiv.org/pdf/1610.01945.pdf
- Learning from Simulated and Unsupervised Images through Adversarial Training, 2016
Paper: https://arxiv.org/pdf/1612.07828.pdf
- Contextual RNN-GANs for Abstract Reasoning Diagram Generation, 2016
Paper: https://arxiv.org/pdf/1609.09444.pdf
- Generative Multi-Adversarial Networks, 2016
Paper: https://arxiv.org/pdf/1611.01673.pdf
- Ensembles of Generative Adversarial Networks, 2016
Paper: https://arxiv.org/pdf/1612.00991.pdf
- Improved Generator Objectives for GANs, 2016
Paper: https://arxiv.org/pdf/1612.02780.pdf
- Precise Recovery of Latent Vectors from Generative Adversarial Networks, 2017
Paper: https://openreview.net/pdf?id=HJC88BzFl
- Generative Mixture of Networks, 2017
Paper: https://arxiv.org/pdf/1702.03307.pdf
- Generative Temporal Models with Memory, 2017
Paper: https://arxiv.org/pdf/1702.04649.pdf
- Stopping GAN Violence: Generative Unadversarial Networks, 2017
Paper: https://arxiv.org/pdf/1703.02528.pdf
II. Theory
1. Improved techniques for training GANs
Link: http://papers.nips.cc/paper/6124-improved-techniques-for-training-gans.pdf
2. Energy-Based GANs and related work by Yann LeCun
Link: https://arxiv.org/pdf/1609.03126.pdf
3. Mode-regularized GANs
Link: https://arxiv.org/pdf/1612.02136.pdf
III. Talks
1. Ian Goodfellow's talks on GANs
2. Deep generative models, by Russ Salakhutdinov
Link: http://www.cs.toronto.edu/~rsalakhu/talk_Montreal_2016_Salakhutdinov.pdf
IV. Courses and Tutorials
1. NIPS 2016 tutorial: Generative Adversarial Networks
Link: https://arxiv.org/pdf/1701.00160.pdf
2. Tips and tricks for training GANs
Link: https://github.com/soumith/ganhacks
3. OpenAI on generative models
Link: https://blog.openai.com/generative-models/
4. An MNIST generative adversarial model in Keras
Link: https://oshearesearch.com/index.php/2016/07/01/mnist-generative-adversarial-model-in-keras/
5. Image completion (inpainting) with deep learning in TensorFlow
Link: http://bamos.github.io/2016/08/09/deep-completion/
V. GitHub Resources and Models
1. Deep Convolutional Generative Adversarial Networks (DCGAN)
Link: https://github.com/Newmu/dcgan_code
2. A TensorFlow implementation of DCGAN
Link: https://github.com/carpedm20/DCGAN-tensorflow
3. A Torch implementation of DCGAN
Link: https://github.com/soumith/dcgan.torch
4. A Keras implementation of DCGAN
Link: https://github.com/jacobgil/keras-dcgan
5. Generating natural images with neural networks (Facebook's Eyescream project)
Link: https://github.com/facebook/eyescream
6. Adversarial autoencoder
Link: https://github.com/musyoku/adversarial-autoencoder
7. Text-to-image synthesis using thought vectors
Link: https://github.com/paarthneekhara/text-to-image
8. Adversarial example generator
Link: https://github.com/e-lab/torch-toolbox/tree/master/Adversarial
9. Semi-supervised learning with deep generative models
Link: https://github.com/dpkingma/nips14-ssl
10. Improved techniques for training GANs
Link: https://github.com/openai/improved-gan
11. Generative Moment Matching Networks (GMMNs)
Link: https://github.com/yujiali/gmmn
12. Adversarial video generation
Link: https://github.com/dyelax/Adversarial_Video_Generation
13. Image-to-image translation with conditional adversarial networks (pix2pix)
Link: https://github.com/phillipi/pix2pix
14. Cleverhans, a library for adversarial machine learning
Link: https://github.com/openai/cleverhans
VI. Frameworks and Libraries (sorted by GitHub stars)
1. TensorFlow from Google [C++ and CUDA]
Homepage: https://www.tensorflow.org/
GitHub: https://github.com/tensorflow/tensorflow
2. Caffe from the Berkeley Vision and Learning Center (BVLC) [C++]
Homepage: http://caffe.berkeleyvision.org/
GitHub: https://github.com/BVLC/caffe
Installation guide: http://gkalliatakis.com/blog/Caffe_Installation/README.md
3. Keras by François Chollet [Python]
Homepage: https://keras.io/
GitHub: https://github.com/fchollet/keras
4. Microsoft Cognitive Toolkit (CNTK) [C++]
Homepage: https://www.microsoft.com/en-us/research/product/cognitive-toolkit/
GitHub: https://github.com/Microsoft/CNTK
5. MXNet from Amazon [C++]
Homepage: http://mxnet.io/
GitHub: https://github.com/dmlc/mxnet
6. Torch by Collobert, Kavukcuoglu & Clement Farabet, widely used at Facebook [Lua]
Homepage: http://torch.ch/
GitHub: https://github.com/torch
7. ConvNetJS by Andrej Karpathy [JavaScript]
Homepage: http://cs.stanford.edu/people/karpathy/convnetjs/
GitHub: https://github.com/karpathy/convnetjs
8. Theano from the Université de Montréal [Python]
Homepage: http://deeplearning.net/software/theano/
GitHub: https://github.com/Theano/Theano
9. Deeplearning4j from the startup Skymind [Java]
Homepage: https://deeplearning4j.org/
GitHub: https://github.com/deeplearning4j/deeplearning4j
10. Paddle from Baidu [C++]
Homepage: http://www.paddlepaddle.org/
GitHub: https://github.com/PaddlePaddle/Paddle
11. Deep Scalable Sparse Tensor Network Engine (DSSTNE) from Amazon [C++]
GitHub: https://github.com/amzn/amazon-dsstne
12. Neon from Nervana Systems [Python & Sass]
Homepage: http://neon.nervanasys.com/docs/latest/
GitHub: https://github.com/NervanaSystems/neon
13. Chainer [Python]
Homepage: http://chainer.org/
GitHub: https://github.com/pfnet/chainer
14. H2O [Java]
Homepage: https://www.h2o.ai/
GitHub: https://github.com/h2oai/h2o-3
15. Brainstorm from the Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA) [Python]
GitHub: https://github.com/IDSIA/brainstorm
16. MatConvNet by Andrea Vedaldi [Matlab]
Homepage: http://www.vlfeat.org/matconvnet/
GitHub: https://github.com/vlfeat/matconvnet
For more details, see the original article: http://gkalliatakis.com/blog/delving-deep-into-gans