A Roundup of Recent Advances in Generative Adversarial Networks (GANs): Papers, Talks, Frameworks, and GitHub Resources

A generative model is a model that learns from training samples in order to produce more samples like them. Among all generative models, the most promising are Generative Adversarial Networks (GANs). GANs are a form of unsupervised machine learning whose operation can be viewed as a zero-sum game in which two neural networks compete against each other.
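
To make this "two networks competing in a zero-sum game" idea concrete, here is a minimal sketch of one GAN training step in Keras. It is an illustrative assumption only, not code from any paper or repository listed below; the layer sizes, data dimensionality, and optimizer settings are placeholders.

    # Minimal GAN sketch (illustrative assumptions only): two networks
    # trained against each other, as described above.
    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    latent_dim = 64   # size of the generator's noise input (placeholder)
    data_dim = 784    # e.g. flattened 28x28 images (placeholder)

    # Generator: maps random noise vectors to fake samples.
    generator = keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
        layers.Dense(data_dim, activation="sigmoid"),
    ])

    # Discriminator: scores samples as real (1) or generated (0).
    discriminator = keras.Sequential([
        layers.Dense(128, activation="relu", input_shape=(data_dim,)),
        layers.Dense(1, activation="sigmoid"),
    ])
    discriminator.compile(optimizer="adam", loss="binary_crossentropy")

    # Combined model: freeze the discriminator so that only the generator is
    # updated when we train it to produce samples the discriminator calls real.
    discriminator.trainable = False
    gan = keras.Sequential([generator, discriminator])
    gan.compile(optimizer="adam", loss="binary_crossentropy")

    def train_step(real_batch):
        """One round of the zero-sum game on a batch of real data."""
        batch_size = real_batch.shape[0]
        noise = np.random.normal(size=(batch_size, latent_dim))
        fake_batch = generator.predict(noise, verbose=0)

        # 1) Discriminator step: real samples labelled 1, generated ones 0.
        discriminator.train_on_batch(real_batch, np.ones((batch_size, 1)))
        discriminator.train_on_batch(fake_batch, np.zeros((batch_size, 1)))

        # 2) Generator step: try to make the discriminator output 1 for fakes.
        gan.train_on_batch(noise, np.ones((batch_size, 1)))

Most of the repositories listed later (DCGAN, pix2pix, and so on) follow this same alternating scheme, with convolutional architectures and various stabilization tricks in place of the toy dense layers used here.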

In 2014, Ian Goodfellow et al. first proposed GANs in their paper "Generative Adversarial Nets", marking the birth of GANs.

Paper link: https://arxiv.org/pdf/1406.2661v1.pdf

Slides: http://www.iangoodfellow.com/slides/2016-12-04-NIPS.pdf

Source code: https://github.com/goodfeli/adversarial

Video: https://www.youtube.com/watch?v=HN9NRhm9waY

This post summarizes a series of recent research advances on GANs.

I. Recent research papers (sorted in descending order of Google Scholar citations)

  1. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks (DCGANs), 2015

    Paper link: https://arxiv.org/pdf/1511.06434v2.pdf

  2. Explaining and Harnessing Adversarial Examples, 2014

    Paper link: https://arxiv.org/pdf/1412.6572.pdf

  3. Semi-Supervised Learning with Deep Generative Models, 2014

    Paper link: https://arxiv.org/pdf/1406.5298v2.pdf

  4. Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, 2015

    Paper link: http://papers.nips.cc/paper/5773-deep-generative-image-models-using-a-laplacian-pyramid-of-adversarial-networks.pdf

  5. Improved Techniques for Training GANs, 2016

    Paper link: https://arxiv.org/pdf/1606.03498v1.pdf

  6. Conditional Generative Adversarial Nets, 2014

    Paper link: https://arxiv.org/pdf/1411.1784v1.pdf

  7. Generative Moment Matching Networks, 2015

    Paper link: http://proceedings.mlr.press/v37/li15.pdf

  8. Deep multi-scale video prediction beyond mean square error, 2015

    Paper link: https://arxiv.org/pdf/1511.05440.pdf

  9. Autoencoding beyond pixels using a learned similarity metric, 2015

    Paper link: https://arxiv.org/pdf/1512.09300.pdf

  10. Adversarial Autoencoders, 2015

    Paper link: https://arxiv.org/pdf/1511.05644.pdf

  11. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets, 2016

    Paper link: https://arxiv.org/pdf/1606.03657v1.pdf

  12. Context Encoders: Feature Learning by Inpainting, 2016

    Paper link: http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Pathak_Context_Encoders_Feature_CVPR_2016_paper.pdf

  13. Generative Adversarial Text to Image Synthesis, 2016

    Paper link: http://proceedings.mlr.press/v48/reed16.pdf

  14. Conditional Image Generation with PixelCNN Decoders, 2016

    Paper link: https://arxiv.org/pdf/1606.05328.pdf

  15. Adversarial Feature Learning, 2016

    Paper link: https://arxiv.org/pdf/1605.09782.pdf

  16. Improving Variational Inference with Inverse Autoregressive Flow, 2016

    Paper link: https://papers.nips.cc/paper/6581-improving-variational-autoencoders-with-inverse-autoregressive-flow.pdf

  17. Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples, 2016

    Paper link: https://arxiv.org/pdf/1602.02697.pdf

  18. Attend, infer, repeat: Fast scene understanding with generative models, 2016

    Paper link: https://arxiv.org/pdf/1603.08575.pdf

  19. f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization, 2016

    Paper link: http://papers.nips.cc/paper/6066-tagger-deep-unsupervised-perceptual-grouping.pdf

  20. Generative Visual Manipulation on the Natural Image Manifold, 2016

    Paper link: https://arxiv.org/pdf/1609.03552.pdf

  21. Training generative neural networks via Maximum Mean Discrepancy optimization, 2015

    Paper link: https://arxiv.org/pdf/1505.03906.pdf

  22. Adversarially Learned Inference, 2016

    Paper link: https://arxiv.org/pdf/1606.00704.pdf

  23. Generating images with recurrent adversarial networks, 2016

    Paper link: https://arxiv.org/pdf/1602.05110.pdf

  24. Generative Adversarial Imitation Learning, 2016

    Paper link: http://papers.nips.cc/paper/6391-generative-adversarial-imitation-learning.pdf

  25. Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling, 2016

    Paper link: https://arxiv.org/pdf/1610.07584.pdf

  26. Learning What and Where to Draw, 2016

    Paper link: https://arxiv.org/pdf/1610.02454v1.pdf

  27. Conditional Image Synthesis with Auxiliary Classifier GANs, 2016

    Paper link: https://arxiv.org/pdf/1610.09585.pdf

  28. Learning in Implicit Generative Models, 2016

    Paper link: https://arxiv.org/pdf/1610.03483.pdf

  29. VIME: Variational Information Maximizing Exploration, 2016

    Paper link: http://papers.nips.cc/paper/6591-vime-variational-information-maximizing-exploration.pdf

  30. Unrolled Generative Adversarial Networks, 2016

    Paper link: https://arxiv.org/pdf/1611.02163.pdf

  31. Towards Principled Methods for Training Generative Adversarial Networks, 2017

    Paper link: https://arxiv.org/pdf/1701.04862.pdf

  32. Neural Photo Editing with Introspective Adversarial Networks, 2016

    Paper link: https://arxiv.org/pdf/1609.07093.pdf

  33. On the Quantitative Analysis of Decoder-Based Generative Models, 2016

    Paper link: https://arxiv.org/pdf/1611.04273.pdf

  34. Connecting Generative Adversarial Networks and Actor-Critic Methods, 2016

    Paper link: https://arxiv.org/pdf/1610.01945.pdf

  35. Learning from Simulated and Unsupervised Images through Adversarial Training, 2016

    Paper link: https://arxiv.org/pdf/1612.07828.pdf

  36. Contextual RNN-GANs for Abstract Reasoning Diagram Generation, 2016

    Paper link: https://arxiv.org/pdf/1609.09444.pdf

  37. Generative Multi-Adversarial Networks, 2016

    Paper link: https://arxiv.org/pdf/1611.01673.pdf

  38. Ensembles of Generative Adversarial Networks, 2016

    Paper link: https://arxiv.org/pdf/1612.00991.pdf

  39. Improved generator objectives for GANs, 2016

    Paper link: https://arxiv.org/pdf/1612.02780.pdf

  40. Precise Recovery of Latent Vectors from Generative Adversarial Networks, 2017

    Paper link: https://openreview.net/pdf?id=HJC88BzFl

  41. Generative Mixture of Networks, 2017

    Paper link: https://arxiv.org/pdf/1702.03307.pdf

  42. Generative Temporal Models with Memory, 2017

    Paper link: https://arxiv.org/pdf/1702.04649.pdf

  43. Stopping GAN Violence: Generative Unadversarial Networks, 2017

    Paper link: https://arxiv.org/pdf/1703.02528.pdf

II. Theory

1. Techniques for training GANs

Link: http://papers.nips.cc/paper/6124-improved-techniques-for-training-gans.pdf

2. Energy-Based GANs and related work by Yann LeCun

Link: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

3. Mode Regularized GANs

Link: https://arxiv.org/pdf/1612.02136.pdf

III. Talks

1. Ian Goodfellow's talk on GANs

2. Russ Salakhutdinov's talk on deep generative models

Link: http://www.cs.toronto.edu/~rsalakhu/talk_Montreal_2016_Salakhutdinov.pdf

IV. Courses / Tutorials

1. NIPS 2016 tutorial: Generative Adversarial Networks

Link: https://arxiv.org/pdf/1701.00160.pdf

2. Tips and tricks for training GANs

Link: https://github.com/soumith/ganhacks

3. OpenAI's overview of generative models

Link: https://blog.openai.com/generative-models/

4. A generative adversarial model for MNIST in Keras

Link: https://oshearesearch.com/index.php/2016/07/01/mnist-generative-adversarial-model-in-keras/

5. Image completion with deep learning in TensorFlow

Link: http://bamos.github.io/2016/08/09/deep-completion/

V. GitHub resources and models

1. Deep Convolutional Generative Adversarial Networks (DCGAN)

Link: https://github.com/Newmu/dcgan_code

2. A TensorFlow implementation of DCGAN

Link: https://github.com/carpedm20/DCGAN-tensorflow

3. A Torch implementation of DCGAN

Link: https://github.com/soumith/dcgan.torch

4. A Keras implementation of DCGAN

Link: https://github.com/jacobgil/keras-dcgan

5. Generating natural images with neural networks (Facebook's Eyescream project)

Link: https://github.com/facebook/eyescream

6. Adversarial Autoencoder

Link: https://github.com/musyoku/adversarial-autoencoder

7. Text-to-image synthesis using thought vectors

Link: https://github.com/paarthneekhara/text-to-image

8. Adversarial example generator

Link: https://github.com/e-lab/torch-toolbox/tree/master/Adversarial

9. Semi-supervised learning with deep generative models

Link: https://github.com/dpkingma/nips14-ssl

10. Improved techniques for training GANs

Link: https://github.com/openai/improved-gan

11. Generative Moment Matching Networks (GMMNs)

Link: https://github.com/yujiali/gmmn

12. Adversarial video generation

Link: https://github.com/dyelax/Adversarial_Video_Generation

13. Image-to-image translation with conditional adversarial networks (pix2pix)

Link: https://github.com/phillipi/pix2pix

14. Cleverhans, a library for adversarial machine learning

Link: https://github.com/openai/cleverhans

VI. Frameworks and libraries (sorted by number of GitHub stars)

  1. Google's TensorFlow [C++ and CUDA]

    Homepage: https://www.tensorflow.org/

    GitHub: https://github.com/tensorflow/tensorflow

  2. Caffe, from the Berkeley Vision and Learning Center (BVLC) [C++]

    Homepage: http://caffe.berkeleyvision.org/

    GitHub: https://github.com/BVLC/caffe

    Installation guide: http://gkalliatakis.com/blog/Caffe_Installation/README.md

  3. François Chollet's Keras [Python]

    Homepage: https://keras.io/

    GitHub: https://github.com/fchollet/keras

  4. Microsoft Cognitive Toolkit - CNTK [C++]

    Homepage: https://www.microsoft.com/en-us/research/product/cognitive-toolkit/

    GitHub: https://github.com/Microsoft/CNTK

  5. Amazon's MXNet [C++]

    Homepage: http://mxnet.io/

    GitHub: https://github.com/dmlc/mxnet

  6. Torch, by Collobert, Kavukcuoglu & Clement Farabet, widely used at Facebook [Lua]

    Homepage: http://torch.ch/

    GitHub: https://github.com/torch

  7. Andrej Karpathy's ConvNetJS [JavaScript]

    Homepage: http://cs.stanford.edu/people/karpathy/convnetjs/

    GitHub: https://github.com/karpathy/convnetjs

  8. Theano, from the Université de Montréal [Python]

    Homepage: http://deeplearning.net/software/theano/

    GitHub: https://github.com/Theano/Theano

  9. Deeplearning4j, from the startup Skymind [Java]

    Homepage: https://deeplearning4j.org/

    GitHub: https://github.com/deeplearning4j/deeplearning4j

  10. Baidu's Paddle [C++]

    Homepage: http://www.paddlepaddle.org/

    GitHub: https://github.com/PaddlePaddle/Paddle

  11. Amazon's Deep Scalable Sparse Tensor Network Engine (DSSTNE) [C++]

    GitHub: https://github.com/amzn/amazon-dsstne

  12. Neon, from Nervana Systems [Python & Sass]

    Homepage: http://neon.nervanasys.com/docs/latest/

    GitHub: https://github.com/NervanaSystems/neon

  13. Chainer [Python]

    Homepage: http://chainer.org/

    GitHub: https://github.com/pfnet/chainer

  14. h2o [Java]

    Homepage: https://www.h2o.ai/

    GitHub: https://github.com/h2oai/h2o-3

  15. Brainstorm, from the Istituto Dalle Molle di Studi sull'Intelligenza Artificiale (IDSIA) [Python]

    GitHub: https://github.com/IDSIA/brainstorm

  16. MatConvNet, by Andrea Vedaldi [Matlab]

    Homepage: http://www.vlfeat.org/matconvnet/

    GitHub: https://github.com/vlfeat/matconvnet

For more details, see the original post: http://gkalliatakis.com/blog/delving-deep-into-gans