
A Compilation of Learning Resources for Deep-Learning-Based Computer Vision (in English)

Reposted from: http://www.open-open.com/lib/view/open1452776149855.html

Awesome Deep Vision 

A curated list of deep learning resources for computer vision, inspired by awesome-php and awesome-computer-vision.

We are looking for a maintainer! Let me know ( [email protected] ) if interested.

Contributing

Please feel free to submit pull requests to add papers.


Table of Contents

  • Papers
    • ImageNet Classification
    • Object Detection
    • Object Tracking
    • Low-Level Vision
      • Super-Resolution
      • Other Applications
    • Edge Detection
    • Semantic Segmentation
    • Visual Attention and Saliency
    • Object Recognition
    • Understanding CNN
    • Image and Language
      • Image Captioning
      • Video Captioning
      • Question Answering
    • Other Topics
  • Courses
  • Books
  • Videos
  • Software
    • Framework
    • Applications
  • Tutorials
  • Blogs

Papers

ImageNet Classification

 (from Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012.)

  • Microsoft (PReLU/Weight Initialization) [Paper] (see the sketch after this list)
    • Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, arXiv:1502.01852.
  • Batch Normalization [Paper]
    • Sergey Ioffe, Christian Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, arXiv:1502.03167.
  • GoogLeNet [Paper]
    • Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, Going Deeper with Convolutions, CVPR, 2015.
  • VGG-Net [Web] [Paper]
    • Karen Simonyan and Andrew Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, ICLR, 2015.
  • AlexNet [Paper]
    • Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012.
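
For quick reference, below is a minimal NumPy sketch of the two techniques cited above: the PReLU activation (He et al.) and the batch-normalization transform (Ioffe & Szegedy). It is an illustrative re-implementation, not the authors' code; the variable names and the scalar PReLU slope are simplifications.

```python
# Illustrative sketch only (not from the cited papers' code).
import numpy as np

def prelu(x, a=0.25):
    """PReLU: identity for positive inputs, slope `a` for negative inputs.
    In the paper the slope is learned per channel; a scalar is used here."""
    return np.where(x > 0, x, a * x)

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch (axis 0) to zero mean and unit variance,
    then scale and shift with the learned parameters gamma and beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.random.randn(8, 4)    # toy mini-batch: 8 examples, 4 features
print(prelu(x).shape, batch_norm(x).shape)
```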

Object Detection

 (from Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.)

  • OverFeat, NYU [Paper]
    • Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun, OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, ICLR, 2014.
  • R-CNN, UC Berkeley [Paper-CVPR14] [Paper-arXiv14]
    • Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, CVPR, 2014.
  • SPP, Microsoft Research [Paper]
    • Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV, 2014.
  • Fast R-CNN, Microsoft Research [Paper]
    • Ross Girshick, Fast R-CNN, arXiv:1504.08083.
  • Faster R-CNN, Microsoft Research [Paper]
    • Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.
  • R-CNN minus R, Oxford [Paper]
    • Karel Lenc, Andrea Vedaldi, R-CNN minus R, arXiv:1506.06981.
  • End-to-end people detection in crowded scenes [Paper]
    • Russell Stewart, Mykhaylo Andriluka, End-to-end people detection in crowded scenes, arXiv:1506.04878.
  • You Only Look Once: Unified, Real-Time Object Detection [Paper]
    • Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi, You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640
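
All of the detectors above score candidate boxes by their overlap with ground truth, and the same measure, intersection-over-union (IoU), drives non-maximum suppression at test time. A minimal plain-Python sketch, assuming the common [x1, y1, x2, y2] box convention (not taken from any of the cited codebases):

```python
# Illustrative IoU computation for two axis-aligned boxes [x1, y1, x2, y2].
def iou(box_a, box_b):
    xa1, ya1, xa2, ya2 = box_a
    xb1, yb1, xb2, yb2 = box_b
    # Intersection rectangle (clamped to zero area if the boxes do not overlap).
    ix1, iy1 = max(xa1, xb1), max(ya1, yb1)
    ix2, iy2 = min(xa2, xb2), min(ya2, yb2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (xa2 - xa1) * (ya2 - ya1)
    area_b = (xb2 - xb1) * (yb2 - yb1)
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))  # 25 / 175 = 0.1428...
```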

Object Tracking

  • Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han, Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network, arXiv:1502.06796. [Paper]
  • Hanxi Li, Yi Li and Fatih Porikli, DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking, BMVC, 2014. [Paper]
  • N Wang, DY Yeung, Learning a Deep Compact Image Representation for Visual Tracking, NIPS, 2013. [Paper]
  • Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang, "Hierarchical Convolutional Features for Visual Tracking", ICCV, 2015. [GitHub]
  • Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu, "Visual Tracking with Fully Convolutional Networks", ICCV, 2015. [GitHub] [Paper]

Low-Level Vision

Super-Resolution

  • Super-Resolution (SRCNN) [Web] [Paper-ECCV14] [Paper-arXiv15]
    • Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.
    • Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.
  • Very Deep Super-Resolution
    • Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Accurate Image Super-Resolution Using Very Deep Convolutional Networks, arXiv:1511.04587, 2015. [Paper]
  • Deeply-Recursive Convolutional Network
    • Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015. [Paper]
  • Others
    • Osendorfer, Christian, Hubert Soyer, and Patrick van der Smagt, Image Super-Resolution with Fast Approximate Convolutional Sparse Coding, ICONIP, 2014. [Paper ICONIP-2014]

Other Applications

  • Optical Flow (FlowNet) [Paper]
    • Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.
  • Compression Artifacts Reduction [Paper-arXiv15]
    • Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.
  • Blur Removal
    • Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf, Learning to Deblur, arXiv:1406.7444 [Paper]
    • Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015 [Paper]
  • Image Deconvolution [Web] [Paper]
    • Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.
  • Deep Edge-Aware Filter [Paper]
    • Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.
  • Computing the Stereo Matching Cost with a Convolutional Neural Network [Paper]
    • Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.

Edge Detection

 (from Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.)

  • Holistically-Nested Edge Detection [Paper]
    • Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection, arXiv:1504.06375.
  • DeepEdge [Paper]
    • Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.
  • DeepContour [Paper]
    • Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015.

Semantic Segmentation

 (from Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.)

  • PASCAL VOC2012 Challenge Leaderboard (02 Dec. 2015) (from the PASCAL VOC2012 leaderboards)
  • Adelaide
    • Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. [Paper] (1st ranked in VOC2012)
    • Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1508.02108. [Paper] (4th ranked in VOC2012)
  • Deep Parsing Network (DPN)
    • Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 [Paper] (2nd ranked in VOC 2012)
  • CentraleSuperBoundaries, INRIA [Paper]
    • Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning, arXiv:1411.07386 (4th ranked in VOC 2012)
  • BoxSup [Paper]
    • Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (6th ranked in VOC2012)
  • POSTECH
    • Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. [Paper] (7th ranked in VOC2012)
    • Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924. [Paper]
  • Conditional Random Fields as Recurrent Neural Networks [Paper]
    • Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)
  • DeepLab
    • Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly- and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. [Paper] (9th ranked in VOC2012)
  • Zoom-out [Paper]
    • Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015
  • Joint Calibration [Paper]
    • Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.
  • Fully Convolutional Networks for Semantic Segmentation [Paper-CVPR15] [Paper-arXiv15]
    • Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.
  • Hypercolumn [Paper]
    • Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.
  • Deep Hierarchical Parsing
    • Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015. [Paper]
  • Learning Hierarchical Features for Scene Labeling [Paper-ICML12] [Paper-PAMI13]
    • Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.
    • Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.
  • University of Cambridge [Web]
    • Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." arXiv preprint arXiv:1511.00561, 2015. [Paper]
    • Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015. [Paper]
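
A common thread in the systems above is the output stage: the network produces a per-class score map, a per-pixel argmax turns it into a label map, and the result is resized to the input resolution. A minimal NumPy sketch of that stage, with nearest-neighbor upsampling standing in for the learned or bilinear upsampling that FCN-style models use (shapes are illustrative, not from any cited codebase):

```python
import numpy as np

def scores_to_labels(scores, out_h, out_w):
    """scores: (num_classes, h, w) array of per-class scores."""
    labels = scores.argmax(axis=0)                   # (h, w) per-pixel class labels
    # Nearest-neighbor upsampling to (out_h, out_w).
    rows = np.arange(out_h) * labels.shape[0] // out_h
    cols = np.arange(out_w) * labels.shape[1] // out_w
    return labels[rows[:, None], cols[None, :]]

scores = np.random.randn(21, 32, 32)                 # e.g. 21 PASCAL VOC classes
print(scores_to_labels(scores, 256, 256).shape)      # (256, 256)
```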

Visual Attention and Saliency

 (from Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu, Predicting Eye Fixations using Convolutional Neural Networks, CVPR, 2015.)

  • Mr-CNN [Paper]
    • Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu, Predicting Eye Fixations using Convolutional Neural Networks, CVPR, 2015.
  • Learning a Sequential Search for Landmarks [Paper]
    • Saurabh Singh, Derek Hoiem, David Forsyth, Learning a Sequential Search for Landmarks, CVPR, 2015.
  • Multiple Object Recognition with Visual Attention [Paper]
    • Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu, Multiple Object Recognition with Visual Attention, ICLR, 2015.
  • Recurrent Models of Visual Attention [Paper]
    • Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu, Recurrent Models of Visual Attention, NIPS, 2014.

Object Recognition

  • Weakly-supervised learning with convolutional neural networks [Paper]
    • Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic, Is object localization for free? – Weakly-supervised learning with convolutional neural networks, CVPR, 2015.
  • FV-CNN [Paper]
    • Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi, Deep Filter Banks for Texture Recognition and Segmentation, CVPR, 2015.

Understanding CNN

 (from Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015.)

  • Equivariance and Equivalence of Representations [Paper]
    • Karel Lenc, Andrea Vedaldi, Understanding image representations by measuring their equivariance and equivalence, CVPR, 2015.
  • Deep Neural Networks Are Easily Fooled [Paper]
    • Anh Nguyen, Jason Yosinski, Jeff Clune, Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, CVPR, 2015.
  • Understanding Deep Image Representations by Inverting Them [Paper]
    • Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015.
  • Object Detectors Emerge in Deep Scene CNNs [Paper]
    • Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, Object Detectors Emerge in Deep Scene CNNs, ICLR, 2015.
  • Inverting Convolutional Networks with Convolutional Networks
    • Alexey Dosovitskiy, Thomas Brox, Inverting Convolutional Networks with Convolutional Networks, arXiv, 2015. [Paper]
  • Visualizing and Understanding CNN
    • Matthew Zeiler, Rob Fergus, Visualizing and Understanding Convolutional Networks, ECCV, 2014. [Paper]

Image and Language

Image Captioning

 (from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.)

  • UCLA / Baidu [Paper]
    • Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, Explain Images with Multimodal Recurrent Neural Networks, arXiv:1410.1090.
  • Toronto [Paper]
    • Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models, arXiv:1411.2539.
  • Berkeley [Paper]
    • Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, arXiv:1411.4389.
  • Google [Paper]
    • Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555.
  • Stanford [Web] [Paper]
    • Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.
  • UML / UT [Paper]
    • Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, NAACL-HLT, 2015.
  • CMU / Microsoft [Paper-arXiv] [Paper-CVPR]
    • Xinlei Chen, C. Lawrence Zitnick, Learning a Recurrent Visual Representation for Image Caption Generation, arXiv:1411.5654.
    • Xinlei Chen, C. Lawrence Zitnick, Mind’s Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015
  • Microsoft [Paper]
    • Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, From Captions to Visual Concepts and Back, CVPR, 2015.
  • Univ. Montreal / Univ. Toronto [Web] [Paper]
    • Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, arXiv:1502.03044 / ICML 2015
  • Idiap / EPFL / Facebook [Paper]
    • Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, Phrase-based Image Captioning, arXiv:1502.03671 / ICML 2015
  • UCLA / Baidu [Paper]
    • Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images, arXiv:1504.06692
  • MS + Berkeley
    • Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning, arXiv:1505.04467 [Paper]
    • Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, Language Models for Image Captioning: The Quirks and What Works, arXiv:1505.01809 [Paper]
  • Adelaide [Paper]
    • Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, Image Captioning with an Intermediate Attributes Layer, arXiv:1506.01144
  • Tilburg [Paper]
    • Grzegorz Chrupala, Akos Kadar, Afra Alishahi, Learning language through pictures, arXiv:1506.03694
  • Univ. Montreal [Paper]
    • Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053
  • Cornell [Paper]
    • Jack Hessel, Nicolas Savva, Michael J. Wilber, Image Representations and New Domains in Neural Image Captioning, arXiv:1508.02091

Video Captioning

  • Berkeley [Web] [Paper]
    • Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.
  • UT / UML / Berkeley [Paper]
    • Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.
  • Microsoft [Paper]
    • Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.
  • UT / Berkeley / UML [Paper]
    • Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence--Video to Text, arXiv:1505.00487.
  • Univ. Montreal / Univ. Sherbrooke [Paper]
    • Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029
  • MPI / Berkeley [Paper]
    • Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698
  • Univ. Toronto / MIT [Paper]
    • Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724
  • Univ. Montreal [Paper]
    • Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053

Question Answering

 (from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop)

  • Virginia Tech / MSR [Web] [Paper]
    • Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop.
  • MPI / Berkeley [Web] [Paper]
    • Mateusz Malinowski, Marcus Rohrbach, Mario Fritz, Ask Your Neurons: A Neural-based Approach to Answering Questions about Images, arXiv:1505.01121.
  • Toronto [Paper] [Dataset]
    • Mengye Ren, Ryan Kiros, Richard Zemel, Image Question Answering: A Visual Semantic Embedding Model and a New Dataset, arXiv:1505.02074 / ICML 2015 deep learning workshop.
  • Baidu / UCLA [Paper] [Dataset]
    • Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu, Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering, arXiv:1505.05612.

Other Topics

  • Surface Normal Estimation [Paper]
    • Xiaolong Wang, David F. Fouhey, Abhinav Gupta, Designing Deep Networks for Surface Normal Estimation, CVPR, 2015.
  • Action Detection [Paper]
    • Georgia Gkioxari, Jitendra Malik, Finding Action Tubes, CVPR, 2015.
  • Crowd Counting [Paper]
    • Cong Zhang, Hongsheng Li, Xiaogang Wang, Xiaokang Yang, Cross-scene Crowd Counting via Deep Convolutional Neural Networks, CVPR, 2015.
  • 3D Shape Retrieval [Paper]
    • Fang Wang, Le Kang, Yi Li, Sketch-based 3D Shape Retrieval using Convolutional Neural Networks, CVPR, 2015.
  • Generate Image [Paper]
    • Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, Learning to Generate Chairs with Convolutional Neural Networks, CVPR, 2015.
  • Generate Image with Adversarial Network
    • Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Generative Adversarial Networks, NIPS, 2014. [Paper]
    • Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus, Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, NIPS, 2015. [Paper]
  • Artistic Style [Paper] [Code] (see the Gram-matrix sketch after this list)
    • Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, A Neural Algorithm of Artistic Style, arXiv:1508.06576.
  • Human Gaze Estimation
    • Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, Appearance-Based Gaze Estimation in the Wild, CVPR, 2015. [Paper] [Website]
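
The artistic-style entry above represents the style of an image through Gram matrices of CNN feature maps, i.e. the channel-to-channel correlations of a layer's activations. A minimal NumPy sketch, with a random array standing in for a real convolutional feature map; normalizing by the number of spatial positions is one common convention, not necessarily the paper's exact scaling:

```python
import numpy as np

def gram_matrix(features):
    """features: (channels, height, width) activation of one CNN layer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)      # flatten the spatial dimensions
    return f @ f.T / (h * w)            # (c, c) channel-correlation matrix

feat = np.random.randn(64, 28, 28)      # stand-in for a conv-layer activation
print(gram_matrix(feat).shape)          # (64, 64)
```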

Courses

Books

Videos

Software

Framework

  • Torch7: Deep learning library in Lua, used by Facebook and Google DeepMind [Web]
  • Caffe: Deep learning framework by the BVLC [Web]
  • Theano: Mathematical library in Python, maintained by the LISA lab [Web] (see the short example after this list)
  • MatConvNet: CNNs for MATLAB [Web]
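
To give a flavor of what these frameworks provide, here is a minimal Theano example (the Python entry in the list above): it declares a symbolic one-layer softmax classifier and compiles it into a callable function. Torch7, Caffe, and MatConvNet offer analogous layer-based workflows in Lua, protobuf/C++, and MATLAB. The snippet is purely illustrative and is not tied to any paper above.

```python
import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')                                     # symbolic input: a batch of feature vectors
W = theano.shared(np.random.randn(4, 3), name='W')    # weights of a single linear layer
b = theano.shared(np.zeros(3), name='b')              # biases
y = T.nnet.softmax(T.dot(x, W) + b)                   # one-layer softmax classifier

predict = theano.function([x], y)                     # compile the symbolic graph
print(predict(np.random.randn(2, 4)).shape)           # (2, 3)
```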

Applications

  • Adversarial Training
    • Code and hyperparameters for the paper "Generative Adversarial Networks" [Web]
  • Understanding and Visualizing
    • Source code for "Understanding Deep Image Representations by Inverting Them," CVPR, 2015. [Web]
  • Semantic Segmentation
    • Source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation," CVPR, 2014. [Web]
    • Source code for the paper "Fully Convolutional Networks for Semantic Segmentation," CVPR, 2015. [Web]
  • Super-Resolution
    • Image Super-Resolution for Anime-Style Art [Web]
  • Edge Detection
    • Source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection," CVPR, 2015. [Web]

Tutorials

Blogs
