Updating conversion, creating readmes

This commit is contained in:
Jonas Zeunert
2024-04-19 23:37:46 +02:00
parent 3619ac710a
commit 08e75b0f0a
635 changed files with 30878 additions and 37344 deletions


@@ -1,18 +1,15 @@
 Awesome Deep Vision !Awesome (https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg) (https://github.com/sindresorhus/awesome)
A curated list of deep learning resources for computer vision, inspired by awesome-php (https://github.com/ziadoz/awesome-php) and awesome-computer-vision (https://github.com/jbhuang0604/awesome-computer-vision).
Maintainers - Jiwon Kim (https://github.com/kjw0612), Heesoo Myeong (https://github.com/hmyeong), Myungsub Choi (https://github.com/myungsub), Jung Kwon Lee (https://github.com/deruci), Taeksoo Kim (https://github.com/jazzsaxmafia)
The project is not actively maintained. 
Contributing
Please feel free to submit pull requests (https://github.com/kjw0612/awesome-deep-vision/pulls) to add papers.
!Join the chat at https://gitter.im/kjw0612/awesome-deep-vision (https://badges.gitter.im/Join%20Chat.svg) (https://gitter.im/kjw0612/awesome-deep-vision?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
Sharing
Share on Twitter (http://twitter.com/home?status=http://jiwonkim.org/awesome-deep-vision%0ADeep Learning Resources for Computer Vision)
@@ -87,8 +84,8 @@
  ⟡ Karel Lenc, Andrea Vedaldi, R-CNN minus R, arXiv:1506.06981.
⟡ End-to-end people detection in crowded scenes Paper  (http://arxiv.org/abs/1506.04878)
  ⟡ Russell Stewart, Mykhaylo Andriluka, End-to-end people detection in crowded scenes, arXiv:1506.04878.
⟡ You Only Look Once: Unified, Real-Time Object Detection Paper  (http://arxiv.org/abs/1506.02640), Paper Version 2  (https://arxiv.org/abs/1612.08242), C Code  (https://github.com/pjreddie/darknet), Tensorflow Code  (https://github.com/thtrieu/darkflow)
  ⟡ Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi, You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640
  ⟡ Joseph Redmon, Ali Farhadi (Version 2)
⟡ Inside-Outside Net Paper  (http://arxiv.org/abs/1512.04143)
@@ -109,13 +106,11 @@
Object Tracking
⟡ Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han, Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network, arXiv:1502.06796. Paper  (http://arxiv.org/pdf/1502.06796)
⟡ Hanxi Li, Yi Li and Fatih Porikli, DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking, BMVC, 2014. Paper  (http://www.bmva.org/bmvc/2014/files/paper028.pdf)
⟡ N Wang, DY Yeung, Learning a Deep Compact Image Representation for Visual Tracking, NIPS, 2013. Paper  (http://winsty.net/papers/dlt.pdf)
⟡ Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang, Hierarchical Convolutional Features for Visual Tracking, ICCV 2015 Paper (http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Ma_Hierarchical_Convolutional_Features_ICCV_2015_paper.pdf) Code (https://github.com/jbhuang0604/CF2)
⟡ Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu, Visual Tracking with fully Convolutional Networks, ICCV 2015 Paper (http://202.118.75.4/lu/Paper/ICCV2015/iccv15_lijun.pdf) Code (https://github.com/scott89/FCNT)
⟡ Hyeonseob Nam and Bohyung Han, Learning Multi-Domain Convolutional Neural Networks for Visual Tracking, Paper (http://arxiv.org/pdf/1510.07945.pdf) Code (https://github.com/HyeonseobNam/MDNet) Project Page (http://cvlab.postech.ac.kr/research/mdnet/)
@@ -126,8 +121,7 @@
  ⟡ Sven Behnke: Learning Iterative Image Reconstruction. IJCAI, 2001. Paper  (http://www.ais.uni-bonn.de/behnke/papers/ijcai01.pdf)
  ⟡ Sven Behnke: Learning Iterative Image Reconstruction in the Neural Abstraction Pyramid. International Journal of Computational Intelligence and Applications, vol. 1, no. 4, pp. 427-438, 2001. Paper  (http://www.ais.uni-bonn.de/behnke/papers/ijcia01.pdf)
⟡ Super-Resolution (SRCNN) Web  (http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html) Paper-ECCV14  (http://personal.ie.cuhk.edu.hk/~ccloy/files/eccv_2014_deepresolution.pdf) Paper-arXiv15  (http://arxiv.org/pdf/1501.00092.pdf)
  ⟡ Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.
  ⟡ Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.
⟡ Very Deep Super-Resolution
@@ -135,22 +129,20 @@
⟡ Deeply-Recursive Convolutional Network
  ⟡ Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015. Paper  (http://arxiv.org/abs/1511.04491)
⟡ Cascade-Sparse-Coding-Network
  ⟡ Zhaowen Wang, Ding Liu, Wei Han, Jianchao Yang and Thomas S. Huang, Deep Networks for Image Super-Resolution with Sparse Prior. ICCV, 2015. Paper  (http://www.ifp.illinois.edu/~dingliu2/iccv15/iccv15.pdf) Code  (http://www.ifp.illinois.edu/~dingliu2/iccv15/)
⟡ Perceptual Losses for Super-Resolution
  ⟡ Justin Johnson, Alexandre Alahi, Li Fei-Fei, Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016. Paper  (http://arxiv.org/abs/1603.08155) Supplementary  (http://cs.stanford.edu/people/jcjohns/papers/fast-style/fast-style-supp.pdf)
⟡ SRGAN
  ⟡ Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, arXiv:1609.04802v3, 2016. Paper  (https://arxiv.org/pdf/1609.04802v3.pdf)
⟡ Others
  ⟡ Osendorfer, Christian, Hubert Soyer, and Patrick van der Smagt, Image Super-Resolution with Fast Approximate Convolutional Sparse Coding, ICONIP, 2014. Paper ICONIP-2014  (http://brml.org/uploads/tx_sibibtex/281.pdf)
Other Applications
⟡ Optical Flow (FlowNet) Paper  (http://arxiv.org/pdf/1504.06852)
  ⟡ Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.
⟡ Compression Artifacts Reduction Paper-arXiv15  (http://arxiv.org/pdf/1504.06993)
  ⟡ Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.
⟡ Blur Removal
@@ -185,15 +177,13 @@
 !VOC2012_top_rankings (https://cloud.githubusercontent.com/assets/3803777/18164608/c3678488-7038-11e6-9ec1-74a1542dce13.png)
 (from PASCAL VOC2012 leaderboards (http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=6))
⟡ SEC: Seed, Expand and Constrain
  ⟡ Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016. Paper  (http://pub.ist.ac.at/~akolesnikov/files/ECCV2016/main.pdf) Code  (https://github.com/kolesman/SEC)
⟡ Adelaide
  ⟡ Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. Paper  (http://arxiv.org/pdf/1504.01013) (1st ranked in VOC2012)
  ⟡ Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1506.02108. Paper  (http://arxiv.org/pdf/1506.02108) (4th ranked in VOC2012)
⟡ Deep Parsing Network (DPN)
  ⟡ Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 Paper  (http://arxiv.org/pdf/1509.02634.pdf) (2nd ranked in VOC 2012)
⟡ CentraleSuperBoundaries, INRIA Paper  (http://arxiv.org/pdf/1511.07386)
  ⟡ Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning, arXiv:1511.07386 (4th ranked in VOC 2012)
⟡ BoxSup Paper  (http://arxiv.org/pdf/1503.01640)
@@ -201,14 +191,12 @@
⟡ POSTECH
  ⟡ Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. Paper  (http://arxiv.org/pdf/1505.04366) (7th ranked in VOC2012)
  ⟡ Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924. Paper  (http://arxiv.org/pdf/1506.04924)
  ⟡ Seunghoon Hong, Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, arXiv:1512.07928 Paper (http://arxiv.org/pdf/1512.07928.pdf) Project Page (http://cvlab.postech.ac.kr/research/transfernet/)
⟡ Conditional Random Fields as Recurrent Neural Networks Paper  (http://arxiv.org/pdf/1502.03240)
  ⟡ Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)
⟡ DeepLab
  ⟡ Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly- and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. Paper  (http://arxiv.org/pdf/1502.02734) (9th ranked in VOC2012)
⟡ Zoom-out Paper  (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mostajabi_Feedforward_Semantic_Segmentation_2015_CVPR_paper.pdf)
  ⟡ Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015
⟡ Joint Calibration Paper  (http://arxiv.org/pdf/1507.01581)
@@ -225,10 +213,9 @@
  ⟡ Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.
  ⟡ Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.
⟡ University of Cambridge Web  (http://mi.eng.cam.ac.uk/projects/segnet/)
  ⟡ Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." arXiv preprint arXiv:1511.00561, 2015. Paper  (http://arxiv.org/abs/1511.00561)
⟡ Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015. Paper  (http://arxiv.org/abs/1511.02680)
⟡ Princeton
  ⟡ Fisher Yu, Vladlen Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions", ICLR 2016, Paper (http://arxiv.org/pdf/1511.07122v2.pdf) 
⟡ Univ. of Washington, Allen AI
@@ -293,8 +280,7 @@
⟡ Toronto Paper  (http://arxiv.org/pdf/1411.2539)
  ⟡ Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models, arXiv:1411.2539.
⟡ Berkeley Paper  (http://arxiv.org/pdf/1411.4389)
  ⟡ Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, arXiv:1411.4389.
⟡ Google Paper  (http://arxiv.org/pdf/1411.4555)
  ⟡ Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555.
⟡ Stanford Web  (http://cs.stanford.edu/people/karpathy/deepimagesent/) Paper  (http://cs.stanford.edu/people/karpathy/cvpr2015.pdf)
@@ -305,19 +291,17 @@
  ⟡ Xinlei Chen, C. Lawrence Zitnick, Learning a Recurrent Visual Representation for Image Caption Generation, arXiv:1411.5654.
  ⟡ Xinlei Chen, C. Lawrence Zitnick, Mind's Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR 2015
⟡ Microsoft Paper  (http://arxiv.org/pdf/1411.4952)
  ⟡ Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, From Captions to Visual Concepts and Back, CVPR, 2015.
⟡ Univ. Montreal / Univ. Toronto Web (http://kelvinxu.github.io/projects/capgen.html) Paper (http://www.cs.toronto.edu/~zemel/documents/captionAttn.pdf) 
  ⟡ Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, Show, Attend, and Tell: Neural Image Caption Generation with Visual Attention, arXiv:1502.03044 / ICML 2015
⟡ Idiap / EPFL / Facebook Paper (http://arxiv.org/pdf/1502.03671) 
  ⟡ Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, Phrase-based Image Captioning, arXiv:1502.03671 / ICML 2015
⟡ UCLA / Baidu Paper (http://arxiv.org/pdf/1504.06692) 
  ⟡ Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images, arXiv:1504.06692
⟡ MS + Berkeley
  ⟡ Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning, arXiv:1505.04467 Paper (http://arxiv.org/pdf/1505.04467.pdf) 
  ⟡ Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, Language Models for Image Captioning: The Quirks and What Works, arXiv:1505.01809 Paper (http://arxiv.org/pdf/1505.01809.pdf)
⟡ Adelaide Paper (http://arxiv.org/pdf/1506.01144.pdf) 
  ⟡ Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, Image Captioning with an Intermediate Attributes Layer, arXiv:1506.01144
⟡ Tilburg Paper (http://arxiv.org/pdf/1506.03694.pdf) 
@@ -332,8 +316,7 @@
Video Captioning
⟡ Berkeley Web  (http://jeffdonahue.com/lrcn/) Paper  (http://arxiv.org/pdf/1411.4389.pdf)
  ⟡ Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.
⟡ UT / UML / Berkeley Paper  (http://arxiv.org/pdf/1412.4729)
  ⟡ Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729.
⟡ Microsoft Paper  (http://arxiv.org/pdf/1505.01861)
@@ -345,8 +328,7 @@
⟡ MPI / Berkeley Paper (http://arxiv.org/pdf/1506.01698.pdf) 
  ⟡ Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698
⟡ Univ. Toronto / MIT Paper (http://arxiv.org/pdf/1506.06724.pdf) 
  ⟡ Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724
⟡ Univ. Montreal Paper (http://arxiv.org/pdf/1507.01053.pdf) 
  ⟡ Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053
⟡ TAU / USC paper (https://arxiv.org/pdf/1612.06950.pdf) 
@@ -387,8 +369,7 @@
(http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Dosovitskiy_Learning_to_Generate_2015_CVPR_paper.pdf)
  ⟡ Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra, "DRAW: A Recurrent Neural Network For Image Generation", ICML, 2015. Paper (https://arxiv.org/pdf/1502.04623v2.pdf) 
⟡ Adversarial Networks
  ⟡ Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Generative Adversarial Networks, NIPS, 2014. Paper  (http://arxiv.org/abs/1406.2661)
  ⟡ Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus, Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, NIPS, 2015. Paper  (http://arxiv.org/abs/1506.05751)
  ⟡ Lucas Theis, Aäron van den Oord, Matthias Bethge, "A note on the evaluation of generative models", ICLR 2016. Paper (http://arxiv.org/abs/1511.01844) 
  ⟡ Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence, "Variationally Auto-Encoded Deep Gaussian Processes", ICLR 2016. Paper (http://arxiv.org/pdf/1511.06455v2.pdf) 
@@ -396,8 +377,8 @@
  ⟡ Jost Tobias Springenberg, "Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks", ICLR 2016, Paper (http://arxiv.org/pdf/1511.06390v1.pdf) 
  ⟡ Harrison Edwards, Amos Storkey, "Censoring Representations with an Adversary", ICLR 2016, Paper (http://arxiv.org/pdf/1511.05897v3.pdf) 
  ⟡ Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii, "Distributional Smoothing with Virtual Adversarial Training", ICLR 2016, Paper (http://arxiv.org/pdf/1507.00677v8.pdf) 
  ⟡ Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros, "Generative Visual Manipulation on the Natural Image Manifold", ECCV 2016. Paper (https://arxiv.org/pdf/1609.03552v2.pdf) Code (https://github.com/junyanz/iGAN) Video (https://youtu.be/9c4z6YsBGQ0)
⟡ Mixing Convolutional and Adversarial Networks
  ⟡ Alec Radford, Luke Metz, Soumith Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", ICLR 2016. Paper (http://arxiv.org/pdf/1511.06434.pdf) 
@@ -421,8 +402,7 @@
(http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Zhang_Appearance-Based_Gaze_Estimation_2015_CVPR_paper.pdf) Website  
(https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/gaze-based-human-computer-interaction/appearance-based-gaze-estimation-in-the-wild-mpiigaze/)
⟡ Face Recognition
  ⟡ Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf, DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR, 2014. Paper  (https://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf)
  ⟡ Yi Sun, Ding Liang, Xiaogang Wang, Xiaoou Tang, DeepID3: Face Recognition with Very Deep Neural Networks, 2015. Paper  (http://arxiv.org/abs/1502.00873)
  ⟡ Florian Schroff, Dmitry Kalenichenko, James Philbin, FaceNet: A Unified Embedding for Face Recognition and Clustering, CVPR, 2015. Paper  (http://arxiv.org/abs/1503.03832)
⟡ Facial Landmark Detection
@@ -459,8 +439,7 @@
  ⟡ Torch-based deep learning libraries: torchnet (https://github.com/torchnet/torchnet)
⟡ Caffe: Deep learning framework by the BVLC Web (http://caffe.berkeleyvision.org/) 
⟡ Theano: Mathematical library in Python, maintained by LISA lab Web (http://deeplearning.net/software/theano/) 
  ⟡ Theano-based deep learning libraries: Pylearn2 (http://deeplearning.net/software/pylearn2/) , Blocks (https://github.com/mila-udem/blocks) , Keras (http://keras.io/) , Lasagne (https://github.com/Lasagne/Lasagne)
⟡ MatConvNet: CNNs for MATLAB Web (http://www.vlfeat.org/matconvnet/) 
⟡ MXNet: A flexible and efficient deep learning library for heterogeneous distributed systems with multi-language support Web (http://mxnet.io/) 
⟡ Deepgaze: A computer vision library for human-computer interaction based on CNNs Web (https://github.com/mpatacchiola/deepgaze)