<h1 id="awesome-deep-learning-awesome">Awesome Deep Learning <a
href="https://github.com/sindresorhus/awesome"><img
src="https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg"
alt="Awesome" /></a></h1>
<h2 id="table-of-contents">Table of Contents</h2>
<ul>
<li><p><strong><a href="#books">Books</a></strong></p></li>
<li><p><strong><a href="#courses">Courses</a></strong></p></li>
<li><p><strong><a href="#videos-and-lectures">Videos and
Lectures</a></strong></p></li>
<li><p><strong><a href="#papers">Papers</a></strong></p></li>
<li><p><strong><a href="#tutorials">Tutorials</a></strong></p></li>
<li><p><strong><a href="#researchers">Researchers</a></strong></p></li>
<li><p><strong><a href="#websites">Websites</a></strong></p></li>
<li><p><strong><a href="#datasets">Datasets</a></strong></p></li>
<li><p><strong><a href="#conferences">Conferences</a></strong></p></li>
<li><p><strong><a href="#frameworks">Frameworks</a></strong></p></li>
<li><p><strong><a href="#tools">Tools</a></strong></p></li>
<li><p><strong><a
href="#miscellaneous">Miscellaneous</a></strong></p></li>
<li><p><strong><a
href="#contributing">Contributing</a></strong></p></li>
</ul>
<h3 id="books">Books</h3>
<ol type="1">
<li><a href="http://www.deeplearningbook.org/">Deep Learning</a> by
Ian Goodfellow, Yoshua Bengio and Aaron Courville (05/07/2015)</li>
<li><a href="http://neuralnetworksanddeeplearning.com/">Neural Networks
and Deep Learning</a> by Michael Nielsen (Dec 2014)</li>
<li><a
href="http://research.microsoft.com/pubs/209355/DeepLearning-NowPublishing-Vol7-SIG-039.pdf">Deep
Learning</a> by Microsoft Research (2013)</li>
<li><a href="http://deeplearning.net/tutorial/deeplearning.pdf">Deep
Learning Tutorial</a> by LISA lab, University of Montreal (Jan 6
2015)</li>
<li><a href="https://github.com/karpathy/neuraltalk">neuraltalk</a> by
Andrej Karpathy: a numpy-based RNN/LSTM implementation</li>
<li><a href="http://www.boente.eti.br/fuzzy/ebook-fuzzy-mitchell.pdf">An
Introduction to Genetic Algorithms</a> by Melanie Mitchell</li>
<li><a href="http://aima.cs.berkeley.edu/">Artificial Intelligence: A
Modern Approach</a></li>
<li><a href="http://arxiv.org/pdf/1404.7828v4.pdf">Deep Learning in
Neural Networks: An Overview</a></li>
<li><a
href="https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/">Artificial
intelligence and machine learning: topic-wise explanations</a></li>
<li><a
href="https://www.manning.com/books/grokking-deep-learning-for-computer-vision">Grokking
Deep Learning for Computer Vision</a></li>
<li><a href="https://d2l.ai/">Dive into Deep Learning</a> - an
interactive, numpy-based deep learning book</li>
<li><a
href="https://www.oreilly.com/library/view/practical-deep-learning/9781492034858/">Practical
Deep Learning for Cloud, Mobile, and Edge</a> - a book on optimization
techniques for production deployment.</li>
<li><a
href="https://www.manning.com/books/math-and-architectures-of-deep-learning">Math
and Architectures of Deep Learning</a> - by Krishnendu Chaudhury</li>
<li><a
href="https://www.manning.com/books/tensorflow-in-action">TensorFlow 2.0
in Action</a> - by Thushan Ganegedara</li>
<li><a
href="https://www.manning.com/books/deep-learning-for-natural-language-processing">Deep
Learning for Natural Language Processing</a> - by Stephan
Raaijmakers</li>
<li><a
href="https://www.manning.com/books/deep-learning-patterns-and-practices">Deep
Learning Patterns and Practices</a> - by Andrew Ferlitsch</li>
<li><a href="https://www.manning.com/books/inside-deep-learning">Inside
Deep Learning</a> - by Edward Raff</li>
<li><a
href="https://www.manning.com/books/deep-learning-with-python-second-edition">Deep
Learning with Python, Second Edition</a> - by François Chollet</li>
<li><a
href="https://www.manning.com/books/evolutionary-deep-learning">Evolutionary
Deep Learning</a> - by Micheal Lanham</li>
<li><a
href="https://www.manning.com/books/engineering-deep-learning-platforms">Engineering
Deep Learning Platforms</a> - by Chi Wang and Donald Szeto</li>
<li><a
href="https://www.manning.com/books/deep-learning-with-r-second-edition">Deep
Learning with R, Second Edition</a> - by François Chollet with Tomasz
Kalinowski and J. J. Allaire</li>
<li><a
href="https://www.manning.com/books/regularization-in-deep-learning">Regularization
in Deep Learning</a> - by Liu Peng</li>
<li><a href="https://www.manning.com/books/jax-in-action">JAX in
Action</a> - by Grigory Sapunov</li>
<li><a
href="https://www.knowledgeisle.com/wp-content/uploads/2019/12/2-Aur%C3%A9lien-G%C3%A9ron-Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O%E2%80%99Reilly-Media-2019.pdf">Hands-On
Machine Learning with Scikit-Learn, Keras, and TensorFlow</a> by
Aurélien Géron (Oct 2019)</li>
</ol>
<h3 id="courses">Courses</h3>
<ol type="1">
<li><a href="https://class.coursera.org/ml-005">Machine Learning -
Stanford</a> by Andrew Ng on Coursera (2010-2014)</li>
<li><a href="http://work.caltech.edu/lectures.html">Machine Learning -
Caltech</a> by Yaser Abu-Mostafa (2012-2014)</li>
<li><a
href="http://www.cs.cmu.edu/~tom/10701_sp11/lectures.shtml">Machine
Learning - Carnegie Mellon</a> by Tom Mitchell (Spring 2011)</li>
<li><a href="https://class.coursera.org/neuralnets-2012-001">Neural
Networks for Machine Learning</a> by Geoffrey Hinton on Coursera
(2012)</li>
<li><a
href="https://www.youtube.com/playlist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH">Neural
networks class</a> by Hugo Larochelle from Université de Sherbrooke
(2013)</li>
<li><a
href="http://cilvr.cs.nyu.edu/doku.php?id=deeplearning:slides:start">Deep
Learning Course</a> by CILVR lab @ NYU (2014)</li>
<li><a
href="https://courses.edx.org/courses/BerkeleyX/CS188x_1/1T2013/courseware/">A.I.
- Berkeley</a> by Dan Klein and Pieter Abbeel (2013)</li>
<li><a
href="http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/">A.I.
- MIT</a> by Patrick Henry Winston (2010)</li>
<li><a
href="http://web.mit.edu/course/other/i2course/www/vision_and_learning_fall_2013.html">Vision
and learning - computers and brains</a> by Shimon Ullman, Tomaso Poggio,
Ethan Meyers @ MIT (2013)</li>
<li><a
href="http://vision.stanford.edu/teaching/cs231n/syllabus.html">Convolutional
Neural Networks for Visual Recognition - Stanford</a> by Fei-Fei Li,
Andrej Karpathy (2017)</li>
<li><a href="http://cs224d.stanford.edu/">Deep Learning for Natural
Language Processing - Stanford</a></li>
<li><a
href="http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html">Neural
Networks - Université de Sherbrooke</a></li>
<li><a
href="https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/">Machine
Learning - Oxford</a> (2014-2015)</li>
<li><a href="https://developer.nvidia.com/deep-learning-courses">Deep
Learning - Nvidia</a> (2015)</li>
<li><a
href="https://www.youtube.com/playlist?list=PLHyI3Fbmv0SdzMHAy0aN59oYnLy5vyyTA">Graduate
Summer School: Deep Learning, Feature Learning</a> by Geoffrey Hinton,
Yoshua Bengio, Yann LeCun, Andrew Ng, Nando de Freitas and several
others @ IPAM, UCLA (2012)</li>
<li><a href="https://www.udacity.com/course/deep-learning--ud730">Deep
Learning - Udacity/Google</a> by Vincent Vanhoucke and Arpan Chakraborty
(2016)</li>
<li><a
href="https://www.youtube.com/playlist?list=PLehuLRPyt1Hyi78UOkMPWCGRxGcA9NVOE">Deep
Learning - UWaterloo</a> by Prof. Ali Ghodsi at University of Waterloo
(2015)</li>
<li><a
href="https://www.youtube.com/watch?v=azaLcvuql_g&amp;list=PLjbUi5mgii6BWEUZf7He6nowWvGne_Y8r">Statistical
Machine Learning - CMU</a> by Prof. Larry Wasserman</li>
<li><a
href="https://www.college-de-france.fr/site/en-yann-lecun/course-2015-2016.htm">Deep
Learning Course</a> by Yann LeCun (2016)</li>
<li><a
href="https://www.youtube.com/playlist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm">Designing,
Visualizing and Understanding Deep Neural Networks-UC Berkeley</a></li>
<li><a href="http://uvadlc.github.io">UVA Deep Learning Course</a> - for
the MSc in Artificial Intelligence at the University of Amsterdam.</li>
<li><a href="http://selfdrivingcars.mit.edu/">MIT 6.S094: Deep Learning
for Self-Driving Cars</a></li>
<li><a href="http://introtodeeplearning.com/">MIT 6.S191: Introduction
to Deep Learning</a></li>
<li><a href="http://rll.berkeley.edu/deeprlcourse/">Berkeley CS 294:
Deep Reinforcement Learning</a></li>
<li><a href="https://www.manning.com/livevideo/keras-in-motion">Keras in
Motion video course</a></li>
<li><a href="http://course.fast.ai/">Practical Deep Learning For
Coders</a> by Jeremy Howard - Fast.ai</li>
<li><a href="http://deeplearning.cs.cmu.edu/">Introduction to Deep
Learning</a> by Prof. Bhiksha Raj (2017)</li>
<li><a href="https://www.deeplearning.ai/ai-for-everyone/">AI for
Everyone</a> by Andrew Ng (2019)</li>
<li><a href="https://introtodeeplearning.com">MIT Intro to Deep Learning
7 day bootcamp</a> - a seven-day bootcamp designed at MIT to introduce
deep learning methods and applications (2019)</li>
<li><a href="https://mithi.github.io/deep-blueberry">Deep Blueberry:
Deep Learning</a> - a free five-weekend plan for self-learners covering
the basics of deep-learning architectures such as CNNs, LSTMs, RNNs,
VAEs, GANs, DQN, A3C and more (2019)</li>
<li><a href="https://spinningup.openai.com/">Spinning Up in Deep
Reinforcement Learning</a> - A free deep reinforcement learning course
by OpenAI (2019)</li>
<li><a
href="https://www.coursera.org/specializations/deep-learning">Deep
Learning Specialization - Coursera</a> - a five-course specialization
for breaking into AI, by Andrew Ng.</li>
<li><a
href="https://www.youtube.com/playlist?list=PLZSO_6-bSqHQHBCoGaObUljoXAyyqhpFW">Deep
Learning - UC Berkeley | STAT-157</a> by Alex Smola and Mu Li
(2019)</li>
<li><a
href="https://www.manning.com/livevideo/machine-learning-for-mere-mortals">Machine
Learning for Mere Mortals video course</a> by Nick Chase</li>
<li><a
href="https://developers.google.com/machine-learning/crash-course/">Machine
Learning Crash Course with TensorFlow APIs</a> - Google AI</li>
<li><a href="https://course.fast.ai/part2">Deep Learning from the
Foundations</a> Jeremy Howard - Fast.ai</li>
<li><a
href="https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893">Deep
Reinforcement Learning (nanodegree) - Udacity</a> a 3-6 month Udacity
nanodegree, spanning multiple courses (2018)</li>
<li><a
href="https://www.manning.com/livevideo/grokking-deep-learning-in-motion">Grokking
Deep Learning in Motion</a> by Beau Carnes (2018)</li>
<li><a href="https://www.udemy.com/share/1000gAA0QdcV9aQng=/">Face
Detection with Computer Vision and Deep Learning</a> by Hakan
Cebeci</li>
<li><a href="https://classpert.com/deep-learning">Deep Learning Online
Course list at Classpert</a> - a list of deep learning online courses
(some free) from Classpert Online Course Search</li>
<li><a href="https://aws.training/machinelearning">AWS Machine
Learning</a> - machine learning and deep learning courses from Amazon's
Machine Learning University</li>
<li><a
href="https://www.udacity.com/course/deep-learning-pytorch--ud188">Intro
to Deep Learning with PyTorch</a> - A great introductory course on Deep
Learning by Udacity and Facebook AI</li>
<li><a href="https://www.kaggle.com/learn/deep-learning">Deep Learning
by Kaggle</a> - Kaggle's free course on deep learning</li>
<li><a href="https://cds.nyu.edu/deep-learning/">Yann LeCun's Deep
Learning Course at CDS</a> - DS-GA 1008 · Spring 2021</li>
<li><a href="https://webcms3.cse.unsw.edu.au/COMP9444/19T3/">Neural
Networks and Deep Learning</a> - COMP9444 19T3</li>
<li><a href="http://aishelf.org/category/ia/deep-learning/">Deep
Learning A.I.Shelf</a></li>
</ol>
<h3 id="videos-and-lectures">Videos and Lectures</h3>
<ol type="1">
<li><a href="https://www.youtube.com/watch?v=RIkxVci-R4k">How To Create
A Mind</a> By Ray Kurzweil</li>
<li><a href="https://www.youtube.com/watch?v=n1ViNeWhC24">Deep Learning,
Self-Taught Learning and Unsupervised Feature Learning</a> By Andrew
Ng</li>
<li><a
href="https://www.youtube.com/watch?v=vShMxxqtDDs&amp;index=3&amp;list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT">Recent
Developments in Deep Learning</a> By Geoff Hinton</li>
<li><a href="https://www.youtube.com/watch?v=sc-KbuZqGkI">The
Unreasonable Effectiveness of Deep Learning</a> by Yann LeCun</li>
<li><a href="https://www.youtube.com/watch?v=4xsVFLnHC_0">Deep Learning
of Representations</a> by Yoshua Bengio</li>
<li><a href="https://www.youtube.com/watch?v=6ufPpZDmPKA">Principles of
Hierarchical Temporal Memory</a> by Jeff Hawkins</li>
<li><a
href="https://www.youtube.com/watch?v=2QJi0ArLq7s&amp;list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT">Machine
Learning Discussion Group - Deep Learning w/ Stanford AI Lab</a> by Adam
Coates</li>
<li><a href="http://vimeo.com/80821560">Making Sense of the World with
Deep Learning</a> By Adam Coates</li>
<li><a href="https://www.youtube.com/watch?v=wZfVBwOO0-k">Demystifying
Unsupervised Feature Learning</a> By Adam Coates</li>
<li><a href="https://www.youtube.com/watch?v=3boKlkPBckA">Visual
Perception with Deep Learning</a> By Yann LeCun</li>
<li><a href="https://www.youtube.com/watch?v=AyzOUbkUf3M">The Next
Generation of Neural Networks</a> By Geoffrey Hinton at
GoogleTechTalks</li>
<li><a
href="http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn">The
wonderful and terrifying implications of computers that can learn</a> By
Jeremy Howard at TEDxBrussels</li>
<li><a
href="http://web.stanford.edu/class/cs294a/handouts.html">Unsupervised
Deep Learning - Stanford</a> by Andrew Ng at Stanford (2011)</li>
<li><a href="http://web.stanford.edu/class/cs224n/handouts/">Natural
Language Processing</a> by Chris Manning at Stanford</li>
<li><a
href="http://googleresearch.blogspot.com/2015/09/a-beginners-guide-to-deep-neural.html">A
Beginner's Guide to Deep Neural Networks</a> by Natalie Hammel and
Lorraine Yurshansky</li>
<li><a href="https://www.youtube.com/watch?v=czLI3oLDe8M">Deep Learning:
Intelligence from Big Data</a> by Steve Jurvetson (and panel) at VLAB,
Stanford</li>
<li><a href="https://www.youtube.com/watch?v=FoO8qDB8gUU">Introduction
to Artificial Neural Networks and Deep Learning</a> by Leo Isikdogan at
Motorola Mobility HQ</li>
<li><a href="https://nips.cc/Conferences/2016/Schedule">NIPS 2016
lecture and workshop videos</a> - NIPS 2016</li>
<li><a
href="https://www.youtube.com/watch?v=oS5fz_mHVz0&amp;list=PLWKotBjTDoLj3rXBL-nEIPRN9V3a9Cx07">Deep
Learning Crash Course</a>: a series of mini-lectures by Leo Isikdogan on
YouTube (2018)</li>
<li><a
href="https://www.manning.com/livevideo/deep-learning-crash-course">Deep
Learning Crash Course</a> By Oliver Zeigermann</li>
<li><a
href="https://www.manning.com/livevideo/deep-learning-with-r-in-motion">Deep
Learning with R in Motion</a>: a live video course that teaches how to
apply deep learning to text and images using the powerful Keras library
and its R language interface.</li>
<li><a
href="https://www.youtube.com/playlist?list=PLheiZMDg_8ufxEx9cNVcOYXsT3BppJP4b">Medical
Imaging with Deep Learning Tutorial</a>: This tutorial is styled as a
graduate lecture about medical imaging with deep learning. This will
cover the background of popular medical image domains (chest X-ray and
histology) as well as methods to tackle multi-modality/view,
segmentation, and counting tasks.</li>
<li><a
href="https://www.youtube.com/playlist?list=PLqYmG7hTraZCDxZ44o4p3N5Anz3lLRVZF">DeepMind
x UCL Deep Learning</a>: 2020 version</li>
<li><a
href="https://www.youtube.com/playlist?list=PLqYmG7hTraZBKeNJ-JE_eyJHZ7XgBoAyb">DeepMind
x UCL Reinforcement Learning</a>: deep reinforcement learning</li>
<li><a
href="https://www.youtube.com/playlist?list=PLp-0K3kfddPzCnS4CqKphh-zT3aDwybDe">CMU
11-785 Intro to Deep Learning Spring 2020</a> - course 11-785,
Introduction to Deep Learning by Bhiksha Raj</li>
<li><a
href="https://www.youtube.com/playlist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU">Machine
Learning CS 229</a>: the final lectures focus on deep learning, by
Andrew Ng</li>
<li><a href="https://youtu.be/LXWSE_9gHd0">What is Neural Structured
Learning by Andrew Ferlitsch</a></li>
<li><a href="https://youtu.be/_DaviS6K0Vc">Deep Learning Design Patterns
by Andrew Ferlitsch</a></li>
<li><a href="https://youtu.be/QCGSS3kyGo0">Architecture of a Modern CNN:
the design pattern approach by Andrew Ferlitsch</a></li>
<li><a href="https://youtu.be/K1PLeggQ33I">Metaparameters in a CNN by
Andrew Ferlitsch</a></li>
<li><a href="https://youtu.be/dH2nuI-1-qM">Multi-task CNN: a real-world
example by Andrew Ferlitsch</a></li>
<li><a href="https://youtu.be/1FyAh07jh0o">A friendly introduction to
deep reinforcement learning by Luis Serrano</a></li>
<li><a href="https://youtu.be/f6ivp84qFUc">What are GANs and how do they
work? by Edward Raff</a></li>
<li><a href="https://youtu.be/7VRdaqMDalQ">Coding a basic WGAN in
PyTorch by Edward Raff</a></li>
<li><a href="https://youtu.be/8TMT-gHlj_Q">Training a Reinforcement
Learning Agent by Miguel Morales</a></li>
<li><a
href="https://www.scaler.com/topics/what-is-deep-learning/">Understand
what Deep Learning is</a></li>
</ol>
<h3 id="papers">Papers</h3>
<p><em>You can also find the most cited deep learning papers from <a
href="https://github.com/terryum/awesome-deep-learning-papers">here</a></em></p>
<ol type="1">
<li><a
href="http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf">ImageNet
Classification with Deep Convolutional Neural Networks</a></li>
<li><a
href="http://www.cs.toronto.edu/~hinton/absps/esann-deep-final.pdf">Using
Very Deep Autoencoders for Content Based Image Retrieval</a></li>
<li><a
href="http://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf">Learning
Deep Architectures for AI</a></li>
<li><a href="http://deeplearning.cs.cmu.edu/">CMU's list of
papers</a></li>
<li><a href="http://nlp.stanford.edu/~socherr/pa4_ner.pdf">Neural
Networks for Named Entity Recognition</a> <a
href="http://nlp.stanford.edu/~socherr/pa4-ner.zip">zip</a></li>
<li><a
href="http://www.iro.umontreal.ca/~bengioy/papers/YB-tricks.pdf">Training
tricks by YB</a></li>
<li><a href="http://www.cs.toronto.edu/~hinton/deeprefs.html">Geoff
Hinton's reading list (all papers)</a></li>
<li><a href="http://www.cs.toronto.edu/~graves/preprint.pdf">Supervised
Sequence Labelling with Recurrent Neural Networks</a></li>
<li><a
href="http://www.fit.vutbr.cz/~imikolov/rnnlm/thesis.pdf">Statistical
Language Models based on Neural Networks</a></li>
<li><a
href="http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf">Training
Recurrent Neural Networks</a></li>
<li><a href="http://nlp.stanford.edu/~socherr/thesis.pdf">Recursive Deep
Learning for Natural Language Processing and Computer Vision</a></li>
<li><a
href="http://www.di.ufpe.br/~fnj/RNA/bibliografia/BRNN.pdf">Bi-directional
RNN</a></li>
<li><a
href="http://web.eecs.utk.edu/~itamar/courses/ECE-692/Bobby_paper1.pdf">LSTM</a></li>
<li><a href="http://arxiv.org/pdf/1406.1078v3.pdf">GRU - Gated Recurrent
Unit</a></li>
<li><a href="http://arxiv.org/pdf/1502.02367v3.pdf">GFRNN</a> <a
href="http://jmlr.org/proceedings/papers/v37/chung15.pdf">(JMLR)</a> <a
href="http://jmlr.org/proceedings/papers/v37/chung15-supp.pdf">(supplementary)</a></li>
<li><a href="http://arxiv.org/pdf/1503.04069v1.pdf">LSTM: A Search Space
Odyssey</a></li>
<li><a href="http://arxiv.org/pdf/1506.00019v1.pdf">A Critical Review of
Recurrent Neural Networks for Sequence Learning</a></li>
<li><a href="http://arxiv.org/pdf/1506.02078v1.pdf">Visualizing and
Understanding Recurrent Networks</a></li>
<li><a
href="http://jmlr.org/proceedings/papers/v37/jozefowicz15.pdf">Wojciech
Zaremba, Ilya Sutskever, An Empirical Exploration of Recurrent Network
Architectures</a></li>
<li><a
href="http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf">Recurrent
Neural Network based Language Model</a></li>
<li><a
href="http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_icassp2011_5528.pdf">Extensions
of Recurrent Neural Network Language Model</a></li>
<li><a
href="http://www.fit.vutbr.cz/~imikolov/rnnlm/ApplicationOfRNNinMeetingRecognition_IS2011.pdf">Recurrent
Neural Network based Language Modeling in Meeting Recognition</a></li>
<li><a href="http://cs224d.stanford.edu/papers/maas_paper.pdf">Deep
Neural Networks for Acoustic Modeling in Speech Recognition</a></li>
<li><a href="http://www.cs.toronto.edu/~fritz/absps/RNN13.pdf">Speech
Recognition with Deep Recurrent Neural Networks</a></li>
<li><a href="http://arxiv.org/pdf/1505.00521v1">Reinforcement Learning
Neural Turing Machines</a></li>
<li><a href="http://arxiv.org/pdf/1406.1078v3.pdf">Learning Phrase
Representations using RNN Encoder-Decoder for Statistical Machine
Translation</a></li>
<li><a
href="http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf">Google
- Sequence to Sequence Learning with Neural Networks</a></li>
<li><a href="http://arxiv.org/pdf/1410.3916v10">Memory Networks</a></li>
<li><a href="http://arxiv.org/pdf/1507.01273v1">Policy Learning with
Continuous Memory States for Partially Observed Robotic Control</a></li>
<li><a href="http://arxiv.org/pdf/1505.01861v1.pdf">Microsoft - Jointly
Modeling Embedding and Translation to Bridge Video and Language</a></li>
<li><a href="http://arxiv.org/pdf/1410.5401v2.pdf">Neural Turing
Machines</a></li>
<li><a href="http://arxiv.org/pdf/1506.07285v1.pdf">Ask Me Anything:
Dynamic Memory Networks for Natural Language Processing</a></li>
<li><a
href="http://www.nature.com/nature/journal/v529/n7587/pdf/nature16961.pdf">Mastering
the Game of Go with Deep Neural Networks and Tree Search</a></li>
<li><a href="https://arxiv.org/abs/1502.03167">Batch
Normalization</a></li>
<li><a href="https://arxiv.org/pdf/1512.03385v1.pdf">Residual
Learning</a></li>
<li><a href="https://arxiv.org/pdf/1611.07004v1.pdf">Image-to-Image
Translation with Conditional Adversarial Networks</a></li>
<li><a href="https://bair.berkeley.edu/">Berkeley AI
Research (BAIR) Laboratory</a></li>
<li><a href="https://arxiv.org/abs/1704.04861">MobileNets by
Google</a></li>
<li><a href="https://arxiv.org/abs/1706.05739">Cross Audio-Visual
Recognition in the Wild Using Deep Learning</a></li>
<li><a href="https://arxiv.org/abs/1710.09829">Dynamic Routing Between
Capsules</a></li>
<li><a href="https://openreview.net/pdf?id=HJWLfGWRb">Matrix Capsules
with EM Routing</a></li>
<li><a
href="http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf">Efficient
BackProp</a></li>
<li><a href="https://arxiv.org/pdf/1406.2661v1.pdf">Generative
Adversarial Nets</a></li>
<li><a href="https://arxiv.org/pdf/1504.08083.pdf">Fast R-CNN</a></li>
<li><a href="https://arxiv.org/pdf/1503.03832.pdf">FaceNet: A Unified
Embedding for Face Recognition and Clustering</a></li>
<li><a
href="https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf">Siamese
Neural Networks for One-shot Image Recognition</a></li>
<li><a href="https://arxiv.org/pdf/2006.03511.pdf">Unsupervised
Translation of Programming Languages</a></li>
<li><a
href="http://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning.pdf">Matching
Networks for One Shot Learning</a></li>
<li><a href="https://arxiv.org/pdf/2106.13112.pdf">VOLO: Vision
Outlooker for Visual Recognition</a></li>
<li><a href="https://arxiv.org/pdf/2010.11929.pdf">ViT: An Image is
Worth 16x16 Words: Transformers for Image Recognition at Scale</a></li>
<li><a href="http://proceedings.mlr.press/v37/ioffe15.pdf">Batch
Normalization: Accelerating Deep Network Training by Reducing Internal
Covariate Shift</a></li>
<li><a
href="http://geometrylearning.com/paper/DeepFaceDrawing.pdf?fbclid=IwAR0colWFHPGBCB1APZq9JVsWeWtmeZd9oCTNQvR52T5PRUJP_dLOwB8pt0I">DeepFaceDrawing:
Deep Generation of Face Images from Sketches</a></li>
</ol>
<h3 id="tutorials">Tutorials</h3>
<ol type="1">
<li><a
href="http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial">UFLDL
Tutorial 1</a></li>
<li><a
href="http://ufldl.stanford.edu/tutorial/supervised/LinearRegression/">UFLDL
Tutorial 2</a></li>
<li><a
href="http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial">Deep
Learning for NLP (without Magic)</a></li>
<li><a
href="http://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks">A
Deep Learning Tutorial: From Perceptrons to Deep Networks</a></li>
<li><a
href="http://www.metacademy.org/roadmaps/rgrosse/deep_learning">Deep
Learning from the Bottom up</a></li>
<li><a href="http://deeplearning.net/tutorial/deeplearning.pdf">Theano
Tutorial</a></li>
<li><a
href="http://uk.mathworks.com/help/pdf_doc/nnet/nnet_ug.pdf">Neural
Networks for Matlab</a></li>
<li><a
href="http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/">Using
convolutional neural nets to detect facial keypoints tutorial</a></li>
<li><a
href="https://github.com/clementfarabet/ipam-tutorials/tree/master/th_tutorials">Torch7
Tutorials</a></li>
<li><a
href="https://github.com/josephmisiti/machine-learning-module">The Best
Machine Learning Tutorials On The Web</a></li>
<li><a
href="http://www.robots.ox.ac.uk/~vgg/practicals/cnn/index.html">VGG
Convolutional Neural Networks Practical</a></li>
<li><a href="https://github.com/nlintz/TensorFlow-Tutorials">TensorFlow
tutorials</a></li>
<li><a href="https://github.com/pkmital/tensorflow_tutorials">More
TensorFlow tutorials</a></li>
<li><a
href="https://github.com/aymericdamien/TensorFlow-Examples">TensorFlow
Python Notebooks</a></li>
<li><a href="https://github.com/Vict0rSch/deep_learning">Keras and
Lasagne Deep Learning Tutorials</a></li>
<li><a
href="https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition">Classification
of raw time series in TensorFlow with an LSTM RNN</a></li>
<li><a
href="https://github.com/astorfi/TensorFlow-World">TensorFlow-World</a></li>
<li><a
href="https://www.manning.com/books/deep-learning-with-python">Deep
Learning with Python</a></li>
<li><a
href="https://www.manning.com/books/grokking-deep-learning">Grokking
Deep Learning</a></li>
<li><a
href="https://www.manning.com/books/deep-learning-for-search">Deep
Learning for Search</a></li>
<li><a
href="https://medium.com/sicara/keras-tutorial-content-based-image-retrieval-convolutional-denoising-autoencoder-dc91450cc511">Keras
Tutorial: Content Based Image Retrieval Using a Convolutional Denoising
Autoencoder</a></li>
<li><a href="https://github.com/yunjey/pytorch-tutorial">Pytorch
Tutorial by Yunjey Choi</a></li>
<li><a
href="https://ahmedbesbes.com/understanding-deep-convolutional-neural-networks-with-a-practical-use-case-in-tensorflow-and-keras.html">Understanding
deep convolutional neural networks with a practical use-case in
TensorFlow and Keras</a></li>
<li><a
href="https://ahmedbesbes.com/overview-and-benchmark-of-traditional-and-deep-learning-models-in-text-classification.html">Overview
and benchmark of traditional and deep learning models in text
classification</a></li>
<li><a href="https://github.com/MelAbgrall/HardwareforAI">Hardware for
AI: understanding computer hardware &amp; building your own
computer</a></li>
<li><a
href="https://hackr.io/tutorials/learn-artificial-intelligence-ai">Programming
Community Curated Resources</a></li>
<li><a
href="https://amitness.com/2020/02/illustrated-self-supervised-learning/">The
Illustrated Self-Supervised Learning</a></li>
<li><a href="https://amitness.com/2020/02/albert-visual-summary/">Visual
Paper Summary: ALBERT (A Lite BERT)</a></li>
<li><a
href="https://www.manning.com/liveproject/semi-supervised-deep-learning-with-gans-for-melanoma-detection/">Semi-Supervised
Deep Learning with GANs for Melanoma Detection</a></li>
<li><a
href="https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/NER%20using%20Reformer.ipynb">Named
Entity Recognition using Reformers</a></li>
<li><a
href="https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/Deep%20N-Gram.ipynb">Deep
N-Gram Models on Shakespeare's works</a></li>
<li><a
href="https://github.com/SauravMaheshkar/Trax-Examples/blob/main/vision/illustrated-wideresnet.ipynb">Wide
Residual Networks</a></li>
<li><a href="https://github.com/SauravMaheshkar/Flax-Examples">Fashion
MNIST using Flax</a></li>
<li><a
href="https://github.com/SauravMaheshkar/Fake-News-Classification">Fake
News Classification (with streamlit deployment)</a></li>
<li><a
href="https://github.com/SauravMaheshkar/CoxPH-Model-for-Primary-Biliary-Cirrhosis">Regression
Analysis for Primary Biliary Cirrhosis</a></li>
<li><a
href="https://github.com/SauravMaheshkar/Cross-Matching-Methods-for-Astronomical-Catalogs">Cross
Matching Methods for Astronomical Catalogs</a></li>
<li><a
href="https://github.com/SauravMaheshkar/Named-Entity-Recognition-">Named
Entity Recognition using Bidirectional LSTMs</a></li>
<li><a
href="https://github.com/SauravMaheshkar/Flutter_Image-Recognition">Image
Recognition App using Tflite and Flutter</a></li>
</ol>
<h2 id="researchers">Researchers</h2>
<ol type="1">
<li><a href="http://aaroncourville.wordpress.com">Aaron
Courville</a></li>
<li><a href="http://www.cs.toronto.edu/~asamir/">Abdel-rahman
Mohamed</a></li>
<li><a href="http://cs.stanford.edu/~acoates/">Adam Coates</a></li>
<li><a href="http://research.microsoft.com/en-us/people/alexac/">Alex
Acero</a></li>
<li><a href="http://www.cs.utoronto.ca/~kriz/index.html">Alex
Krizhevsky</a></li>
<li><a href="http://users.ics.aalto.fi/alexilin/">Alexander
Ilin</a></li>
<li><a href="http://homepages.inf.ed.ac.uk/amos/">Amos Storkey</a></li>
<li><a href="https://karpathy.ai/">Andrej Karpathy</a></li>
<li><a href="http://www.stanford.edu/~asaxe/">Andrew M. Saxe</a></li>
<li><a href="http://www.cs.stanford.edu/people/ang/">Andrew Ng</a></li>
<li><a href="http://research.google.com/pubs/author37792.html">Andrew W.
Senior</a></li>
<li><a href="http://www.gatsby.ucl.ac.uk/~amnih/">Andriy Mnih</a></li>
<li><a href="http://www.cs.nyu.edu/~naz/">Ayse Naz Erkan</a></li>
<li><a href="http://reslab.elis.ugent.be/benjamin">Benjamin
Schrauwen</a></li>
<li><a href="https://www.cisuc.uc.pt/people/show/2020">Bernardete
Ribeiro</a></li>
<li><a
href="http://vision.caltech.edu/~bchen3/Site/Bo_David_Chen.html">Bo
David Chen</a></li>
<li><a href="http://cs.nyu.edu/~ylan/">Y-Lan Boureau</a></li>
<li><a
href="http://researcher.watson.ibm.com/researcher/view.php?person=us-bedk">Brian
Kingsbury</a></li>
<li><a href="http://nlp.stanford.edu/~manning/">Christopher
Manning</a></li>
<li><a href="http://www.clement.farabet.net/">Clement Farabet</a></li>
<li><a href="http://www.idsia.ch/~ciresan/">Dan Claudiu Cireșan</a></li>
<li><a
href="http://serre-lab.clps.brown.edu/person/david-reichert/">David
Reichert</a></li>
<li><a href="http://mil.engr.utk.edu/nmil/member/5.html">Derek
Rose</a></li>
<li><a
href="http://research.microsoft.com/en-us/people/dongyu/default.aspx">Dong
Yu</a></li>
<li><a href="http://www.seas.upenn.edu/~wulsin/">Drausin Wulsin</a></li>
<li><a href="http://music.ece.drexel.edu/people/eschmidt">Erik M.
Schmidt</a></li>
<li><a
href="https://engineering.purdue.edu/BME/People/viewPersonById?resource_id=71333">Eugenio
Culurciello</a></li>
<li><a href="http://research.microsoft.com/en-us/people/fseide/">Frank
Seide</a></li>
<li><a href="http://homes.cs.washington.edu/~galen/">Galen
Andrew</a></li>
<li><a href="http://www.cs.toronto.edu/~hinton/">Geoffrey
Hinton</a></li>
<li><a href="http://www.cs.toronto.edu/~gdahl/">George Dahl</a></li>
<li><a href="http://www.uoguelph.ca/~gwtaylor/">Graham Taylor</a></li>
<li><a href="http://gregoire.montavon.name/">Grégoire Montavon</a></li>
<li><a href="http://personal-homepages.mis.mpg.de/montufar/">Guido
Francisco Montúfar</a></li>
<li><a href="http://brainlogging.wordpress.com/">Guillaume
Desjardins</a></li>
<li><a href="http://www.ais.uni-bonn.de/~schulz/">Hannes Schulz</a></li>
<li><a href="http://www.lri.fr/~hpaugam/">Hélène Paugam-Moisy</a></li>
<li><a href="http://web.eecs.umich.edu/~honglak/">Honglak Lee</a></li>
<li><a href="http://www.dmi.usherb.ca/~larocheh/index_en.html">Hugo
Larochelle</a></li>
<li><a href="http://www.cs.toronto.edu/~ilya/">Ilya Sutskever</a></li>
<li><a href="http://mil.engr.utk.edu/nmil/member/2.html">Itamar
Arel</a></li>
<li><a href="http://www.cs.toronto.edu/~jmartens/">James
Martens</a></li>
<li><a href="http://www.jasonmorton.com/">Jason Morton</a></li>
<li><a href="http://www.thespermwhale.com/jaseweston/">Jason
Weston</a></li>
<li><a href="http://research.google.com/pubs/jeff.html">Jeff
Dean</a></li>
<li><a href="http://cs.stanford.edu/~jngiam/">Jiquan Ngiam</a></li>
<li><a href="http://www-etud.iro.umontreal.ca/~turian/">Joseph
Turian</a></li>
<li><a href="http://aclab.ca/users/josh/index.html">Joshua Matthew
Susskind</a></li>
<li><a href="http://www.idsia.ch/~juergen/">Jürgen Schmidhuber</a></li>
<li><a href="https://sites.google.com/site/blancousna/">Justin A.
Blanco</a></li>
<li><a href="http://koray.kavukcuoglu.org/">Koray Kavukcuoglu</a></li>
<li><a href="http://users.ics.aalto.fi/kcho/">KyungHyun Cho</a></li>
<li><a href="http://research.microsoft.com/en-us/people/deng/">Li
Deng</a></li>
<li><a
href="http://www.kyb.tuebingen.mpg.de/nc/employee/details/lucas.html">Lucas
Theis</a></li>
<li><a href="http://ludovicarnold.altervista.org/home/">Ludovic
Arnold</a></li>
<li><a href="http://www.cs.nyu.edu/~ranzato/">Marc'Aurelio
Ranzato</a></li>
<li><a href="http://aass.oru.se/~mlt/">Martin Längkvist</a></li>
<li><a href="http://mdenil.com/">Misha Denil</a></li>
<li><a href="http://www.cs.toronto.edu/~norouzi/">Mohammad
Norouzi</a></li>
<li><a href="http://www.cs.ubc.ca/~nando/">Nando de Freitas</a></li>
<li><a href="http://www.cs.utoronto.ca/~ndjaitly/">Navdeep
Jaitly</a></li>
<li><a href="http://nicolas.le-roux.name/">Nicolas Le Roux</a></li>
<li><a href="http://www.cs.toronto.edu/~nitish/">Nitish
Srivastava</a></li>
<li><a href="https://www.cisuc.uc.pt/people/show/2028">Noel
Lopes</a></li>
<li><a href="http://www.cs.berkeley.edu/~vinyals/">Oriol
Vinyals</a></li>
<li><a href="http://www.iro.umontreal.ca/~vincentp">Pascal
Vincent</a></li>
<li><a href="https://sites.google.com/site/drpngx/">Patrick
Nguyen</a></li>
<li><a href="http://homes.cs.washington.edu/~pedrod/">Pedro
Domingos</a></li>
<li><a href="http://homepages.inf.ed.ac.uk/pseries/">Peggy
Series</a></li>
<li><a href="http://cs.nyu.edu/~sermanet">Pierre Sermanet</a></li>
<li><a href="http://www.cs.nyu.edu/~mirowski/">Piotr Mirowski</a></li>
<li><a href="http://ai.stanford.edu/~quocle/">Quoc V. Le</a></li>
<li><a href="http://bci.tugraz.at/scherer/">Reinhold Scherer</a></li>
<li><a href="http://www.socher.org/">Richard Socher</a></li>
<li><a href="http://cs.nyu.edu/~fergus/pmwiki/pmwiki.php">Rob
Fergus</a></li>
<li><a href="http://mil.engr.utk.edu/nmil/member/19.html">Robert
Coop</a></li>
<li><a href="http://homes.cs.washington.edu/~rcg/">Robert Gens</a></li>
<li><a href="http://people.csail.mit.edu/rgrosse/">Roger Grosse</a></li>
<li><a href="http://ronan.collobert.com/">Ronan Collobert</a></li>
<li><a href="http://www.utstat.toronto.edu/~rsalakhu/">Ruslan
Salakhutdinov</a></li>
<li><a
href="http://www.kyb.tuebingen.mpg.de/nc/employee/details/sgerwinn.html">Sebastian
Gerwinn</a></li>
<li><a href="http://www.cmap.polytechnique.fr/~mallat/">Stéphane
Mallat</a></li>
<li><a href="http://www.ais.uni-bonn.de/behnke/">Sven Behnke</a></li>
<li><a href="http://users.ics.aalto.fi/praiko/">Tapani Raiko</a></li>
<li><a href="https://sites.google.com/site/tsainath/">Tara
Sainath</a></li>
<li><a href="http://www.cs.toronto.edu/~tijmen/">Tijmen
Tieleman</a></li>
<li><a href="http://mil.engr.utk.edu/nmil/member/36.html">Tom
Karnowski</a></li>
<li><a href="https://research.facebook.com/tomas-mikolov">Tomáš
Mikolov</a></li>
<li><a href="http://www.idsia.ch/~meier/">Ueli Meier</a></li>
<li><a href="http://vincent.vanhoucke.com">Vincent Vanhoucke</a></li>
<li><a href="http://www.cs.toronto.edu/~vmnih/">Volodymyr Mnih</a></li>
<li><a href="http://yann.lecun.com/">Yann LeCun</a></li>
<li><a href="http://www.cs.toronto.edu/~tang/">Yichuan Tang</a></li>
<li><a
href="http://www.iro.umontreal.ca/~bengioy/yoshua_en/index.html">Yoshua
Bengio</a></li>
<li><a href="http://yota.ro/">Yotaro Kubo</a></li>
<li><a href="http://ai.stanford.edu/~wzou">Youzhi (Will) Zou</a></li>
<li><a href="http://vision.stanford.edu/feifeili">Fei-Fei Li</a></li>
<li><a href="https://research.google.com/pubs/105214.html">Ian
Goodfellow</a></li>
<li><a href="http://www.site.uottawa.ca/~laganier/">Robert
Laganière</a></li>
<li><a href="http://www.ayyucekizrak.com/">Merve Ayyüce Kızrak</a></li>
</ol>
<h3 id="websites">Websites</h3>
<ol type="1">
<li><a href="http://deeplearning.net/">deeplearning.net</a></li>
<li><a
href="http://deeplearning.stanford.edu/">deeplearning.stanford.edu</a></li>
<li><a href="http://nlp.stanford.edu/">nlp.stanford.edu</a></li>
<li><a
href="http://www.ai-junkie.com/ann/evolved/nnt1.html">ai-junkie.com</a></li>
<li><a
href="http://cs.brown.edu/research/ai/">cs.brown.edu/research/ai</a></li>
<li><a href="http://www.eecs.umich.edu/ai/">eecs.umich.edu/ai</a></li>
<li><a
href="http://www.cs.utexas.edu/users/ai-lab/">cs.utexas.edu/users/ai-lab</a></li>
<li><a
href="http://www.cs.washington.edu/research/ai/">cs.washington.edu/research/ai</a></li>
<li><a href="http://www.aiai.ed.ac.uk/">aiai.ed.ac.uk</a></li>
<li><a href="http://www-aig.jpl.nasa.gov/">www-aig.jpl.nasa.gov</a></li>
<li><a href="http://www.csail.mit.edu/">csail.mit.edu</a></li>
<li><a
href="http://cgi.cse.unsw.edu.au/~aishare/">cgi.cse.unsw.edu.au/~aishare</a></li>
<li><a
href="http://www.cs.rochester.edu/research/ai/">cs.rochester.edu/research/ai</a></li>
<li><a href="http://www.ai.sri.com/">ai.sri.com</a></li>
<li><a href="http://www.isi.edu/AI/isd.htm">isi.edu/AI/isd.htm</a></li>
<li><a
href="http://www.nrl.navy.mil/itd/aic/">nrl.navy.mil/itd/aic</a></li>
<li><a
href="http://hips.seas.harvard.edu/">hips.seas.harvard.edu</a></li>
<li><a href="http://aiweekly.co">AI Weekly</a></li>
<li><a href="http://statistics.ucla.edu/">stat.ucla.edu</a></li>
<li><a
href="http://deeplearning.cs.toronto.edu/i2t">deeplearning.cs.toronto.edu</a></li>
<li><a
href="http://jeffdonahue.com/lrcn/">jeffdonahue.com/lrcn/</a></li>
<li><a href="http://www.visualqa.org/">visualqa.org</a></li>
<li><a
href="https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/">www.mpi-inf.mpg.de/departments/computer-vision…</a></li>
<li><a href="http://news.startup.ml/">Deep Learning News</a></li>
<li><a href="https://medium.com/@ageitgey/">Machine Learning is Fun!
Adam Geitgey's Blog</a></li>
<li><a href="http://yerevann.com/a-guide-to-deep-learning/">Guide to
Machine Learning</a></li>
<li><a href="https://spandan-madan.github.io/DeepLearningProject/">Deep
Learning for Beginners</a></li>
<li><a href="https://machinelearningmastery.com/blog/">Machine Learning
Mastery blog</a></li>
<li><a href="https://ml-compiled.readthedocs.io/en/latest/">ML
Compiled</a></li>
<li><a
href="https://hackr.io/tutorials/learn-artificial-intelligence-ai">Programming
Community Curated Resources</a></li>
<li><a
href="https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/">A
Beginner's Guide To Understanding Convolutional Neural Networks</a></li>
<li><a href="http://ahmedbesbes.com">ahmedbesbes.com</a></li>
<li><a href="https://amitness.com/">amitness.com</a></li>
<li><a href="https://theaisummer.com/">AI Summer</a></li>
<li><a href="https://aihub.org/">AI Hub - supported by AAAI,
NeurIPS</a></li>
<li><a href="https://www.catalyzeX.com">CatalyzeX: Machine Learning Hub
for Builders and Makers</a></li>
<li><a href="https://theepiccode.com/">The Epic Code</a></li>
<li><a href="https://allainews.com/">all AI news</a></li>
</ol>
<h3 id="datasets">Datasets</h3>
<ol type="1">
<li><a href="http://yann.lecun.com/exdb/mnist/">MNIST</a> Handwritten
digits</li>
<li><a href="http://ufldl.stanford.edu/housenumbers/">Google House
Numbers</a> from street view</li>
<li><a href="http://www.cs.toronto.edu/~kriz/cifar.html">CIFAR-10 and
CIFAR-100</a></li>
<li><a href="http://www.image-net.org/">IMAGENET</a></li>
<li><a href="http://groups.csail.mit.edu/vision/TinyImages/">Tiny
Images</a> - 80 million tiny images</li>
<li><a
href="https://yahooresearch.tumblr.com/post/89783581601/one-hundred-million-creative-commons-flickr-images">Flickr
Data</a> 100 Million Yahoo dataset</li>
<li><a
href="http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/">Berkeley
Segmentation Dataset 500</a></li>
<li><a href="http://archive.ics.uci.edu/ml/">UC Irvine Machine Learning
Repository</a></li>
<li><a
href="http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html">Flickr
8k</a></li>
<li><a href="http://shannon.cs.illinois.edu/DenotationGraph/">Flickr
30k</a></li>
<li><a href="http://mscoco.org/home/">Microsoft COCO</a></li>
<li><a href="http://www.visualqa.org/">VQA</a></li>
<li><a href="http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa/">Image
QA</a></li>
<li><a href="http://www.uk.research.att.com/facedatabase.html">AT&amp;T
Laboratories Cambridge face database</a></li>
<li><a href="http://xtreme.gsfc.nasa.gov">AVHRR Pathfinder</a></li>
<li><a href="http://www.anc.ed.ac.uk/~amos/afreightdata.html">Air
Freight</a> - The Air Freight data set is a ray-traced image sequence
along with ground truth segmentation based on textural characteristics.
(455 images + GT, each 160x120 pixels). (Formats: PNG)<br />
</li>
<li><a href="http://www.science.uva.nl/~aloi/">Amsterdam Library of
Object Images</a> - ALOI is a color image collection of one-thousand
small objects, recorded for scientific purposes. In order to capture the
sensory variation in object recordings, we systematically varied viewing
angle, illumination angle, and illumination color for each object, and
additionally captured wide-baseline stereo images. We recorded over a
hundred images of each object, yielding a total of 110,250 images for
the collection. (Formats: png)</li>
<li><a href="http://www.imm.dtu.dk/~aam/">Annotated face, hand, cardiac
&amp; meat images</a> - Most images &amp; annotations are supplemented
by various ASM/AAM analyses using the AAM-API. (Formats: bmp,asf)</li>
<li><a href="http://www.imm.dtu.dk/image/">Image Analysis and Computer
Graphics</a><br />
</li>
<li><a href="http://www.cog.brown.edu/~tarr/stimuli.html">Brown
University Stimuli</a> - A variety of datasets including geons, objects,
and “greebles”. Good for testing recognition algorithms. (Formats:
pict)</li>
<li><a href="http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/">CAVIAR
video sequences of mall and public space behavior</a> - 90K video frames
in 90 sequences of various human activities, with XML ground truth of
detection and behavior classification (Formats: MPEG2 &amp; JPEG)</li>
<li><a href="http://www.ipab.inf.ed.ac.uk/mvu/">Machine Vision
Unit</a></li>
<li><a href="http://www.cs.waikato.ac.nz/~singlis/ccitt.html">CCITT Fax
standard images</a> - 8 images (Formats: gif)</li>
<li><a href="cil-ster.html">CMU CIL's Stereo Data with Ground Truth</a>
- 3 sets of 11 images, including color tiff images with
spectroradiometry (Formats: gif, tiff)</li>
<li><a href="http://www.ri.cmu.edu/projects/project_418.html">CMU PIE
Database</a> - A database of 41,368 face images of 68 people captured
under 13 poses, 43 illumination conditions, and with 4 different
expressions.</li>
<li><a href="http://www.ius.cs.cmu.edu/idb/">CMU VASC Image Database</a>
- Images, sequences, stereo pairs (thousands of images) (Formats: Sun
Rasterimage)</li>
<li><a
href="http://www.vision.caltech.edu/html-files/archive.html">Caltech
Image Database</a> - about 20 images - mostly top-down views of small
objects and toys. (Formats: GIF)</li>
<li><a href="http://www.cs.columbia.edu/CAVE/curet/">Columbia-Utrecht
Reflectance and Texture Database</a> - Texture and reflectance
measurements for over 60 samples of 3D texture, observed with over 200
different combinations of viewing and illumination directions. (Formats:
bmp)</li>
<li><a href="http://www.cs.sfu.ca/~colour/data/index.html">Computational
Colour Constancy Data</a> - A dataset oriented towards computational
color constancy, but useful for computer vision in general. It includes
synthetic data, camera sensor data, and over 700 images. (Formats:
tiff)</li>
<li><a href="http://www.cs.sfu.ca/~colour/">Computational Vision
Lab</a></li>
<li><a
href="http://www.cs.washington.edu/research/imagedatabase/groundtruth/">Content-based
image retrieval database</a> - 11 sets of color images for testing
algorithms for content-based retrieval. Most sets have a description
file with names of objects in each image. (Formats: jpg)</li>
<li><a
href="http://www.cs.washington.edu/research/imagedatabase/">Efficient
Content-based Retrieval Group</a></li>
<li><a
href="http://ls7-www.cs.uni-dortmund.de/~peters/pages/research/modeladaptsys/modeladaptsys_vba_rov.html">Densely
Sampled View Spheres</a> - Densely sampled view spheres - upper half of
the view sphere of two toy objects with 2500 images each. (Formats:
tiff)</li>
<li><a href="http://ls7-www.cs.uni-dortmund.de/">Computer Science VII
(Graphical Systems)</a></li>
<li><a
href="https://web-beta.archive.org/web/20011216051535/vision.psych.umn.edu/www/kersten-lab/demos/digitalembryo.html">Digital
Embryos</a> - Digital embryos are novel objects which may be used to
develop and test object recognition systems. They have an organic
appearance. (Formats: various formats are available on request)</li>
<li><a
href="http://vision.psych.umn.edu/users/kersten//kersten-lab/kersten-lab.html">University
of Minnesota Vision Lab</a></li>
<li><a href="http://www.gastrointestinalatlas.com">El Salvador Atlas of
Gastrointestinal VideoEndoscopy</a> - High-resolution images and videos
of studies taken from gastrointestinal video endoscopy. (Formats: jpg,
mpg, gif)</li>
<li><a
href="http://sting.cycollege.ac.cy/~alanitis/fgnetaging/index.htm">FG-NET
Facial Aging Database</a> - Database contains 1002 face images showing
subjects at different ages. (Formats: jpg)</li>
<li><a href="http://bias.csr.unibo.it/fvc2000/">FVC2000 Fingerprint
Databases</a> - FVC2000 is the First International Competition for
Fingerprint Verification Algorithms. Four fingerprint databases
constitute the FVC2000 benchmark (3520 fingerprints in all).</li>
<li><a href="http://biolab.csr.unibo.it/home.asp">Biometric Systems
Lab</a> - University of Bologna</li>
<li><a href="http://www.fg-net.org">Face and Gesture images and image
sequences</a> - Several image datasets of faces and gestures that are
ground truth annotated for benchmarking</li>
<li><a
href="http://www-i6.informatik.rwth-aachen.de/~dreuw/database.html">German
Fingerspelling Database</a> - The database contains 35 gestures and
consists of 1400 image sequences that contain gestures of 20 different
persons recorded under non-uniform daylight lighting conditions.
(Formats: mpg,jpg)<br />
</li>
<li><a href="http://www-i6.informatik.rwth-aachen.de/">Language
Processing and Pattern Recognition</a></li>
<li><a href="http://hlab.phys.rug.nl/archive.html">Groningen Natural
Image Database</a> - 4000+ 1536x1024 (16 bit) calibrated outdoor images
(Formats: homebrew)</li>
<li><a href="http://www.icg.tu-graz.ac.at/~schindler/Data">ICG Testhouse
sequence</a> - 2 turntable sequences from different viewing heights, 36
images each, resolution 1000x750, color (Formats: PPM)</li>
<li><a href="http://www.icg.tu-graz.ac.at">Institute of Computer
Graphics and Vision</a></li>
<li><a href="http://www.ien.it/is/vislib/">IEN Image Library</a> - 1000+
images, mostly outdoor sequences (Formats: raw, ppm)<br />
</li>
<li><a href="http://www-rocq.inria.fr/~tarel/syntim/images.html">INRIA's
Syntim images database</a> - 15 color images of simple objects (Formats:
gif)</li>
<li><a href="http://www.inria.fr/">INRIA</a></li>
<li><a href="http://www-rocq.inria.fr/~tarel/syntim/paires.html">INRIA's
Syntim stereo databases</a> - 34 calibrated color stereo pairs (Formats:
gif)</li>
<li><a
href="http://www.ece.ncsu.edu/imaging/Archives/ImageDataBase/index.html">Image
Analysis Laboratory</a> - Images obtained from a variety of imaging
modalities: raw CFA images, range images, and a host of “medical
images”. (Formats: homebrew)</li>
<li><a href="http://www.ece.ncsu.edu/imaging">Image Analysis
Laboratory</a></li>
<li><a href="http://www.prip.tuwien.ac.at/prip/image.html">Image
Database</a> - An image database including some textures<br />
</li>
<li><a href="http://www.mis.atr.co.jp/~mlyons/jaffe.html">JAFFE Facial
Expression Image Database</a> - The JAFFE database consists of 213
images of Japanese female subjects posing 6 basic facial expressions as
well as a neutral pose. Ratings on emotion adjectives are also
available, free of charge, for research purposes. (Formats: TIFF
Grayscale images.)</li>
<li><a href="http://www.mic.atr.co.jp/">ATR Research, Kyoto,
Japan</a></li>
<li><a href="ftp://ftp.vislist.com/IMAGERY/JISCT/">JISCT Stereo
Evaluation</a> - 44 image pairs. These data have been used in an
evaluation of stereo analysis, as described in the April 1993 ARPA Image
Understanding Workshop paper “The JISCT Stereo Evaluation” by
R.C. Bolles, H.H. Baker, and M.J. Hannah, pp. 263-274. (Formats: SSI)</li>
<li><a
href="https://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html">MIT
Vision Texture</a> - Image archive (100+ images) (Formats: ppm)</li>
<li><a href="ftp://whitechapel.media.mit.edu/pub/images">MIT face images
and more</a> - hundreds of images (Formats: homebrew)</li>
<li><a href="http://vision.cse.psu.edu/book/testbed/images/">Machine
Vision</a> - Images from the textbook by Jain, Kasturi, Schunck (20+
images) (Formats: GIF TIFF)</li>
<li><a
href="http://marathon.csee.usf.edu/Mammography/Database.html">Mammography
Image Databases</a> - 100 or more images of mammograms with ground
truth. Additional images available by request, and links to several
other mammography databases are provided. (Formats: homebrew)</li>
<li><a
href="ftp://ftp.cps.msu.edu/pub/prip">ftp://ftp.cps.msu.edu/pub/prip</a>
- many images (Formats: unknown)</li>
<li><a href="http://www.middlebury.edu/stereo/data.html">Middlebury
Stereo Data Sets with Ground Truth</a> - Six multi-frame stereo data
sets of scenes containing planar regions. Each data set contains 9 color
images and subpixel-accuracy ground-truth data. (Formats: ppm)</li>
<li><a href="http://www.middlebury.edu/stereo">Middlebury Stereo Vision
Research Page</a> - Middlebury College</li>
<li><a href="http://ltpwww.gsfc.nasa.gov/MODIS/MAS/">Modis Airborne
simulator, Gallery and data set</a> - High Altitude Imagery from around
the world for environmental modeling in support of NASA EOS program
(Formats: JPG and HDF)</li>
<li><a href="ftp://sequoyah.ncsl.nist.gov/pub/databases/data">NIST
Fingerprint and handwriting</a> - datasets - thousands of images
(Formats: unknown)</li>
<li><a href="ftp://ftp.cs.columbia.edu/jpeg/other/uuencoded">NIST
Fingerprint data</a> - compressed multipart uuencoded tar file</li>
<li><a
href="http://www.nlm.nih.gov/research/visible/visible_human.html">NLM
HyperDoc Visible Human Project</a> - Color, CAT and MRI image samples -
over 30 images (Formats: jpeg)</li>
<li><a href="http://www.designrepository.org">National Design
Repository</a> - Over 55,000 3D CAD and solid models of (mostly)
mechanical/machined engineering designs. (Formats:
gif,vrml,wrl,stp,sat)</li>
<li><a href="http://gicl.mcs.drexel.edu">Geometric &amp; Intelligent
Computing Laboratory</a></li>
<li><a href="http://eewww.eng.ohio-state.edu/~flynn/3DDB/Models/">OSU
(MSU) 3D Object Model Database</a> - several sets of 3D object models
collected over several years to use in object recognition research
(Formats: homebrew, vrml)</li>
<li><a href="http://eewww.eng.ohio-state.edu/~flynn/3DDB/RID/">OSU
(MSU/WSU) Range Image Database</a> - Hundreds of real and synthetic
images (Formats: gif, homebrew)</li>
<li><a
href="http://sampl.eng.ohio-state.edu/~sampl/database.htm">OSU/SAMPL
Database: Range Images, 3D Models, Stills, Motion Sequences</a> - Over
1000 range images, 3D object models, still images and motion sequences
(Formats: gif, ppm, vrml, homebrew)</li>
<li><a href="http://sampl.eng.ohio-state.edu">Signal Analysis and
Machine Perception Laboratory</a></li>
<li><a
href="http://www.cs.otago.ac.nz/research/vision/Research/OpticalFlow/opticalflow.html">Otago
Optical Flow Evaluation Sequences</a> - Synthetic and real sequences
with machine-readable ground truth optical flow fields, plus tools to
generate ground truth for new sequences. (Formats:
ppm,tif,homebrew)</li>
<li><a
href="http://www.cs.otago.ac.nz/research/vision/index.html">Vision
Research Group</a></li>
<li><a
href="ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/">ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/</a>
- Real and synthetic image sequences used for testing a Particle Image
Velocimetry application. These images may be used for the test of
optical flow and image matching algorithms. (Formats: pgm (raw))</li>
<li><a
href="http://www.limsi.fr/Recherche/IMM/PageIMM.html">LIMSI-CNRS/CHM/IMM/vision</a></li>
<li><a href="http://www.limsi.fr/">LIMSI-CNRS</a></li>
<li><a
href="http://www.taurusstudio.net/research/pmtexdb/index.htm">Photometric
3D Surface Texture Database</a> - This is the first 3D texture database
which provides both full real surface rotations and registered
photometric stereo data (30 textures, 1680 images). (Formats: TIFF)</li>
<li><a href="http://www.cee.hw.ac.uk/~mtc/sofa">SEQUENCES FOR OPTICAL
FLOW ANALYSIS (SOFA)</a> - 9 synthetic sequences designed for testing
motion analysis applications, including full ground truth of motion and
camera parameters. (Formats: gif)</li>
<li><a href="http://www.cee.hw.ac.uk/~mtc/research.html">Computer Vision
Group</a></li>
<li><a
href="http://www.nada.kth.se/~zucch/CAMERA/PUB/seq.html">Sequences for
Flow Based Reconstruction</a> - synthetic sequence for testing structure
from motion algorithms (Formats: pgm)</li>
<li><a href="http://www-dbv.cs.uni-bonn.de/stereo_data/">Stereo Images
with Ground Truth Disparity and Occlusion</a> - a small set of synthetic
images of a hallway with varying amounts of noise added. Use these
images to benchmark your stereo algorithm. (Formats: raw, viff (khoros),
or tiff)</li>
<li><a href="http://range.informatik.uni-stuttgart.de">Stuttgart Range
Image Database</a> - A collection of synthetic range images taken from
high-resolution polygonal models available on the web (Formats:
homebrew)</li>
<li><a
href="http://www.informatik.uni-stuttgart.de/ipvr/bv/bv_home_engl.html">Department
Image Understanding</a></li>
<li><a href="http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html">The
AR Face Database</a> - Contains over 4,000 color images corresponding to
126 people's faces (70 men and 56 women). Frontal views with variations
in facial expressions, illumination, and occlusions. (Formats: RAW (RGB
24-bit))</li>
<li><a href="http://rvl.www.ecn.purdue.edu/RVL/">Purdue Robot Vision
Lab</a></li>
<li><a href="http://web.mit.edu/torralba/www/database.html">The
MIT-CSAIL Database of Objects and Scenes</a> - Database for testing
multiclass object detection and scene recognition algorithms. Over
72,000 images with 2873 annotated frames. More than 50 annotated object
classes. (Formats: jpg)</li>
<li><a href="http://rvl1.ecn.purdue.edu/RVL/specularity_database/">The
RVL SPEC-DB (SPECularity DataBase)</a> - A collection of over 300 real
images of 100 objects taken under three different illumination
conditions (Diffuse/Ambient/Directed). Use these images to test
algorithms for detecting and compensating specular highlights in color
images. (Formats: TIFF)</li>
<li><a href="http://rvl1.ecn.purdue.edu/RVL/">Robot Vision
Laboratory</a></li>
<li><a href="http://xm2vtsdb.ee.surrey.ac.uk">The Xm2vts database</a> -
The XM2VTSDB contains four digital recordings of 295 people taken over a
period of four months. This database contains both image and video data
of faces.</li>
<li><a href="http://www.ee.surrey.ac.uk/Research/CVSSP">Centre for
Vision, Speech and Signal Processing</a></li>
<li><a href="http://i21www.ira.uka.de/image_sequences">Traffic Image
Sequences and Marbled Block Sequence</a> - thousands of frames of
digitized traffic image sequences as well as the “Marbled Block”
sequence (grayscale images) (Formats: GIF)</li>
<li><a href="http://i21www.ira.uka.de">IAKS/KOGS</a></li>
<li><a href="ftp://ftp.iam.unibe.ch/pub/Images/FaceImages">U Bern Face
images</a> - hundreds of images (Formats: Sun rasterfile)</li>
<li><a href="ftp://freebie.engin.umich.edu/pub/misc/textures">U Michigan
textures</a> (Formats: compressed raw)</li>
<li><a href="http://www.ee.oulu.fi/~olli/Projects/Lumber.Grading.html">U
Oulu wood and knots database</a> - Includes classifications - 1000+
color images (Formats: ppm)</li>
<li><a href="http://vision.doc.ntu.ac.uk/datasets/UCID/ucid.html">UCID -
an Uncompressed Colour Image Database</a> - a benchmark database for
image retrieval with predefined ground truth. (Formats: tiff)</li>
<li><a href="http://vis-www.cs.umass.edu/~vislib/">UMass Vision Image
Archive</a> - Large image database with aerial, space, stereo, medical
images and more. (Formats: homebrew)</li>
<li><a
href="ftp://sunsite.unc.edu/pub/academic/computer-science/virtual-reality/3d">UNCs
3D image database</a> - many images (Formats: GIF)</li>
<li><a
href="http://marathon.csee.usf.edu/range/seg-comp/SegComp.html">USF
Range Image Data with Segmentation Ground Truth</a> - 80 image sets
(Formats: Sun rasterimage)</li>
<li><a
href="http://www.ee.oulu.fi/research/imag/color/pbfd.html">University of
Oulu Physics-based Face Database</a> - contains color images of faces
under different illuminants and camera calibration conditions as well as
skin spectral reflectance measurements of each person.</li>
<li><a href="http://www.ee.oulu.fi/mvmp/">Machine Vision and Media
Processing Unit</a></li>
<li><a href="http://www.outex.oulu.fi">University of Oulu Texture
Database</a> - Database of 320 surface textures, each captured under
three illuminants, six spatial resolutions and nine rotation angles. A
set of test suites is also provided so that texture segmentation,
classification, and retrieval algorithms can be tested in a standard
manner. (Formats: bmp, ras, xv)</li>
<li><a href="http://www.ee.oulu.fi/mvg">Machine Vision Group</a></li>
<li><a href="ftp://ftp.uu.net/published/usenix/faces">Usenix face
database</a> - Thousands of face images from many different sites (circa
1994)</li>
<li><a
href="http://www-prima.inrialpes.fr/Prima/hall/view_sphere.html">View
Sphere Database</a> - Images of 8 objects seen from many different view
points. The view sphere is sampled using a geodesic with 172
images/sphere. Two sets for training and testing are available.
(Formats: ppm)</li>
<li><a href="http://www-prima.inrialpes.fr/Prima/">PRIMA,
GRAVIR</a></li>
<li><a href="ftp://ftp.vislist.com/IMAGERY/">Vision-list Imagery
Archive</a> - Many images, many formats</li>
<li><a href="http://www.cs.cmu.edu/~owenc/word.htm">Wiry Object
Recognition Database</a> - Thousands of images of a cart, ladder, stool,
bicycle, chairs, and cluttered scenes with ground truth labelings of
edges and regions. (Formats: jpg)</li>
<li><a href="http://www.cs.cmu.edu/~3dvision/">3D Vision
Group</a></li>
<li><a href="http://cvc.yale.edu/projects/yalefaces/yalefaces.html">Yale
Face Database</a> - 165 images (15 individuals) with different lighting,
expression, and occlusion configurations.</li>
<li><a
href="http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html">Yale Face
Database B</a> - 5760 single light source images of 10 subjects each
seen under 576 viewing conditions (9 poses x 64 illumination
conditions). (Formats: PGM)</li>
<li><a href="http://cvc.yale.edu/">Center for Computational Vision and
Control</a></li>
<li><a href="https://github.com/deepmind/rc-data">DeepMind QA Corpus</a>
- Textual QA corpus from CNN and DailyMail. More than 300K documents in
total. <a href="http://arxiv.org/abs/1506.03340">Paper</a> for
reference.</li>
<li><a href="https://research.google.com/youtube8m/">YouTube-8M
Dataset</a> - YouTube-8M is a large-scale labeled video dataset that
consists of 8 million YouTube video IDs and associated labels from a
diverse vocabulary of 4800 visual entities.</li>
<li><a href="https://github.com/openimages/dataset">Open Images
dataset</a> - Open Images is a dataset of ~9 million URLs to images that
have been annotated with labels spanning over 6000 categories.</li>
<li><a
href="http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html#devkit">Visual
Object Classes Challenge 2012 (VOC2012)</a> - VOC2012 dataset containing
12k images with 20 annotated classes for object detection and
segmentation.</li>
<li><a
href="https://github.com/zalandoresearch/fashion-mnist">Fashion-MNIST</a>
- MNIST like fashion product dataset consisting of a training set of
60,000 examples and a test set of 10,000 examples. Each example is a
28x28 grayscale image, associated with a label from 10 classes.</li>
<li><a
href="http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html">Large-scale
Fashion (DeepFashion) Database</a> - Contains over 800,000 diverse
fashion images. Each image in this dataset is labeled with 50
categories, 1,000 descriptive attributes, bounding box and clothing
landmarks</li>
<li><a
href="https://github.com/several27/FakeNewsCorpus">FakeNewsCorpus</a> -
Contains about 10 million news articles classified using <a
href="http://opensources.co">opensources.co</a> types</li>
<li><a href="https://github.com/bupt-ai-cz/LLVIP">LLVIP</a> - 15,488
visible-infrared paired images (30,976 images in total) for low-light
vision research, <a
href="https://bupt-ai-cz.github.io/LLVIP/">Project Page</a></li>
<li><a href="https://github.com/bupt-ai-cz/Meta-SelfLearning">MSDA</a> -
Over 5 million images from 5 different domains for multi-source
OCR/text recognition domain adaptation research, <a
href="https://bupt-ai-cz.github.io/Meta-SelfLearning/">Project Page</a></li>
<li><a href="https://data.mendeley.com/datasets/57zpx667y9/2">SANAD:
Single-Label Arabic News Articles Dataset for Automatic Text
Categorization</a> - SANAD Dataset is a large collection of Arabic news
articles that can be used in different Arabic NLP tasks such as Text
Classification and Word Embedding. The articles were collected using
Python scripts written specifically for three popular news websites:
AlKhaleej, AlArabiya and Akhbarona.</li>
<li><a href="https://referit3d.github.io">Referit3D</a> - Two
large-scale and complementary visio-linguistic datasets (aka Nr3D and
Sr3D) for identifying fine-grained 3D objects in ScanNet scenes. Nr3D
contains 41.5K natural, free-form utterances, and Sr3d contains 83.5K
template-based utterances.</li>
<li><a href="https://rajpurkar.github.io/SQuAD-explorer/">SQuAD</a> -
Stanford released ~100,000 English QA pairs and ~50,000 unanswerable
questions</li>
<li><a href="https://fquad.illuin.tech/">FQuAD</a> - ~25,000 French QA
pairs released by Illuin Technology</li>
<li><a href="https://www.deepset.ai/germanquad">GermanQuAD and
GermanDPR</a> - deepset released ~14,000 German QA pairs</li>
<li><a href="https://github.com/annnyway/QA-for-Russian">SberQuAD</a> -
Sberbank released ~90,000 Russian QA pairs</li>
<li><a href="http://artemisdataset.org/">ArtEmis</a> - Contains 450K
affective annotations of emotional responses and linguistic explanations
for 80,000 WikiArt artworks.</li>
</ol>
<h3 id="conferences">Conferences</h3>
<ol type="1">
<li><a href="http://cvpr2018.thecvf.com">CVPR - IEEE Conference on
Computer Vision and Pattern Recognition</a></li>
<li><a href="http://celweb.vuse.vanderbilt.edu/aamas18/">AAMAS -
International Joint Conference on Autonomous Agents and Multiagent
Systems</a></li>
<li><a href="https://www.ijcai-18.org/">IJCAI - International Joint
Conference on Artificial Intelligence</a></li>
<li><a href="https://icml.cc">ICML - International Conference on Machine
Learning</a></li>
<li><a href="http://www.ecmlpkdd2018.org">ECML - European Conference on
Machine Learning</a></li>
<li><a href="http://www.kdd.org/kdd2018/">KDD - Knowledge Discovery and
Data Mining</a></li>
<li><a href="https://nips.cc/Conferences/2018">NIPS - Neural Information
Processing Systems</a></li>
<li><a
href="https://conferences.oreilly.com/artificial-intelligence/ai-ny">O'Reilly
AI Conference - O'Reilly Artificial Intelligence Conference</a></li>
<li><a
href="https://www.waset.org/conference/2018/07/istanbul/ICDM">ICDM -
International Conference on Data Mining</a></li>
<li><a href="http://iccv2017.thecvf.com">ICCV - International Conference
on Computer Vision</a></li>
<li><a href="https://www.aaai.org">AAAI - Association for the
Advancement of Artificial Intelligence</a></li>
<li><a href="https://montrealaisymposium.wordpress.com/">MAIS - Montreal
AI Symposium</a></li>
</ol>
<h3 id="frameworks">Frameworks</h3>
<ol type="1">
<li><a href="http://caffe.berkeleyvision.org/">Caffe</a></li>
<li><a href="http://torch.ch/">Torch7</a></li>
<li><a href="http://deeplearning.net/software/theano/">Theano</a></li>
<li><a
href="https://code.google.com/p/cuda-convnet2/">cuda-convnet</a></li>
<li><a href="https://github.com/karpathy/convnetjs">ConvNetJS</a></li>
<li><a href="http://libccv.org/doc/doc-convnet/">Ccv</a></li>
<li><a href="http://numenta.org/nupic.html">NuPIC</a></li>
<li><a href="http://deeplearning4j.org/">DeepLearning4J</a></li>
<li><a href="https://github.com/harthur/brain">Brain</a></li>
<li><a
href="https://github.com/rasmusbergpalm/DeepLearnToolbox">DeepLearnToolbox</a></li>
<li><a
href="https://github.com/nitishsrivastava/deepnet">Deepnet</a></li>
<li><a href="https://github.com/andersbll/deeppy">Deeppy</a></li>
<li><a
href="https://github.com/ivan-vasilev/neuralnetworks">JavaNN</a></li>
<li><a href="https://github.com/hannes-brt/hebel">hebel</a></li>
<li><a href="https://github.com/pluskid/Mocha.jl">Mocha.jl</a></li>
<li><a href="https://github.com/guoding83128/OpenDL">OpenDL</a></li>
<li><a href="https://developer.nvidia.com/cuDNN">cuDNN</a></li>
<li><a
href="http://melisgl.github.io/mgl-pax-world/mgl-manual.html">MGL</a></li>
<li><a href="https://github.com/denizyuret/Knet.jl">Knet.jl</a></li>
<li><a href="https://github.com/NVIDIA/DIGITS">Nvidia DIGITS - a web app
based on Caffe</a></li>
<li><a href="https://github.com/NervanaSystems/neon">Neon - Python based
Deep Learning Framework</a></li>
<li><a href="http://keras.io">Keras - Theano based Deep Learning
Library</a></li>
<li><a href="http://chainer.org/">Chainer - A flexible framework of
neural networks for deep learning</a></li>
<li><a href="http://rnnlm.org/">RNNLM Toolkit</a></li>
<li><a href="http://sourceforge.net/p/rnnl/wiki/Home/">RNNLIB - A
recurrent neural network library</a></li>
<li><a href="https://github.com/karpathy/char-rnn">char-rnn</a></li>
<li><a href="https://github.com/vlfeat/matconvnet">MatConvNet: CNNs for
MATLAB</a></li>
<li><a href="https://github.com/dmlc/minerva">Minerva - a fast and
flexible tool for deep learning on multi-GPU</a></li>
<li><a href="https://github.com/IDSIA/brainstorm">Brainstorm - Fast,
flexible and fun neural networks.</a></li>
<li><a href="https://github.com/tensorflow/tensorflow">Tensorflow - Open
source software library for numerical computation using data flow
graphs</a></li>
<li><a href="https://github.com/Microsoft/DMTK">DMTK - Microsoft
Distributed Machine Learning Toolkit</a></li>
<li><a href="https://github.com/google/skflow">Scikit Flow - Simplified
interface for TensorFlow (mimicking Scikit Learn)</a></li>
<li><a href="https://github.com/apache/incubator-mxnet">MXnet -
Lightweight, Portable, Flexible Distributed/Mobile Deep Learning
framework</a></li>
<li><a href="https://github.com/Samsung/veles">Veles - Samsung
Distributed machine learning platform</a></li>
<li><a href="https://github.com/PrincetonVision/marvin">Marvin - A
Minimalist GPU-only N-Dimensional ConvNets Framework</a></li>
<li><a href="http://singa.incubator.apache.org/">Apache SINGA - A
General Distributed Deep Learning Platform</a></li>
<li><a href="https://github.com/amznlabs/amazon-dsstne">DSSTNE -
Amazon's library for building Deep Learning models</a></li>
<li><a
href="https://github.com/tensorflow/models/tree/master/syntaxnet">SyntaxNet
- Google's syntactic parser - A TensorFlow dependency library</a></li>
<li><a href="http://mlpack.org/">mlpack - A scalable Machine Learning
library</a></li>
<li><a href="https://github.com/torchnet/torchnet">Torchnet - Torch
based Deep Learning Library</a></li>
<li><a href="https://github.com/baidu/paddle">Paddle - PArallel
Distributed Deep LEarning by Baidu</a></li>
<li><a href="http://neupy.com">NeuPy - Theano based Python library for
ANN and Deep Learning</a></li>
<li><a href="https://github.com/Lasagne/Lasagne">Lasagne - a lightweight
library to build and train neural networks in Theano</a></li>
<li><a href="https://github.com/dnouri/nolearn">nolearn - wrappers and
abstractions around existing neural network libraries, most notably
Lasagne</a></li>
<li><a href="https://github.com/deepmind/sonnet">Sonnet - a library for
constructing neural networks by Google's DeepMind</a></li>
<li><a href="https://github.com/pytorch/pytorch">PyTorch - Tensors and
Dynamic neural networks in Python with strong GPU acceleration</a></li>
<li><a href="https://github.com/Microsoft/CNTK">CNTK - Microsoft
Cognitive Toolkit</a></li>
<li><a href="https://github.com/SerpentAI/SerpentAI">Serpent.AI - Game
agent framework: Use any video game as a deep learning sandbox</a></li>
<li><a href="https://github.com/caffe2/caffe2">Caffe2 - A New
Lightweight, Modular, and Scalable Deep Learning Framework</a></li>
<li><a href="https://github.com/PAIR-code/deeplearnjs">deeplearn.js -
Hardware-accelerated deep learning and linear algebra (NumPy) library
for the web</a></li>
<li><a href="https://tvm.ai/">TVM - End to End Deep Learning Compiler
Stack for CPUs, GPUs and specialized accelerators</a></li>
<li><a href="https://github.com/NervanaSystems/coach">Coach -
Reinforcement Learning Coach by Intel® AI Lab</a></li>
<li><a href="https://github.com/albu/albumentations">albumentations - A
fast and framework-agnostic image augmentation library</a></li>
<li><a href="https://github.com/Neuraxio/Neuraxle">Neuraxle - A
general-purpose ML pipelining framework</a></li>
<li><a href="https://github.com/catalyst-team/catalyst">Catalyst:
High-level utils for PyTorch DL &amp; RL research. It was developed with
a focus on reproducibility, fast experimentation and code/idea
reuse</a></li>
<li><a href="https://github.com/rlworkgroup/garage">garage - A toolkit
for reproducible reinforcement learning research</a></li>
<li><a href="https://github.com/alankbi/detecto">Detecto - Train and run
object detection models with 5-10 lines of code</a></li>
<li><a href="https://github.com/benedekrozemberczki/karateclub">Karate
Club - An unsupervised machine learning library for graph structured
data</a></li>
<li><a href="https://github.com/mrdimosthenis/Synapses">Synapses - A
lightweight library for neural networks that runs anywhere</a></li>
<li><a href="https://github.com/reinforceio/tensorforce">TensorForce - A
TensorFlow library for applied reinforcement learning</a></li>
<li><a href="https://github.com/logicalclocks/hopsworks">Hopsworks - A
Feature Store for ML and Data-Intensive AI</a></li>
<li><a href="https://github.com/gojek/feast">Feast - A Feature Store for
ML for GCP by Gojek/Google</a></li>
<li><a href="https://github.com/benedekrozemberczki/pytorch_geometric_temporal">PyTorch Geometric Temporal
- Representation learning on dynamic graphs</a></li>
<li><a href="https://github.com/lightly-ai/lightly">lightly - A computer
vision framework for self-supervised learning</a></li>
<li><a href="https://github.com/google/trax">Trax - Deep Learning with
Clear Code and Speed</a></li>
<li><a href="https://github.com/google/flax">Flax - a neural network
ecosystem for JAX that is designed for flexibility</a></li>
<li><a
href="https://github.com/Quick-AI/quickvision">QuickVision</a></li>
<li><a href="https://github.com/hpcaitech/ColossalAI">Colossal-AI - An
Integrated Large-scale Model Training System with Efficient
Parallelization Techniques</a></li>
<li><a href="https://haystack.deepset.ai/docs/intromd">Haystack - An
open-source neural search framework</a></li>
<li><a href="https://github.com/enlite-ai/maze">Maze</a> -
Application-oriented deep reinforcement learning framework addressing
real-world decision problems.</li>
<li><a href="https://github.com/chncwang/InsNet">InsNet - A neural
network library for building instance-dependent NLP models with
padding-free dynamic batching</a></li>
</ol>
<h3 id="tools">Tools</h3>
<ol type="1">
<li><a href="https://github.com/nebuly-ai/nebullvm">Nebullvm</a> -
Easy-to-use library to boost deep learning inference leveraging multiple
deep learning compilers.</li>
<li><a href="https://github.com/lutzroeder/netron">Netron</a> -
Visualizer for deep learning and machine learning models</li>
<li><a href="http://jupyter.org">Jupyter Notebook</a> - Web-based
notebook environment for interactive computing</li>
<li><a href="https://github.com/tensorflow/tensorboard">TensorBoard</a>
- TensorFlow's Visualization Toolkit</li>
<li><a
href="https://www.microsoft.com/en-us/research/project/visual-studio-code-tools-ai/">Visual
Studio Tools for AI</a> - Develop, debug and deploy deep learning and AI
solutions</li>
<li><a href="https://github.com/microsoft/tensorwatch">TensorWatch</a> -
Debugging and visualization for deep learning</li>
<li><a href="https://github.com/ml-tooling/ml-workspace">ML
Workspace</a> - All-in-one web-based IDE for machine learning and data
science.</li>
<li><a href="https://github.com/rlworkgroup/dowel">dowel</a> - A little
logger for machine learning research. Log any object to the console,
CSVs, TensorBoard, text log files, and more with just one call to
<code>logger.log()</code></li>
<li><a href="https://neptune.ai/">Neptune</a> - Lightweight tool for
experiment tracking and results visualization.</li>
<li><a
href="https://chrome.google.com/webstore/detail/code-finder-for-research/aikkeehnlfpamidigaffhfmgbkdeheil">CatalyzeX</a>
- Browser extension (<a
href="https://chrome.google.com/webstore/detail/code-finder-for-research/aikkeehnlfpamidigaffhfmgbkdeheil">Chrome</a>
and <a
href="https://addons.mozilla.org/en-US/firefox/addon/code-finder-catalyzex/">Firefox</a>)
that automatically finds and links to code implementations for ML papers
anywhere online: Google, Twitter, Arxiv, Scholar, etc.</li>
<li><a href="https://github.com/determined-ai/determined">Determined</a>
- Deep learning training platform with integrated support for
distributed training, hyperparameter tuning, smart GPU scheduling,
experiment tracking, and a model registry.</li>
<li><a href="https://dagshub.com/">DAGsHub</a> - Community platform for
Open Source ML. Manage experiments, data &amp; models and create
collaborative ML projects easily.</li>
<li><a href="https://github.com/activeloopai/Hub">hub</a> - Fastest
unstructured dataset management for TensorFlow/PyTorch by activeloop.ai.
Stream &amp; version-control data. Converts large data into a single
numpy-like array on the cloud, accessible on any machine.</li>
<li><a href="https://dvc.org/">DVC</a> - DVC is built to make ML models
shareable and reproducible. It is designed to handle large files, data
sets, machine learning models, and metrics as well as code.</li>
<li><a href="https://cml.dev/">CML</a> - CML helps you bring your
favorite DevOps tools to machine learning.</li>
<li><a href="https://mlem.ai/">MLEM</a> - MLEM is a tool to easily
package, deploy and serve Machine Learning models. It seamlessly
supports a variety of scenarios like real-time serving and batch
processing.</li>
<li><a href="https://getmaxim.ai">Maxim AI</a> - Tool for AI Agent
Simulation, Evaluation &amp; Observability.</li>
</ol>
<h3 id="miscellaneous">Miscellaneous</h3>
<ol type="1">
<li><a
href="http://on-demand-gtc.gputechconf.com/gtcnew/on-demand-gtc.php?searchByKeyword=shelhamer&amp;searchItems=&amp;sessionTopic=&amp;sessionEvent=4&amp;sessionYear=2014&amp;sessionFormat=&amp;submit=&amp;select=+">Caffe
Webinar</a></li>
<li><a
href="http://meta-guide.com/software-meta-guide/100-best-github-deep-learning/">100
Best Github Resources in Github for DL</a></li>
<li><a href="https://code.google.com/p/word2vec/">Word2Vec</a></li>
<li><a href="https://github.com/tleyden/docker/tree/master/caffe">Caffe
DockerFile</a></li>
<li><a
href="https://github.com/TorontoDeepLearning/convnet">TorontoDeepLearning
convnet</a></li>
<li><a href="https://github.com/clementfarabet/gfx.js">gfx.js</a></li>
<li><a href="https://github.com/torch/torch7/wiki/Cheatsheet">Torch7
Cheat sheet</a></li>
<li><a
href="http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-864-advanced-natural-language-processing-fall-2005/">Misc
from MIT's Advanced Natural Language Processing course</a></li>
<li><a
href="http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-867-machine-learning-fall-2006/lecture-notes/">Misc
from MIT's Machine Learning course</a></li>
<li><a
href="http://ocw.mit.edu/courses/brain-and-cognitive-sciences/9-520-a-networks-for-learning-regression-and-classification-spring-2001/">Misc
from MIT's Networks for Learning: Regression and Classification
course</a></li>
<li><a
href="http://ocw.mit.edu/courses/health-sciences-and-technology/hst-723j-neural-coding-and-perception-of-sound-spring-2005/index.htm">Misc
from MIT's Neural Coding and Perception of Sound course</a></li>
<li><a
href="http://www.datasciencecentral.com/profiles/blogs/implementing-a-distributed-deep-learning-network-over-spark">Implementing
a Distributed Deep Learning Network over Spark</a></li>
<li><a href="https://github.com/erikbern/deep-pink">A chess AI that
learns to play using deep learning.</a></li>
<li><a
href="https://github.com/kristjankorjus/Replicating-DeepMind">Reproducing
the results of “Playing Atari with Deep Reinforcement Learning” by
DeepMind</a></li>
<li><a href="https://github.com/idio/wiki2vec">Wiki2Vec. Getting
Word2vec vectors for entities and words from Wikipedia dumps</a></li>
<li><a href="https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner">The
original code from the DeepMind article + tweaks</a></li>
<li><a href="https://github.com/google/deepdream">Google deepdream -
Neural Network art</a></li>
<li><a href="https://gist.github.com/karpathy/587454dc0146a6ae21fc">An
efficient, batched LSTM.</a></li>
<li><a
href="https://github.com/hexahedria/biaxial-rnn-music-composition">A
recurrent neural network designed to generate classical music.</a></li>
<li><a href="https://github.com/facebook/MemNN">Memory Networks
Implementations - Facebook</a></li>
<li><a href="https://github.com/cmusatyalab/openface">Face recognition
with Google's FaceNet deep neural network.</a></li>
<li><a href="https://github.com/joeledenberg/DigitRecognition">Basic
digit recognition neural network</a></li>
<li><a
href="https://www.projectoxford.ai/demo/emotion#detection">Emotion
Recognition API Demo - Microsoft</a></li>
<li><a href="https://github.com/ethereon/caffe-tensorflow">Proof of
concept for loading Caffe models in TensorFlow</a></li>
<li><a href="http://pjreddie.com/darknet/yolo/#webcam">YOLO: Real-Time
Object Detection</a></li>
<li><a
href="https://www.analyticsvidhya.com/blog/2018/12/practical-guide-object-detection-yolo-framewor-python/">YOLO:
Practical Implementation using Python</a></li>
<li><a href="https://github.com/Rochester-NRT/AlphaGo">AlphaGo - A
replication of DeepMind's 2016 Nature publication, “Mastering the game
of Go with deep neural networks and tree search”</a></li>
<li><a
href="https://github.com/ZuzooVn/machine-learning-for-software-engineers">Machine
Learning for Software Engineers</a></li>
<li><a
href="https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471#.oa4rzez3g">Machine
Learning is Fun!</a></li>
<li><a
href="https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A">Siraj
Raval's Deep Learning tutorials</a></li>
<li><a href="https://github.com/natanielruiz/dockerface">Dockerface</a>
- Easy to install and use deep learning Faster R-CNN face detection for
images and video in a docker container.</li>
<li><a
href="https://github.com/ybayle/awesome-deep-learning-music">Awesome
Deep Learning Music</a> - Curated list of articles related to deep
learning scientific research applied to music</li>
<li><a
href="https://github.com/benedekrozemberczki/awesome-graph-embedding">Awesome
Graph Embedding</a> - Curated list of articles related to deep learning
scientific research on graph structured data at the graph level.</li>
<li><a
href="https://github.com/chihming/awesome-network-embedding">Awesome
Network Embedding</a> - Curated list of articles related to deep
learning scientific research on graph structured data at the node
level.</li>
<li><a href="https://github.com/Microsoft/Recommenders">Microsoft
Recommenders</a> contains examples, utilities and best practices for
building recommendation systems. Implementations of several
state-of-the-art algorithms are provided for self-study and
customization in your own applications.</li>
<li><a
href="http://karpathy.github.io/2015/05/21/rnn-effectiveness/">The
Unreasonable Effectiveness of Recurrent Neural Networks</a> - Andrej
Karpathy blog post about using RNN for generating text.</li>
<li><a href="https://github.com/divamgupta/ladder_network_keras">Ladder
Network</a> - Keras Implementation of Ladder Network for Semi-Supervised
Learning</li>
<li><a href="https://github.com/amitness/toolbox">toolbox: Curated list
of ML libraries</a></li>
<li><a href="https://poloclub.github.io/cnn-explainer/">CNN
Explainer</a></li>
<li><a href="https://github.com/AMAI-GmbH/AI-Expert-Roadmap">AI Expert
Roadmap</a> - Roadmap to becoming an Artificial Intelligence Expert</li>
<li><a
href="https://github.com/AstraZeneca/awesome-polipharmacy-side-effect-prediction/">Awesome
Drug Interactions, Synergy, and Polypharmacy Prediction</a></li>
</ol>
<h3 id="contributing">Contributing</h3>
<p>Have anything in mind that you think is awesome and would fit in this
list? Feel free to send a <a
href="https://github.com/ashara12/awesome-deeplearning/pulls">pull
request</a>.</p>
<h2 id="license">License</h2>
<p><a href="http://creativecommons.org/publicdomain/zero/1.0/"><img
src="http://i.creativecommons.org/p/zero/1.0/88x31.png"
alt="CC0" /></a></p>
<p>To the extent possible under law, <a
href="https://linkedin.com/in/Christofidis">Christos Christofidis</a>
has waived all copyright and related or neighboring rights to this
work.</p>
<p><a
href="https://github.com/ChristosChristofidis/awesome-deep-learning">deeplearning.md
Github</a></p>