# Awesome Deep Learning [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)

## Table of Contents

* [Books](#books)
* [Courses](#courses)
* [Videos and Lectures](#videos-and-lectures)
* [Papers](#papers)
* [Tutorials](#tutorials)
* [Researchers](#researchers)
* [Websites](#websites)
* [Datasets](#datasets)
* [Conferences](#conferences)
* [Frameworks](#frameworks)
* [Tools](#tools)
* [Miscellaneous](#miscellaneous)
* [Contributing](#contributing)

## Books

1. [Deep Learning](http://www.deeplearningbook.org/) by Yoshua Bengio, Ian Goodfellow and Aaron Courville (05/07/2015)
2. [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/) by Michael Nielsen (Dec 2014)
3. [Deep Learning](http://research.microsoft.com/pubs/209355/DeepLearning-NowPublishing-Vol7-SIG-039.pdf) by Microsoft Research (2013)
4. [Deep Learning Tutorial](http://deeplearning.net/tutorial/deeplearning.pdf) by LISA lab, University of Montreal (Jan 6 2015)
5. [neuraltalk](https://github.com/karpathy/neuraltalk) by Andrej Karpathy: numpy-based RNN/LSTM implementation
6. [An introduction to genetic algorithms](http://www.boente.eti.br/fuzzy/ebook-fuzzy-mitchell.pdf)
7. [Artificial Intelligence: A Modern Approach](http://aima.cs.berkeley.edu/)
8. [Deep Learning in Neural Networks: An Overview](http://arxiv.org/pdf/1404.7828v4.pdf)
9. [Artificial intelligence and machine learning: Topic wise explanation](https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/)
10. [Grokking Deep Learning for Computer Vision](https://www.manning.com/books/grokking-deep-learning-for-computer-vision)
11. [Dive into Deep Learning](https://d2l.ai/) - numpy-based interactive Deep Learning book
12. [Practical Deep Learning for Cloud, Mobile, and Edge](https://www.oreilly.com/library/view/practical-deep-learning/9781492034858/) - A book on optimization techniques for production.
13. [Math and Architectures of Deep Learning](https://www.manning.com/books/math-and-architectures-of-deep-learning) - by Krishnendu Chaudhury
14. [TensorFlow 2.0 in Action](https://www.manning.com/books/tensorflow-in-action) - by Thushan Ganegedara
15. [Deep Learning for Natural Language Processing](https://www.manning.com/books/deep-learning-for-natural-language-processing) - by Stephan Raaijmakers
16. [Deep Learning Patterns and Practices](https://www.manning.com/books/deep-learning-patterns-and-practices) - by Andrew Ferlitsch
17. [Inside Deep Learning](https://www.manning.com/books/inside-deep-learning) - by Edward Raff
18. [Deep Learning with Python, Second Edition](https://www.manning.com/books/deep-learning-with-python-second-edition) - by François Chollet
19. [Evolutionary Deep Learning](https://www.manning.com/books/evolutionary-deep-learning) - by Micheal Lanham
20. [Engineering Deep Learning Platforms](https://www.manning.com/books/engineering-deep-learning-platforms) - by Chi Wang and Donald Szeto
21. [Deep Learning with R, Second Edition](https://www.manning.com/books/deep-learning-with-r-second-edition) - by François Chollet with Tomasz Kalinowski and J. J. Allaire
22. [Regularization in Deep Learning](https://www.manning.com/books/regularization-in-deep-learning) - by Liu Peng
23. [Jax in Action](https://www.manning.com/books/jax-in-action) - by Grigory Sapunov
24. [Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow](https://www.knowledgeisle.com/wp-content/uploads/2019/12/2-Aur%C3%A9lien-G%C3%A9ron-Hands-On-Machine-Learning-with-Scikit-Learn-Keras-and-Tensorflow_-Concepts-Tools-and-Techniques-to-Build-Intelligent-Systems-O%E2%80%99Reilly-Media-2019.pdf) by Aurélien Géron (Oct 15, 2019)

## Courses

1. [Machine Learning - Stanford](https://class.coursera.org/ml-005) by Andrew Ng on Coursera (2010-2014)
2. [Machine Learning - Caltech](http://work.caltech.edu/lectures.html) by Yaser Abu-Mostafa (2012-2014)
3. [Machine Learning - Carnegie Mellon](http://www.cs.cmu.edu/~tom/10701_sp11/lectures.shtml) by Tom Mitchell (Spring 2011)
4. [Neural Networks for Machine Learning](https://class.coursera.org/neuralnets-2012-001) by Geoffrey Hinton on Coursera (2012)
5. [Neural networks class](https://www.youtube.com/playlist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH) by Hugo Larochelle from Université de Sherbrooke (2013)
6. [Deep Learning Course](http://cilvr.cs.nyu.edu/doku.php?id=deeplearning:slides:start) by CILVR lab @ NYU (2014)
7. [A.I - Berkeley](https://courses.edx.org/courses/BerkeleyX/CS188x_1/1T2013/courseware/) by Dan Klein and Pieter Abbeel (2013)
8. [A.I - MIT](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/) by Patrick Henry Winston (2010)
9. [Vision and learning - computers and brains](http://web.mit.edu/course/other/i2course/www/vision_and_learning_fall_2013.html) by Shimon Ullman, Tomaso Poggio, Ethan Meyers @ MIT (2013)
10. [Convolutional Neural Networks for Visual Recognition - Stanford](http://vision.stanford.edu/teaching/cs231n/syllabus.html) by Fei-Fei Li, Andrej Karpathy (2017)
11. [Deep Learning for Natural Language Processing - Stanford](http://cs224d.stanford.edu/)
12. [Neural Networks - usherbrooke](http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html)
13. [Machine Learning - Oxford](https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/) (2014-2015)
14. [Deep Learning - Nvidia](https://developer.nvidia.com/deep-learning-courses) (2015)
15. [Graduate Summer School: Deep Learning, Feature Learning](https://www.youtube.com/playlist?list=PLHyI3Fbmv0SdzMHAy0aN59oYnLy5vyyTA) by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, Nando de Freitas and several others @ IPAM, UCLA (2012)
16. [Deep Learning - Udacity/Google](https://www.udacity.com/course/deep-learning--ud730) by Vincent Vanhoucke and Arpan Chakraborty (2016)
17. [Deep Learning - UWaterloo](https://www.youtube.com/playlist?list=PLehuLRPyt1Hyi78UOkMPWCGRxGcA9NVOE) by Prof. Ali Ghodsi at University of Waterloo (2015)
18. [Statistical Machine Learning - CMU](https://www.youtube.com/watch?v=azaLcvuql_g&list=PLjbUi5mgii6BWEUZf7He6nowWvGne_Y8r) by Prof. Larry Wasserman
19. [Deep Learning Course](https://www.college-de-france.fr/site/en-yann-lecun/course-2015-2016.htm) by Yann LeCun (2016)
20. [Designing, Visualizing and Understanding Deep Neural Networks - UC Berkeley](https://www.youtube.com/playlist?list=PLkFD6_40KJIxopmdJF_CLNqG3QuDFHQUm)
21. [UVA Deep Learning Course](http://uvadlc.github.io) - MSc in Artificial Intelligence at the University of Amsterdam
22. [MIT 6.S094: Deep Learning for Self-Driving Cars](http://selfdrivingcars.mit.edu/)
23. [MIT 6.S191: Introduction to Deep Learning](http://introtodeeplearning.com/)
24. [Berkeley CS 294: Deep Reinforcement Learning](http://rll.berkeley.edu/deeprlcourse/)
25. [Keras in Motion video course](https://www.manning.com/livevideo/keras-in-motion)
26. [Practical Deep Learning For Coders](http://course.fast.ai/) by Jeremy Howard - Fast.ai
27. [Introduction to Deep Learning](http://deeplearning.cs.cmu.edu/) by Prof. Bhiksha Raj (2017)
28. [AI for Everyone](https://www.deeplearning.ai/ai-for-everyone/) by Andrew Ng (2019)
29. [MIT Intro to Deep Learning 7 day bootcamp](https://introtodeeplearning.com) - A seven-day bootcamp designed at MIT to introduce deep learning methods and applications (2019)
30. [Deep Blueberry: Deep Learning](https://mithi.github.io/deep-blueberry) - A free five-weekend plan for self-learners covering the basics of deep learning architectures like CNNs, LSTMs, RNNs, VAEs, GANs, DQN, A3C and more (2019)
31. [Spinning Up in Deep Reinforcement Learning](https://spinningup.openai.com/) - A free deep reinforcement learning course by OpenAI (2019)
32. [Deep Learning Specialization - Coursera](https://www.coursera.org/specializations/deep-learning) by Andrew Ng
33. [Deep Learning - UC Berkeley | STAT-157](https://www.youtube.com/playlist?list=PLZSO_6-bSqHQHBCoGaObUljoXAyyqhpFW) by Alex Smola and Mu Li (2019)
34. [Machine Learning for Mere Mortals video course](https://www.manning.com/livevideo/machine-learning-for-mere-mortals) by Nick Chase
35. [Machine Learning Crash Course with TensorFlow APIs](https://developers.google.com/machine-learning/crash-course/) - Google AI
36. [Deep Learning from the Foundations](https://course.fast.ai/part2) by Jeremy Howard - Fast.ai
37. [Deep Reinforcement Learning (nanodegree) - Udacity](https://www.udacity.com/course/deep-reinforcement-learning-nanodegree--nd893) - a 3-6 month Udacity nanodegree spanning multiple courses (2018)
38. [Grokking Deep Learning in Motion](https://www.manning.com/livevideo/grokking-deep-learning-in-motion) by Beau Carnes (2018)
39. [Face Detection with Computer Vision and Deep Learning](https://www.udemy.com/share/1000gAA0QdcV9aQng=/) by Hakan Cebeci
40. [Deep Learning Online Course list at Classpert](https://classpert.com/deep-learning) - List of deep learning online courses (some are free) from Classpert Online Course Search
41. [AWS Machine Learning](https://aws.training/machinelearning) - Machine learning and deep learning courses from Amazon's Machine Learning University
42. [Intro to Deep Learning with PyTorch](https://www.udacity.com/course/deep-learning-pytorch--ud188) - An introductory course on deep learning by Udacity and Facebook AI
43. [Deep Learning by Kaggle](https://www.kaggle.com/learn/deep-learning) - Kaggle's free course on deep learning
44. [Yann LeCun's Deep Learning Course at CDS](https://cds.nyu.edu/deep-learning/) - DS-GA 1008, Spring 2021
45. [Neural Networks and Deep Learning](https://webcms3.cse.unsw.edu.au/COMP9444/19T3/) - COMP9444 19T3
46. [Deep Learning A.I.Shelf](http://aishelf.org/category/ia/deep-learning/)

## Videos and Lectures

1. [How To Create A Mind](https://www.youtube.com/watch?v=RIkxVci-R4k) by Ray Kurzweil
2. [Deep Learning, Self-Taught Learning and Unsupervised Feature Learning](https://www.youtube.com/watch?v=n1ViNeWhC24) by Andrew Ng
3. [Recent Developments in Deep Learning](https://www.youtube.com/watch?v=vShMxxqtDDs&index=3&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT) by Geoff Hinton
4. [The Unreasonable Effectiveness of Deep Learning](https://www.youtube.com/watch?v=sc-KbuZqGkI) by Yann LeCun
5. [Deep Learning of Representations](https://www.youtube.com/watch?v=4xsVFLnHC_0) by Yoshua Bengio
6. [Principles of Hierarchical Temporal Memory](https://www.youtube.com/watch?v=6ufPpZDmPKA) by Jeff Hawkins
7. [Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab](https://www.youtube.com/watch?v=2QJi0ArLq7s&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT) by Adam Coates
8. [Making Sense of the World with Deep Learning](http://vimeo.com/80821560) by Adam Coates
9. [Demystifying Unsupervised Feature Learning](https://www.youtube.com/watch?v=wZfVBwOO0-k) by Adam Coates
10. [Visual Perception with Deep Learning](https://www.youtube.com/watch?v=3boKlkPBckA) by Yann LeCun
11. [The Next Generation of Neural Networks](https://www.youtube.com/watch?v=AyzOUbkUf3M) by Geoffrey Hinton at GoogleTechTalks
12. [The wonderful and terrifying implications of computers that can learn](http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn) by Jeremy Howard at TEDxBrussels
13. [Unsupervised Deep Learning - Stanford](http://web.stanford.edu/class/cs294a/handouts.html) by Andrew Ng at Stanford (2011)
14. [Natural Language Processing](http://web.stanford.edu/class/cs224n/handouts/) by Chris Manning at Stanford
15. [A Beginner's Guide to Deep Neural Networks](http://googleresearch.blogspot.com/2015/09/a-beginners-guide-to-deep-neural.html) by Natalie Hammel and Lorraine Yurshansky
16. [Deep Learning: Intelligence from Big Data](https://www.youtube.com/watch?v=czLI3oLDe8M) by Steve Jurvetson (and panel) at VLAB, Stanford
17. [Introduction to Artificial Neural Networks and Deep Learning](https://www.youtube.com/watch?v=FoO8qDB8gUU) by Leo Isikdogan at Motorola Mobility HQ
18. [NIPS 2016 lecture and workshop videos](https://nips.cc/Conferences/2016/Schedule) - NIPS 2016
19. [Deep Learning Crash Course](https://www.youtube.com/watch?v=oS5fz_mHVz0&list=PLWKotBjTDoLj3rXBL-nEIPRN9V3a9Cx07): a series of mini-lectures by Leo Isikdogan on YouTube (2018)
20. [Deep Learning Crash Course](https://www.manning.com/livevideo/deep-learning-crash-course) by Oliver Zeigermann
21. [Deep Learning with R in Motion](https://www.manning.com/livevideo/deep-learning-with-r-in-motion): a live video course that teaches how to apply deep learning to text and images using the powerful Keras library and its R language interface.
22. [Medical Imaging with Deep Learning Tutorial](https://www.youtube.com/playlist?list=PLheiZMDg_8ufxEx9cNVcOYXsT3BppJP4b): styled as a graduate lecture, this tutorial covers the background of popular medical image domains (chest X-ray and histology) as well as methods to tackle multi-modality/view, segmentation, and counting tasks.
23. [DeepMind x UCL Deep Learning](https://www.youtube.com/playlist?list=PLqYmG7hTraZCDxZ44o4p3N5Anz3lLRVZF): 2020 version
24. [DeepMind x UCL Reinforcement Learning](https://www.youtube.com/playlist?list=PLqYmG7hTraZBKeNJ-JE_eyJHZ7XgBoAyb): Deep Reinforcement Learning
25. [CMU 11-785 Intro to Deep Learning Spring 2020](https://www.youtube.com/playlist?list=PLp-0K3kfddPzCnS4CqKphh-zT3aDwybDe) - Course 11-785, Intro to Deep Learning by Bhiksha Raj
26. [Machine Learning CS 229](https://www.youtube.com/playlist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU) by Andrew Ng: the final part focuses on deep learning
27. [What is Neural Structured Learning by Andrew Ferlitsch](https://youtu.be/LXWSE_9gHd0)
28. [Deep Learning Design Patterns by Andrew Ferlitsch](https://youtu.be/_DaviS6K0Vc)
29. [Architecture of a Modern CNN: the design pattern approach by Andrew Ferlitsch](https://youtu.be/QCGSS3kyGo0)
30. [Metaparameters in a CNN by Andrew Ferlitsch](https://youtu.be/K1PLeggQ33I)
31. [Multi-task CNN: a real-world example by Andrew Ferlitsch](https://youtu.be/dH2nuI-1-qM)
32. [A friendly introduction to deep reinforcement learning by Luis Serrano](https://youtu.be/1FyAh07jh0o)
33. [What are GANs and how do they work? by Edward Raff](https://youtu.be/f6ivp84qFUc)
34. [Coding a basic WGAN in PyTorch by Edward Raff](https://youtu.be/7VRdaqMDalQ)
35. [Training a Reinforcement Learning Agent by Miguel Morales](https://youtu.be/8TMT-gHlj_Q)
36. [Understand what is Deep Learning](https://www.scaler.com/topics/what-is-deep-learning/)

## Papers

*You can also find the most cited deep learning papers from [here](https://github.com/terryum/awesome-deep-learning-papers).*

1. [ImageNet Classification with Deep Convolutional Neural Networks](http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf)
2. [Using Very Deep Autoencoders for Content Based Image Retrieval](http://www.cs.toronto.edu/~hinton/absps/esann-deep-final.pdf)
3. [Learning Deep Architectures for AI](http://www.iro.umontreal.ca/~lisa/pointeurs/TR1312.pdf)
4. [CMU's list of papers](http://deeplearning.cs.cmu.edu/)
5. [Neural Networks for Named Entity Recognition](http://nlp.stanford.edu/~socherr/pa4_ner.pdf) [zip](http://nlp.stanford.edu/~socherr/pa4-ner.zip)
6. [Training tricks by YB](http://www.iro.umontreal.ca/~bengioy/papers/YB-tricks.pdf)
7. [Geoff Hinton's reading list (all papers)](http://www.cs.toronto.edu/~hinton/deeprefs.html)
8. [Supervised Sequence Labelling with Recurrent Neural Networks](http://www.cs.toronto.edu/~graves/preprint.pdf)
9. [Statistical Language Models based on Neural Networks](http://www.fit.vutbr.cz/~imikolov/rnnlm/thesis.pdf)
10. [Training Recurrent Neural Networks](http://www.cs.utoronto.ca/~ilya/pubs/ilya_sutskever_phd_thesis.pdf)
11. [Recursive Deep Learning for Natural Language Processing and Computer Vision](http://nlp.stanford.edu/~socherr/thesis.pdf)
12. [Bi-directional RNN](http://www.di.ufpe.br/~fnj/RNA/bibliografia/BRNN.pdf)
13. [LSTM](http://web.eecs.utk.edu/~itamar/courses/ECE-692/Bobby_paper1.pdf)
14. [GRU - Gated Recurrent Unit](http://arxiv.org/pdf/1406.1078v3.pdf)
15. [GFRNN](http://arxiv.org/pdf/1502.02367v3.pdf) ([conference version](http://jmlr.org/proceedings/papers/v37/chung15.pdf), [supplementary](http://jmlr.org/proceedings/papers/v37/chung15-supp.pdf))
16. [LSTM: A Search Space Odyssey](http://arxiv.org/pdf/1503.04069v1.pdf)
17. [A Critical Review of Recurrent Neural Networks for Sequence Learning](http://arxiv.org/pdf/1506.00019v1.pdf)
18. [Visualizing and Understanding Recurrent Networks](http://arxiv.org/pdf/1506.02078v1.pdf)
19. [Wojciech Zaremba, Ilya Sutskever, An Empirical Exploration of Recurrent Network Architectures](http://jmlr.org/proceedings/papers/v37/jozefowicz15.pdf)
20. [Recurrent Neural Network based Language Model](http://www.fit.vutbr.cz/research/groups/speech/publi/2010/mikolov_interspeech2010_IS100722.pdf)
21. [Extensions of Recurrent Neural Network Language Model](http://www.fit.vutbr.cz/research/groups/speech/publi/2011/mikolov_icassp2011_5528.pdf)
22. [Recurrent Neural Network based Language Modeling in Meeting Recognition](http://www.fit.vutbr.cz/~imikolov/rnnlm/ApplicationOfRNNinMeetingRecognition_IS2011.pdf)
23. [Deep Neural Networks for Acoustic Modeling in Speech Recognition](http://cs224d.stanford.edu/papers/maas_paper.pdf)
24. [Speech Recognition with Deep Recurrent Neural Networks](http://www.cs.toronto.edu/~fritz/absps/RNN13.pdf)
25. [Reinforcement Learning Neural Turing Machines](http://arxiv.org/pdf/1505.00521v1)
26. [Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation](http://arxiv.org/pdf/1406.1078v3.pdf)
27. [Google - Sequence to Sequence Learning with Neural Networks](http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf)
28. [Memory Networks](http://arxiv.org/pdf/1410.3916v10)
29. [Policy Learning with Continuous Memory States for Partially Observed Robotic Control](http://arxiv.org/pdf/1507.01273v1)
30. [Microsoft - Jointly Modeling Embedding and Translation to Bridge Video and Language](http://arxiv.org/pdf/1505.01861v1.pdf)
31. [Neural Turing Machines](http://arxiv.org/pdf/1410.5401v2.pdf)
32. [Ask Me Anything: Dynamic Memory Networks for Natural Language Processing](http://arxiv.org/pdf/1506.07285v1.pdf)
33. [Mastering the Game of Go with Deep Neural Networks and Tree Search](http://www.nature.com/nature/journal/v529/n7587/pdf/nature16961.pdf)
34. [Batch Normalization](https://arxiv.org/abs/1502.03167)
35. [Residual Learning](https://arxiv.org/pdf/1512.03385v1.pdf)
36. [Image-to-Image Translation with Conditional Adversarial Networks](https://arxiv.org/pdf/1611.07004v1.pdf)
37. [Berkeley AI Research (BAIR) Laboratory](https://arxiv.org/pdf/1611.07004v1.pdf)
38. [MobileNets by Google](https://arxiv.org/abs/1704.04861)
39. [Cross Audio-Visual Recognition in the Wild Using Deep Learning](https://arxiv.org/abs/1706.05739)
40. [Dynamic Routing Between Capsules](https://arxiv.org/abs/1710.09829)
41. [Matrix Capsules with EM Routing](https://openreview.net/pdf?id=HJWLfGWRb)
42. [Efficient BackProp](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)
43. [Generative Adversarial Nets](https://arxiv.org/pdf/1406.2661v1.pdf)
44. [Fast R-CNN](https://arxiv.org/pdf/1504.08083.pdf)
45. [FaceNet: A Unified Embedding for Face Recognition and Clustering](https://arxiv.org/pdf/1503.03832.pdf)
46. [Siamese Neural Networks for One-shot Image Recognition](https://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf)
47. [Unsupervised Translation of Programming Languages](https://arxiv.org/pdf/2006.03511.pdf)
48. [Matching Networks for One Shot Learning](http://papers.nips.cc/paper/6385-matching-networks-for-one-shot-learning.pdf)
49. [VOLO: Vision Outlooker for Visual Recognition](https://arxiv.org/pdf/2106.13112.pdf)
50. [ViT: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/pdf/2010.11929.pdf)
51. [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift](http://proceedings.mlr.press/v37/ioffe15.pdf)
52. [DeepFaceDrawing: Deep Generation of Face Images from Sketches](http://geometrylearning.com/paper/DeepFaceDrawing.pdf)

## Tutorials

1. [UFLDL Tutorial 1](http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial)
2. [UFLDL Tutorial 2](http://ufldl.stanford.edu/tutorial/supervised/LinearRegression/)
3. [Deep Learning for NLP (without Magic)](http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial)
4. [A Deep Learning Tutorial: From Perceptrons to Deep Networks](http://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks)
5. [Deep Learning from the Bottom up](http://www.metacademy.org/roadmaps/rgrosse/deep_learning)
6. [Theano Tutorial](http://deeplearning.net/tutorial/deeplearning.pdf)
7. [Neural Networks for Matlab](http://uk.mathworks.com/help/pdf_doc/nnet/nnet_ug.pdf)
8. [Using convolutional neural nets to detect facial keypoints tutorial](http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/)
9. [Torch7 Tutorials](https://github.com/clementfarabet/ipam-tutorials/tree/master/th_tutorials)
10. [The Best Machine Learning Tutorials On The Web](https://github.com/josephmisiti/machine-learning-module)
11. [VGG Convolutional Neural Networks Practical](http://www.robots.ox.ac.uk/~vgg/practicals/cnn/index.html)
12. [TensorFlow tutorials](https://github.com/nlintz/TensorFlow-Tutorials)
13. [More TensorFlow tutorials](https://github.com/pkmital/tensorflow_tutorials)
14. [TensorFlow Python Notebooks](https://github.com/aymericdamien/TensorFlow-Examples)
15. [Keras and Lasagne Deep Learning Tutorials](https://github.com/Vict0rSch/deep_learning)
16. [Classification on raw time series in TensorFlow with an LSTM RNN](https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition)
17. [TensorFlow-World](https://github.com/astorfi/TensorFlow-World)
18. [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python)
19. [Grokking Deep Learning](https://www.manning.com/books/grokking-deep-learning)
20. [Deep Learning for Search](https://www.manning.com/books/deep-learning-for-search)
21. [Keras Tutorial: Content Based Image Retrieval Using a Convolutional Denoising Autoencoder](https://medium.com/sicara/keras-tutorial-content-based-image-retrieval-convolutional-denoising-autoencoder-dc91450cc511)
22. [Pytorch Tutorial by Yunjey Choi](https://github.com/yunjey/pytorch-tutorial)
23. [Understanding deep Convolutional Neural Networks with a practical use-case in Tensorflow and Keras](https://ahmedbesbes.com/understanding-deep-convolutional-neural-networks-with-a-practical-use-case-in-tensorflow-and-keras.html)
24. [Overview and benchmark of traditional and deep learning models in text classification](https://ahmedbesbes.com/overview-and-benchmark-of-traditional-and-deep-learning-models-in-text-classification.html)
25. [Hardware for AI: Understanding computer hardware & build your own computer](https://github.com/MelAbgrall/HardwareforAI)
26. [Programming Community Curated Resources](https://hackr.io/tutorials/learn-artificial-intelligence-ai)
27. [The Illustrated Self-Supervised Learning](https://amitness.com/2020/02/illustrated-self-supervised-learning/)
28. [Visual Paper Summary: ALBERT (A Lite BERT)](https://amitness.com/2020/02/albert-visual-summary/)
29. [Semi-Supervised Deep Learning with GANs for Melanoma Detection](https://www.manning.com/liveproject/semi-supervised-deep-learning-with-gans-for-melanoma-detection/)
30. [Named Entity Recognition using Reformers](https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/NER%20using%20Reformer.ipynb)
31. [Deep N-Gram Models on Shakespeare's works](https://github.com/SauravMaheshkar/Trax-Examples/blob/main/NLP/Deep%20N-Gram.ipynb)
32. [Wide Residual Networks](https://github.com/SauravMaheshkar/Trax-Examples/blob/main/vision/illustrated-wideresnet.ipynb)
33. [Fashion MNIST using Flax](https://github.com/SauravMaheshkar/Flax-Examples)
34. [Fake News Classification (with streamlit deployment)](https://github.com/SauravMaheshkar/Fake-News-Classification)
35. [Regression Analysis for Primary Biliary Cirrhosis](https://github.com/SauravMaheshkar/CoxPH-Model-for-Primary-Biliary-Cirrhosis)
36. [Cross Matching Methods for Astronomical Catalogs](https://github.com/SauravMaheshkar/Cross-Matching-Methods-for-Astronomical-Catalogs)
37. [Named Entity Recognition using BiDirectional LSTMs](https://github.com/SauravMaheshkar/Named-Entity-Recognition-)
38. [Image Recognition App using Tflite and Flutter](https://github.com/SauravMaheshkar/Flutter_Image-Recognition)

## Researchers

1. [Aaron Courville](http://aaroncourville.wordpress.com)
2. [Abdel-rahman Mohamed](http://www.cs.toronto.edu/~asamir/)
3. [Adam Coates](http://cs.stanford.edu/~acoates/)
4. [Alex Acero](http://research.microsoft.com/en-us/people/alexac/)
5. [Alex Krizhevsky](http://www.cs.utoronto.ca/~kriz/index.html)
6. [Alexander Ilin](http://users.ics.aalto.fi/alexilin/)
7. [Amos Storkey](http://homepages.inf.ed.ac.uk/amos/)
8. [Andrej Karpathy](https://karpathy.ai/)
9. [Andrew M. Saxe](http://www.stanford.edu/~asaxe/)
10. [Andrew Ng](http://www.cs.stanford.edu/people/ang/)
11. [Andrew W. Senior](http://research.google.com/pubs/author37792.html)
12. [Andriy Mnih](http://www.gatsby.ucl.ac.uk/~amnih/)
13. [Ayse Naz Erkan](http://www.cs.nyu.edu/~naz/)
14. [Benjamin Schrauwen](http://reslab.elis.ugent.be/benjamin)
15. [Bernardete Ribeiro](https://www.cisuc.uc.pt/people/show/2020)
16. [Bo David Chen](http://vision.caltech.edu/~bchen3/Site/Bo_David_Chen.html)
17. [Boureau Y-Lan](http://cs.nyu.edu/~ylan/)
18. [Brian Kingsbury](http://researcher.watson.ibm.com/researcher/view.php?person=us-bedk)
19. [Christopher Manning](http://nlp.stanford.edu/~manning/)
20. [Clement Farabet](http://www.clement.farabet.net/)
21. [Dan Claudiu Cireșan](http://www.idsia.ch/~ciresan/)
22. [David Reichert](http://serre-lab.clps.brown.edu/person/david-reichert/)
23. [Derek Rose](http://mil.engr.utk.edu/nmil/member/5.html)
24. [Dong Yu](http://research.microsoft.com/en-us/people/dongyu/default.aspx)
25. [Drausin Wulsin](http://www.seas.upenn.edu/~wulsin/)
26. [Erik M. Schmidt](http://music.ece.drexel.edu/people/eschmidt)
27. [Eugenio Culurciello](https://engineering.purdue.edu/BME/People/viewPersonById?resource_id=71333)
28. [Frank Seide](http://research.microsoft.com/en-us/people/fseide/)
29. [Galen Andrew](http://homes.cs.washington.edu/~galen/)
30. [Geoffrey Hinton](http://www.cs.toronto.edu/~hinton/)
31. [George Dahl](http://www.cs.toronto.edu/~gdahl/)
32. [Graham Taylor](http://www.uoguelph.ca/~gwtaylor/)
33. [Grégoire Montavon](http://gregoire.montavon.name/)
34. [Guido Francisco Montúfar](http://personal-homepages.mis.mpg.de/montufar/)
35. [Guillaume Desjardins](http://brainlogging.wordpress.com/)
36. [Hannes Schulz](http://www.ais.uni-bonn.de/~schulz/)
37. [Hélène Paugam-Moisy](http://www.lri.fr/~hpaugam/)
38. [Honglak Lee](http://web.eecs.umich.edu/~honglak/)
39. [Hugo Larochelle](http://www.dmi.usherb.ca/~larocheh/index_en.html)
40. [Ilya Sutskever](http://www.cs.toronto.edu/~ilya/)
41. [Itamar Arel](http://mil.engr.utk.edu/nmil/member/2.html)
42. [James Martens](http://www.cs.toronto.edu/~jmartens/)
43. [Jason Morton](http://www.jasonmorton.com/)
44. [Jason Weston](http://www.thespermwhale.com/jaseweston/)
45. [Jeff Dean](http://research.google.com/pubs/jeff.html)
46. [Jiquan Ngiam](http://cs.stanford.edu/~jngiam/)
47. [Joseph Turian](http://www-etud.iro.umontreal.ca/~turian/)
48. [Joshua Matthew Susskind](http://aclab.ca/users/josh/index.html)
49. [Jürgen Schmidhuber](http://www.idsia.ch/~juergen/)
50. [Justin A. Blanco](https://sites.google.com/site/blancousna/)
51. [Koray Kavukcuoglu](http://koray.kavukcuoglu.org/)
52. [KyungHyun Cho](http://users.ics.aalto.fi/kcho/)
53. [Li Deng](http://research.microsoft.com/en-us/people/deng/)
54. [Lucas Theis](http://www.kyb.tuebingen.mpg.de/nc/employee/details/lucas.html)
55. [Ludovic Arnold](http://ludovicarnold.altervista.org/home/)
56. [Marc'Aurelio Ranzato](http://www.cs.nyu.edu/~ranzato/)
57. [Martin Längkvist](http://aass.oru.se/~mlt/)
58. [Misha Denil](http://mdenil.com/)
59. [Mohammad Norouzi](http://www.cs.toronto.edu/~norouzi/)
60. [Nando de Freitas](http://www.cs.ubc.ca/~nando/)
61. [Navdeep Jaitly](http://www.cs.utoronto.ca/~ndjaitly/)
62. [Nicolas Le Roux](http://nicolas.le-roux.name/)
63. [Nitish Srivastava](http://www.cs.toronto.edu/~nitish/)
64. [Noel Lopes](https://www.cisuc.uc.pt/people/show/2028)
65. [Oriol Vinyals](http://www.cs.berkeley.edu/~vinyals/)
66. [Pascal Vincent](http://www.iro.umontreal.ca/~vincentp)
67. [Patrick Nguyen](https://sites.google.com/site/drpngx/)
68. [Pedro Domingos](http://homes.cs.washington.edu/~pedrod/)
69. [Peggy Series](http://homepages.inf.ed.ac.uk/pseries/)
70. [Pierre Sermanet](http://cs.nyu.edu/~sermanet)
71. [Piotr Mirowski](http://www.cs.nyu.edu/~mirowski/)
72. [Quoc V. Le](http://ai.stanford.edu/~quocle/)
73. [Reinhold Scherer](http://bci.tugraz.at/scherer/)
74. [Richard Socher](http://www.socher.org/)
75. [Rob Fergus](http://cs.nyu.edu/~fergus/pmwiki/pmwiki.php)
76. [Robert Coop](http://mil.engr.utk.edu/nmil/member/19.html)
77. [Robert Gens](http://homes.cs.washington.edu/~rcg/)
78. [Roger Grosse](http://people.csail.mit.edu/rgrosse/)
79. [Ronan Collobert](http://ronan.collobert.com/)
80. [Ruslan Salakhutdinov](http://www.utstat.toronto.edu/~rsalakhu/)
81. [Sebastian Gerwinn](http://www.kyb.tuebingen.mpg.de/nc/employee/details/sgerwinn.html)
82. [Stéphane Mallat](http://www.cmap.polytechnique.fr/~mallat/)
83. [Sven Behnke](http://www.ais.uni-bonn.de/behnke/)
84. [Tapani Raiko](http://users.ics.aalto.fi/praiko/)
85. [Tara Sainath](https://sites.google.com/site/tsainath/)
86. [Tijmen Tieleman](http://www.cs.toronto.edu/~tijmen/)
87. [Tom Karnowski](http://mil.engr.utk.edu/nmil/member/36.html)
88. [Tomáš Mikolov](https://research.facebook.com/tomas-mikolov)
89. [Ueli Meier](http://www.idsia.ch/~meier/)
90. [Vincent Vanhoucke](http://vincent.vanhoucke.com)
91. [Volodymyr Mnih](http://www.cs.toronto.edu/~vmnih/)
92. [Yann LeCun](http://yann.lecun.com/)
93. [Yichuan Tang](http://www.cs.toronto.edu/~tang/)
94. [Yoshua Bengio](http://www.iro.umontreal.ca/~bengioy/yoshua_en/index.html)
95. [Yotaro Kubo](http://yota.ro/)
96. [Youzhi (Will) Zou](http://ai.stanford.edu/~wzou)
97. [Fei-Fei Li](http://vision.stanford.edu/feifeili)
98. [Ian Goodfellow](https://research.google.com/pubs/105214.html)
99. [Robert Laganière](http://www.site.uottawa.ca/~laganier/)
100. [Merve Ayyüce Kızrak](http://www.ayyucekizrak.com/)

## Websites

1. [deeplearning.net](http://deeplearning.net/)
2. [deeplearning.stanford.edu](http://deeplearning.stanford.edu/)
3. [nlp.stanford.edu](http://nlp.stanford.edu/)
4. [ai-junkie.com](http://www.ai-junkie.com/ann/evolved/nnt1.html)
5. [cs.brown.edu/research/ai](http://cs.brown.edu/research/ai/)
6. [eecs.umich.edu/ai](http://www.eecs.umich.edu/ai/)
7. [cs.utexas.edu/users/ai-lab](http://www.cs.utexas.edu/users/ai-lab/)
8. [cs.washington.edu/research/ai](http://www.cs.washington.edu/research/ai/)
9. [aiai.ed.ac.uk](http://www.aiai.ed.ac.uk/)
10. [www-aig.jpl.nasa.gov](http://www-aig.jpl.nasa.gov/)
11. [csail.mit.edu](http://www.csail.mit.edu/)
12. [cgi.cse.unsw.edu.au/~aishare](http://cgi.cse.unsw.edu.au/~aishare/)
13. [cs.rochester.edu/research/ai](http://www.cs.rochester.edu/research/ai/)
14. [ai.sri.com](http://www.ai.sri.com/)
15. [isi.edu/AI/isd.htm](http://www.isi.edu/AI/isd.htm)
16. [nrl.navy.mil/itd/aic](http://www.nrl.navy.mil/itd/aic/)
17. [hips.seas.harvard.edu](http://hips.seas.harvard.edu/)
18. [AI Weekly](http://aiweekly.co)
19. [stat.ucla.edu](http://statistics.ucla.edu/)
20. [deeplearning.cs.toronto.edu](http://deeplearning.cs.toronto.edu/i2t)
21. [jeffdonahue.com/lrcn/](http://jeffdonahue.com/lrcn/)
22. [visualqa.org](http://www.visualqa.org/)
23. [www.mpi-inf.mpg.de/departments/computer-vision...](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/)
24. [Deep Learning News](http://news.startup.ml/)
25. [Machine Learning is Fun! Adam Geitgey's Blog](https://medium.com/@ageitgey/)
26. [Guide to Machine Learning](http://yerevann.com/a-guide-to-deep-learning/)
27. [Deep Learning for Beginners](https://spandan-madan.github.io/DeepLearningProject/)
28. [Machine Learning Mastery blog](https://machinelearningmastery.com/blog/)
29. [ML Compiled](https://ml-compiled.readthedocs.io/en/latest/)
30. [Programming Community Curated Resources](https://hackr.io/tutorials/learn-artificial-intelligence-ai)
31. [A Beginner's Guide To Understanding Convolutional Neural Networks](https://adeshpande3.github.io/A-Beginner%27s-Guide-To-Understanding-Convolutional-Neural-Networks/)
32. [ahmedbesbes.com](http://ahmedbesbes.com)
33. [amitness.com](https://amitness.com/)
34. [AI Summer](https://theaisummer.com/)
35. [AI Hub - supported by AAAI, NeurIPS](https://aihub.org/)
36. [CatalyzeX: Machine Learning Hub for Builders and Makers](https://www.catalyzeX.com)
37. [The Epic Code](https://theepiccode.com/)
38. [all AI news](https://allainews.com/)

## Datasets

1. [MNIST](http://yann.lecun.com/exdb/mnist/) - Handwritten digits
2. [Google House Numbers](http://ufldl.stanford.edu/housenumbers/) - from street view
3. [CIFAR-10 and CIFAR-100](http://www.cs.toronto.edu/~kriz/cifar.html)
4. [IMAGENET](http://www.image-net.org/)
5. [Tiny Images](http://groups.csail.mit.edu/vision/TinyImages/) - 80 million tiny images
6. [Flickr Data](https://yahooresearch.tumblr.com/post/89783581601/one-hundred-million-creative-commons-flickr-images) - 100 million image Yahoo dataset
7. [Berkeley Segmentation Dataset 500](http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/)
8. [UC Irvine Machine Learning Repository](http://archive.ics.uci.edu/ml/)
9. [Flickr 8k](http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html)
10. [Flickr 30k](http://shannon.cs.illinois.edu/DenotationGraph/)
11. [Microsoft COCO](http://mscoco.org/home/)
12. [VQA](http://www.visualqa.org/)
13. [Image QA](http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa/)
14. [AT&T Laboratories Cambridge face database](http://www.uk.research.att.com/facedatabase.html)
15. [AVHRR Pathfinder](http://xtreme.gsfc.nasa.gov)
16. [Air Freight](http://www.anc.ed.ac.uk/~amos/afreightdata.html) - A ray-traced image sequence with ground-truth segmentation based on textural characteristics (455 images + GT, each 160x120 pixels). (Formats: PNG)
17. [Amsterdam Library of Object Images](http://www.science.uva.nl/~aloi/) - ALOI is a color image collection of one thousand small objects, recorded for scientific purposes. To capture the sensory variation in object recordings, viewing angle, illumination angle, and illumination color were systematically varied for each object, and wide-baseline stereo images were also captured; over a hundred images were recorded per object, for a total of 110,250 images. (Formats: png)
18. [Annotated face, hand, cardiac & meat images](http://www.imm.dtu.dk/~aam/) - Most images & annotations are supplemented by various ASM/AAM analyses using the AAM-API. (Formats: bmp, asf)
19. [Image Analysis and Computer Graphics](http://www.imm.dtu.dk/image/)
20. [Brown University Stimuli](http://www.cog.brown.edu/~tarr/stimuli.html) - A variety of datasets including geons, objects, and "greebles". Good for testing recognition algorithms. (Formats: pict)
21. [CAVIAR video sequences of mall and public space behavior](http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/) - 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification. (Formats: MPEG2 & JPEG)
22. [Machine Vision Unit](http://www.ipab.inf.ed.ac.uk/mvu/)
23. [CCITT Fax standard images](http://www.cs.waikato.ac.nz/~singlis/ccitt.html) - 8 images (Formats: gif)
24. [CMU CIL's Stereo Data with Ground Truth](cil-ster.html) - 3 sets of 11 images, including color tiff images with spectroradiometry (Formats: gif, tiff)
|
||
[38;5;12m27. [39m[38;5;14m[1mCMU PIE Database[0m[38;5;12m (http://www.ri.cmu.edu/projects/project_418.html) - A database of 41,368 face images of 68 people captured under 13 poses, 43 illuminations conditions, and with 4 different expressions.[39m
|
||
[38;5;12m28. [39m[38;5;14m[1mCMU VASC Image Database[0m[38;5;12m (http://www.ius.cs.cmu.edu/idb/) - Images, sequences, stereo pairs (thousands of images) (Formats: Sun Rasterimage)[39m
|
||
[38;5;12m29. [39m[38;5;14m[1mCaltech Image Database[0m[38;5;12m (http://www.vision.caltech.edu/html-files/archive.html) - about 20 images - mostly top-down views of small objects and toys. (Formats: GIF)[39m
|
||
[38;5;12m30.[39m[38;5;12m [39m[38;5;14m[1mColumbia-Utrecht[0m[38;5;14m[1m [0m[38;5;14m[1mReflectance[0m[38;5;14m[1m [0m[38;5;14m[1mand[0m[38;5;14m[1m [0m[38;5;14m[1mTexture[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www.cs.columbia.edu/CAVE/curet/)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mTexture[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mreflectance[39m[38;5;12m [39m[38;5;12mmeasurements[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mover[39m[38;5;12m [39m[38;5;12m60[39m[38;5;12m [39m[38;5;12msamples[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m3D[39m[38;5;12m [39m[38;5;12mtexture,[39m[38;5;12m [39m[38;5;12mobserved[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12mover[39m[38;5;12m [39m[38;5;12m200[39m[38;5;12m [39m[38;5;12mdifferent[39m[38;5;12m [39m[38;5;12mcombinations[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mviewing[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m
|
||
[38;5;12millumination[39m[38;5;12m [39m[38;5;12mdirections.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mbmp)[39m
|
||
[38;5;12m31.[39m[38;5;12m [39m[38;5;14m[1mComputational[0m[38;5;14m[1m [0m[38;5;14m[1mColour[0m[38;5;14m[1m [0m[38;5;14m[1mConstancy[0m[38;5;14m[1m [0m[38;5;14m[1mData[0m[38;5;12m [39m[38;5;12m(http://www.cs.sfu.ca/~colour/data/index.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mA[39m[38;5;12m [39m[38;5;12mdataset[39m[38;5;12m [39m[38;5;12moriented[39m[38;5;12m [39m[38;5;12mtowards[39m[38;5;12m [39m[38;5;12mcomputational[39m[38;5;12m [39m[38;5;12mcolor[39m[38;5;12m [39m[38;5;12mconstancy,[39m[38;5;12m [39m[38;5;12mbut[39m[38;5;12m [39m[38;5;12museful[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mcomputer[39m[38;5;12m [39m[38;5;12mvision[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mgeneral.[39m[38;5;12m [39m[38;5;12mIt[39m[38;5;12m [39m[38;5;12mincludes[39m[38;5;12m [39m[38;5;12msynthetic[39m[38;5;12m [39m[38;5;12mdata,[39m[38;5;12m [39m[38;5;12mcamera[39m[38;5;12m [39m[38;5;12msensor[39m[38;5;12m [39m
|
||
[38;5;12mdata,[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mover[39m[38;5;12m [39m[38;5;12m700[39m[38;5;12m [39m[38;5;12mimages.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mtiff)[39m
|
||
[38;5;12m32. [39m[38;5;14m[1mComputational Vision Lab[0m[38;5;12m (http://www.cs.sfu.ca/~colour/)[39m
|
||
[38;5;12m34.[39m[38;5;12m [39m[38;5;14m[1mContent-based[0m[38;5;14m[1m [0m[38;5;14m[1mimage[0m[38;5;14m[1m [0m[38;5;14m[1mretrieval[0m[38;5;14m[1m [0m[38;5;14m[1mdatabase[0m[38;5;12m [39m[38;5;12m(http://www.cs.washington.edu/research/imagedatabase/groundtruth/)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12m11[39m[38;5;12m [39m[38;5;12msets[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mcolor[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mtesting[39m[38;5;12m [39m[38;5;12malgorithms[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mcontent-based[39m[38;5;12m [39m[38;5;12mretrieval.[39m[38;5;12m [39m[38;5;12mMost[39m[38;5;12m [39m[38;5;12msets[39m[38;5;12m [39m[38;5;12mhave[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mdescription[39m[38;5;12m [39m[38;5;12mfile[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12mnames[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m
|
||
[38;5;12mobjects[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12meach[39m[38;5;12m [39m[38;5;12mimage.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mjpg)[39m
|
||
[38;5;12m35. [39m[38;5;14m[1mEfficient Content-based Retrieval Group[0m[38;5;12m (http://www.cs.washington.edu/research/imagedatabase/)[39m
|
||
[38;5;12m37.[39m[38;5;12m [39m[38;5;14m[1mDensely[0m[38;5;14m[1m [0m[38;5;14m[1mSampled[0m[38;5;14m[1m [0m[38;5;14m[1mView[0m[38;5;14m[1m [0m[38;5;14m[1mSpheres[0m[38;5;12m [39m[38;5;12m(http://ls7-www.cs.uni-dortmund.de/~peters/pages/research/modeladaptsys/modeladaptsys_vba_rov.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mDensely[39m[38;5;12m [39m[38;5;12msampled[39m[38;5;12m [39m[38;5;12mview[39m[38;5;12m [39m[38;5;12mspheres[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mupper[39m[38;5;12m [39m[38;5;12mhalf[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mthe[39m[38;5;12m [39m[38;5;12mview[39m[38;5;12m [39m[38;5;12msphere[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mtwo[39m[38;5;12m [39m[38;5;12mtoy[39m[38;5;12m [39m[38;5;12mobjects[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12m2500[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m
|
||
[38;5;12meach.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mtiff)[39m
|
||
[38;5;12m38. [39m[38;5;14m[1mComputer Science VII (Graphical Systems)[0m[38;5;12m (http://ls7-www.cs.uni-dortmund.de/)[39m
|
||
[38;5;12m40.[39m[38;5;12m [39m[38;5;14m[1mDigital[0m[38;5;14m[1m [0m[38;5;14m[1mEmbryos[0m[38;5;12m [39m[38;5;12m(https://web-beta.archive.org/web/20011216051535/vision.psych.umn.edu/www/kersten-lab/demos/digitalembryo.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mDigital[39m[38;5;12m [39m[38;5;12membryos[39m[38;5;12m [39m[38;5;12mare[39m[38;5;12m [39m[38;5;12mnovel[39m[38;5;12m [39m[38;5;12mobjects[39m[38;5;12m [39m[38;5;12mwhich[39m[38;5;12m [39m[38;5;12mmay[39m[38;5;12m [39m[38;5;12mbe[39m[38;5;12m [39m[38;5;12mused[39m[38;5;12m [39m[38;5;12mto[39m[38;5;12m [39m[38;5;12mdevelop[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mtest[39m[38;5;12m [39m[38;5;12mobject[39m[38;5;12m [39m[38;5;12mrecognition[39m[38;5;12m [39m[38;5;12msystems.[39m[38;5;12m [39m
|
||
[38;5;12mThey[39m[38;5;12m [39m[38;5;12mhave[39m[38;5;12m [39m[38;5;12man[39m[38;5;12m [39m[38;5;12morganic[39m[38;5;12m [39m[38;5;12mappearance.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mvarious[39m[38;5;12m [39m[38;5;12mformats[39m[38;5;12m [39m[38;5;12mare[39m[38;5;12m [39m[38;5;12mavailable[39m[38;5;12m [39m[38;5;12mon[39m[38;5;12m [39m[38;5;12mrequest)[39m
|
||
[38;5;12m41. [39m[38;5;14m[1mUniverity of Minnesota Vision Lab[0m[38;5;12m (http://vision.psych.umn.edu/users/kersten//kersten-lab/kersten-lab.html) [39m
|
||
[38;5;12m42. [39m[38;5;14m[1mEl Salvador Atlas of Gastrointestinal VideoEndoscopy[0m[38;5;12m (http://www.gastrointestinalatlas.com) - Images and Videos of his-res of studies taken from Gastrointestinal Video endoscopy. (Formats: jpg, mpg, gif)[39m
|
||
[38;5;12m43. [39m[38;5;14m[1mFG-NET Facial Aging Database[0m[38;5;12m (http://sting.cycollege.ac.cy/~alanitis/fgnetaging/index.htm) - Database contains 1002 face images showing subjects at different ages. (Formats: jpg)[39m
|
||
[38;5;12m44.[39m[38;5;12m [39m[38;5;14m[1mFVC2000[0m[38;5;14m[1m [0m[38;5;14m[1mFingerprint[0m[38;5;14m[1m [0m[38;5;14m[1mDatabases[0m[38;5;12m [39m[38;5;12m(http://bias.csr.unibo.it/fvc2000/)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mFVC2000[39m[38;5;12m [39m[38;5;12mis[39m[38;5;12m [39m[38;5;12mthe[39m[38;5;12m [39m[38;5;12mFirst[39m[38;5;12m [39m[38;5;12mInternational[39m[38;5;12m [39m[38;5;12mCompetition[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mFingerprint[39m[38;5;12m [39m[38;5;12mVerification[39m[38;5;12m [39m[38;5;12mAlgorithms.[39m[38;5;12m [39m[38;5;12mFour[39m[38;5;12m [39m[38;5;12mfingerprint[39m[38;5;12m [39m[38;5;12mdatabases[39m[38;5;12m [39m[38;5;12mconstitute[39m[38;5;12m [39m[38;5;12mthe[39m[38;5;12m [39m[38;5;12mFVC2000[39m[38;5;12m [39m[38;5;12mbenchmark[39m[38;5;12m [39m[38;5;12m(3520[39m[38;5;12m [39m
|
||
[38;5;12mfingerprints[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mall).[39m
|
||
[38;5;12m45. [39m[38;5;14m[1mBiometric Systems Lab[0m[38;5;12m (http://biolab.csr.unibo.it/home.asp) - University of Bologna[39m
|
||
[38;5;12m46. [39m[38;5;14m[1mFace and Gesture images and image sequences[0m[38;5;12m (http://www.fg-net.org) - Several image datasets of faces and gestures that are ground truth annotated for benchmarking[39m
|
||
[38;5;12m47.[39m[38;5;12m [39m[38;5;14m[1mGerman[0m[38;5;14m[1m [0m[38;5;14m[1mFingerspelling[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www-i6.informatik.rwth-aachen.de/~dreuw/database.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mThe[39m[38;5;12m [39m[38;5;12mdatabase[39m[38;5;12m [39m[38;5;12mcontains[39m[38;5;12m [39m[38;5;12m35[39m[38;5;12m [39m[38;5;12mgestures[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mconsists[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m1400[39m[38;5;12m [39m[38;5;12mimage[39m[38;5;12m [39m[38;5;12msequences[39m[38;5;12m [39m[38;5;12mthat[39m[38;5;12m [39m[38;5;12mcontain[39m[38;5;12m [39m[38;5;12mgestures[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m20[39m[38;5;12m [39m[38;5;12mdifferent[39m[38;5;12m [39m[38;5;12mpersons[39m[38;5;12m [39m[38;5;12mrecorded[39m[38;5;12m [39m[38;5;12munder[39m[38;5;12m [39m
|
||
[38;5;12mnon-uniform[39m[38;5;12m [39m[38;5;12mdaylight[39m[38;5;12m [39m[38;5;12mlighting[39m[38;5;12m [39m[38;5;12mconditions.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mmpg,jpg)[39m[38;5;12m [39m
|
||
[38;5;12m48. [39m[38;5;14m[1mLanguage Processing and Pattern Recognition[0m[38;5;12m (http://www-i6.informatik.rwth-aachen.de/)[39m
|
||
[38;5;12m50. [39m[38;5;14m[1mGroningen Natural Image Database[0m[38;5;12m (http://hlab.phys.rug.nl/archive.html) - 4000+ 1536x1024 (16 bit) calibrated outdoor images (Formats: homebrew)[39m
|
||
[38;5;12m51. [39m[38;5;14m[1mICG Testhouse sequence[0m[38;5;12m (http://www.icg.tu-graz.ac.at/~schindler/Data) - 2 turntable sequences from different viewing heights, 36 images each, resolution 1000x750, color (Formats: PPM)[39m
|
||
[38;5;12m52. [39m[38;5;14m[1mInstitute of Computer Graphics and Vision[0m[38;5;12m (http://www.icg.tu-graz.ac.at)[39m
|
||
[38;5;12m54. [39m[38;5;14m[1mIEN Image Library[0m[38;5;12m (http://www.ien.it/is/vislib/) - 1000+ images, mostly outdoor sequences (Formats: raw, ppm) [39m
|
||
[38;5;12m55. [39m[38;5;14m[1mINRIA's Syntim images database[0m[38;5;12m (http://www-rocq.inria.fr/~tarel/syntim/images.html) - 15 color image of simple objects (Formats: gif)[39m
|
||
[38;5;12m56. [39m[38;5;14m[1mINRIA[0m[38;5;12m (http://www.inria.fr/)[39m
|
||
[38;5;12m57. [39m[38;5;14m[1mINRIA's Syntim stereo databases[0m[38;5;12m (http://www-rocq.inria.fr/~tarel/syntim/paires.html) - 34 calibrated color stereo pairs (Formats: gif)[39m
|
||
[38;5;12m58. [39m[38;5;14m[1mImage Analysis Laboratory[0m[38;5;12m (http://www.ece.ncsu.edu/imaging/Archives/ImageDataBase/index.html) - Images obtained from a variety of imaging modalities -- raw CFA images, range images and a host of "medical images". (Formats: homebrew)[39m
|
||
[38;5;12m59. [39m[38;5;14m[1mImage Analysis Laboratory[0m[38;5;12m (http://www.ece.ncsu.edu/imaging)[39m
|
||
[38;5;12m61. [39m[38;5;14m[1mImage Database[0m[38;5;12m (http://www.prip.tuwien.ac.at/prip/image.html) - An image database including some textures [39m
|
||
[38;5;12m62.[39m[38;5;12m [39m[38;5;14m[1mJAFFE[0m[38;5;14m[1m [0m[38;5;14m[1mFacial[0m[38;5;14m[1m [0m[38;5;14m[1mExpression[0m[38;5;14m[1m [0m[38;5;14m[1mImage[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www.mis.atr.co.jp/~mlyons/jaffe.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mThe[39m[38;5;12m [39m[38;5;12mJAFFE[39m[38;5;12m [39m[38;5;12mdatabase[39m[38;5;12m [39m[38;5;12mconsists[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m213[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mJapanese[39m[38;5;12m [39m[38;5;12mfemale[39m[38;5;12m [39m[38;5;12msubjects[39m[38;5;12m [39m[38;5;12mposing[39m[38;5;12m [39m[38;5;12m6[39m[38;5;12m [39m[38;5;12mbasic[39m[38;5;12m [39m[38;5;12mfacial[39m[38;5;12m [39m[38;5;12mexpressions[39m[38;5;12m [39m[38;5;12mas[39m[38;5;12m [39m[38;5;12mwell[39m[38;5;12m [39m[38;5;12mas[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mneutral[39m[38;5;12m [39m[38;5;12mpose.[39m[38;5;12m [39m[38;5;12mRatings[39m[38;5;12m [39m[38;5;12mon[39m[38;5;12m [39m
|
||
[38;5;12memotion[39m[38;5;12m [39m[38;5;12madjectives[39m[38;5;12m [39m[38;5;12mare[39m[38;5;12m [39m[38;5;12malso[39m[38;5;12m [39m[38;5;12mavailable,[39m[38;5;12m [39m[38;5;12mfree[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mcharge,[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mresearch[39m[38;5;12m [39m[38;5;12mpurposes.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mTIFF[39m[38;5;12m [39m[38;5;12mGrayscale[39m[38;5;12m [39m[38;5;12mimages.)[39m
|
||
[38;5;12m63. [39m[38;5;14m[1mATR Research, Kyoto, Japan[0m[38;5;12m (http://www.mic.atr.co.jp/)[39m
|
||
[38;5;12m64.[39m[38;5;12m [39m[38;5;14m[1mJISCT[0m[38;5;14m[1m [0m[38;5;14m[1mStereo[0m[38;5;14m[1m [0m[38;5;14m[1mEvaluation[0m[38;5;12m [39m[38;5;12m(ftp://ftp.vislist.com/IMAGERY/JISCT/)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12m44[39m[38;5;12m [39m[38;5;12mimage[39m[38;5;12m [39m[38;5;12mpairs.[39m[38;5;12m [39m[38;5;12mThese[39m[38;5;12m [39m[38;5;12mdata[39m[38;5;12m [39m[38;5;12mhave[39m[38;5;12m [39m[38;5;12mbeen[39m[38;5;12m [39m[38;5;12mused[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12man[39m[38;5;12m [39m[38;5;12mevaluation[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mstereo[39m[38;5;12m [39m[38;5;12manalysis,[39m[38;5;12m [39m[38;5;12mas[39m[38;5;12m [39m[38;5;12mdescribed[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mthe[39m[38;5;12m [39m[38;5;12mApril[39m[38;5;12m [39m[38;5;12m1993[39m[38;5;12m [39m[38;5;12mARPA[39m[38;5;12m [39m[38;5;12mImage[39m[38;5;12m [39m[38;5;12mUnderstanding[39m[38;5;12m [39m[38;5;12mWorkshop[39m[38;5;12m [39m[38;5;12mpaper[39m[38;5;12m [39m[38;5;12mThe[39m[38;5;12m [39m[38;5;12mJISCT[39m[38;5;12m [39m[38;5;12mStereo[39m[38;5;12m [39m
|
||
[38;5;12mEvaluation''[39m[38;5;12m [39m[38;5;12mby[39m[38;5;12m [39m[38;5;12mR.C.Bolles,[39m[38;5;12m [39m[38;5;12mH.H.Baker,[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mM.J.Hannah,[39m[38;5;12m [39m[38;5;12m263--274[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mSSI)[39m
|
||
[38;5;12m65. [39m[38;5;14m[1mMIT Vision Texture[0m[38;5;12m (https://vismod.media.mit.edu/vismod/imagery/VisionTexture/vistex.html) - Image archive (100+ images) (Formats: ppm)[39m
|
||
[38;5;12m66. [39m[38;5;14m[1mMIT face images and more[0m[38;5;12m (ftp://whitechapel.media.mit.edu/pub/images) - hundreds of images (Formats: homebrew)[39m
|
||
[38;5;12m67. [39m[38;5;14m[1mMachine Vision[0m[38;5;12m (http://vision.cse.psu.edu/book/testbed/images/) - Images from the textbook by Jain, Kasturi, Schunck (20+ images) (Formats: GIF TIFF)[39m
|
||
[38;5;12m68.[39m[38;5;12m [39m[38;5;14m[1mMammography[0m[38;5;14m[1m [0m[38;5;14m[1mImage[0m[38;5;14m[1m [0m[38;5;14m[1mDatabases[0m[38;5;12m [39m[38;5;12m(http://marathon.csee.usf.edu/Mammography/Database.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12m100[39m[38;5;12m [39m[38;5;12mor[39m[38;5;12m [39m[38;5;12mmore[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mmammograms[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12mground[39m[38;5;12m [39m[38;5;12mtruth.[39m[38;5;12m [39m[38;5;12mAdditional[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mavailable[39m[38;5;12m [39m[38;5;12mby[39m[38;5;12m [39m[38;5;12mrequest,[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mlinks[39m[38;5;12m [39m[38;5;12mto[39m[38;5;12m [39m[38;5;12mseveral[39m[38;5;12m [39m[38;5;12mother[39m[38;5;12m [39m[38;5;12mmammography[39m[38;5;12m [39m[38;5;12mdatabases[39m[38;5;12m [39m[38;5;12mare[39m[38;5;12m [39m
|
||
[38;5;12mprovided.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mhomebrew)[39m
|
||
[38;5;12m69. [39m[38;5;14m[1mftp://ftp.cps.msu.edu/pub/prip[0m[38;5;12m (ftp://ftp.cps.msu.edu/pub/prip) - many images (Formats: unknown)[39m
|
||
[38;5;12m70.[39m[38;5;12m [39m[38;5;14m[1mMiddlebury[0m[38;5;14m[1m [0m[38;5;14m[1mStereo[0m[38;5;14m[1m [0m[38;5;14m[1mData[0m[38;5;14m[1m [0m[38;5;14m[1mSets[0m[38;5;14m[1m [0m[38;5;14m[1mwith[0m[38;5;14m[1m [0m[38;5;14m[1mGround[0m[38;5;14m[1m [0m[38;5;14m[1mTruth[0m[38;5;12m [39m[38;5;12m(http://www.middlebury.edu/stereo/data.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mSix[39m[38;5;12m [39m[38;5;12mmulti-frame[39m[38;5;12m [39m[38;5;12mstereo[39m[38;5;12m [39m[38;5;12mdata[39m[38;5;12m [39m[38;5;12msets[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mscenes[39m[38;5;12m [39m[38;5;12mcontaining[39m[38;5;12m [39m[38;5;12mplanar[39m[38;5;12m [39m[38;5;12mregions.[39m[38;5;12m [39m[38;5;12mEach[39m[38;5;12m [39m[38;5;12mdata[39m[38;5;12m [39m[38;5;12mset[39m[38;5;12m [39m[38;5;12mcontains[39m[38;5;12m [39m[38;5;12m9[39m[38;5;12m [39m[38;5;12mcolor[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12msubpixel-accuracy[39m[38;5;12m [39m
|
||
[38;5;12mground-truth[39m[38;5;12m [39m[38;5;12mdata.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mppm)[39m
|
||
[38;5;12m71. [39m[38;5;14m[1mMiddlebury Stereo Vision Research Page[0m[38;5;12m (http://www.middlebury.edu/stereo) - Middlebury College[39m
|
||
[38;5;12m72. [39m[38;5;14m[1mModis Airborne simulator, Gallery and data set[0m[38;5;12m (http://ltpwww.gsfc.nasa.gov/MODIS/MAS/) - High Altitude Imagery from around the world for environmental modeling in support of NASA EOS program (Formats: JPG and HDF)[39m
|
||
[38;5;12m73. [39m[38;5;14m[1mNIST Fingerprint and handwriting[0m[38;5;12m (ftp://sequoyah.ncsl.nist.gov/pub/databases/data) - datasets - thousands of images (Formats: unknown)[39m
|
||
[38;5;12m74. [39m[38;5;14m[1mNIST Fingerprint data[0m[38;5;12m (ftp://ftp.cs.columbia.edu/jpeg/other/uuencoded) - compressed multipart uuencoded tar file[39m
|
||
[38;5;12m75. [39m[38;5;14m[1mNLM HyperDoc Visible Human Project[0m[38;5;12m (http://www.nlm.nih.gov/research/visible/visible_human.html) - Color, CAT and MRI image samples - over 30 images (Formats: jpeg)[39m
|
||
[38;5;12m76. [39m[38;5;14m[1mNational Design Repository[0m[38;5;12m (http://www.designrepository.org) - Over 55,000 3D CAD and solid models of (mostly) mechanical/machined engineering designs. (Formats: gif,vrml,wrl,stp,sat) [39m
|
||
[38;5;12m77. [39m[38;5;14m[1mGeometric & Intelligent Computing Laboratory[0m[38;5;12m (http://gicl.mcs.drexel.edu)[39m
|
||
[38;5;12m79. [39m[38;5;14m[1mOSU (MSU) 3D Object Model Database[0m[38;5;12m (http://eewww.eng.ohio-state.edu/~flynn/3DDB/Models/) - several sets of 3D object models collected over several years to use in object recognition research (Formats: homebrew, vrml)[39m
|
||
[38;5;12m80. [39m[38;5;14m[1mOSU (MSU/WSU) Range Image Database[0m[38;5;12m (http://eewww.eng.ohio-state.edu/~flynn/3DDB/RID/) - Hundreds of real and synthetic images (Formats: gif, homebrew)[39m
|
||
[38;5;12m81.[39m[38;5;12m [39m[38;5;14m[1mOSU/SAMPL[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase:[0m[38;5;14m[1m [0m[38;5;14m[1mRange[0m[38;5;14m[1m [0m[38;5;14m[1mImages,[0m[38;5;14m[1m [0m[38;5;14m[1m3D[0m[38;5;14m[1m [0m[38;5;14m[1mModels,[0m[38;5;14m[1m [0m[38;5;14m[1mStills,[0m[38;5;14m[1m [0m[38;5;14m[1mMotion[0m[38;5;14m[1m [0m[38;5;14m[1mSequences[0m[38;5;12m [39m[38;5;12m(http://sampl.eng.ohio-state.edu/~sampl/database.htm)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mOver[39m[38;5;12m [39m[38;5;12m1000[39m[38;5;12m [39m[38;5;12mrange[39m[38;5;12m [39m[38;5;12mimages,[39m[38;5;12m [39m[38;5;12m3D[39m[38;5;12m [39m[38;5;12mobject[39m[38;5;12m [39m[38;5;12mmodels,[39m[38;5;12m [39m[38;5;12mstill[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mmotion[39m[38;5;12m [39m[38;5;12msequences[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mgif,[39m[38;5;12m [39m[38;5;12mppm,[39m[38;5;12m [39m[38;5;12mvrml,[39m[38;5;12m [39m
|
||
[38;5;12mhomebrew)[39m
|
||
[38;5;12m82. [39m[38;5;14m[1mSignal Analysis and Machine Perception Laboratory[0m[38;5;12m (http://sampl.eng.ohio-state.edu)[39m
|
||
[38;5;12m84.[39m[38;5;12m [39m[38;5;14m[1mOtago[0m[38;5;14m[1m [0m[38;5;14m[1mOptical[0m[38;5;14m[1m [0m[38;5;14m[1mFlow[0m[38;5;14m[1m [0m[38;5;14m[1mEvaluation[0m[38;5;14m[1m [0m[38;5;14m[1mSequences[0m[38;5;12m [39m[38;5;12m(http://www.cs.otago.ac.nz/research/vision/Research/OpticalFlow/opticalflow.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mSynthetic[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mreal[39m[38;5;12m [39m[38;5;12msequences[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12mmachine-readable[39m[38;5;12m [39m[38;5;12mground[39m[38;5;12m [39m[38;5;12mtruth[39m[38;5;12m [39m[38;5;12moptical[39m[38;5;12m [39m[38;5;12mflow[39m[38;5;12m [39m[38;5;12mfields,[39m[38;5;12m [39m[38;5;12mplus[39m[38;5;12m [39m[38;5;12mtools[39m[38;5;12m [39m[38;5;12mto[39m[38;5;12m [39m[38;5;12mgenerate[39m[38;5;12m [39m
|
||
[38;5;12mground[39m[38;5;12m [39m[38;5;12mtruth[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mnew[39m[38;5;12m [39m[38;5;12msequences.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mppm,tif,homebrew)[39m
|
||
[38;5;12m85. [39m[38;5;14m[1mVision Research Group[0m[38;5;12m (http://www.cs.otago.ac.nz/research/vision/index.html)[39m
|
||
[38;5;12m87.[39m[38;5;12m [39m[38;5;14m[1mftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/[0m[38;5;12m [39m[38;5;12m(ftp://ftp.limsi.fr/pub/quenot/opflow/testdata/piv/)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mReal[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12msynthetic[39m[38;5;12m [39m[38;5;12mimage[39m[38;5;12m [39m[38;5;12msequences[39m[38;5;12m [39m[38;5;12mused[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mtesting[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mParticle[39m[38;5;12m [39m[38;5;12mImage[39m[38;5;12m [39m[38;5;12mVelocimetry[39m[38;5;12m [39m[38;5;12mapplication.[39m[38;5;12m [39m[38;5;12mThese[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mmay[39m[38;5;12m [39m[38;5;12mbe[39m[38;5;12m [39m[38;5;12mused[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mthe[39m
|
||
[38;5;12mtest[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12moptical[39m[38;5;12m [39m[38;5;12mflow[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mimage[39m[38;5;12m [39m[38;5;12mmatching[39m[38;5;12m [39m[38;5;12malgorithms.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mpgm[39m[38;5;12m [39m[38;5;12m(raw))[39m
|
||
[38;5;12m88. [39m[38;5;14m[1mLIMSI-CNRS/CHM/IMM/vision[0m[38;5;12m (http://www.limsi.fr/Recherche/IMM/PageIMM.html)[39m
|
||
[38;5;12m89. [39m[38;5;14m[1mLIMSI-CNRS[0m[38;5;12m (http://www.limsi.fr/)[39m
|
||
[38;5;12m90.[39m[38;5;12m [39m[38;5;14m[1mPhotometric[0m[38;5;14m[1m [0m[38;5;14m[1m3D[0m[38;5;14m[1m [0m[38;5;14m[1mSurface[0m[38;5;14m[1m [0m[38;5;14m[1mTexture[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www.taurusstudio.net/research/pmtexdb/index.htm)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mThis[39m[38;5;12m [39m[38;5;12mis[39m[38;5;12m [39m[38;5;12mthe[39m[38;5;12m [39m[38;5;12mfirst[39m[38;5;12m [39m[38;5;12m3D[39m[38;5;12m [39m[38;5;12mtexture[39m[38;5;12m [39m[38;5;12mdatabase[39m[38;5;12m [39m[38;5;12mwhich[39m[38;5;12m [39m[38;5;12mprovides[39m[38;5;12m [39m[38;5;12mboth[39m[38;5;12m [39m[38;5;12mfull[39m[38;5;12m [39m[38;5;12mreal[39m[38;5;12m [39m[38;5;12msurface[39m[38;5;12m [39m[38;5;12mrotations[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mregistered[39m[38;5;12m [39m[38;5;12mphotometric[39m[38;5;12m [39m[38;5;12mstereo[39m[38;5;12m [39m[38;5;12mdata[39m[38;5;12m [39m[38;5;12m(30[39m[38;5;12m [39m
|
||
[38;5;12mtextures,[39m[38;5;12m [39m[38;5;12m1680[39m[38;5;12m [39m[38;5;12mimages).[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mTIFF)[39m
|
||
[38;5;12m91. [39m[38;5;14m[1mSEQUENCES FOR OPTICAL FLOW ANALYSIS (SOFA)[0m[38;5;12m (http://www.cee.hw.ac.uk/~mtc/sofa) - 9 synthetic sequences designed for testing motion analysis applications, including full ground truth of motion and camera parameters. (Formats: gif)[39m
|
||
[38;5;12m92. [39m[38;5;14m[1mComputer Vision Group[0m[38;5;12m (http://www.cee.hw.ac.uk/~mtc/research.html)[39m
|
||
[38;5;12m94. [39m[38;5;14m[1mSequences for Flow Based Reconstruction[0m[38;5;12m (http://www.nada.kth.se/~zucch/CAMERA/PUB/seq.html) - synthetic sequence for testing structure from motion algorithms (Formats: pgm)[39m
|
||
[38;5;12m95.[39m[38;5;12m [39m[38;5;14m[1mStereo[0m[38;5;14m[1m [0m[38;5;14m[1mImages[0m[38;5;14m[1m [0m[38;5;14m[1mwith[0m[38;5;14m[1m [0m[38;5;14m[1mGround[0m[38;5;14m[1m [0m[38;5;14m[1mTruth[0m[38;5;14m[1m [0m[38;5;14m[1mDisparity[0m[38;5;14m[1m [0m[38;5;14m[1mand[0m[38;5;14m[1m [0m[38;5;14m[1mOcclusion[0m[38;5;12m [39m[38;5;12m(http://www-dbv.cs.uni-bonn.de/stereo_data/)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12msmall[39m[38;5;12m [39m[38;5;12mset[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12msynthetic[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mhallway[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12mvarying[39m[38;5;12m [39m[38;5;12mamounts[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mnoise[39m[38;5;12m [39m[38;5;12madded.[39m[38;5;12m [39m[38;5;12mUse[39m[38;5;12m [39m[38;5;12mthese[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mto[39m[38;5;12m [39m[38;5;12mbenchmark[39m[38;5;12m [39m[38;5;12myour[39m[38;5;12m [39m[38;5;12mstereo[39m[38;5;12m [39m
|
||
[38;5;12malgorithm.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mraw,[39m[38;5;12m [39m[38;5;12mviff[39m[38;5;12m [39m[38;5;12m(khoros),[39m[38;5;12m [39m[38;5;12mor[39m[38;5;12m [39m[38;5;12mtiff)[39m
|
||
[38;5;12m96. [39m[38;5;14m[1mStuttgart Range Image Database[0m[38;5;12m (http://range.informatik.uni-stuttgart.de) - A collection of synthetic range images taken from high-resolution polygonal models available on the web (Formats: homebrew)[39m
|
||
[38;5;12m97. [39m[38;5;14m[1mDepartment Image Understanding[0m[38;5;12m (http://www.informatik.uni-stuttgart.de/ipvr/bv/bv_home_engl.html)[39m
|
||
[38;5;12m99.[39m[38;5;12m [39m[38;5;14m[1mThe[0m[38;5;14m[1m [0m[38;5;14m[1mAR[0m[38;5;14m[1m [0m[38;5;14m[1mFace[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www2.ece.ohio-state.edu/~aleix/ARdatabase.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mContains[39m[38;5;12m [39m[38;5;12mover[39m[38;5;12m [39m[38;5;12m4,000[39m[38;5;12m [39m[38;5;12mcolor[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mcorresponding[39m[38;5;12m [39m[38;5;12mto[39m[38;5;12m [39m[38;5;12m126[39m[38;5;12m [39m[38;5;12mpeople's[39m[38;5;12m [39m[38;5;12mfaces[39m[38;5;12m [39m[38;5;12m(70[39m[38;5;12m [39m[38;5;12mmen[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12m56[39m[38;5;12m [39m[38;5;12mwomen).[39m[38;5;12m [39m[38;5;12mFrontal[39m[38;5;12m [39m[38;5;12mviews[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12mvariations[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mfacial[39m[38;5;12m [39m[38;5;12mexpressions,[39m[38;5;12m [39m
|
||
[38;5;12millumination,[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mocclusions.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mRAW[39m[38;5;12m [39m[38;5;12m(RGB[39m[38;5;12m [39m[38;5;12m24-bit))[39m
|
||
[38;5;12m100. [39m[38;5;14m[1mPurdue Robot Vision Lab[0m[38;5;12m (http://rvl.www.ecn.purdue.edu/RVL/)[39m
|
||
[38;5;12m101.[39m[38;5;12m [39m[38;5;14m[1mThe[0m[38;5;14m[1m [0m[38;5;14m[1mMIT-CSAIL[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;14m[1m [0m[38;5;14m[1mof[0m[38;5;14m[1m [0m[38;5;14m[1mObjects[0m[38;5;14m[1m [0m[38;5;14m[1mand[0m[38;5;14m[1m [0m[38;5;14m[1mScenes[0m[38;5;12m [39m[38;5;12m(http://web.mit.edu/torralba/www/database.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mDatabase[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mtesting[39m[38;5;12m [39m[38;5;12mmulticlass[39m[38;5;12m [39m[38;5;12mobject[39m[38;5;12m [39m[38;5;12mdetection[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mscene[39m[38;5;12m [39m[38;5;12mrecognition[39m[38;5;12m [39m[38;5;12malgorithms.[39m[38;5;12m [39m[38;5;12mOver[39m[38;5;12m [39m[38;5;12m72,000[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12m2873[39m[38;5;12m [39m[38;5;12mannotated[39m[38;5;12m [39m[38;5;12mframes.[39m[38;5;12m [39m[38;5;12mMore[39m[38;5;12m [39m
|
||
[38;5;12mthan[39m[38;5;12m [39m[38;5;12m50[39m[38;5;12m [39m[38;5;12mannotated[39m[38;5;12m [39m[38;5;12mobject[39m[38;5;12m [39m[38;5;12mclasses.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mjpg)[39m
|
||
[38;5;12m102.[39m[38;5;12m [39m[38;5;14m[1mThe[0m[38;5;14m[1m [0m[38;5;14m[1mRVL[0m[38;5;14m[1m [0m[38;5;14m[1mSPEC-DB[0m[38;5;14m[1m [0m[38;5;14m[1m(SPECularity[0m[38;5;14m[1m [0m[38;5;14m[1mDataBase)[0m[38;5;12m [39m[38;5;12m(http://rvl1.ecn.purdue.edu/RVL/specularity_database/)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mA[39m[38;5;12m [39m[38;5;12mcollection[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mover[39m[38;5;12m [39m[38;5;12m300[39m[38;5;12m [39m[38;5;12mreal[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m100[39m[38;5;12m [39m[38;5;12mobjects[39m[38;5;12m [39m[38;5;12mtaken[39m[38;5;12m [39m[38;5;12munder[39m[38;5;12m [39m[38;5;12mthree[39m[38;5;12m [39m[38;5;12mdifferent[39m[38;5;12m [39m[38;5;12milluminaiton[39m[38;5;12m [39m[38;5;12mconditions[39m[38;5;12m [39m[38;5;12m(Diffuse/Ambient/Directed).[39m[38;5;12m [39m[38;5;12m--[39m[38;5;12m [39m
|
||
[38;5;12mUse[39m[38;5;12m [39m[38;5;12mthese[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mto[39m[38;5;12m [39m[38;5;12mtest[39m[38;5;12m [39m[38;5;12malgorithms[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mdetecting[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mcompensating[39m[38;5;12m [39m[38;5;12mspecular[39m[38;5;12m [39m[38;5;12mhighlights[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mcolor[39m[38;5;12m [39m[38;5;12mimages.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mTIFF[39m[38;5;12m [39m[38;5;12m)[39m
|
||
[38;5;12m103. [39m[38;5;14m[1mRobot Vision Laboratory[0m[38;5;12m (http://rvl1.ecn.purdue.edu/RVL/)[39m
|
||
[38;5;12m105. [39m[38;5;14m[1mThe Xm2vts database[0m[38;5;12m (http://xm2vtsdb.ee.surrey.ac.uk) - The XM2VTSDB contains four digital recordings of 295 people taken over a period of four months. This database contains both image and video data of faces.[39m
|
||
[38;5;12m106. [39m[38;5;14m[1mCentre for Vision, Speech and Signal Processing[0m[38;5;12m (http://www.ee.surrey.ac.uk/Research/CVSSP)[39m
|
||
[38;5;12m107. [39m[38;5;14m[1mTraffic Image Sequences and 'Marbled Block' Sequence[0m[38;5;12m (http://i21www.ira.uka.de/image_sequences) - thousands of frames of digitized traffic image sequences as well as the 'Marbled Block' sequence (grayscale images) (Formats: GIF)[39m
|
||
[38;5;12m108. [39m[38;5;14m[1mIAKS/KOGS[0m[38;5;12m (http://i21www.ira.uka.de)[39m
|
||
[38;5;12m110. [39m[38;5;14m[1mU Bern Face images[0m[38;5;12m (ftp://ftp.iam.unibe.ch/pub/Images/FaceImages) - hundreds of images (Formats: Sun rasterfile)[39m
|
||
[38;5;12m111. [39m[38;5;14m[1mU Michigan textures[0m[38;5;12m (ftp://freebie.engin.umich.edu/pub/misc/textures) (Formats: compressed raw)[39m
|
||
[38;5;12m112. [39m[38;5;14m[1mU Oulu wood and knots database[0m[38;5;12m (http://www.ee.oulu.fi/~olli/Projects/Lumber.Grading.html) - Includes classifications - 1000+ color images (Formats: ppm)[39m
|
||
[38;5;12m113. [39m[38;5;14m[1mUCID - an Uncompressed Colour Image Database[0m[38;5;12m (http://vision.doc.ntu.ac.uk/datasets/UCID/ucid.html) - a benchmark database for image retrieval with predefined ground truth. (Formats: tiff)[39m
|
||
[38;5;12m115. [39m[38;5;14m[1mUMass Vision Image Archive[0m[38;5;12m (http://vis-www.cs.umass.edu/~vislib/) - Large image database with aerial, space, stereo, medical images and more. (Formats: homebrew)[39m
|
||
[38;5;12m116. [39m[38;5;14m[1mUNC's 3D image database[0m[38;5;12m (ftp://sunsite.unc.edu/pub/academic/computer-science/virtual-reality/3d) - many images (Formats: GIF)[39m
|
||
[38;5;12m117. [39m[38;5;14m[1mUSF Range Image Data with Segmentation Ground Truth[0m[38;5;12m (http://marathon.csee.usf.edu/range/seg-comp/SegComp.html) - 80 image sets (Formats: Sun rasterimage)[39m
|
||
[38;5;12m118.[39m[38;5;12m [39m[38;5;14m[1mUniversity[0m[38;5;14m[1m [0m[38;5;14m[1mof[0m[38;5;14m[1m [0m[38;5;14m[1mOulu[0m[38;5;14m[1m [0m[38;5;14m[1mPhysics-based[0m[38;5;14m[1m [0m[38;5;14m[1mFace[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www.ee.oulu.fi/research/imag/color/pbfd.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mcontains[39m[38;5;12m [39m[38;5;12mcolor[39m[38;5;12m [39m[38;5;12mimages[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mfaces[39m[38;5;12m [39m[38;5;12munder[39m[38;5;12m [39m[38;5;12mdifferent[39m[38;5;12m [39m[38;5;12milluminants[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mcamera[39m[38;5;12m [39m[38;5;12mcalibration[39m[38;5;12m [39m[38;5;12mconditions[39m[38;5;12m [39m[38;5;12mas[39m[38;5;12m [39m[38;5;12mwell[39m[38;5;12m [39m[38;5;12mas[39m[38;5;12m [39m[38;5;12mskin[39m[38;5;12m [39m[38;5;12mspectral[39m[38;5;12m [39m
|
||
[38;5;12mreflectance[39m[38;5;12m [39m[38;5;12mmeasurements[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12meach[39m[38;5;12m [39m[38;5;12mperson.[39m
|
||
[38;5;12m119. [39m[38;5;14m[1mMachine Vision and Media Processing Unit[0m[38;5;12m (http://www.ee.oulu.fi/mvmp/)[39m
|
||
[38;5;12m121.[39m[38;5;12m [39m[38;5;14m[1mUniversity[0m[38;5;14m[1m [0m[38;5;14m[1mof[0m[38;5;14m[1m [0m[38;5;14m[1mOulu[0m[38;5;14m[1m [0m[38;5;14m[1mTexture[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www.outex.oulu.fi)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mDatabase[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m320[39m[38;5;12m [39m[38;5;12msurface[39m[38;5;12m [39m[38;5;12mtextures,[39m[38;5;12m [39m[38;5;12meach[39m[38;5;12m [39m[38;5;12mcaptured[39m[38;5;12m [39m[38;5;12munder[39m[38;5;12m [39m[38;5;12mthree[39m[38;5;12m [39m[38;5;12milluminants,[39m[38;5;12m [39m[38;5;12msix[39m[38;5;12m [39m[38;5;12mspatial[39m[38;5;12m [39m[38;5;12mresolutions[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mnine[39m[38;5;12m [39m[38;5;12mrotation[39m[38;5;12m [39m[38;5;12mangles.[39m[38;5;12m [39m[38;5;12mA[39m[38;5;12m [39m[38;5;12mset[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mtest[39m[38;5;12m [39m[38;5;12msuites[39m[38;5;12m [39m[38;5;12mis[39m[38;5;12m [39m[38;5;12malso[39m[38;5;12m [39m[38;5;12mprovided[39m[38;5;12m [39m[38;5;12mso[39m[38;5;12m [39m
|
||
[38;5;12mthat[39m[38;5;12m [39m[38;5;12mtexture[39m[38;5;12m [39m[38;5;12msegmentation,[39m[38;5;12m [39m[38;5;12mclassification,[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mretrieval[39m[38;5;12m [39m[38;5;12malgorithms[39m[38;5;12m [39m[38;5;12mcan[39m[38;5;12m [39m[38;5;12mbe[39m[38;5;12m [39m[38;5;12mtested[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mstandard[39m[38;5;12m [39m[38;5;12mmanner.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mbmp,[39m[38;5;12m [39m[38;5;12mras,[39m[38;5;12m [39m[38;5;12mxv)[39m
|
||
[38;5;12m122. [39m[38;5;14m[1mMachine Vision Group[0m[38;5;12m (http://www.ee.oulu.fi/mvg)[39m
|
||
[38;5;12m124. [39m[38;5;14m[1mUsenix face database[0m[38;5;12m (ftp://ftp.uu.net/published/usenix/faces) - Thousands of face images from many different sites (circa 994)[39m
|
||
[38;5;12m125.[39m[38;5;12m [39m[38;5;14m[1mView[0m[38;5;14m[1m [0m[38;5;14m[1mSphere[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://www-prima.inrialpes.fr/Prima/hall/view_sphere.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mImages[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m8[39m[38;5;12m [39m[38;5;12mobjects[39m[38;5;12m [39m[38;5;12mseen[39m[38;5;12m [39m[38;5;12mfrom[39m[38;5;12m [39m[38;5;12mmany[39m[38;5;12m [39m[38;5;12mdifferent[39m[38;5;12m [39m[38;5;12mview[39m[38;5;12m [39m[38;5;12mpoints.[39m[38;5;12m [39m[38;5;12mThe[39m[38;5;12m [39m[38;5;12mview[39m[38;5;12m [39m[38;5;12msphere[39m[38;5;12m [39m[38;5;12mis[39m[38;5;12m [39m[38;5;12msampled[39m[38;5;12m [39m[38;5;12musing[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mgeodesic[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12m172[39m[38;5;12m [39m[38;5;12mimages/sphere.[39m[38;5;12m [39m[38;5;12mTwo[39m[38;5;12m [39m[38;5;12msets[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mtraining[39m[38;5;12m [39m
|
||
[38;5;12mand[39m[38;5;12m [39m[38;5;12mtesting[39m[38;5;12m [39m[38;5;12mare[39m[38;5;12m [39m[38;5;12mavailable.[39m[38;5;12m [39m[38;5;12m(Formats:[39m[38;5;12m [39m[38;5;12mppm)[39m
|
||
[38;5;12m126. [39m[38;5;14m[1mPRIMA, GRAVIR[0m[38;5;12m (http://www-prima.inrialpes.fr/Prima/)[39m
|
||
[38;5;12m127. [39m[38;5;14m[1mVision-list Imagery Archive[0m[38;5;12m (ftp://ftp.vislist.com/IMAGERY/) - Many images, many formats[39m
|
||
[38;5;12m128. [39m[38;5;14m[1mWiry Object Recognition Database[0m[38;5;12m (http://www.cs.cmu.edu/~owenc/word.htm) - Thousands of images of a cart, ladder, stool, bicycle, chairs, and cluttered scenes with ground truth labelings of edges and regions. (Formats: jpg)[39m
|
||
[38;5;12m129. [39m[38;5;14m[1m3D Vision Group[0m[38;5;12m (http://www.cs.cmu.edu/0.000000E+003dvision/)[39m
|
||
[38;5;12m131. [39m[38;5;14m[1mYale Face Database[0m[38;5;12m (http://cvc.yale.edu/projects/yalefaces/yalefaces.html) - 165 images (15 individuals) with different lighting, expression, and occlusion configurations.[39m
|
||
[38;5;12m132. [39m[38;5;14m[1mYale Face Database B[0m[38;5;12m (http://cvc.yale.edu/projects/yalefacesB/yalefacesB.html) - 5760 single light source images of 10 subjects each seen under 576 viewing conditions (9 poses x 64 illumination conditions). (Formats: PGM)[39m
|
||
[38;5;12m133. [39m[38;5;14m[1mCenter for Computational Vision and Control[0m[38;5;12m (http://cvc.yale.edu/)[39m
|
||
[38;5;12m134. [39m[38;5;14m[1mDeepMind QA Corpus[0m[38;5;12m (https://github.com/deepmind/rc-data) - Textual QA corpus from CNN and DailyMail. More than 300K documents in total. [39m[38;5;14m[1mPaper[0m[38;5;12m (http://arxiv.org/abs/1506.03340) for reference.[39m
|
||
[38;5;12m135. [39m[38;5;14m[1mYouTube-8M Dataset[0m[38;5;12m (https://research.google.com/youtube8m/) - YouTube-8M is a large-scale labeled video dataset that consists of 8 million YouTube video IDs and associated labels from a diverse vocabulary of 4800 visual entities.[39m
|
||
[38;5;12m136. [39m[38;5;14m[1mOpen Images dataset[0m[38;5;12m (https://github.com/openimages/dataset) - Open Images is a dataset of ~9 million URLs to images that have been annotated with labels spanning over 6000 categories.[39m
|
||
[38;5;12m137. [39m[38;5;14m[1mVisual Object Classes Challenge 2012 (VOC2012)[0m[38;5;12m (http://host.robots.ox.ac.uk/pascal/VOC/voc2012/index.html#devkit) - VOC2012 dataset containing 12k images with 20 annotated classes for object detection and segmentation.[39m
|
||
[38;5;12m138.[39m[38;5;12m [39m[38;5;14m[1mFashion-MNIST[0m[38;5;12m [39m[38;5;12m(https://github.com/zalandoresearch/fashion-mnist)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mMNIST[39m[38;5;12m [39m[38;5;12mlike[39m[38;5;12m [39m[38;5;12mfashion[39m[38;5;12m [39m[38;5;12mproduct[39m[38;5;12m [39m[38;5;12mdataset[39m[38;5;12m [39m[38;5;12mconsisting[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mtraining[39m[38;5;12m [39m[38;5;12mset[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m60,000[39m[38;5;12m [39m[38;5;12mexamples[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mtest[39m[38;5;12m [39m[38;5;12mset[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12m10,000[39m[38;5;12m [39m[38;5;12mexamples.[39m[38;5;12m [39m[38;5;12mEach[39m[38;5;12m [39m[38;5;12mexample[39m[38;5;12m [39m[38;5;12mis[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12m28x28[39m[38;5;12m [39m[38;5;12mgrayscale[39m[38;5;12m [39m[38;5;12mimage,[39m[38;5;12m [39m
|
||
[38;5;12massociated[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mlabel[39m[38;5;12m [39m[38;5;12mfrom[39m[38;5;12m [39m[38;5;12m10[39m[38;5;12m [39m[38;5;12mclasses.[39m
|
||
[38;5;12m139.[39m[38;5;12m [39m[38;5;14m[1mLarge-scale[0m[38;5;14m[1m [0m[38;5;14m[1mFashion[0m[38;5;14m[1m [0m[38;5;14m[1m(DeepFashion)[0m[38;5;14m[1m [0m[38;5;14m[1mDatabase[0m[38;5;12m [39m[38;5;12m(http://mmlab.ie.cuhk.edu.hk/projects/DeepFashion.html)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mContains[39m[38;5;12m [39m[38;5;12mover[39m[38;5;12m [39m[38;5;12m800,000[39m[38;5;12m [39m[38;5;12mdiverse[39m[38;5;12m [39m[38;5;12mfashion[39m[38;5;12m [39m[38;5;12mimages.[39m[38;5;12m [39m[38;5;12mEach[39m[38;5;12m [39m[38;5;12mimage[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mthis[39m[38;5;12m [39m[38;5;12mdataset[39m[38;5;12m [39m[38;5;12mis[39m[38;5;12m [39m[38;5;12mlabeled[39m[38;5;12m [39m[38;5;12mwith[39m[38;5;12m [39m[38;5;12m50[39m[38;5;12m [39m[38;5;12mcategories,[39m[38;5;12m [39m[38;5;12m1,000[39m[38;5;12m [39m[38;5;12mdescriptive[39m[38;5;12m [39m
|
||
[38;5;12mattributes,[39m[38;5;12m [39m[38;5;12mbounding[39m[38;5;12m [39m[38;5;12mbox[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mclothing[39m[38;5;12m [39m[38;5;12mlandmarks[39m
|
||
[38;5;12m140. [39m[38;5;14m[1mFakeNewsCorpus[0m[38;5;12m (https://github.com/several27/FakeNewsCorpus) - Contains about 10 million news articles classified using [39m[38;5;14m[1mopensources.co[0m[38;5;12m (http://opensources.co) types[39m
|
||
[38;5;12m141. [39m[38;5;14m[1mLLVIP[0m[38;5;12m (https://github.com/bupt-ai-cz/LLVIP) - 15488 visible-infrared paired images (30976 images) for low-light vision research, [39m[38;5;14m[1mProject_Page[0m[38;5;12m (https://bupt-ai-cz.github.io/LLVIP/)[39m
|
||
[38;5;12m142. [39m[38;5;14m[1mMSDA[0m[38;5;12m (https://github.com/bupt-ai-cz/Meta-SelfLearning) - Over over 5 million images from 5 different domains for multi-source ocr/text recognition DA research, [39m[38;5;14m[1mProject_Page[0m[38;5;12m (https://bupt-ai-cz.github.io/Meta-SelfLearning/)[39m
|
||
[38;5;12m143.[39m[38;5;12m [39m[38;5;14m[1mSANAD:[0m[38;5;14m[1m [0m[38;5;14m[1mSingle-Label[0m[38;5;14m[1m [0m[38;5;14m[1mArabic[0m[38;5;14m[1m [0m[38;5;14m[1mNews[0m[38;5;14m[1m [0m[38;5;14m[1mArticles[0m[38;5;14m[1m [0m[38;5;14m[1mDataset[0m[38;5;14m[1m [0m[38;5;14m[1mfor[0m[38;5;14m[1m [0m[38;5;14m[1mAutomatic[0m[38;5;14m[1m [0m[38;5;14m[1mText[0m[38;5;14m[1m [0m[38;5;14m[1mCategorization[0m[38;5;12m [39m[38;5;12m(https://data.mendeley.com/datasets/57zpx667y9/2)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mSANAD[39m[38;5;12m [39m[38;5;12mDataset[39m[38;5;12m [39m[38;5;12mis[39m[38;5;12m [39m[38;5;12ma[39m[38;5;12m [39m[38;5;12mlarge[39m[38;5;12m [39m[38;5;12mcollection[39m[38;5;12m [39m[38;5;12mof[39m[38;5;12m [39m[38;5;12mArabic[39m[38;5;12m [39m[38;5;12mnews[39m[38;5;12m [39m[38;5;12marticles[39m[38;5;12m [39m[38;5;12mthat[39m[38;5;12m [39m[38;5;12mcan[39m[38;5;12m [39m[38;5;12mbe[39m[38;5;12m [39m[38;5;12mused[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mdifferent[39m[38;5;12m [39m[38;5;12mArabic[39m
|
||
[38;5;12mNLP[39m[38;5;12m [39m[38;5;12mtasks[39m[38;5;12m [39m[38;5;12msuch[39m[38;5;12m [39m[38;5;12mas[39m[38;5;12m [39m[38;5;12mText[39m[38;5;12m [39m[38;5;12mClassification[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mWord[39m[38;5;12m [39m[38;5;12mEmbedding.[39m[38;5;12m [39m[38;5;12mThe[39m[38;5;12m [39m[38;5;12marticles[39m[38;5;12m [39m[38;5;12mwere[39m[38;5;12m [39m[38;5;12mcollected[39m[38;5;12m [39m[38;5;12musing[39m[38;5;12m [39m[38;5;12mPython[39m[38;5;12m [39m[38;5;12mscripts[39m[38;5;12m [39m[38;5;12mwritten[39m[38;5;12m [39m[38;5;12mspecifically[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12mthree[39m[38;5;12m [39m[38;5;12mpopular[39m[38;5;12m [39m[38;5;12mnews[39m[38;5;12m [39m[38;5;12mwebsites:[39m[38;5;12m [39m[38;5;12mAlKhaleej,[39m[38;5;12m [39m[38;5;12mAlArabiya[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mAkhbarona.[39m[38;5;12m [39m
|
||
[38;5;12m144.[39m[38;5;12m [39m[38;5;14m[1mReferit3D[0m[38;5;12m [39m[38;5;12m(https://referit3d.github.io)[39m[38;5;12m [39m[38;5;12m-[39m[38;5;12m [39m[38;5;12mTwo[39m[38;5;12m [39m[38;5;12mlarge-scale[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mcomplementary[39m[38;5;12m [39m[38;5;12mvisio-linguistic[39m[38;5;12m [39m[38;5;12mdatasets[39m[38;5;12m [39m[38;5;12m(aka[39m[38;5;12m [39m[38;5;12mNr3D[39m[38;5;12m [39m[38;5;12mand[39m[38;5;12m [39m[38;5;12mSr3D)[39m[38;5;12m [39m[38;5;12mfor[39m[38;5;12m [39m[38;5;12midentifying[39m[38;5;12m [39m[38;5;12mfine-grained[39m[38;5;12m [39m[38;5;12m3D[39m[38;5;12m [39m[38;5;12mobjects[39m[38;5;12m [39m[38;5;12min[39m[38;5;12m [39m[38;5;12mScanNet[39m[38;5;12m [39m[38;5;12mscenes.[39m[38;5;12m [39m[38;5;12mNr3D[39m[38;5;12m [39m[38;5;12mcontains[39m[38;5;12m [39m[38;5;12m41.5K[39m[38;5;12m [39m[38;5;12mnatural,[39m[38;5;12m [39m[38;5;12mfree-form[39m[38;5;12m [39m[38;5;12mutterances,[39m
|
||
[38;5;12mand[39m[38;5;12m [39m[38;5;12mSr3d[39m[38;5;12m [39m[38;5;12mcontains[39m[38;5;12m [39m[38;5;12m83.5K[39m[38;5;12m [39m[38;5;12mtemplate-based[39m[38;5;12m [39m[38;5;12mutterances.[39m
|
||
[38;5;12m145. [39m[38;5;14m[1mSQuAD[0m[38;5;12m (https://rajpurkar.github.io/SQuAD-explorer/) - Stanford released ~100,000 English QA pairs and ~50,000 unanswerable questions[39m
|
||
[38;5;12m146. [39m[38;5;14m[1mFQuAD[0m[38;5;12m (https://fquad.illuin.tech/) - ~25,000 French QA pairs released by Illuin Technology[39m
|
||
[38;5;12m147. [39m[38;5;14m[1mGermanQuAD and GermanDPR[0m[38;5;12m (https://www.deepset.ai/germanquad) - deepset released ~14,000 German QA pairs[39m
|
||
[38;5;12m148. [39m[38;5;14m[1mSberQuAD[0m[38;5;12m (https://github.com/annnyway/QA-for-Russian) - Sberbank released ~90,000 Russian QA pairs[39m
|
||
[38;5;12m149. [39m[38;5;14m[1mArtEmis[0m[38;5;12m (http://artemisdataset.org/) - Contains 450K affective annotations of emotional responses and linguistic explanations for 80,000 artworks of WikiArt.[39m
|
||
|
||
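
Many of the benchmarks above ship with ready-made loaders. The snippet below is a minimal, illustrative sketch (not part of the original list) of pulling Fashion-MNIST (item 138) programmatically; it assumes the third-party `torch` and `torchvision` packages are installed, and the `./data` directory is an arbitrary choice.

```python
# Minimal sketch: loading Fashion-MNIST (item 138) with torchvision.
# Assumes `torch` and `torchvision` are installed; torchvision manages
# the file layout under ./data itself.
import torch
from torchvision import datasets, transforms

# Download the 60,000-example training split and the 10,000-example test
# split; each example is a 28x28 grayscale image with a label from 10 classes.
transform = transforms.ToTensor()
train_set = datasets.FashionMNIST(root="./data", train=True, download=True, transform=transform)
test_set = datasets.FashionMNIST(root="./data", train=False, download=True, transform=transform)

# Wrap the training split in a DataLoader for shuffled mini-batch iteration.
train_loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([64, 1, 28, 28])
print(labels[:8])    # first eight class indices in the batch
```

The same `datasets.*` pattern covers MNIST, CIFAR-10 and the other torchvision-hosted sets.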

Conferences

1. CVPR - IEEE Conference on Computer Vision and Pattern Recognition (http://cvpr2018.thecvf.com)
2. AAMAS - International Joint Conference on Autonomous Agents and Multiagent Systems (http://celweb.vuse.vanderbilt.edu/aamas18/)
3. IJCAI - International Joint Conference on Artificial Intelligence (https://www.ijcai-18.org/)
4. ICML - International Conference on Machine Learning (https://icml.cc)
5. ECML - European Conference on Machine Learning (http://www.ecmlpkdd2018.org)
6. KDD - Knowledge Discovery and Data Mining (http://www.kdd.org/kdd2018/)
7. NIPS - Neural Information Processing Systems (https://nips.cc/Conferences/2018)
8. O'Reilly AI Conference - O'Reilly Artificial Intelligence Conference (https://conferences.oreilly.com/artificial-intelligence/ai-ny)
9. ICDM - International Conference on Data Mining (https://www.waset.org/conference/2018/07/istanbul/ICDM)
10. ICCV - International Conference on Computer Vision (http://iccv2017.thecvf.com)
11. AAAI - Association for the Advancement of Artificial Intelligence (https://www.aaai.org)
12. MAIS - Montreal AI Symposium (https://montrealaisymposium.wordpress.com/)
[38;2;255;187;0m[4mFrameworks[0m
|
||
|
||
[38;5;12m1. [39m[38;5;14m[1mCaffe[0m[38;5;12m (http://caffe.berkeleyvision.org/) [39m
|
||
[38;5;12m2. [39m[38;5;14m[1mTorch7[0m[38;5;12m (http://torch.ch/)[39m
|
||
[38;5;12m3. [39m[38;5;14m[1mTheano[0m[38;5;12m (http://deeplearning.net/software/theano/)[39m
|
||
[38;5;12m4. [39m[38;5;14m[1mcuda-convnet[0m[38;5;12m (https://code.google.com/p/cuda-convnet2/)[39m
|
||
[38;5;12m5. [39m[38;5;14m[1mconvetjs[0m[38;5;12m (https://github.com/karpathy/convnetjs)[39m
|
||
[38;5;12m5. [39m[38;5;14m[1mCcv[0m[38;5;12m (http://libccv.org/doc/doc-convnet/)[39m
|
||
[38;5;12m6. [39m[38;5;14m[1mNuPIC[0m[38;5;12m (http://numenta.org/nupic.html)[39m
|
||
[38;5;12m7. [39m[38;5;14m[1mDeepLearning4J[0m[38;5;12m (http://deeplearning4j.org/)[39m
|
||
[38;5;12m8. [39m[38;5;14m[1mBrain[0m[38;5;12m (https://github.com/harthur/brain)[39m
|
||
[38;5;12m9. [39m[38;5;14m[1mDeepLearnToolbox[0m[38;5;12m (https://github.com/rasmusbergpalm/DeepLearnToolbox)[39m
|
||
[38;5;12m10. [39m[38;5;14m[1mDeepnet[0m[38;5;12m (https://github.com/nitishsrivastava/deepnet)[39m
|
||
[38;5;12m11. [39m[38;5;14m[1mDeeppy[0m[38;5;12m (https://github.com/andersbll/deeppy)[39m
|
||
[38;5;12m12. [39m[38;5;14m[1mJavaNN[0m[38;5;12m (https://github.com/ivan-vasilev/neuralnetworks)[39m
|
||
[38;5;12m13. [39m[38;5;14m[1mhebel[0m[38;5;12m (https://github.com/hannes-brt/hebel)[39m
|
||
[38;5;12m14. [39m[38;5;14m[1mMocha.jl[0m[38;5;12m (https://github.com/pluskid/Mocha.jl)[39m
|
||
[38;5;12m15. [39m[38;5;14m[1mOpenDL[0m[38;5;12m (https://github.com/guoding83128/OpenDL)[39m
|
||
[38;5;12m16. [39m[38;5;14m[1mcuDNN[0m[38;5;12m (https://developer.nvidia.com/cuDNN)[39m
|
||
[38;5;12m17. [39m[38;5;14m[1mMGL[0m[38;5;12m (http://melisgl.github.io/mgl-pax-world/mgl-manual.html)[39m
|
||
[38;5;12m18. [39m[38;5;14m[1mKnet.jl[0m[38;5;12m (https://github.com/denizyuret/Knet.jl)[39m
|
||
[38;5;12m19. [39m[38;5;14m[1mNvidia DIGITS - a web app based on Caffe[0m[38;5;12m (https://github.com/NVIDIA/DIGITS)[39m
|
||
[38;5;12m20. [39m[38;5;14m[1mNeon - Python based Deep Learning Framework[0m[38;5;12m (https://github.com/NervanaSystems/neon)[39m
|
||
[38;5;12m21. [39m[38;5;14m[1mKeras - Theano based Deep Learning Library[0m[38;5;12m (http://keras.io)[39m
|
||
[38;5;12m22. [39m[38;5;14m[1mChainer - A flexible framework of neural networks for deep learning[0m[38;5;12m (http://chainer.org/)[39m
|
||
[38;5;12m23. [39m[38;5;14m[1mRNNLM Toolkit[0m[38;5;12m (http://rnnlm.org/)[39m
|
||
[38;5;12m24. [39m[38;5;14m[1mRNNLIB - A recurrent neural network library[0m[38;5;12m (http://sourceforge.net/p/rnnl/wiki/Home/)[39m
|
||
[38;5;12m25. [39m[38;5;14m[1mchar-rnn[0m[38;5;12m (https://github.com/karpathy/char-rnn)[39m
|
||
[38;5;12m26. [39m[38;5;14m[1mMatConvNet: CNNs for MATLAB[0m[38;5;12m (https://github.com/vlfeat/matconvnet)[39m
|
||
[38;5;12m27. [39m[38;5;14m[1mMinerva - a fast and flexible tool for deep learning on multi-GPU[0m[38;5;12m (https://github.com/dmlc/minerva)[39m
|
||
[38;5;12m28. [39m[38;5;14m[1mBrainstorm - Fast, flexible and fun neural networks.[0m[38;5;12m (https://github.com/IDSIA/brainstorm)[39m
|
||
[38;5;12m29. [39m[38;5;14m[1mTensorflow - Open source software library for numerical computation using data flow graphs[0m[38;5;12m (https://github.com/tensorflow/tensorflow)[39m
|
||
[38;5;12m30. [39m[38;5;14m[1mDMTK - Microsoft Distributed Machine Learning Tookit[0m[38;5;12m (https://github.com/Microsoft/DMTK)[39m
|
||
31. [Scikit Flow - Simplified interface for TensorFlow (mimicking Scikit Learn)](https://github.com/google/skflow)
32. [MXnet - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning framework](https://github.com/apache/incubator-mxnet)
33. [Veles - Samsung Distributed machine learning platform](https://github.com/Samsung/veles)
34. [Marvin - A Minimalist GPU-only N-Dimensional ConvNets Framework](https://github.com/PrincetonVision/marvin)
35. [Apache SINGA - A General Distributed Deep Learning Platform](http://singa.incubator.apache.org/)
36. [DSSTNE - Amazon's library for building Deep Learning models](https://github.com/amznlabs/amazon-dsstne)
37. [SyntaxNet - Google's syntactic parser - A TensorFlow dependency library](https://github.com/tensorflow/models/tree/master/syntaxnet)
38. [mlpack - A scalable Machine Learning library](http://mlpack.org/)
39. [Torchnet - Torch based Deep Learning Library](https://github.com/torchnet/torchnet)
40. [Paddle - PArallel Distributed Deep LEarning by Baidu](https://github.com/baidu/paddle)
41. [NeuPy - Theano based Python library for ANN and Deep Learning](http://neupy.com)
42. [Lasagne - a lightweight library to build and train neural networks in Theano](https://github.com/Lasagne/Lasagne)
43. [nolearn - wrappers and abstractions around existing neural network libraries, most notably Lasagne](https://github.com/dnouri/nolearn)
44. [Sonnet - a library for constructing neural networks by Google's DeepMind](https://github.com/deepmind/sonnet)
45. [PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration](https://github.com/pytorch/pytorch) (see the dynamic-graph sketch after this list)
46. [CNTK - Microsoft Cognitive Toolkit](https://github.com/Microsoft/CNTK)
47. [Serpent.AI - Game agent framework: Use any video game as a deep learning sandbox](https://github.com/SerpentAI/SerpentAI)
48. [Caffe2 - A New Lightweight, Modular, and Scalable Deep Learning Framework](https://github.com/caffe2/caffe2)
49. [deeplearn.js - Hardware-accelerated deep learning and linear algebra (NumPy) library for the web](https://github.com/PAIR-code/deeplearnjs)
50. [TVM - End to End Deep Learning Compiler Stack for CPUs, GPUs and specialized accelerators](https://tvm.ai/)
51. [Coach - Reinforcement Learning Coach by Intel® AI Lab](https://github.com/NervanaSystems/coach)
52. [albumentations - A fast and framework agnostic image augmentation library](https://github.com/albu/albumentations)
53. [Neuraxle - A general-purpose ML pipelining framework](https://github.com/Neuraxio/Neuraxle)
54. [Catalyst: High-level utils for PyTorch DL & RL research, developed with a focus on reproducibility, fast experimentation and reuse of code and ideas](https://github.com/catalyst-team/catalyst)
55. [garage - A toolkit for reproducible reinforcement learning research](https://github.com/rlworkgroup/garage)
56. [Detecto - Train and run object detection models with 5-10 lines of code](https://github.com/alankbi/detecto) (see the sketch after this list)
57. [Karate Club - An unsupervised machine learning library for graph structured data](https://github.com/benedekrozemberczki/karateclub)
58. [Synapses - A lightweight library for neural networks that runs anywhere](https://github.com/mrdimosthenis/Synapses)
59. [TensorForce - A TensorFlow library for applied reinforcement learning](https://github.com/reinforceio/tensorforce)
60. [Hopsworks - A Feature Store for ML and Data-Intensive AI](https://github.com/logicalclocks/hopsworks)
61. [Feast - A Feature Store for ML for GCP by Gojek/Google](https://github.com/gojek/feast)
62. [PyTorch Geometric Temporal - Representation learning on dynamic graphs](https://github.com/benedekrozemberczki/pytorch_geometric_temporal)
63. [lightly - A computer vision framework for self-supervised learning](https://github.com/lightly-ai/lightly)
64. [Trax - Deep Learning with Clear Code and Speed](https://github.com/google/trax)
65. [Flax - a neural network ecosystem for JAX that is designed for flexibility](https://github.com/google/flax)
66. [QuickVision](https://github.com/Quick-AI/quickvision)
67. [Colossal-AI - An Integrated Large-scale Model Training System with Efficient Parallelization Techniques](https://github.com/hpcaitech/ColossalAI)
68. [haystack: an open-source neural search framework](https://haystack.deepset.ai/docs/intro)
69. [Maze](https://github.com/enlite-ai/maze) - Application-oriented deep reinforcement learning framework addressing real-world decision problems.
70. [InsNet - A neural network library for building instance-dependent NLP models with padding-free dynamic batching](https://github.com/chncwang/InsNet)
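
For item 45, a minimal sketch of the dynamic-graph idea PyTorch's description refers to: the computation graph is rebuilt on every forward pass, so plain Python control flow can depend on the data and autograd still tracks gradients. The tensor sizes and the loop bound here are arbitrary illustrations, not anything the library prescribes.

```python
import torch

w = torch.randn(8, 8, requires_grad=True)
x = torch.randn(8)

# The graph is rebuilt on every forward pass, so ordinary Python control
# flow can depend on the data: here the number of recurrent steps is
# derived from the input itself (capped so the loop always terminates).
h = x
for _ in range(int(x.abs().sum().item()) % 10 + 1):
    h = torch.tanh(w @ h)

loss = h.sum()
loss.backward()          # autograd differentiates the path actually taken
print(w.grad.shape)      # torch.Size([8, 8])
```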
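And for item 56, a rough sketch of the "5-10 lines of code" workflow Detecto advertises, based on the usage shown in its README; the folder, image paths and class names are hypothetical placeholders, and the dataset folder is expected to hold images plus Pascal-VOC-style XML annotations.

```python
from detecto.core import Dataset, Model
from detecto.utils import read_image

dataset = Dataset('training_images/')    # hypothetical folder of images + XML labels
model = Model(['dog', 'cat'])            # fine-tunes a pretrained Faster R-CNN
model.fit(dataset)

image = read_image('test.jpg')           # hypothetical test image
labels, boxes, scores = model.predict_top(image)
print(labels, scores)
```
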
### Tools

1. [Nebullvm](https://github.com/nebuly-ai/nebullvm) - Easy-to-use library to boost deep learning inference leveraging multiple deep learning compilers.
2. [Netron](https://github.com/lutzroeder/netron) - Visualizer for deep learning and machine learning models
3. [Jupyter Notebook](http://jupyter.org) - Web-based notebook environment for interactive computing
4. [TensorBoard](https://github.com/tensorflow/tensorboard) - TensorFlow's Visualization Toolkit
5. [Visual Studio Tools for AI](https://www.microsoft.com/en-us/research/project/visual-studio-code-tools-ai/) - Develop, debug and deploy deep learning and AI solutions
6. [TensorWatch](https://github.com/microsoft/tensorwatch) - Debugging and visualization for deep learning
7. [ML Workspace](https://github.com/ml-tooling/ml-workspace) - All-in-one web-based IDE for machine learning and data science.
8. [dowel](https://github.com/rlworkgroup/dowel) - A little logger for machine learning research. Log any object to the console, CSVs, TensorBoard, text log files, and more with just one call to `logger.log()` (see the logging sketch after this list)
9. [Neptune](https://neptune.ai/) - Lightweight tool for experiment tracking and results visualization.
10. [CatalyzeX](https://chrome.google.com/webstore/detail/code-finder-for-research/aikkeehnlfpamidigaffhfmgbkdeheil) - Browser extension ([Chrome](https://chrome.google.com/webstore/detail/code-finder-for-research/aikkeehnlfpamidigaffhfmgbkdeheil) and [Firefox](https://addons.mozilla.org/en-US/firefox/addon/code-finder-catalyzex/)) that automatically finds and links to code implementations for ML papers anywhere online: Google, Twitter, Arxiv, Scholar, etc.
11. [Determined](https://github.com/determined-ai/determined) - Deep learning training platform with integrated support for distributed training, hyperparameter tuning, smart GPU scheduling, experiment tracking, and a model registry.
12. [DAGsHub](https://dagshub.com/) - Community platform for Open Source ML – Manage experiments, data & models and create collaborative ML projects easily.
13. [hub](https://github.com/activeloopai/Hub) - Fastest unstructured dataset management for TensorFlow/PyTorch by activeloop.ai. Stream & version-control data. Converts large data into a single numpy-like array on the cloud, accessible on any machine.
14. [DVC](https://dvc.org/) - DVC is built to make ML models shareable and reproducible. It is designed to handle large files, data sets, machine learning models, and metrics as well as code (see the Python API sketch after this list).
15. [CML](https://cml.dev/) - CML helps you bring your favorite DevOps tools to machine learning.
16. [MLEM](https://mlem.ai/) - MLEM is a tool to easily package, deploy and serve Machine Learning models. It seamlessly supports a variety of scenarios like real-time serving and batch processing.
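
For item 8, a minimal sketch following the usage documented in dowel's README; the metric names and values are arbitrary.

```python
import dowel
from dowel import logger, tabular

# Send output to stdout; CsvOutput, TextOutput and TensorBoardOutput
# can be added the same way.
logger.add_output(dowel.StdOutput())

logger.log('Starting the experiment...')      # any printable object works
for step in range(3):
    tabular.record('step', step)              # accumulate key/value metrics
    tabular.record('loss', 1.0 / (step + 1))
    logger.log(tabular)                       # one call logs the whole table
    logger.dump_all()                         # flush all outputs

logger.remove_all()
```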
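And for item 14, a minimal sketch of DVC's `dvc.api` Python API for reading an artifact that DVC tracks in a Git repository, without checking the whole project out; the repo URL, file path and tag below are hypothetical placeholders.

```python
import dvc.api

with dvc.api.open(
    'data/model.pkl',                                   # hypothetical path
    repo='https://github.com/example/example-project',  # hypothetical repo
    rev='v1.0',                                         # any tag/branch/commit
    mode='rb',                                          # binary artifact
) as f:
    payload = f.read()

print(len(payload), 'bytes')
```
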
### Miscellaneous

1. [Caffe Webinar](http://on-demand-gtc.gputechconf.com/gtcnew/on-demand-gtc.php?searchByKeyword=shelhamer&searchItems=&sessionTopic=&sessionEvent=4&sessionYear=2014&sessionFormat=&submit=&select=+)
2. [100 Best Github Resources in Github for DL](http://meta-guide.com/software-meta-guide/100-best-github-deep-learning/)
3. [Word2Vec](https://code.google.com/p/word2vec/)
4. [Caffe DockerFile](https://github.com/tleyden/docker/tree/master/caffe)
5. [TorontoDeepLearning convnet](https://github.com/TorontoDeepLearning/convnet)
6. [gfx.js](https://github.com/clementfarabet/gfx.js)
7. [Torch7 Cheat sheet](https://github.com/torch/torch7/wiki/Cheatsheet)
8. [Misc from MIT's 'Advanced Natural Language Processing' course](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-864-advanced-natural-language-processing-fall-2005/)
9. [Misc from MIT's 'Machine Learning' course](http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-867-machine-learning-fall-2006/lecture-notes/)
10. [Misc from MIT's 'Networks for Learning: Regression and Classification' course](http://ocw.mit.edu/courses/brain-and-cognitive-sciences/9-520-a-networks-for-learning-regression-and-classification-spring-2001/)
11. [Misc from MIT's 'Neural Coding and Perception of Sound' course](http://ocw.mit.edu/courses/health-sciences-and-technology/hst-723j-neural-coding-and-perception-of-sound-spring-2005/index.htm)
12. [Implementing a Distributed Deep Learning Network over Spark](http://www.datasciencecentral.com/profiles/blogs/implementing-a-distributed-deep-learning-network-over-spark)
13. [An AI that learns to play chess using deep learning](https://github.com/erikbern/deep-pink)
14. [Reproducing the results of "Playing Atari with Deep Reinforcement Learning" by DeepMind](https://github.com/kristjankorjus/Replicating-DeepMind)
15. [Wiki2Vec. Getting Word2vec vectors for entities and words from Wikipedia Dumps](https://github.com/idio/wiki2vec)
16. [The original code from the DeepMind article + tweaks](https://github.com/kuz/DeepMind-Atari-Deep-Q-Learner)
17. [Google deepdream - Neural Network art](https://github.com/google/deepdream)
18. [An efficient, batched LSTM](https://gist.github.com/karpathy/587454dc0146a6ae21fc)
19. [A recurrent neural network designed to generate classical music](https://github.com/hexahedria/biaxial-rnn-music-composition)
20. [Memory Networks Implementations - Facebook](https://github.com/facebook/MemNN)
21. [Face recognition with Google's FaceNet deep neural network](https://github.com/cmusatyalab/openface)
22. [Basic digit recognition neural network](https://github.com/joeledenberg/DigitRecognition)
23. [Emotion Recognition API Demo - Microsoft](https://www.projectoxford.ai/demo/emotion#detection)
24. [Proof of concept for loading Caffe models in TensorFlow](https://github.com/ethereon/caffe-tensorflow)
25. [YOLO: Real-Time Object Detection](http://pjreddie.com/darknet/yolo/#webcam)
26. [YOLO: Practical Implementation using Python](https://www.analyticsvidhya.com/blog/2018/12/practical-guide-object-detection-yolo-framewor-python/)
27. [AlphaGo - A replication of DeepMind's 2016 Nature publication, "Mastering the game of Go with deep neural networks and tree search"](https://github.com/Rochester-NRT/AlphaGo)
28. [Machine Learning for Software Engineers](https://github.com/ZuzooVn/machine-learning-for-software-engineers)
29. [Machine Learning is Fun!](https://medium.com/@ageitgey/machine-learning-is-fun-80ea3ec3c471#.oa4rzez3g)
30. [Siraj Raval's Deep Learning tutorials](https://www.youtube.com/channel/UCWN3xxRkmTPmbKwht9FuE5A)
31. [Dockerface](https://github.com/natanielruiz/dockerface) - Easy to install and use deep learning Faster R-CNN face detection for images and video in a docker container.
32. [Awesome Deep Learning Music](https://github.com/ybayle/awesome-deep-learning-music) - Curated list of articles related to deep learning scientific research applied to music
33. [Awesome Graph Embedding](https://github.com/benedekrozemberczki/awesome-graph-embedding) - Curated list of articles related to deep learning scientific research on graph structured data at the graph level.
34. [Awesome Network Embedding](https://github.com/chihming/awesome-network-embedding) - Curated list of articles related to deep learning scientific research on graph structured data at the node level.
35. [Microsoft Recommenders](https://github.com/Microsoft/Recommenders) contains examples, utilities and best practices for building recommendation systems. Implementations of several state-of-the-art algorithms are provided for self-study and customization in your own applications.
36. [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) - Andrej Karpathy's blog post about using RNNs to generate text (see the character-level sketch after this list).
37. [Ladder Network](https://github.com/divamgupta/ladder_network_keras) - Keras Implementation of Ladder Network for Semi-Supervised Learning
38. [toolbox: Curated list of ML libraries](https://github.com/amitness/toolbox)
39. [CNN Explainer](https://poloclub.github.io/cnn-explainer/)
40. [AI Expert Roadmap](https://github.com/AMAI-GmbH/AI-Expert-Roadmap) - Roadmap to becoming an Artificial Intelligence Expert
41. [Awesome Drug Interactions, Synergy, and Polypharmacy Prediction](https://github.com/AstraZeneca/awesome-polipharmacy-side-effect-prediction/)
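
Item 36 above describes character-level text generation with RNNs. Here is a deliberately tiny sketch of that idea, assuming PyTorch: a small recurrent model is trained to predict the next character, then sampled one character at a time. The toy corpus, model size and training length are arbitrary illustrations; this is not Karpathy's own code.

```python
import torch
import torch.nn as nn

text = "hello deep learning "                      # toy corpus (arbitrary)
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
ids = torch.tensor([[stoi[c] for c in text]])      # shape (1, T)

for _ in range(200):                               # next-character objective
    logits, _ = model(ids[:, :-1])
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), ids[:, 1:].reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: feed each predicted character back in as the next input.
x, h, out = ids[:, :1], None, [text[0]]
for _ in range(20):
    logits, h = model(x, h)
    x = torch.multinomial(logits[:, -1].softmax(-1), 1)
    out.append(chars[x.item()])
print("".join(out))
```
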
---

### Contributing

Have anything in mind that you think is awesome and would fit in this list? Feel free to send a [pull request](https://github.com/ashara12/awesome-deeplearning/pulls).
---

### License

[![CC0](http://i.creativecommons.org/p/zero/1.0/88x31.png)](http://creativecommons.org/publicdomain/zero/1.0/)
To the extent possible under law, [Christos Christofidis](https://linkedin.com/in/Christofidis) has waived all copyright and related or neighboring rights to this work.