Awesome Deep Vision !Awesome (https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg) (https://github.com/sindresorhus/awesome)

A curated list of deep learning resources for computer vision, inspired by awesome-php (https://github.com/ziadoz/awesome-php) and awesome-computer-vision (https://github.com/jbhuang0604/awesome-computer-vision).

Maintainers - Jiwon Kim (https://github.com/kjw0612), Heesoo Myeong (https://github.com/hmyeong), Myungsub Choi (https://github.com/myungsub), Jung Kwon Lee (https://github.com/deruci), Taeksoo Kim (https://github.com/jazzsaxmafia)

The project is not actively maintained.

Contributing
Please feel free to send pull requests (https://github.com/kjw0612/awesome-deep-vision/pulls) to add papers.

!Join the chat at https://gitter.im/kjw0612/awesome-deep-vision (https://badges.gitter.im/Join%20Chat.svg) (https://gitter.im/kjw0612/awesome-deep-vision?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)

Sharing
+ Share on Twitter (http://twitter.com/home?status=http://jiwonkim.org/awesome-deep-vision%0ADeep Learning Resources for Computer Vision)
+ Share on Facebook (http://www.facebook.com/sharer/sharer.php?u=https://jiwonkim.org/awesome-deep-vision)
+ Share on Google Plus (http://plus.google.com/share?url=https://jiwonkim.org/awesome-deep-vision)
+ Share on LinkedIn (http://www.linkedin.com/shareArticle?mini=true&url=https://jiwonkim.org/awesome-deep-vision&title=Awesome%20Deep%20Vision&summary=&source=)

Table of Contents
- Papers (#papers)
  - ImageNet Classification (#imagenet-classification)
  - Object Detection (#object-detection)
  - Object Tracking (#object-tracking)
  - Low-Level Vision (#low-level-vision)
    - Super-Resolution (#super-resolution)
    - Other Applications (#other-applications)
  - Edge Detection (#edge-detection)
  - Semantic Segmentation (#semantic-segmentation)
  - Visual Attention and Saliency (#visual-attention-and-saliency)
  - Object Recognition (#object-recognition)
  - Human Pose Estimation (#human-pose-estimation)
  - Understanding CNN (#understanding-cnn)
  - Image and Language (#image-and-language)
    - Image Captioning (#image-captioning)
    - Video Captioning (#video-captioning)
    - Question Answering (#question-answering)
  - Image Generation (#image-generation)
  - Other Topics (#other-topics)
- Courses (#courses)
- Books (#books)
- Videos (#videos)
- Software (#software)
  - Framework (#framework)
  - Applications (#applications)
- Tutorials (#tutorials)
- Blogs (#blogs)

Papers

ImageNet Classification
!classification (https://cloud.githubusercontent.com/assets/5226447/8451949/327b9566-2022-11e5-8b34-53b4a64c13ad.PNG)
(from Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012.)

- Microsoft (Deep Residual Learning) Paper (http://arxiv.org/pdf/1512.03385v1.pdf) Slide (http://image-net.org/challenges/talks/ilsvrc2015_deep_residual_learning_kaiminghe.pdf)
  - Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Deep Residual Learning for Image Recognition, arXiv:1512.03385.
- Microsoft (PReLU/Weight Initialization) Paper (http://arxiv.org/pdf/1502.01852)
  - Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification, arXiv:1502.01852.
- Batch Normalization Paper (http://arxiv.org/pdf/1502.03167)
  - Sergey Ioffe, Christian Szegedy, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift, arXiv:1502.03167.
- GoogLeNet Paper (http://arxiv.org/pdf/1409.4842)
  - Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich, Going Deeper with Convolutions, CVPR, 2015.
- VGG-Net Web (http://www.robots.ox.ac.uk/~vgg/research/very_deep/) Paper (http://arxiv.org/pdf/1409.1556)
  - Karen Simonyan and Andrew Zisserman, Very Deep Convolutional Networks for Large-Scale Visual Recognition, ICLR, 2015.
- AlexNet Paper (http://papers.nips.cc/book/advances-in-neural-information-processing-systems-25-2012)
  - Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton, ImageNet Classification with Deep Convolutional Neural Networks, NIPS, 2012.
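
The entries above revolve around a few reusable building blocks, most prominently batch normalization and the residual shortcut. Below is a minimal sketch of a basic residual block combining the two, assuming PyTorch; the class name and channel count are ours, not the papers':

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Two 3x3 convs with batch norm; the input is added back before the final ReLU."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity shortcut: the block only learns a residual

x = torch.randn(1, 64, 56, 56)
print(BasicResidualBlock(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```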

Object Detection
!object_detection (https://cloud.githubusercontent.com/assets/5226447/8452063/f76ba500-2022-11e5-8db1-2cd5d490e3b3.PNG)
(from Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.)

- PVANET Paper (https://arxiv.org/pdf/1608.08021) Code (https://github.com/sanghoon/pva-faster-rcnn)
  - Kye-Hyeon Kim, Sanghoon Hong, Byungseok Roh, Yeongjae Cheon, Minje Park, PVANET: Deep but Lightweight Neural Networks for Real-time Object Detection, arXiv:1608.08021
- OverFeat, NYU Paper (http://arxiv.org/pdf/1312.6229.pdf)
  - Pierre Sermanet, David Eigen, Xiang Zhang, Michael Mathieu, Rob Fergus, Yann LeCun, OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks, ICLR, 2014.
- R-CNN, UC Berkeley Paper-CVPR14 (http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf) Paper-arXiv14 (http://arxiv.org/pdf/1311.2524)
  - Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, CVPR, 2014.
- SPP, Microsoft Research Paper (http://arxiv.org/pdf/1406.4729)
  - Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition, ECCV, 2014.
- Fast R-CNN, Microsoft Research Paper (http://arxiv.org/pdf/1504.08083)
  - Ross Girshick, Fast R-CNN, arXiv:1504.08083.
- Faster R-CNN, Microsoft Research Paper (http://arxiv.org/pdf/1506.01497)
  - Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun, Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks, arXiv:1506.01497.
- R-CNN minus R, Oxford Paper (http://arxiv.org/pdf/1506.06981)
  - Karel Lenc, Andrea Vedaldi, R-CNN minus R, arXiv:1506.06981.
- End-to-end people detection in crowded scenes Paper (http://arxiv.org/abs/1506.04878)
  - Russell Stewart, Mykhaylo Andriluka, End-to-end people detection in crowded scenes, arXiv:1506.04878.
- You Only Look Once: Unified, Real-Time Object Detection Paper (http://arxiv.org/abs/1506.02640), Paper Version 2 (https://arxiv.org/abs/1612.08242), C Code (https://github.com/pjreddie/darknet), Tensorflow Code (https://github.com/thtrieu/darkflow)
  - Joseph Redmon, Santosh Divvala, Ross Girshick, Ali Farhadi, You Only Look Once: Unified, Real-Time Object Detection, arXiv:1506.02640
  - Joseph Redmon, Ali Farhadi, YOLO9000: Better, Faster, Stronger (Version 2), arXiv:1612.08242
- Inside-Outside Net Paper (http://arxiv.org/abs/1512.04143)
  - Sean Bell, C. Lawrence Zitnick, Kavita Bala, Ross Girshick, Inside-Outside Net: Detecting Objects in Context with Skip Pooling and Recurrent Neural Networks
- Deep Residual Network (state-of-the-art at the time of writing) Paper (http://arxiv.org/abs/1512.03385)
  - Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun, Deep Residual Learning for Image Recognition
- Weakly Supervised Object Localization with Multi-fold Multiple Instance Learning Paper (http://arxiv.org/pdf/1503.00949.pdf)
- R-FCN Paper (https://arxiv.org/abs/1605.06409) Code (https://github.com/daijifeng001/R-FCN)
  - Jifeng Dai, Yi Li, Kaiming He, Jian Sun, R-FCN: Object Detection via Region-based Fully Convolutional Networks
- SSD Paper (https://arxiv.org/pdf/1512.02325v2.pdf) Code (https://github.com/weiliu89/caffe/tree/ssd)
  - Wei Liu, Dragomir Anguelov, Dumitru Erhan, Christian Szegedy, Scott Reed, Cheng-Yang Fu, Alexander C. Berg, SSD: Single Shot MultiBox Detector, arXiv:1512.02325
- Speed/accuracy trade-offs for modern convolutional object detectors Paper (https://arxiv.org/pdf/1611.10012v1.pdf)
  - Jonathan Huang, Vivek Rathod, Chen Sun, Menglong Zhu, Anoop Korattikara, Alireza Fathi, Ian Fischer, Zbigniew Wojna, Yang Song, Sergio Guadarrama, Kevin Murphy, Google Research, arXiv:1611.10012
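
Almost every detector in this list, from R-CNN to SSD and YOLO, prunes overlapping candidate boxes with greedy non-maximum suppression. A minimal NumPy sketch of that step (the function name and IoU threshold are ours):

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it above iou_thresh."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the best remaining box with all others still in play
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2]: the second box overlaps the first too much
```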

Video Classification
- Nicolas Ballas, Li Yao, Chris Pal, Aaron Courville, "Delving Deeper into Convolutional Networks for Learning Video Representations", ICLR 2016. Paper (http://arxiv.org/pdf/1511.06432v4.pdf)
- Michael Mathieu, Camille Couprie, Yann LeCun, "Deep Multi-Scale Video Prediction Beyond Mean Square Error", ICLR 2016. Paper (http://arxiv.org/pdf/1511.05440v6.pdf)

Object Tracking
- Seunghoon Hong, Tackgeun You, Suha Kwak, Bohyung Han, Online Tracking by Learning Discriminative Saliency Map with Convolutional Neural Network, arXiv:1502.06796. Paper (http://arxiv.org/pdf/1502.06796)
- Hanxi Li, Yi Li and Fatih Porikli, DeepTrack: Learning Discriminative Feature Representations by Convolutional Neural Networks for Visual Tracking, BMVC, 2014. Paper (http://www.bmva.org/bmvc/2014/files/paper028.pdf)
- N Wang, DY Yeung, Learning a Deep Compact Image Representation for Visual Tracking, NIPS, 2013. Paper (http://winsty.net/papers/dlt.pdf)
- Chao Ma, Jia-Bin Huang, Xiaokang Yang and Ming-Hsuan Yang, Hierarchical Convolutional Features for Visual Tracking, ICCV 2015 Paper (http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Ma_Hierarchical_Convolutional_Features_ICCV_2015_paper.pdf) Code (https://github.com/jbhuang0604/CF2)
- Lijun Wang, Wanli Ouyang, Xiaogang Wang, and Huchuan Lu, Visual Tracking with Fully Convolutional Networks, ICCV 2015 Paper (http://202.118.75.4/lu/Paper/ICCV2015/iccv15_lijun.pdf) Code (https://github.com/scott89/FCNT)
- Hyeonseob Nam and Bohyung Han, Learning Multi-Domain Convolutional Neural Networks for Visual Tracking, Paper (http://arxiv.org/pdf/1510.07945.pdf) Code (https://github.com/HyeonseobNam/MDNet) Project Page (http://cvlab.postech.ac.kr/research/mdnet/)

Low-Level Vision

Super-Resolution
- Iterative Image Reconstruction
  - Sven Behnke: Learning Iterative Image Reconstruction. IJCAI, 2001. Paper (http://www.ais.uni-bonn.de/behnke/papers/ijcai01.pdf)
  - Sven Behnke: Learning Iterative Image Reconstruction in the Neural Abstraction Pyramid. International Journal of Computational Intelligence and Applications, vol. 1, no. 4, pp. 427-438, 2001. Paper (http://www.ais.uni-bonn.de/behnke/papers/ijcia01.pdf)
- Super-Resolution (SRCNN) Web (http://mmlab.ie.cuhk.edu.hk/projects/SRCNN.html) Paper-ECCV14 (http://personal.ie.cuhk.edu.hk/~ccloy/files/eccv_2014_deepresolution.pdf) Paper-arXiv15 (http://arxiv.org/pdf/1501.00092.pdf)
  - Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Learning a Deep Convolutional Network for Image Super-Resolution, ECCV, 2014.
  - Chao Dong, Chen Change Loy, Kaiming He, Xiaoou Tang, Image Super-Resolution Using Deep Convolutional Networks, arXiv:1501.00092.
- Very Deep Super-Resolution
  - Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Accurate Image Super-Resolution Using Very Deep Convolutional Networks, arXiv:1511.04587, 2015. Paper (http://arxiv.org/abs/1511.04587)
- Deeply-Recursive Convolutional Network
  - Jiwon Kim, Jung Kwon Lee, Kyoung Mu Lee, Deeply-Recursive Convolutional Network for Image Super-Resolution, arXiv:1511.04491, 2015. Paper (http://arxiv.org/abs/1511.04491)
- Cascade-Sparse-Coding-Network
  - Zhaowen Wang, Ding Liu, Wei Han, Jianchao Yang and Thomas S. Huang, Deep Networks for Image Super-Resolution with Sparse Prior. ICCV, 2015. Paper (http://www.ifp.illinois.edu/~dingliu2/iccv15/iccv15.pdf) Code (http://www.ifp.illinois.edu/~dingliu2/iccv15/)
- Perceptual Losses for Super-Resolution
  - Justin Johnson, Alexandre Alahi, Li Fei-Fei, Perceptual Losses for Real-Time Style Transfer and Super-Resolution, arXiv:1603.08155, 2016. Paper (http://arxiv.org/abs/1603.08155) Supplementary (http://cs.stanford.edu/people/jcjohns/papers/fast-style/fast-style-supp.pdf)
- SRGAN
  - Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi, Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network, arXiv:1609.04802v3, 2016. Paper (https://arxiv.org/pdf/1609.04802v3.pdf)
- Others
  - Osendorfer, Christian, Hubert Soyer, and Patrick van der Smagt, Image Super-Resolution with Fast Approximate Convolutional Sparse Coding, ICONIP, 2014. Paper ICONIP-2014 (http://brml.org/uploads/tx_sibibtex/281.pdf)
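
SRCNN above is small enough to write out in full: three convolutions applied to a bicubically upscaled input (patch extraction, non-linear mapping, reconstruction). A sketch of the base 9-1-5 configuration with 64 and 32 feature channels, assuming PyTorch; padding is added here to preserve spatial size, which the original paper does not do:

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-stage SRCNN: 9x9 feature extraction, 1x1 mapping, 5x5 reconstruction."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # x is the low-resolution image already upscaled to the target size (e.g. bicubic)
        return self.net(x)

y = SRCNN()(torch.randn(1, 1, 33, 33))
print(y.shape)  # torch.Size([1, 1, 33, 33])
```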

Other Applications
- Optical Flow (FlowNet) Paper (http://arxiv.org/pdf/1504.06852)
  - Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazırbaş, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, Thomas Brox, FlowNet: Learning Optical Flow with Convolutional Networks, arXiv:1504.06852.
- Compression Artifacts Reduction Paper-arXiv15 (http://arxiv.org/pdf/1504.06993)
  - Chao Dong, Yubin Deng, Chen Change Loy, Xiaoou Tang, Compression Artifacts Reduction by a Deep Convolutional Network, arXiv:1504.06993.
- Blur Removal
  - Christian J. Schuler, Michael Hirsch, Stefan Harmeling, Bernhard Schölkopf, Learning to Deblur, arXiv:1406.7444 Paper (http://arxiv.org/pdf/1406.7444.pdf)
  - Jian Sun, Wenfei Cao, Zongben Xu, Jean Ponce, Learning a Convolutional Neural Network for Non-uniform Motion Blur Removal, CVPR, 2015 Paper (http://arxiv.org/pdf/1503.00593)
- Image Deconvolution Web (http://lxu.me/projects/dcnn/) Paper (http://lxu.me/mypapers/dcnn_nips14.pdf)
  - Li Xu, Jimmy SJ. Ren, Ce Liu, Jiaya Jia, Deep Convolutional Neural Network for Image Deconvolution, NIPS, 2014.
- Deep Edge-Aware Filter Paper (http://jmlr.org/proceedings/papers/v37/xub15.pdf)
  - Li Xu, Jimmy SJ. Ren, Qiong Yan, Renjie Liao, Jiaya Jia, Deep Edge-Aware Filters, ICML, 2015.
- Computing the Stereo Matching Cost with a Convolutional Neural Network Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Zbontar_Computing_the_Stereo_2015_CVPR_paper.pdf)
  - Jure Žbontar, Yann LeCun, Computing the Stereo Matching Cost with a Convolutional Neural Network, CVPR, 2015.
- Colorful Image Colorization, Richard Zhang, Phillip Isola, Alexei A. Efros, ECCV, 2016 Paper (http://arxiv.org/pdf/1603.08511.pdf), Code (https://github.com/richzhang/colorization)
- Ryan Dahl, Blog (http://tinyclouds.org/colorize/)
- Feature Learning by Inpainting Paper (https://arxiv.org/pdf/1604.07379v1.pdf) Code (https://github.com/pathak22/context-encoder)
  - Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor Darrell, Alexei A. Efros, Context Encoders: Feature Learning by Inpainting, CVPR, 2016

Edge Detection
!edge_detection (https://cloud.githubusercontent.com/assets/5226447/8452371/93ca6f7e-2025-11e5-90f2-d428fd5ff7ac.PNG)
(from Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.)

- Holistically-Nested Edge Detection Paper (http://arxiv.org/pdf/1504.06375) Code (https://github.com/s9xie/hed)
  - Saining Xie, Zhuowen Tu, Holistically-Nested Edge Detection, arXiv:1504.06375.
- DeepEdge Paper (http://arxiv.org/pdf/1412.1123)
  - Gedas Bertasius, Jianbo Shi, Lorenzo Torresani, DeepEdge: A Multi-Scale Bifurcated Deep Network for Top-Down Contour Detection, CVPR, 2015.
- DeepContour Paper (http://mc.eistar.net/UpLoadFiles/Papers/DeepContour_cvpr15.pdf)
  - Wei Shen, Xinggang Wang, Yan Wang, Xiang Bai, Zhijiang Zhang, DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection, CVPR, 2015.
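
Holistically-Nested Edge Detection above attaches a 1x1 "side output" to several backbone depths, upsamples each prediction to image size, and fuses them with a learned weighting. A toy sketch of that fusion pattern, assuming PyTorch; the stage channels are illustrative rather than the paper's exact VGG configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SideOutputFusion(nn.Module):
    """Per-stage 1x1 edge predictions, upsampled and fused by a learned 1x1 conv."""
    def __init__(self, stage_channels=(64, 128, 256)):
        super().__init__()
        self.score = nn.ModuleList(nn.Conv2d(c, 1, kernel_size=1) for c in stage_channels)
        self.fuse = nn.Conv2d(len(stage_channels), 1, kernel_size=1)

    def forward(self, features, out_size):
        # One coarse edge map per backbone stage, resized to the input resolution
        sides = [F.interpolate(s(f), size=out_size, mode="bilinear", align_corners=False)
                 for s, f in zip(self.score, features)]
        return torch.sigmoid(self.fuse(torch.cat(sides, dim=1))), sides

feats = [torch.randn(1, 64, 112, 112), torch.randn(1, 128, 56, 56), torch.randn(1, 256, 28, 28)]
fused, sides = SideOutputFusion()(feats, out_size=(224, 224))
print(fused.shape)  # torch.Size([1, 1, 224, 224])
```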

Semantic Segmentation
!semantic_segmentation (https://cloud.githubusercontent.com/assets/5226447/8452076/0ba8340c-2023-11e5-88bc-bebf4509b6bb.PNG)
(from Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640.)

- PASCAL VOC2012 Challenge Leaderboard (01 Sep. 2016)
  !VOC2012_top_rankings (https://cloud.githubusercontent.com/assets/3803777/18164608/c3678488-7038-11e6-9ec1-74a1542dce13.png)
  (from PASCAL VOC2012 leaderboards (http://host.robots.ox.ac.uk:8080/leaderboard/displaylb.php?challengeid=11&compid=6))
- SEC: Seed, Expand and Constrain
  - Alexander Kolesnikov, Christoph Lampert, Seed, Expand and Constrain: Three Principles for Weakly-Supervised Image Segmentation, ECCV, 2016. Paper (http://pub.ist.ac.at/~akolesnikov/files/ECCV2016/main.pdf) Code (https://github.com/kolesman/SEC)
- Adelaide
  - Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Efficient piecewise training of deep structured models for semantic segmentation, arXiv:1504.01013. Paper (http://arxiv.org/pdf/1504.01013) (1st ranked in VOC2012)
  - Guosheng Lin, Chunhua Shen, Ian Reid, Anton van den Hengel, Deeply Learning the Messages in Message Passing Inference, arXiv:1506.02108. Paper (http://arxiv.org/pdf/1506.02108) (4th ranked in VOC2012)
- Deep Parsing Network (DPN)
  - Ziwei Liu, Xiaoxiao Li, Ping Luo, Chen Change Loy, Xiaoou Tang, Semantic Image Segmentation via Deep Parsing Network, arXiv:1509.02634 / ICCV 2015 Paper (http://arxiv.org/pdf/1509.02634.pdf) (2nd ranked in VOC 2012)
- CentraleSuperBoundaries, INRIA Paper (http://arxiv.org/pdf/1511.07386)
  - Iasonas Kokkinos, Surpassing Humans in Boundary Detection using Deep Learning, arXiv:1511.07386 (4th ranked in VOC 2012)
- BoxSup Paper (http://arxiv.org/pdf/1503.01640)
  - Jifeng Dai, Kaiming He, Jian Sun, BoxSup: Exploiting Bounding Boxes to Supervise Convolutional Networks for Semantic Segmentation, arXiv:1503.01640. (6th ranked in VOC2012)
- POSTECH
  - Hyeonwoo Noh, Seunghoon Hong, Bohyung Han, Learning Deconvolution Network for Semantic Segmentation, arXiv:1505.04366. Paper (http://arxiv.org/pdf/1505.04366) (7th ranked in VOC2012)
  - Seunghoon Hong, Hyeonwoo Noh, Bohyung Han, Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation, arXiv:1506.04924. Paper (http://arxiv.org/pdf/1506.04924)
  - Seunghoon Hong, Junhyuk Oh, Bohyung Han, and Honglak Lee, Learning Transferrable Knowledge for Semantic Segmentation with Deep Convolutional Neural Network, arXiv:1512.07928 Paper (http://arxiv.org/pdf/1512.07928.pdf) Project Page (http://cvlab.postech.ac.kr/research/transfernet/)
- Conditional Random Fields as Recurrent Neural Networks Paper (http://arxiv.org/pdf/1502.03240)
  - Shuai Zheng, Sadeep Jayasumana, Bernardino Romera-Paredes, Vibhav Vineet, Zhizhong Su, Dalong Du, Chang Huang, Philip H. S. Torr, Conditional Random Fields as Recurrent Neural Networks, arXiv:1502.03240. (8th ranked in VOC2012)
- DeepLab
  - Liang-Chieh Chen, George Papandreou, Kevin Murphy, Alan L. Yuille, Weakly- and semi-supervised learning of a DCNN for semantic image segmentation, arXiv:1502.02734. Paper (http://arxiv.org/pdf/1502.02734) (9th ranked in VOC2012)
- Zoom-out Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mostajabi_Feedforward_Semantic_Segmentation_2015_CVPR_paper.pdf)
  - Mohammadreza Mostajabi, Payman Yadollahpour, Gregory Shakhnarovich, Feedforward Semantic Segmentation With Zoom-Out Features, CVPR, 2015
- Joint Calibration Paper (http://arxiv.org/pdf/1507.01581)
  - Holger Caesar, Jasper Uijlings, Vittorio Ferrari, Joint Calibration for Semantic Segmentation, arXiv:1507.01581.
- Fully Convolutional Networks for Semantic Segmentation Paper-CVPR15 (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf) Paper-arXiv15 (http://arxiv.org/pdf/1411.4038)
  - Jonathan Long, Evan Shelhamer, Trevor Darrell, Fully Convolutional Networks for Semantic Segmentation, CVPR, 2015.
- Hypercolumn Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Hariharan_Hypercolumns_for_Object_2015_CVPR_paper.pdf)
  - Bharath Hariharan, Pablo Arbelaez, Ross Girshick, Jitendra Malik, Hypercolumns for Object Segmentation and Fine-Grained Localization, CVPR, 2015.
- Deep Hierarchical Parsing
  - Abhishek Sharma, Oncel Tuzel, David W. Jacobs, Deep Hierarchical Parsing for Semantic Segmentation, CVPR, 2015. Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Sharma_Deep_Hierarchical_Parsing_2015_CVPR_paper.pdf)
- Learning Hierarchical Features for Scene Labeling Paper-ICML12 (http://yann.lecun.com/exdb/publis/pdf/farabet-icml-12.pdf) Paper-PAMI13 (http://yann.lecun.com/exdb/publis/pdf/farabet-pami-13.pdf)
  - Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers, ICML, 2012.
  - Clement Farabet, Camille Couprie, Laurent Najman, Yann LeCun, Learning Hierarchical Features for Scene Labeling, PAMI, 2013.
- University of Cambridge Web (http://mi.eng.cam.ac.uk/projects/segnet/)
  - Vijay Badrinarayanan, Alex Kendall and Roberto Cipolla "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation." arXiv preprint arXiv:1511.00561, 2015. Paper (http://arxiv.org/abs/1511.00561)
  - Alex Kendall, Vijay Badrinarayanan and Roberto Cipolla "Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding." arXiv preprint arXiv:1511.02680, 2015. Paper (http://arxiv.org/abs/1511.02680)
- Princeton
  - Fisher Yu, Vladlen Koltun, "Multi-Scale Context Aggregation by Dilated Convolutions", ICLR 2016, Paper (http://arxiv.org/pdf/1511.07122v2.pdf)
- Univ. of Washington, Allen AI
  - Hamid Izadinia, Fereshteh Sadeghi, Santosh Kumar Divvala, Yejin Choi, Ali Farhadi, "Segment-Phrase Table for Semantic Segmentation, Visual Entailment and Paraphrasing", ICCV, 2015, Paper (http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Izadinia_Segment-Phrase_Table_for_ICCV_2015_paper.pdf)
- INRIA
  - Iasonas Kokkinos, "Pushing the Boundaries of Boundary Detection Using Deep Learning", ICLR 2016, Paper (http://arxiv.org/pdf/1511.07386v2.pdf)
- UCSB
  - Niloufar Pourian, S. Karthikeyan, and B.S. Manjunath, "Weakly supervised graph based semantic segmentation by learning communities of image-parts", ICCV, 2015, Paper (http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Pourian_Weakly_Supervised_Graph_ICCV_2015_paper.pdf)
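
A pattern shared by several entries above (FCN, DeepLab, SegNet) is turning a classification backbone into a dense predictor: score every spatial location with a 1x1 convolution, then upsample the coarse class maps back to input resolution. A minimal sketch assuming PyTorch; FCN learns the upsampling as deconvolution, while plain bilinear interpolation is used here for brevity, and the backbone and class count are placeholders:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCNHead(nn.Module):
    """1x1 conv turns backbone features into per-class score maps; upsampling restores resolution."""
    def __init__(self, in_channels=512, num_classes=21):  # 21 = 20 PASCAL VOC classes + background
        super().__init__()
        self.classifier = nn.Conv2d(in_channels, num_classes, kernel_size=1)

    def forward(self, features, out_size):
        scores = self.classifier(features)  # (N, classes, h, w), coarse grid
        scores = F.interpolate(scores, size=out_size, mode="bilinear", align_corners=False)
        return scores.argmax(dim=1)         # per-pixel class labels

feats = torch.randn(1, 512, 16, 16)  # e.g. final conv features of a 1/16-stride backbone
labels = FCNHead()(feats, out_size=(256, 256))
print(labels.shape)  # torch.Size([1, 256, 256])
```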

Visual Attention and Saliency
!saliency (https://cloud.githubusercontent.com/assets/5226447/8492362/7ec65b88-2183-11e5-978f-017e45ddba32.png)
(from Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu, Predicting Eye Fixations using Convolutional Neural Networks, CVPR, 2015.)

- Mr-CNN Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Liu_Predicting_Eye_Fixations_2015_CVPR_paper.pdf)
  - Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, Tianming Liu, Predicting Eye Fixations using Convolutional Neural Networks, CVPR, 2015.
- Learning a Sequential Search for Landmarks Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Singh_Learning_a_Sequential_2015_CVPR_paper.pdf)
  - Saurabh Singh, Derek Hoiem, David Forsyth, Learning a Sequential Search for Landmarks, CVPR, 2015.
- Multiple Object Recognition with Visual Attention Paper (http://arxiv.org/pdf/1412.7755.pdf)
  - Jimmy Lei Ba, Volodymyr Mnih, Koray Kavukcuoglu, Multiple Object Recognition with Visual Attention, ICLR, 2015.
- Recurrent Models of Visual Attention Paper (http://papers.nips.cc/paper/5542-recurrent-models-of-visual-attention.pdf)
  - Volodymyr Mnih, Nicolas Heess, Alex Graves, Koray Kavukcuoglu, Recurrent Models of Visual Attention, NIPS, 2014.

Object Recognition
- Weakly-supervised learning with convolutional neural networks Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Oquab_Is_Object_Localization_2015_CVPR_paper.pdf)
  - Maxime Oquab, Leon Bottou, Ivan Laptev, Josef Sivic, Is object localization for free? – Weakly-supervised learning with convolutional neural networks, CVPR, 2015.
- FV-CNN Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Cimpoi_Deep_Filter_Banks_2015_CVPR_paper.pdf)
  - Mircea Cimpoi, Subhransu Maji, Andrea Vedaldi, Deep Filter Banks for Texture Recognition and Segmentation, CVPR, 2015.

Human Pose Estimation
- Zhe Cao, Tomas Simon, Shih-En Wei, and Yaser Sheikh, Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields, CVPR, 2017.
- Leonid Pishchulin, Eldar Insafutdinov, Siyu Tang, Bjoern Andres, Mykhaylo Andriluka, Peter Gehler, and Bernt Schiele, DeepCut: Joint subset partition and labeling for multi person pose estimation, CVPR, 2016.
- Shih-En Wei, Varun Ramakrishna, Takeo Kanade, and Yaser Sheikh, Convolutional pose machines, CVPR, 2016.
- Alejandro Newell, Kaiyu Yang, and Jia Deng, Stacked hourglass networks for human pose estimation, ECCV, 2016.
- Tomas Pfister, James Charles, and Andrew Zisserman, Flowing convnets for human pose estimation in videos, ICCV, 2015.
- Jonathan J. Tompson, Arjun Jain, Yann LeCun, Christoph Bregler, Joint training of a convolutional network and a graphical model for human pose estimation, NIPS, 2014.
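
Most of the pose estimators above (convolutional pose machines, stacked hourglass) emit one confidence heatmap per joint and read keypoints off as the per-channel maxima. A minimal NumPy decoding sketch; the joint count, grid size, and stride are illustrative:

```python
import numpy as np

def decode_heatmaps(heatmaps, stride=4):
    """heatmaps: (num_joints, H, W) confidence maps on the network's output grid.
    Returns (num_joints, 2) (x, y) image coordinates and per-joint confidences."""
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    ys, xs = np.unravel_index(flat.argmax(axis=1), (h, w))
    coords = np.stack([xs, ys], axis=1) * stride  # map grid cells back to input pixels
    return coords, flat.max(axis=1)

maps = np.random.rand(17, 64, 48)  # e.g. 17 COCO joints on a 64x48 output grid
coords, conf = decode_heatmaps(maps)
print(coords.shape, conf.shape)  # (17, 2) (17,)
```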

Understanding CNN
!understanding (https://cloud.githubusercontent.com/assets/5226447/8452083/1aaa0066-2023-11e5-800b-2248ead51584.PNG)
(from Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015.)

- Karel Lenc, Andrea Vedaldi, Understanding image representations by measuring their equivariance and equivalence, CVPR, 2015. Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Lenc_Understanding_Image_Representations_2015_CVPR_paper.pdf)
- Anh Nguyen, Jason Yosinski, Jeff Clune, Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, CVPR, 2015. Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Nguyen_Deep_Neural_Networks_2015_CVPR_paper.pdf)
- Aravindh Mahendran, Andrea Vedaldi, Understanding Deep Image Representations by Inverting Them, CVPR, 2015. Paper (http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf)
- Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, Antonio Torralba, Object Detectors Emerge in Deep Scene CNNs, ICLR, 2015. arXiv Paper (http://arxiv.org/abs/1412.6856)
- Alexey Dosovitskiy, Thomas Brox, Inverting Visual Representations with Convolutional Networks, arXiv, 2015. Paper (http://arxiv.org/abs/1506.02753)
- Matthew Zeiler, Rob Fergus, Visualizing and Understanding Convolutional Networks, ECCV, 2014. Paper (https://www.cs.nyu.edu/~fergus/papers/zeilerECCV2014.pdf)
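
A question running through these papers is which input pixels a prediction actually depends on. The simplest probe in that spirit is a vanilla gradient saliency map: backpropagate the top class score to the input image. A sketch assuming PyTorch and torchvision; the untrained ResNet-18 is a stand-in for whatever classifier is being inspected:

```python
import torch
from torchvision import models

model = models.resnet18(weights=None).eval()  # untrained stand-in; any classifier works
image = torch.randn(1, 3, 224, 224, requires_grad=True)

score = model(image).max()  # score of the most likely class
score.backward()            # populates image.grad with d(score)/d(pixels)

# Saliency: max absolute gradient over color channels, one value per pixel
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 224, 224])
```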

Image and Language

Image Captioning
!image_captioning (https://cloud.githubusercontent.com/assets/5226447/8452051/e8f81030-2022-11e5-85db-c68e7d8251ce.PNG)
(from Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.)
|
||
|
||
[48;5;12m[38;5;11m⟡[49m[39m[38;5;12m [39m[38;5;12mUCLA / Baidu [39m[38;5;12mPaper[39m[38;5;14m[1m [0m[38;5;12m (http://arxiv.org/pdf/1410.1090)[39m
|
||
[38;5;12m [39m[38;5;12m [39m[48;5;12m[38;5;11m⟡[49m[39m[38;5;12m [39m[38;5;12mJunhua Mao, Wei Xu, Yi Yang, Jiang Wang, Alan L. Yuille, Explain Images with Multimodal Recurrent Neural Networks, arXiv:1410.1090.[39m
|
||
[48;5;12m[38;5;11m⟡[49m[39m[38;5;12m [39m[38;5;12mToronto [39m[38;5;12mPaper[39m[38;5;14m[1m [0m[38;5;12m (http://arxiv.org/pdf/1411.2539)[39m
|
||
  - Ryan Kiros, Ruslan Salakhutdinov, Richard S. Zemel, Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models, arXiv:1411.2539.
- Berkeley [Paper](http://arxiv.org/pdf/1411.4389)
  - Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, arXiv:1411.4389.
- Google [Paper](http://arxiv.org/pdf/1411.4555)
  - Oriol Vinyals, Alexander Toshev, Samy Bengio, Dumitru Erhan, Show and Tell: A Neural Image Caption Generator, arXiv:1411.4555.
- Stanford [Web](http://cs.stanford.edu/people/karpathy/deepimagesent/) [Paper](http://cs.stanford.edu/people/karpathy/cvpr2015.pdf)
  - Andrej Karpathy, Li Fei-Fei, Deep Visual-Semantic Alignments for Generating Image Descriptions, CVPR, 2015.
- UML / UT [Paper](http://arxiv.org/pdf/1412.4729)
  - Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, NAACL-HLT, 2015.
- CMU / Microsoft [Paper-arXiv](http://arxiv.org/pdf/1411.5654) [Paper-CVPR](http://www.cs.cmu.edu/~xinleic/papers/cvpr15_rnn.pdf)
  - Xinlei Chen, C. Lawrence Zitnick, Learning a Recurrent Visual Representation for Image Caption Generation, arXiv:1411.5654.
  - Xinlei Chen, C. Lawrence Zitnick, Mind's Eye: A Recurrent Visual Representation for Image Caption Generation, CVPR, 2015.
- Microsoft [Paper](http://arxiv.org/pdf/1411.4952)
  - Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig, From Captions to Visual Concepts and Back, CVPR, 2015.
- Univ. Montreal / Univ. Toronto [Web](http://kelvinxu.github.io/projects/capgen.html) [Paper](http://www.cs.toronto.edu/~zemel/documents/captionAttn.pdf)
  - Kelvin Xu, Jimmy Lei Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard S. Zemel, Yoshua Bengio, Show, Attend and Tell: Neural Image Caption Generation with Visual Attention, arXiv:1502.03044 / ICML 2015. (See the attention sketch after this list.)
- Idiap / EPFL / Facebook [Paper](http://arxiv.org/pdf/1502.03671)
  - Remi Lebret, Pedro O. Pinheiro, Ronan Collobert, Phrase-based Image Captioning, arXiv:1502.03671 / ICML 2015.
- UCLA / Baidu [Paper](http://arxiv.org/pdf/1504.06692)
  - Junhua Mao, Wei Xu, Yi Yang, Jiang Wang, Zhiheng Huang, Alan L. Yuille, Learning like a Child: Fast Novel Visual Concept Learning from Sentence Descriptions of Images, arXiv:1504.06692.
- MS + Berkeley
  - Jacob Devlin, Saurabh Gupta, Ross Girshick, Margaret Mitchell, C. Lawrence Zitnick, Exploring Nearest Neighbor Approaches for Image Captioning, arXiv:1505.04467. [Paper](http://arxiv.org/pdf/1505.04467.pdf)
  - Jacob Devlin, Hao Cheng, Hao Fang, Saurabh Gupta, Li Deng, Xiaodong He, Geoffrey Zweig, Margaret Mitchell, Language Models for Image Captioning: The Quirks and What Works, arXiv:1505.01809. [Paper](http://arxiv.org/pdf/1505.01809.pdf)
- Adelaide [Paper](http://arxiv.org/pdf/1506.01144.pdf)
  - Qi Wu, Chunhua Shen, Anton van den Hengel, Lingqiao Liu, Anthony Dick, Image Captioning with an Intermediate Attributes Layer, arXiv:1506.01144.
- Tilburg [Paper](http://arxiv.org/pdf/1506.03694.pdf)
  - Grzegorz Chrupala, Akos Kadar, Afra Alishahi, Learning language through pictures, arXiv:1506.03694.
- Univ. Montreal [Paper](http://arxiv.org/pdf/1507.01053.pdf)
  - Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053.
- Cornell [Paper](http://arxiv.org/pdf/1508.02091.pdf)
  - Jack Hessel, Nicolas Savva, Michael J. Wilber, Image Representations and New Domains in Neural Image Captioning, arXiv:1508.02091.
- MS + City Univ. of HongKong [Paper](http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Yao_Learning_Query_and_ICCV_2015_paper.pdf)
  - Ting Yao, Tao Mei, and Chong-Wah Ngo, "Learning Query and Image Similarities with Ranking Canonical Correlation Analysis", ICCV, 2015.
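
Several of the captioning models above condition each generated word on a soft attention distribution over spatial CNN features (Show, Attend and Tell; the Montreal attention-based encoder-decoders). Below is a minimal numpy sketch of one such attention step; the array shapes and the single-layer scoring function are illustrative assumptions, not any paper's exact architecture.

```python
# A minimal numpy sketch of the soft attention step used in attention-based
# captioning (e.g., "Show, Attend and Tell", arXiv:1502.03044). Shapes and
# the linear scoring function are illustrative, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(0)

L, D, H = 196, 512, 256                  # 14x14 locations, feature dim, RNN state dim
features = rng.standard_normal((L, D))   # CNN feature map, one vector per location
h = rng.standard_normal(H)               # current decoder hidden state

# Score each location against the decoder state, then softmax into weights.
W_f = rng.standard_normal((D, 1)) * 0.01
W_h = rng.standard_normal((H, 1)) * 0.01
scores = features @ W_f + h @ W_h        # (L, 1): one relevance score per location
weights = np.exp(scores - scores.max())
weights /= weights.sum()                 # attention distribution over locations

# The context vector fed to the word decoder is the weighted feature average.
context = (weights * features).sum(axis=0)   # (D,)
print(context.shape, float(weights.sum()))
```

In the full models, `context` is combined with the previous word embedding at every decoding step; the "soft" variant trains the weights end-to-end by backpropagation, while the "hard" variant samples one location and trains with REINFORCE.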

#### Video Captioning

- Berkeley [Web](http://jeffdonahue.com/lrcn/) [Paper](http://arxiv.org/pdf/1411.4389.pdf)
  - Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, Trevor Darrell, Long-term Recurrent Convolutional Networks for Visual Recognition and Description, CVPR, 2015.
- UT / UML / Berkeley [Paper](http://arxiv.org/pdf/1412.4729)
  - Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko, Translating Videos to Natural Language Using Deep Recurrent Neural Networks, arXiv:1412.4729. (See the mean-pooling sketch after this list.)
- Microsoft [Paper](http://arxiv.org/pdf/1505.01861)
  - Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, Yong Rui, Joint Modeling Embedding and Translation to Bridge Video and Language, arXiv:1505.01861.
- UT / Berkeley / UML [Paper](http://arxiv.org/pdf/1505.00487)
  - Subhashini Venugopalan, Marcus Rohrbach, Jeff Donahue, Raymond Mooney, Trevor Darrell, Kate Saenko, Sequence to Sequence -- Video to Text, arXiv:1505.00487.
- Univ. Montreal / Univ. Sherbrooke [Paper](http://arxiv.org/pdf/1502.08029.pdf)
  - Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, Aaron Courville, Describing Videos by Exploiting Temporal Structure, arXiv:1502.08029.
- MPI / Berkeley [Paper](http://arxiv.org/pdf/1506.01698.pdf)
  - Anna Rohrbach, Marcus Rohrbach, Bernt Schiele, The Long-Short Story of Movie Description, arXiv:1506.01698.
- Univ. Toronto / MIT [Paper](http://arxiv.org/pdf/1506.06724.pdf)
  - Yukun Zhu, Ryan Kiros, Richard Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, Sanja Fidler, Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books, arXiv:1506.06724.
- Univ. Montreal [Paper](http://arxiv.org/pdf/1507.01053.pdf)
  - Kyunghyun Cho, Aaron Courville, Yoshua Bengio, Describing Multimedia Content using Attention-based Encoder-Decoder Networks, arXiv:1507.01053.
- TAU / USC [Paper](https://arxiv.org/pdf/1612.06950.pdf)
  - Dotan Kaufman, Gil Levi, Tal Hassner, Lior Wolf, Temporal Tessellation for Video Annotation and Summarization, arXiv:1612.06950.
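
The NAACL-HLT 2015 system above (arXiv:1412.4729) summarizes a clip by mean-pooling per-frame CNN features into a single vector before decoding a sentence; the temporal-attention work in this list (arXiv:1502.08029) recovers the ordering information that pooling discards. A minimal numpy sketch of the pooling step, with illustrative shapes:

```python
# A minimal numpy sketch of a mean-pooled video representation in the spirit
# of "Translating Videos to Natural Language..." (arXiv:1412.4729). Shapes
# are illustrative; the paper pools frame-level CNN (fc7) features.
import numpy as np

rng = np.random.default_rng(0)

T, D = 60, 4096                               # sampled frames, feature dim
frame_features = rng.standard_normal((T, D))  # one CNN feature vector per frame

video_vector = frame_features.mean(axis=0)    # (D,): order-independent summary
print(video_vector.shape)

# The decoder RNN is conditioned on this single vector; frame order is
# deliberately discarded, which is what temporal attention later restores.
```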

#### Question Answering

![question_answering](https://cloud.githubusercontent.com/assets/5226447/8452068/ffe7b1f6-2022-11e5-87ab-4f6d4696c220.PNG)

(from Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop)

- Virginia Tech / MSR [Web](http://www.visualqa.org/) [Paper](http://arxiv.org/pdf/1505.00468)
  - Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh, VQA: Visual Question Answering, CVPR, 2015 SUNw: Scene Understanding workshop.
- MPI / Berkeley [Web](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/vision-and-language/visual-turing-challenge/) [Paper](http://arxiv.org/pdf/1505.01121)
  - Mateusz Malinowski, Marcus Rohrbach, Mario Fritz, Ask Your Neurons: A Neural-based Approach to Answering Questions about Images, arXiv:1505.01121.
- Toronto [Paper](http://arxiv.org/pdf/1505.02074) [Dataset](http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa/)
  - Mengye Ren, Ryan Kiros, Richard Zemel, Image Question Answering: A Visual Semantic Embedding Model and a New Dataset, arXiv:1505.02074 / ICML 2015 deep learning workshop.
- Baidu / UCLA [Paper](http://arxiv.org/pdf/1505.05612) [Dataset]()
  - Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu, Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering, arXiv:1505.05612.
- POSTECH [Paper](http://arxiv.org/pdf/1511.05756.pdf) [Project Page](http://cvlab.postech.ac.kr/research/dppnet/)
  - Hyeonwoo Noh, Paul Hongsuck Seo, and Bohyung Han, Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction, arXiv:1511.05756.
- CMU / Microsoft Research [Paper](http://arxiv.org/pdf/1511.02274v2.pdf)
  - Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola, Stacked Attention Networks for Image Question Answering, arXiv:1511.02274.
- MetaMind [Paper](http://arxiv.org/pdf/1603.01417v1.pdf)
  - Caiming Xiong, Stephen Merity, Richard Socher, Dynamic Memory Networks for Visual and Textual Question Answering, arXiv:1603.01417.
- SNU + NAVER [Paper](http://arxiv.org/abs/1606.01455)
  - Jin-Hwa Kim, Sang-Woo Lee, Dong-Hyun Kwak, Min-Oh Heo, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang, *Multimodal Residual Learning for Visual QA*, arXiv:1606.01455.
- UC Berkeley + Sony [Paper](https://arxiv.org/pdf/1606.01847)
  - Akira Fukui, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach, *Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding*, arXiv:1606.01847.
- POSTECH [Paper](http://arxiv.org/pdf/1606.03647.pdf)
  - Hyeonwoo Noh and Bohyung Han, *Training Recurrent Answering Units with Joint Loss Minimization for VQA*, arXiv:1606.03647.
- SNU + NAVER [Paper](http://arxiv.org/abs/1610.04325)
  - Jin-Hwa Kim, Kyoung Woon On, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang, *Hadamard Product for Low-rank Bilinear Pooling*, arXiv:1610.04325. (See the pooling sketch after this list.)
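
The Hadamard-product pooling in the last entry fuses an image feature and a question feature by projecting both into a common space and multiplying elementwise, a low-rank stand-in for full bilinear pooling. A minimal numpy sketch with illustrative dimensions (the projections below are random placeholders for learned weights):

```python
# A minimal numpy sketch of low-rank bilinear (Hadamard product) pooling for
# VQA (arXiv:1610.04325). Dimensions and nonlinearity are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

D_v, D_q, d = 2048, 1200, 512             # image dim, question dim, joint rank
v = rng.standard_normal(D_v)              # image feature (e.g., CNN pooling layer)
q = rng.standard_normal(D_q)              # question feature (e.g., RNN final state)

U = rng.standard_normal((D_v, d)) * 0.01  # learned projections in the real model
V = rng.standard_normal((D_q, d)) * 0.01

# Full bilinear pooling needs a D_v x D_q x d weight tensor; the elementwise
# product of two projections gives a rank-d approximation of it.
joint = np.tanh(U.T @ v) * np.tanh(V.T @ q)   # (d,) fused multimodal feature
print(joint.shape)
```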

### Image Generation

- Convolutional / Recurrent Networks
  - Aäron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, Koray Kavukcuoglu, "Conditional Image Generation with PixelCNN Decoders". [Paper](https://arxiv.org/pdf/1606.05328v2.pdf) [Code](https://github.com/kundan2510/pixelCNN)
  - Alexey Dosovitskiy, Jost Tobias Springenberg, Thomas Brox, "Learning to Generate Chairs with Convolutional Neural Networks", CVPR, 2015. [Paper](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Dosovitskiy_Learning_to_Generate_2015_CVPR_paper.pdf)
  - Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, Daan Wierstra, "DRAW: A Recurrent Neural Network For Image Generation", ICML, 2015. [Paper](https://arxiv.org/pdf/1502.04623v2.pdf)
- Adversarial Networks (see the value-function sketch after this list)
  - Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, Generative Adversarial Networks, NIPS, 2014. [Paper](http://arxiv.org/abs/1406.2661)
  - Emily Denton, Soumith Chintala, Arthur Szlam, Rob Fergus, Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, NIPS, 2015. [Paper](http://arxiv.org/abs/1506.05751)
  - Lucas Theis, Aäron van den Oord, Matthias Bethge, "A note on the evaluation of generative models", ICLR 2016. [Paper](http://arxiv.org/abs/1511.01844)
  - Zhenwen Dai, Andreas Damianou, Javier Gonzalez, Neil Lawrence, "Variationally Auto-Encoded Deep Gaussian Processes", ICLR 2016. [Paper](http://arxiv.org/pdf/1511.06455v2.pdf)
  - Elman Mansimov, Emilio Parisotto, Jimmy Ba, Ruslan Salakhutdinov, "Generating Images from Captions with Attention", ICLR 2016. [Paper](http://arxiv.org/pdf/1511.02793v2.pdf)
  - Jost Tobias Springenberg, "Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks", ICLR 2016. [Paper](http://arxiv.org/pdf/1511.06390v1.pdf)
  - Harrison Edwards, Amos Storkey, "Censoring Representations with an Adversary", ICLR 2016. [Paper](http://arxiv.org/pdf/1511.05897v3.pdf)
  - Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, Ken Nakae, Shin Ishii, "Distributional Smoothing with Virtual Adversarial Training", ICLR 2016. [Paper](http://arxiv.org/pdf/1507.00677v8.pdf)
  - Jun-Yan Zhu, Philipp Krahenbuhl, Eli Shechtman, and Alexei A. Efros, "Generative Visual Manipulation on the Natural Image Manifold", ECCV 2016. [Paper](https://arxiv.org/pdf/1609.03552v2.pdf) [Code](https://github.com/junyanz/iGAN) [Video](https://youtu.be/9c4z6YsBGQ0)
- Mixing Convolutional and Adversarial Networks
  - Alec Radford, Luke Metz, Soumith Chintala, "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", ICLR 2016. [Paper](http://arxiv.org/pdf/1511.06434.pdf)
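
The adversarial entries above all build on the two-player game of Goodfellow et al.: a discriminator D is trained to tell data from samples while a generator G is trained to fool it, through the value function V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]. The numpy sketch below only evaluates that objective for a toy 1-D generator and a logistic discriminator; the parameters are fixed stand-ins and no training happens.

```python
# A minimal numpy sketch of the GAN value function from "Generative
# Adversarial Networks" (NIPS 2014), on toy 1-D data with fixed, illustrative
# parameters. This evaluates the objective; it does not train anything.
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w=1.5, b=-1.0):
    """Logistic discriminator: estimated probability that x is real data."""
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

def generator(z, scale=0.5, shift=2.0):
    """Toy generator: maps prior noise z to samples by an affine transform."""
    return scale * z + shift

x_real = rng.normal(loc=2.0, scale=0.3, size=1000)   # "data" distribution
x_fake = generator(rng.standard_normal(1000))        # samples from G

# V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
v = (np.mean(np.log(discriminator(x_real)))
     + np.mean(np.log(1.0 - discriminator(x_fake))))
print(f"V(D, G) = {v:.3f}")  # D does gradient ascent on V, G descent
```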

### Other Topics

- Visual Analogy [Paper](https://web.eecs.umich.edu/~honglak/nips2015-analogy.pdf)
  - Scott Reed, Yi Zhang, Yuting Zhang, Honglak Lee, Deep Visual Analogy Making, NIPS, 2015.
- Surface Normal Estimation [Paper](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Wang_Designing_Deep_Networks_2015_CVPR_paper.pdf)
  - Xiaolong Wang, David F. Fouhey, Abhinav Gupta, Designing Deep Networks for Surface Normal Estimation, CVPR, 2015.
- Action Detection [Paper](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Gkioxari_Finding_Action_Tubes_2015_CVPR_paper.pdf)
  - Georgia Gkioxari, Jitendra Malik, Finding Action Tubes, CVPR, 2015.
- Crowd Counting [Paper](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Zhang_Cross-Scene_Crowd_Counting_2015_CVPR_paper.pdf)
  - Cong Zhang, Hongsheng Li, Xiaogang Wang, Xiaokang Yang, Cross-scene Crowd Counting via Deep Convolutional Neural Networks, CVPR, 2015.
- 3D Shape Retrieval [Paper](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Wang_Sketch-Based_3D_Shape_2015_CVPR_paper.pdf)
  - Fang Wang, Le Kang, Yi Li, Sketch-based 3D Shape Retrieval using Convolutional Neural Networks, CVPR, 2015.
- Weakly-supervised Classification
  - Samaneh Azadi, Jiashi Feng, Stefanie Jegelka, Trevor Darrell, "Auxiliary Image Regularization for Deep CNNs with Noisy Labels", ICLR 2016. [Paper](http://arxiv.org/pdf/1511.07069v2.pdf)
- Artistic Style [Paper](http://arxiv.org/abs/1508.06576) [Code](https://github.com/jcjohnson/neural-style) (see the Gram-matrix sketch after this list)
  - Leon A. Gatys, Alexander S. Ecker, Matthias Bethge, A Neural Algorithm of Artistic Style.
- Human Gaze Estimation
  - Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling, Appearance-Based Gaze Estimation in the Wild, CVPR, 2015. [Paper](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Zhang_Appearance-Based_Gaze_Estimation_2015_CVPR_paper.pdf) [Website](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/gaze-based-human-computer-interaction/appearance-based-gaze-estimation-in-the-wild-mpiigaze/)
- Face Recognition
  - Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, Lior Wolf, DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR, 2014. [Paper](https://www.cs.toronto.edu/~ranzato/publications/taigman_cvpr14.pdf)
  - Yi Sun, Ding Liang, Xiaogang Wang, Xiaoou Tang, DeepID3: Face Recognition with Very Deep Neural Networks, 2015. [Paper](http://arxiv.org/abs/1502.00873)
  - Florian Schroff, Dmitry Kalenichenko, James Philbin, FaceNet: A Unified Embedding for Face Recognition and Clustering, CVPR, 2015. [Paper](http://arxiv.org/abs/1503.03832)
- Facial Landmark Detection
  - Yue Wu, Tal Hassner, KangGeon Kim, Gerard Medioni, Prem Natarajan, Facial Landmark Detection with Tweaked Convolutional Neural Networks, 2015. [Paper](http://arxiv.org/abs/1511.04031) [Project](http://www.openu.ac.il/home/hassner/projects/tcnn_landmarks/)
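
In the Artistic Style entry above, Gatys et al. represent the style of an image at a CNN layer by the Gram matrix of its feature maps, i.e. the correlations between feature channels with spatial layout discarded, and minimize the distance between the Gram matrices of the style image and the generated image. A minimal numpy sketch of that per-layer style loss, with random arrays standing in for real VGG activations:

```python
# A minimal numpy sketch of the Gram-matrix style loss from "A Neural
# Algorithm of Artistic Style" (arXiv:1508.06576). Random feature maps stand
# in for CNN activations; normalization conventions vary across versions.
import numpy as np

rng = np.random.default_rng(0)

def gram(feats):
    """Channel-by-channel correlations of a (C, H, W) feature map."""
    C, H, W = feats.shape
    F = feats.reshape(C, H * W)
    return F @ F.T / (H * W)                      # (C, C)

style_feats = rng.standard_normal((64, 32, 32))   # from the style image
gen_feats = rng.standard_normal((64, 32, 32))     # from the generated image

# Per-layer style loss: squared Frobenius distance between Gram matrices.
G_s, G_g = gram(style_feats), gram(gen_feats)
style_loss = np.sum((G_s - G_g) ** 2) / (4 * 64 ** 2)
print(f"style loss at this layer: {style_loss:.4f}")
```

The content loss in the same paper is just a squared distance between raw feature maps, so matching features preserves layout while matching Gram matrices transfers texture.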

## Courses

- Deep Vision
  - [Stanford] [CS231n: Convolutional Neural Networks for Visual Recognition](http://cs231n.stanford.edu/)
  - [CUHK] [ELEG 5040: Advanced Topics in Signal Processing (Introduction to Deep Learning)](https://piazza.com/cuhk.edu.hk/spring2015/eleg5040/home)
- More Deep Learning
  - [Stanford] [CS224d: Deep Learning for Natural Language Processing](http://cs224d.stanford.edu/)
  - [Oxford] [Deep Learning by Prof. Nando de Freitas](https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/)
  - [NYU] [Deep Learning by Prof. Yann LeCun](http://cilvr.cs.nyu.edu/doku.php?id=courses:deeplearning2014:start)

## Books

- Free Online Books
  - [Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville](http://www.iro.umontreal.ca/~bengioy/dlbook/)
  - [Neural Networks and Deep Learning by Michael Nielsen](http://neuralnetworksanddeeplearning.com/)
  - [Deep Learning Tutorial by LISA lab, University of Montreal](http://deeplearning.net/tutorial/deeplearning.pdf)

## Videos

- Talks
  - [Deep Learning, Self-Taught Learning and Unsupervised Feature Learning by Andrew Ng](https://www.youtube.com/watch?v=n1ViNeWhC24)
  - [Recent Developments in Deep Learning by Geoff Hinton](https://www.youtube.com/watch?v=vShMxxqtDDs)
  - [The Unreasonable Effectiveness of Deep Learning by Yann LeCun](https://www.youtube.com/watch?v=sc-KbuZqGkI)
  - [Deep Learning of Representations by Yoshua Bengio](https://www.youtube.com/watch?v=4xsVFLnHC_0)

## Software

### Framework

- TensorFlow: An open source software library for numerical computation using data flow graphs, by Google [Web](https://www.tensorflow.org/)
- Torch7: Deep learning library in Lua, used by Facebook and Google DeepMind [Web](http://torch.ch/)
  - Torch-based deep learning libraries: [torchnet](https://github.com/torchnet/torchnet)
- Caffe: Deep learning framework by the BVLC [Web](http://caffe.berkeleyvision.org/)
- Theano: Mathematical library in Python, maintained by LISA lab [Web](http://deeplearning.net/software/theano/)
  - Theano-based deep learning libraries: [Pylearn2](http://deeplearning.net/software/pylearn2/), [Blocks](https://github.com/mila-udem/blocks), [Keras](http://keras.io/), [Lasagne](https://github.com/Lasagne/Lasagne) (see the workflow sketch after this list)
- MatConvNet: CNNs for MATLAB [Web](http://www.vlfeat.org/matconvnet/)
- MXNet: A flexible and efficient deep learning library for heterogeneous distributed systems with multi-language support [Web](http://mxnet.io/)
- Deepgaze: A computer vision library for human-computer interaction based on CNNs [Web](https://github.com/mpatacchiola/deepgaze)
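
Whichever framework you pick, the basic workflow is the same: define a model, compile it against a loss and an optimizer, and fit it on arrays of examples. As one concrete illustration, here is a minimal sketch in Keras (listed above as a Theano-based library, though it now ships inside TensorFlow as `tf.keras`, which the snippet assumes); the tiny architecture and random data are placeholders, not a recommended setup.

```python
# A minimal define/compile/fit sketch with tf.keras (assumes TensorFlow 2.x).
# Random toy arrays stand in for a real dataset.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    keras.layers.MaxPooling2D(),                     # downsample feature maps
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),    # 10-way classifier head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

x = np.random.rand(64, 32, 32, 3).astype("float32")  # toy images
y = np.random.randint(0, 10, size=(64,))             # toy integer labels
model.fit(x, y, epochs=1, batch_size=16, verbose=0)
print(model.predict(x[:2], verbose=0).shape)         # (2, 10) class probabilities
```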

### Applications

- Adversarial Training
  - Code and hyperparameters for the paper "Generative Adversarial Networks" [Web](https://github.com/goodfeli/adversarial)
- Understanding and Visualizing
  - Source code for "Understanding Deep Image Representations by Inverting Them", CVPR, 2015. [Web](https://github.com/aravindhm/deep-goggle)
- Semantic Segmentation
  - Source code for the paper "Rich feature hierarchies for accurate object detection and semantic segmentation", CVPR, 2014. [Web](https://github.com/rbgirshick/rcnn)
  - Source code for the paper "Fully Convolutional Networks for Semantic Segmentation", CVPR, 2015. [Web](https://github.com/longjon/caffe/tree/future)
- Super-Resolution
  - Image Super-Resolution for Anime-Style Art [Web](https://github.com/nagadomi/waifu2x)
- Edge Detection
  - Source code for the paper "DeepContour: A Deep Convolutional Feature Learned by Positive-Sharing Loss for Contour Detection", CVPR, 2015. [Web](https://github.com/shenwei1231/DeepContour)
  - Source code for the paper "Holistically-Nested Edge Detection", ICCV, 2015. [Web](https://github.com/s9xie/hed)

## Tutorials

- [CVPR 2014] [Tutorial on Deep Learning in Computer Vision](https://sites.google.com/site/deeplearningcvpr2014/)
- [CVPR 2015] [Applied Deep Learning for Computer Vision with Torch](https://github.com/soumith/cvpr2015)

## Blogs

- [Deep down the rabbit hole: CVPR 2015 and beyond @ Tombone's Computer Vision Blog](http://www.computervisionblog.com/2015/06/deep-down-rabbit-hole-cvpr-2015-and.html)
- [CVPR recap and where we're going @ Zoya Bylinskii (MIT PhD Student)'s Blog](http://zoyathinks.blogspot.kr/2015/06/cvpr-recap-and-where-were-going.html)
- [Facebook's AI Painting @ Wired](http://www.wired.com/2015/06/facebook-googles-fake-brains-spawn-new-visual-reality/)
- [Inceptionism: Going Deeper into Neural Networks @ Google Research](http://googleresearch.blogspot.kr/2015/06/inceptionism-going-deeper-into-neural.html)
- [Implementing Neural networks](http://peterroelants.github.io/)

Awesome Deep Vision on GitHub: https://github.com/kjw0612/awesome-deep-vision