# Awesome Question Answering [![Awesome](https://awesome.re/badge.svg)](https://github.com/sindresorhus/awesome)

_A curated list of resources on **[Question Answering (QA)](https://en.wikipedia.org/wiki/Question_answering)**, a computer science discipline within the fields of information retrieval and natural language processing (NLP), approached with machine learning and deep learning._

_정보 검색 및 자연 언어 처리 분야의 질의응답에 관한 큐레이션 - 머신러닝과 딥러닝 단계까지_

_问答系统主题的精选列表,是信息检索和自然语言处理领域的计算机科学学科 - 使用机器学习和深度学习_
## Contents

- [Recent Trends](#recent-trends)
- [About QA](#about-qa)
- [Events](#events)
- [Systems](#systems)
- [Competitions in QA](#competitions-in-qa)
- [Publications](#publications)
- [Codes](#codes)
- [Lectures](#lectures)
- [Slides](#slides)
- [Dataset Collections](#dataset-collections)
- [Datasets](#datasets)
- [Books](#books)
- [Links](#links)
## Recent Trends

### Recent QA Models

- DilBert: Delaying Interaction Layers in Transformer-based Encoders for Efficient Open Domain Question Answering (2020)
  - paper: https://arxiv.org/pdf/2010.08422.pdf
  - github: https://github.com/wissam-sib/dilbert
- UnifiedQA: Crossing Format Boundaries With a Single QA System (2020)
  - Demo: https://unifiedqa.apps.allenai.org/
- ProQA: Resource-efficient method for pretraining a dense corpus index for open-domain QA and IR (2020)
  - paper: https://arxiv.org/pdf/2005.00038.pdf
  - github: https://github.com/xwhan/ProQA
- TYDI QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages (2020)
  - paper: https://arxiv.org/ftp/arxiv/papers/2003/2003.05002.pdf
- Retrospective Reader for Machine Reading Comprehension
  - paper: https://arxiv.org/pdf/2001.09694v2.pdf
- TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection (AAAI 2020)
  - paper: https://arxiv.org/pdf/1911.04118.pdf
### Recent Language Models

- [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB), Kevin Clark, et al., ICLR, 2020.
- [TinyBERT: Distilling BERT for Natural Language Understanding](https://openreview.net/pdf?id=rJx0Q6EFPB), Xiaoqi Jiao, et al., ICLR, 2020.
- [MINILM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers](https://arxiv.org/abs/2002.10957), Wenhui Wang, et al., arXiv, 2020.
- [T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/abs/1910.10683), Colin Raffel, et al., arXiv preprint, 2019.
- [ERNIE: Enhanced Language Representation with Informative Entities](https://arxiv.org/abs/1905.07129), Zhengyan Zhang, et al., ACL, 2019.
- [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237), Zhilin Yang, et al., arXiv preprint, 2019.
- [ALBERT: A Lite BERT for Self-supervised Learning of Language Representations](https://arxiv.org/abs/1909.11942), Zhenzhong Lan, et al., arXiv preprint, 2019.
- [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692), Yinhan Liu, et al., arXiv preprint, 2019.
- [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/pdf/1910.01108.pdf), Victor Sanh, et al., arXiv, 2019.
- [SpanBERT: Improving Pre-training by Representing and Predicting Spans](https://arxiv.org/pdf/1907.10529v3.pdf), Mandar Joshi, et al., TACL, 2019.
- [BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding](https://arxiv.org/abs/1810.04805), Jacob Devlin, et al., NAACL 2019, 2018.
### AAAI 2020

- [TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection](https://arxiv.org/pdf/1911.04118.pdf), Siddhant Garg, et al., AAAI 2020, Nov 2019.

### ACL 2019

- [Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering](https://www.aclweb.org/anthology/W19-5039), Asma Ben Abacha, et al., ACL-W 2019, Aug 2019.
- [Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications](https://arxiv.org/pdf/1906.02829v1.pdf), Wei Zhao, et al., ACL 2019, Jun 2019.
- [Cognitive Graph for Multi-Hop Reading Comprehension at Scale](https://arxiv.org/pdf/1905.05460v2.pdf), Ming Ding, et al., ACL 2019, Jun 2019.
- [Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index](https://arxiv.org/abs/1906.05807), Minjoon Seo, et al., ACL 2019, Jun 2019.
- [Unsupervised Question Answering by Cloze Translation](https://arxiv.org/abs/1906.04980), Patrick Lewis, et al., ACL 2019, Jun 2019.
- [SemEval-2019 Task 10: Math Question Answering](https://www.aclweb.org/anthology/S19-2153), Mark Hopkins, et al., ACL-W 2019, Jun 2019.
- [Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader](https://arxiv.org/abs/1905.07098), Wenhan Xiong, et al., ACL 2019, May 2019.
- [Matching Article Pairs with Graphical Decomposition and Convolutions](https://arxiv.org/pdf/1802.07459v2.pdf), Bang Liu, et al., ACL 2019, May 2019.
- [Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data](https://arxiv.org/abs/1903.06164), Moonsu Han, et al., ACL 2019, Mar 2019.
- [Natural Questions: a Benchmark for Question Answering Research](https://ai.google/research/pubs/pub47761), Tom Kwiatkowski, et al., TACL 2019, Jan 2019.
- [Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension](https://arxiv.org/abs/1811.00232), Daesik Kim, et al., ACL 2019, Nov 2018.

### EMNLP-IJCNLP 2019

- [Language Models as Knowledge Bases?](https://arxiv.org/pdf/1909.01066v2.pdf), Fabio Petroni, et al., EMNLP-IJCNLP 2019, Sep 2019.
- [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490v3.pdf), Hao Tan, et al., EMNLP-IJCNLP 2019, Dec 2019.
- [Answering Complex Open-domain Questions Through Iterative Query Generation](https://arxiv.org/pdf/1910.07000v1.pdf), Peng Qi, et al., EMNLP-IJCNLP 2019, Oct 2019.
- [KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning](https://arxiv.org/pdf/1909.02151v1.pdf), Bill Yuchen Lin, et al., EMNLP-IJCNLP 2019, Sep 2019.
- [Mixture Content Selection for Diverse Sequence Generation](https://arxiv.org/pdf/1909.01953v1.pdf), Jaemin Cho, et al., EMNLP-IJCNLP 2019, Sep 2019.
- [A Discrete Hard EM Approach for Weakly Supervised Question Answering](https://arxiv.org/pdf/1909.04849v1.pdf), Sewon Min, et al., EMNLP-IJCNLP 2019, Sep 2019.
### Arxiv

- [Investigating the Successes and Failures of BERT for Passage Re-Ranking](https://arxiv.org/abs/1905.01758), Harshith Padigela, et al., arXiv preprint, May 2019.
- [BERT with History Answer Embedding for Conversational Question Answering](https://arxiv.org/abs/1905.05412), Chen Qu, et al., arXiv preprint, May 2019.
- [Understanding the Behaviors of BERT in Ranking](https://arxiv.org/abs/1904.07531), Yifan Qiao, et al., arXiv preprint, Apr 2019.
- [BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis](https://arxiv.org/abs/1904.02232), Hu Xu, et al., arXiv preprint, Apr 2019.
- [End-to-End Open-Domain Question Answering with BERTserini](https://arxiv.org/abs/1902.01718), Wei Yang, et al., arXiv preprint, Feb 2019.
- [A BERT Baseline for the Natural Questions](https://arxiv.org/abs/1901.08634), Chris Alberti, et al., arXiv preprint, Jan 2019.
- [Passage Re-ranking with BERT](https://arxiv.org/abs/1901.04085), Rodrigo Nogueira, et al., arXiv preprint, Jan 2019.
- [SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering](https://arxiv.org/abs/1812.03593), Chenguang Zhu, et al., arXiv, Dec 2018.

### Dataset

- [ELI5: Long Form Question Answering](https://arxiv.org/abs/1907.09190), Angela Fan, et al., ACL 2019, Jul 2019.
- [CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense](https://www.aclweb.org/anthology/W19-2008.pdf), Michael Chen, et al., RepEval 2019, Jun 2019.
## About QA

### Types of QA

- Single-turn QA: answer each question without considering any previous context
- Conversational QA: use previous conversation turns as context

### Subtypes of QA

- Knowledge-based QA
- Table/List-based QA
- Text-based QA
- Community-based QA
- Visual QA
### Analysis and Parsing for Pre-processing in QA Systems

Language analysis:

1. [Morphological analysis](https://www.cs.bham.ac.uk/~pjh/sem1a5/pt2/pt2_intro_morphology.html)
2. [Named Entity Recognition (NER)](mds/named-entity-recognition.md)
3. Homonym / polysemy analysis
4. Syntactic parsing (dependency parsing)
5. Semantic recognition
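Step 2 above (NER) can be made concrete with a deliberately naive sketch. Real systems use trained sequence-labeling models; the capitalized-run rule and the `naive_ner` name below are purely illustrative assumptions:

```python
# Toy NER for QA pre-processing: tag runs of capitalized tokens as
# candidate entities. Purely illustrative; real NER uses trained models.
def naive_ner(sentence):
    entities, run = [], []
    for tok in sentence.split():
        word = tok.strip(".,?")
        if word[:1].isupper():
            run.append(word)              # extend the current capitalized run
        elif run:
            entities.append(" ".join(run))
            run = []
    if run:                               # flush a run ending the sentence
        entities.append(" ".join(run))
    return entities

print(naive_ner("IBM Watson defeated Ken Jennings on the quiz show Jeopardy."))
# ['IBM Watson', 'Ken Jennings', 'Jeopardy']
```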
### Most QA systems have roughly 3 parts

1. Fact extraction
   1. Entity extraction
      1. [Named-Entity Recognition (NER)](mds/named-entity-recognition.md)
   2. [Relation extraction](mds/relation-extraction.md)
2. Understanding the question
3. Generating an answer
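The three parts can be sketched end to end with toy stand-ins. Every function here, and the "X is Y" pattern, is a hypothetical simplification for illustration, not how production QA systems work:

```python
# Toy three-stage QA pipeline: fact extraction, question understanding,
# answer generation. All names and rules here are illustrative only.
def extract_facts(corpus):
    # Stage 1 (fact extraction): harvest "X is Y" statements as facts.
    facts = {}
    for sentence in corpus:
        if " is " in sentence:
            subject, obj = sentence.rstrip(".").split(" is ", 1)
            facts[subject.strip().lower()] = obj.strip()
    return facts

def understand_question(question):
    # Stage 2 (question understanding): detect type and target entity.
    q = question.rstrip("?").lower()
    if q.startswith("what is "):
        return ("definition", q[len("what is "):].strip())
    return ("unknown", q)

def generate_answer(facts, parsed_question):
    # Stage 3 (answer generation): look the target up in the fact store.
    q_type, target = parsed_question
    if q_type == "definition" and target in facts:
        return facts[target]
    return "I don't know."

corpus = ["Watson is an IBM question answering system.",
          "SQuAD is a reading comprehension dataset."]
facts = extract_facts(corpus)
print(generate_answer(facts, understand_question("What is SQuAD?")))
# a reading comprehension dataset
```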
## Events

- Wolfram Alpha launched its answer engine in 2009.
- IBM's Watson system defeated top [Jeopardy!](https://www.jeopardy.com) champions in 2011.
- Apple's Siri integrated Wolfram Alpha's answer engine in 2011.
- Google embraced QA by launching its Knowledge Graph, leveraging the Freebase knowledge base, in 2012.
- Amazon Echo | Alexa (2015), Google Home | Google Assistant (2016), INVOKE | MS Cortana (2017), HomePod (2017)
## Systems

- [IBM Watson](https://www.ibm.com/watson/) - Has state-of-the-art performance.
- [Facebook DrQA](https://research.fb.com/downloads/drqa/) - Applied to the SQuAD 1.0 dataset. The SQuAD 2.0 dataset has been released, but DrQA has not been tested on it yet.
- [MIT Media Lab's Knowledge Graph](http://conceptnet.io/) - A freely available semantic network, designed to help computers understand the meanings of the words that people use.
## Competitions in QA

|    | Dataset | Language | Organizer | Since | Top Rank | Model | Status | Over Human Performance |
|----|---------|----------|-----------|-------|----------|-------|--------|------------------------|
| 0  | [Story Cloze Test](http://cs.rochester.edu/~nasrinm/files/Papers/lsdsem17-shared-task.pdf) | English | Univ. of Rochester | 2016 | msap | Logistic regression | Closed | x |
| 1  | MS MARCO | English | Microsoft | 2016 | YUANFUDAO research NLP | MARS | Closed | o |
| 2  | MS MARCO V2 | English | Microsoft | 2018 | NTT Media Intelli. Lab. | Masque Q&A Style | Opened | x |
| 3  | [SQuAD](https://arxiv.org/abs/1606.05250) | English | Univ. of Stanford | 2018 | XLNet (single model) | XLNet Team | Closed | o |
| 4  | [SQuAD 2.0](https://rajpurkar.github.io/SQuAD-explorer/) | English | Univ. of Stanford | 2018 | PINGAN Omni-Sinitic | ALBERT + DAAF + Verifier (ensemble) | Opened | o |
| 5  | [TriviaQA](http://nlp.cs.washington.edu/triviaqa/) | English | Univ. of Washington | 2017 | Ming Yan | - | Closed | - |
| 6  | [decaNLP](https://decanlp.com/) | English | Salesforce Research | 2018 | Salesforce Research | MQAN | Closed | x |
| 7  | [DuReader Ver1.](https://ai.baidu.com/broad/introduction) | Chinese | Baidu | 2015 | Tryer | T-Reader (single) | Closed | x |
| 8  | [DuReader Ver2.](https://ai.baidu.com/broad/introduction) | Chinese | Baidu | 2017 | renaissance | AliReader | Opened | - |
| 9  | [KorQuAD](https://korquad.github.io/KorQuad%201.0/) | Korean | LG CNS AI Research | 2018 | Clova AI LaRva Team | LaRva-Kor-Large+ + CLaF (single) | Closed | o |
| 10 | [KorQuAD 2.0](https://korquad.github.io/) | Korean | LG CNS AI Research | 2019 | Kangwon National University | KNU-baseline (single model) | Opened | x |
| 11 | [CoQA](https://stanfordnlp.github.io/coqa/) | English | Univ. of Stanford | 2018 | Zhuiyi Technology | RoBERTa + AT + KD (ensemble) | Opened | o |
## Publications

- Papers
  - ["Learning to Skim Text"](https://arxiv.org/pdf/1704.06877.pdf), Adams Wei Yu, Hongrae Lee, Quoc V. Le, 2017.
    - Note: show only what you want in the text.
  - ["Deep Joint Entity Disambiguation with Local Neural Attention"](https://arxiv.org/pdf/1704.04920.pdf), Octavian-Eugen Ganea and Thomas Hofmann, 2017.
  - ["Bi-directional Attention Flow for Machine Comprehension"](https://arxiv.org/pdf/1611.01603.pdf), Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi, ICLR, 2017.
  - ["Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks"](http://nlp.cs.berkeley.edu/pubs/FrancisLandau-Durrett-Klein_2016_EntityConvnets_paper.pdf), Matthew Francis-Landau, Greg Durrett and Dan Klein, NAACL-HLT 2016.
    - GitHub: https://GitHub.com/matthewfl/nlp-entity-convnet
  - ["Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions"](https://ieeexplore.ieee.org/document/6823700/), Wei Shen, Jianyong Wang, Jiawei Han, IEEE Transactions on Knowledge and Data Engineering (TKDE), 2014.
  - ["Introduction to 'This is Watson'"](https://ieeexplore.ieee.org/document/6177724/), D. A. Ferrucci, IBM Journal of Research and Development, 2012.
  - ["A survey on question answering technology from an information retrieval perspective"](https://www.sciencedirect.com/science/article/pii/S0020025511003860), Information Sciences, 2011.
  - ["Question Answering in Restricted Domains: An Overview"](https://www.mitpressjournals.org/doi/abs/10.1162/coli.2007.33.1.41), Diego Mollá and José Luis Vicedo, Computational Linguistics, 2007.
  - "Natural language question answering: the view from here", L. Hirschman, R. Gaizauskas, Natural Language Engineering, 2001.
- Entity Disambiguation / Entity Linking
## Codes

- [BiDAF](https://github.com/allenai/bi-att-flow) - The Bi-Directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
  - Official; Tensorflow v1.2
  - [Paper](https://arxiv.org/pdf/1611.01603.pdf)
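The bi-directional attention step can be sketched in NumPy with toy shapes and random vectors standing in for learned encodings. This is not the official implementation, only the core tensor arithmetic:

```python
# Toy sketch of bi-directional attention: from a similarity matrix S
# between T context positions and J query positions, compute
# context-to-query (C2Q) and query-to-context (Q2C) attended vectors.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, J, d = 4, 3, 5                      # context length, query length, width
rng = np.random.default_rng(0)
H = rng.standard_normal((T, d))        # context encodings (stand-ins)
U = rng.standard_normal((J, d))        # query encodings (stand-ins)

S = H @ U.T                            # similarity matrix, shape (T, J)
c2q = softmax(S, axis=1) @ U           # each context word attends to the query
b = softmax(S.max(axis=1))             # which context words matter to the query
q2c = np.tile(b @ H, (T, 1))           # query-to-context, broadcast over T

G = np.concatenate([H, c2q, H * c2q, H * q2c], axis=1)  # query-aware context
print(G.shape)                         # (4, 20)
```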
- [QANet](https://github.com/NLPLearn/QANet) - A Q&A architecture that does not require recurrent networks: its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions.
  - Google; Unofficial; Tensorflow v1.5
  - [Paper](#qanet)
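The two encoder ingredients, convolution for local interactions and self-attention for global ones, can be sketched in NumPy. Random weights and toy shapes only; the real QANet block additionally uses depthwise separable convolutions, layer norm, and residual connections:

```python
# Toy sketch of the QANet encoder's two ingredients (shapes only).
import numpy as np

def conv1d(x, w):
    # Local interactions: x is (T, d), w is a (k, d) kernel applied
    # over a sliding window of k positions (same-length output).
    k = w.shape[0]
    pad = np.pad(x, ((k // 2, k // 2), (0, 0)))
    return np.stack([(pad[t:t + k] * w).sum(axis=0) for t in range(x.shape[0])])

def self_attention(x):
    # Global interactions: every position attends to every other.
    scores = x @ x.T / np.sqrt(x.shape[1])
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    a = e / e.sum(axis=1, keepdims=True)
    return a @ x

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))        # sequence of 6 positions, width 4
y = self_attention(conv1d(x, rng.standard_normal((3, 4))))
print(y.shape)                         # (6, 4)
```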
- [R-Net](https://github.com/HKUST-KnowComp/R-Net) - An end-to-end neural network model for reading-comprehension-style question answering, which aims to answer questions from a given passage.
  - MS; Unofficial, by HKUST; Tensorflow v1.5
  - [Paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf)
- [R-Net-in-Keras](https://github.com/YerevaNN/R-NET-in-Keras) - R-NET re-implementation in Keras.
  - MS; Unofficial; Keras v2.0.6
  - [Paper](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf)
- [DrQA](https://github.com/hitvoice/DrQA) - DrQA is a system for reading comprehension applied to open-domain question answering.
  - Facebook; Official; Pytorch v0.4
  - [Paper](#drqa)
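A retrieve-then-read pipeline of the kind DrQA popularized can be caricatured in a few lines. The TF-IDF scoring is in the spirit of its retriever, but `read` here is a keyword heuristic standing in for the neural reader, and none of these names match DrQA's actual API:

```python
# Toy retrieve-then-read: TF-IDF retrieval, then a heuristic "reader".
import math
from collections import Counter

docs = {
    "watson": "IBM Watson defeated human champions on Jeopardy in 2011",
    "squad": "SQuAD is a reading comprehension dataset built from Wikipedia",
}

def retrieve(question):
    # Score each document by TF-IDF overlap with the question terms.
    terms = question.lower().split()
    n = len(docs)
    def score(text):
        tf = Counter(text.lower().split())
        s = 0.0
        for t in terms:
            df = sum(t in d.lower().split() for d in docs.values())
            if df:
                s += tf[t] * math.log(1 + n / df)
        return s
    return max(docs.values(), key=score)

def read(question, passage):
    # Stand-in for the neural reader: return what follows the last
    # passage token that also appears in the question.
    tokens = passage.split()
    q_words = set(question.lower().rstrip("?").split())
    last = -1
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,") in q_words:
            last = i
    return " ".join(tokens[last + 1:]) if last >= 0 else passage

q = "When did Watson defeat champions on Jeopardy?"
print(read(q, retrieve(q)))            # in 2011
```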
- [BERT](https://github.com/google-research/bert) - A new language representation model which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
  - Google; Official implementation; Tensorflow v1.11.0
  - [Paper](https://arxiv.org/abs/1810.04805)
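The masked-LM objective behind that bidirectional conditioning is easy to illustrate: some positions are hidden, and the model must predict them from context on both sides. A toy sketch only; real BERT also replaces some selected tokens with random or unchanged tokens and operates on WordPiece units:

```python
# Toy masked-LM data preparation in the spirit of BERT pretraining:
# hide ~15% of positions; the (position, original token) pairs are the
# prediction targets, recoverable from BOTH left and right context.
import random

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    rng = random.Random(seed)
    masked, targets = list(tokens), {}
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            targets[i] = tok
            masked[i] = "[MASK]"
    return masked, targets

tokens = "the man went to the store to buy a gallon of milk".split()
masked, targets = mask_tokens(tokens)
print(" ".join(masked))
```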
## Lectures

- [Question Answering - Natural Language Processing](https://youtu.be/Kzi6tE4JaGo) - By Dragomir Radev, Ph.D. | University of Michigan | 2016.

## Slides

- [Question Answering with Knowledge Bases, Web and Beyond](https://github.com/scottyih/Slides/blob/master/QA%20Tutorial.pdf) - By Scott Wen-tau Yih & Hao Ma | Microsoft Research | 2016.
- [Question Answering](https://hpi.de/fileadmin/user_upload/fachgebiete/plattner/teaching/NaturalLanguageProcessing/NLP2017/NLP8_QuestionAnswering.pdf) - By Dr. Mariana Neves | Hasso Plattner Institut | 2017.
## Dataset Collections

- [NLIWOD's Question answering datasets](https://github.com/dice-group/NLIWOD/tree/master/qa.datasets)
- [karthikncode's Datasets for Natural Language Processing](https://github.com/karthikncode/nlp-datasets)
[38;2;255;187;0m[4mDatasets[0m
|
||
[38;5;12m- [39m[38;5;14m[1mAI2 Science Questions v2.1(2017)[0m[38;5;12m (http://data.allenai.org/ai2-science-questions/)[39m
|
||
[38;5;12m - It consists of questions used in student assessments in the United States across elementary and middle school grade levels. Each question is 4-way multiple choice format and may or may not include a diagram element.[39m
|
||
[38;5;12m - Paper: http://ai2-website.s3.amazonaws.com/publications/AI2ReasoningChallenge2018.pdf[39m
|
||
[38;5;12m- [39m[38;5;14m[1mChildren's Book Test[0m[38;5;12m (https://uclmr.github.io/ai4exams/data.html)[39m
|
||
[38;5;12m- It is one of the bAbI project of Facebook AI Research which is organized towards the goal of automatic text understanding and reasoning. The CBT is designed to measure directly how well language models can exploit wider linguistic context.[39m
|
||
[38;5;12m- [39m[38;5;14m[1mCODAH Dataset[0m[38;5;12m (https://github.com/Websail-NU/CODAH)[39m
|
||
[38;5;12m- [39m[38;5;14m[1mDeepMind Q&A Dataset; CNN/Daily Mail[0m[38;5;12m (https://github.com/deepmind/rc-data)[39m
|
||
 - Hermann et al. (2015) created two awesome datasets using news articles for Q&A research. Each dataset contains many documents (90k and 197k each), and each document is accompanied by approximately 4 questions on average. Each question is a sentence with one missing word/phrase that can be found in the accompanying document/context.
|
||
[38;5;12m - Paper: https://arxiv.org/abs/1506.03340[39m
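The cloze construction described above (mask one entity in a summary sentence of the article) can be sketched as follows. The `make_cloze` helper and the toy entities are illustrative, though the `@entityN` anonymization does mirror the released CNN/Daily Mail data:

```python
def make_cloze(context_entities, summary_sentence, answer_entity):
    # Turn one highlight sentence into a cloze-style question by masking
    # the answer entity. Entities are assumed to be pre-anonymized as
    # @entityN markers, as in the released CNN/Daily Mail data.
    question = summary_sentence.replace(answer_entity, "@placeholder")
    return {
        "question": question,
        "answer": answer_entity,
        "candidates": context_entities,  # the answer is one of the document's entities
    }

example = make_cloze(
    context_entities=["@entity1", "@entity2"],
    summary_sentence="@entity1 visited @entity2 on Tuesday",
    answer_entity="@entity2",
)
```

A system then scores each candidate entity as a filler for `@placeholder` given the full article.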
|
||
[38;5;12m- [39m[38;5;14m[1mELI5[0m[38;5;12m (https://github.com/facebookresearch/ELI5)[39m
|
||
[38;5;12m - Paper: https://arxiv.org/abs/1907.09190[39m
|
||
[38;5;12m- [39m[38;5;14m[1mGraphQuestions[0m[38;5;12m (https://github.com/ysu1989/GraphQuestions)[39m
|
||
[38;5;12m - On generating Characteristic-rich Question sets for QA evaluation.[39m
|
||
[38;5;12m- [39m[38;5;14m[1mLC-QuAD[0m[38;5;12m (http://sda.cs.uni-bonn.de/projects/qa-dataset/)[39m
|
||
 - It is a gold standard KBQA (Question Answering over Knowledge Base) dataset containing 5,000 questions and their corresponding SPARQL queries. LC-QuAD uses DBpedia v04.16 as the target KB.
|
||
[38;5;12m- [39m[38;5;14m[1mMS MARCO[0m[38;5;12m (http://www.msmarco.org/dataset.aspx)[39m
|
||
 - A large-scale dataset for real-world question answering, with questions sampled from real, anonymized Bing search queries.
|
||
[38;5;12m - Paper: https://arxiv.org/abs/1611.09268[39m
|
||
[38;5;12m- [39m[38;5;14m[1mMultiRC[0m[38;5;12m (https://cogcomp.org/multirc/)[39m
|
||
[38;5;12m - A dataset of short paragraphs and multi-sentence questions[39m
|
||
[38;5;12m - Paper: http://cogcomp.org/page/publication_view/833 [39m
|
||
[38;5;12m- [39m[38;5;14m[1mNarrativeQA[0m[38;5;12m (https://github.com/deepmind/narrativeqa)[39m
|
||
[38;5;12m - It includes the list of documents with Wikipedia summaries, links to full stories, and questions and answers.[39m
|
||
[38;5;12m - Paper: https://arxiv.org/pdf/1712.07040v1.pdf[39m
|
||
[38;5;12m- [39m[38;5;14m[1mNewsQA[0m[38;5;12m (https://github.com/Maluuba/newsqa)[39m
|
||
[38;5;12m - A machine comprehension dataset[39m
|
||
[38;5;12m - Paper: https://arxiv.org/pdf/1611.09830.pdf[39m
|
||
- **Question-Answer Dataset by CMU** (http://www.cs.cmu.edu/~ark/QA-data/)
|
||
 - This is a corpus of Wikipedia articles, manually-generated factoid questions from them, and manually-generated answers to these questions, for use in academic research. These data were collected by Noah Smith, Michael Heilman, Rebecca Hwa, Shay Cohen, Kevin Gimpel, and many students at Carnegie Mellon University and the University of Pittsburgh between 2008 and 2010.
|
||
[38;5;12m- [39m[38;5;14m[1mSQuAD1.0[0m[38;5;12m (https://rajpurkar.github.io/SQuAD-explorer/)[39m
|
||
 - Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
|
||
[38;5;12m - Paper: https://arxiv.org/abs/1606.05250[39m
|
||
[38;5;12m- [39m[38;5;14m[1mSQuAD2.0[0m[38;5;12m (https://rajpurkar.github.io/SQuAD-explorer/)[39m
|
||
 - SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 new, unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.
|
||
[38;5;12m - Paper: https://arxiv.org/abs/1806.03822[39m
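The abstention requirement above shows up directly in the metric. Below is a minimal sketch of SQuAD-style exact-match scoring with the no-answer convention (empty gold list = unanswerable, empty prediction = abstain); it is not the official evaluation script, just an illustration of its normalization and no-answer logic:

```python
import re
import string

def normalize(text):
    # SQuAD-style answer normalization: lowercase, drop punctuation,
    # drop the articles a/an/the, and collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, gold_answers):
    # SQuAD2.0 scoring must handle unanswerable questions: an empty
    # gold-answer list means the only correct prediction is abstention,
    # represented here as the empty string.
    if not gold_answers:
        return int(prediction == "")
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))
```

For example, `exact_match("The Eiffel Tower", ["Eiffel Tower"])` scores 1 after normalization, while answering anything at all on an unanswerable question scores 0.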
|
||
[38;5;12m- [39m[38;5;14m[1mStory cloze test[0m[38;5;12m (http://cs.rochester.edu/nlp/rocstories/)[39m
|
||
[38;5;12m - 'Story Cloze Test' is a new commonsense reasoning framework for evaluating story understanding, story generation, and script learning. This test requires a system to choose the correct ending to a four-sentence story.[39m
|
||
[38;5;12m - Paper: https://arxiv.org/abs/1604.01696[39m
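The task format above (four context sentences, choose the right ending) can be illustrated with a toy lexical-overlap baseline. This is purely a sketch of the input/output shape; real Story Cloze systems use trained models, and the helper names here are my own:

```python
import re

def tokens(text):
    # Crude word tokenizer for the toy baseline below.
    return set(re.findall(r"[a-z']+", text.lower()))

def choose_ending(story_sentences, candidate_endings):
    # Toy baseline for the Story Cloze Test: pick the candidate ending
    # that shares the most word types with the four-sentence context.
    context = tokens(" ".join(story_sentences))
    overlaps = [len(context & tokens(e)) for e in candidate_endings]
    return overlaps.index(max(overlaps))

story = [
    "Tom bought a new bike.",
    "He rode it to school every day.",
    "One day the tire went flat.",
    "He took the bike to the repair shop.",
]
endings = ["The shop fixed the flat tire.", "Tom cooked dinner for his family."]
chosen = choose_ending(story, endings)
```

Word overlap is exactly the kind of shallow cue the dataset's adversarial design tries to defeat, which is why the test is considered a commonsense benchmark rather than a retrieval one.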
|
||
[38;5;12m- [39m[38;5;14m[1mTriviaQA[0m[38;5;12m (http://nlp.cs.washington.edu/triviaqa/)[39m
|
||
 - TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high quality distant supervision for answering the questions.
|
||
[38;5;12m - Paper: https://arxiv.org/abs/1705.03551[39m
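"Distant supervision" here means the evidence documents were gathered independently and are paired with answers by string matching, not human verification. A minimal sketch of that labeling idea (the function name and toy data are illustrative, not from the TriviaQA codebase):

```python
def distant_labels(answer_aliases, evidence_docs):
    # Distant supervision: an evidence document is treated as supporting
    # the answer if it merely contains one of the answer strings; no human
    # checks that the document actually justifies the answer.
    aliases = [a.lower() for a in answer_aliases]
    return [any(a in doc.lower() for a in aliases) for doc in evidence_docs]

labels = distant_labels(
    ["Paris"],
    [
        "Paris is the capital and largest city of France.",
        "France is a country in Western Europe.",
    ],
)
```

The resulting labels are noisy (a document can mention the answer string without supporting it), which is why the paper calls this "distant" rather than gold supervision.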
|
||
[38;5;12m- [39m[38;5;14m[1mWikiQA[0m[38;5;12m (https://www.microsoft.com/en-us/download/details.aspx?id=52419&from=https%3A%2F%2Fresearch.microsoft.com%2Fen-US%2Fdownloads%2F4495da01-db8c-4041-a7f6-7984a4f6a905%2Fdefault.aspx)[39m
|
||
[38;5;12m - A publicly available set of question and sentence pairs for open-domain question answering.[39m
|
||
[38;5;12m [39m
|
||
[38;2;255;187;0m[4mThe DeepQA Research Team in IBM Watson's publication within 5 years[0m
|
||
[38;5;12m- 2015[39m
|
||
[38;5;12m - "Automated Problem List Generation from Electronic Medical Records in IBM Watson", Murthy Devarakonda, Ching-Huei Tsou, IAAI, 2015.[39m
|
||
[38;5;12m - "Decision Making in IBM Watson Question Answering", J. William Murdock, Ontology summit, 2015.[39m
|
||
[38;5;12m - [39m[38;5;14m[1m"Unsupervised Entity-Relation Analysis in IBM Watson"[0m[38;5;12m (http://www.cogsys.org/papers/ACS2015/article12.pdf), Aditya Kalyanpur, J William Murdock, ACS, 2015.[39m
|
||
[38;5;12m - "Commonsense Reasoning: An Event Calculus Based Approach", E T Mueller, Morgan Kaufmann/Elsevier, 2015.[39m
|
||
[38;5;12m- 2014[39m
|
||
[38;5;12m - "Problem-oriented patient record summary: An early report on a Watson application", M. Devarakonda, Dongyang Zhang, Ching-Huei Tsou, M. Bornea, Healthcom, 2014.[39m
|
||
 - **"WatsonPaths: Scenario-based Question Answering and Inference over Unstructured Information"** (http://domino.watson.ibm.com/library/Cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/088f74984a07645485257d5f006ace96!OpenDocument&Highlight=0,RC25489), Adam Lally, Sugato Bachi, Michael A. Barborak, David W. Buchanan, Jennifer Chu-Carroll, David A. Ferrucci, Michael R. Glass, Aditya Kalyanpur, Erik T. Mueller, J. William Murdock, Siddharth Patwardhan, John M. Prager, Christopher A. Welty, IBM Research Report RC25489, 2014.
|
||
[38;5;12m - [39m[38;5;14m[1m"Medical Relation Extraction with Manifold Models"[0m[38;5;12m (http://acl2014.org/acl2014/P14-1/pdf/P14-1078.pdf), Chang Wang and James Fan, ACL, 2014.[39m
|
||
|
||
[38;2;255;187;0m[4mMS Research's publication within 5 years[0m
|
||
[38;5;12m- 2018[39m
|
||
[38;5;12m - "Characterizing and Supporting Question Answering in Human-to-Human Communication", Xiao Yang, Ahmed Hassan Awadallah, Madian Khabsa, Wei Wang, Miaosen Wang, ACM SIGIR, 2018.[39m
|
||
[38;5;12m - [39m[38;5;14m[1m"FigureQA: An Annotated Figure Dataset for Visual Reasoning"[0m[38;5;12m (https://arxiv.org/abs/1710.07300), Samira Ebrahimi Kahou, Vincent Michalski, Adam Atkinson, Akos Kadar, Adam Trischler, Yoshua Bengio, ICLR, 2018[39m
|
||
[38;5;12m- 2017[39m
|
||
[38;5;12m - "Multi-level Attention Networks for Visual Question Answering", Dongfei Yu, Jianlong Fu, Tao Mei, Yong Rui, CVPR, 2017.[39m
|
||
[38;5;12m - "A Joint Model for Question Answering and Question Generation", Tong Wang, Xingdi (Eric) Yuan, Adam Trischler, ICML, 2017.[39m
|
||
[38;5;12m - "Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension", David Golub, Po-Sen Huang, Xiaodong He, Li Deng, EMNLP, 2017.[39m
|
||
 - "Question-Answering with Grammatically-Interpretable Representations", Hamid Palangi, Paul Smolensky, Xiaodong He, Li Deng, 2017.
|
||
[38;5;12m - "Search-based Neural Structured Learning for Sequential Question Answering", Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang, ACL, 2017.[39m
|
||
[38;5;12m- 2016[39m
|
||
 - **"Stacked Attention Networks for Image Question Answering"** (https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Yang_Stacked_Attention_Networks_CVPR_2016_paper.html), Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola, CVPR, 2016.
|
||
[38;5;12m - [39m[38;5;14m[1m"Question Answering with Knowledge Base, Web and Beyond"[0m[38;5;12m (https://www.microsoft.com/en-us/research/publication/question-answering-with-knowledge-base-web-and-beyond/), Yih, Scott Wen-tau and Ma, Hao, ACM SIGIR, 2016.[39m
|
||
[38;5;12m - [39m[38;5;14m[1m"NewsQA: A Machine Comprehension Dataset"[0m[38;5;12m (https://arxiv.org/abs/1611.09830), Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman, RepL4NLP, 2016.[39m
|
||
[38;5;12m - [39m[38;5;14m[1m"Table Cell Search for Question Answering"[0m[38;5;12m (https://dl.acm.org/citation.cfm?id=2883080), Sun, Huan and Ma, Hao and He, Xiaodong and Yih, Wen-tau and Su, Yu and Yan, Xifeng, WWW, 2016.[39m
|
||
[38;5;12m- 2015[39m
|
||
[38;5;12m - [39m[38;5;14m[1m"WIKIQA: A Challenge Dataset for Open-Domain Question Answering"[0m[38;5;12m (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/YangYihMeek_EMNLP-15_WikiQA.pdf), Yi Yang, Wen-tau Yih, and Christopher Meek, EMNLP, 2015.[39m
|
||
[38;5;12m - [39m[38;5;14m[1m"Web-based Question Answering: Revisiting AskMSR"[0m[38;5;12m (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/AskMSRPlusTR_082815.pdf), Chen-Tse Tsai, Wen-tau Yih, and Christopher J.C. Burges, MSR-TR, 2015.[39m
|
||
[38;5;12m - [39m[38;5;14m[1m"Open Domain Question Answering via Semantic Enrichment"[0m[38;5;12m (https://dl.acm.org/citation.cfm?id=2741651), Huan Sun, Hao Ma, Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang, WWW, 2015.[39m
|
||
[38;5;12m- 2014[39m
|
||
 - **"An Overview of Microsoft Deep QA System on Stanford WebQuestions Benchmark"** (https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Microsoft20Deep20QA.pdf), Zhenghao Wang, Shengquan Yan, Huaming Wang, and Xuedong Huang, MSR-TR, 2014.
|
||
 - **"Semantic Parsing for Single-Relation Question Answering"**, Wen-tau Yih, Xiaodong He, Christopher Meek, ACL, 2014.
|
||
[38;5;12m [39m
|
||
[38;2;255;187;0m[4mGoogle AI's publication within 5 years[0m
|
||
[38;5;12m- 2018[39m
|
||
[38;5;12m - Google QA [39m
|
||
    - **"QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension"** (https://openreview.net/pdf?id=B14TlG-RW), Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le, ICLR, 2018.
|
||
    - **"Ask the Right Questions: Active Question Reformulation with Reinforcement Learning"** (https://openreview.net/pdf?id=S1CChZ-CZ), Christian Buck and Jannis Bulian and Massimiliano Ciaramita and Wojciech Paweł Gajewski and Andrea Gesmundo and Neil Houlsby and Wei Wang, ICLR, 2018.
|
||
    - **"Building Large Machine Reading-Comprehension Datasets using Paragraph Vectors"** (https://arxiv.org/pdf/1612.04342.pdf), Radu Soricut, Nan Ding, 2018.
|
||
[38;5;12m - Sentence representation[39m
|
||
    - **"An efficient framework for learning sentence representations"** (https://arxiv.org/pdf/1803.02893.pdf), Lajanugen Logeswaran, Honglak Lee, ICLR, 2018.
|
||
[38;5;12m - [39m[38;5;14m[1m"Did the model understand the question?"[0m[38;5;12m (https://arxiv.org/pdf/1805.05492.pdf), Pramod K. Mudrakarta and Ankur Taly and Mukund Sundararajan and Kedar Dhamdhere, ACL, 2018.[39m
|
||
[38;5;12m- 2017[39m
|
||
 - **"Analyzing Language Learned by an Active Question Answering Agent"** (https://arxiv.org/pdf/1801.07537.pdf), Christian Buck and Jannis Bulian and Massimiliano Ciaramita and Wojciech Gajewski and Andrea Gesmundo and Neil Houlsby and Wei Wang, NIPS, 2017.
|
||
[38;5;12m - [39m[38;5;14m[1m"Learning Recurrent Span Representations for Extractive Question Answering"[0m[38;5;12m (https://arxiv.org/pdf/1611.01436.pdf), Kenton Lee and Shimi Salant and Tom Kwiatkowski and Ankur Parikh and Dipanjan Das and Jonathan Berant, ICLR, 2017.[39m
|
||
[38;5;12m - Identify the same question[39m
|
||
    - **"Neural Paraphrase Identification of Questions with Noisy Pretraining"** (https://arxiv.org/pdf/1704.04565.pdf), Gaurav Singh Tomar and Thyago Duque and Oscar Täckström and Jakob Uszkoreit and Dipanjan Das, SCLeM, 2017.
|
||
[38;5;12m- 2014[39m
|
||
[38;5;12m - "Great Question! Question Quality in Community Q&A", Sujith Ravi and Bo Pang and Vibhor Rastogi and Ravi Kumar, ICWSM, 2014.[39m
|
||
|
||
[38;2;255;187;0m[4mFacebook AI Research's publication within 5 years[0m
|
||
[38;5;12m- 2018[39m
|
||
[38;5;12m - [39m[38;5;14m[1mEmbodied Question Answering[0m[38;5;12m (https://research.fb.com/publications/embodied-question-answering/), Abhishek Das, Samyak Datta, Georgia Gkioxari, Stefan Lee, Devi Parikh, and Dhruv Batra, CVPR, 2018[39m
|
||
 - **Do explanations make VQA models more predictable to a human?** (https://research.fb.com/publications/do-explanations-make-vqa-models-more-predictable-to-a-human/), Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, and Devi Parikh, EMNLP, 2018
|
||
[38;5;12m - [39m[38;5;14m[1mNeural Compositional Denotational Semantics for Question Answering[0m[38;5;12m (https://research.fb.com/publications/neural-compositional-denotational-semantics-for-question-answering/), Nitish Gupta, Mike Lewis, EMNLP, 2018[39m
|
||
[38;5;12m- 2017[39m
|
||
[38;5;12m - DrQA [39m
|
||
    - **Reading Wikipedia to Answer Open-Domain Questions** (https://cs.stanford.edu/people/danqi/papers/acl2017.pdf), Danqi Chen, Adam Fisch, Jason Weston & Antoine Bordes, ACL, 2017.
|
||
|
||
[38;2;255;187;0m[4mBooks[0m
|
||
[38;5;12m- Natural Language Question Answering system Paperback - Boris Galitsky (2003)[39m
|
||
[38;5;12m- New Directions in Question Answering - Mark T. Maybury (2004)[39m
|
||
[38;5;12m- Part 3. 5. Question Answering in The Oxford Handbook of Computational Linguistics - Sanda Harabagiu and Dan Moldovan (2005)[39m
|
||
[38;5;12m- Chap.28 Question Answering in Speech and Language Processing - Daniel Jurafsky & James H. Martin (2017)[39m
|
||
|
||
[38;2;255;187;0m[4mLinks[0m
|
||
[38;5;12m- [39m[38;5;14m[1mBuilding a Question-Answering System from Scratch— Part 1[0m[38;5;12m (https://towardsdatascience.com/building-a-question-answering-system-part-1-9388aadff507)[39m
|
||
- **Question Answering with TensorFlow by Steven Hewitt, O'Reilly, 2017** (https://www.oreilly.com/ideas/question-answering-with-tensorflow)
|
||
[38;5;12m- [39m[38;5;14m[1mWhy question answering is hard[0m[38;5;12m (http://nicklothian.com/blog/2014/09/25/why-question-answering-is-hard/)[39m
|
||
|
||
|
||
[38;2;255;187;0m[4mContributing[0m
|
||
|
||
[38;5;12mContributions welcome! Read the [39m[38;5;14m[1mcontribution guidelines[0m[38;5;12m (contributing.md) first.[39m
|
||
|
||
[38;2;255;187;0m[4mLicense[0m
|
||
[38;5;14m[1m![0m[38;5;12mCC0[39m[38;5;14m[1m (http://mirrors.creativecommons.org/presskit/buttons/88x31/svg/cc-zero.svg)[0m[38;5;12m (https://creativecommons.org/share-your-work/public-domain/cc0/)[39m
|
||
|
||
[38;5;12mTo the extent possible under law, [39m[38;5;14m[1mseriousmac[0m[38;5;12m (https://github.com/seriousmac) (the maintainer) has waived all copyright and related or neighboring rights to this work.[39m
|
||
|
||
GitHub: https://github.com/seriousran/awesome-qa
|