<h1 id="awesome-question-answering-awesome">Awesome Question Answering <a href="https://github.com/sindresorhus/awesome"><img src="https://awesome.re/badge.svg" alt="Awesome" /></a></h1>
<p><em>A curated list of resources on <strong><a href="https://en.wikipedia.org/wiki/Question_answering">Question Answering (QA)</a></strong>, a computer science discipline within information retrieval and natural language processing (NLP) that increasingly relies on machine learning and deep learning</em></p>
<p><em>정보 검색 및 자연 언어 처리 분야의 질의응답에 관한 큐레이션 - 머신러닝과 딥러닝 단계까지</em><br/>
<em>问答系统主题的精选列表，是信息检索和自然语言处理领域的计算机科学学科 - 使用机器学习和深度学习</em></p>
<h2 id="contents">Contents</h2>
<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->
<ul>
<li><a href="#recent-trends">Recent Trends</a></li>
<li><a href="#about-qa">About QA</a></li>
<li><a href="#events">Events</a></li>
<li><a href="#systems">Systems</a></li>
<li><a href="#competitions-in-qa">Competitions in QA</a></li>
<li><a href="#publications">Publications</a></li>
<li><a href="#codes">Codes</a></li>
<li><a href="#lectures">Lectures</a></li>
<li><a href="#slides">Slides</a></li>
<li><a href="#dataset-collections">Dataset Collections</a></li>
<li><a href="#datasets">Datasets</a></li>
<li><a href="#books">Books</a></li>
<li><a href="#links">Links</a></li>
</ul>
<!-- END doctoc generated TOC please keep comment here to allow auto update -->
<h2 id="recent-trends">Recent Trends</h2>
<h3 id="recent-qa-models">Recent QA Models</h3>
<ul>
<li>DilBert: Delaying Interaction Layers in Transformer-based Encoders for Efficient Open Domain Question Answering (2020)
<ul>
<li>paper: https://arxiv.org/pdf/2010.08422.pdf</li>
<li>github: https://github.com/wissam-sib/dilbert</li>
</ul></li>
<li>UnifiedQA: Crossing Format Boundaries With a Single QA System (2020)
<ul>
<li>Demo: https://unifiedqa.apps.allenai.org/</li>
</ul></li>
<li>ProQA: Resource-efficient method for pretraining a dense corpus index for open-domain QA and IR (2020)
<ul>
<li>paper: https://arxiv.org/pdf/2005.00038.pdf</li>
<li>github: https://github.com/xwhan/ProQA</li>
</ul></li>
<li>TYDI QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages (2020)
<ul>
<li>paper: https://arxiv.org/ftp/arxiv/papers/2003/2003.05002.pdf</li>
</ul></li>
<li>Retrospective Reader for Machine Reading Comprehension
<ul>
<li>paper: https://arxiv.org/pdf/2001.09694v2.pdf</li>
</ul></li>
<li>TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection (AAAI 2020)
<ul>
<li>paper: https://arxiv.org/pdf/1911.04118.pdf</li>
</ul></li>
</ul>
<h3 id="recent-language-models">Recent Language Models</h3>
<ul>
<li><a href="https://openreview.net/pdf?id=r1xMH1BtvB">ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators</a>, Kevin Clark, et al., ICLR, 2020.</li>
<li><a href="https://openreview.net/pdf?id=rJx0Q6EFPB">TinyBERT: Distilling BERT for Natural Language Understanding</a>, Xiaoqi Jiao, et al., ICLR, 2020.</li>
<li><a href="https://arxiv.org/abs/2002.10957">MINILM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers</a>, Wenhui Wang, et al., arXiv, 2020.</li>
<li><a href="https://arxiv.org/abs/1910.10683">T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer</a>, Colin Raffel, et al., arXiv preprint, 2019.</li>
<li><a href="https://arxiv.org/abs/1905.07129">ERNIE: Enhanced Language Representation with Informative Entities</a>, Zhengyan Zhang, et al., ACL, 2019.</li>
<li><a href="https://arxiv.org/abs/1906.08237">XLNet: Generalized Autoregressive Pretraining for Language Understanding</a>, Zhilin Yang, et al., arXiv preprint, 2019.</li>
<li><a href="https://arxiv.org/abs/1909.11942">ALBERT: A Lite BERT for Self-supervised Learning of Language Representations</a>, Zhenzhong Lan, et al., arXiv preprint, 2019.</li>
<li><a href="https://arxiv.org/abs/1907.11692">RoBERTa: A Robustly Optimized BERT Pretraining Approach</a>, Yinhan Liu, et al., arXiv preprint, 2019.</li>
<li><a href="https://arxiv.org/pdf/1910.01108.pdf">DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter</a>, Victor Sanh, et al., arXiv, 2019.</li>
<li><a href="https://arxiv.org/pdf/1907.10529v3.pdf">SpanBERT: Improving Pre-training by Representing and Predicting Spans</a>, Mandar Joshi, et al., TACL, 2019.</li>
<li><a href="https://arxiv.org/abs/1810.04805">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</a>, Jacob Devlin, et al., NAACL 2019, 2018.</li>
</ul>
<h3 id="aaai-2020">AAAI 2020</h3>
<ul>
<li><a href="https://arxiv.org/pdf/1911.04118.pdf">TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection</a>, Siddhant Garg, et al., AAAI 2020, Nov 2019.</li>
</ul>
<h3 id="acl-2019">ACL 2019</h3>
<ul>
<li><a href="https://www.aclweb.org/anthology/W19-5039">Overview of the MEDIQA 2019 Shared Task on Textual Inference, Question Entailment and Question Answering</a>, Asma Ben Abacha, et al., ACL-W 2019, Aug 2019.</li>
<li><a href="https://arxiv.org/pdf/1906.02829v1.pdf">Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications</a>, Wei Zhao, et al., ACL 2019, Jun 2019.</li>
<li><a href="https://arxiv.org/pdf/1905.05460v2.pdf">Cognitive Graph for Multi-Hop Reading Comprehension at Scale</a>, Ming Ding, et al., ACL 2019, Jun 2019.</li>
<li><a href="https://arxiv.org/abs/1906.05807">Real-Time Open-Domain Question Answering with Dense-Sparse Phrase Index</a>, Minjoon Seo, et al., ACL 2019, Jun 2019.</li>
<li><a href="https://arxiv.org/abs/1906.04980">Unsupervised Question Answering by Cloze Translation</a>, Patrick Lewis, et al., ACL 2019, Jun 2019.</li>
<li><a href="https://www.aclweb.org/anthology/S19-2153">SemEval-2019 Task 10: Math Question Answering</a>, Mark Hopkins, et al., ACL-W 2019, Jun 2019.</li>
<li><a href="https://arxiv.org/abs/1905.07098">Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader</a>, Wenhan Xiong, et al., ACL 2019, May 2019.</li>
<li><a href="https://arxiv.org/pdf/1802.07459v2.pdf">Matching Article Pairs with Graphical Decomposition and Convolutions</a>, Bang Liu, et al., ACL 2019, May 2019.</li>
<li><a href="https://arxiv.org/abs/1903.06164">Episodic Memory Reader: Learning what to Remember for Question Answering from Streaming Data</a>, Moonsu Han, et al., ACL 2019, Mar 2019.</li>
<li><a href="https://ai.google/research/pubs/pub47761">Natural Questions: a Benchmark for Question Answering Research</a>, Tom Kwiatkowski, et al., TACL 2019, Jan 2019.</li>
<li><a href="https://arxiv.org/abs/1811.00232">Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension</a>, Daesik Kim, et al., ACL 2019, Nov 2018.</li>
</ul>
<h3 id="emnlp-ijcnlp-2019">EMNLP-IJCNLP 2019</h3>
<ul>
<li><a href="https://arxiv.org/pdf/1909.01066v2.pdf">Language Models as Knowledge Bases?</a>, Fabio Petroni, et al., EMNLP-IJCNLP 2019, Sep 2019.</li>
<li><a href="https://arxiv.org/pdf/1908.07490v3.pdf">LXMERT: Learning Cross-Modality Encoder Representations from Transformers</a>, Hao Tan, et al., EMNLP-IJCNLP 2019, Dec 2019.</li>
<li><a href="https://arxiv.org/pdf/1910.07000v1.pdf">Answering Complex Open-domain Questions Through Iterative Query Generation</a>, Peng Qi, et al., EMNLP-IJCNLP 2019, Oct 2019.</li>
<li><a href="https://arxiv.org/pdf/1909.02151v1.pdf">KagNet: Knowledge-Aware Graph Networks for Commonsense Reasoning</a>, Bill Yuchen Lin, et al., EMNLP-IJCNLP 2019, Sep 2019.</li>
<li><a href="https://arxiv.org/pdf/1909.01953v1.pdf">Mixture Content Selection for Diverse Sequence Generation</a>, Jaemin Cho, et al., EMNLP-IJCNLP 2019, Sep 2019.</li>
<li><a href="https://arxiv.org/pdf/1909.04849v1.pdf">A Discrete Hard EM Approach for Weakly Supervised Question Answering</a>, Sewon Min, et al., EMNLP-IJCNLP 2019, Sep 2019.</li>
</ul>
<h3 id="arxiv">Arxiv</h3>
<ul>
<li><a href="https://arxiv.org/abs/1905.01758">Investigating the Successes and Failures of BERT for Passage Re-Ranking</a>, Harshith Padigela, et al., arXiv preprint, May 2019.</li>
<li><a href="https://arxiv.org/abs/1905.05412">BERT with History Answer Embedding for Conversational Question Answering</a>, Chen Qu, et al., arXiv preprint, May 2019.</li>
<li><a href="https://arxiv.org/abs/1904.07531">Understanding the Behaviors of BERT in Ranking</a>, Yifan Qiao, et al., arXiv preprint, Apr 2019.</li>
<li><a href="https://arxiv.org/abs/1904.02232">BERT Post-Training for Review Reading Comprehension and Aspect-based Sentiment Analysis</a>, Hu Xu, et al., arXiv preprint, Apr 2019.</li>
<li><a href="https://arxiv.org/abs/1902.01718">End-to-End Open-Domain Question Answering with BERTserini</a>, Wei Yang, et al., arXiv preprint, Feb 2019.</li>
<li><a href="https://arxiv.org/abs/1901.08634">A BERT Baseline for the Natural Questions</a>, Chris Alberti, et al., arXiv preprint, Jan 2019.</li>
<li><a href="https://arxiv.org/abs/1901.04085">Passage Re-ranking with BERT</a>, Rodrigo Nogueira, et al., arXiv preprint, Jan 2019.</li>
<li><a href="https://arxiv.org/abs/1812.03593">SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering</a>, Chenguang Zhu, et al., arXiv, Dec 2018.</li>
</ul>
<h3 id="dataset">Dataset</h3>
<ul>
<li><a href="https://arxiv.org/abs/1907.09190">ELI5: Long Form Question Answering</a>, Angela Fan, et al., ACL 2019, Jul 2019.</li>
<li><a href="https://www.aclweb.org/anthology/W19-2008.pdf">CODAH: An Adversarially-Authored Question Answering Dataset for Common Sense</a>, Michael Chen, et al., RepEval 2019, Jun 2019.</li>
</ul>
<h2 id="about-qa">About QA</h2>
<h3 id="types-of-qa">Types of QA</h3>
<ul>
<li>Single-turn QA: answer without considering any conversational context</li>
<li>Conversational QA: use previous conversation turns</li>
</ul>
<h4 id="subtypes-of-qa">Subtypes of QA</h4>
<ul>
<li>Knowledge-based QA</li>
<li>Table/List-based QA</li>
<li>Text-based QA</li>
<li>Community-based QA</li>
<li>Visual QA</li>
</ul>
<h3 id="analysis-and-parsing-for-pre-processing-in-qa-systems">Analysis and Parsing for Pre-processing in QA systems</h3>
<p>Language Analysis</p>
<ol type="1">
<li><a href="https://www.cs.bham.ac.uk/~pjh/sem1a5/pt2/pt2_intro_morphology.html">Morphological analysis</a></li>
<li><a href="mds/named-entity-recognition.md">Named Entity Recognition (NER)</a></li>
<li>Homonym / Polysemy Analysis</li>
<li>Syntactic Parsing (Dependency Parsing)</li>
<li>Semantic Recognition</li>
</ol>
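As a toy illustration of the first two analysis steps above (morphological analysis and named entity recognition), the sketch below reduces them to naive suffix stripping and capitalized-run grouping. All function names and rules here are invented for illustration; real QA systems use trained morphological analyzers and NER taggers.

```python
import re

# Toy pre-processing sketch: the rules below are invented for illustration
# and are far simpler than a trained analyzer or tagger.

def tokenize(text):
    """Split text into word tokens."""
    return re.findall(r"[A-Za-z']+|\d+", text)

def crude_stem(token):
    """Morphological analysis reduced to naive suffix stripping."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

def crude_ner(tokens):
    """NER reduced to grouping runs of capitalized tokens."""
    entities, run = [], []
    for tok in tokens + [""]:          # "" sentinel flushes the last run
        if tok[:1].isupper():
            run.append(tok)
        elif run:
            entities.append(" ".join(run))
            run = []
    return entities

tokens = tokenize("Watson beat Jeopardy champions in New York")
print(crude_ner(tokens))        # ['Watson', 'Jeopardy', 'New York']
print(crude_stem("answering"))  # answer
```

Sentence-initial capitalization and unknown suffixes obviously break these heuristics, which is exactly why the pre-processing stages above rely on statistical models rather than rules.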
<h3 id="most-qa-systems-have-roughly-3-parts">Most QA systems have roughly 3 parts</h3>
<ol type="1">
<li>Fact extraction
<ol type="1">
<li>Entity Extraction
<ol type="1">
<li><a href="mds/named-entity-recognition.md">Named-Entity Recognition (NER)</a></li>
</ol></li>
<li><a href="mds/relation-extraction.md">Relation Extraction</a></li>
</ol></li>
<li>Understanding the question</li>
<li>Generating an answer</li>
</ol>
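The three stages above can be sketched as a minimal pipeline. Everything in this snippet is a toy stand-in: the function names, the pattern rule, and the single-triple fact store are invented for illustration and bear no resemblance to a production system.

```python
# Minimal sketch of a 3-stage QA pipeline: fact extraction,
# question understanding, answer generation. Illustrative only.

def extract_facts(corpus):
    """1. Fact extraction: index (subject, relation) -> object triples.
    A real system would run NER and relation extraction here."""
    facts = {}
    for subj, rel, obj in corpus:
        facts[(subj, rel)] = obj
    return facts

def understand_question(question):
    """2. Question understanding: map a question pattern to a query.
    Toy rule: 'Who founded X?' -> (X, 'founded_by')."""
    words = question.rstrip("?").split()
    if words[:2] == ["Who", "founded"]:
        return (" ".join(words[2:]), "founded_by")
    raise ValueError("unsupported question pattern")

def generate_answer(facts, query):
    """3. Answer generation: look the query up in the fact store."""
    obj = facts.get(query)
    return obj if obj is not None else "I don't know."

corpus = [("IBM", "founded_by", "Charles Ranlett Flint")]
facts = extract_facts(corpus)
print(generate_answer(facts, understand_question("Who founded IBM?")))
# Charles Ranlett Flint
```

Modern neural systems blur these boundaries (a reader model does stages 2 and 3 jointly), but the decomposition is still a useful mental model.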
<h2 id="events">Events</h2>
<ul>
<li>Wolfram Alpha launched its answer engine in 2009.</li>
<li>The IBM Watson system defeated top <em><a href="https://www.jeopardy.com">Jeopardy!</a></em> champions in 2011.</li>
<li>Apple’s Siri integrated Wolfram Alpha’s answer engine in 2011.</li>
<li>Google embraced QA by launching its Knowledge Graph, leveraging the Freebase knowledge base, in 2012.</li>
<li>Amazon Echo | Alexa (2015), Google Home | Google Assistant (2016), INVOKE | MS Cortana (2017), HomePod (2017)</li>
</ul>
<h2 id="systems">Systems</h2>
<ul>
<li><a href="https://www.ibm.com/watson/">IBM Watson</a> - Has state-of-the-art performance.</li>
<li><a href="https://research.fb.com/downloads/drqa/">Facebook DrQA</a> - Applied to the SQuAD1.0 dataset. SQuAD2.0 has since been released, but DrQA has not yet been evaluated on it.</li>
<li><a href="http://conceptnet.io/">MIT Media Lab’s knowledge graph (ConceptNet)</a> - A freely available semantic network, designed to help computers understand the meanings of the words that people use.</li>
</ul>
<h2 id="competitions-in-qa">Competitions in QA</h2>
<table>
<colgroup>
<col style="width: 2%" />
<col style="width: 12%" />
<col style="width: 10%" />
<col style="width: 14%" />
<col style="width: 4%" />
<col style="width: 17%" />
<col style="width: 17%" />
<col style="width: 5%" />
<col style="width: 16%" />
</colgroup>
<thead>
<tr class="header">
<th></th>
<th>Dataset</th>
<th>Language</th>
<th>Organizer</th>
<th>Since</th>
<th>Top Rank</th>
<th>Model</th>
<th>Status</th>
<th>Over Human Performance</th>
</tr>
</thead>
<tbody>
<tr class="odd"><td>0</td><td><a href="http://cs.rochester.edu/~nasrinm/files/Papers/lsdsem17-shared-task.pdf">Story Cloze Test</a></td><td>English</td><td>Univ. of Rochester</td><td>2016</td><td>msap</td><td>Logistic regression</td><td>Closed</td><td>x</td></tr>
<tr class="even"><td>1</td><td>MS MARCO</td><td>English</td><td>Microsoft</td><td>2016</td><td>YUANFUDAO research NLP</td><td>MARS</td><td>Closed</td><td>o</td></tr>
<tr class="odd"><td>2</td><td>MS MARCO V2</td><td>English</td><td>Microsoft</td><td>2018</td><td>NTT Media Intelli. Lab.</td><td>Masque Q&amp;A Style</td><td>Open</td><td>x</td></tr>
<tr class="even"><td>3</td><td><a href="https://arxiv.org/abs/1606.05250">SQuAD</a></td><td>English</td><td>Univ. of Stanford</td><td>2018</td><td>XLNet Team</td><td>XLNet (single model)</td><td>Closed</td><td>o</td></tr>
<tr class="odd"><td>4</td><td><a href="https://rajpurkar.github.io/SQuAD-explorer/">SQuAD 2.0</a></td><td>English</td><td>Univ. of Stanford</td><td>2018</td><td>PINGAN Omni-Sinitic</td><td>ALBERT + DAAF + Verifier (ensemble)</td><td>Open</td><td>o</td></tr>
<tr class="even"><td>5</td><td><a href="http://nlp.cs.washington.edu/triviaqa/">TriviaQA</a></td><td>English</td><td>Univ. of Washington</td><td>2017</td><td>Ming Yan</td><td>-</td><td>Closed</td><td>-</td></tr>
<tr class="odd"><td>6</td><td><a href="https://decanlp.com/">decaNLP</a></td><td>English</td><td>Salesforce Research</td><td>2018</td><td>Salesforce Research</td><td>MQAN</td><td>Closed</td><td>x</td></tr>
<tr class="even"><td>7</td><td><a href="https://ai.baidu.com/broad/introduction">DuReader Ver1.</a></td><td>Chinese</td><td>Baidu</td><td>2015</td><td>Tryer</td><td>T-Reader (single)</td><td>Closed</td><td>x</td></tr>
<tr class="odd"><td>8</td><td><a href="https://ai.baidu.com/broad/introduction">DuReader Ver2.</a></td><td>Chinese</td><td>Baidu</td><td>2017</td><td>renaissance</td><td>AliReader</td><td>Open</td><td>-</td></tr>
<tr class="even"><td>9</td><td><a href="https://korquad.github.io/KorQuad%201.0/">KorQuAD</a></td><td>Korean</td><td>LG CNS AI Research</td><td>2018</td><td>Clova AI LaRva Team</td><td>LaRva-Kor-Large+ + CLaF (single)</td><td>Closed</td><td>o</td></tr>
<tr class="odd"><td>10</td><td><a href="https://korquad.github.io/">KorQuAD 2.0</a></td><td>Korean</td><td>LG CNS AI Research</td><td>2019</td><td>Kangwon National University</td><td>KNU-baseline (single model)</td><td>Open</td><td>x</td></tr>
<tr class="even"><td>11</td><td><a href="https://stanfordnlp.github.io/coqa/">CoQA</a></td><td>English</td><td>Univ. of Stanford</td><td>2018</td><td>Zhuiyi Technology</td><td>RoBERTa + AT + KD (ensemble)</td><td>Open</td><td>o</td></tr>
</tbody>
</table>
<h2 id="publications">Publications</h2>
<ul>
<li>Papers
<ul>
<li><dl>
<dt><a href="https://arxiv.org/pdf/1704.06877.pdf">“Learning to Skim Text”</a>, Adams Wei Yu, Hongrae Lee, Quoc V. Le, 2017.</dt>
<dd>Show only what you want in text</dd>
</dl></li>
<li><a href="https://arxiv.org/pdf/1704.04920.pdf">“Deep Joint Entity Disambiguation with Local Neural Attention”</a>, Octavian-Eugen Ganea and Thomas Hofmann, 2017.</li>
<li><a href="https://arxiv.org/pdf/1611.01603.pdf">“Bi-Directional Attention Flow for Machine Comprehension”</a>, Minjoon Seo, Aniruddha Kembhavi, Ali Farhadi, Hannaneh Hajishirzi, ICLR, 2017.</li>
<li><a href="http://nlp.cs.berkeley.edu/pubs/FrancisLandau-Durrett-Klein_2016_EntityConvnets_paper.pdf">“Capturing Semantic Similarity for Entity Linking with Convolutional Neural Networks”</a>, Matthew Francis-Landau, Greg Durrett and Dan Klein, NAACL-HLT 2016.
<ul>
<li>https://GitHub.com/matthewfl/nlp-entity-convnet</li>
</ul></li>
<li><a href="https://ieeexplore.ieee.org/document/6823700/">“Entity Linking with a Knowledge Base: Issues, Techniques, and Solutions”</a>, Wei Shen, Jianyong Wang, Jiawei Han, IEEE Transactions on Knowledge and Data Engineering (TKDE), 2014.</li>
<li><a href="https://ieeexplore.ieee.org/document/6177724/">“Introduction to ‘This is Watson’”</a>, D. A. Ferrucci, IBM Journal of Research and Development, 2012.</li>
<li><a href="https://www.sciencedirect.com/science/article/pii/S0020025511003860">“A survey on question answering technology from an information retrieval perspective”</a>, Information Sciences, 2011.</li>
<li><a href="https://www.mitpressjournals.org/doi/abs/10.1162/coli.2007.33.1.41">“Question Answering in Restricted Domains: An Overview”</a>, Diego Mollá and José Luis Vicedo, Computational Linguistics, 2007.</li>
<li>“Natural language question answering: the view from here”, L. Hirschman and R. Gaizauskas, Natural Language Engineering, 2001.</li>
<li>Entity Disambiguation / Entity Linking</li>
</ul></li>
</ul>
<h2 id="codes">Codes</h2>
<ul>
<li><a href="https://github.com/allenai/bi-att-flow">BiDAF</a> - The Bi-Directional Attention Flow (BiDAF) network is a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
<ul>
<li>Official; Tensorflow v1.2</li>
<li><a href="https://arxiv.org/pdf/1611.01603.pdf">Paper</a></li>
</ul></li>
<li><a href="https://github.com/NLPLearn/QANet">QANet</a> - A Q&amp;A architecture that does not require recurrent networks: its encoder consists exclusively of convolution and self-attention, where convolution models local interactions and self-attention models global interactions.
<ul>
<li>Google; Unofficial; Tensorflow v1.5</li>
<li><a href="#qanet">Paper</a></li>
</ul></li>
<li><a href="https://github.com/HKUST-KnowComp/R-Net">R-Net</a> - An end-to-end neural network model for reading-comprehension-style question answering, which aims to answer questions from a given passage.
<ul>
<li>MS; Unofficially by HKUST; Tensorflow v1.5</li>
<li><a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf">Paper</a></li>
</ul></li>
<li><a href="https://github.com/YerevaNN/R-NET-in-Keras">R-Net-in-Keras</a> - R-NET re-implementation in Keras.
<ul>
<li>MS; Unofficial; Keras v2.0.6</li>
<li><a href="https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/r-net.pdf">Paper</a></li>
</ul></li>
<li><a href="https://github.com/hitvoice/DrQA">DrQA</a> - DrQA is a system for reading comprehension applied to open-domain question answering.
<ul>
<li>Facebook; Official; Pytorch v0.4</li>
<li><a href="#drqa">Paper</a></li>
</ul></li>
<li><a href="https://github.com/google-research/bert">BERT</a> - A new language representation model which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations by jointly conditioning on both left and right context in all layers.
<ul>
<li>Google; Official implementation; Tensorflow v1.11.0</li>
<li><a href="https://arxiv.org/abs/1810.04805">Paper</a></li>
</ul></li>
</ul>
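The extractive readers listed above (BiDAF, R-Net, BERT-based models) share a common decoding step: the model scores every context token as a possible answer start and end, and decoding picks the span (i, j) with i ≤ j that maximizes start[i] + end[j]. The sketch below shows just that decoding step in plain Python; the scores are made up for illustration, since producing them is the model's job.

```python
# Span-selection decoding used by extractive readers: pick the span
# (i, j), i <= j < i + max_len, that maximizes start[i] + end[j].

def best_span(start_scores, end_scores, max_len=15):
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            if s + end_scores[j] > best_score:
                best_score = s + end_scores[j]
                best = (i, j)
    return best

context = "BERT was introduced by researchers at Google in 2018".split()
# Made-up per-token scores; a real model emits these from its final layer.
start = [0.1, 0.0, 0.2, 0.1, 0.3, 0.2, 2.1, 0.4, 0.9]  # peak at "Google"
end   = [0.0, 0.1, 0.1, 0.2, 0.1, 0.3, 1.8, 0.2, 0.5]
i, j = best_span(start, end)
print(" ".join(context[i:j + 1]))  # Google
```

Production implementations vectorize this search and add tweaks (n-best spans, a no-answer score for SQuAD2.0-style abstention), but the core argmax over valid (start, end) pairs is the same.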
<h2 id="lectures">Lectures</h2>
<ul>
<li><a href="https://youtu.be/Kzi6tE4JaGo">Question Answering - Natural Language Processing</a> - By Dragomir Radev, Ph.D. | University of Michigan | 2016.</li>
</ul>
<h2 id="slides">Slides</h2>
<ul>
<li><a href="https://github.com/scottyih/Slides/blob/master/QA%20Tutorial.pdf">Question Answering with Knowledge Bases, Web and Beyond</a> - By Scott Wen-tau Yih &amp; Hao Ma | Microsoft Research | 2016.</li>
<li><a href="https://hpi.de/fileadmin/user_upload/fachgebiete/plattner/teaching/NaturalLanguageProcessing/NLP2017/NLP8_QuestionAnswering.pdf">Question Answering</a> - By Dr. Mariana Neves | Hasso Plattner Institut | 2017.</li>
</ul>
<h2 id="dataset-collections">Dataset Collections</h2>
<ul>
<li><a href="https://github.com/dice-group/NLIWOD/tree/master/qa.datasets">NLIWOD’s Question answering datasets</a></li>
<li><a href="https://github.com/karthikncode/nlp-datasets">karthikncode’s Datasets for Natural Language Processing</a></li>
</ul>
<h2 id="datasets">Datasets</h2>
<ul>
<li><a href="http://data.allenai.org/ai2-science-questions/">AI2 Science Questions v2.1 (2017)</a>
<ul>
<li>Consists of questions used in student assessments in the United States across elementary and middle school grade levels. Each question is in 4-way multiple-choice format and may or may not include a diagram element.</li>
<li>Paper: http://ai2-website.s3.amazonaws.com/publications/AI2ReasoningChallenge2018.pdf</li>
</ul></li>
<li><a href="https://uclmr.github.io/ai4exams/data.html">Children’s Book Test</a>
<ul>
<li>Part of Facebook AI Research’s bAbI project, which is organized toward the goal of automatic text understanding and reasoning. The CBT is designed to measure directly how well language models can exploit wider linguistic context.</li>
</ul></li>
<li><a href="https://github.com/Websail-NU/CODAH">CODAH Dataset</a></li>
<li><a href="https://github.com/deepmind/rc-data">DeepMind Q&amp;A Dataset; CNN/Daily Mail</a>
<ul>
<li>Hermann et al. (2015) created two awesome datasets using news articles for Q&amp;A research. Each dataset contains many documents (90k and 197k respectively), and each document is accompanied by roughly 4 questions on average. Each question is a sentence with one missing word/phrase which can be found in the accompanying document/context.</li>
<li>Paper: https://arxiv.org/abs/1506.03340</li>
</ul></li>
<li><a href="https://github.com/facebookresearch/ELI5">ELI5</a>
<ul>
<li>Paper: https://arxiv.org/abs/1907.09190</li>
</ul></li>
<li><a href="https://github.com/ysu1989/GraphQuestions">GraphQuestions</a>
<ul>
<li>On generating characteristic-rich question sets for QA evaluation.</li>
</ul></li>
<li><a href="http://sda.cs.uni-bonn.de/projects/qa-dataset/">LC-QuAD</a>
<ul>
<li>A gold-standard KBQA (Question Answering over Knowledge Base) dataset containing 5,000 questions with their SPARQL queries. LC-QuAD uses DBpedia v04.16 as the target KB.</li>
</ul></li>
<li><a href="http://www.msmarco.org/dataset.aspx">MS MARCO</a>
<ul>
<li>A dataset for real-world question answering.</li>
<li>Paper: https://arxiv.org/abs/1611.09268</li>
</ul></li>
<li><a href="https://cogcomp.org/multirc/">MultiRC</a>
<ul>
<li>A dataset of short paragraphs and multi-sentence questions.</li>
<li>Paper: http://cogcomp.org/page/publication_view/833</li>
</ul></li>
<li><a href="https://github.com/deepmind/narrativeqa">NarrativeQA</a>
<ul>
<li>Includes a list of documents with Wikipedia summaries, links to full stories, and questions and answers.</li>
<li>Paper: https://arxiv.org/pdf/1712.07040v1.pdf</li>
</ul></li>
<li><a href="https://github.com/Maluuba/newsqa">NewsQA</a>
<ul>
<li>A machine comprehension dataset.</li>
<li>Paper: https://arxiv.org/pdf/1611.09830.pdf</li>
</ul></li>
<li><a href="http://www.cs.cmu.edu/~ark/QA-data/">Question-Answer Dataset by CMU</a>
<ul>
<li>A corpus of Wikipedia articles, manually generated factoid questions from them, and manually generated answers to these questions, for use in academic research. The data were collected by Noah Smith, Michael Heilman, Rebecca Hwa, Shay Cohen, Kevin Gimpel, and many students at Carnegie Mellon University and the University of Pittsburgh between 2008 and 2010.</li>
</ul></li>
<li><a href="https://rajpurkar.github.io/SQuAD-explorer/">SQuAD1.0</a>
<ul>
<li>The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.</li>
<li>Paper: https://arxiv.org/abs/1606.05250</li>
</ul></li>
<li><a href="https://rajpurkar.github.io/SQuAD-explorer/">SQuAD2.0</a>
<ul>
<li>SQuAD2.0 combines the 100,000 questions in SQuAD1.1 with over 50,000 new, unanswerable questions written adversarially by crowdworkers to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but also determine when no answer is supported by the paragraph and abstain from answering.</li>
<li>Paper: https://arxiv.org/abs/1806.03822</li>
</ul></li>
<li><a href="http://cs.rochester.edu/nlp/rocstories/">Story Cloze Test</a>
<ul>
<li>The ‘Story Cloze Test’ is a commonsense reasoning framework for evaluating story understanding, story generation, and script learning. The test requires a system to choose the correct ending to a four-sentence story.</li>
<li>Paper: https://arxiv.org/abs/1604.01696</li>
</ul></li>
<li><a href="http://nlp.cs.washington.edu/triviaqa/">TriviaQA</a>
<ul>
<li>A reading comprehension dataset containing over 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, that provide high-quality distant supervision for answering the questions.</li>
<li>Paper: https://arxiv.org/abs/1705.03551</li>
</ul></li>
<li><a href="https://www.microsoft.com/en-us/download/details.aspx?id=52419&amp;from=https%3A%2F%2Fresearch.microsoft.com%2Fen-US%2Fdownloads%2F4495da01-db8c-4041-a7f6-7984a4f6a905%2Fdefault.aspx">WikiQA</a>
<ul>
<li>A publicly available set of question and sentence pairs for open-domain question answering.</li>
</ul></li>
</ul>
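The SQuAD-style datasets above are scored with exact-match (EM) and token-level F1 between the predicted and gold answer strings, after normalizing away case, punctuation, and English articles. The sketch below is a simplified version of that scoring logic, not the official evaluation script.

```python
# Simplified SQuAD-style answer scoring: exact match and token F1
# after normalization (lowercase, strip punctuation/articles).
import re
import string
from collections import Counter

def normalize(text):
    """Lowercase, drop punctuation, remove a/an/the, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(pred, gold):
    return normalize(pred) == normalize(gold)

def f1(pred, gold):
    pred_toks, gold_toks = normalize(pred).split(), normalize(gold).split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))              # True
print(round(f1("Eiffel Tower in Paris", "the Eiffel Tower"), 2))   # 0.67
```

The official scripts additionally take the max over multiple gold answers per question and, for SQuAD2.0, treat unanswerable questions as correct only when the prediction is empty.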
<h3 id="the-deepqa-research-team-in-ibm-watsons-publication-within-5-years">Publications by the DeepQA Research Team (IBM Watson) within 5 years</h3>
<ul>
<li>2015
<ul>
<li>“Automated Problem List Generation from Electronic Medical Records in IBM Watson”, Murthy Devarakonda, Ching-Huei Tsou, IAAI, 2015.</li>
<li>“Decision Making in IBM Watson Question Answering”, J. William Murdock, Ontology Summit, 2015.</li>
<li><a href="http://www.cogsys.org/papers/ACS2015/article12.pdf">“Unsupervised Entity-Relation Analysis in IBM Watson”</a>, Aditya Kalyanpur, J. William Murdock, ACS, 2015.</li>
<li>“Commonsense Reasoning: An Event Calculus Based Approach”, E. T. Mueller, Morgan Kaufmann/Elsevier, 2015.</li>
</ul></li>
<li>2014
<ul>
<li>“Problem-oriented patient record summary: An early report on a Watson application”, M. Devarakonda, Dongyang Zhang, Ching-Huei Tsou, M. Bornea, Healthcom, 2014.</li>
<li><a href="http://domino.watson.ibm.com/library/Cyberdig.nsf/1e4115aea78b6e7c85256b360066f0d4/088f74984a07645485257d5f006ace96!OpenDocument&amp;Highlight=0,RC25489">“WatsonPaths: Scenario-based Question Answering and Inference over Unstructured Information”</a>, Adam Lally, Sugato Bachi, Michael A. Barborak, David W. Buchanan, Jennifer Chu-Carroll, David A. Ferrucci, Michael R. Glass, Aditya Kalyanpur, Erik T. Mueller, J. William Murdock, Siddharth Patwardhan, John M. Prager, Christopher A. Welty, IBM Research Report RC25489, 2014.</li>
<li><a href="http://acl2014.org/acl2014/P14-1/pdf/P14-1078.pdf">“Medical Relation Extraction with Manifold Models”</a>, Chang Wang and James Fan, ACL, 2014.</li>
</ul></li>
</ul>
<h3 id="ms-researchs-publication-within-5-years">MS Research’s
publications within 5 years</h3>
<ul>
<li>2018
<ul>
<li>“Characterizing and Supporting Question Answering in Human-to-Human
Communication”, Xiao Yang, Ahmed Hassan Awadallah, Madian Khabsa, Wei
Wang, Miaosen Wang, ACM SIGIR, 2018.</li>
<li><a href="https://arxiv.org/abs/1710.07300">“FigureQA: An Annotated
Figure Dataset for Visual Reasoning”</a>, Samira Ebrahimi Kahou, Vincent
Michalski, Adam Atkinson, Akos Kadar, Adam Trischler, Yoshua Bengio,
ICLR, 2018.</li>
</ul></li>
<li>2017
<ul>
<li>“Multi-level Attention Networks for Visual Question Answering”,
Dongfei Yu, Jianlong Fu, Tao Mei, Yong Rui, CVPR, 2017.</li>
<li>“A Joint Model for Question Answering and Question Generation”, Tong
Wang, Xingdi (Eric) Yuan, Adam Trischler, ICML, 2017.</li>
<li>“Two-Stage Synthesis Networks for Transfer Learning in Machine
Comprehension”, David Golub, Po-Sen Huang, Xiaodong He, Li Deng, EMNLP,
2017.</li>
<li>“Question-Answering with Grammatically-Interpretable
Representations”, Hamid Palangi, Paul Smolensky, Xiaodong He, Li Deng,
2017.</li>
<li>“Search-based Neural Structured Learning for Sequential Question
Answering”, Mohit Iyyer, Wen-tau Yih, Ming-Wei Chang, ACL, 2017.</li>
</ul></li>
<li>2016
<ul>
<li><a
href="https://www.cv-foundation.org/openaccess/content_cvpr_2016/html/Yang_Stacked_Attention_Networks_CVPR_2016_paper.html">“Stacked
Attention Networks for Image Question Answering”</a>, Zichao Yang,
Xiaodong He, Jianfeng Gao, Li Deng, Alex Smola, CVPR, 2016.</li>
<li><a
href="https://www.microsoft.com/en-us/research/publication/question-answering-with-knowledge-base-web-and-beyond/">“Question
Answering with Knowledge Base, Web and Beyond”</a>, Scott Wen-tau Yih
and Hao Ma, ACM SIGIR, 2016.</li>
<li><a href="https://arxiv.org/abs/1611.09830">“NewsQA: A Machine
Comprehension Dataset”</a>, Adam Trischler, Tong Wang, Xingdi Yuan,
Justin Harris, Alessandro Sordoni, Philip Bachman, Kaheer Suleman,
RepL4NLP, 2016.</li>
<li><a href="https://dl.acm.org/citation.cfm?id=2883080">“Table Cell
Search for Question Answering”</a>, Huan Sun, Hao Ma, Xiaodong He,
Wen-tau Yih, Yu Su, Xifeng Yan, WWW, 2016.</li>
</ul></li>
<li>2015
<ul>
<li><a
href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/YangYihMeek_EMNLP-15_WikiQA.pdf">“WIKIQA:
A Challenge Dataset for Open-Domain Question Answering”</a>, Yi Yang,
Wen-tau Yih, and Christopher Meek, EMNLP, 2015.</li>
<li><a
href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/AskMSRPlusTR_082815.pdf">“Web-based
Question Answering: Revisiting AskMSR”</a>, Chen-Tse Tsai, Wen-tau Yih,
and Christopher J.C. Burges, MSR-TR, 2015.</li>
<li><a href="https://dl.acm.org/citation.cfm?id=2741651">“Open Domain
Question Answering via Semantic Enrichment”</a>, Huan Sun, Hao Ma,
Wen-tau Yih, Chen-Tse Tsai, Jingjing Liu, and Ming-Wei Chang, WWW,
2015.</li>
</ul></li>
<li>2014
<ul>
<li><a
href="https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/Microsoft20Deep20QA.pdf">“An
Overview of Microsoft Deep QA System on Stanford WebQuestions
Benchmark”</a>, Zhenghao Wang, Shengquan Yan, Huaming Wang, and Xuedong
Huang, MSR-TR, 2014.</li>
<li>“Semantic Parsing for Single-Relation Question Answering”, Wen-tau
Yih, Xiaodong He, Christopher Meek, ACL, 2014.</li>
</ul></li>
</ul>
<h3 id="google-ais-publication-within-5-years">Google AI’s publications
within 5 years</h3>
<ul>
<li>2018
<ul>
<li>Google QA <a name="qanet"></a>
<ul>
<li><a href="https://openreview.net/pdf?id=B14TlG-RW">“QANet: Combining
Local Convolution with Global Self-Attention for Reading
Comprehension”</a>, Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui
Zhao, Kai Chen, Mohammad Norouzi, Quoc V. Le, ICLR, 2018.</li>
<li><a href="https://openreview.net/pdf?id=S1CChZ-CZ">“Ask the Right
Questions: Active Question Reformulation with Reinforcement
Learning”</a>, Christian Buck, Jannis Bulian, Massimiliano Ciaramita,
Wojciech Paweł Gajewski, Andrea Gesmundo, Neil Houlsby, and Wei Wang,
ICLR, 2018.</li>
<li><a href="https://arxiv.org/pdf/1612.04342.pdf">“Building Large
Machine Reading-Comprehension Datasets using Paragraph Vectors”</a>,
Radu Soricut, Nan Ding, 2018.</li>
</ul></li>
<li>Sentence representation
<ul>
<li><a href="https://arxiv.org/pdf/1803.02893.pdf">“An efficient
framework for learning sentence representations”</a>, Lajanugen
Logeswaran, Honglak Lee, ICLR, 2018.</li>
</ul></li>
<li><a href="https://arxiv.org/pdf/1805.05492.pdf">“Did the model
understand the question?”</a>, Pramod K. Mudrakarta, Ankur Taly, Mukund
Sundararajan, and Kedar Dhamdhere, ACL, 2018.</li>
</ul></li>
<li>2017
<ul>
<li><a href="https://arxiv.org/pdf/1801.07537.pdf">“Analyzing Language
Learned by an Active Question Answering Agent”</a>, Christian Buck,
Jannis Bulian, Massimiliano Ciaramita, Wojciech Gajewski, Andrea
Gesmundo, Neil Houlsby, and Wei Wang, NIPS, 2017.</li>
<li><a href="https://arxiv.org/pdf/1611.01436.pdf">“Learning Recurrent
Span Representations for Extractive Question Answering”</a>, Kenton Lee,
Shimi Salant, Tom Kwiatkowski, Ankur Parikh, Dipanjan Das, and Jonathan
Berant, ICLR, 2017.</li>
<li>Identify the same question
<ul>
<li><a href="https://arxiv.org/pdf/1704.04565.pdf">“Neural Paraphrase
Identification of Questions with Noisy Pretraining”</a>, Gaurav Singh
Tomar, Thyago Duque, Oscar Täckström, Jakob Uszkoreit, and Dipanjan Das,
SCLeM, 2017.</li>
</ul></li>
</ul></li>
<li>2014
<ul>
<li>“Great Question! Question Quality in Community Q&amp;A”, Sujith
Ravi, Bo Pang, Vibhor Rastogi, and Ravi Kumar, ICWSM, 2014.</li>
</ul></li>
</ul>
<h3 id="facebook-ai-researchs-publication-within-5-years">Facebook AI
Research’s publications within 5 years</h3>
<ul>
<li>2018
<ul>
<li><a
href="https://research.fb.com/publications/embodied-question-answering/">“Embodied
Question Answering”</a>, Abhishek Das, Samyak Datta, Georgia Gkioxari,
Stefan Lee, Devi Parikh, and Dhruv Batra, CVPR, 2018.</li>
<li><a
href="https://research.fb.com/publications/do-explanations-make-vqa-models-more-predictable-to-a-human/">“Do
explanations make VQA models more predictable to a human?”</a>, Arjun
Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay,
and Devi Parikh, EMNLP, 2018.</li>
<li><a
href="https://research.fb.com/publications/neural-compositional-denotational-semantics-for-question-answering/">“Neural
Compositional Denotational Semantics for Question Answering”</a>, Nitish
Gupta, Mike Lewis, EMNLP, 2018.</li>
</ul></li>
<li>2017
<ul>
<li>DrQA <a name="drqa"></a>
<ul>
<li><a
href="https://cs.stanford.edu/people/danqi/papers/acl2017.pdf">“Reading
Wikipedia to Answer Open-Domain Questions”</a>, Danqi Chen, Adam Fisch,
Jason Weston, and Antoine Bordes, ACL, 2017.</li>
</ul></li>
</ul></li>
</ul>
<h2 id="books">Books</h2>
<ul>
<li>Natural Language Question Answering System - Boris Galitsky
(2003)</li>
<li>New Directions in Question Answering - Mark T. Maybury (2004)</li>
<li>Part 3, Chapter 5, “Question Answering” in The Oxford Handbook of
Computational Linguistics - Sanda Harabagiu and Dan Moldovan (2005)</li>
<li>Chapter 28, “Question Answering” in Speech and Language Processing -
Daniel Jurafsky and James H. Martin (2017)</li>
</ul>
<h2 id="links">Links</h2>
<ul>
<li><a
href="https://towardsdatascience.com/building-a-question-answering-system-part-1-9388aadff507">Building
a Question-Answering System from Scratch, Part 1</a></li>
<li><a
href="https://www.oreilly.com/ideas/question-answering-with-tensorflow">Question
Answering with TensorFlow, by Steven Hewitt, O’Reilly, 2017</a></li>
<li><a
href="http://nicklothian.com/blog/2014/09/25/why-question-answering-is-hard/">Why
question answering is hard</a></li>
</ul>
<h2 id="contributing">Contributing</h2>
<p>Contributions welcome! Read the <a
href="contributing.md">contribution guidelines</a> first.</p>
<h2 id="license">License</h2>
<p><a
href="https://creativecommons.org/share-your-work/public-domain/cc0/"><img
src="http://mirrors.creativecommons.org/presskit/buttons/88x31/svg/cc-zero.svg"
alt="CC0" /></a></p>
<p>To the extent possible under law, <a
href="https://github.com/seriousmac">seriousmac</a> (the maintainer) has
waived all copyright and related or neighboring rights to this work.</p>