Question answering is the task of automatically producing an answer to a question posed in natural language.
Table of contents
- Reading comprehension
The AI2 Reasoning Challenge (ARC) dataset is a question answering dataset containing 7,787 genuine grade-school level, multiple-choice science questions. The dataset is partitioned into a Challenge Set and an Easy Set. The Challenge Set contains only questions answered incorrectly by both a retrieval-based algorithm and a word co-occurrence algorithm. Models are evaluated based on accuracy.
A public leaderboard is available on the ARC website.
ShARC is a challenging QA dataset that requires logical reasoning, elements of entailment/NLI and natural language generation.
Most work in machine reading focuses on question answering problems where the answer is directly expressed in the text to read. However, many real-world question answering problems require the reading of text not because it contains the literal answer, but because it contains a recipe to derive an answer together with the reader’s background knowledge. We formalise this task and introduce the challenging ShARC dataset with 32k task instances.
The goal is to answer questions by possibly asking follow-up questions first. We assume that the question does not provide enough information to be answered directly. However, a model can use the supporting rule text to infer what needs to be asked in order to determine the final answer. Concretely, the model must decide whether to answer with “Yes”, “No”, “Irrelevant”, or to generate a follow-up question, given rule text, a user scenario and a conversation history. Performance is measured with micro and macro accuracy over the “Yes”/“No”/“Irrelevant”/“More” classifications, and the quality of generated follow-up questions is measured with BLEU.
The public data, further task details and public leaderboard are available on the ShARC Website.
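The classification part of this evaluation distinguishes micro accuracy (overall fraction of correct decisions) from macro accuracy (unweighted average of per-class accuracies). A minimal sketch, using toy labels and an illustrative helper name:

```python
from collections import Counter

LABELS = ["Yes", "No", "Irrelevant", "More"]

def micro_macro_accuracy(gold, pred):
    """Micro accuracy: overall fraction of correct decisions.
    Macro accuracy: average of per-class accuracies, so rare
    classes weigh as much as frequent ones."""
    correct, total = Counter(), Counter()
    for g, p in zip(gold, pred):
        total[g] += 1
        correct[g] += int(g == p)
    micro = sum(correct.values()) / len(gold)
    seen = [c for c in LABELS if total[c]]
    macro = sum(correct[c] / total[c] for c in seen) / len(seen)
    return micro, macro

# Toy gold/predicted decisions for four dialogue turns.
gold = ["Yes", "Yes", "No", "Irrelevant"]
pred = ["Yes", "No", "No", "Irrelevant"]
micro, macro = micro_macro_accuracy(gold, pred)
print(micro, macro)  # 0.75 and ~0.833
```

The two numbers diverge whenever class frequencies are skewed: here the frequent “Yes” class is predicted worse than the rare ones, so micro accuracy is lower than macro accuracy.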
Most current question answering datasets frame the task as reading comprehension where the question is about a paragraph or document and the answer often is a span in the document. The Machine Reading group at UCL also provides an overview of reading comprehension tasks.
The CliCR dataset is a gap-filling reading comprehension dataset consisting of around 100,000 queries and their associated documents. The dataset was built from clinical case reports, requiring the reader to answer the query with a medical problem/test/treatment entity. The abilities to perform bridging inferences and track objects have been found to be the most frequently required skills for successful answering.
The instructions for accessing the dataset, the processing scripts, the baselines and the adaptations of some neural models can be found here.
| Passage | Query | Answer |
| --- | --- | --- |
| We report a case of a 72-year-old Caucasian woman with pl-7 positive antisynthetase syndrome. Clinical presentation included interstitial lung disease, myositis, mechanic’s hands and dysphagia. As lung injury was the main concern, treatment consisted of prednisolone and cyclophosphamide. Complete remission with reversal of pulmonary damage was achieved, as reported by CT scan, pulmonary function tests and functional status. […] | Therefore, in severe cases an aggressive treatment, combining ____ and glucocorticoids as used in systemic vasculitis, is suggested. | cyclophosphamide |
| Model | F1 | Paper / Source |
| --- | --- | --- |
| Gated-Attention Reader (Dhingra et al., 2017) | 33.9 | CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension |
| Stanford Attentive Reader (Chen et al., 2016) | 27.2 | CliCR: A Dataset of Clinical Case Reports for Machine Reading Comprehension |
CNN / Daily Mail
The CNN / Daily Mail dataset is a Cloze-style reading comprehension dataset created from CNN and Daily Mail news articles using heuristics. Cloze-style means that a missing word has to be inferred. In this case, “questions” were created by replacing entities from bullet points summarizing one or several aspects of the article. Coreferent entities have been replaced with an entity marker @entityn where n is a distinct index. The model is tasked to infer the missing entity in the bullet point based on the content of the corresponding article, and models are evaluated based on their accuracy on the test set.
| Passage | Question | Answer |
| --- | --- | --- |
| ( @entity4 ) if you feel a ripple in the force today , it may be the news that the official @entity6 is getting its first gay character . according to the sci-fi website @entity9 , the upcoming novel “ @entity11 “ will feature a capable but flawed @entity13 official named @entity14 who “ also happens to be a lesbian . “ the character is the first gay figure in the official @entity6 – the movies , television shows , comics and books approved by @entity6 franchise owner @entity22 – according to @entity24 , editor of “ @entity6 “ books at @entity28 imprint @entity26 . | characters in “ @placeholder “ movies have gradually become more diverse | @entity6 |
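The entity-anonymisation step described above can be sketched as follows; the helper name, toy article and cluster ids are illustrative, and the real pipeline first resolves coreference over full articles to obtain the clusters:

```python
import re

def anonymise(text, clusters):
    """Replace each entity mention with a distinct @entityN marker,
    as in the CNN / Daily Mail preprocessing. `clusters` maps each
    surface form to its coreference cluster id N."""
    # Replace longer mentions first so that e.g. "Daily Mail"
    # is handled before a shorter overlapping mention would be.
    for mention in sorted(clusters, key=len, reverse=True):
        marker = "@entity%d" % clusters[mention]
        text = re.sub(re.escape(mention), marker, text)
    return text

clusters = {"CNN": 0, "Daily Mail": 1}
article = "CNN and the Daily Mail publish news articles."
print(anonymise(article, clusters))
# @entity0 and the @entity1 publish news articles.
```

Because the same cluster id is used consistently across an article and its bullet points, a model cannot answer from world knowledge about the named entities and must instead read the passage.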
| Model | CNN | Daily Mail | Paper / Source |
| --- | --- | --- | --- |
| GA Reader (Dhingra et al., 2017) | 77.9 | 80.9 | Gated-Attention Readers for Text Comprehension |
| BiDAF (Seo et al., 2017) | 76.9 | 79.6 | Bidirectional Attention Flow for Machine Comprehension |
| AoA Reader (Cui et al., 2017) | 74.4 | - | Attention-over-Attention Neural Networks for Reading Comprehension |
| Neural net (Chen et al., 2016) | 72.4 | 75.8 | A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task |
| Classifier (Chen et al., 2016) | 67.9 | 68.3 | A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task |
| Impatient Reader (Hermann et al., 2015) | 63.8 | 68.0 | Teaching Machines to Read and Comprehend |
CoQA is a large-scale dataset for building Conversational Question Answering systems. CoQA contains 127,000+ questions with answers collected from 8000+ conversations. Each conversation is collected by pairing two crowdworkers to chat about a passage in the form of questions and answers.
The data and public leaderboard are available here.
HotpotQA is a dataset with 113k Wikipedia-based question-answer pairs. Questions require finding and reasoning over multiple supporting documents and are not constrained to any pre-existing knowledge bases. Sentence-level supporting facts are available.
The data and public leaderboard are available from the HotpotQA website.
MS MARCO (Microsoft MAchine Reading COmprehension) is a large-scale reading comprehension and question answering dataset:
- The questions are obtained from real anonymized user queries.
- The answers are human generated. The context passages from which the answers are obtained are extracted from real documents using the latest Bing search engine.
- The dataset contains 100,000 queries, a subset of which have multiple answers; the creators aim to release 1M queries in the future.
The leaderboards for multiple tasks are available on the MS MARCO leaderboard page.
MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions that can be answered from the content of the paragraph. The dataset was designed with three key challenges in mind:
- The number of correct answer-options for each question is not pre-specified. This removes the over-reliance of current approaches on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, unlike previous work, the task here is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.
- The correct answer(s) is not required to be a span in the text.
- The paragraphs in our dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.
The leaderboard for the dataset is available on the MultiRC website.
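Scoring each answer-option independently, as the first challenge above requires, can be sketched with a micro-averaged F1 over all options plus a per-question exact-match rate; the helper name and toy labels below are illustrative:

```python
def multirc_scores(questions):
    """`questions` is a list of (gold, pred) pairs, where gold and
    pred are parallel 0/1 lists with one entry per answer-option.
    Returns micro-averaged F1 over all options and the fraction of
    questions whose entire option set is predicted exactly."""
    tp = fp = fn = 0
    exact = 0
    for gold, pred in questions:
        exact += int(gold == pred)
        for g, p in zip(gold, pred):
            tp += int(g == 1 and p == 1)
            fp += int(g == 0 and p == 1)
            fn += int(g == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, exact / len(questions)

# Two toy questions: the first misses one of its two correct options.
f1, em = multirc_scores([
    ([1, 0, 1, 0], [1, 0, 0, 0]),
    ([0, 1], [0, 1]),
])
print(f1, em)  # 0.8 0.5
```

Note that a system which always picks a single "best" option cannot score well here: questions may have zero, one, or several correct options, so each option must be judged on its own.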
The NewsQA dataset is a reading comprehension dataset of over 100,000 human-generated question-answer pairs from over 10,000 news articles from CNN, with answers consisting of spans of text from the corresponding articles. Some challenging characteristics of this dataset are:
- Answers are spans of arbitrary length;
- Some questions have no answer in the corresponding article;
- There are no candidate answers from which to choose.

Although very similar to the SQuAD dataset, NewsQA offered a greater challenge to existing models at the time of its introduction (e.g., its paragraphs are longer than those in SQuAD). Models are evaluated based on F1 and Exact Match.
| Passage | Question | Answer |
| --- | --- | --- |
| MOSCOW, Russia (CNN) – Russian space officials say the crew of the Soyuz space ship is resting after a rough ride back to Earth. A South Korean bioengineer was one of three people on board the Soyuz capsule. The craft carrying South Korea’s first astronaut landed in northern Kazakhstan on Saturday, 260 miles (418 kilometers) off its mark, they said. Mission Control spokesman Valery Lyndin said the condition of the crew – South Korean bioengineer Yi So-yeon, American astronaut Peggy Whitson and Russian flight engineer Yuri Malenchenko – was satisfactory, though the three had been subjected to severe G-forces during the re-entry. […] | Where did the Soyuz capsule land? | northern Kazakhstan |
The dataset can be downloaded here.
| Model | F1 | EM | Paper / Source |
| --- | --- | --- | --- |
| MINIMAL(Dyn) (Min et al., 2018) | 63.2 | 50.1 | Efficient and Robust Question Answering from Minimal Context over Documents |
| FastQAExt (Weissenborn et al., 2017) | 56.1 | 43.7 | Making Neural QA as Simple as Possible but not Simpler |
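The F1 and Exact Match numbers above follow the SQuAD-style span evaluation: answers are normalised, then compared as token bags (F1) or whole strings (EM). A minimal sketch, with the normalisation simplified from the official evaluation scripts and illustrative helper names:

```python
import re
import string
from collections import Counter

def normalise(s):
    """Lowercase, strip punctuation and English articles,
    and collapse whitespace before comparison."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred, gold):
    """1.0 if the normalised strings are identical, else 0.0."""
    return float(normalise(pred) == normalise(gold))

def token_f1(pred, gold):
    """Bag-of-tokens F1 between predicted and gold answer spans."""
    p, g = normalise(pred).split(), normalise(gold).split()
    overlap = sum((Counter(p) & Counter(g)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p), overlap / len(g)
    return 2 * precision * recall / (precision + recall)

# A near-miss span still earns partial F1 credit but no EM credit.
print(token_f1("in northern Kazakhstan", "northern Kazakhstan"))  # 0.8
print(exact_match("in northern Kazakhstan", "northern Kazakhstan"))  # 0.0
```

Partial credit via F1 matters for datasets like NewsQA, where answer spans have arbitrary length and annotators may disagree on exact boundaries.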
QAngaroo is a set of two reading comprehension datasets that require multiple steps of inference combining facts from multiple documents. The first dataset, WikiHop, is open-domain and focuses on Wikipedia articles; the second, MedHop, is based on paper abstracts from PubMed.
The leaderboards for both datasets are available on the QAngaroo website.
Question Answering in Context (QuAC) is a dataset for modeling, understanding, and participating in information seeking dialog. Data instances consist of an interactive dialog between two crowd workers: (1) a student who poses a sequence of freeform questions to learn as much as possible about a hidden Wikipedia text, and (2) a teacher who answers the questions by providing short excerpts (spans) from the text.
The leaderboard and data are available on the QuAC website.
The RACE dataset is a reading comprehension dataset collected from English examinations in China, which are designed for middle school and high school students. The dataset contains more than 28,000 passages and nearly 100,000 questions and can be downloaded here. Models are evaluated based on accuracy on middle school examinations (RACE-m), high school examinations (RACE-h), and on the total dataset (RACE).
| Model | RACE-m | RACE-h | RACE | Paper / Source |
| --- | --- | --- | --- | --- |
| Finetuned Transformer LM (Radford et al., 2018) | 62.9 | 57.4 | 59.0 | Improving Language Understanding by Generative Pre-Training |
| BiAttention MRU (Tay et al., 2018) | 60.2 | 50.3 | 53.3 | Multi-range Reasoning for Machine Comprehension |
The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles. The answer to every question is a segment of text (a span) from the corresponding reading passage. Recently, SQuAD 2.0 has been released, which includes unanswerable questions.
The public leaderboard is available on the SQuAD website.
Story Cloze Test
The Story Cloze Test is a dataset for story understanding that provides systems with four-sentence stories and two possible endings. The systems must then choose the correct ending to the story.
| Model | Accuracy | Paper / Source | Code |
| --- | --- | --- | --- |
| Finetuned Transformer LM (Radford et al., 2018) | 86.5 | Improving Language Understanding by Generative Pre-Training | |
| Liu et al. (2018) | 78.7 | Narrative Modeling with Memory Chains and Semantic Supervision | Official |
| Hidden Coherence Model (Chaturvedi et al., 2017) | 77.6 | Story Comprehension for Predicting What Happens Next | |
| val-LS-skip (Srinivasan et al., 2018) | 76.5 | A Simple and Effective Approach to the Story Cloze Test | |
RecipeQA is a dataset for multimodal comprehension of cooking recipes. It consists of over 36K question-answer pairs automatically generated from approximately 20K unique recipes with step-by-step instructions and images. Each question in RecipeQA involves multiple modalities such as titles, descriptions or images, and working towards an answer requires (i) joint understanding of images and text, (ii) capturing the temporal flow of events, and (iii) making sense of procedural knowledge.
The public leaderboard is available on the RecipeQA website.
DuReader has three advantages over other MRC datasets:
- (1) data sources: questions and documents are based on Baidu Search and Baidu Zhidao; answers are manually generated.
- (2) question types: it provides rich annotations for more question types, especially yes-no and opinion questions, which leaves more opportunity for the research community.
- (3) scale: it contains 300K questions, 660K answers and 1.5M documents; it is the largest Chinese MRC dataset so far.
The leaderboard is available on the DuReader page.