README.md
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting
of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every
question is a segment of text, or span, from the corresponding reading passage, or the question
might be unanswerable.
SQuAD 2.0 combines the 100,000 questions in SQuAD 1.1 with over 50,000
unanswerable questions written adversarially by crowdworkers to look similar to answerable
ones. To do well on SQuAD 2.0, systems must not only answer questions when possible, but also
determine when no answer is supported by the paragraph and abstain from answering.
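To make the released format concrete, here is a minimal Python sketch that walks the official SQuAD 2.0 JSON layout (articles → paragraphs → qas, where each question carries an `is_impossible` flag and unanswerable questions have an empty answer list) and counts answerable versus unanswerable questions. The file name `dev-v2.0.json` is simply the conventional name of the development-split download; adjust the path to wherever you stored the file.

```python
import json
from collections import Counter

def load_squad_v2(path):
    """Yield (context, question, answers, is_impossible) tuples from a SQuAD 2.0 JSON file."""
    with open(path, encoding="utf-8") as f:
        squad = json.load(f)
    for article in squad["data"]:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                # Unanswerable questions are marked is_impossible=True and carry no gold answers.
                yield context, qa["question"], qa["answers"], qa.get("is_impossible", False)

if __name__ == "__main__":
    counts = Counter()
    for _, _, _, impossible in load_squad_v2("dev-v2.0.json"):
        counts["unanswerable" if impossible else "answerable"] += 1
    print(counts)
```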
Data Collection
We employed crowd workers on the Daemo crowd-sourcing platform to write unanswerable questions.
Each task consisted of an entire article from SQuAD 1.1. For each paragraph in the article,
workers were asked to pose up to five questions that were impossible to answer based on the
paragraph alone, while referencing entities in the paragraph and ensuring that a plausible
answer is present. As inspiration, we also showed questions from SQuAD 1.1 for each paragraph;
this further encouraged unanswerable questions to look similar to answerable ones.
We removed
questions from workers who wrote 25 or fewer questions on that article; this filter helped
remove noise from workers who had trouble understanding the task, and therefore quit before
completing the whole article. We applied this filter to both our new data and the existing
answerable questions from SQuAD 1.1. To generate train, development, and test splits, we used
the same partition of articles as SQuAD 1.1, and combined the existing data with our new data
for each split. For the SQuAD 2.0 development and test sets, we removed articles for which
we did not collect unanswerable questions. This resulted in a roughly one-to-one ratio of answerable
to unanswerable questions in these splits, whereas the train data has roughly twice as many
answerable questions as unanswerable ones.
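The worker-level filter described above amounts to a simple grouping pass. The sketch below assumes hypothetical `worker_id` and `article_id` fields on each question record; the released SQuAD files do not expose worker identities, so this only illustrates the rule rather than reproducing the actual preprocessing.

```python
from collections import defaultdict

def filter_low_effort_workers(questions, threshold=25):
    """Keep only questions from workers who wrote more than `threshold` questions on an article.

    `questions` is a list of dicts with hypothetical `worker_id` and `article_id` keys
    (not present in the public release); questions from (worker, article) pairs with
    `threshold` or fewer questions are dropped, mirroring the filter described above.
    """
    counts = defaultdict(int)
    for q in questions:
        counts[(q["worker_id"], q["article_id"])] += 1
    return [q for q in questions
            if counts[(q["worker_id"], q["article_id"])] > threshold]
```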
To confirm that our dataset is clean, we hired
additional crowd workers to answer all questions in the SQuAD 2.0 development and test sets.
In each task, we showed workers an entire article from the dataset. For each paragraph, we
showed all associated questions; unanswerable and answerable questions were shuffled together.
For each question, workers were told to either highlight the answer in the paragraph, or mark
it as unanswerable. Workers were told to expect every paragraph to have some answerable and
some unanswerable questions. They were asked to spend one minute per question, and were paid
$10.50 per hour.
To reduce crowd worker noise, we collected multiple human answers for each
question and selected the final answer by majority vote, breaking ties in favor of answering
questions and preferring shorter answers to longer ones. On average, we collected 4.8 answers
per question.
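The adjudication rule above can be summarized in a few lines. The sketch below is an illustration under stated assumptions, not the actual adjudication code: each crowd response is represented as a raw answer string, with the empty string standing in for an "unanswerable" vote; majority wins, ties go to answering over abstaining, and shorter spans beat longer ones.

```python
from collections import Counter

def resolve_answer(crowd_answers):
    """Pick a final answer from multiple crowd responses by majority vote.

    `crowd_answers` is a list of strings; "" stands in for an "unanswerable" vote.
    Ties are broken in favor of answering (non-empty beats empty), then in favor
    of the shorter answer span. Returns None if the question is judged unanswerable.
    """
    votes = Counter(crowd_answers)
    answer, _ = max(
        votes.items(),
        key=lambda item: (
            item[1],        # most votes first
            item[0] != "",  # prefer answering over "unanswerable"
            -len(item[0]),  # prefer shorter answers
        ),
    )
    return answer or None

# Two annotators give the same short span, one gives a longer span,
# and two mark the question as unanswerable -> the short span wins.
print(resolve_answer(["Nikola Tesla", "Nikola Tesla",
                      "the inventor Nikola Tesla", "", ""]))
```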
帕依提提 reminder
This dataset is still being organized; alternative channels have been prepared for you, please use those instead.
Data usage statement:
- 1. This data comes from internet data collection or is supplied by service providers; this platform provides users with display and browsing of the dataset.
- 2. This platform only serves as a display of the dataset's basic information, including but not limited to file types such as images, text, video, and audio.
- 3. The dataset's basic information comes from the original data source or the information supplied by the data provider; if there are discrepancies in the dataset description, the original data source or the service provider's original address prevails.
- 4. The copyright of all datasets on this site belongs to the original data publisher or data provider.
- 5. If you need to repost data from this site, please retain the original data address and the related copyright notice.
- 6. If any data displayed on this site involves infringement, please contact this site promptly and we will arrange to take the data offline.