ELI5 means "Explain Like I'm 5". It is originally a long, free-form question-answering dataset scraped from the Reddit ELI5 subforum.
The original ELI5 dataset (https://github.com/facebookresearch/ELI5) can be used to train a model for long, free-form question answering,
e.g. with encoder-decoder models such as T5 or BART.
Conventional performance evaluation: ROUGE scores
Once we have a model, how can we estimate its performance (its ability to give high-quality answers)?
The conventional methods are the ROUGE family of metrics (see the ELI5 paper linked above).
However, ROUGE scores are based on n-gram overlap and require comparing a generated answer against a ground-truth answer.
Unfortunately, n-gram scoring cannot reward a high-quality paraphrase of the ground truth.
Worse, the very need for a ground-truth answer to compare against goes against the spirit of free-form question answering, where there are many possible (non-paraphrase) valid and good answers.
To summarize, creative, high-quality answers cannot be recognized by ROUGE, which prevents us from building (and evaluating) creative models.
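The paraphrase problem can be seen with a tiny unigram-overlap F1, the core idea behind ROUGE-1. This is an illustrative sketch, not the official ROUGE implementation, and the sentences are made-up examples:

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Unigram-overlap F1, the core idea behind ROUGE-1."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # count of shared unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

reference = "water boils because heat makes its molecules move faster"
paraphrase = "warming a liquid speeds up the particles until they escape as vapor"

print(rouge1_f1(reference, reference))   # identical answer scores 1.0
print(rouge1_f1(reference, paraphrase))  # valid paraphrase scores 0.0
```

The second answer is a perfectly reasonable explanation, yet it shares no unigram with the reference, so any n-gram metric gives it a score of zero.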
This dataset: toward a better scorer
This dataset, in contrast, is aimed at training a scoring (regression) model that predicts an upvote score for each Q-A pair individually (not an A-A pair as with ROUGE).
The data is simply a CSV file containing Q-A pairs and their scores.
Each line contains the Q-A text (in RoBERTa format) and its upvote score (a non-negative integer).
It is intended to make it easy and direct to build a scoring model with RoBERTa (or with other Transformer models, after changing the separator token).
CSV file
The CSV file has two columns: qa and answer_score.
Each row of qa is written in RoBERTa paired-sentence format, joining the question and its answer with RoBERTa's separator tokens.
The answer_score column follows these principles:
- A high-quality answer related to its question should get a high score (upvotes).
- A low-quality answer related to its question should get a low score.
- A well-written answer NOT related to its question should get a score of 0.
Each positive Q-A pair comes from the original ELI5 dataset (with its true upvote score).
Each 0-score Q-A pair is constructed as detailed in the next subsection.
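A minimal sketch of reading the CSV into (text, score) pairs. The two sample rows and the `<s> … </s></s> … </s>` pair encoding in the strings below are illustrative assumptions about the file contents, not rows taken from the actual dataset:

```python
import csv
import io

# Hypothetical rows mirroring the dataset's schema: a `qa` column in
# RoBERTa paired-sentence format and a non-negative `answer_score` column.
sample = io.StringIO(
    "qa,answer_score\n"
    '"<s> Why is the sky blue? </s></s> Because of Rayleigh scattering. </s>",42\n'
    '"<s> Why is the sky blue? </s></s> A well-written but unrelated answer. </s>",0\n'
)

# Parse into (text, score) pairs ready for regression fine-tuning.
pairs = [(row["qa"], int(row["answer_score"])) for row in csv.DictReader(sample)]
print(pairs[0][1], pairs[1][1])  # 42 0
```

In practice one would replace `sample` with the dataset file, tokenize each `qa` string, and fit a single-output regression head on top of roberta-base.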
0-score construction details via RetriBERT & FAISS
The principle is contrastive training: we need reasonably hard 0-score pairs for the model to generalize.
Too-easy 0-score pairs teach the model nothing (e.g. a question paired with a completely random answer is trivially distinguishable).
Therefore, for each question we construct two answers (two 0-score pairs) where each answer is related to the topic of the question but does not answer it.
This is achieved by embedding all questions into vectors with RetriBERT and storing them in a FAISS index. We can then measure the distance between two question vectors using cosine distance.
More precisely, for a question Q1 we choose the answers A2 and A3 of two related (but non-identical) questions Q2 and Q3, and construct the 0-score pairs Q1-A2 and Q1-A3. Combined with the positive-score pair Q1-A1, we get 3 pairs for Q1, and likewise 3 pairs for every question. Therefore, from the 272,000 examples of the original ELI5, this dataset contains 3 times that size = 816,000 examples.
Note that two question vectors that are very close can belong to the same (paraphrased) question, while two questions that are very far apart are about totally different topics. Therefore, we need a threshold band to select not-too-close, not-too-far pairs of questions, so that we get non-identical but same-topic question pairs.
In a simple experiment, a cosine distance of 10-11 between RetriBERT vectors seemed to work well, so we use this range as the threshold for constructing 0-score Q-A pairs.
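The selection rule can be sketched with toy vectors standing in for RetriBERT embeddings. The question names, 3-d vectors, and the 0.05-0.5 distance band below are made up for illustration; the real pipeline embeds all ~272K questions and searches them with a FAISS index rather than a brute-force loop:

```python
import math

# Toy stand-ins for RetriBERT question embeddings (made-up values).
questions = {
    "Q1": [1.0, 0.0, 0.0],
    "Q2": [0.9, 0.4, 0.1],   # same topic as Q1, different question
    "Q3": [0.8, 0.5, 0.2],   # same topic as Q1, different question
    "Q4": [1.0, 0.0, 0.0],   # near-duplicate (paraphrase) of Q1
    "Q5": [0.0, 0.0, 1.0],   # unrelated topic
}

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nu * nv)

def related_but_distinct(query, lo, hi, k=2):
    """Pick up to k questions whose distance to `query` falls in (lo, hi):
    close enough to share a topic, far enough to not be a paraphrase."""
    scored = sorted(
        (cosine_distance(questions[query], vec), name)
        for name, vec in questions.items() if name != query
    )
    return [name for dist, name in scored if lo < dist < hi][:k]

print(related_but_distinct("Q1", lo=0.05, hi=0.5))  # ['Q2', 'Q3']
```

The band excludes Q4 (too close, likely a paraphrase) and Q5 (too far, off-topic), leaving Q2 and Q3, whose answers would form the two 0-score pairs for Q1.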
Baseline model
A roberta-base baseline with MAE 3.91 on the validation set can be found here:
https://www.kaggle.com/ratthachat/eli5-scorer-roberta-base-500k-mae391
Acknowledgements
Thanks to the Facebook AI team for creating the original ELI5 dataset, and to the Hugging Face NLP library for making this dataset easy to access.
Inspiration
My project on ELI5 is mainly inspired by this amazing work of Yacine Jernite: https://yjernite.github.io/lfqa.html