## Is BERT the right model to fine-tune on your data? Or do you need to pretrain from scratch?
Know your model's training data
BERT models have become widely available, and subword tokenization is now standard. But are these base models suitable for fine-tuning on your data? Subword tokenization obscures the vocabulary the base model was trained on. By examining the original training data's unigrams and their distribution, you can determine whether your data would benefit from training a model from scratch.
### Content
This dataset is a best-effort reconstruction of the training data used to train the English BERT base uncased model. The data comes from the BookCorpus dataset and a processed dump of Wikipedia (August 2019). Following the principles of BERT's tokenization scheme, no punctuation or stopwords were removed. The original Unicode text was normalized using [NFKC](https://unicode.org/reports/tr15/) and tokenized using the spaCy English model (large), and the total count of each unigram across the corpora was recorded. The unigrams are sorted in descending order of frequency, and the CSV file's columns are tab-separated.
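For reference, here is a minimal sketch of a counting pipeline like the one described above (NFKC normalization, spaCy large English tokenizer, corpus-wide unigram counts written tab-separated in descending order of frequency). It is not the author's actual script, and the output filename and the (token, count) column order are illustrative assumptions.

```python
# A hedged reconstruction of the counting steps described above, not the
# author's actual script. The output name and (token, count) column order
# are assumptions for illustration.
import csv
import unicodedata
from collections import Counter

import spacy

# Large English model; parsing and NER are not needed for unigram counting.
nlp = spacy.load("en_core_web_lg", disable=["parser", "ner"])

def count_unigrams(lines):
    """NFKC-normalize each line, tokenize with spaCy, and tally unigrams."""
    counts = Counter()
    for line in lines:
        text = unicodedata.normalize("NFKC", line)
        counts.update(tok.text for tok in nlp(text))
    return counts

def write_counts(counts, path):
    """Write tab-separated (token, count) rows in descending order of frequency."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f, delimiter="\t")
        for token, freq in counts.most_common():
            writer.writerow([token, freq])

if __name__ == "__main__":
    sample = [
        "The quick brown fox jumps over the lazy dog.",
        "BERT models have become commonly available.",
    ]
    write_counts(count_unigrams(sample), "unigrams.tsv")
```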
### Acknowledgements
Wikipedia and a public archive site provided the data prior to processing.
### Inspiration
Here are some useful ideas:
* Construct a probability distribution over unigrams in your domain data and determine whether BERT base is close enough for your task.
* Analyze the training data of a newer BERT model (e.g. Bio-BERT, Legal-BERT) and quantify how similar it is to BERT base by calculating the Kullback–Leibler divergence over the shared vocabulary.
* Use this data to evaluate and locate important bigrams in its sister dataset [BERT bigrams](https://www.kaggle.com/toddcook/bert-english-uncased-bigrams).
* Determine how much of your data is OOV (out of vocabulary), which can be a strong signal of the need for retraining; a sketch of this and the KL-divergence check follows this list.
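The KL-divergence and OOV ideas above can be prototyped in a few lines. The sketch below assumes a two-column, tab-separated (token, count) layout; it restricts the divergence to the shared vocabulary, as suggested, and measures OOV by frequency mass.

```python
# A hedged sketch of the KL-divergence and OOV checks suggested above.
# The (token, count) column layout and file names are assumptions.
import csv
import math

def load_counts(path):
    """Load a tab-separated unigram file into a {token: count} dict."""
    counts = {}
    with open(path, encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) >= 2 and row[1].isdigit():
                counts[row[0]] = counts.get(row[0], 0) + int(row[1])
    return counts

def kl_divergence(domain_counts, base_counts):
    """KL(domain || base), restricted to the shared vocabulary."""
    shared = set(domain_counts) & set(base_counts)
    if not shared:
        return float("inf")
    p_total = sum(domain_counts[t] for t in shared)
    q_total = sum(base_counts[t] for t in shared)
    return sum(
        (domain_counts[t] / p_total)
        * math.log((domain_counts[t] / p_total) / (base_counts[t] / q_total))
        for t in shared
    )

def oov_rate(domain_counts, base_counts):
    """Share of the domain's token mass absent from the base vocabulary."""
    total = sum(domain_counts.values())
    missing = sum(c for t, c in domain_counts.items() if t not in base_counts)
    return missing / total if total else 0.0

# Example usage (file names are placeholders):
# base = load_counts("bert_base_unigrams.csv")
# mine = load_counts("my_domain_unigrams.tsv")
# print(kl_divergence(mine, base), oov_rate(mine, base))
```

Interpreted qualitatively, a large divergence or a high OOV mass strengthens the case for pretraining from scratch.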
### Bugs
If you find any problem with the data, please let me know, and I will make corrections.
### Updates
Besides minor corrections, if I learn of a Wikipedia data release that more closely approximates the BERT training data, I will update this dataset.