BERT English Uncased Unigrams


Dataset size: 94.36M
Tags: Music, NLP Classification

    README.md

## Is BERT the right model to fine-tune your data on? Or do you need to pretrain from scratch?

### Know your model's training data

BERT models have become commonly available, and the use of subword tokenization has become widespread. But are these base models suitable for fine-tuning against your data? Subword tokenization obscures the vocabulary the base model was trained on. By examining the original training-data unigrams and their distributions, you can determine whether your data would benefit from training a model from scratch.

### Content

This dataset is a best-effort reconstruction of the training data used to train the English BERT base uncased model. The data comes from the BookCorpus dataset and a processed dump of Wikipedia (August 2019). Following the principles of BERT's tokenization scheme, no punctuation or stopwords have been removed. The original Unicode text was normalized using [NFKC](https://unicode.org/reports/tr15/), tokenized using the spaCy English model (large), and the total count for each unigram across the corpora was recorded. The unigrams are sorted in descending order of frequency. The CSV file's column values are tab-separated.

### Acknowledgements

Wikipedia and a public archive site provided the data prior to processing.

### Inspiration

Here are some useful ideas:

* Construct a probability distribution of data in your domain and determine if BERT base is close enough for your task.
* Analyze the training data of a new BERT model (e.g. Bio-BERT, Legal-BERT) and quantify how similar or different it is to BERT base by calculating the Kullback–Leibler divergence over the shared vocabulary.
* Use this data to evaluate and locate important bigrams in its sister dataset, [BERT bigrams](https://www.kaggle.com/toddcook/bert-english-uncased-bigrams).
* Determine how much of your data is OOV (out of vocabulary), which can be a strong signal of the need for retraining.
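The last two ideas above (KL divergence over the shared vocabulary, and the OOV fraction) can be sketched roughly as follows. This is a minimal stdlib-only sketch: the function names and the toy count dictionaries are hypothetical, standing in for the real unigram counts you would load from this dataset's tab-separated file.

```python
import math

def kl_divergence_shared(p_counts, q_counts):
    """KL(P || Q) restricted to the vocabulary shared by both corpora.

    Each distribution is renormalized over the shared vocabulary first,
    since KL divergence is undefined where Q assigns zero probability.
    """
    shared = set(p_counts) & set(q_counts)
    p_total = sum(p_counts[w] for w in shared)
    q_total = sum(q_counts[w] for w in shared)
    kl = 0.0
    for w in shared:
        p = p_counts[w] / p_total
        q = q_counts[w] / q_total
        kl += p * math.log(p / q)
    return kl

def oov_fraction(domain_counts, base_vocab):
    """Fraction of domain tokens whose type is absent from the base vocabulary."""
    total = sum(domain_counts.values())
    oov = sum(c for w, c in domain_counts.items() if w not in base_vocab)
    return oov / total

# Toy counts standing in for the dataset's unigram/count columns.
bert_base = {"the": 100, "cell": 2, "court": 3}
bio_corpus = {"the": 80, "cell": 40, "protein": 30}

print(kl_divergence_shared(bio_corpus, bert_base))
print(oov_fraction(bio_corpus, bert_base))  # "protein" is OOV: 30/150 = 0.2
```

A large divergence or a high OOV fraction suggests the base model's vocabulary is a poor match for your domain and that pretraining from scratch may pay off.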
### Bugs

If you find any problem with the data, please let me know, and I will make corrections.

### Updates

Besides minor corrections, if I learn of a Wikipedia data release that more closely approximates the BERT training data, I will update this dataset.
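The reconstruction described in the Content section (NFKC normalization, lowercasing, tokenization with no punctuation or stopword removal, then per-unigram counting sorted by descending frequency) can be sketched as below. A naive regex tokenizer stands in for the spaCy large English model the author used, so counts will not match the dataset exactly; `write_tsv` follows the stated tab-separated layout but the output path is illustrative.

```python
import re
import unicodedata
from collections import Counter

def count_unigrams(texts):
    """NFKC-normalize and lowercase each document, tokenize, and tally unigrams.

    A simple regex tokenizer is used here as a stand-in for spaCy's large
    English model; punctuation and stopwords are kept, matching the
    dataset's stated preprocessing.
    """
    counts = Counter()
    for text in texts:
        normalized = unicodedata.normalize("NFKC", text.lower())
        tokens = re.findall(r"\w+|[^\w\s]", normalized)
        counts.update(tokens)
    return counts

def write_tsv(counts, path):
    """Write unigrams in descending order of frequency, tab-separated."""
    with open(path, "w", encoding="utf-8") as f:
        for token, count in counts.most_common():
            f.write(f"{token}\t{count}\n")

corpus = ["The cat sat.", "The cat ran!"]
counts = count_unigrams(corpus)
print(counts.most_common(2))  # [('the', 2), ('cat', 2)]
```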