
WikiText

Size: 373.28M
Category: Others · Text


    README.md

    The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia.
    Compared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long term dependencies.
    In comparison to the Mikolov processed version of the Penn Treebank (PTB), the WikiText datasets are larger. WikiText-2 aims to be of a similar size to the PTB while WikiText-103 contains all articles extracted from Wikipedia. The WikiText datasets also retain numbers (as opposed to replacing them with N), case (as opposed to all text being lowercased), and punctuation (as opposed to stripping them out).
    [Table: dataset statistics]
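    The corpus is widely mirrored. As a minimal sketch, assuming the copy published on the Hugging Face Hub under the `wikitext` name (with configurations such as `wikitext-2-raw-v1` and `wikitext-103-raw-v1`), it can be loaded like this:

```python
# Minimal sketch: loading WikiText-2 with the Hugging Face `datasets` library.
# Assumes the public Hub mirror named "wikitext"; the "*-raw-v1" configurations
# select the raw, case-preserving versions of the corpus.
from datasets import load_dataset

wikitext2 = load_dataset("wikitext", "wikitext-2-raw-v1")
print(wikitext2)                        # DatasetDict with train/validation/test splits
print(wikitext2["train"][10]["text"])   # one raw line of article text
```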

    Data Collection

    We selected only articles fitting the Good or Featured article criteria specified by editors on Wikipedia. These articles have been reviewed by humans and are considered well written, factually accurate, broad in coverage, neutral in point of view, and stable. This resulted in 23,805 Good articles and 4,790 Featured articles. The text for each article was extracted using the Wikipedia API. Extracting the raw text from Wikipedia mark-up is nontrivial due to the large number of macros in use; these macros are used extensively and include metric conversion, abbreviations, language notation, and date handling.
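    As a rough illustration of the extraction step, the sketch below pulls plain text for a single article through the public Wikipedia `extracts` API. It is a simplified stand-in for the full macro-handling pipeline described above, and the article title is only an example.

```python
# Minimal sketch: fetching plain text for one article via the Wikipedia API.
# Uses the TextExtracts endpoint (action=query, prop=extracts) as a simplified
# stand-in for the full markup/macro handling described above.
import requests

def fetch_plain_text(title: str) -> str:
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "extracts",
            "explaintext": 1,   # return plain text instead of HTML
            "format": "json",
            "titles": title,
        },
        timeout=30,
    )
    resp.raise_for_status()
    pages = resp.json()["query"]["pages"]
    return next(iter(pages.values())).get("extract", "")

print(fetch_plain_text("Alan Turing")[:300])  # example article title
```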
    Once extracted, specific sections which primarily featured lists were removed by default. Other minor bugs, such as sort keys and Edit buttons that leaked in from the HTML, were also removed. Mathematical formulae and LaTeX code were replaced with ⟨formula⟩ tokens. Normalization and tokenization were performed using the Moses tokenizer, slightly augmented to further split numbers (8,600 → 8 @,@ 600) and with some additional minor fixes. A vocabulary was constructed by discarding all words with a count below 3. Words outside of the vocabulary were mapped to the ⟨unk⟩ token, also a part of the vocabulary.
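    The number-splitting and vocabulary rules lend themselves to a short sketch. The snippet below is an illustrative approximation, not the actual preprocessing script; in particular, the real pipeline uses the Moses tokenizer, which is not reproduced here.

```python
# Illustrative sketch of the rules described above: split digits around ',' and '.'
# with @,@ / @.@ markers, keep words seen at least 3 times, map the rest to <unk>
# (written here with plain ASCII angle brackets).
import re
from collections import Counter

def split_number(token):
    # "8,600" -> ["8", "@,@", "600"]
    return re.sub(r"(\d)([.,])(\d)", r"\1 @\2@ \3", token).split()

def build_vocab(tokenized_lines, min_count=3):
    counts = Counter(tok for line in tokenized_lines for tok in line)
    return {tok for tok, c in counts.items() if c >= min_count} | {"<unk>"}

def map_oov(tokens, vocab):
    return [tok if tok in vocab else "<unk>" for tok in tokens]

print(split_number("8,600"))  # ['8', '@,@', '600']
```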

    Friendly reminder from 帕依提提

    This dataset is still being organized; alternative channels have been prepared for you to use.

    Note: some of the data is still being processed and is not yet available for direct download; thank you for your understanding and support.