Plain Text Wikipedia: each file contains a collection of Wikipedia articles

Size: 23.71 GB
Tags: NLP, Computer Science, Text Data, Text Mining, Classification

Data Structure (23.71 GB)


    * The above analysis is generated automatically by the system; the actual data shall prevail.

    README.md

    Wikipedia dumps contain a tremendous amount of markup. MediaWiki wikitext is a hybrid of markdown-like syntax and HTML, which makes the raw dumps very difficult to use. Wikipedia, however, is an extremely valuable dataset, so I wanted a more usable format.

    Content

    This dataset includes ~40MB JSON files, each of which contains a collection of Wikipedia articles. Each article element in the JSON contains only 3 keys: an ID number, the title of the article, and the text of the article. Each article has been "flattened" to occupy a single plain text string. This makes it easier for humans to read, as opposed to the markup version. It also makes it easier for NLP tasks. You will have much less cleanup to do.

    Each file looks like this:

    [
     {
      "id": "17279752",
      "text": "Hawthorne Road was a cricket and football ground in Bootle in England...",
      "title": "Hawthorne Road"
     }
    ]
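
    Because each file is simply a JSON array of article objects with the three keys above, the dataset can be loaded with nothing but the Python standard library. Below is a minimal sketch, assuming the JSON files have been unpacked into a local directory; the plain_text_wikipedia path and the iter_articles helper are hypothetical names, not part of the dataset.

    import json
    from pathlib import Path

    # Hypothetical location of the unpacked ~40MB JSON files; adjust to your setup.
    DATA_DIR = Path("plain_text_wikipedia")

    def iter_articles(data_dir: Path):
        """Yield (id, title, text) tuples from every JSON file in the dataset."""
        for json_file in sorted(data_dir.glob("*.json")):
            with json_file.open(encoding="utf-8") as f:
                articles = json.load(f)  # each file is a list of article dicts
            for article in articles:
                yield article["id"], article["title"], article["text"]

    if __name__ == "__main__":
        # Quick sanity check: print the first five article IDs, titles, and a text snippet.
        for i, (article_id, title, text) in enumerate(iter_articles(DATA_DIR)):
            print(article_id, title, text[:60])
            if i == 4:
                break

    Writing the loader as a generator means you never need to hold all ~23 GB of text in memory at once; each ~40 MB file is parsed, iterated, and released in turn.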

    Acknowledgements

    Absolutely all thanks go to Wikipedia! Thanks to everyone who helped build and fund Wikipedia, and to everyone who has contributed their time and expertise to its content.

