Wikipedia Sentences: 7.8 million sentences collected from an English Wikipedia dump


Size: 891.28M
Tags: NLP, Text Mining, Classification



README.md

The Wikipedia dump is a giant XML file and contains loads of not-so-useful content. I needed some English text for some unsupervised learning, so I spent quite a bit of time extracting and cleaning up the text.
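The extraction code itself isn't included with the dataset. For orientation, here is a minimal sketch of one way to stream pages out of a dump this large without loading the whole file into memory, assuming the standard MediaWiki XML export format; the file name and function name are hypothetical, and this is not the author's actual pipeline.

```python
# Minimal sketch (not the author's code) of streaming <page> elements
# out of a MediaWiki XML dump. The dump file name is a placeholder.
import xml.etree.ElementTree as ET

def iter_page_texts(dump_path):
    """Yield (title, wikitext) pairs one <page> at a time."""
    for _, elem in ET.iterparse(dump_path, events=("end",)):
        # Tags carry the MediaWiki export namespace, so match on the suffix.
        if elem.tag.endswith("}page") or elem.tag == "page":
            title = elem.findtext(".//{*}title", default="")
            text = elem.findtext(".//{*}text", default="")
            yield title, text
            elem.clear()  # free memory used by already-processed pages

if __name__ == "__main__":
    for title, text in iter_page_texts("enwiki-latest-pages-articles.xml"):
        print(title, len(text))
        break
```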

Content

Each line of the txt file is a 'sentence'. I put sentence in quotes because the content in these files hasn't been read all the way through for errors. Here is what I did (a rough sketch of these steps follows the list):

• Parsed out the opening text on non-disambiguation and non-table-of-contents pages.

• Removed sentences requiring citations, because these were usually poorly formed.

• Parsed each block of text into sentences using SpaCy, then checked for bracket and quote correctness, filtering out sentences that didn't quite match up.

• Removed sentences shorter than 3 letters and longer than 255 characters. This covers 97% of the data.

• Removed duplicate sentences and, as a byproduct, sorted them alphabetically.
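The sketch below illustrates the sentence-splitting, filtering, de-duplication, and sorting steps described above. It is my own approximation, not the released pipeline: the spaCy model ("en_core_web_sm"), the exact bracket/quote heuristic, and the reading of "3 letters" as a 3-character minimum are all assumptions.

```python
# Approximate reconstruction of the filtering steps listed above.
# Assumptions: en_core_web_sm as the spaCy model, a simple balance check
# for brackets/quotes, and "3 letters" interpreted as 3 characters.
import spacy

nlp = spacy.load("en_core_web_sm")

PAIRS = {"(": ")", "[": "]", "{": "}"}

def brackets_and_quotes_balanced(sentence):
    """Reject sentences whose brackets don't nest or whose quotes are unpaired."""
    stack = []
    for ch in sentence:
        if ch in PAIRS:
            stack.append(PAIRS[ch])
        elif ch in PAIRS.values():
            if not stack or stack.pop() != ch:
                return False
    return not stack and sentence.count('"') % 2 == 0

def clean_sentences(blocks):
    """Split text blocks into sentences, filter, de-duplicate, and sort."""
    kept = set()
    for block in blocks:
        for sent in nlp(block).sents:
            s = sent.text.strip()
            if not (3 <= len(s) <= 255):   # length filter from the list above
                continue
            if not brackets_and_quotes_balanced(s):
                continue
            kept.add(s)
    # De-duplication comes from the set; sorting alphabetically is the byproduct.
    return sorted(kept)

if __name__ == "__main__":
    print(clean_sentences(['He said "hi" (twice). Broken (sentence.']))
```

Loading the released file is then just a matter of reading it line by line, since each line is one sentence.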

