Dataset size: 891.28M
README.md
The Wikipedia dump is a giant XML file and contains loads of not-so-useful content. I needed some English text for unsupervised learning, so I spent quite a bit of time extracting and cleaning up the text.
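For illustration, here is a minimal sketch of how such a dump could be streamed page by page without loading the whole XML file into memory. This is not the original extraction code: the file name, the namespace handling, and the disambiguation check are assumptions, and the extracted wikitext still needs markup stripping before sentence splitting.

```python
import bz2
import xml.etree.ElementTree as ET

DUMP_PATH = "enwiki-latest-pages-articles.xml.bz2"  # hypothetical dump file name

def iter_pages(path):
    """Yield (title, wikitext) for each <page> element in the dump."""
    with bz2.open(path, "rb") as f:
        for _, elem in ET.iterparse(f, events=("end",)):
            # Drop the MediaWiki export namespace so tag names are comparable.
            if elem.tag.rsplit("}", 1)[-1] == "page":
                title = elem.findtext(".//{*}title") or ""
                text = elem.findtext(".//{*}text") or ""
                yield title, text
                elem.clear()  # discard the element to keep memory usage flat

if __name__ == "__main__":
    # Print the first few non-disambiguation article titles as a smoke test.
    count = 0
    for title, text in iter_pages(DUMP_PATH):
        if "(disambiguation)" in title:
            continue
        print(title)
        count += 1
        if count == 5:
            break
```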
Content
Each line of the txt file is a 'sentence'. I put sentence in quotes because the content in these files hasn't been read all the way through for errors. Here is what I did (a rough sketch of these steps follows the list):
- Parsed out the opening text on non-disambiguation and non-table-of-contents pages.
- Removed sentences requiring citations, because these were usually poorly formed.
- Parsed each block of text into sentences using spaCy, then checked for bracket and quote correctness, filtering out sentences that didn't quite match up.
- Removed sentences shorter than 3 letters or longer than 255 characters; this covers 97% of the data.
- Removed duplicate sentences and, as a byproduct, sorted them alphabetically.
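For concreteness, the sketch below mirrors those steps: sentence splitting with spaCy, a bracket/quote balance check, the 3–255 character length filter, and deduplication with alphabetical order as a side effect. It is not the original script; the model name, the citation heuristic, and the exact balance rules are assumptions.

```python
import spacy

nlp = spacy.load("en_core_web_sm")  # any English pipeline with a sentence segmenter

PAIRS = {"(": ")", "[": "]", "{": "}"}

def is_balanced(sentence: str) -> bool:
    """Reject sentences whose brackets or double quotes don't match up."""
    stack = []
    for ch in sentence:
        if ch in PAIRS:
            stack.append(PAIRS[ch])
        elif ch in PAIRS.values():
            if not stack or stack.pop() != ch:
                return False
    return not stack and sentence.count('"') % 2 == 0

def clean_block(text: str) -> list[str]:
    """Split a block of article text into sentences and apply the filters."""
    kept = []
    for sent in nlp(text).sents:
        s = sent.text.strip()
        if len(s) < 3 or len(s) > 255:      # length filter from the list above
            continue
        if "citation needed" in s.lower():  # crude stand-in for the citation filter
            continue
        if not is_balanced(s):              # bracket/quote correctness check
            continue
        kept.append(s)
    return kept

def dedupe_and_sort(sentences: list[str]) -> list[str]:
    """Remove duplicates; alphabetical order falls out of sorted(set(...))."""
    return sorted(set(sentences))
```

The surviving sentences would then be written out one per line, matching the format of the released txt files.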
帕依提提提温馨提示
该数据集正在整理中,为您准备了其他渠道,请您使用
- 分享你的想法
全部内容