Public dataset
Data Structure, 23.71 GB
README.md
Wikipedia dumps contain a tremendous amount of markup. MediaWiki wikitext is a hybrid of wiki markup and HTML, making it very difficult to use. Wikipedia, however, is an extremely valuable dataset, so I wanted a more usable format.
Content
This dataset consists of ~40 MB JSON files, each containing a collection of Wikipedia articles. Each article element in the JSON has only 3 keys: an ID number, the title of the article, and the text of the article. Each article has been "flattened" into a single plain-text string. This makes it easier for humans to read than the markup version, and easier to use for NLP tasks: you will have much less cleanup to do.
Each file looks like this:
[
  {
    "id": "17279752",
    "text": "Hawthorne Road was a cricket and football ground in Bootle in England...",
    "title": "Hawthorne Road"
  }
]
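Since each file is plain JSON, loading it needs nothing beyond the standard library. A minimal sketch, using the single-element sample above (a real file would simply be a longer array; the file name in the comment is hypothetical):

```python
import json

# A one-element sample in the dataset's format, taken from the README example.
sample = '''[
  {
    "id": "17279752",
    "text": "Hawthorne Road was a cricket and football ground in Bootle in England...",
    "title": "Hawthorne Road"
  }
]'''

articles = json.loads(sample)

# Each article carries exactly three keys: id, title, text.
for article in articles:
    print(article["id"], article["title"])

# A whole ~40 MB file loads the same way (hypothetical file name):
# with open("wiki_000.json", encoding="utf-8") as f:
#     articles = json.load(f)
```

Because the text is already flattened, `article["text"]` can be fed directly into a tokenizer or search index without any wikitext stripping.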
Acknowledgements
Absolutely all thanks goes to Wikipedia: everyone who helped build and fund it, and everyone who has contributed their time and expertise to its content.
Data usage statement:
- 1. This data comes from internet data collection or from service providers; this platform displays the dataset for users to view and browse.
- 2. This platform only displays the dataset's basic information, including but not limited to image, text, video, and audio file types.
- 3. The dataset's basic information comes from the original data source or the data provider; if the dataset description differs, the original source or the provider's description prevails.
- 4. Copyright of all datasets on this site belongs to the original data publisher or data provider.
- 5. If you repost data from this site, please retain the original data address and the relevant copyright notice.
- 6. If any data displayed on this site involves infringement, please contact us promptly and we will take the data offline.