Context
This dataset is an extension of the [original dataset](https://www.kaggle.com/uciml/sms-spam-collection-dataset), a set of English SMS messages tagged as **spam** or **ham**.
The dataset was created to make it possible to work with BERT embeddings. Since creating these embeddings in Kaggle kernels is not feasible for memory reasons, I created them locally and provide the original dataset together with the embeddings. So in this dataset you get the original dataset plus the embeddings for each SMS message!
Please refer to the [original dataset](https://www.kaggle.com/uciml/sms-spam-collection-dataset) for further clarification.
Content
The dataset contains the same information as the [original dataset](https://www.kaggle.com/uciml/sms-spam-collection-dataset) plus the additional DistilBERT classification embeddings.
This results in a dataset with 5574 rows and 770 columns (a short loading sketch follows the list below):
- `spam` -> Target column specifying whether the message is *spam* or *ham*
- `original_message` -> The original, unprocessed message
- `0` up to `767` -> Columns containing the 768-dimensional DistilBERT classification embedding for the message, after preprocessing
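For orientation, here is a minimal sketch of how the columns might be split into target, raw text, and embedding features. The file name used is a hypothetical placeholder, not the actual file name shipped with the dataset.

```python
# Minimal loading sketch; "sms_spam_embeddings.csv" is a hypothetical file name.
import pandas as pd

df = pd.read_csv("sms_spam_embeddings.csv")

y = df["spam"]                  # target: spam vs. ham
texts = df["original_message"]  # raw SMS text

# The remaining 768 columns hold the DistilBERT embedding of each message
X = df.drop(columns=["spam", "original_message"])
print(X.shape)  # expected: (5574, 768)
```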
Inspiration
- Can you classify spam messages using the embeddings? (A minimal baseline sketch follows this list.)
- Do BERT embeddings work better than TF-IDF?
- What is the highest ROC-AUC you can get?
- What features can be derived from the dataset?
- What are the most common words in spam/ham messages?
- What are some Spam messages you **can't** correctly classify?
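As a starting point for the classification and ROC-AUC questions, here is a minimal baseline sketch reusing `X` and `y` from the loading sketch above. Logistic regression is an illustrative choice, not the method used by the dataset author.

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Assumes the target holds the strings "ham"/"spam"; adjust if it is already 0/1.
y_bin = y.map({"ham": 0, "spam": 1})

X_train, X_test, y_train, y_test = train_test_split(
    X, y_bin, test_size=0.2, random_state=42, stratify=y_bin
)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print("ROC-AUC:", roc_auc_score(y_test, probs))
```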
Procedure for creating the dataset
HuggingFace's DistilBERT is used from their [transformers](https://github.com/huggingface/transformers) package.
[Jay Alammar's tutorial](http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/) was followed to encode the messages using DistilBERT.
For memory efficiency reasons, all messages are first stripped of punctuation, then English stopwords are removed, and only the first 30 tokens are kept.
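The following is a rough sketch of that preprocessing, assuming NLTK's English stopword list; the author's exact script lives in the GitHub repo linked below.

```python
# Sketch of the described preprocessing, assuming NLTK's English stopword list
# (requires nltk.download("stopwords")). The author's actual script may differ.
import string
from nltk.corpus import stopwords

STOPWORDS = set(stopwords.words("english"))

def preprocess(message: str, max_tokens: int = 30) -> str:
    # Strip punctuation
    no_punct = message.translate(str.maketrans("", "", string.punctuation))
    # Remove English stopwords
    tokens = [t for t in no_punct.split() if t.lower() not in STOPWORDS]
    # Keep only the first `max_tokens` tokens
    return " ".join(tokens[:max_tokens])
```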
As per [my analysis](https://www.kaggle.com/mrlucasfischer/bert-the-spam-detector-that-uses-just-10-words) of the original dataset, most *ham* messages have around 10 words and *spam* messages around 29 words once stopwords are removed. This means that keeping only the first 30 tokens after stopword removal may cause some information loss, but not a critical amount. (In fact, [my analysis](https://www.kaggle.com/mrlucasfischer/bert-the-spam-detector-that-uses-just-10-words) demonstrates that encoding the messages using only the first 10 tokens after processing is enough to produce an encoding capable of achieving 0.881 ROC-AUC with a baseline random forest.)
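For context, a sketch of the encoding step in the spirit of Jay Alammar's tutorial is shown below: each preprocessed message is tokenized, run through DistilBERT, and the hidden state of the `[CLS]` token is kept as the 768-dimensional embedding. The model checkpoint and call arguments here are assumptions, not necessarily what the author used.

```python
# Sketch of the encoding step, assuming the "distilbert-base-uncased" checkpoint
# and the `preprocess` helper from the sketch above.
import torch
from transformers import DistilBertTokenizer, DistilBertModel

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
model.eval()

def encode(message: str) -> torch.Tensor:
    inputs = tokenizer(preprocess(message), return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
    # Keep the [CLS] token's hidden state as the message embedding
    return hidden[0, 0]
```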
To better understand how the embeddings were created, I encourage you to check out the [GitHub repo](https://github.com/lsfischer/bert-spam-embeddings) with the script used to create the dataset.
Acknowledgements
[Jay Alammar's tutorial](http://jalammar.github.io/a-visual-guide-to-using-bert-for-the-first-time/) was followed to encode the messages using DistilBERT.
The original dataset is part of the [UCI Machine Learning repository](https://archive.ics.uci.edu/ml/index.php) and can be found [here](https://archive.ics.uci.edu/ml/datasets/SMS+Spam+Collection).
UCI Machine Learning asks that, if you find the original dataset useful, you cite the original authors, who can be found [here](http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/).
Almeida, T.A., Gómez Hidalgo, J.M., Yamakami, A. Contributions to the Study of SMS Spam Filtering: New Collection and Results. Proceedings of the 2011 ACM Symposium on Document Engineering (DOCENG'11), Mountain View, CA, USA, 2011.