## Context
As part of my [OpenAI Scholars summer program][1], I wanted to try out the ULMFiT approach to text classification: [http://nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html][2].
ULMFiT has been described as a "state-of-the-art AWD-LSTM" language model *backbone* or *encoder* with a linear classifier *head* or *decoder*.
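To make the *backbone*/*head* split concrete, here is a minimal PyTorch sketch of the idea (purely illustrative, not fastai's actual implementation; the embedding and hidden sizes follow the AWD-LSTM defaults, and `n_classes=5` matches the competition's sentiment scale):

```python
import torch
import torch.nn as nn

class SentimentClassifier(nn.Module):
    """Illustrative encoder + linear head, loosely mirroring the ULMFiT split."""
    def __init__(self, vocab_size=30000, emb_dim=400, hidden_dim=1150, n_classes=5):
        super().__init__()
        # "backbone"/"encoder": the language-model layers, pre-trained then fine-tuned
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim, num_layers=3, batch_first=True)
        # "head"/"decoder": a fresh linear classifier trained on the labeled task
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, token_ids):
        x = self.embedding(token_ids)        # (batch, seq, emb_dim)
        outputs, _ = self.encoder(x)         # (batch, seq, hidden_dim)
        return self.head(outputs[:, -1, :])  # classify from the last time step

# e.g., a batch of 2 sequences of 20 token ids -> logits of shape (2, 5)
logits = SentimentClassifier()(torch.randint(0, 30000, (2, 20)))
```

(The real ULMFiT head adds concat pooling and dropout; this keeps only the structural idea.)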
The language model released by Jeremy Howard and Sebastian Ruder comes pre-trained on WikiText-103, and one can optionally fine-tune it on a corpus more closely related to the downstream task.
The general idea is to first teach the model English (Wikipedia), then teach it about more specific writing (e.g., movie reviews). With that kind of prior knowledge, sentiment analysis should be a whole lot easier.
## Approach
I initially tried fine-tuning the WikiText-103 language model on the complete sentences provided by the Rotten Tomatoes dataset from the [Movie Review Sentiment Analysis Playground Competition][3]; however, my classification results were lackluster.
I got better results by fine-tuning first on the larger [IMDB movie reviews dataset][4], then fine-tuning that on sentences from Rotten Tomatoes, then finally applying the linear head and classifying sentiment. The result of this process is the pre-trained model `fwd_pretrain_aclImdb_clas_1.h5`. It was pre-trained with scripts provided [here][5]. I executed the scripts in this approximate order:
```bash
# fine-tune from WikiText-103 to IMDB
python create_toks.py data/aclImdb/imdb_lm/
python tok2id.py data/aclImdb/imdb_lm/
python finetune_lm.py data/aclImdb/imdb_lm/ data/wt103/ 0 50 --lm-id pretrain_wt103 --early_stopping True

# fine-tune from IMDB to RT
python create_toks.py data/rt/rt_lm/
python tok2id.py data/rt/rt_lm/
python finetune_lm.py data/rt/rt_lm/ data/aclImdb/imdb_lm/ 0 50 --lm-id pretrain_aclImdb --early_stopping True --pretrain_id aclImdb

# classify
python train_clas.py data/rt/rt_clas/ 0 --lm-id pretrain_aclImdb --clas-id pretrain_aclImdb --lr 0.0001 --cl=25
```
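If you want to poke at the resulting checkpoint outside of the training scripts, note that fastai of that era saved weights via `torch.save` even though the file carries an `.h5` extension (an assumption worth verifying against your fastai version); something like this should list the layer names and shapes:

```python
import torch

# Assumption: the .h5 file is a PyTorch state dict saved with torch.save,
# as fastai 0.7-era scripts did, despite the HDF5-style extension.
state = torch.load('fwd_pretrain_aclImdb_clas_1.h5', map_location='cpu')
for name, tensor in state.items():
    print(name, tuple(tensor.shape))
```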
I then zipped up all the files necessary to run the [kernel][6] for competition submission.
## Conclusion
To be honest, I was hoping for a more impressive result. My ok-ish [result][7] in the competition is likely a testament to the challenging task of assigning the same sentiment to all "phrases" of a sentence (down to single punctuation marks). Perhaps more epochs, or more time spent tinkering with hyperparameters, would help.
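Part of the difficulty is visible in the data itself: assuming the competition's standard `train.tsv` layout (PhraseId, SentenceId, Phrase, Sentiment), every sub-phrase of every sentence appears as its own labeled row, down to fragments of a character or two:

```python
import pandas as pd

# Assumes the competition's train.tsv columns: PhraseId, SentenceId, Phrase, Sentiment
df = pd.read_csv('train.tsv', sep='\t')

print(df.groupby('SentenceId').size().describe())  # how many labeled phrases per sentence
print(df[df['Phrase'].str.len() <= 2].head())      # tiny phrases, e.g. bare punctuation
```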
## Acknowledgements
All credit goes to Jeremy Howard and Sebastian Ruder. Check out ["Introducing state of the art text classification with universal language models"][8] for more explanation, plus links to the paper, video, and code.
[1]: https://iconix.github.io/dl/2018/05/30/openai-scholar
[2]: http://nlp.fast.ai/category/classification.html
[3]: https://www.kaggle.com/c/movie-review-sentiment-analysis-kernels-only/
[4]: http://ai.stanford.edu/~amaas/data/sentiment/
[5]: https://github.com/fastai/fastai/tree/master/courses/dl2/imdb_scripts
[6]: https://www.kaggle.com/iconix/ulmfit-for-rotten-tomatoes/code
[7]: https://www.kaggle.com/iconix/ulmfit-for-rotten-tomatoes
[8]: http://nlp.fast.ai/classification/2018/05/15/introducting-ulmfit.html