Out-of-Scope Intent Classification Dataset


Tags: Business, Computer Science, NLP, Classification, Multiclass Classification

Data Structure (2.02M)

    README.md

    ### Context

    Most supervised machine learning tasks assume a dataset with a well-defined set of target labels. But what happens when a trained model meets the real world, where its inputs might not come from that well-defined label set? This dataset offers a way to evaluate intent classification models on "out-of-scope" inputs: inputs that do not belong to the set of "in-scope" target labels. You may have heard out-of-scope referred to in other ways, including "out-of-domain" or "out-of-distribution".

    ### Content

    - `is_*.json`: these files house the train/val/test sets for the in-scope data. There are 150 in-scope "intents" (aka classes), which include samples such as "what is my balance" (which belongs to the `balance` class).
    - `oos_*.json`: these files house the train/val/test sets for the out-of-scope data. There is one out-of-scope intent: `oos`. Note that you don't have to use the `oos_train.json` data. In other words, an ML solution to the out-of-scope problem need not be trained on out-of-scope data, but it might help!

    ### Evaluation Metrics

    The task is intent classification, which generalizes to text classification (or categorization). This is a supervised ML problem. We use two metrics to evaluate:

    - In-scope accuracy is defined as #(correctly classified in-scope samples) / #(in-scope samples).
    - Out-of-scope recall is defined as #(correctly classified out-of-scope samples) / #(out-of-scope samples).

    A minimal sketch of computing both metrics appears after this README.

    ### Acknowledgements

    This dataset is from *[An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction](https://www.aclweb.org/anthology/D19-1131.pdf)* by Larson et al., published at EMNLP 2019. The GitHub page for this dataset is [linked here](https://github.com/clinc/oos-eval).

    ### Inspiration

    Most supervised machine learning tasks assume a dataset with a well-defined set of target labels. But what happens when a trained model meets the real world, where its inputs might not come from that well-defined label set? This "out-of-distribution" problem has seen a lot of recent development, as researchers and practitioners in both academia and industry observe that many ML methods struggle on out-of-distribution data across a wide variety of tasks.
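    As a rough illustration of the evaluation described above, here is a minimal Python sketch of loading a split and computing in-scope accuracy and out-of-scope recall. The `is_test.json` / `oos_test.json` file names follow the pattern named in the Content section; the assumption that each file holds a list of `[utterance, intent]` pairs is based on the companion clinc/oos-eval repository and should be checked against the actual files.

    ```python
    import json

    def load_split(path):
        """Load one JSON split as parallel lists of texts and labels.

        Assumes the file is a list of [utterance, intent] pairs,
        e.g. ["what is my balance", "balance"]; verify against the data.
        """
        with open(path, encoding="utf-8") as f:
            pairs = json.load(f)
        texts = [text for text, _ in pairs]
        labels = [label for _, label in pairs]
        return texts, labels

    def in_scope_accuracy(y_true, y_pred):
        """#(correctly classified in-scope samples) / #(in-scope samples)."""
        in_scope = [(t, p) for t, p in zip(y_true, y_pred) if t != "oos"]
        return sum(t == p for t, p in in_scope) / len(in_scope)

    def out_of_scope_recall(y_true, y_pred):
        """#(correctly classified out-of-scope samples) / #(out-of-scope samples)."""
        oos_pairs = [(t, p) for t, p in zip(y_true, y_pred) if t == "oos"]
        return sum(p == "oos" for _, p in oos_pairs) / len(oos_pairs)

    if __name__ == "__main__":
        # Toy labels/predictions (not real dataset content): one error in each group.
        y_true = ["balance", "balance", "oos", "oos"]
        y_pred = ["balance", "transfer", "oos", "balance"]
        print(in_scope_accuracy(y_true, y_pred))    # 0.5
        print(out_of_scope_recall(y_true, y_pred))  # 0.5
        # With real data: y_true = is_labels + oos_labels from
        # load_split("is_test.json") and load_split("oos_test.json"),
        # and y_pred from whatever classifier is being evaluated.
    ```

    The `oos` string is the single out-of-scope intent named in the Content section; every other label counts as in-scope, which is why the two metrics simply partition the samples by whether the true label equals `oos`.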
