# MeDAL Dataset

Tags: Computer Science, NLP, Deep Learning, Healthcare, Artificial Intelligence, Transformers, Classification

![MeDAL logo](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F2352583%2F868a18fb09d7a1d3da946d74a9857130%2FLogo.PNG?generation=1604973725053566&alt=media)

**Me**dical **D**ataset for **A**bbreviation Disambiguation for Natural **L**anguage Understanding (MeDAL) is a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. It was published at the ClinicalNLP workshop at EMNLP.

- [Code](https://github.com/BruceWen120/medal)
- [Dataset (Hugging Face)](https://huggingface.co/datasets/medal)
- [Dataset (Kaggle)](https://www.kaggle.com/xhlulu/medal-emnlp)
- [Dataset (Zenodo)](https://zenodo.org/record/4265632)
- [Paper (ACL)](https://www.aclweb.org/anthology/2020.clinicalnlp-1.15/)
- [Paper (Arxiv)](https://arxiv.org/abs/2012.13978)
- [Pre-trained ELECTRA (Hugging Face)](https://huggingface.co/xhlu/electra-medal)

## Downloading the data

We recommend downloading from Kaggle if you can authenticate through their API. The advantage of Kaggle is that the data is compressed, so it will be faster to download. Links to the data can be found at the top of this readme.

First, you will need to create an account on kaggle.com. Afterwards, you will need to install the Kaggle API:

```
pip install kaggle
```

Then, follow the [instructions here](https://github.com/Kaggle/kaggle-api#api-credentials) to add your username and key. Once that's done, you can run:

```
kaggle datasets download xhlulu/medal-emnlp
```

Now, unzip everything and place the files inside the `data` directory:

```
unzip -nq crawl-300d-2M-subword.zip -d data
mv data/pretrain_sample/* data/
```

## Loading FastText Embeddings

For the LSTM models, we will need to use the fastText embeddings.
To do so, first download and extract the weights:

```
wget -nc -P data/ https://dl.fbaipublicfiles.com/fasttext/vectors-english/crawl-300d-2M-subword.zip
unzip -nq data/crawl-300d-2M-subword.zip -d data/
```

## Model Quickstart

### Using Torch Hub

You can directly load the LSTM and LSTM-SA models with `torch.hub`:

```python
import torch

lstm = torch.hub.load("BruceWen120/medal", "lstm")
```
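The extracted archive contains the embeddings in fastText's plain-text `.vec` format: a header line with vocabulary size and dimension, then one word and its vector per line. A minimal sketch of reading such a file into a dict of NumPy arrays; the helper name `load_vec` and the tiny sample file are illustrative, not part of the MeDAL code:

```python
import numpy as np

def load_vec(path, limit=None):
    """Read a fastText .vec file: the first line is "<count> <dim>",
    and each subsequent line is "<word> <v1> ... <v_dim>"."""
    vectors = {}
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        count, dim = map(int, f.readline().split())
        for i, line in enumerate(f):
            if limit is not None and i >= limit:
                break
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

# Tiny synthetic file in the same format, purely for illustration:
with open("sample.vec", "w", encoding="utf-8") as f:
    f.write("2 3\nthe 0.1 0.2 0.3\nof 0.4 0.5 0.6\n")

vecs = load_vec("sample.vec")
print(len(vecs), vecs["the"].shape)  # 2 (3,)
```

On the real 2M-word file, the `limit` argument keeps memory bounded while experimenting.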