(ChatDoctor BioGPT logo)

BioGPT (Large) 🧬 fine-tuned on ChatDoctor 🩺 for QA

Microsoft's BioGPT Large fine-tuned on the ChatDoctor dataset for Question Answering.

Intended Use

This is a research model and must NOT be used outside of this scope.

Limitations

TBA

Model

Microsoft's BioGPT Large:

Pre-trained language models have attracted increasing attention in the biomedical domain, inspired by their great success in the general natural language domain. Among the two main branches of pre-trained language models in the general language domain, i.e. BERT (and its variants) and GPT (and its variants), the first one has been extensively studied in the biomedical domain, such as BioBERT and PubMedBERT. While they have achieved great success on a variety of discriminative downstream biomedical tasks, the lack of generation ability constrains their application scope. In this paper, we propose BioGPT, a domain-specific generative Transformer language model pre-trained on large-scale biomedical literature. We evaluate BioGPT on six biomedical natural language processing tasks and demonstrate that our model outperforms previous models on most tasks. Especially, we get 44.98%, 38.42% and 40.76% F1 score on BC5CDR, KD-DTI and DDI end-to-end relation extraction tasks, respectively, and 78.2% accuracy on PubMedQA, creating a new record. Our case study on text generation further demonstrates the advantage of BioGPT on biomedical literature to generate fluent descriptions for biomedical terms.

Dataset

The ChatDoctor-200K dataset comes from this paper: https://arxiv.org/pdf/2303.14070.pdf

The dataset is composed of real patient–doctor dialogues collected from online medical consultation platforms together with synthetic dialogues generated with ChatGPT (see the paper above for the exact breakdown).
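If you want to inspect the data yourself, a minimal sketch using the datasets library is shown below. The repository id "LinhDuong/chatdoctor-200k" is an assumption; substitute whichever ChatDoctor-200K mirror on the Hub you actually use.

from datasets import load_dataset

# NOTE: the repository id below is an assumed ChatDoctor-200K mirror, not an official release
chat_doctor = load_dataset("LinhDuong/chatdoctor-200k", split="train")
print(chat_doctor[0])  # each record holds an instruction, the patient's input and the doctor's response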

Usage

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

model_id = "Narrativaai/BioGPT-Large-finetuned-chatdoctor"

# Use the base BioGPT-Large tokenizer; the fine-tuned weights are loaded from model_id
tokenizer = AutoTokenizer.from_pretrained("microsoft/BioGPT-Large")
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run on GPU when available; the inputs below are moved to the same device
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

def answer_question(
        prompt,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=2,
        **kwargs,
):
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to(device)
    attention_mask = inputs["attention_mask"].to(device)
    generation_config = GenerationConfig(
        temperature=temperature,
        top_p=top_p,
        top_k=top_k,
        num_beams=num_beams,
        **kwargs,
    )
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            generation_config=generation_config,
            return_dict_in_generate=True,
            output_scores=True,
            max_new_tokens=512,
            eos_token_id=tokenizer.eos_token_id,
        )
    s = generation_output.sequences[0]
    output = tokenizer.decode(s, skip_special_tokens=True)
    # Return only the text generated after the "### Response:" marker of the prompt
    return output.split(" Response:")[1]

example_prompt = """
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
If you are a doctor, please answer the medical questions based on the patient's description.

### Input:
Hi i have sore lumps under the skin on my legs. they started on my left ankle and are approx 1 - 2cm diameter and are spreading up onto my thies. I am eating panadol night and anti allergy pills (Atarax). I have had this for about two weeks now. Please advise.

### Response:
"""

print(answer_question(example_prompt))
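The example prompt follows an Alpaca-style instruction/input/response template, so arbitrary patient questions should be wrapped in the same format before being passed to answer_question. A minimal sketch of such a helper (build_prompt is illustrative and not part of this repository):

def build_prompt(patient_question,
                 instruction="If you are a doctor, please answer the medical questions based on the patient's description."):
    # Reproduce the instruction/input/response template used in the example above
    return (
        "Below is an instruction that describes a task, paired with an input that provides further context. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{patient_question}\n\n"
        "### Response:\n"
    )

print(answer_question(build_prompt("I have had a persistent dry cough for three weeks. Should I be worried?")))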

Citation

@misc {narrativa_2023,
    author       = { {Narrativa} },
    title        = { BioGPT-Large-finetuned-chatdoctor (Revision 13764c0) },
    year         = 2023,
    url          = { https://huggingface.co/Narrativaai/BioGPT-Large-finetuned-chatdoctor },
    doi          = { 10.57967/hf/0601 },
    publisher    = { Hugging Face }
}