CoD: Towards an Interpretable Medical Agent using Chain of Diagnosis
Abstract
The field of medical diagnosis has undergone a significant transformation with the advent of large language models (LLMs), yet the challenges of interpretability within these models remain largely unaddressed. This study introduces Chain-of-Diagnosis (CoD) to enhance the interpretability of LLM-based medical diagnostics. CoD transforms the diagnostic process into a diagnostic chain that mirrors a physician's thought process, providing a transparent reasoning pathway. Additionally, CoD outputs the disease confidence distribution to ensure transparency in decision-making. This interpretability makes model diagnostics controllable and aids in identifying critical symptoms for inquiry through the entropy reduction of confidences. With CoD, we developed DiagnosisGPT, capable of diagnosing 9604 diseases. Experimental results demonstrate that DiagnosisGPT outperforms other LLMs on diagnostic benchmarks. Moreover, DiagnosisGPT provides interpretability while ensuring controllability in diagnostic rigor.
Community
We propose Chain-of-Diagnosis (CoD) to improve the interpretability of LLM-based medical diagnostics. CoD's features include:
- Transforming the opaque decision-making process into a five-step diagnostic chain that reflects a physician’s thought process.
- Producing a confidence distribution over candidate diseases, where higher confidence indicates greater certainty in a specific diagnosis. CoD formalizes diagnosis as a process of reducing the entropy of this confidence distribution, which also identifies the most informative symptoms to ask about next.
Our code and data are available at: https://github.com/FreedomIntelligence/Chain-of-Diagnosis.
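The entropy-reduction idea above can be sketched in code. The following is a minimal illustration, not the paper's actual implementation: it assumes the model can produce a hypothetical updated confidence distribution for each possible answer to a symptom inquiry, and it picks the symptom whose answer is expected to shrink the entropy of the disease-confidence distribution the most. All function and variable names here are invented for illustration.

```python
import math


def entropy(confidences):
    """Shannon entropy (in nats) of a disease-confidence distribution."""
    return -sum(p * math.log(p) for p in confidences.values() if p > 0)


def expected_entropy_after(updated_by_answer):
    """Expected entropy over the possible answers to one symptom inquiry.

    `updated_by_answer` maps each answer (e.g. "yes"/"no") to a pair
    (probability of that answer, updated confidence distribution).
    """
    return sum(p_ans * entropy(updated)
               for p_ans, updated in updated_by_answer.values())


def pick_symptom(confidences, candidate_updates):
    """Choose the symptom whose answer maximizes expected entropy reduction."""
    h_now = entropy(confidences)
    return max(
        candidate_updates,
        key=lambda s: h_now - expected_entropy_after(candidate_updates[s]),
    )


# Toy example: asking about fever separates the hypotheses much more
# sharply than asking about cough, so it is the better next inquiry.
confidences = {"flu": 0.5, "cold": 0.3, "covid": 0.2}
candidate_updates = {
    "fever": {"yes": (0.6, {"flu": 0.8, "cold": 0.1, "covid": 0.1}),
              "no": (0.4, {"flu": 0.1, "cold": 0.6, "covid": 0.3})},
    "cough": {"yes": (0.5, {"flu": 0.45, "cold": 0.35, "covid": 0.2}),
              "no": (0.5, {"flu": 0.55, "cold": 0.25, "covid": 0.2})},
}
print(pick_symptom(confidences, candidate_updates))  # -> fever
```

In the paper's setting, the updated distributions would come from the LLM re-estimating confidences under each hypothetical answer; here they are hard-coded to keep the sketch self-contained.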
Hi @jymcc , congrats! Thanks for publishing artifacts on the hub: https://huggingface.co/FreedomIntelligence.
It would be great to link the models and datasets to the paper, see here on how to do that: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper
Interesting paper! Big kudos to your work!🔥
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Interpretable Differential Diagnosis with Dual-Inference Large Language Models (2024)
- LLMs for Doctors: Leveraging Medical LLMs to Assist Doctors, Not Replace Them (2024)
- CliBench: Multifaceted Evaluation of Large Language Models in Clinical Decisions on Diagnoses, Procedures, Lab Tests Orders and Prescriptions (2024)
- medIKAL: Integrating Knowledge Graphs as Assistants of LLMs for Enhanced Clinical Diagnosis on EMRs (2024)
- Large Language Models are Interpretable Learners (2024)
Models citing this paper 2
Datasets citing this paper 4
Spaces citing this paper 0