---
inference: false
language:
- en
tags:
- instruction-finetuning
pretty_name: JudgeLM-100K
task_categories:
- text-generation
---

<br>

# JudgeLM Model Card

## Model Details
JudgeLM is a judge model trained by fine-tuning Vicuna on the JudgeLM-100K dataset.

- **Developed by:** [HUST](https://english.hust.edu.cn/), [BAAI](https://www.baai.ac.cn/english.html)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [Vicuna](https://vicuna.lmsys.org).
### Model Sources

- **Repository:** https://github.com/baaivision/JudgeLM
- **Paper:** https://arxiv.org/abs/2310.17631
- **Demo:** http://218.91.113.230:9004/

## Uses
The primary use of JudgeLM is research on evaluating the performance of large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model
- Judge large language models with this model: https://github.com/baaivision/JudgeLM/tree/main/judgelm/llm_judge.
- Serve this model with Gradio: https://github.com/baaivision/JudgeLM/tree/main/judgelm/serve.

For a quick local smoke test outside those scripts, a minimal loading sketch follows below.
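It is not the official pipeline: it assumes the checkpoint loads as a standard Vicuna/LLaMA-style causal language model with `transformers`, and both the model id and the judging prompt are illustrative placeholders; the exact JudgeLM template and generation settings live in the `judgelm/llm_judge` code linked above.

```python
# Not the official JudgeLM pipeline: a minimal smoke test that treats the
# checkpoint as a plain Hugging Face causal LM. The model id and the prompt
# wording are assumptions; see judgelm/llm_judge for the exact template.
# device_map="auto" requires the accelerate package.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BAAI/JudgeLM-7B-v1.0"  # placeholder: point this at the checkpoint you actually use
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative judging prompt: one question and two candidate answers to compare.
prompt = (
    "You are a helpful and precise assistant for checking the quality of the answer.\n"
    "[Question]\nWhat causes the seasons on Earth?\n\n"
    "[Answer 1]\nThe tilt of Earth's rotation axis relative to its orbital plane.\n\n"
    "[Answer 2]\nThe changing distance between Earth and the Sun.\n\n"
    "Compare the two answers, give each a score, and explain which one is better.\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```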
## Training Details

JudgeLM v1.0 is fine-tuned from Vicuna-v1.3 with supervised instruction fine-tuning.
The training data is around 200K judge samples from the [JudgeLM-100K dataset](https://huggingface.co/datasets/BAAI/JudgeLM-100K).
See more details in the "Fine-tuning Settings" section in the appendix of this [paper](https://arxiv.org/abs/2310.17631).
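To inspect the judge samples locally before fine-tuning or evaluation, one option is to load a downloaded JSON-lines split with the `datasets` library; this is only a sketch, and the file name below is a placeholder rather than the official one.

```python
# Illustrative only: load a locally downloaded JudgeLM-100K JSON-lines file and
# look at its fields. "judgelm_train.jsonl" is a placeholder file name.
from datasets import load_dataset

ds = load_dataset("json", data_files={"train": "judgelm_train.jsonl"})["train"]

print(ds.num_rows)      # number of judge samples in this split
print(ds.column_names)  # field names of a judge sample
print(ds[0])            # one full sample (question, answer pair, judgement)
```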
## Evaluation

JudgeLM is evaluated on the JudgeLM validation set, with judgements produced by a GPT-4 teacher. See more details in this [paper](https://arxiv.org/abs/2310.17631) and try it with the [code](https://github.com/baaivision/JudgeLM/tree/main/judgelm/llm_judge).
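As a rough illustration of the headline metric (agreement between the fine-tuned judge's verdicts and the GPT-4 teacher's verdicts), not the official evaluation script:

```python
# Toy illustration of the agreement metric against the GPT-4 teacher: each judge
# emits one verdict per validation example ("1" = answer 1 wins, "2" = answer 2
# wins, "tie"), and agreement is the fraction of examples where verdicts match.
# The official evaluation code lives in judgelm/llm_judge in the repository.
from typing import Sequence

def agreement(judge_verdicts: Sequence[str], gpt4_verdicts: Sequence[str]) -> float:
    """Fraction of examples where the fine-tuned judge matches the GPT-4 teacher."""
    assert len(judge_verdicts) == len(gpt4_verdicts) and judge_verdicts
    matches = sum(a == b for a, b in zip(judge_verdicts, gpt4_verdicts))
    return matches / len(judge_verdicts)

# Hypothetical verdicts for five validation examples -> prints 0.8
print(agreement(["1", "2", "tie", "1", "2"], ["1", "2", "1", "1", "2"]))
```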
## Additional Information

### Citation Information

```
@article{zhu2023judgelm,
      title={JudgeLM: Fine-tuned Large Language Models are Scalable Judges},
      author={Lianghui Zhu and Xinggang Wang and Xinlong Wang},
      year={2023},
      eprint={2310.17631},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```