---
base_model: unsloth/Meta-Llama-3.1-8B
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
---

# MedGPT-Llama3.1-8B-v.1

- This model is a fine-tuned version of [unsloth/Meta-Llama-3.1-8B](https://huggingface.co/unsloth/Meta-Llama-3.1-8B) on a dataset created by [Valerio Job](https://huggingface.co/valeriojob) together with general practitioners (GPs), based on real medical data.
- Version 1 (v.1) is the first release of MedGPT; its training dataset was deliberately kept small and simple, with only 60 examples.
- This repo includes the model in 16-bit format as well as its LoRA adapters. A separate repo, [valeriojob/MedGPT-Llama3.1-8B-BA-v.1-GGUF](https://huggingface.co/valeriojob/MedGPT-Llama3.1-8B-BA-v.1-GGUF), provides quantized versions of this model in GGUF format.
- This model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
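
A minimal inference sketch using the standard `transformers` API. The repo id below is an assumption inferred from the model name above, and the prompt is illustrative only:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the model name above -- adjust to the actual repo.
model_id = "valeriojob/MedGPT-Llama3.1-8B-v.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # loads the 16-bit weights as stored
    device_map="auto",    # requires the `accelerate` package
)

# Illustrative prompt; see the dataset for the actual task format.
prompt = "Summarize the following consultation notes for the patient record:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```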

## Model description

This model acts as a supplementary assistant to GPs, helping them with medical and administrative tasks.

## Intended uses & limitations

The fine-tuned model should not be used in production! It was created as an initial prototype in the context of a bachelor thesis.

## Training and evaluation data

The dataset (train and test) used for fine-tuning this model can be found here: [datasets/valeriojob/BA-v.1](https://huggingface.co/datasets/valeriojob/BA-v.1)
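
A minimal sketch for loading and inspecting the data with the `datasets` library (the split names are an assumption; check the dataset repo for the exact layout):

```python
from datasets import load_dataset

# Load the fine-tuning dataset from the Hugging Face Hub.
dataset = load_dataset("valeriojob/BA-v.1")

print(dataset)              # shows the available splits and their sizes
print(dataset["train"][0])  # inspect the first training example (split name assumed)
```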

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- per_device_train_batch_size = 2
- gradient_accumulation_steps = 4
- warmup_steps = 5
- max_steps = 60
- learning_rate = 2e-4
- fp16 = not is_bfloat16_supported()
- bf16 = is_bfloat16_supported()
- logging_steps = 1
- optim = "adamw_8bit"
- weight_decay = 0.01
- lr_scheduler_type = "linear"
- seed = 3407
- output_dir = "outputs"
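
These settings give an effective batch size of 8 (2 per device × 4 gradient accumulation steps). A hedged reproduction sketch in the usual Unsloth + TRL style; the LoRA configuration, sequence length, dataset field name, and split are assumptions not stated in this card:

```python
from unsloth import FastLanguageModel, is_bfloat16_supported
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the base model for LoRA fine-tuning (4-bit loading is an assumed setup choice).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=2048,  # assumed; not stated in this card
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,           # assumed LoRA rank
    lora_alpha=16,  # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("valeriojob/BA-v.1", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name; older TRL API (newer versions use SFTConfig)
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not is_bfloat16_supported(),
        bf16=is_bfloat16_supported(),
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
trainer.train()
```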

### Training results

| Training Loss | Step |
|:-------------:|:----:|
| 1.793200      | 1    |
| 1.635900      | 2    |
| 1.493000      | 3    |
| 1.227600      | 5    |
| 0.640500      | 10   |
| 0.438300      | 15   |
| 0.370200      | 20   |
| 0.205100      | 30   |
| 0.094900      | 40   |
| 0.068500      | 50   |
| 0.059400      | 60   |

## License
- **License:** apache-2.0