---
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- SOLAR-10.7B-v1.0
- Open-platypus-Commercial
base_model: upstage/SOLAR-10.7B-v1.0
datasets:
- kyujinpy/Open-platypus-Commercial
model-index:
- name: T3Q-platypus-SOLAR-10.7B-v1.0
  results: []
---
Update @ 2024.03.07

## T3Q-platypus-SOLAR-10.7B-v1.0

This model is a fine-tuned version of upstage/SOLAR-10.7B-v1.0, trained on the kyujinpy/Open-platypus-Commercial dataset.

**Model Developers** Chihoon Lee (chlee10), T3Q
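
For reference, here is a minimal sketch of loading the model for inference with the `transformers` library. The hub repository id and the prompt format are assumptions (the card does not spell out the exact `user_prompt` template), so adjust both to your setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed hub id; replace with the actual repository path for this card.
model_id = "chlee10/T3Q-platypus-SOLAR-10.7B-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # roughly 21 GB of GPU memory for 10.7B params in fp16
    device_map="auto",          # requires the `accelerate` package
)

# Alpaca-style instruction format, assumed here; the card's `user_prompt`
# template may differ.
prompt = "### Instruction:\nSummarize what LoRA fine-tuning does.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```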

## Training hyperparameters

The model was fine-tuned with the following command and hyperparameters:

```bash
python finetune.py \
    --base_model upstage/SOLAR-10.7B-v1.0 \
    --data-path kyujinpy/Open-platypus-Commercial \
    --output_dir ./T3Q-platypus-SOLAR-10.7B-v1.0 \
    --batch_size 64 \
    --micro_batch_size 1 \
    --num_epochs 1 \
    --learning_rate 3e-5 \
    --cutoff_len 4096 \
    --val_set_size 0 \
    --lora_r 16 \
    --lora_alpha 16 \
    --lora_dropout 0.05 \
    --lora_target_modules '[q_proj, k_proj, v_proj, o_proj, gate_proj, down_proj, up_proj, lm_head]' \
    --train_on_inputs False \
    --add_eos_token False \
    --group_by_length False \
    --prompt_template_name user_prompt \
    --lr_scheduler 'cosine'
    # --warmup_steps 100
```
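
For readers reproducing this setup, the `--lora_*` flags above map onto a PEFT adapter configuration roughly as follows. This is a sketch assuming the `peft` library, not the contents of `finetune.py` itself.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Mirrors the --lora_* flags from the command above.
lora_config = LoraConfig(
    r=16,            # --lora_r
    lora_alpha=16,   # --lora_alpha
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "down_proj", "up_proj", "lm_head"],
    task_type="CAUSAL_LM",
)

base = AutoModelForCausalLM.from_pretrained("upstage/SOLAR-10.7B-v1.0")
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # sanity check: only adapter weights are trainable
```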

## Framework versions

  - Transformers 4.34.1
  - Pytorch 2.1.0+cu121
  - Datasets 2.13.0
  - Tokenizers 0.14.1
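
A quick sanity check that your environment matches these pins (illustrative; nearby versions may also work):

```python
import datasets
import tokenizers
import torch
import transformers

# Expected versions from this card; a mismatch is a warning, not necessarily a failure.
expected = {
    "transformers": ("4.34.1", transformers.__version__),
    "torch": ("2.1.0+cu121", torch.__version__),
    "datasets": ("2.13.0", datasets.__version__),
    "tokenizers": ("0.14.1", tokenizers.__version__),
}
for name, (want, have) in expected.items():
    status = "OK" if have.startswith(want.split("+")[0]) else "MISMATCH"
    print(f"{name}: expected {want}, found {have} [{status}]")
```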