---
language:
- ko
- en
license: mit
---
# Model Card for free-evo-qwen72b-v0.8
## Developed by : [Freewheelin](https://freewheelin-recruit.oopy.io/) AI Technical Team
## 1st place : 4th May 2024 - avg. 81.28 [Open Llm Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
However, the model was later removed from the leaderboard; perhaps our explanation of the method was not sufficient.
## Method
- We were inspired by this [Sakana project](https://sakana.ai/evolutionary-model-merge/)
## Process
- You need two models with the same architecture.
- 1. Choose one model and fine-tune it to create a gap between the original and the fine-tuned one.
- 2. Merge the two models.
- 3. Evaluate the merged model.
- 4. Fine-tune the model on the specific evaluation areas where it is weak.
- 5. Evaluate again.
- 6. Merge again.
- 7. Evaluate again.
- 8. Keep going until the evaluation average is higher than the original one.

That's it. Simple.
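The loop above can be sketched in a few lines. This is a minimal illustration, not the actual recipe: it assumes a simple linear weight average as the merge operator, represents models as plain dicts of parameters, and uses a placeholder `evaluate` function standing in for the benchmark average; the real process uses large Transformer checkpoints and an evolutionary search over merge configurations.

```python
def merge(model_a, model_b, alpha=0.5):
    """Linearly interpolate two same-architecture models (assumed merge operator)."""
    return {k: alpha * model_a[k] + (1 - alpha) * model_b[k] for k in model_a}

def evolve(original, finetuned, evaluate, max_rounds=10):
    """Repeat merge -> evaluate until the merged model beats the original.

    `evaluate` is a hypothetical scoring callback (e.g. a benchmark average);
    the targeted fine-tuning step (step 4) is elided here.
    """
    baseline = evaluate(original)
    candidate = finetuned
    for _ in range(max_rounds):
        candidate = merge(original, candidate)
        score = evaluate(candidate)
        if score > baseline:  # step 8: stop once the average beats the original
            return candidate, score
        # in the real process, a targeted fine-tune would happen here (step 4)
    return candidate, evaluate(candidate)
```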
## Base Architecture
- QWEN2
## Base Models
- several QWEN2 based models