---
base_model:
- allenai/tulu-2-dpo-70b
- tokyotech-llm/Swallow-70b-instruct-hf
tags:
- mergekit
- merge
language:
- en
- ja
library_name: transformers
pipeline_tag: text-generation
license: llama2
model_type: llama
---
# Superswallow
**Important Notice:**
This model partially utilizes the parameters of Tulu V2 DPO, which is fine-tuned from Llama 2, so it may inherit the AI2 ImpACT license. Please use the model keeping in mind that the license may change if AI2 contacts me.
The [AI2 ImpACT license](https://allenai.org/impact-license) covers data artifacts and model artifacts, but it does not address the case of directly applying parts of the LLM parameters of a model artifact to other models. However, I respect their research and great work, so I will change the license immediately if AI2 contacts me.
## Description
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). The model was created by injecting the ability to follow user intent from [Tulu 2 DPO](https://arxiv.org/abs/2311.10702) into the [Swallow](https://zenn.dev/tokyotech_lm/articles/d6cb3a8fdfc907) instruct model.
It was a proof of concept for merging LLMs trained in different languages, and close attention was paid to preserving the linguistic capabilities of the base model of the merge.
As far as I know, Swallow is the Llama 2 model family (7B, 13B, 70B) that outputs the most natural Japanese, so I used it as the base model for this merge. Thank you to the authors for their wonderful work.
## Prompt template: Swallow (Alpaca format)
```
ไปฅไธ‹ใซใ€ใ‚ใ‚‹ใ‚ฟใ‚นใ‚ฏใ‚’่ชฌๆ˜Žใ™ใ‚‹ๆŒ‡็คบใŒใ‚ใ‚Šใ€ใใ‚Œใซไป˜้šใ™ใ‚‹ๅ…ฅๅŠ›ใŒๆ›ดใชใ‚‹ๆ–‡่„ˆใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚ใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้ฉๅˆ‡ใซๅฎŒไบ†ใ™ใ‚‹ใŸใ‚ใฎๅ›ž็ญ”ใ‚’่จ˜่ฟฐใ—ใฆใใ ใ•ใ„ใ€‚
### ๆŒ‡็คบ:
{instruction}
### ๅฟœ็ญ”:
```
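The Japanese preamble translates roughly to: "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request."

The following is a minimal inference sketch, not part of the original card: it assumes the merged weights are available at a local path or repo id (the placeholder `./Superswallow-70b` below is hypothetical) and that `torch` and `transformers` are installed with enough GPU memory for a 70B model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Superswallow-70b"  # assumption: replace with the actual path or repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config below
    device_map="auto",           # shard the 70B weights across available GPUs
)

# The Swallow (Alpaca-format) template from the section above.
PROMPT = (
    "ไปฅไธ‹ใซใ€ใ‚ใ‚‹ใ‚ฟใ‚นใ‚ฏใ‚’่ชฌๆ˜Žใ™ใ‚‹ๆŒ‡็คบใŒใ‚ใ‚Šใ€"
    "ใใ‚Œใซไป˜้šใ™ใ‚‹ๅ…ฅๅŠ›ใŒๆ›ดใชใ‚‹ๆ–‡่„ˆใ‚’ๆไพ›ใ—ใฆใ„ใพใ™ใ€‚"
    "ใƒชใ‚ฏใ‚จใ‚นใƒˆใ‚’้ฉๅˆ‡ใซๅฎŒไบ†ใ™ใ‚‹ใŸใ‚ใฎๅ›ž็ญ”ใ‚’่จ˜่ฟฐใ—ใฆใใ ใ•ใ„ใ€‚\n"
    "### ๆŒ‡็คบ:\n{instruction}\n### ๅฟœ็ญ”:\n"
)

inputs = tokenizer(
    PROMPT.format(instruction="ๆฑไบฌใฎ่ฆณๅ…‰ๅๆ‰€ใ‚’ๆ•™ใˆใฆใใ ใ•ใ„ใ€‚"),
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```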
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [tokyotech-llm/Swallow-70b-instruct-hf](https://huggingface.co/tokyotech-llm/Swallow-70b-instruct-hf) as the base.
### Models Merged
The following models were included in the merge:
* [allenai/tulu-2-dpo-70b](https://huggingface.co/allenai/tulu-2-dpo-70b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: tokyotech-llm/Swallow-70b-instruct-hf
    # no parameters necessary for base model
  - model: allenai/tulu-2-dpo-70b # follow user intent
    parameters:
      density: 1
      weight:
        - filter: mlp.down_proj
          value: [0.3, 0.25, 0.25, 0.15, 0.1]
        - filter: mlp.gate_proj
          value: [0.7, 0.25, 0.5, 0.45, 0.4]
        - filter: mlp.up_proj
          value: [0.7, 0.25, 0.5, 0.45, 0.4]
        - filter: self_attn
          value: [0.7, 0.25, 0.5, 0.45, 0.4]
        - value: 0 # fallback for rest of tensors.
merge_method: dare_ties
base_model: tokyotech-llm/Swallow-70b-instruct-hf
dtype: bfloat16
tokenizer_source: union
```
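To reproduce the merge, the sketch below loads the YAML above and runs it through mergekit's Python API. This mirrors the usage documented in mergekit's README, but the exact `MergeOptions` fields are an assumption and may differ between versions; the filename `superswallow.yml` is hypothetical. The equivalent CLI call is `mergekit-yaml superswallow.yml ./Superswallow-70b --cuda`.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumption: the YAML configuration above has been saved as superswallow.yml.
with open("superswallow.yml", "r", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./Superswallow-70b",  # directory where the merged weights are written
    options=MergeOptions(
        cuda=True,            # run tensor arithmetic on GPU if available
        copy_tokenizer=True,  # ship a tokenizer with the merged output
        lazy_unpickle=True,   # stream shards to keep host RAM usage manageable
    ),
)
```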