---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/anthracite-org/magnum-v2-72b/blob/main/LICENSE
language:
  - en
  - fr
  - de
  - es
  - it
  - pt
  - ru
  - zh
  - ja
pipeline_tag: text-generation
tags:
  - chat
---


This is the seventh (Lucky!) in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus. This model is fine-tuned on top of Qwen-2 72B Instruct.

Prompting

The model has been instruct-tuned with ChatML formatting. A typical input would look like this:

"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""

Credits

This model has been a team effort, and the credit goes to all members of Anthracite.

Training

The training was done for 2 epochs. We used 8x AMD Instinct™ MI300X Accelerators for the full-parameter fine-tuning of the model.

We also trained with a weight decay of 0.01 to help further stabilize the loss trajectory and mitigate catastrophic forgetting, and used a peak learning rate of 4e-6 to prevent the second-epoch loss from dropping too sharply (a strong indicator of overfitting).

Sample packing was done at 16k tokens rather than the 8k tokens used in our previous runs.
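
For illustration only, the hyperparameters above map onto a standard PyTorch/`transformers` optimizer and scheduler setup roughly like this; the cosine schedule, warmup length, and step counts in the sketch are assumptions, not values from the actual run:

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Values stated in this card: 2 epochs, weight decay 0.01, peak LR 4e-6,
# samples packed to 16k tokens.
EPOCHS = 2
PEAK_LR = 4e-6
WEIGHT_DECAY = 0.01
SEQUENCE_LEN = 16_384  # packed sample length

# Hypothetical values, for this sketch only.
STEPS_PER_EPOCH = 500
WARMUP_STEPS = 50


def build_optimizer_and_scheduler(model: torch.nn.Module):
    # AdamW with weight decay helps stabilize the loss trajectory,
    # as described above.
    optimizer = torch.optim.AdamW(
        model.parameters(), lr=PEAK_LR, weight_decay=WEIGHT_DECAY
    )
    # A low peak LR keeps the second-epoch loss from dropping too sharply;
    # the cosine decay shape and warmup are assumptions of this sketch.
    scheduler = get_cosine_schedule_with_warmup(
        optimizer,
        num_warmup_steps=WARMUP_STEPS,
        num_training_steps=EPOCHS * STEPS_PER_EPOCH,
    )
    return optimizer, scheduler
```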

Built with Axolotl

Safety

...