Llama3 Amharic DPO

An Amharic Llama3 8B Alpaca model, further DPO-tuned on an Amharic-translated dolly-15k dataset so that it always responds in Amharic.

Note: the model is very token-inefficient, since the Llama3 tokenizer splits Amharic (Ge'ez) script into many tokens per word.

  • Developed by: simonbutt
  • License: apache-2.0
  • Finetuned from model:
    • unsloth/llama-3-8b-bnb-4bit
    • simonbutt/am_llama3_alpaca
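No usage snippet is provided in the card; below is a minimal sketch of how the model might be queried with `transformers`, assuming it follows the Alpaca-style prompt template of its base model simonbutt/am_llama3_alpaca (the exact template is an assumption, not confirmed here):

```python
# Sketch: build an Alpaca-style prompt and (optionally) run generation.
# The prompt template is an assumption based on the Alpaca lineage of the
# base model, not something stated in this model card.

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Format a user instruction into the assumed Alpaca template."""
    return ALPACA_TEMPLATE.format(instruction=instruction)

if __name__ == "__main__":
    # "Give a short description of Ethiopia."
    prompt = build_prompt("ስለ ኢትዮጵያ አጭር መግለጫ ስጥ።")
    print(prompt)

    # To actually run inference (requires a GPU and the model weights):
    # from transformers import AutoModelForCausalLM, AutoTokenizer
    # tok = AutoTokenizer.from_pretrained("simonbutt/am_llama3_dpo")
    # model = AutoModelForCausalLM.from_pretrained("simonbutt/am_llama3_dpo")
    # out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=256)
    # print(tok.decode(out[0], skip_special_tokens=True))
```

Because of the tokenizer inefficiency noted above, Amharic prompts and responses will consume noticeably more tokens than equivalent English text, so budget `max_new_tokens` generously.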

