
Introduction

MoMo-70B-lora-1.8.6-DPO was trained via Direct Preference Optimization (DPO) from MoMo-70B-LoRA-V1.4 as its base model, with several hyperparameter optimizations.
MoMo-70B-LoRA-V1.4 was trained via Supervised Fine-Tuning (SFT) using LoRA, with the QWEN-72B model as its base model.
Note that we did not use any form of weight merging.
For leaderboard submission, the trained weights were realigned for compatibility with the Llama architecture.
MoMo-70B was trained using Moreh's MoAI platform, which simplifies the training of large-scale models, on AMD's MI250 GPUs.
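For illustration, the snippet below is a minimal sketch of the second stage of this recipe: DPO fine-tuning on top of an SFT model with the trl and peft libraries. It is not the actual training code; the preference dataset name, LoRA settings, and hyperparameters are placeholder assumptions.

# Minimal DPO sketch with trl + peft. All settings are hypothetical;
# the actual MoMo training configuration is not published in this card.
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "moreh/MoMo-70B-LoRA-V1.4"  # the SFT model used as the DPO starting point
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# A preference dataset with "prompt", "chosen", and "rejected" columns (placeholder name)
train_dataset = load_dataset("my-org/my-preference-pairs", split="train")

# LoRA adapter settings (assumed values, not the MoMo configuration)
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a peft_config, trl uses the adapter-free model as the frozen reference
    args=TrainingArguments(output_dir="momo-dpo-sketch", per_device_train_batch_size=1),
    beta=0.1,        # DPO temperature: how far the policy may drift from the reference
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()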

Details

Used Libraries

  • torch
  • peft
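
As a concrete illustration of how these two libraries combine for LoRA-based SFT, here is a minimal sketch with peft; the rank, alpha, and target modules are assumed values, not the configuration used for MoMo.

# Minimal LoRA setup with peft (hypothetical hyperparameters).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen-72B", trust_remote_code=True)

lora_config = LoraConfig(
    r=16,                                 # adapter rank (assumed)
    lora_alpha=32,                        # scaling factor (assumed)
    target_modules=["c_attn", "c_proj"],  # QWEN attention projections (assumed)
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable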

Used Datasets

| Model                     | ARC | MMLU | TruthfulQA | GSM8K |
|---------------------------|-----|------|------------|-------|
| V1.8.6 (result < 0.1, %)  | TBU | TBU  | 0.73       | TBU   |

Used Environments

  • AMD MI250 GPUs & Moreh's MoAI platform

How to use

# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("moreh/MoMo-70B-LoRA-V1.8.6")
model = AutoModelForCausalLM.from_pretrained(
    "moreh/MoMo-70B-LoRA-V1.8.6",
    torch_dtype=torch.float16,  # load in half precision; the 70B weights are ~140 GB in fp16
    device_map="auto",          # shard the weights across available GPUs (requires accelerate)
)
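
Once loaded, the model can be used for generation as usual; below is a small illustrative example with an arbitrary prompt and sampling settings.

# Example generation (illustrative prompt and decoding parameters)
inputs = tokenizer("What is Direct Preference Optimization?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))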