
# Model Overview

𝐌𝐨𝐝𝐞π₯ 𝐍𝐚𝐦𝐞:ElEmperador


## Model Description

ElEmperador is a finetune of the Mistral-7B-v0.1 base model, trained with ORPO (Odds Ratio Preference Optimization).
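
For background, ORPO folds preference alignment into the SFT stage: it adds an odds-ratio term to the usual negative log-likelihood loss so that, on a preference pair, the odds of the chosen response are pushed above the odds of the rejected one. Below is a minimal sketch of that objective following the ORPO paper; the tensor names and the weight `lam` are illustrative, not taken from this repository's training code.

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, sft_nll, lam=0.1):
    # chosen_logps / rejected_logps: per-token-averaged sequence log-probs (< 0).
    # odds(y|x) = p / (1 - p); computed in log space for stability:
    # log odds = log p - log(1 - p)
    log_odds_chosen = chosen_logps - torch.log1p(-torch.exp(chosen_logps))
    log_odds_rejected = rejected_logps - torch.log1p(-torch.exp(rejected_logps))

    # Odds-ratio term: reward chosen odds exceeding rejected odds.
    preference_term = F.logsigmoid(log_odds_chosen - log_odds_rejected)

    # Total loss = SFT NLL on the chosen response + weighted preference term.
    return sft_nll - lam * preference_term.mean()
```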

## Evals

BLEU: 0.209
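
The card does not state which dataset or tooling produced this score. For reference, one common way to compute BLEU is the Hugging Face `evaluate` library, which reports a score in [0, 1] on the same scale as above; the strings below are placeholders:

```python
import evaluate

bleu = evaluate.load("bleu")
predictions = ["The model generated this sentence."]    # model outputs
references = [["The model generated this sentence."]]   # one or more references per prediction

result = bleu.compute(predictions=predictions, references=references)
print(result["bleu"])  # score in [0, 1]
```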

## Inference Script

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_response(model_name, input_text, max_new_tokens=50):
    # Load the tokenizer and model from the Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Tokenize the input text
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids

    # Generate a response (greedy decoding by default)
    with torch.no_grad():
        generated_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)

    # Decode the generated tokens into text
    generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

    return generated_text


if __name__ == "__main__":
    # Set the model name from the Hugging Face Hub
    model_name = "AINovice2005/ElEmperador"
    input_text = "Hello, how are you?"

    # Generate and print the model's response
    output = generate_response(model_name, input_text)

    print(f"Input: {input_text}")
    print(f"Output: {output}")
```

## Results

First, ORPO is a viable preference-alignment alternative to RLHF that improves model performance alongside SFT finetuning. Second, it helps align the model's outputs more closely with human preferences, leading to more user-friendly and acceptable results.

## Model Details

- **Model size:** 7.24B params
- **Tensor type:** FP16
- **Format:** Safetensors
