
This model was trained 2x faster with Unsloth and Hugging Face's TRL library.
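As a rough illustration, the sketch below shows the general shape of an Unsloth + TRL fine-tuning run. Everything in it is an assumption made for illustration: the LoRA settings, the hyperparameters, the `train.jsonl` file with a `text` column, and the older-style `SFTTrainer` keyword arguments (TRL's API has changed across versions). It is not the actual training recipe behind this model.

```python
# Illustrative Unsloth + TRL fine-tuning sketch; all settings are assumptions.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the model with Unsloth's patched loader (4-bit to save VRAM).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="BarraHome/Mistroll-7B-v2.2",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Assumed data file: one JSON object per line with a "text" field.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```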

This is an experiment in fixing models that exhibit incorrect behaviors.

This experiment tests and refines a specific training and evaluation pipeline for research. Its primary objective is to identify potential optimizations, focusing on data engineering, architectural efficiency, and evaluation performance.

The goal is to evaluate the effectiveness of a new training and evaluation pipeline for Large Language Models (LLMs). To that end, we explore adjustments to data preprocessing, training algorithms, and evaluation metrics.
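For reference, here is a minimal inference sketch using the standard transformers API. The chat-template call assumes the tokenizer ships a chat template, as Mistral-instruct derivatives typically do, and the sampling settings are illustrative:

```python
# Minimal inference sketch; the model weights are stored in BF16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BarraHome/Mistroll-7B-v2.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Briefly explain what a GGUF file is."}]
# Assumes the tokenizer defines a chat template.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```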

Quantized version (GGUF)

Mistroll-7B-v2.2-Q8_0
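The Q8_0 GGUF can be run with llama.cpp or its Python bindings. Below is a minimal sketch using `llama-cpp-python`; the local filename is an assumption and should be adjusted to match the downloaded file:

```python
# Sketch of loading the Q8_0 GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="mistroll-7b-v2.2-q8_0.gguf",  # assumed local filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```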

Thank you, Yam, for your incredible experiment, and thank you to the Unsloth community!

PS: Numero uno brothers!

Model size: 7.24B parameters (Safetensors, BF16)