
Model Card for Adishah31/mistral_4bit_lora_model

A LoRA adapter for Mistral 7B, fine-tuned in 4-bit precision with Unsloth on a single T4 GPU.
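
As a minimal usage sketch: the repo id comes from this card, but the loader choice (Unsloth's FastLanguageModel), the max_seq_length value, and the example instruction are assumptions, so treat this as a starting point rather than a verified recipe.

# Sketch: load the adapter and generate. Values not stated on this card
# (max_seq_length, the example prompt) are assumptions.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Adishah31/mistral_4bit_lora_model",
    max_seq_length = 2048,   # assumption: not stated on this card
    dtype = None,            # auto-detects float16 on a T4
    load_in_4bit = True,     # matches the 4-bit setup described here
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster decoding path

# Fill the Alpaca template (see Preprocessing below) with an example instruction.
prompt = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes "
    "the request.\n\n"
    "### Instruction:\nName the capital of France.\n\n"
    "### Input:\n\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors = "pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens = 64)
print(tokenizer.decode(outputs[0], skip_special_tokens = True))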

Model Details

Model Description

This repository holds a LoRA adapter (saved with PEFT 0.7.1) for the Mistral 7B base model, produced by fine-tuning a 4-bit-quantized base with Unsloth on the dataset listed under Training Data.

Training Details

Training Data

The model was fine-tuned on the yahma/alpaca-cleaned dataset: https://huggingface.co/datasets/yahma/alpaca-cleaned
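
For reference, a sketch of loading this dataset; the "train" split and the field names are standard for alpaca-cleaned but are assumptions here, not statements from this card:

# Sketch: load the training data used for fine-tuning.
from datasets import load_dataset

dataset = load_dataset("yahma/alpaca-cleaned", split = "train")
print(dataset[0])  # expected fields: "instruction", "input", "output"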

Training Procedure

Preprocessing

The standard Alpaca prompt template is used to format each training example:

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

Training Hyperparameters

        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        max_steps = 60,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407
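
Reconstructed as runnable code, these values fit the usual Unsloth + TRL training setup roughly as follows; everything outside the listed argument values (the trainer class, max_seq_length, output_dir, and the model/dataset variables) is an assumption:

# Sketch: the hyperparameters above in a typical Unsloth/TRL SFT setup.
import torch
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model = model,                 # Unsloth-prepared 4-bit model with LoRA
    tokenizer = tokenizer,
    train_dataset = dataset,       # formatted with the Alpaca template
    dataset_text_field = "text",
    max_seq_length = 2048,         # assumption: not stated on this card
    args = TrainingArguments(
        per_device_train_batch_size = 2,
        gradient_accumulation_steps = 4,   # effective batch size of 8
        warmup_steps = 5,
        max_steps = 60,                    # short, demo-length run
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),  # T4 lacks bf16, so fp16
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",              # bitsandbytes 8-bit AdamW
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "outputs",            # assumption
    ),
)
trainer.train()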

Compute Infrastructure

  • Hardware Type: T4 GPU
  • Cloud Provider: Google Colab

Framework versions

  • PEFT 0.7.1