
RoPE Scaled QLoRA Fine-tune of Llama-13b on airoboros-gpt4-1.4.1 (LoRA)

The full model card, with merged GPTQ 4-bit quantized weights, can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-GPTQ

Merged fp16 weights can be found here: https://huggingface.co/bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-fp16
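
For reference, here is a minimal sketch (not part of the original card) of loading the merged fp16 weights with Hugging Face transformers, assuming transformers and accelerate are installed. Note that using the full 8192-token context requires RoPE-scaled embeddings at inference time as well; see the sketch under Overview.

```python
# Minimal loading sketch; repo name is taken from the link above.
# Assumes `transformers` and `accelerate` are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhenrym14/airoboros-13b-gpt4-1.4.1-PI-8192-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load in the checkpoint's native precision (fp16)
    device_map="auto",    # place layers across available GPUs
)
```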

Overview

These are LoRA weights for Jon Durbin's Airoboros 13B GPT4 1.4, with several key modifications:

  • Context length extended to 8192 via RoPE-scaled embeddings (position interpolation), NOT via the SuperHOT LoRA; training started from base Llama-13b. A sketch of the technique follows this list.
  • Training sequences longer than 2048 tokens have the target truncated to 2048.
  • Trained on the airoboros-gpt4-1.4.1 dataset instead of airoboros-gpt4-1.4.
  • This is a QLoRA fine-tune; the original 13b model is a full fine-tune.
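
To illustrate the technique, here is a minimal, library-agnostic sketch of RoPE position interpolation (the "PI" in the model name). The function name and defaults are illustrative, not the actual training code: positions are compressed by the ratio of the extended context (8192) to the original context (2048), so the rotary frequencies stay within the range seen during pretraining.

```python
import torch

def rope_frequencies(seq_len, dim, base=10000.0, scale=8192 / 2048):
    """Return cos/sin tables for RoPE with linear position interpolation."""
    # Standard RoPE inverse frequencies for each pair of dimensions.
    inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))
    # Position interpolation: divide positions by the scaling factor (4x here),
    # mapping positions 0..8191 into the pretrained range 0..2047.
    positions = torch.arange(seq_len).float() / scale
    freqs = torch.outer(positions, inv_freq)   # (seq_len, dim/2)
    emb = torch.cat((freqs, freqs), dim=-1)    # (seq_len, dim)
    return emb.cos(), emb.sin()
```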

Training took ~17 hours on a single RTX 6000 Ada.
