---
library_name: transformers
tags: []
---

# Model Card for TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat

These are adapters trained on the UltraChat 200k dataset on top of a quantized TinyLlama-1.1B-intermediate-step-1431k-3T base model.

`adapter_name = TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat`

## Model Details

The base model was quantized using bitsandbytes with the following configuration:

```python
from transformers import BitsAndBytesConfig  # BitsAndBytesConfig is provided by transformers

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                  # Use 4-bit precision model loading
    bnb_4bit_quant_type="nf4",          # Quantization type (NormalFloat4)
    bnb_4bit_compute_dtype="float16",   # Compute data type
    bnb_4bit_use_double_quant=True      # Apply nested (double) quantization
)
```
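
For reference, below is a minimal sketch of loading the quantized base model with this configuration and attaching the adapters via PEFT for inference. The adapter repo id `iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat` is inferred from the adapter name above and may need adjusting.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
adapter_id = "iqbalamo93/TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat"  # assumed repo id

# Same 4-bit quantization settings as used for training
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the trained adapters on top of the quantized base model
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```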


### Model Description
The adapters were trained on the UltraChat 200k dataset on top of TinyLlama-1.1B-intermediate-step-1431k-3T quantized to 4-bit with the configuration shown above.

- **Finetuned from model:** [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T)