---
library_name: transformers
tags:
- ultrachat
datasets:
- HuggingFaceH4/ultrachat_200k
base_model:
- TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
---
# Model Card for TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat

These are adapters trained on the UltraChat 200k dataset on top of a 4-bit quantized TinyLlama-1.1B-intermediate-step-1431k-3T base model.

`adapter_name = TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat`
## Model Details

The base model was quantized with bitsandbytes before adapter training:

```python
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                 # Load the model in 4-bit precision
    bnb_4bit_quant_type="nf4",         # NormalFloat4 (NF4) quantization
    bnb_4bit_compute_dtype="float16",  # Compute data type for matmuls
    bnb_4bit_use_double_quant=True,    # Apply nested (double) quantization
)
```
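As a rough, back-of-envelope illustration (estimated, not measured numbers), loading the 1.1B-parameter base model in NF4 cuts its weight memory to about a quarter of the float16 footprint:

```python
# Back-of-envelope weight-memory estimate for a 1.1B-parameter model.
# Illustrative only: it ignores the small per-block quantization constants
# (which double quantization further compresses) and activation memory.
N_PARAMS = 1.1e9

fp16_gb = N_PARAMS * 2 / 1e9    # 2 bytes per parameter in float16
nf4_gb = N_PARAMS * 0.5 / 1e9   # 4 bits = 0.5 bytes per parameter in NF4

print(f"fp16 weights: ~{fp16_gb:.2f} GB")
print(f"NF4 weights:  ~{nf4_gb:.2f} GB")
```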
### Model Description

- **Finetuned from model:** TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T
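A minimal loading sketch, assuming the adapters are published under a Hub repo named `TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat` (the exact adapter path is an assumption; substitute the real repo id) and that the `peft` library is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

BASE_MODEL_ID = "TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T"
# Assumed adapter repo path; replace with this card's actual Hub id.
ADAPTER_ID = "TinyLlama-1.1B-intermediate-1431k-3T-adapters-ultrachat"


def load_finetuned():
    """Load the 4-bit quantized base model and attach the trained adapters."""
    from peft import PeftModel  # lazy import so the sketch stays import-light

    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype="float16",
        bnb_4bit_use_double_quant=True,
    )
    base = AutoModelForCausalLM.from_pretrained(
        BASE_MODEL_ID, quantization_config=bnb_config, device_map="auto"
    )
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL_ID)
    return model, tokenizer


if torch.cuda.is_available():  # 4-bit bitsandbytes loading requires a GPU
    model, tokenizer = load_finetuned()
```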