bnb-4bit
Collection
A collection of 4-bit quantized models designed for fine-tuning. Demo: https://colab.research.google.com/drive/19UpFUjtbJoLua-4DMb1JKKwJAcvMyMHb
21 items
This model is a 4-bit quantized version of Meta-Llama-3-8B-Instruct, quantized with bitsandbytes. It is designed for fine-tuning. The PAD token is set to UNK.
Base model
meta-llama/Meta-Llama-3-8B-Instruct