
Reminder to use the dev version of Transformers:

pip install git+https://github.com/huggingface/transformers.git

Finetune Phi-3.5, Llama 3.1, and Mistral 2-5x faster with 70% less memory via Unsloth!

We have a free Google Colab Tesla T4 notebook for Phi-3.5 (mini) here: https://colab.research.google.com/drive/1lN6hPQveB_mHSnTOYifygFcrO8C1bxq4?usp=sharing

✨ Finetune for Free

All notebooks are beginner friendly! Add your dataset, click "Run All", and you'll get a 2x faster finetuned model which can be exported to GGUF, served with vLLM, or uploaded to Hugging Face.
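As a complement to the notebooks, here is a minimal sketch of loading this pre-quantized 4-bit model with Unsloth. It assumes `unsloth` and the dev build of `transformers` are installed and a CUDA GPU is available; the `max_seq_length` value and the `load_model` helper are illustrative, not part of the official quick-start.

```python
# Hypothetical quick-start sketch for loading unsloth/Phi-3-mini-4k-instruct-bnb-4bit.
# Assumes a GPU environment with unsloth + dev transformers installed.

MODEL_ID = "unsloth/Phi-3-mini-4k-instruct-bnb-4bit"

def load_model(max_seq_length: int = 4096):
    """Load the pre-quantized 4-bit model and its tokenizer via Unsloth."""
    # Deferred import: unsloth requires a CUDA-capable environment.
    from unsloth import FastLanguageModel

    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name=MODEL_ID,
        max_seq_length=max_seq_length,
        load_in_4bit=True,  # weights are already bitsandbytes 4-bit quantized
    )
    return model, tokenizer

if __name__ == "__main__":
    model, tokenizer = load_model()
```

Loading the bnb-4bit repo directly skips the on-the-fly quantization step, which is why downloads and VRAM use are smaller than with the full-precision checkpoint.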

| Unsloth supports | Free Notebooks | Performance | Memory use |
|---|---|---|---|
| Llama-3.1 8b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma-2 9b | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral 7b | ▶️ Start on Colab | 2.2x faster | 62% less |
| TinyLlama | ▶️ Start on Colab | 3.9x faster | 74% less |
| DPO - Zephyr | ▶️ Start on Colab | 1.9x faster | 19% less |

Special Thanks

A huge thank you to Microsoft AI and the Phi team for creating and releasing these models.

Downloads last month: 7,721
Model size: 2.07B params (Safetensors)
Tensor types: F32 · BF16 · U8

Model tree for unsloth/Phi-3-mini-4k-instruct-bnb-4bit

Adapters: 22 models
Finetunes: 563 models
Quantizations: 55 models
