winglian posted an update Jan 8

alpaca finetune: https://huggingface.co/openaccess-ai-collective/phi2-alpaca

That link gives a 404 error for me. Is the model public?


Sorry about that! Public now.
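
For anyone who wants to try it, a minimal loading sketch (the Alpaca-style prompt template below is an assumption; check the model card for the exact format):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the phi2-alpaca checkpoint linked above; older transformers releases
# may need trust_remote_code=True for the phi-2 architecture.
model_id = "openaccess-ai-collective/phi2-alpaca"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Assumed Alpaca-style prompt format (verify against the model card).
prompt = "### Instruction:\nExplain what a finetune is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```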

Nice! @winglian, do you know the largest model you can fit on a single 24GB GPU (without LoRA/QLoRA)?
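
Not an authoritative answer, but as a rough sizing sketch (assumes full finetuning with mixed-precision AdamW and ignores activation memory and CUDA overhead):

```python
# Rough sizing for full finetuning (no LoRA/QLoRA) with mixed-precision AdamW:
# bf16 weights (2 B) + bf16 grads (2 B) + fp32 master weights (4 B)
# + fp32 Adam moments (8 B) ~= 16 bytes per parameter, before activations.
BYTES_PER_PARAM = 16
GPU_MEMORY_BYTES = 24e9

max_params = GPU_MEMORY_BYTES / BYTES_PER_PARAM
print(f"~{max_params / 1e9:.1f}B parameters at most on a 24GB card")
# -> ~1.5B; 8-bit optimizers or optimizer-state offload raise this in practice.
```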


Hey, question for @brucethemoose & @LoneStriker: I recall you both mentioning that you do model merging with only 24GB VRAM and 32GB RAM, and that swap is a must in your workflows. Do you mind sharing some of your workflow to enlighten us GPU-poor? 🫠
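
Not their actual workflow, but a minimal CPU-only linear-merge sketch that illustrates why RAM and swap become the bottleneck (the repo ids are placeholders):

```python
import torch
from transformers import AutoModelForCausalLM

# Average two checkpoints of the same architecture entirely on CPU.
# Two 7B fp16 models already take ~28GB of RAM, which is why swap
# (or lazy/sharded loading) matters on a 32GB box.
model_a_id = "org/model-a"  # placeholder repo ids
model_b_id = "org/model-b"

model_a = AutoModelForCausalLM.from_pretrained(
    model_a_id, torch_dtype=torch.float16, low_cpu_mem_usage=True
)
model_b = AutoModelForCausalLM.from_pretrained(
    model_b_id, torch_dtype=torch.float16, low_cpu_mem_usage=True
)

# Merge in place into model_a to avoid allocating a third full copy.
with torch.no_grad():
    params_b = dict(model_b.named_parameters())
    for name, param_a in model_a.named_parameters():
        merged = (param_a.float() + params_b[name].float()) * 0.5
        param_a.copy_(merged.to(param_a.dtype))

model_a.save_pretrained("merged-model")
```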

@winglian, I want to start by expressing my immense gratitude for all your hard work! Axolotl has become my go-to repository for training. I have a quick question: is there any possibility we might see FlashAttention support for Qwen2 in the near future? 😊
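
Not an Axolotl answer, but FlashAttention 2 can be requested at load time through transformers; a minimal sketch (placeholder repo id, assumes a transformers version with Qwen2 support and the flash-attn package installed):

```python
import torch
from transformers import AutoModelForCausalLM

# Enable FlashAttention 2 via transformers directly (needs an
# Ampere-or-newer GPU); whether Axolotl wires this up for Qwen2
# is the open question above.
model_id = "Qwen/some-qwen2-model"  # placeholder repo id
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```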