Minimal Alpaca-LoRA trained on the databricks/databricks-dolly-15k dataset and based on OpenLLaMA-3B-600BT.
A pre-trained LoRA adapter is provided, along with a Colab Jupyter notebook for fine-tuning (about 3 hours for 1 epoch on a T4).
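As a minimal sketch of how the pre-trained adapter might be loaded for inference with `transformers` and `peft` (the base-model name and adapter path below are assumptions; substitute the actual checkpoint and adapter from this repo):

```python
import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
from peft import PeftModel

# Assumed base checkpoint name for OpenLLaMA-3B at 600B tokens; adjust if needed.
base_model = "openlm-research/open_llama_3b_600bt_preview"
# Placeholder: point this at the pre-trained LoRA adapter from this repo.
adapter_path = "path/to/lora-adapter"

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",  # requires `accelerate`
)
# Attach the LoRA weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_path)

# Alpaca-style prompt format, as commonly used for Dolly/Alpaca fine-tunes.
prompt = "### Instruction:\nExplain what LoRA is.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```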