This model was fine-tuned using 4-bit QLoRA, following the instructions in https://huggingface.co/blog/llama2#fine-tuning-with-peft.
The training dataset, openassistant-guanaco, includes 10k prompts.
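As a rough sketch of what that setup looks like in code, assuming the timdettmers/openassistant-guanaco dataset (inferred from the model name and the blog post) and illustrative LoRA hyperparameters rather than the exact values used:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Dataset id is an assumption, based on the model name and the blog post
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# Load the base model in 4-bit (full quantization settings are listed below)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA hyperparameters; the actual values are not stated here
lora_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```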
I used an Amazon EC2 g5.xlarge instance (1 x A10G GPU) with the Deep Learning AMI for PyTorch. Training took about 10 hours. At on-demand pricing, that comes to about $10, which can easily be reduced to about $3 with EC2 Spot Instances.
The full log is included, as well as a simple inference script.
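A minimal sketch of what such an inference script could look like, assuming the 4-bit base model plus this adapter (the prompt follows the guanaco `### Human:` / `### Assistant:` format; the generation settings are arbitrary):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "juliensimon/llama2-7b-qlora-openassistant-guanaco"

# Load the 4-bit base model, then apply the LoRA adapter on top
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
model = PeftModel.from_pretrained(model, adapter_id)

prompt = "### Human: What is Amazon EC2?### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```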
## Training procedure
The following bitsandbytes quantization config was used during training (see the code sketch after the list):
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
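For reference, these settings map onto transformers' `BitsAndBytesConfig` as follows; this is a reconstruction from the list above, not the actual training code:

```python
import torch
from transformers import BitsAndBytesConfig

# Reconstructed from the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)
```

Passing this object as `quantization_config=` to `AutoModelForCausalLM.from_pretrained` reproduces the 4-bit loading used during training.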
### Framework versions
- PEFT 0.5.0
## Base model

meta-llama/Llama-2-7b-hf