---
base_model: unsloth/gemma-2-9b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
datasets:
- yahma/alpaca-cleaned
---
# Uploaded model
- Developed by: NotAiLOL
- License: apache-2.0
- Finetuned from model: unsloth/gemma-2-9b-bnb-4bit
This gemma2 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
## Details
This model was fine-tuned from unsloth/gemma-2-9b-bnb-4bit on the yahma/alpaca-cleaned dataset using the rsLoRA (rank-stabilized LoRA) method. It reached a training loss of 0.8292 on alpaca-cleaned after 120 training steps.
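As a rough orientation, the sketch below shows how such an rsLoRA fine-tune can be set up with Unsloth and TRL's SFTTrainer. Only the base model, the dataset, the rsLoRA flag, and the 120-step budget come from this card; the LoRA rank, alpha, sequence length, batch size, and learning rate are illustrative assumptions, not the actual training hyperparameters.

```python
# Minimal sketch of an rsLoRA fine-tune with Unsloth + TRL.
# Values marked "assumed" are placeholders, not this model's actual settings.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-2-9b-bnb-4bit",
    max_seq_length=2048,  # assumed
    load_in_4bit=True,
)

model = FastLanguageModel.get_peft_model(
    model,
    r=16,             # assumed rank
    lora_alpha=16,    # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_rslora=True,  # rank-stabilized LoRA, as stated in the card
)

# Format each example into the alpaca prompt (template shown below).
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

def format_example(ex):
    text = alpaca_prompt.format(ex["instruction"], ex["input"], ex["output"])
    return {"text": text + tokenizer.eos_token}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(format_example)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumed
        gradient_accumulation_steps=4,  # assumed
        learning_rate=2e-4,             # assumed
        max_steps=120,                  # the card reports loss at step 120
        fp16=True,                      # the T4 does not support bf16
        output_dir="outputs",
    ),
)
trainer_stats = trainer.train()
```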
This model follows the alpaca prompt:

```
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}
```
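For inference, the template is filled with an instruction and an optional input, and the response slot is left empty for the model to complete. The sketch below is illustrative only: the repo id placeholder, the example instruction/input, and the generation settings are hypothetical, and it assumes Unsloth's FastLanguageModel inference path.

```python
from unsloth import FastLanguageModel

MODEL_ID = "path-or-repo-id-of-this-model"  # hypothetical placeholder; substitute the actual repo id

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=MODEL_ID,
    max_seq_length=2048,  # assumed
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode

alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{}

### Input:
{}

### Response:
{}"""

prompt = alpaca_prompt.format(
    "Continue the fibonacci sequence.",  # example instruction
    "1, 1, 2, 3, 5, 8",                  # example input
    "",                                  # response left empty for the model to fill
)
inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```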
## Training
This model was trained on a single Tesla T4 GPU.
- 669.6535 seconds (11.16 minutes) used for training.
- Peak reserved memory = 9.383 GB.
- Peak reserved memory for training = 2.807 GB.
- Peak reserved memory % of max memory = 63.622 %.
- Peak reserved memory for training % of max memory = 19.033 %.
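These figures follow the reporting convention of PyTorch's CUDA allocator. The snippet below shows one way such numbers can be derived with `torch.cuda`; the `start_gpu_memory` baseline and the `trainer_stats` object are assumptions about how the run was instrumented, not verified details of this training run.

```python
import torch

# Baseline: reserved memory right after the model is loaded,
# before training runs (assumed instrumentation).
start_gpu_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)

# ... trainer_stats = trainer.train() would run here ...

gpu = torch.cuda.get_device_properties(0)
max_memory = round(gpu.total_memory / 1024**3, 3)
used_memory = round(torch.cuda.max_memory_reserved() / 1024**3, 3)
used_for_training = round(used_memory - start_gpu_memory, 3)

# print(f"{trainer_stats.metrics['train_runtime']} seconds used for training.")
print(f"Peak reserved memory = {used_memory} GB.")
print(f"Peak reserved memory for training = {used_for_training} GB.")
print(f"Peak reserved memory % of max memory = {round(used_memory / max_memory * 100, 3)} %.")
print(f"Peak reserved memory for training % of max memory = {round(used_for_training / max_memory * 100, 3)} %.")
```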