Gemma2 LoRA Adapters

This model belongs to a collection of six Gemma2 LoRA adapters fine-tuned using SFT in TRL on diverse tasks such as coding, SQL generation, Japanese-to-English translation, and function calling.
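For context, the sketch below shows how one such adapter could be trained with TRL's SFTTrainer and a PEFT LoraConfig. The dataset, LoRA rank, target modules, and training arguments are illustrative placeholders only; the actual configuration used for these adapters is not published in this card.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder dataset: this card only says "the generator dataset", so a public
# conversational SFT dataset stands in here.
dataset = load_dataset("trl-lib/Capybara", split="train")

# Illustrative LoRA settings; the actual rank, alpha, and target modules used
# for these adapters are not stated in this card.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="google/gemma-2-2b-it",  # base model named in this card
    train_dataset=dataset,
    peft_config=peft_config,       # only the low-rank adapter weights are trained
    args=SFTConfig(output_dir="gemma-2-2b-it-lora", num_train_epochs=1),
)
trainer.train()
```

Because only the adapter matrices are trained, the saved checkpoint is a small set of LoRA weights rather than a full copy of the 2B-parameter base model.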
This model is a fine-tuned version of google/gemma-2-2b-it on the generator dataset. It achieves the following results on the evaluation set:

Loss: 0.8928

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
Training procedure

The following hyperparameters were used during training:

More information needed

Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.8466        | 0.9994 | 398  | 0.8928          |
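To run inference with one of the adapters from the collection, a minimal sketch of loading it on top of the frozen base model with PEFT follows. The adapter repo id below is a hypothetical placeholder; substitute the id of the adapter you actually want to use.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "google/gemma-2-2b-it"
adapter_id = "your-username/gemma-2-2b-it-lora"  # hypothetical adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(model, adapter_id)

messages = [{"role": "user", "content": "Write a SQL query that lists all tables."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

PeftModel.from_pretrained downloads only the low-rank adapter weights, so switching between the adapters in this collection is cheap compared with loading separate full fine-tuned models.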