---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: NousResearch/Meta-Llama-3-8B
---
This was an experiment to update my training method for future fine-tunes.
This model was trained on the dataset below for 5 epochs, at a learning rate of 2e-4, with 20 warmup steps.
https://huggingface.co/datasets/Replete-AI/code_test_dataset_10k
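
For reference, here is a minimal sketch of what a training run with the settings above could look like using Unsloth and TRL's SFTTrainer. Only the base model, dataset, epoch count, learning rate, and warmup steps come from this card; the LoRA configuration, sequence length, batch size, quantization, and dataset column names are assumptions, and the SFTTrainer arguments assume the TRL versions used in the standard Unsloth notebooks.

```python
# Sketch of the SFT run described above (Unsloth + TRL).
# Hyperparameters marked "assumption" are NOT documented by this card.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="NousResearch/Meta-Llama-3-8B",
    max_seq_length=2048,   # assumption
    load_in_4bit=True,     # assumption
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                  # assumption
    lora_alpha=16,         # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Replete-AI/code_test_dataset_10k", split="train")

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n{output}"
)

def format_alpaca(example):
    # Column names ("instruction", "output") are assumptions about the dataset schema.
    text = ALPACA_TEMPLATE.format(instruction=example["instruction"],
                                  output=example["output"])
    return {"text": text + tokenizer.eos_token}

dataset = dataset.map(format_alpaca)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        num_train_epochs=5,               # from this card
        learning_rate=2e-4,               # from this card
        warmup_steps=20,                  # from this card
        per_device_train_batch_size=2,    # assumption
        gradient_accumulation_steps=4,    # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```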
Prompt format: Alpaca
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
### Response:
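
A minimal inference sketch using the Alpaca prompt format above is shown below. The repository id is a placeholder (the base model is used here; substitute this fine-tune's repo id), and the example instruction and generation settings are illustrative assumptions.

```python
# Build an Alpaca-style prompt and generate with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NousResearch/Meta-Llama-3-8B"  # placeholder: use this fine-tune's repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write a Python function that reverses a string.\n\n"  # example instruction (assumption)
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Print only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```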