---
base_model: unsloth/Meta-Llama-3.1-8B
language:
- en
license: llama3.1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
- not-for-all-audiences
datasets:
- mpasila/Literotica-stories-short
---
Dataset used: [mpasila/Literotica-stories-short](https://huggingface.co/datasets/mpasila/Literotica-stories-short), which contains only a subset of the stories from the full Literotica dataset, chunked to fit within 8192 tokens.

Prompt format: none (plain-text completion).

LoRA: [mpasila/Llama-3.1-Literotica-LoRA-8B](https://huggingface.co/mpasila/Llama-3.1-Literotica-LoRA-8B)

Trained with regular LoRA (not quantized/QLoRA), with LoRA rank 128 and alpha 32, for 1 epoch on an A40 in about 13 hours. A training sketch and a usage example are included below.

# Uploaded model

- **Developed by:** mpasila
- **License:** Llama 3.1 Community License Agreement
- **Finetuned from model:** unsloth/Meta-Llama-3.1-8B

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
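For reference, here is a minimal sketch of the training setup as described above (Unsloth + TRL, full-precision LoRA, rank 128, alpha 32, 8192-token sequences, 1 epoch). The target modules, batch size, and the dataset's text column are assumptions not taken from this card, and the `SFTTrainer` arguments follow the older Unsloth notebook style, which may differ across TRL versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Base model in full precision (regular LoRA, not QLoRA).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Meta-Llama-3.1-8B",
    max_seq_length=8192,  # stories were chunked to fit within 8192 tokens
    load_in_4bit=False,   # not quantized
)

# Attach LoRA adapters with the rank/alpha stated in this card.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,          # LoRA rank
    lora_alpha=32,  # LoRA alpha
    target_modules=[  # assumed: Unsloth's usual attention/MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

dataset = load_dataset("mpasila/Literotica-stories-short", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    max_seq_length=8192,
    args=TrainingArguments(
        num_train_epochs=1,             # matches the card
        per_device_train_batch_size=1,  # assumed; not stated in the card
        output_dir="outputs",
    ),
)
trainer.train()
```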
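Since there is no prompt template, the model works as a plain text-completion model. A minimal usage sketch with transformers and peft, applying the LoRA adapter linked above to the base model (dtype, sampling settings, and the example prompt are illustrative assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and apply the LoRA adapter from this card.
base = AutoModelForCausalLM.from_pretrained(
    "unsloth/Meta-Llama-3.1-8B",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "mpasila/Llama-3.1-Literotica-LoRA-8B")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Meta-Llama-3.1-8B")

# No formatting: feed raw story text and let the model continue it.
inputs = tokenizer("The rain had not let up all evening,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```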