---
license: apache-2.0
datasets:
- macadeliccc/opus_samantha
---

# Opus-Samantha-Llama-3-8B

Opus-Samantha-Llama-3-8B is an SFT model made with [AutoSloth](https://colab.research.google.com/drive/1Zo0sVEb2lqdsUm9dy2PTzGySxdF9CNkc#scrollTo=MmLkhAjzYyJ4) by [macadeliccc](https://huggingface.co/macadeliccc).

Trained on 1x L4 for 1 hour.

_Model is currently very NSFW; the dataset has an uneven distribution of subjects. Will be back with v2._

## Process

- Original Model: [unsloth/llama-3-8b](https://huggingface.co/unsloth/llama-3-8b)
- Dataset: [macadeliccc/opus_samantha](https://huggingface.co/datasets/macadeliccc/opus_samantha)
- Learning Rate: 2e-05
- Steps: 2772
- Warmup Steps: 277
- Per Device Train Batch Size: 2
- Gradient Accumulation Steps: 1
- Optimizer: paged_adamw_8bit
- Max Sequence Length: 4096
- Max Prompt Length: 2048
- Max Length: 2048

A sketch of how these settings map onto a training script is included at the end of this card.

## 💻 Usage

```python
!pip install -qU transformers torch accelerate

import transformers
import torch

model_id = "macadeliccc/Opus-Samantha-Llama-3-8B"

# Build a text-generation pipeline; bfloat16 and device_map="auto"
# keep the 8B model within a single-GPU memory budget.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

pipeline("Hey how are you doing today?")
```
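## Training Sketch

The hyperparameters listed under Process map naturally onto an Unsloth + TRL `SFTTrainer` run. Below is a minimal sketch of that configuration, assuming the dataset is pre-formatted into a `text` column and that a TRL version accepting `dataset_text_field`/`max_seq_length` is installed; this is a reconstruction for illustration, not the exact AutoSloth notebook.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the base model at the stated max sequence length.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b",
    max_seq_length=4096,
)

# Assumption: conversations have been flattened into a "text" column.
dataset = load_dataset("macadeliccc/opus_samantha", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: depends on dataset preprocessing
    max_seq_length=4096,
    args=TrainingArguments(
        learning_rate=2e-5,
        max_steps=2772,
        warmup_steps=277,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=1,
        optim="paged_adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```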