---
license: apache-2.0
datasets:
- Open-Orca/SlimOrca
- jondurbin/airoboros-3.1
- riddle_sense
language:
- en
library_name: transformers
---

[Built with Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl)

# SlimOrcaBoros

A Mistral 7B model fine-tuned on SlimOrca, Airoboros 3.1, and RiddleSense.

### Training

Trained for 4 epochs; released at epoch 3.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_openaccess-ai-collective__mistral-7b-slimorcaboros)

| Metric              | Value |
|---------------------|------:|
| Avg.                | 54.1  |
| ARC (25-shot)       | 63.65 |
| HellaSwag (10-shot) | 83.7  |
| MMLU (5-shot)       | 63.46 |
| TruthfulQA (0-shot) | 55.81 |
| Winogrande (5-shot) | 77.03 |
| GSM8K (5-shot)      | 23.43 |
| DROP (3-shot)       | 11.62 |
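
### Usage

A minimal loading-and-generation sketch with `transformers`. Two assumptions are made here: the repository id (`openaccess-ai-collective/mistral-7b-slimorcaboros`, inferred from the leaderboard results link above) and the prompt layout (a plain USER/ASSISTANT template, since this card does not state the template used during fine-tuning) — adjust both to match the actual model.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction for the model.

    NOTE: the fine-tuning prompt template is not stated in this card;
    this plain USER/ASSISTANT layout is an assumption.
    """
    return f"USER: {instruction}\nASSISTANT:"


def generate(instruction: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate a reply to a single instruction."""
    # Heavy import is deferred so the prompt helper stays lightweight.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Repo id inferred from the results link above; an assumption.
    model_id = "openaccess-ai-collective/mistral-7b-slimorcaboros"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)

    # Decode only the newly generated tokens, not the prompt.
    new_tokens = output[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("What has keys but can't open locks?"))
```

The model was trained in part on RiddleSense, so riddle-style prompts like the one above are a reasonable smoke test.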