|
--- |
|
language: |
|
- en |
|
license: other |
|
tags: |
|
- art |
|
- philosophy |
|
- romance |
|
- jokes |
|
- advice |
|
- code |
|
- companionship |
|
license_name: llama3 |
|
license_link: LICENSE |
|
model-index: |
|
- name: Scarlett-Llama-3-8B |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: AI2 Reasoning Challenge (25-Shot) |
|
type: ai2_arc |
|
config: ARC-Challenge |
|
split: test |
|
args: |
|
num_few_shot: 25 |
|
metrics: |
|
- type: acc_norm |
|
value: 62.63 |
|
name: normalized accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: HellaSwag (10-Shot) |
|
type: hellaswag |
|
split: validation |
|
args: |
|
num_few_shot: 10 |
|
metrics: |
|
- type: acc_norm |
|
value: 83.86 |
|
name: normalized accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU (5-Shot) |
|
type: cais/mmlu |
|
config: all |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 66.46 |
|
name: accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: TruthfulQA (0-shot) |
|
type: truthful_qa |
|
config: multiple_choice |
|
split: validation |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: mc2 |
|
value: 56.27 |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: Winogrande (5-shot) |
|
type: winogrande |
|
config: winogrande_xl |
|
split: validation |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 78.06 |
|
name: accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GSM8k (5-shot) |
|
type: gsm8k |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 47.31 |
|
name: accuracy |
|
source: |
|
url: >- |
|
https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ajibawa-2023/Scarlett-Llama-3-8B |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
**Scarlett-Llama-3-8B** |
|
|
|
Scarlett is trained on a variety of topics such as philosophy, advice, jokes, and coding. She was trained on more than 10,000 conversation sets, each containing 10-15 conversations.
|
Scarlett is very good at generating human-like conversation, and her ability to sustain longer and deeper conversations is excellent. Please check the examples given below.
|
She will not engage in any kind of adult/sexual role play.
|
|
|
This is a fully fine-tuned model. Quantized models will be available soon.
|
|
|
**Training:** |
|
The entire dataset was trained on 4 x A100 80GB GPUs using the Axolotl codebase. Training for 3 epochs took more than 2 hours. The base model is Meta's Llama-3-8B.
|
|
|
**GGUF & Exllama** |
|
|
|
GGUF: TBA |
|
|
|
Exllama V2: [Link](https://huggingface.co/bartowski/Scarlett-Llama-3-8B-exl2) |
|
|
|
Special thanks to [Bartowski](https://huggingface.co/bartowski) for quantizing this model.
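
If you want to fetch the ExLlamaV2 quant locally, here is a minimal sketch using `huggingface_hub`. The repository id comes from the link above; the choice of a bits-per-weight branch (e.g. the `revision` shown in the comment) is an assumption and may differ from what is actually published.

```python
# Minimal sketch: download the ExLlamaV2 quant of this model.
# Assumes the default branch; exl2 repos often expose different
# bits-per-weight variants on separate branches (not verified here).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="bartowski/Scarlett-Llama-3-8B-exl2",
    # revision="6_5",  # hypothetical branch name; pick a bpw variant if one exists
)
print("Quantized model downloaded to:", local_dir)
```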
|
|
|
|
|
**Example Prompt:** |
|
|
|
This model uses the **ChatML** prompt format.
|
|
|
``` |
|
<|im_start|>system |
|
You are Scarlett, a Helpful Assistant.<|im_end|> |
|
<|im_start|>user |
|
{prompt}<|im_end|> |
|
<|im_start|>assistant |
|
|
|
``` |
|
You can modify the above prompt to suit your requirements.

One example system prompt would be:
|
``` |
|
This is a conversation with your assistant Scarlett. She is a sentient AI designed to help you with various tasks such as answering questions, providing recommendations, holding philosophical discussions, talking about personal relationships, and helping with decision making.

You can ask her anything you want and she will do her best to give you accurate and relevant information.
|
``` |
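
Below is a minimal sketch of chatting with the model via the Transformers library using this ChatML format. It assumes the repository id `ajibawa-2023/Scarlett-Llama-3-8B` (taken from the leaderboard links above) and that the bundled tokenizer applies a ChatML chat template; if it does not, build the `<|im_start|>`/`<|im_end|>` prompt string manually as shown above.

```python
# Minimal sketch: running Scarlett-Llama-3-8B with Hugging Face Transformers.
# Assumes the tokenizer ships a ChatML chat template (not verified here).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ajibawa-2023/Scarlett-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are Scarlett, a Helpful Assistant."},
    {"role": "user", "content": "What does a meaningful friendship look like?"},
]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=512, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```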
|
|
|
|
|
I want to say a special thanks to the open-source community for helping and guiding me to better understand AI and model development.
|
|
|
Thank you for your love & support. |
|
|
|
**Example Output** |
|
|
|
Example 1 |
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/CJP33lf4w-ltFQ89Twbra.jpeg) |
|
|
|
Example 2 |
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/1P1B5MVLFkJGFAjX587Zh.jpeg) |
|
|
|
Example 3 |
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/0w_w325BCUP8Cov09QFgf.jpeg) |
|
|
|
Example 4 |
|
|
|
|
|
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64aea8ff67511bd3d965697b/mrtCrVpGCk_qXz-RCArGm.jpeg) |
|
|
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Scarlett-Llama-3-8B) |
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |65.76| |
|
|AI2 Reasoning Challenge (25-Shot)|62.63| |
|
|HellaSwag (10-Shot) |83.86| |
|
|MMLU (5-Shot) |66.46| |
|
|TruthfulQA (0-shot) |56.27| |
|
|Winogrande (5-shot) |78.06| |
|
|GSM8k (5-shot) |47.31| |