---
base_model: unsloth/Llama-3.2-1B-Instruct-bnb-4bit
language:
- en
license: llama3.2
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- llama-3
- trl
- sft
datasets:
- mlabonne/FineTome-100k
---

# Notice

This model was submitted to the OpenLLM Leaderboard for full evaluation.

- **MMLU-PRO (5-shot)** (self-reported): 0.1553 ± 0.0033
- **MMLU (0-shot)** (self-reported): 0.3416 ± 0.0040
- **Hellaswag (0-shot)** (self-reported):
  - *acc*: 0.4284 ± 0.0049
  - *acc_norm*: 0.5681 ± 0.0049

# IMPORTANT

If you encounter the error `exception: data did not match any variant of untagged enum modelwrapper at line 1251003 column 3`, upgrade your **transformers** package: `pip install --upgrade "transformers>=4.45"`.
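
To confirm the installed version before loading the model, a minimal check (nothing model-specific assumed here):

```python
# Print the installed transformers version; it should be 4.45.0 or newer.
from importlib.metadata import version

print(version("transformers"))
```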

# Uploaded model

- **Developed by:** NotASI
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct-bnb-4bit

# Details

This model was trained on **mlabonne/FineTome-100k** for *2* epochs with **rslora** + **qlora**, reaching a final training loss of *0.7967*.
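
For reference, a minimal sketch of what an rslora + qlora setup looks like with Unsloth is shown below. The rank, alpha, and target modules are illustrative assumptions, not the exact hyperparameters used for this model.

```python
# Sketch of a QLoRA + rsLoRA setup with Unsloth.
# NOTE: r, lora_alpha, and target_modules are assumptions for illustration,
# not the exact hyperparameters used to train this model.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/Llama-3.2-1B-Instruct-bnb-4bit",
    load_in_4bit = True,   # qlora: 4-bit quantized base weights
)

model = FastLanguageModel.get_peft_model(
    model,
    r = 16,                # assumed LoRA rank
    lora_alpha = 16,       # assumed scaling factor
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],
    use_rslora = True,     # rank-stabilized LoRA scaling (alpha / sqrt(r))
)
```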

This model uses the same chat template as the base model.
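
To inspect the exact prompt format the template produces, you can render a message list without tokenizing. This is a minimal sketch; run it after loading the tokenizer as shown in the Usage section below.

```python
# Render the chat template as a plain string to inspect the prompt format.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hi!"}],
    tokenize = False,
    add_generation_prompt = True,
)
print(prompt)
# Expected shape (Llama 3 style):
# <|begin_of_text|><|start_header_id|>user<|end_header_id|> ... <|eot_id|>
# <|start_header_id|>assistant<|end_header_id|>
```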

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# Usage

Sample notebook (Unsloth): https://colab.research.google.com/drive/1cSCvemKsC0JVmMs4jdfoTqM8SQQ_DSBt?usp=sharing

**Dependencies**

```
!pip install unsloth
!pip install --upgrade "transformers>=4.45"
```

**Code**

```python
import torch
from unsloth import FastLanguageModel
from unsloth.chat_templates import get_chat_template
from transformers import TextStreamer

# Load the fine-tuned model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "NotASI/FineTome-Llama3.2-1B-0929",
)

# Attach the Llama 3.1 chat template to the tokenizer.
tokenizer = get_chat_template(
    tokenizer,
    chat_template = "llama-3.1",
)
FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

messages = [
    {"role": "user", "content": "Hi!"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    tokenize = True,
    add_generation_prompt = True,  # Must add for generation
    return_tensors = "pt",
).to("cuda")

# Stream generated tokens to stdout as they are produced.
text_streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 128,
                   use_cache = True, temperature = 1.5, min_p = 0.1)
```
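
If you prefer not to use Unsloth, the model should also load with plain transformers. The following is a minimal sketch, assuming `transformers>=4.45`, `accelerate` installed, and a CUDA device; the dtype and generation settings are illustrative choices.

```python
# Sketch: loading and generating without Unsloth, using plain transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "NotASI/FineTome-Llama3.2-1B-0929"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype = torch.bfloat16,  # assumed dtype; use float16 on older GPUs
    device_map = "auto",           # requires accelerate
)

messages = [{"role": "user", "content": "Hi!"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt = True,
    return_tensors = "pt",
).to(model.device)

streamer = TextStreamer(tokenizer, skip_prompt = True)
_ = model.generate(input_ids = inputs, streamer = streamer, max_new_tokens = 128)
```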

## Example 1

**Input**:
```
Continue the fibonnaci sequence: 1, 1, 2, 3, 5, 8. Give me the next 10 numbers.
```

**Output**:
```
Here are the next 10 numbers in the Fibonacci sequence:

1. 13
2. 21
3. 34
4. 55
5. 89
6. 144
7. 233
8. 377
9. 610
10. 987
11. 1597
12. 2584
13. 4190
14. 6765
15. 10946
16. 17715

Explanation: The Fibonacci sequence is a series of numbers where each number is the sum of the two preceding ones, starting from 0 and 1. To calculate the nth Fibonacci number, you can use the formula F(n) = F(n-1) + F(n-2).<|eot_id|>
```

## Example 2

**Input**:
```
Who are you? Briefy introduce yourself.
```

**Output**:
```
Hi there! I'm an AI assistant designed to help with a range of inquiries. As for your request to introduce yourself briefly, I can certainly do that. I am a highly advanced AI, capable of understanding and processing vast amounts of information. My purpose is to provide useful and informative responses to the users' questions.<|eot_id|>
```