legolasyiu committed
Commit 849ecac
1 Parent(s): 7e3c370

Update README.md

Files changed (1)
  1. README.md +313 -0
README.md CHANGED
@@ -11,6 +11,319 @@ tags:
  - trl
  ---
+ ## SFT fine-tuning method
+ Specially fine-tuned with PhD-level, chain-of-thought (CoT) data for the Storm CoT system.
+
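+ The exact training recipe is not published. Purely as a rough illustration of this kind of SFT run (the `trl` tag above suggests TRL), here is a minimal `SFTTrainer` sketch; the dataset name and every hyperparameter are placeholders, not the author's actual configuration:
+
+ ```python
+ # Hypothetical sketch only: supervised fine-tuning with TRL's SFTTrainer.
+ from datasets import load_dataset
+ from trl import SFTConfig, SFTTrainer
+
+ # Placeholder dataset; a PhD-level / CoT-style dataset would be swapped in here.
+ train_dataset = load_dataset("trl-lib/Capybara", split="train")
+
+ trainer = SFTTrainer(
+     model="akjindal53244/Llama-3.1-Storm-8B",    # base checkpoint being fine-tuned
+     train_dataset=train_dataset,
+     args=SFTConfig(
+         output_dir="storm-cot-sft",              # placeholder output path
+         per_device_train_batch_size=2,
+         num_train_epochs=1,
+         learning_rate=2e-5,
+     ),
+ )
+ trainer.train()
+ ```
+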
+ ## Original Model Card
+ ## Llama 3.1 Storm
+
+ ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/tmOlbERGKP7JSODa6T06J.jpeg)
+
+ Authors: [Ashvini Kumar Jindal](https://www.linkedin.com/in/ashvini-jindal-26653262/), [Pawan Kumar Rajpoot](https://www.linkedin.com/in/pawanrajpoot/), [Ankur Parikh](https://www.linkedin.com/in/ankurnlpexpert/), [Akshita Sukhlecha](https://www.linkedin.com/in/akshita-sukhlecha/)
+
+ **🤗 Hugging Face Announcement Blog**: https://huggingface.co/blog/akjindal53244/llama31-storm8b
+
+ **🚀 Ollama:** `ollama run ajindal/llama3.1-storm:8b`
+
+
+ ## TL;DR
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c75c1237333ccfef30a602/mDtDeiHwnBupw1k_n99Lf.png)
+
+ We present [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B), a model that significantly outperforms Meta AI's [Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct) and [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) across diverse benchmarks, as shown in the performance comparison plot in the next section. Our approach consists of three key steps:
+ 1. **Self-Curation**: We applied two self-curation methods to select approximately 1 million high-quality examples from a pool of ~2.8 million open-source examples. **Our curation criteria focused on educational value and difficulty level, using the same SLM for annotation instead of larger models (e.g., 70B, 405B).**
+ 2. **Targeted fine-tuning**: We performed [Spectrum](https://arxiv.org/abs/2406.06623)-based targeted fine-tuning over the Llama-3.1-8B-Instruct model. The Spectrum method accelerates training by selectively targeting layer modules based on their signal-to-noise ratio (SNR) and freezing the remaining modules. In our work, 50% of layers are frozen.
+ 3. **Model Merging**: We merged our fine-tuned model with the [Llama-Spark](https://huggingface.co/arcee-ai/Llama-Spark) model using the [SLERP](https://huggingface.co/blog/mlabonne/merge-models#1-slerp) method (a minimal sketch of SLERP follows this list). The merging method produces a blended model with characteristics smoothly interpolated from both parent models, ensuring the resultant model captures the essence of both its parents. [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves Llama-3.1-8B-Instruct across 10 diverse benchmarks covering instruction-following, knowledge-driven QA, reasoning, truthful answer generation, and function calling.
+
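+ For intuition, below is a minimal, self-contained sketch of SLERP between two weight tensors. Real merges interpolate every parameter of both checkpoints, typically with per-layer interpolation factors via a merging toolkit; this standalone function is illustrative, not the authors' pipeline:
+
+ ```python
+ import torch
+
+ def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
+     """Spherical linear interpolation: t=0 returns w0, t=1 returns w1."""
+     v0, v1 = w0.flatten().float(), w1.flatten().float()
+     # Angle between the two (flattened) weight vectors.
+     cos_omega = torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps)
+     omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
+     sin_omega = torch.sin(omega)
+     if sin_omega.abs() < eps:
+         # Nearly parallel vectors: fall back to plain linear interpolation.
+         merged = (1.0 - t) * v0 + t * v1
+     else:
+         merged = (torch.sin((1.0 - t) * omega) * v0 + torch.sin(t * omega) * v1) / sin_omega
+     return merged.reshape(w0.shape).to(w0.dtype)
+
+ # Toy usage: blend two same-shaped "layers" halfway between the parents.
+ a, b = torch.randn(4, 4), torch.randn(4, 4)
+ print(slerp(0.5, a, b).shape)  # torch.Size([4, 4])
+ ```
+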
+ ## 🏆 Introducing Llama-3.1-Storm-8B
+ [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) builds upon the foundation of Llama-3.1-8B-Instruct, aiming to enhance both conversational and function calling capabilities within the 8B parameter model class.
+
+ As shown in the left subplot of the above figure, [**Llama-3.1-Storm-8B**](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) improves on Meta-Llama-3.1-8B-Instruct across various benchmarks: instruction-following ([IFEval](https://arxiv.org/abs/2311.07911)), knowledge-driven QA ([GPQA](https://arxiv.org/abs/2311.12022), [MMLU-Pro](https://arxiv.org/pdf/2406.01574)), reasoning ([ARC-C](https://arxiv.org/abs/1803.05457), [MuSR](https://arxiv.org/abs/2310.16049), [BBH](https://arxiv.org/pdf/2210.09261)), reduced hallucinations ([TruthfulQA](https://arxiv.org/abs/2109.07958)), and function calling ([BFCL](https://huggingface.co/datasets/gorilla-llm/Berkeley-Function-Calling-Leaderboard)). This improvement is particularly significant for AI developers and enthusiasts who work with limited computational resources.
+
+ We also benchmarked our model against the recently published [Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B), which is likewise built on top of the Llama-3.1-8B-Instruct model. As shown in the right subplot of the above figure, **Llama-3.1-Storm-8B outperforms Hermes-3-Llama-3.1-8B on 7 out of 9 benchmarks**, with Hermes-3-Llama-3.1-8B surpassing Llama-3.1-Storm-8B on the MuSR benchmark and both models showing comparable performance on the BBH benchmark.
+
+
+ ## Llama-3.1-Storm-8B Model Strengths
+ Llama-3.1-Storm-8B is a powerful generalist model useful for diverse applications. We invite the AI community to explore [Llama-3.1-Storm-8B](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) and look forward to seeing how it will be utilized in various projects and applications.
+
+ <table>
+   <tr>
+     <td><strong>Model Strength</strong></td>
+     <td><strong>Relevant Benchmarks</strong></td>
+   </tr>
+   <tr>
+     <td>🎯 Improved Instruction Following</td>
+     <td>IFEval Strict (+3.93%)</td>
+   </tr>
+   <tr>
+     <td>🌐 Enhanced Knowledge-Driven Question Answering</td>
+     <td>GPQA (+7.21%), MMLU-Pro (+0.55%), AGIEval (+3.77%)</td>
+   </tr>
+   <tr>
+     <td>🧠 Better Reasoning</td>
+     <td>ARC-C (+3.92%), MuSR (+2.77%), BBH (+1.67%), AGIEval (+3.77%)</td>
+   </tr>
+   <tr>
+     <td>🤖 Superior Agentic Capabilities</td>
+     <td>BFCL: Overall Acc (+7.92%), BFCL: AST Summary (+12.32%)</td>
+   </tr>
+   <tr>
+     <td>🚫 Reduced Hallucinations</td>
+     <td>TruthfulQA (+9%)</td>
+   </tr>
+ </table>
+
+ **Note**: All improvements are absolute gains over Meta-Llama-3.1-8B-Instruct.
+
+
+ ## Llama-3.1-Storm-8B Models
+ 1. `BF16`: [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
+ 2. ⚡ `FP8`: [Llama-3.1-Storm-8B-FP8-Dynamic](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic)
+ 3. ⚡ `GGUF`: [Llama-3.1-Storm-8B-GGUF](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B-GGUF)
+ 4. 🚀 Ollama: `ollama run ajindal/llama3.1-storm:8b`
+
+
+ ## 💻 How to Use the Model
+ The Hugging Face `transformers` library loads the model in `bfloat16` by default. This is the dtype of the [Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B) checkpoint, so it is the recommended way to run the model for best results.
+
+ ### Installation
+ ```bash
+ pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
+ ```
+
+ Developers can easily integrate Llama-3.1-Storm-8B into their projects using popular libraries like Transformers and vLLM. The following sections illustrate usage with simple hands-on examples:
+
+ ### Conversational Use-case
+ #### Use with [🤗 Transformers](https://github.com/huggingface/transformers)
+ ##### Using `transformers.pipeline()` API
+ ```python
+ import transformers
+ import torch
+
+ model_id = "akjindal53244/Llama-3.1-Storm-8B"
+ pipeline = transformers.pipeline(
+     "text-generation",
+     model=model_id,
+     model_kwargs={"torch_dtype": torch.bfloat16},
+     device_map="auto",
+ )
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "What is 2+2?"}
+ ]
+ outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
+ print(outputs[0]["generated_text"][-1])  # Expected Output: {'role': 'assistant', 'content': '2 + 2 = 4'}
+ ```
+
+ ##### Using `model.generate()` API
+ ```bash
+ pip install flash_attn==2.6.3
+ ```
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, LlamaForCausalLM
+
+ # Apply the Llama-3.1 chat template
+ def format_prompt(user_query):
+     template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n{}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"""
+     return template.format(user_query)
+
+ model_id = 'akjindal53244/Llama-3.1-Storm-8B'
+ tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+ model = LlamaForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,
+     device_map="auto",
+     load_in_8bit=False,
+     load_in_4bit=False,
+     use_flash_attention_2=True
+ )
+
+ # Build the final input prompt after applying the chat template
+ prompt = format_prompt("What is 2+2?")
+ input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
+ generated_ids = model.generate(input_ids, max_new_tokens=128, temperature=0.01, do_sample=True, eos_token_id=tokenizer.eos_token_id)
+ response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)
+ print(response)  # Expected Output: '2 + 2 = 4'
+ ```
+
+ #### Use with [vLLM](https://github.com/vllm-project/vllm)
+ ```python
+ from vllm import LLM, SamplingParams
+ from transformers import AutoTokenizer
+
+ model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
+ num_gpus = 1
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
+ sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
+
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "What is 2+2?"}
+ ]
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: 2 + 2 = 4
+ ```
+
+ #### Use with [LitGPT](https://github.com/Lightning-AI/litgpt)
+ ```bash
+ pip install 'litgpt[all]'
+ litgpt download akjindal53244/Llama-3.1-Storm-8B --model_name meta-llama/Meta-Llama-3.1-8B
+ ```
+
+ ```python
+ from litgpt import LLM
+
+ llm = LLM.load(model="akjindal53244/Llama-3.1-Storm-8B")
+ llm.generate("What do Llamas eat?")
+ ```
+
+ ### Function Calling Use-case
+
+ [**Llama-3.1-Storm-8B**](https://huggingface.co/collections/akjindal53244/storm-66ba6c96b7e24ecb592787a9) has impressive function calling capabilities compared to Meta-Llama-3.1-8B-Instruct, as demonstrated by the BFCL benchmark.
+
+ #### Prompt Format for Function Calling
+ Llama-3.1-Storm-8B was trained with a specific system prompt for function calling:
+ ```
+ You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
+ Here are the available functions:
+ <tools>LIST_OF_TOOLS</tools>
+ For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
+ <tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>
+ ```
+ The system prompt above should be used with the actual tool list substituted for `LIST_OF_TOOLS`, as sketched below.
+
+
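+ Concretely, building the system prompt just means serializing the tool schemas and substituting them for `LIST_OF_TOOLS`. A minimal sketch (a hypothetical helper, using `str.replace` so the literal JSON braces in the template are left untouched):
+
+ ```python
+ import json
+
+ SYSTEM_PROMPT_TEMPLATE = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
+ Here are the available functions:
+ <tools>LIST_OF_TOOLS</tools>
+ For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
+ <tool_call>{"tool_name": <function-name>, "tool_arguments": <args-dict>}</tool_call>"""
+
+ def build_system_prompt(tools: list) -> str:
+     # str.replace (rather than str.format) avoids clashing with the
+     # literal {...} braces already present in the template.
+     return SYSTEM_PROMPT_TEMPLATE.replace("LIST_OF_TOOLS", json.dumps(tools, ensure_ascii=False))
+ ```
+
+ The vLLM example below achieves the same thing with `str.format`, doubling the literal braces in the template so they survive formatting.
+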
+ #### Use with [vLLM](https://github.com/vllm-project/vllm)
+ ```python
+ import json
+ from vllm import LLM, SamplingParams
+ from transformers import AutoTokenizer
+
+ model_id = "akjindal53244/Llama-3.1-Storm-8B"  # FP8 model: "akjindal53244/Llama-3.1-Storm-8B-FP8-Dynamic"
+ num_gpus = 1
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ llm = LLM(model=model_id, tensor_parallel_size=num_gpus)
+ sampling_params = SamplingParams(max_tokens=128, temperature=0.01, top_k=100, top_p=0.95)
+
+ def create_system_prompt(tools_list):
+     # Literal braces in the template are doubled ({{ }}) so that str.format
+     # only substitutes the tools list into the <tools> placeholder.
+     system_prompt_format = """You are a function calling AI model. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into function. The user may use the terms function calling or tool use interchangeably.
+ Here are the available functions:
+ <tools>{}</tools>
+ For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags in the format:
+ <tool_call>{{"tool_name": <function-name>, "tool_arguments": <args-dict>}}</tool_call>"""
+
+     # Convert the tools list to a string representation
+     tools_str = json.dumps(tools_list, ensure_ascii=False)
+     # Format the system prompt with the tools list
+     system_prompt = system_prompt_format.format(tools_str)
+     return system_prompt
+
+ # Example tools list
+ tools_list = [
+     {
+         "name": "peers",
+         "description": "Retrieves a list of company peers given a stock symbol.",
+         "parameters": {
+             "symbol": {
+                 "description": "The stock symbol for the company.",
+                 "type": "str",
+                 "default": ""
+             }
+         }
+     },
+     {
+         "name": "web_chain_details",
+         "description": "python",
+         "parameters": {
+             "chain_slug": {
+                 "description": "The slug identifier for the blockchain (e.g., 'ethereum' for Ethereum mainnet).",
+                 "type": "str",
+                 "default": "ethereum"
+             }
+         }
+     }
+ ]
+
+ # Create the system prompt with the tools list
+ system_prompt = create_system_prompt(tools_list)
+ messages = [
+     {"role": "system", "content": system_prompt},
+     {"role": "user", "content": "I need to understand the details of the Ethereum blockchain for my cryptocurrency project. Can you fetch the details for 'ethereum'?"}
+ ]
+ prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
+ print(llm.generate([prompt], sampling_params)[0].outputs[0].text.strip())  # Expected Output: <tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>
+ ```
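+
+ The model returns tool calls as plain text, so the caller still has to extract and dispatch them. A minimal parsing sketch (a hypothetical helper, not part of the model card; `ast.literal_eval` handles the single-quoted dicts the model may emit):
+
+ ```python
+ import ast
+ import re
+
+ def parse_tool_calls(generated_text: str) -> list:
+     """Extract every <tool_call>...</tool_call> payload from model output."""
+     calls = []
+     for payload in re.findall(r"<tool_call>(.*?)</tool_call>", generated_text, re.DOTALL):
+         # The model may emit single-quoted dicts, so ast.literal_eval is more
+         # forgiving than json.loads here.
+         calls.append(ast.literal_eval(payload.strip()))
+     return calls
+
+ output = "<tool_call>{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}</tool_call>"
+ print(parse_tool_calls(output))
+ # [{'tool_name': 'web_chain_details', 'tool_arguments': {'chain_slug': 'ethereum'}}]
+ ```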
+
+ #### Use with [Ollama](https://ollama.com/)
+ ```python
+ import ollama
+
+ tools = [{
+     'type': 'function',
+     'function': {
+         'name': 'get_current_weather',
+         'description': 'Get the current weather for a city',
+         'parameters': {
+             'type': 'object',
+             'properties': {
+                 'city': {
+                     'type': 'string',
+                     'description': 'The name of the city',
+                 },
+             },
+             'required': ['city'],
+         },
+     },
+ },
+ {
+     'type': 'function',
+     'function': {
+         'name': 'get_places_to_visit',
+         'description': 'Get places to visit in a city',
+         'parameters': {
+             'type': 'object',
+             'properties': {
+                 'city': {
+                     'type': 'string',
+                     'description': 'The name of the city',
+                 },
+             },
+             'required': ['city'],
+         },
+     },
+ },
+ ]
+
+ response = ollama.chat(
+     model='ajindal/llama3.1-storm:8b',
+     messages=[
+         {'role': 'system', 'content': 'Do not answer any vulgar questions.'},
+         {'role': 'user', 'content': 'What is the weather in Toronto and San Francisco?'}
+     ],
+     tools=tools
+ )
+ print(response['message'])  # Expected Response: {'role': 'assistant', 'content': "<tool_call>{'tool_name': 'get_current_weather', 'tool_arguments': {'city': 'Toronto'}}</tool_call>"}
+ ```
+
+
+ ## Alignment Note
+ While **Llama-3.1-Storm-8B** did not undergo an explicit model alignment process, it may still retain some alignment properties inherited from the Meta-Llama-3.1-8B-Instruct model.
+
+ ## Cite Our Work
+ ```bibtex
+ @misc{ashvini_kumar_jindal_2024,
+     author    = { Ashvini Kumar Jindal and Pawan Kumar Rajpoot and Ankur Parikh and Akshita Sukhlecha },
+     title     = { Llama-3.1-Storm-8B },
+     year      = 2024,
+     url       = { https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B },
+     doi       = { 10.57967/hf/2902 },
+     publisher = { Hugging Face }
+ }
+ ```
+
+
  # Uploaded model

  - **Developed by:** EpistemeAI2